Algorithm accuracy vs calculator precision
11-12-2024, 12:46 AM
(This post was last modified: 11-12-2024 05:44 PM by carey.)
Post: #1
Algorithm accuracy vs calculator precision
A recent thread appears to me to highlight an important distinction between algorithm accuracy and calculator precision. Precision doesn't guarantee accuracy.
We're all accustomed to reading claims that one calculator is more precise than another (e.g., 10, 12, 14, 16 digits, or more, of precision). The above thread appears to show that a change in the WP34S integration algorithm resulted in an inaccurate integration result for a specific function (the WP34S is a great calculator, and the general lesson here applies to any calculator). Using the familiar bullseye analogy for accuracy vs precision, in which arrows near the bullseye are accurate while arrows clustered together (even if away from the bullseye) are precise, this situation describes low accuracy but high precision.

One reason we don't encounter low-accuracy, high-precision situations more often is that the issue is more likely to occur for integration than for root-finding. In root-finding there is a target condition (e.g., f(x) = 0), and convergent iterative algorithms can ensure that the target condition is met to within the precision of the calculator, so root-finding can take advantage of the full calculator precision. For integration, however, where contributions are summed, there is no direct target condition that ensures accuracy matches (or exceeds) precision. Also, some functions (e.g., highly oscillatory ones) can cause difficulty even with otherwise sound integration algorithms.

Of course, all is not lost in ensuring integration accuracy: the step size can be adaptively altered, and if the result doesn't change with step size to within a tolerance, we can choose to be satisfied with it. Further, if Richardson extrapolation is used (as in Romberg integration), consistently convergent extrapolations imply convergence to a correct result.

Given a choice of extremes, i.e., between an accurate but low-precision answer and an inaccurate but high-precision answer, an imprecise but accurate answer seems preferable to knowing an inaccurate answer to many digits of precision.

So when I read that a calculator has N digits of precision, I read that claim as assuming the algorithms are "exact," where "exact" means algorithm accuracy equals or exceeds calculator precision for the functions in question. Hence claims of calculator precision seem to me to assume a best-case scenario in which only calculator precision is the limiting factor (and not algorithm accuracy for specific functions). When algorithm accuracy doesn't attain full calculator precision (as in the above thread), extra digits of precision beyond the digits of accuracy seem of little (if any) value. [Note that guard digits and related precision fine points, while interesting topics, are irrelevant to this discussion about algorithm accuracy.]

Do you share the view that algorithm accuracy for various types of functions, though not always easy to specify, deserves as much attention as calculator precision, especially for integration algorithms?
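For anyone who wants to experiment with the Richardson-extrapolation idea mentioned above, here is a minimal Romberg integration sketch in Python. It only illustrates the general technique of comparing successive extrapolations against a tolerance; it is not the WP34S's actual algorithm.

Code:
import math

def romberg(f, a, b, max_rows=10, tol=1e-10):
    """Romberg integration: trapezoid estimates refined by Richardson extrapolation.

    Returns the latest diagonal entry once successive diagonal entries
    agree to within tol (or after max_rows rows).
    """
    R = [[0.5 * (b - a) * (f(a) + f(b))]]          # R[0][0]: single trapezoid
    for i in range(1, max_rows):
        h = (b - a) / 2**i
        # Trapezoid rule with the step halved: reuse the previous row, add new midpoints
        new_points = sum(f(a + (2*k - 1) * h) for k in range(1, 2**(i - 1) + 1))
        row = [0.5 * R[i - 1][0] + h * new_points]
        # Richardson extrapolation across the row
        for j in range(1, i + 1):
            row.append(row[j - 1] + (row[j - 1] - R[i - 1][j - 1]) / (4**j - 1))
        R.append(row)
        if abs(R[i][i] - R[i - 1][i - 1]) < tol:    # diagonal has settled
            return R[i][i]
    return R[-1][-1]

# Example: integrate sin from 0 to pi (exact value 2)
print(romberg(math.sin, 0.0, math.pi))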
11-12-2024, 05:44 PM
Post: #2
RE: Algorithm accuracy vs calculator precision
I certainly share your view. I think Steve will hopefully chime in with a Prof Kahan quote about the algorithms of the HPs being superior even though the digits were less. I guess HP persisted with 10 digits for a while also when other manufacturers offered more, and maybe they felt that having better algorithms was more useful than digits.
From a user perspective I'd prefer less digits if better accuracy, especially if accuracy could be guaranteed. Hidden precision is somewhat relevant though, because it allows built-in functions to utilise more digits. That may or may not be beneficial to the accuracy, and really hiding the precision means "I am not allowing you to use this precision in your calculations or your programs, just for my built-in functions," which is an odd stance.

When I went on my dive into TVM calculators I noticed very varied results depending on the algorithm. I plotted a 'digit efficiency' graph to show how well the calculators did for their precision. I'll dig out my latest graph and post it here later.
11-12-2024, 05:49 PM
Post: #3
RE: Algorithm accuracy vs calculator precision
(11-12-2024 05:44 PM)dm319 Wrote: ...Prof Kahan...

Yeah, that guy knew his stuff! From a different era.

A1
HP-15C (2234A02xxx), HP-16C (2403A02xxx), HP-15C CE (9CJ323-03xxx), HP-20S (2844A16xxx), HP-12C+ (9CJ251)
11-12-2024, 05:57 PM
(This post was last modified: 11-12-2024 05:58 PM by carey.)
Post: #4
RE: Algorithm accuracy vs calculator precision
(11-12-2024 05:44 PM)dm319 Wrote: I certainly share your view. I think Steve will hopefully chime in with a Prof Kahan quote about the algorithms of the HPs being superior even though the digits were less. I guess HP persisted with 10 digits for a while also when other manufacturers offered more, and maybe they felt that having better algorithms was more useful than digits.

Thank you for your reply and your great point about preferring "less digits if better accuracy, especially if accuracy could be guaranteed." The range of possible functions that algorithms can be applied to, and over which accuracy can vary, argues strongly for either (i) robust algorithms, even if other algorithms might be more accurate in special cases, or (ii) a choice of algorithms.

Yes, Professor Kahan quotes are always informative, and posting your latest "digit efficiency" graph for TVM calculators would be of interest!
11-12-2024, 06:33 PM
(This post was last modified: 11-12-2024 06:37 PM by Commie.)
Post: #5
RE: Algorithm accuracy vs calculator precision
Hi,
As far as I am aware, mathematical transcendental functions are calculated using polynomial approximations for a given function. If a high digit count is used, the designer/programmer needs to increase the degree of the polynomial in order to take advantage of the high precision and thereby achieve high accuracy; generally, this means using more terms in the said polynomial. The downside of this is bloated firmware, more iterations and slower calculations. Obviously, BCD should be implemented.

Cheers

Darren
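To illustrate the degree-vs-accuracy trade-off in the abstract (this is my own toy example in Python, not how any particular calculator's firmware actually evaluates sin), here is a truncated Taylor polynomial for sin(x) with a varying number of terms:

Code:
import math

def sin_poly(x, terms):
    """Truncated Taylor series for sin(x): x - x^3/3! + x^5/5! - ...
    'terms' is the number of series terms kept (degree = 2*terms - 1)."""
    total, term = 0.0, x
    for n in range(terms):
        total += term
        # next term: multiply by -x^2 / ((2n+2)(2n+3))
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

x = 1.0
for terms in (3, 5, 7, 9):
    approx = sin_poly(x, terms)
    print(f"{terms} terms: error = {abs(approx - math.sin(x)):.2e}")
# More terms -> smaller error, but more multiplications (and more stored
# coefficients in a real firmware table) per evaluation.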
11-12-2024, 07:40 PM
Post: #6
RE: Algorithm accuracy vs calculator precision
OK, quote from Kahan (stolen from one of Steve's posts, of course):

Kahan through the medium of Steve Wrote: It's true that the HP-45's arithmetic was somewhat grotty in spots, but it wasn't that bad. But what TI was doing was clever. You see, the 45 did its arithmetic to ten significant decimals, period. Everything was done to ten significant decimals, including the internal algorithms that compute logs and exponentials. TI was doing their arithmetic internally carrying 13 significant decimals, but they only showed you ten. So that meant that, though you type ten digits in, as soon as you did some arithmetic, you had 13 decimal digits. But you only saw ten significant decimals. Well, that could hide a lot of sins, couldn't it? The TI thing was cheaper, but that's because Hewlett-Packard can't do anything that's cheap there. Their whole culture is such that, whatever they do, it's going to be expensive. So I discovered that if you did this log exponential thing seven times, then the last digit would change. You see, their arithmetic at the 13th digit was grottier, if anything could be grottier, than the 45.

And this is my plot of digit efficiency in TVM calculations:
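As a rough illustration of the kind of test Kahan describes (repeatedly taking a log and exponentiating, then watching the trailing digits), here is a tiny Python sketch. Double precision is not a 10- or 13-digit BCD calculator, so the drift may be smaller or even absent on a correctly rounded libm; the point is only the shape of the experiment.

Code:
import math

x = 2.0
for i in range(1, 8):
    x = math.exp(math.log(x))   # ideally a no-op; each round trip rounds twice
    print(f"after {i} round trip(s): {x:.17g}")
# Any change in the trailing digits is accumulated rounding error.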
11-12-2024, 08:15 PM
Post: #7
RE: Algorithm accuracy vs calculator precision
11-12-2024, 08:27 PM
Post: #8
RE: Algorithm accuracy vs calculator precision
Ha ha, can I summarise this in one paragraph? Here goes:
Take a collection of 13 difficult TVM problems and use arbitrary-precision maths in R to solve the problems, giving highly accurate reference values. Then test the calculators, extracting all the digits you can from the answer. Calculate an absolute error by taking the difference from the reference value. Sometimes take the relative error by dividing the absolute error by the reference value (to normalise for very big and small results). Then take the -log10 of these values to give you a measure of accuracy that is roughly comparable to the number of digits. Finally, divide by the actual number of digits of precision of the calculator. There is more here and here if that didn't put you off.
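If it helps to see the bookkeeping, here is a small Python paraphrase of that scoring. This is my own sketch of the description above, not dm319's actual R code, and the reference value and calculator result below are made-up numbers.

Code:
import math

def correct_digits(calc_value, reference, relative=True):
    """-log10 of the (relative) error: roughly 'how many digits are right'."""
    error = abs(calc_value - reference)
    if relative:
        error /= abs(reference)
    if error == 0:
        return float('inf')   # every extracted digit matched the reference
    return -math.log10(error)

def digit_efficiency(calc_value, reference, digits_of_precision):
    """Correct digits divided by the machine's working precision."""
    return correct_digits(calc_value, reference) / digits_of_precision

# Hypothetical example: a 12-digit calculator whose answer is good to
# about 8 digits scores an efficiency of roughly 8/12.
reference = 331667.006690776     # pretend arbitrary-precision result
calc      = 331667.0095          # pretend calculator result
print(digit_efficiency(calc, reference, 12))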
11-12-2024, 08:38 PM
Post: #9
RE: Algorithm accuracy vs calculator precision
One perspective on algo accuracy is: is it better to have a more sophisticated routine that achieves adequate accuracy in fewer cycles, or a simpler one that needs more cycles? Which is actually faster in terms of run time on a given machine? And which is likely to be more reliable if you just let it run without worrying about the details?
My old TI-58 came with integration built in (in a module that came with the machine), using Simpson's rule. This is a step or two more accurate than a basic trapezium method, but by no means the most sophisticated method. But I took great confidence in it from having studied it in detail at high school, so I knew roughly what it was doing. On contemporary HP calcs such as the 15C, it's much cleverer and adapts to the accuracy of the display. The results are better but it's more of a black box. I'm less confident about how it is working.
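For anyone who hasn't met it since school, a minimal composite Simpson's rule looks like this in Python (just the textbook formula, not the TI-58 module's implementation):

Code:
import math

def simpson(f, a, b, n=100):
    """Composite Simpson's rule over n subintervals (n must be even)."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        weight = 4 if i % 2 else 2     # 4 at odd nodes, 2 at even interior nodes
        total += weight * f(a + i * h)
    return total * h / 3

# Example: integrate sin from 0 to pi (exact value 2)
print(simpson(math.sin, 0.0, math.pi, n=10))   # ~2.0000068, not bad for ten slices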
11-12-2024, 08:58 PM
Post: #10
RE: Algorithm accuracy vs calculator precision
(11-12-2024 06:33 PM)Commie Wrote: As far as I am aware, mathematical transcendental functions are calculated using polynomial approximations for a given function.

The classic HP calculators use the CORDIC algorithm. The main reason was that only addition and shift were available. Here's an article that explains how it is done: Exploring the CORDIC algorithm with the WP-34S
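For the curious, here is a bare-bones CORDIC rotation-mode sketch for sin/cos in Python. It uses floating point rather than the BCD shift-and-add of the real hardware, so it only illustrates the idea (and, like basic CORDIC, it only covers angles up to about ±1.74 rad without argument reduction).

Code:
import math

# Precompute the rotation angles atan(2^-i) and the gain correction K
N = 40
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
K = 1.0
for i in range(N):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # product of cos(atan(2^-i))

def cordic_sin_cos(theta):
    """Rotation mode: drive the residual angle z to zero by rotating
    through +/- atan(2^-i); each step uses only add/shift-style updates."""
    x, y, z = K, 0.0, theta                  # start on the x-axis, pre-scaled by K
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0          # rotate toward the target angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y, x                              # (sin(theta), cos(theta))

s, c = cordic_sin_cos(0.5)
print(s - math.sin(0.5), c - math.cos(0.5))  # both errors are tiny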
11-12-2024, 09:12 PM
(This post was last modified: 11-12-2024 09:20 PM by Commie.)
Post: #11
|
|||
|
|||
RE: Algorithm accuracy vs calculator precision
(11-12-2024 08:58 PM)Thomas Klemm Wrote: The classic HP calculators use the CORDIC algorithm.

Ha, I was waiting for someone to pull me up with CORDICs. I have studied CORDICs extensively and put them head to head against polys, and the end results are more or less the same; in fact, polys are faster unless you are using an FPGA based on shift-add logic, so there is no benefit to using CORDICs over polys. CORDICs in a microcontroller are slow but use less code (many iterations); polys are faster but use greater code length.

Cheers

Darren
11-12-2024, 09:14 PM
Post: #12
RE: Algorithm accuracy vs calculator precision
(11-12-2024 08:38 PM)Johnh Wrote: The results are better but it's more of a black box. I'm less confident about how it is working.

I'm not. The 15C was Professor Kahan's "baby" and IMHO he did a fine job on its (admittedly) proprietary algorithms. IIRC the 12C algorithms originally came from the HP-92, which Kahan also worked on. So, while "40+ years old", both models are just as good as, if not better than, many newer designs, although they may not look/act as "cool". The 12C is considered the "gold standard" in the financial world, and is still sold, for a reason. And the 15C is highly admired by scientists/engineers for similar reasons.

Some confuse "newer" with "better". Sometimes. Other times not.

A1
HP-15C (2234A02xxx), HP-16C (2403A02xxx), HP-15C CE (9CJ323-03xxx), HP-20S (2844A16xxx), HP-12C+ (9CJ251)
11-12-2024, 09:37 PM
Post: #13
RE: Algorithm accuracy vs calculator precision
(11-12-2024 09:12 PM)Commie Wrote: Ha, I was waiting for someone to pull me up with CORDICs (…)

Another example of Cunningham's Law in the wild.
11-12-2024, 09:54 PM
Post: #14
RE: Algorithm accuracy vs calculator precision
(11-12-2024 09:37 PM)Thomas Klemm Wrote: Another example of Cunningham's Law in the wild.

Thomas, I am sorry if I offended you, but my previous post is the truth.

Cheers

Darren
11-12-2024, 10:05 PM
Post: #15
RE: Algorithm accuracy vs calculator precision
11-13-2024, 12:27 AM
Post: #16
RE: Algorithm accuracy vs calculator precision
11-13-2024, 04:59 AM
Post: #17
RE: Algorithm accuracy vs calculator precision
(11-12-2024 09:12 PM)Commie Wrote: CORDICs in a microcontroller are slow but use less code (many iterations); polys are faster but use greater code length.

Whether they are faster or slower depends _greatly_ on the processor architecture. On a multi-digit BCD word architecture, like the 1970s-1990s HP and TI calculators, CORDIC is _MUCH_ faster than a polynomial algorithm. Even if HP or TI had more ROM available, they almost certainly would still have used CORDIC. With regard to HP calculators, this isn't just my opinion: in at least one article about the origin of HP calculators, an HP employee specifically stated that they chose CORDIC for speed. I don't recall which specific article.

Even on short-word-length binary processors, and on processors without efficient floating-point add and multiply hardware, CORDIC is often faster than polynomial. However, on all modern general-purpose processors (x86, high-end ARM, etc.) with efficient floating-point hardware, polynomial is much faster than even hardware CORDIC.