Is super-accuracy matters?
10-06-2023, 04:36 PM
Post: #1
Is super-accuracy matters?
(10-06-2023 08:40 AM)J-F Garnier Wrote: The main goal of the PI/4 double precision value was not to manage huge exact arguments, since this has no real-life interest and there will always be a limit unless storing 500 digits, but to guarantee 12-digit accuracy trig results for all "realistic" arguments, especially SIN(X) with X close to PI.

I've been thinking about this for a while. Many years ago, I made strtod-fast.c, which returns correctly rounded decimal → binary results. Doing this required, on rare occasions, up to 751 digits of exact math. (All this hard work to get a 1-bit answer of which float to return!) If we can accept a very rare 1 ULP error, there is no need for all this (see strtod-lite.c).

Someone asked why I wasted time getting the perfect (correctly rounded) result. Float is "supposed" to be fuzzy ... a 1 ULP error doesn't matter. I had no good answer, and asked Rick Regan, author of the blog https://www.exploringbinary.com/ This is a great site, if we want to know about binary/decimal conversion.

Comments from https://www.exploringbinary.com/visual-c...ll-broken/

Albert Wrote: Just curious, how do you convince someone who says the last-ULP bad conversion does not matter?

Rick Regan Wrote: I don’t know how to convince someone. I guess it depends on the context of their application. On the other hand, many software products have come to support correct conversion over the years that I’ve been writing about conversion inaccuracy, so that might be an indirect argument in itself.

The answer somehow felt unsatisfactory. We do it because others do it?

Now, I think we should shoot for better accuracy, whether or not it matters in real life. No need to convince others!

What do you think?
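As a side note for readers: "correctly rounded" has a crisp meaning that can be checked with exact rational arithmetic. Here is a small Python sketch (my own illustration, not strtod-fast.c itself; needs Python 3.9+ for math.nextafter): it tests whether float(s) is the closest double to the decimal string s. Deciding this for a string that lands near the midpoint between two doubles is exactly the "1-bit answer" that can require hundreds of digits of exact math.

```python
import math
from fractions import Fraction

def is_correctly_rounded(s):
    """True if float(s) is (one of) the closest double(s) to the decimal string s."""
    x = float(s)
    exact = Fraction(s)                 # exact rational value of the decimal string
    err = abs(exact - Fraction(x))      # Fraction(x) is the exact value of the double
    lo = math.nextafter(x, -math.inf)   # adjacent doubles on either side
    hi = math.nextafter(x, math.inf)
    return err <= abs(exact - Fraction(lo)) and err <= abs(exact - Fraction(hi))

# CPython's string-to-float conversion is correctly rounded, so these all hold:
for s in ["0.1", "1e22", "3.14159292035398230", "2.2250738585072011e-308"]:
    assert is_correctly_rounded(s)
```

A naive conversion (parse the mantissa, multiply by a power of ten) rounds twice and fails this check for some inputs; that is the 1 ULP error strtod-lite.c accepts.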
10-06-2023, 04:42 PM
Post: #2
RE: Is super-accuracy matters?
I think it does not matter.
But it is fun, it is.
10-06-2023, 05:05 PM
Post: #3
RE: Is super-accuracy matters?
On second thought, if one needs more exact results for professional use (not my case, due to serious brain damage), just use an HP 50g in exact mode. After finishing the calculations, just press the right keys and voilà, you have numerical results with the accuracy needed.

But don't try this on a TI Nspire CX non-CAS; it's impossible. But let's forgive it, it's just a TI.

Cheers
JL
10-06-2023, 05:23 PM
Post: #4
RE: Is super-accuracy matters?
(10-06-2023 05:05 PM)Jlouis Wrote: ... if one needs more exact results for professional use, just use an HP 50g in exact mode. After finishing the calculations, just press the right keys and voilà, you have numerical results with the accuracy needed.

That's what I do every day. You can also use Spigot or the Android calculator and get as many digits as you need.

There are many applications for which speed is more important than provably exact results, where C/Java doubles can be used. Problems only arise when one uses the wrong tool for the job at hand.
10-07-2023, 03:22 AM
Post: #5
RE: Is super-accuracy matters?
Even with exact results, numerical issues may remain (unless we never need an approximate answer).

HP50g: approx(pi - 355/113) = -.00000026676
Better: approx((pi-3) - 16/113) = -.000000266764

Digits cut in half! For pi's next convergent error, digits cut in half again!

Another example, where I have a formula, but some inputs cannot be evaluated accurately:

(04-27-2020 07:54 PM)Albert Chan Wrote: I tried to re-use my formula, for log(probability of no repetition)

Let s = -n*x

ln_P(n,s) = x/(12*n*(x+1)) - (n*(log1p(x)-x) + (n*x+1/2)*log1p(x))

If n ≫ s ≫ 1/2, x is tiny, and we have:

ln_P(n,s) ≈ x/(12*n) - (n*(-x^2/2) + (n*x)*x) ≈ -n*x^2/2

(log1p(x)-x) = -x^2/2 + O(x^3), but the dropped terms may be important for modest-sized (n,s). Here, super-accuracy matters.

See the thread "Accurate x - log(1+x)" for other examples. For consistency, I changed the naming convention, as func_sub(x) --> log1p_sub(x) = (log1p(x)-x).

Code:
function atanh_sub_tiny(x) -- abserr ≈ 256/2760615 * x^15
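The HP 50g digit loss above is ordinary catastrophic cancellation, and the same effect is easy to reproduce in binary doubles. A Python sketch (the 30-digit pi literal is just the standard reference value):

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 30
PI = Decimal("3.14159265358979323846264338328")   # pi to 30 digits, for reference

naive = math.pi - 355 / 113                       # all in double precision
exact = PI - Decimal(355) / Decimal(113)          # ≈ -2.66764189062E-7

# The leading ~7 digits cancel, so although `naive` prints 16 digits,
# only about 16-7 = 9 of them are meaningful:
assert abs(Decimal(naive) - exact) < Decimal("1e-15")
assert abs(exact + Decimal("2.66764189062e-7")) < Decimal("1e-17")
```

Rewriting as (pi-3) - 16/113 removes the cancellation of the integer part, which is why the HP 50g recovers an extra digit.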
10-07-2023, 07:00 AM
Post: #6
RE: Is super-accuracy matters?
(10-06-2023 04:36 PM)Albert Chan Wrote:
(10-06-2023 08:40 AM)J-F Garnier Wrote: ... this has no real-life interest ...

It's a good question. I think the problem comes from the difference between the engineer mindset and the mathematician mindset. Each of us may adopt one or the other at different times.

When we adopt the engineer mindset, we tend to think of numbers as measurements that apply to something in the physical world. Such measurements have some particular accuracy. Calculations with them will have some limit to their accuracy, and computing to greater precision seems odd. In this mindset, commentary from the other mindset can seem bafflingly misguided.

When we adopt the mathematician mindset, we tend to think of numbers as Platonic ideals, or as abstract entities. 1E22 is exactly that number, just as 3 is exactly that number. Pi is irrational, and almost every calculator will need to round it off at some point. Questions about values of sin and cos are questions about functions which apply to any real argument, not just to angles. In this mindset, commentary from the other mindset can seem bafflingly misguided.

When we count ULPs, we are in some way adopting both mindsets: we know that the inputs are limited precision and we know that the outputs are limited precision, and we'd like to get the best result, regardless of which mindset the calculation might have been performed in.

Implementing a calculator is an engineering problem, at least to some extent. But implementing a calculator with more than 10 digits of precision is doing something more than just satisfying the users who are adopting the engineer mindset.

Where things go wrong, I think, is when two people can each only see one of those two perspectives, different ones, and are arguing at cross-purposes.
10-07-2023, 07:24 AM
(This post was last modified: 10-07-2023 07:25 AM by EdS2.)
Post: #7
RE: Is super-accuracy matters?
I think this is related: we might ask ourselves, where in the real world might we need unexpectedly great accuracy?
In physics, the fine-structure constant seems to have been experimentally confirmed to 11 digits, and that is held to be the most extreme example. Notably, some other apparently very accurate physical constants are now exact numbers by way of re-definition. (For example, the speed of light is exactly 299792458 m s⁻¹, and the Avogadro constant is exactly 6.02214076E23 mol⁻¹.)

One of the examples often seen is in finance: calculations involving some small rate of interest applied over very many terms ultimately involve transcendental functions and scaled integers. A friend of mine, at the time working in finance, was sorely challenged to reconcile the value of some financial instrument to some precise number of Japanese yen. Of course, the calculation had to agree with some official way of doing the maths. In the US, at least, I gather that the HP-12C has now become enshrined as the official way to get the one true exactly correct answer.

But getting a million-dollar sum correct to one cent is still only ten digits or so. Even a trillion dollars of value expressed in yen is just 15 digits. It might be that 30 digits of accuracy will suffice in realistic TVM calculations even in this case.

For myself, I'm very attracted by the 40-odd digits in Free42 and DM42, and very much appreciate efforts to get errors down to as few ULPs as is practical. And I'm thrilled at the as-many-digits-as-you-have-time-to-scroll accuracy of the Android calculator.
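The finance point can be sketched with made-up numbers (the principal, rate, and term below are purely illustrative, not any real instrument): compound a trillion yen over 10,000 periods in binary doubles and in 30-digit decimal, and the two answers drift apart, typically by a fraction of a yen up to a few yen here.

```python
from decimal import Decimal, getcontext

getcontext().prec = 30
P, r, n = Decimal("1e12"), Decimal("0.0001"), 10_000   # illustrative numbers only
fv_exact = P * (1 + r) ** n                            # ≈ 2.7181e12 yen, 30-digit decimal

fv_float = 1e12 * 1.0001 ** n                          # binary doubles
# 0.0001 is not exactly representable in binary; compounding over 10,000
# periods amplifies that representation error, so the double answer is
# only good to ~12-13 significant digits of the exact decimal one:
assert abs(Decimal(fv_float) - fv_exact) < 100
```

For a reconciliation that must land on an exact number of yen, that drift is precisely the kind of discrepancy that has to be explained away.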
10-07-2023, 02:31 PM
(This post was last modified: 10-07-2023 02:35 PM by Albert Chan.)
Post: #8
RE: Is super-accuracy matters?
(10-07-2023 03:22 AM)Albert Chan Wrote: HP50g, approx(pi - 355/113) = -.00000026676

We want accurate sin(x), even for "unrealistic" x, because it may be used in "other" ways.

Example: if we have an accurate sin(pi ≈ \(\pi\)), we can evaluate the above accurately.

sin(pi) = sin(\(\pi\) - pi) ≈ \(\pi\) - pi   →   \(\pi\) ≈ pi + sin(pi)

355 = 113*pi + ε   // ε = smod(355, pi). Since ε > 0, we can replace smod with mod

(\(\pi\) - 355/113) ≈ ((pi + sin(pi))*113 - (113*pi + ε)) / 113

HP71B Wrote: >radians
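The recovery trick works verbatim in double precision: sin of the rounded pi returns the rounding error, so pi + sin(pi) is good to roughly twice the working precision. A Python sketch (the 40-digit \(\pi\) literal is the standard reference value; it assumes the platform's sin is accurate near \(\pi\), as good libms are):

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 40
PI = Decimal("3.141592653589793238462643383279502884197")   # reference value

correction = Decimal(math.sin(math.pi))   # sin(pi) ≈ π - pi, the rounding error
better_pi  = Decimal(math.pi) + correction

assert abs(Decimal(math.pi) - PI) > Decimal("1e-17")   # raw double: ~16 digits
assert abs(better_pi - PI) < Decimal("1e-29")          # corrected: ~32 digits
```

Two 16-digit pieces give a ~32-digit \(\pi\), which is exactly why an accurately reduced sin is worth having even for "unrealistic" arguments.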
10-07-2023, 03:44 PM
Post: #9
RE: Is super-accuracy matters?
The need for a more accurate pi comes into play when you are implementing trig functions and trying to produce correct results for, say, sin(π + ε) with small values of ε (but you don't know how small).

As you noted, sin(π + ε) ≈ -ε, so it's an easy answer... except you are given x = π + ε in floating point, so how do you calculate ε? ε = x - π, of course, but using the "exact" value of pi, not the approximated floating-point version. If x is very close to π, and that subtraction is off by 1 ULP, that 1 ULP might be of huge relative magnitude with respect to ε. Say the 1 ULP is 10 times smaller than ε: then you only have 1 significant digit correct in the answer, which is a "bad" answer if you are trying to implement sin() in a correct way.

Now when people tell me "but the input already comes with an uncertainty", my answer is that this by itself doesn't justify needlessly introducing more uncertainty by disrespecting the calculations. 1 ULP doesn't seem much, but when you are solving a system with 6000 equations, those tiny errors pile up like crazy over thousands of operations. The idea here is that calculations should be as accurate as possible, such that the answer after many operations still has an uncertainty based on the input uncertainty, not the input uncertainty plus all this garbage random uncertainty I introduced because I was lazy in my implementation.

It's my opinion, of course, and I'm an engineer, so I regularly use 3 or 4 digits and that's plenty for hand calculations. But when I solve an initial value problem with 100k time steps of integration... I need to trust that those ULPs are fine, otherwise my results can diverge quickly into garbage territory.
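One standard fix for the "how do you compute ε" problem is to carry π in two doubles, hi + lo, so ε = x - π can be formed without throwing away digits. A minimal Python sketch (PI_LO is the standard double-double tail of π; the function is valid only for x very close to π):

```python
import math

PI_HI = math.pi                    # high 53 bits of pi
PI_LO = 1.2246467991473532e-16     # next bits: the double nearest (pi - PI_HI)

def sin_near_pi(x):
    """sin(x) for x near pi, via sin(x) = sin(pi - x) ≈ pi - x with a split pi."""
    eps = (PI_HI - x) + PI_LO      # PI_HI - x is exact here (Sterbenz lemma)
    return eps                     # absolute error ~ eps**3/6

x = 355 / 113                      # a double just past pi
assert abs(sin_near_pi(x) - math.sin(x)) < 1e-17
```

Real libm argument reduction (Payne-Hanek) is this same idea pushed to hundreds of bits of π, so it works for any x, not just near π.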
10-07-2023, 03:55 PM
(This post was last modified: 10-07-2023 04:01 PM by Claudio L..)
Post: #10
RE: Is super-accuracy matters?
HP71B Wrote: >radians

newRPL can help here. Set it to a higher number of digits, then:

π 355 113 / - 355 113 / SIN -

and you get what the error should be in the sin(π-x) ≈ x approximation for this case: 3.16396255835E-21

Since the answer is 2.667E-7, you could be getting 21-7 = 14 good digits using that approximation if you didn't introduce any more ULP errors in intermediate operations.
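That 3.16E-21 is exactly the first dropped Taylor term: sin(ε) = ε - ε³/6 + ..., with ε = π - 355/113. A quick cross-check with Python's decimal module (the 40-digit π literal is the standard reference value):

```python
from decimal import Decimal, getcontext

getcontext().prec = 40
PI = Decimal("3.141592653589793238462643383279502884197")

eps = PI - Decimal(355) / Decimal(113)    # ≈ -2.66764189062E-7
cubic = eps ** 3 / 6                      # first dropped term of sin(eps) ≈ eps

# magnitude agrees with newRPL's 3.16396255835E-21
assert abs(abs(cubic) - Decimal("3.16396255835E-21")) < Decimal("1e-31")
```

So the "digits cut in half" picture and the 14-good-digits estimate are both just this cubic term.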
10-08-2023, 04:37 AM
Post: #11
RE: Is super-accuracy matters?
Well, I'm an engineer, and in my field nothing is really known to within better than 1%, and honestly, that could be 10%! Surveying is probably the most accuracy-demanding field I'm familiar with, and an extreme example might be, say, 1 mm in 1 km, so basically 6 sig figs.

Another case is GPS satellite navigation, which is a field where accurate resolution depends on minuscule differences in timing.

But in any case, I reckon being able to follow through on outstanding mathematical accuracy is still a very respectable and worthwhile venture, even if it's just abstract, just because! Even if the need for it is not obvious, it's still fully justifiable as a piece of mathematics, and, so many times in science and theory, the solution precedes the problem.

So although the maths is out of my range, I reckon keep going!
10-08-2023, 10:04 PM
Post: #12
RE: Is super-accuracy matters?
HP71B Reference Manual, p241 Wrote: RED is the remainder function defined by the IEEE Floating Point Standard.

I just learned the HP71B has remainder(). Does the HP50g have it too?

x = float(n/d)
n = x*d + red(n,x)              // equality *exact*
sin(x) ≈ \(\pi\) - x            // if x close enough to \(\pi\)
\(\pi\) - n/d ≈ sin(x) - red(n,x)/d

>n=355 @ d=113 @ x=n/d
>sin(x) - red(n,x)/d
-2.66764189063E-7
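The same computation can be reproduced in any language that exposes the IEEE remainder; a Python sketch (math.remainder is the IEEE 754 remainder, the analogue of the HP-71B's RED):

```python
import math

n, d = 355, 113
x = n / d                      # the double nearest 355/113
r = math.remainder(n, x)       # IEEE remainder: n - x*round(n/x); round(n/x) = d here

approx = math.sin(x) - r / d   # ≈ pi - 355/113
assert abs(approx - (-2.667641890624223e-7)) < 1e-19
```

This agrees with the HP-71B value above to within one unit of its 12th displayed digit (the small difference comes from the 71B's 12-digit arithmetic).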
10-09-2023, 07:12 PM
(This post was last modified: 10-09-2023 07:14 PM by johnb.)
Post: #13
RE: Is super-accuracy matters?
(10-07-2023 03:44 PM)Claudio L. Wrote: But when I solve an initial value problem with 100k time steps of integration... I need to trust that those ULP's are fine, otherwise my results can diverge quickly into garbage territory.

Surprisingly, I once had to argue this point with other engineers. We were re-implementing the data reduction in an existing chemistry instrument for measuring pore size and pore volumes in very fine particulates, using the Barrett-Joyner-Halenda method. Whew. The argument was a lot of work.

My math theory wasn't strong enough to produce a convincing abstract argument, so I had to brute-force it with realistic examples. "Seeing is believing?" I had to find some degenerate cases that would produce garbage results with our old implementation, and would also produce different garbage using our new C++ compiler's math library implementation... and then show how a more accurate implementation would produce (a) practically the same results when prior ones were not garbage, (b) good results for many of the prior garbage results, and (c) well-behaved NaNs or infinities for the very few remaining results that were still garbage (so you could tell they were garbage at a glance).

It might have been fun if I hadn't been both on a deadline, and a lone voice calling out in the wilderness.

Daily drivers: 15c, 32sII, 35s, 41cx, 48g, WP 34s/31s. Favorite: 16c. Latest: 15ce, 48s, 50g. Gateway drug: 28s found in yard sale ~2009.
10-09-2023, 07:24 PM
(This post was last modified: 10-09-2023 07:25 PM by johnb.)
Post: #14
RE: Is super-accuracy matters?
I think an interesting branch-off discussion (maybe in a different thread?) would be, "how do you convince others that additional accuracy/precision is/is-not needed?" (For various scenarios.) Also taking into account the differences in audiences.
For example, back in the 1990s I found myself stonewalled by a team of fellow software engineers who insisted that double-precision floating point (on a 32-bit machine, AFAICR) was sufficient for an accounting suite that was supposed to be able to handle GNP-sized quantities across any of the common world currencies.

I was able to make my point that we should use a variable-digits exact BCD library by (again like the above) coming up with a few examples that would be off by a few thousand dollars for a company the size of, say, Nestlé. And I made sure I had both the accountant SMEs and the other software designers in the room. The engineers instantly said "oh, that's in the noise for those size transactions" and the accountants just flipped out. And made my argument for me.

* * *

Any other interesting examples of how people successfully (or unsuccessfully) argued one side or the other?
10-10-2023, 07:33 PM
Post: #15
RE: Is super-accuracy matters?
Albert, I think I don't understand much of your posts, but I find them intriguing! This may be because I am neither a mathematician nor an engineer.

There is something deeply satisfying about knowing something is correct to a particular level of precision. Last weekend I had a go at implementing this trig program on the HP-12C, and I'm genuinely amazed that I can get that calculator to do sin/cos/tan accurate to 12 digits.

For me, it isn't necessarily how accurate the results are, but that a standard can be expected. I.e., I'd like the 10 digits of the display to be accurate digits, and if it can't achieve that, I'd prefer it simply left off the digits that aren't accurate.
10-11-2023, 12:21 AM
(This post was last modified: 10-13-2023 04:52 PM by Albert Chan.)
Post: #16
RE: Is super-accuracy matters?
(10-09-2023 07:24 PM)johnb Wrote: "how do you convince others that additional accuracy/precision is/is-not needed?"

It may be hard to show the benefit, but we can show how little it costs.

For example, I recently upgraded old luajit 1.1.8 with openlibm, for better accuracy. Old luajit was using msvcrt.dll FSIN/FCOS/FPTAN, with only 68 bits of pi; see "Intel Underestimates Error Bounds by 1.3 quintillion".

Code:
OLD          NEW = luajit + openlibm

Adding openlibm code to luajit (*) does not bloat up the dll's. (And, no cheating! libopenlibm.a is a static library.)

I don't quite understand how the angle-reduction code works, but the result is amazing! Performance is OK; see the thread "Accurate trigonometric functions for large arguments".

lua> x = 1e22
lua> sin(x), dtoh(x)
-0.8522008497671888    +0x1.0f0cf064dd592p+73
lua> !spigot --printf "%.17g" sin(0x1.0f0cf064dd592p+73)
-0.8522008497671888

lua> x = 1e308
lua> sin(x), dtoh(x)
0.4533964905016491    +0x1.1ccf385ebc8a0p+1023
lua> !spigot --printf "%.17g" sin(0x1.1ccf385ebc8a0p+1023)
0.45339649050164912

The last 2 numbers actually matched: luajit uses dtoa-fast for numeric output, and the default is to output the minimum digits that round-trip, same as Python 3.

(*) The reason the math-related code moves *inside* lua is the mingw compiler design. If openlibm is *inside* lua51.dll, its code gets picked first, before msvcrt.dll.

Luajit 1.1.8, at -O2 or higher, uses hardware FSIN/FCOS/FPTAN for speed. The patch below removes this optimization and uses the libm versions instead.

In ljit_x86.h, add sin/cos/tan (exact order!) to the partially inlined math functions (patch in bold). Then remove the jit_inline_math() sin/cos/tan cases, and use the default (libm) instead.

ljit_x86.h Wrote: /* Partially inlined math functions. */

The proper way is to patch ljit_x86_inline.dash, then auto-generate ljit_x86.h. But that requires a working lua to be available.
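The huge-argument results can be cross-checked without spigot, by reducing x mod 2π in exact rational arithmetic. A Python sketch (the 60-digit π literal is the standard reference value; the asserts assume the platform libm does full-width argument reduction, as glibc and openlibm do):

```python
import math
from fractions import Fraction

# pi to 60 digits, plenty to reduce x ~ 1e22 mod 2*pi to double precision
PI60 = Fraction("3.141592653589793238462643383279502884197169399375105820974944")

def reduce_2pi(x):
    """x mod 2*pi, computed exactly with rationals (a brute-force Payne-Hanek)."""
    two_pi = 2 * PI60
    q = Fraction(x) / two_pi          # Fraction(x) is the exact value of the double
    return float((q - math.floor(q)) * two_pi)

x = 1e22
# sin of the exactly reduced angle must agree with sin(x) itself:
assert abs(math.sin(reduce_2pi(x)) - math.sin(x)) < 1e-13
# and both agree with the spigot value quoted above:
assert abs(math.sin(x) - (-0.8522008497671888)) < 1e-15
```

For x = 1e308 the same check works, but PI60 would need to be extended to 320+ digits to cover the larger quotient.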
10-13-2023, 03:08 PM
Post: #17
RE: Is super-accuracy matters?
(10-11-2023 12:21 AM)Albert Chan Wrote: lua> x = 1e308

BC isn't agreeing.

Code:
scale = 5000
10-13-2023, 03:21 PM
Post: #18
RE: Is super-accuracy matters?
"Another case is GPS satellite navigation, which is a field where accurate resolution depends on minuscule differences in timing."
And relativity! I think the super-accuracy in scientific calculators came about because it was the one real advantage they had over slide rules - other than addition, of course 8^).
10-13-2023, 03:36 PM
(This post was last modified: 10-13-2023 03:44 PM by johnb.)
Post: #19
RE: Is super-accuracy matters?
(10-13-2023 03:21 PM)KeithB Wrote: I think the super-accuracy in scientific calculators came about because it was the one real advantage they had over slide rules - other than addition of course 8^).

Not the only big advantage. Don't forget programmability! When doing the same set of calculations over and over again (chemistry or astronomy class, anyone?), programmability was invaluable simply because it avoided mistakes such as step omissions. Not to mention how much it sped up the tedium in the first place.

Having said that, prior to programmable calculators, accuracy (and just plain nerdy niftiness!) was a big point! I remember as a kid being amazed that I could multiply 1234 x 5678 and instead of getting ~7x10⁶, it would (more or less promptly) flash "7,006,652." "Wow, I've finally gotten good enough at eyeballing this cursor to get 2-3 significant digits, and this new clicky LED machine gives me EIGHT?"
10-13-2023, 07:28 PM
Post: #20
RE: Is super-accuracy matters?
(10-13-2023 03:08 PM)dm319 Wrote:
(10-11-2023 12:21 AM)Albert Chan Wrote: lua> x = 1e308

If we try the hexfloat, which is what lua actually gets, it matches.

bc's s(x) internally uses about scale*1.1 digits for pi/4, so scale=308 should give us a good FIX20 of sin(x = float(1e308)).

bc Wrote: ibase = 16
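The earlier bc mismatch came from feeding bc the decimal literal 1e308 instead of the double that the literal actually becomes. Python shows the distinction directly (a sketch; float.hex prints the same hexfloat as luajit's dtoh, minus the leading '+'):

```python
from fractions import Fraction

x = 1e308
print(x.hex())                      # the double lua actually holds
assert x.hex() == '0x1.1ccf385ebc8a0p+1023'

assert Fraction(x) != 10**308       # the stored double is not exactly 10^308
gap = (Fraction(x) - 10**308) / 10**308
assert abs(gap) < Fraction(1, 2**52)   # but it is within one ulp, relatively
```

So to reproduce luajit's sin(1e308) in bc, you must give bc the hexfloat (or its exact decimal expansion), not 1e308.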