Is super-accuracy matters?
10-13-2023, 08:44 PM
Post: #21
RE: Is super-accuracy matters?
(10-13-2023 03:36 PM)johnb Wrote:  
(10-13-2023 03:21 PM)KeithB Wrote:  I think the super-accuracy in scientific calculators came about because it was the one real advantage they had over slide rules - other than addition of course 8^).

Not the only big advantage. Don't forget programmability!

Programmability is what got me into calculators, first the TI-58c, then the TI-59, then the HP-41cx.

Quote:When doing the same set of calculations over and over again (chemistry or astronomy class, anyone?), programmability was invaluable simply because it avoided mistakes such as step omissions. Not to mention how much it speeded up the tedium in the first place.

There were iterative processes I wanted to do that I wouldn't live long enough to do on a slide rule.  The calculator could do ten steps per second, plus conditional branching!  Oooooooo!  Big Grin  I had programs I would let run for as much as 24 hours, which means close to a million instructions executed, with no mistakes.  I was in good practice with the slide rule though, so for single operations, the calculator was not significantly faster.

Quote:Having said that, prior to programmable calculators, accuracy (and just plain nerdy niftiness!) was a big point! I remember as a kid, just being amazed that I could multiply 1234 x 5678 and instead of getting ~ 7x10¹², it would (more or less promptly) flash "7,006,652."

Er... make that 7.01x10^6   (not 10^12)

Quote:"Wow, I've finally gotten good enough at eyeballing this cursor to get 2-3 significant digits, and this new clicky LED machine gives me EIGHT?"

You should always be able to get three digits on the C, CF, CI, CIF, D, and DF scales, and usually four if the first digit is a 1 or maybe even a 2.  That's not to say the 4th digit will always be correct, but it will make your answer more accurate than not using it at all, for example 1.237 where perhaps it should have been 1.236.  On other scales you may get more.  For example, on the LL1 scale for natural logs, you can read off 1.01014.  The scale goes from 1.01 at the left end (for slightly below e^.01) to 1.105 at the right end (for slightly below e^.1).  The next scale up is LL2, which goes from there to e, then LL3, which goes from e to e^10 or about 22,000 (and each little mark is 1000, meaning there's very little precision left at the right end).  However, I've never seen an online video where the operator knew how to roll his fingers to microadjust the slide or even the cursor for greatest precision.  There's a way to do it quickly, to the limits of good human visual acuity.
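A quick sanity check of those LL-scale end points, using Python's standard math library (a throwaway snippet, not part of the original post):

```python
import math

# End points of the log-log scales described above
print(math.exp(0.01))   # 1.01005...  -- just above the 1.01 left end of LL1
print(math.exp(0.1))    # 1.10517...  -- just above the 1.105 right end of LL1
print(math.exp(10))     # 22026.46... -- right end of LL3
```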

http://WilsonMinesCo.com  (Lots of HP-41 links at the bottom of the links page, at http://wilsonminesco.com/links.html#hp41 )
10-13-2023, 08:57 PM
Post: #22
RE: Is super-accuracy matters?
"Not the only big advantage. Don't forget programmability!"
"super-accuracy" came first. This is not a "why is a calculator better than a slide rule" thread!
10-14-2023, 03:38 AM
Post: #23
RE: Is super-accuracy matters?
(10-13-2023 08:44 PM)Garth Wilson Wrote:  I was in good practice with the slide rule though, so for single operations, the calculator was not significantly faster.
In 1981 in college chemistry, we had a prof who would run us through speed drills if the class finished a bit early. 99% of the class had calculators; he had a K+E slipstick, and he nearly always won.

I Wrote:I remember as a kid, just being amazed that I could multiply 1234 x 5678 and instead of getting ~ 7x10¹², it would (more or less promptly) flash "7,006,652."
Garth Wrote:Er... make that 7.01x10^6   (not 10^12)

And now the real reason I like calculators comes out. Maybe I was bad at using a slide rule? Could be worse... I could be an engineer who's bad at math?

I Wrote:"Wow, I've finally gotten good enough at eyeballing this cursor to get 2-3 significant digits, and this new clicky LED machine gives me EIGHT?"

Garth Wrote:You should always be able to get three digits on the C, CF, CI, CIF, D, and DF scales, ... On other scales you may get more.  For example, on the LL1 scale for natural logs, you can read off 1.01014. 

Definitely proves I no longer remember how to use a slide rule.

Daily drivers: 15c, 32sII, 35s, 41cx, 48g, WP 34s/31s. Favorite: 16c.
Latest: 15ce, 48s, 50g. Gateway drug: 28s found in yard sale ~2009.
10-20-2023, 12:28 AM (This post was last modified: 10-20-2023 12:29 AM by Peter Klein.)
Post: #24
RE: Is super-accuracy matters?
I’m curious: How many digits are enough, in various real-world fields?

I notice that if I convert the distance halfway around the earth from km to miles, the difference between using 1.61 and 1.609344 is a whopping 5.25 km. Should I care?
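The size of that effect is easy to check. A sketch, assuming the equatorial circumference of about 40,075 km is the figure being halved (the post doesn't say which figure was used):

```python
# Halfway around the earth, converted to miles with a coarse and a precise
# factor. The circumference figure is an assumption; the post doesn't give one.
half_earth_km = 40075.0 / 2          # equatorial circumference / 2
coarse = half_earth_km / 1.61        # miles, using the rounded factor
precise = half_earth_km / 1.609344   # miles; 1 mile = 1.609344 km exactly
diff_mi = precise - coarse
print(diff_mi)                       # about 5 miles (~8 km) of discrepancy
```

With these numbers the discrepancy comes out at about 5 miles, the same order of magnitude as the figure quoted above.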

How many digits did we need to go to the moon? To design a car engine, or the robotics in the car factory? To design a smartphone? The lenses in my glasses? The tomography machine that can measure the curvature of my retina? To navigate a ship or airplane? To measure spectroscopically the light from a distant star or planet and determine useful things about its composition? To determine that yes, you really did detect a Higgs boson or a gravitational wave?

True story: My father was a theoretical physicist (classical physics, not nuclear). He used a slide rule from the 1930s through the early 70s. Then he got an early TI calculator that was, IIRC, only accurate to 4 or 5 decimal places when doing certain trig and log calculations. He was quite happy with it for years. If he needed more accuracy, he punched Fortran programs on cards and sent them to the big computer at work.

My mother, OTOH, could not understand why, when she divided something by three on a four-function calculator, she got results that ended in .3333333 or .6666667. “That’s not good enough! I want the exact answer!” she would say.
10-20-2023, 07:02 AM
Post: #25
RE: Is super-accuracy matters?
(10-20-2023 12:28 AM)Peter Klein Wrote:  I’m curious: How many digits are enough, in various real-world fields?

I notice that if I convert the distance halfway around the earth from km to miles, the difference between using 1.61 and 1.609344 is a whopping 5.25 km. Should I care?

How many digits did we need to go to the moon? To design a car engine, or the robotics in the car factory? To design a smartphone? The lenses in my glasses? The tomography machine that can measure the curvature of my retina? To navigate a ship or airplane? To measure spectroscopically the light from a distant star or planet and determine useful things about its composition? To determine that yes, you really did detect a Higgs boson or a gravitational wave?

True story: My father was a theoretical physicist (classical physics, not nuclear). He used a slide rule from the 1930s through the early 70s. Then he got an early TI calculator that was, IIRC, only accurate to 4 or 5 decimal places when doing certain trig and log calculations. He was quite happy with it for years. If he needed more accuracy, he punched Fortran programs on cards and sent them to the big computer at work.

My mother, OTOH, could not understand why, when she divided something by three on a four-function calculator, she got results that ended in .3333333 or .6666667. “That’s not good enough! I want the exact answer!” she would say.
(My highlighting).
I have asked myself the exact same question.
Yes, it is fun to have something calculated to more than 1.000 decimal places, or to have my land plot calculated to the nearest square Planck length.
Still, I was curious as to how accurate one has to be. I found that NASA uses 16 decimal places of Pi when calculating orbits and spacecraft operations.

As a fun "experiment", I obtained the most accurate values for the distance earth/sun (1 AU, 149.597.870.800 metres), the exact length of a day (23h 56m 04,1s), and the exact length of one tropical year (365,2422 days).
Now, the earth's orbit is not exactly circular, but the AU is a mean value, so for the purpose of finding the difference in accuracy, I assumed a circular orbit based on the mean value of the AU.
Pi with 15 vs 16 decimal places yields a discrepancy of 0,0598 mm, or about the diameter of a human hair (60 µm).
12 vs 16 decimal places yields 237,322 mm.

Given the exact time of one (circular) earth orbit (755.298.370,68048 seconds), one gets the following:
Orbit time difference, 12 vs 16 decimal places of Pi: ~7,9µs.
Orbit time difference 15 vs 16 decimal places of Pi: ~2,00349 nanoseconds.

The worst-case accuracy scenario should then give us a leap second every 160.000 years, while in reality we have a leap second every 2-3 years.
Out of this, we can conclude that not even nature itself is that accurate Wink
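The circumference figures above can be reproduced with Python's decimal module (a sketch; the AU value and the truncation of pi to a given number of decimal places are taken from the post):

```python
from decimal import Decimal, ROUND_DOWN, getcontext

getcontext().prec = 50
PI = Decimal("3.14159265358979323846264338327950288")
AU_M = Decimal("149597870800")   # mean earth/sun distance in metres, per the post

def circumference(decimals):
    """Orbit circumference using pi truncated to `decimals` decimal places."""
    pi_cut = PI.quantize(Decimal(10) ** -decimals, rounding=ROUND_DOWN)
    return 2 * pi_cut * AU_M

diff_15_16_mm = (circumference(16) - circumference(15)) * 1000
diff_12_16_mm = (circumference(16) - circumference(12)) * 1000
print(diff_15_16_mm)   # ~0.0598 mm, as quoted above
print(diff_12_16_mm)   # ~237.322 mm, as quoted above
```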

Esben
15C CE, 28s, 35s, 49G+, 50G, Prime G2 HW D, SwissMicros DM32, DM42, DM42n, WP43 Pilot
Elektronika MK-52 & MK-61
10-20-2023, 08:55 AM
Post: #26
RE: Is super-accuracy matters?
Orbits and other spacecraft trajectories are a good example of systems which can be very sensitive to initial conditions. It's much deeper than just knowing how far to travel. But on the other hand, in an engineering context, you won't have super-accurate measurements of physical quantities so that limits what you can do. (Probably for related reasons, spacecraft do perform mid-course corrections.)

But you wouldn't want the limited precision of your calculator, or of your value of pi, to be the limiting factor. You'd want headroom. (You might also want to calculate with error bars, or with Monte Carlo techniques, to determine whether the sensitivity of your solution makes it impractical.)

It's notable, I think, that the recent claimed detection of nanohertz (long wavelength) gravitational waves, which involved measurements of distant pulsars, was made possible because of recent improvements to estimates of Jupiter's vital statistics. Jupiter is big enough and far enough from the Sun that the barycentre of the pair is outside the Sun, and that barycentre is (kind of) the point around which the Earth orbits. So if you want to know where you are, you need to know where Jupiter is. Accuracy required - but I don't know how much.
10-20-2023, 12:07 PM
Post: #27
RE: Is super-accuracy matters?
(10-20-2023 07:02 AM)DA74254 Wrote:  Out of this, we can conclude that not even nature itself is that accurate Wink

Profound enough to make it one's signature Smile

Remember kids, "In a democracy, you get the government you deserve."
10-20-2023, 04:40 PM (This post was last modified: 10-20-2023 04:41 PM by johnb.)
Post: #28
RE: Is super-accuracy matters?
(10-20-2023 08:55 AM)EdS2 Wrote:  ... But you wouldn't want the limited precision of your calculator, or of your value of pi, to be the limiting factor. You'd want headroom.

Exactly. Especially if your calculations are repetitive and build on each other, or are a very long chain of different manipulations. Having a few [extra] guard digits goes a long way towards avoiding an accumulation of roundoff errors.

Of course, big-precision arithmetic won't save you from naïveté. The implementer is still responsible for couching their solutions in terms of operations that don't unnecessarily throw away precision or otherwise become degenerate. (For example, we should use the Kahan summation algorithm for long running sums, and we should rewrite expressions to avoid the edge cases in things like trig functions --- something I've repeatedly nailed myself with over the years and thus been forced to wear the dunce cap.)
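For readers who haven't met it, Kahan (compensated) summation carries the rounding error of each addition in a separate variable and feeds it back into the next term. A minimal Python sketch (not from the post):

```python
import math

def kahan_sum(values):
    """Compensated summation: c carries the low-order bits lost by each add."""
    total = 0.0
    c = 0.0
    for x in values:
        y = x - c            # apply the stored compensation to the next term
        t = total + y
        c = (t - total) - y  # recover what the addition just rounded away
        total = t
    return total

vals = [0.1] * 1_000_000
print(sum(vals))         # naive sum drifts from 100000 by roughly 1e-6
print(kahan_sum(vals))   # agrees with math.fsum(vals) to within a few ulps
```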

Daily drivers: 15c, 32sII, 35s, 41cx, 48g, WP 34s/31s. Favorite: 16c.
Latest: 15ce, 48s, 50g. Gateway drug: 28s found in yard sale ~2009.
10-20-2023, 07:48 PM (This post was last modified: 10-20-2023 08:52 PM by Johnh.)
Post: #29
RE: Is super-accuracy matters?
This is a nice thread, so I'd like to show you something very personal from my family. In the late 1800s, my great-great (great) uncle was a young engineer working on the mechanism for lifting the decks of Tower Bridge in London. After that, he set off around the world on a career as an engineer in the British Empire, in southern Africa and eventually in New Zealand.

He took with him his Fuller's cylindrical slide rule, and I have it here. While slide rules were usually about 12" long and could, with care, offer accuracy to about 3 digits, this one is accurate to 5 digits. This was done by using a set of very long scales wrapped in a spiral around a pair of concentric cylinders. Effectively, the scales are over 40' long.

Ours was probably made around 1898; here it is:

[photo attachment]

It's beautifully made from fine hardwoods, with scales printed and then bonded. The two cylinders move smoothly, with what feels like a layer of velvet between them. Precisely machined brass bars are aligned and then read against the printed scales.

So how accurate is it really?

Consider a calculation with quite a lot of digits involved, such as 1.2345 x 2.3456

Here is the setting of the two input numbers:

[two photo attachments]

And the result (with a 21st-century calc app):

[photo attachment]

You can clearly read 289 printed just to the left of the pointer, then the next digit 5 on the scale, and interpolate quite clearly for the 5th digit 6, so 2.8956!
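For the record, a one-line check of that reading against double precision (overkill, obviously, but it confirms all five digits):

```python
# The Fuller-rule example: 1.2345 x 2.3456, read from the rule as 2.8956
a, b = 1.2345, 2.3456
print(a * b)             # 2.8956432...
print(round(a * b, 4))   # 2.8956 -- matching the five digits read off the rule
```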

I find it amazing, firstly that someone could conceive of this device in the 19th century (George Fuller, professor of engineering at Queen's College, Belfast, patented in 1878), and then that it could be made accurately enough so that it worked.

Ours is in remarkably good condition, almost 'New in Box'. This suggests to me that none of my ancestors ever really got around to figuring out how to use it! Though we do have a printed manual.

Apparently these were made up into the 1970s, in different materials such as Bakelite: a production run of almost 100 years. So they crossed over with the early HP scientifics, making the 40-year-old design of my HP-15C look like new tech!

The device is covered in a few websites, but here is a nice one, featuring a collection of other interesting old calc tech:

https://www.pghtech.org/news-and-publica...20or%20two.
10-20-2023, 09:06 PM
Post: #30
RE: Is super-accuracy matters?
That's a marvellous device!
10-22-2023, 07:44 PM
Post: #31
RE: Is super-accuracy matters?
Mother of all Slide Rules! The Fuller Calculator


10-24-2023, 01:33 PM
Post: #32
RE: Is super-accuracy matters?
(10-07-2023 07:24 AM)EdS2 Wrote:  I think this is related: we might ask ourselves, where in the real world might we need unexpectedly great accuracy?



One of the examples often seen is in finance: calculations involving some small rate of interest applied over very many terms ultimately involve transcendental functions and scaled integers. A friend of mine, at the time working in finance, was sorely challenged to reconcile the value of some financial instrument to some precise number of Japanese yen. Of course, the calculation had to agree with some official way of doing the maths. In the US, at least, I gather that the HP-12C has now become enshrined as the official way to get the one true exactly correct answer. But getting a million-dollar sum correct to one cent is still only ten digits or so. Even a trillion dollars of value expressed in yen is just 15 digits. It might be that 30 digits of accuracy will suffice in realistic TVM calculations even in this case.

Which is why Microsoft introduced the "decimal" type:
"The Decimal value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The default value of a Decimal is 0. The Decimal value type is appropriate for financial calculations that require large numbers of significant integral and fractional digits and no round-off errors."

"The binary representation of a Decimal value is 128 bits, consisting of a 96-bit integer number and a 32-bit set of flags representing things such as the sign and the scaling factor used to specify what portion of it is a decimal fraction. Therefore, the binary representation of a Decimal value is of the form ((-2^96 to 2^96) / 10^(0 to 28)), where -(2^96-1) is equal to MinValue, and 2^96-1 is equal to MaxValue."

https://learn.microsoft.com/en-us/dotnet...ew=net-7.0
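Python's decimal module makes the same point as the quoted .NET documentation (a stand-in sketch, not the .NET Decimal type itself):

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly; a decimal type can.
print(0.1 + 0.2)                         # 0.30000000000000004
print(Decimal("0.1") + Decimal("0.2"))   # 0.3 exactly
```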
10-24-2023, 03:38 PM
Post: #33
RE: Is super-accuracy matters?
(10-24-2023 01:33 PM)KeithB Wrote:  Which is why Microsoft introduced the "decimal" type:
… as the last one behind all other vendors, which is the usual case with them ;-) Parametrized decimals are a pretty much mandatory item for any business programming language or database system. Alas, all that has little to do with calculators, sorry for going further astray…
10-24-2023, 03:46 PM
Post: #34
RE: Is super-accuracy matters?
(10-24-2023 01:33 PM)KeithB Wrote:  Which is why Microsoft introduced the "decimal" type ...
https://learn.microsoft.com/en-us/dotnet...ew=net-7.0

Also, in SQL Server / Transact SQL, the DECIMAL or NUMERIC type, which is very similar.

I've always specified things like DECIMAL(19) for small monetary values and DECIMAL(29) for large ones, since that can handle values as large as a country's GNP, and can handle conversions between world currencies to an accuracy equivalent to any denomination.

I've always wondered why T-SQL's "MONEY" and "SMALLMONEY" types have only two guard digits and not three, so I've been wary of using them. Am I mistaken -- aren't there cases where the lack of a third guard digit can cause an error to show up in the result?

[Yes, this is now "Not [even] remotely HP Calculators" - LOL.]

Daily drivers: 15c, 32sII, 35s, 41cx, 48g, WP 34s/31s. Favorite: 16c.
Latest: 15ce, 48s, 50g. Gateway drug: 28s found in yard sale ~2009.
10-24-2023, 05:20 PM (This post was last modified: 10-24-2023 05:21 PM by EdS2.)
Post: #35
RE: Is super-accuracy matters?
Just to note, decimal arithmetic is by no means a panacea for these kinds of problems, even in finance. A TVM problem involves exponentials and logarithms. Even calculating margins or taxes involves multiplying or dividing amounts by fractions. Money works as whole numbers only in simple cases, like adding numbers up or multiplying amounts by integer quantities.

The one advantage of decimal is in reducing surprise (by which I include matching official calculations) and that's enough for it to be preferred in some contexts.
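That limitation is easy to demonstrate, with Python's decimal module standing in for any decimal type: the moment a division doesn't terminate in base 10, decimal arithmetic rounds just as binary does.

```python
from decimal import Decimal, getcontext

getcontext().prec = 28            # the module's default working precision
third = Decimal(1) / Decimal(3)
print(third)        # 0.3333333333333333333333333333 -- rounded, not exact
print(third * 3)    # 0.9999999999999999999999999999 -- not 1
```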

There's a good page here
https://speleotrove.com/decimal/
with links including a FAQ
10-30-2023, 11:46 AM
Post: #36
RE: Is super-accuracy matters?
(10-20-2023 07:48 PM)Johnh Wrote:  He took with him his Fullers Cylindrical Sliderule, and I have it here.

This is fascinating, thank you for sharing!!

Funnily enough, I've just had my parents over, and my mother brought her abacus. One of our kids was interested in it and has been using it since. I guess this is an early decimal device. I remember as a kid my mum doing her accounts with the abacus.
11-18-2023, 05:26 AM (This post was last modified: 11-18-2023 05:35 AM by Matt Agajanian.)
Post: #37
RE: Is super-accuracy matters?
Hi all.

All the posts here make for a fascinating read, but they raise a few questions:

If computers or supercomputers with hundreds of digits of accuracy are needed for some scientific disciplines, why would you use a calculator at all? Doesn't all this discussion mean that any calculator from anyone (Sharp, Casio, Canon, TI, and others, even, dare I say, HP) is just a toy: inadequate, even obsolete?

If all this discussion leads to the idea that high accuracy (maybe not in the realm of super-accuracy) is of critical importance, how was it possible, with the lower accuracy obtainable from 60s and 70s computer technology, to build the rockets that took astronauts to the moon and into outer space? And how did medicine, structural engineering, pharmaceuticals, electrical engineering, and the other sciences cope with the accuracy limitations of 60s/70s computers?

How usable and reliable was the mathematics of those eras, given the computing limitations?
11-18-2023, 09:13 AM
Post: #38
RE: Is super-accuracy matters?
I don't see the logic in your query. Even if some applications need something that almost all calculators lack, it can still be true that most applications don't need it and that almost all calculators are generally useful. Which is the case.

You seem to be demanding that any useful calculator must be able to apply itself to every meaningful problem. That seems unrealistic.
11-18-2023, 09:24 AM (This post was last modified: 11-18-2023 09:24 AM by klesl.)
Post: #39
RE: Is super-accuracy matters?
There are many reasons in favor of calculators; accuracy is just one piece of the mosaic.
11-18-2023, 09:31 AM
Post: #40
RE: Is super-accuracy matters?
(11-18-2023 09:13 AM)EdS2 Wrote:  I don't see the logic in your query. Even if some applications need something that almost all calculators lack, it can still be true that most applications don't need it and that almost all calculators are generally useful. Which is the case.

You seem to be demanding that any useful calculator must be able to apply itself to every meaningful problem. That seems unrealistic.

Okay. So in what kinds of cases are calculators suitable? For example, in what context would calculators be useful in medicine, aeronautics, structural engineering, or any other discipline, instead of desktop or mainframe computers?