HP Forums
Different trig algorithms in CAS and Home? - Printable Version

+- HP Forums (https://www.hpmuseum.org/forum)
+-- Forum: HP Calculators (and very old HP Computers) (/forum-3.html)
+--- Forum: HP Prime (/forum-5.html)
+--- Thread: Different trig algorithms in CAS and Home? (/thread-9849.html)

Pages: 1 2


RE: Different trig algorithms in CAS and Home? - Tim Wessman - 01-05-2018 12:35 AM

1. Because it is basically useless for numerical calculations.
2. Causes much more memory to be used for no benefit. (think matrices and other items with potentially thousands of numbers)
3. Why aren't all computers 1024 bits now? Would make really long numbers possible...


RE: Different trig algorithms in CAS and Home? - chromos - 01-05-2018 01:33 AM

My question was rather a reaction to the posted example, where the accuracy of the calculations was compared and the results differed in the fifth or so significant digit. In any case, my question was not a complaint, but rather curiosity.

BTW, I have my Prime set to 'Rounded 4' most of the time.


RE: Different trig algorithms in CAS and Home? - Dieter - 01-05-2018 08:48 AM

(01-05-2018 01:33 AM)chromos Wrote:  My question was rather a reaction to the posted example, where the accuracy of the calculations was compared and the results differed in the fifth or so significant digit. In any case, my question was not a complaint, but rather curiosity.

This is simply caused by the behavior of the tangent function for arguments close to pi/2, where the tangent and its derivative approach infinity. If you want the tangent of 355/226 to agree with the true result to 12 significant digits, you have to carry the argument to at least 20 (!) digits.

This is an example of a function where a change in the 12th (last) digit of the input may cause changes in the 4th digit of the result. More precisely: at this point the output varies by 5.62E+13 times the input change. Since the input has an inherent roundoff error of ±5E–12, the calculated tangent cannot be more precise than ±281 (!). This means that 4 significant digits are the best you can get.
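Dieter's figures can be reproduced with a short sketch in ordinary double precision (this is my own illustration, not code from any calculator):

```python
import math

x = 355 / 226            # just past pi/2, where tan() blows up
t = math.tan(x)

# Condition of tan near pi/2: d(tan x)/dx = 1 + tan(x)**2
amplification = 1 + t * t
print(f"tan(355/226)  = {t:.4f}")
print(f"amplification = {amplification:.3e}")   # about 5.62E+13

# A 12-digit input carries a roundoff of about +/-5E-12; see how far
# the tangent moves when the argument shifts by that amount.
dt = math.tan(x + 5e-12) - t
print(f"output shift  = {dt:.1f}")              # on the order of 281
```

The printed amplification and output shift match the ±281 uncertainty described above.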

BTW, if you really want higher precision you can of course have it: Free42, the DM42, and the WP34s work with 34 digits, and the latter may internally use even more than that to ensure a correct result. And there are extended/arbitrary-precision libraries available for some calculators, e.g. the LongFloat library for the 50g. Maybe something similar will be available for the Prime some day as well.

Dieter


RE: Different trig algorithms in CAS and Home? - salvomic - 01-05-2018 09:10 AM

(01-05-2018 08:48 AM)Dieter Wrote:  ...Maybe sometime something similar will be available for the Prime as well.

I do hope so. About that future possibility, see also this thread.


RE: Different trig algorithms in CAS and Home? - TheKaneB - 01-05-2018 09:14 AM

Thank you everybody for the in-depth analysis. I think the numerical precision is more than adequate for practical engineering purposes, where I actually care about 3 or 4 significant digits most of the time.
The explanation of the two different results is exactly what I was looking for.


RE: Different trig algorithms in CAS and Home? - Claudio L. - 01-05-2018 03:40 PM

(01-05-2018 12:35 AM)Tim Wessman Wrote:  1. Because it is basically useless for numerical calculations.
2. Causes much more memory to be used for no benefit. (think matrices and other items with potentially thousands of numbers)

I'm surprised to see this argument. I wouldn't say useless: some problems involve thousands of intermediate operations and are potentially unstable or ill-conditioned (matrices, actually, are a good example of where extra precision *IS* useful). While the result may not be overly sensitive to the initial argument, the errors accumulated over thousands of operations can and do have a big impact.

Examples where additional digits are crucial:
* Finding all roots of a high-degree polynomial using a numerical root-finding method and polynomial deflation.
* Finding eigenvectors/eigenvalues with iterative methods. These methods are inherently unstable, and obtaining the first values in high precision is absolutely required if you want to get even decently close to the later eigenvalues.
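The eigenvalue point can be made concrete with a toy sketch (my own example, nothing from the Prime or newRPL): power iteration with Hotelling deflation on a small symmetric matrix. Deflating with a first eigenvector that is only slightly wrong visibly poisons the second eigenvalue:

```python
# Power iteration + Hotelling deflation on a 3x3 symmetric matrix.
# Eigenvalues of A are 3 + sqrt(3), 3, and 3 - sqrt(3).

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

def power_iteration(A, steps=200):
    v = normalize([1.0, 1.0, 1.0])
    for _ in range(steps):
        v = normalize(matvec(A, v))
    lam = sum(x * y for x, y in zip(v, matvec(A, v)))  # Rayleigh quotient
    return lam, v

def deflate(A, lam, v):
    # Hotelling deflation: subtract lam * v v^T to remove the found pair
    return [[A[i][j] - lam * v[i] * v[j] for j in range(3)] for i in range(3)]

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]

l1, v1 = power_iteration(A)
l2, _ = power_iteration(deflate(A, l1, v1))

# Deflate again, but with a first eigenvector that is off by ~1e-3,
# and watch the error in the second eigenvalue grow accordingly.
v1_bad = normalize([v1[0] + 1e-3, v1[1], v1[2]])
l2_bad, _ = power_iteration(deflate(A, l1, v1_bad))

print(f"lambda2, clean deflation:  error = {abs(l2 - 3.0):.2e}")
print(f"lambda2, sloppy deflation: error = {abs(l2_bad - 3.0):.2e}")
```

With an accurate first eigenpair the second eigenvalue comes out near machine precision; with the perturbed one the error is orders of magnitude larger.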

For instance, a couple of months ago I wrote the polynomial root solver for newRPL using the Laguerre method and polynomial deflation. Beyond degree 10 it gets hard to get a good answer for the higher roots unless you increase the precision. Old and slow machines didn't stand a chance against these problems, but the Prime and newer calculators are fast enough that these problems are solvable and actually a good fit for a calculator, so we had better make sure the calculator gives a decent answer.
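The deflation issue can be sketched in a few lines of double-precision Python (a toy illustration of the effect, not newRPL's actual solver): build the degree-15 polynomial with roots 1..15, deflate it by roots that are each wrong by only one part in 10^12, and look at the damage to the last remaining root:

```python
def poly_with_roots(roots):
    """Coefficients, highest power first, of prod(x - r)."""
    coeffs = [1.0]
    for r in roots:
        shifted = coeffs + [0.0]             # multiply by x
        for i in range(len(coeffs)):
            shifted[i + 1] -= r * coeffs[i]  # subtract r * old poly
        coeffs = shifted
    return coeffs

def deflate(coeffs, r):
    """Synthetic division by (x - r); the remainder is dropped."""
    q = [coeffs[0]]
    for a in coeffs[1:-1]:
        q.append(a + r * q[-1])
    return q

p = poly_with_roots(range(1, 16))
for k in range(1, 15):
    p = deflate(p, k * (1.0 + 1e-12))        # each root is off by ~1e-12
last_root = -p[1] / p[0]                     # root of the remaining linear factor
print(f"recovered last root: {last_root!r} (exact: 15)")
print(f"error: {abs(last_root - 15.0):.2e}")
```

Even though every deflating root was accurate to 12 digits, the accumulated rounding and deflation errors leave the last root noticeably less accurate, which is exactly why extra working precision helps here.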


RE: Different trig algorithms in CAS and Home? - pier4r - 01-05-2018 05:52 PM

(01-04-2018 11:36 PM)chromos Wrote:  No, it still doesn't answer my question: why does a current calculator (HP Prime) with a 32-bit ARM9 CPU @ 400 MHz have roughly the same number of significant digits as a 40-year-old calculator (TI-58/59) with a 4-bit TMC0501 @ 200 kHz? Maybe my question is badly formulated? Nevertheless, thank you for the time you put into your reply.

Because it is not so much a matter of CPU as of RAM/register size and software. If the software is written to provide only a certain floating-point precision, that's it. Making libraries for extended precision takes time, and likely there is no budget for it. HP is still a for-profit organization, so doing work that no competitor does (see TI, Casio), for few people, is uneconomical.

I can imagine every other owner of the Prime saying "but I see only 12 digits in 2018, I want to see 2018 digits instead", but practically speaking almost no one cares past the 3rd digit. Therefore there is no real market incentive to do it. If there were a market incentive to show more and more digits, rest assured there would be a solution for it.

Indeed on the 50g there is the impressive LongFloat library that does it, but it is not from HP (holy passionate developers). Then there is also the impressive work from Claudio (newRPL, holy passionate developers x2) that could be reused in the Prime if they talk about it.

(Is LongFloat also usable on the 48 series? I am not sure, since the 48 series is a bit different in SysRPL.)


RE: Different trig algorithms in CAS and Home? - chromos - 01-05-2018 07:25 PM

(01-05-2018 05:52 PM)pier4r Wrote:  Because it is not so much a matter of CPU as of RAM/register size and software. [...]

Dieter already answered my 'trolling' question here. :-)


RE: Different trig algorithms in CAS and Home? - cyrille de brébisson - 01-08-2018 06:13 AM

Hello,

"I'm surprised to see this argument. I wouldn't say useless: some problems involve thousands of intermediate operations and are potentially unstable or ill-conditioned (matrices, actually, are a good example of where extra precision *IS* useful). While the result may not be overly sensitive to the initial argument, the errors accumulated over thousands of operations can and do have a big impact."

I think that what it boils down to is that if an iterative algorithm uses functions that have such sensitivities to initial conditions, then it should not be used in these cases. Or it should be designed with a keen understanding of the underlying math library to avoid issues.
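A classic example of that kind of design, offered here as my own illustration rather than anything Cyrille names, is compensated (Kahan) summation: instead of discarding the rounding error of each addition, it carries the error along, taming accumulation over millions of operations without any wider floats:

```python
import math
from itertools import repeat

def kahan_sum(values):
    """Compensated summation: track lost low-order bits in c."""
    total = 0.0
    c = 0.0                  # running compensation
    for v in values:
        y = v - c            # re-inject the error from the previous step
        t = total + y        # low-order bits of y may be lost here...
        c = (t - total) - y  # ...but they can be recovered exactly
        total = t
    return total

n = 10_000_000
naive = 0.0
for v in repeat(0.1, n):
    naive += v                               # plain accumulation
compensated = kahan_sum(repeat(0.1, n))
exact = math.fsum(repeat(0.1, n))            # correctly rounded reference

print(f"naive error:       {abs(naive - exact):.3e}")
print(f"compensated error: {abs(compensated - exact):.3e}")
```

The naive loop drifts measurably from the correctly rounded sum, while the compensated version stays within a few ulps, which is the kind of algorithmic care the math-library argument is about.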

Cyrille