HP Prime complex division problem in CAS
11-24-2020, 03:27 AM
Post: #17
RE: HP Prime complex division problem in CAS
      
Hi, all:

Some well-meant comments on the various posts above, plus a similar real-life case that happened to me about a month ago. If interested, read on (all bolds and italics are mine) ...

lyuka Wrote: I found a problem with CAS -- Maybe a bug in HP PRIME -- It is that CAS returns undef-undef*i in somewhat large complex-to-complex divisions, such as:

      (1.e154 + 2.e154 * i) / (2.e154 + 1.e154 * i) → undef-undef * i
[...]
I wrote cdiv() as a workaround for the dynamic range problem of complex division in CAS. It works fine in CAS as follows.

      cdiv(1e307+2e307*i, 2e307+1e307*i) → 0.8+0.6*i
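
To see why such divisions fail, assume (as the symptoms strongly suggest) that the division is done with the textbook formula on IEEE binary doubles, whose largest finite value is about \(1.8 \times 10^{308}\):

\[ \frac{a+bi}{c+di} = \frac{(ac+bd)+(bc-ad)\,i}{c^2+d^2} \]

For lyuka's first example the denominator alone is \((2 \times 10^{154})^2 + (10^{154})^2 = 5 \times 10^{308}\), which overflows to \(\infty\) (as do the numerator's products), so the quotient degenerates to \(\infty/\infty\), i.e. NaN ("undef"), even though the true result, \(0.8 + 0.6\,i\), is perfectly representable.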


Thomas Okken recently found essentially the same problem when timing my PZER polynomial rootfinder RPN subprogram in various versions of his fantastic Free42 emulator on assorted devices. The checks came out flawless when using Free42 Decimal, but not so with Free42 Binary (which has the same exponent range as the Prime). Circa Oct 12 Thomas told me:

"Note that the results below include a very fishy number for dimension = 100 in the binary version. [...] PZER running on Free42 Binary on MacBook Air:

\begin{array}{|c|c|c|c|c|c|}
\hline \text{Degree} & \text{Time (sec.)} & \text{Degree} & \text{Time (sec.)} & \text{Degree} & \text{Time (sec.)} \\\hline
3 & 0.0002 & 15 & 0.0048 & 50 & 0.1025 \\\hline
5 & 0.0004 & 20 & 0.0107 & 70 & 0.2186 \\\hline
10 & 0.0014 & 30 & 0.0207 & 100 & 0.0294 \\\hline
\end{array}

Indeed, the timing for computing all 100 complex roots of the degree-100 polynomial (0.0294") can't be right, as it's more than 7x faster than the time for the 70-degree polynomial, which is impossible. Thomas investigated what went wrong (he even created a new feature for Free42 to help him, Stack Trace Mode) and shortly told me this:

"I found the issue: complex division returns NaN when its intermediaries overflow. You can see this in the decimal version, too, for example:

            1e3200 ENTER COMPLEX ENTER ÷
returns
            <NaN> i<NaN>

This affects the test case in the division on line 76 in the PZER program. Using the "stack trace" mode (not yet released), you can see this easily:

            75 RCL× ST L

      T= 1.77431425335ᴇ159
      Z=1.47064417051ᴇ158 -i2.99768090909ᴇ160
      Y= 0.099
      X=-2.91416944651ᴇ158 -i1.75021919599ᴇ159

            76 STO÷ ST Z

      T= 1.77431425335ᴇ159
      Z=<Not a Number> i<Not a Number>
      Y= 0.099
      X=-2.91416944651ᴇ158 -i1.75021919599ᴇ159


That's a bug. I'll fix it in the next release."
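
The decimal version fails the same way, just much further out: assuming Free42 Decimal's 34-digit decimal128 format, whose largest finite value is just under \(10^{6145}\), the textbook denominator squares the components of \(10^{3200} + 10^{3200}\,i\), and \((10^{3200})^2 = 10^{6400}\) overflows, so the quotient again degenerates to \(\infty/\infty = \) NaN.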



I then told Thomas:

"The equivalent operation in the HP-71B (12-digit) version of PZER returns 16.6518259161 + 2.85660713803 i for that division. Those huge E158-E160 values surely do appear because the polynomial's 100th degree means that the guesses are raised to the 100th power while evaluating it , so if the current guess is >1 in absolute value then its 100th power is likely to be very large. This is normal and expected."



Thomas was true to his word and promptly fixed the problem. He then told me:

"I found a more robust complex division algorithm, which appears to be up to the task [...] it gets the dimension-100 results right. Here are the timings from release builds with the new complex division:
            ...
      50       0.1033
      70       0.2233
      100      0.7828


That looks more like the measurements with the old division [...]"
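
Thomas doesn't quote the algorithm he settled on, but the classic robust approach is Smith's (1962) scaling division, which never forms \(c^2+d^2\) directly. A minimal C sketch of the idea (illustrative only, not necessarily Free42's actual code):

      #include <math.h>

      /* Sketch of Smith's (1962) scaling division for (a + b*i)/(c + d*i).
         Dividing through by the larger of |c| and |d| keeps every
         intermediary in range whenever the operands and result are. */
      static void cdiv_smith(double a, double b, double c, double d,
                             double *re, double *im) {
          if (fabs(c) >= fabs(d)) {
              double r = d / c;              /* |r| <= 1, cannot overflow */
              double t = 1.0 / (c + d * r);  /* scaled denominator        */
              *re = (a + b * r) * t;
              *im = (b - a * r) * t;
          } else {
              double r = c / d;
              double t = 1.0 / (c * r + d);
              *re = (a * r + b) * t;
              *im = (b * r - a) * t;
          }
      }

On lyuka's operands (a, b, c, d) = (1e154, 2e154, 2e154, 1e154) this takes the first branch with r = 0.5, every intermediary stays near 1e154, and the result comes out as 0.8 + 0.6i.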



Given all the above, these are my reasoned conclusions:

1) Having something as basic as a complex division return NaN ("undef") or wrong results for arguments in range, when the result is also in range, is an unacceptable bug and should be fixed because, if nothing else, it has the potential to badly affect everything else, either ruining the results or your confidence in their correctness.

It's quite similar to the case of the COMB (Combinations) function, where the two parameters and the result are all in range yet an inadequate implementation produces intermediate overflows.
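
The usual cure there is the same in spirit as for the division: rearrange the computation so the intermediaries stay near the final result instead of passing through full factorials. A hedged sketch in C (actual calculator implementations may differ):

      /* Overflow-avoiding nCr: interleave multiplies and divides
         instead of computing n!/(k!(n-k)!), whose factorials
         overflow long before the result does. Illustrative only. */
      double comb(double n, double k) {
          if (k > n - k)
              k = n - k;                /* symmetry: C(n,k) = C(n,n-k) */
          double result = 1.0;
          for (double i = 1.0; i <= k; i += 1.0)
              result = result * (n - k + i) / i;  /* stays near C(n,k) */
          return result;
      }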

2) If left unfixed, or if an inadequate implementation is kept, many other functionalities can be badly affected, resulting in vastly wrong results or even crashes in the programs or applications which use such faulty operations. Also, a robust implementation doesn't necessarily result in significantly slower code if done properly: Thomas' fixed code has PZER returning the correct roots in essentially the same time as before.

Also, in the PZER example discussed above, Thomas found that not only was the timing seriously amiss, but the resulting array which should hold all 100 computed complex roots came back with all 100 elements equal to NaN + iNaN, i.e., not a single root could be computed correctly. Once the robust complex division was in place, all 200 components came out accurate to the full 12 digits.

3) Thinking that detecting such a problem through extensive tests, and/or correcting it, is unwarranted because "it will (almost) never happen on a calculator", or because such values "seem impossible for a physical measurement", or because the fixed implementation would be much slower, is wrong on several counts:

      First, modern calculators/emulators such as the Prime or Free42 are very fast, capable computing devices, most especially for math and engineering calculations, as can be seen above where my RPN subprogram PZER running on Thomas' Free42 manages to compute all 30 complex roots of a 30-degree complex polynomial in 0.02", two hundredths of a second.

      Second, that kind of "extreme" value can perfectly well occur, and actually does, in all kinds of engineering/statistics/control/etc. computations, which not uncommonly require, say, computing the eigenvalues of sizable matrices. This in turn can mean finding all the roots of a large-degree polynomial (say 100-degree or more), such as the one which failed in the example above. That computation can reasonably be accomplished even on the go with a powerful calculator/emulator such as the Prime or Free42: my PZER does it in 0.78 seconds. Other real-life cases abound.

      Third, the ground-breaking HP engineers of the golden past were keenly aware of the need to consider special cases in everything having to do with advanced math computations, be it finding the roots of polynomials or computing troublesome integrals (most are). For instance, in the HEWLETT-PACKARD JOURNAL, July 1984, p. 35, discussing the PROOT polynomial rootfinder keyword, they say:

"[...] ZERPOL operated on a finite and [...] relatively small double-precision exponent range [and] could occasionally exhibit overflow/underflow problems that naturally arise during polynomial manipulations. A great deal of ZERPOL coding is dedicated to clever minimization of these problems.

One of the design achievements of the PROOT functions is virtual elimination of these overflow/underflow problems. [...] PROOT operates internally on separated floating-point numbers containing a 15-digit mantissa and an exponent in the range from -50,000 to 50,000 [...] it is easy to see why PROOT is not plagued with overflow/underflow problems [...] Even with this dynamic exponent range, PROOT also contains a scaling procedure designed to eliminate any overflow/underflow problems [...] However, this situation is considered impossible to produce by any user-supplied polynomial. In fact, the rescaling procedure has yet to be used during internal testing of the PROOT function. [...]"



So it's plain that they took every precaution to avoid problems even in the presence of extreme values or conditions, even for situations deemed impossible to produce. Just in case.
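
The "separated floating-point numbers" mentioned in the quote amount to carrying the exponent as a separate wide integer, so long products can't run out of exponent range. A purely illustrative C sketch of the idea (HP's actual internal PROOT format is decimal and differs in detail):

      #include <math.h>
      #include <stdint.h>

      /* A "separated" value: normalized mantissa plus a wide software
         exponent (base 2 here), so x = m * 2^e. Illustrative only. */
      typedef struct {
          double  m;   /* mantissa in [0.5, 1), or 0 */
          int32_t e;   /* software exponent */
      } sep_t;

      static sep_t sep_make(double x, int32_t e) {
          sep_t s;
          int k;
          s.m = frexp(x, &k);   /* splits x = m * 2^k, m in [0.5, 1) */
          s.e = e + k;
          return s;
      }

      static sep_t sep_mul(sep_t a, sep_t b) {
          /* mantissas multiply safely; exponents just add as integers */
          return sep_make(a.m * b.m, a.e + b.e);
      }

With such a representation, even a 100th power of a huge value merely produces a software exponent of a few thousand, nowhere near the integer limit.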

They also demonstrated the accuracy and reliability attained by their clever measures by solving the exact same 100-degree polynomial which I used in the example above, where my PZER couldn't find the roots because of the faulty complex division which Thomas Okken discovered. They say:


"PROOT's performance with isolated zeros is illustrated by the 100-degree polynomial [...] Of the 200 real and imaginary components of the calculated roots for this polynomial, about half were found to 12-digit accuracy. Of the rest, the error did not exceed a few counts in the twelfth (or last) resulting digit."



Regards.
V.

  
All My Articles & other Materials here:  Valentin Albillo's HP Collection
 