HP Prime complex division problem in CAS
11-18-2020, 11:51 PM (This post was last modified: 11-20-2020 03:05 AM by lyuka.)
Post: #1
HP Prime complex division problem in CAS
Hi guys

I found a problem with CAS -- maybe a bug in the HP Prime -- CAS returns undef-undef*i for somewhat large complex-by-complex divisions, such as:

(1.e154 + 2.e154 * i) / (2.e154 + 1.e154 * i) → undef-undef * i

If this happens inside a program, the calculation finishes without any warning, on the physical calculator and the emulator alike.
Also, if you try to PRINT a variable that contains such a value -- undef-undef*i -- the calculator crashes.

[Image: complex-division-NG-in-CAS.png]
The behavior did not change with the CAS setting flag "Exact".

If complex division is implemented in CAS in the following way, it makes sense that it fails once the operands reach roughly the square root of MAXREAL (about 1e154, i.e. half the exponent range), because the squared magnitude of the denominator overflows.
Code:
// Textbook formula: the denominator |y|^2 overflows once |y| exceeds sqrt(MAXREAL).
typedef struct { double r, i; } complex;

complex cdiv(complex x, complex y) {
    complex z;
    double d = y.r * y.r + y.i * y.i;   // |y|^2
    z.r = (x.r * y.r + x.i * y.i) / d;
    z.i = (x.i * y.r - x.r * y.i) / d;
    return z;
}
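
For concreteness, here is a small sketch (plain C, assuming IEEE 754 doubles; an illustration, not the Prime's actual source) showing that with the operands above the squared magnitude already overflows to infinity, even though the quotient 0.8+0.6*i is perfectly representable:
Code:
#include <stdio.h>

int main(void) {
    double xr = 1e154, xi = 2e154, yr = 2e154, yi = 1e154;
    double d = yr * yr + yi * yi;                        /* |y|^2 = 5e308 > MAXREAL -> +inf */
    printf("|y|^2 = %g\n", d);                           /* prints inf */
    printf("re(x/y) = %g\n", (xr * yr + xi * yi) / d);   /* inf/inf -> nan */
    return 0;
}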

On the other hand, in Home view it is calculated correctly up to near MAXREAL.
[Image: complex-division-OK-at-Home.png]

-- HP Prime Graphics Calculator Third Edition: December 2017, p.533 --
MAXREAL
Returns the maximum real number that the HP Prime calculator is capable of representing in Home and CAS views.
In the CAS, MAXREAL = 1.79769313486*10^308. In Home view, MAXREAL = 9.99999999999E499.
--

Lyuka
11-19-2020, 07:08 AM
Post: #2
RE: HP Prime complex division problem in CAS
I wrote cdiv() as a workaround for the dynamic range problem of complex division in CAS.
It works fine in CAS as follows.

cdiv(1e307+2e307*i, 2e307+1e307*i) → 0.8+0.6*i

Code:
#CAS
// cdiv -- complex division; returns x / y.
// This is a workaround for the dynamic range problem of complex division in CAS.
// Conforms to HP Prime software version 2.1.14433 (2020 01 21).
cdiv(x, y):=
BEGIN
  LOCAL u, v;
  IF abs(RE(y)) >= abs(IM(y)) THEN
    u := IM(y) / RE(y); 
    v := 1 / (RE(y) + IM(y) * u);
    return (RE(x) + IM(x) * u) * v + (IM(x) - RE(x) * u) * v * i;
  ELSE
    u := RE(y) / IM(y); 
    v := 1 / (RE(y) * u  + IM(y));
    return (RE(x) * u + IM(x)) * v + (IM(x) * u - RE(x)) * v * i;
  END;
END;
#END
11-19-2020, 07:56 PM (This post was last modified: 11-19-2020 10:24 PM by Albert Chan.)
Post: #3
RE: HP Prime complex division problem in CAS
XCas> z1 := 1e+154+2e+154*i
XCas> z2 := 2e+154+1e+154*i
XCas> z1 / z2                             → undef-undef*i
XCas> z1 / sign(z2) / abs(z2)       → 0.8+0.6*i

Tried the same way with the HP Prime emulator, but it failed:

CAS> z1 := 1e+154+2e+154*i
CAS> z2 := 2e+154+1e+154*i
CAS> abs(z2)       → +inf

To work around the bad abs(z), I added cabs(z):
Note: maxnorm(z) = |re(z)| + |im(z)|

CAS> cabs(z) := cabs2(z, maxnorm(z))
CAS> cabs2(z,s) := abs(z/s) * s
CAS> cdiv(x,y) := cdiv2(x, y, 1./maxnorm(y))
CAS> cdiv2(x,y,s) := (x*s) / (y*s)

CAS> cabs(z2)            → 2.2360679775e154
CAS> cdiv(z1, z2)       → 0.8+0.6*i
11-20-2020, 12:05 AM
Post: #4
RE: HP Prime complex division problem in CAS
Hi Albert

This calculator looks pretty, but it's not a good old Hewlett Packard product, it's like someone in the beginner's programming course made it in practice.

Anyway, having a workaround is better than nothing. Here it is rewritten as a CAS program.
Code:
#CAS
// This is a workaround for the dynamic range problem of complex division and abs() in CAS.
// Conforms to HP Prime software version 2.1.14433 (2020 01 21).
// cdiv -- complex division; returns x / y.
cdiv(x, y):=
BEGIN
  LOCAL n;
  n := 1.0 / maxnorm(y);
  return (x * n) / (y * n);
END;
// cabs -- returns magnitude of z
cabs(z):=
BEGIN
  LOCAL n;
  n := maxnorm(z);
  return abs(z / n) * n;
END;
#END
11-20-2020, 12:44 AM (This post was last modified: 11-20-2020 12:46 AM by dah145.)
Post: #5
RE: HP Prime complex division problem in CAS
Seems to work for me this way.
CAS seems to be nitpicky with approximate inputs, i.e. any input with a decimal point.


Attached File(s) Thumbnail(s)
   
11-20-2020, 04:47 PM
Post: #6
RE: HP Prime complex division problem in CAS
(11-19-2020 07:56 PM)Albert Chan Wrote:  Note: maxnorm(z) = |re(z)| + |im(z)|

CAS> cabs(z) := cabs2(z, maxnorm(z))
CAS> cabs2(z,s) := abs(z/s) * s
CAS> cdiv(x,y) := cdiv2(x, y, 1./maxnorm(y))
CAS> cdiv2(x,y,s) := (x*s) / (y*s)

I used maxnorm wrong. From XCas help: maxnorm([x1,x2,..,xn]) = max(|x1|,..,|xn|).
Based on the documentation, the result below is wrong, and possibly a bug.

CAS> maxnorm(3+4*i)       → 7 ??? (5 if argument treated as [3+4i], 4 if treated as [3, 4i])

It still works, because the scaling factor cancels. But we should spell out what we mean:

CAS> cmax(z) := max(abs(re(z)), abs(im(z)))
CAS> cabs(z) := cabs2(z, cmax(z))

For cdiv(), the scaling factor might be too extreme.
It may be better to take a square root, to reduce the chance of overflow/underflow.
Note: only a *very* rough estimate of the inverse square root is needed.

CAS> cdiv(x,y) := cdiv2(x, y, sqrt(1./cmax(y)))

CAS> cdiv(1e154+2e154*i , 2e154+1e154*i)         → 0.8 + 0.6*i
CAS> cdiv(1e307+1e-307*i , 1e204+1e-204*i)       → 1e103 - 1e−305*i (*)

Without the square root, the last cdiv() returns 1e103, with no imaginary part.
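
To see why the square root helps, here is a small sketch (plain C, IEEE 754 doubles; the helper and its names are illustrative only, not the CAS implementation). With the full 1/cmax(y) scaling, the scaled parts im(x)*s and im(y)*s both underflow to zero, so the tiny imaginary part of the quotient is lost; with the square-root scaling, im(y)*s = 1e-306 survives and the cross term -re(x*s)*im(y*s) still produces the correct -1e-305:
Code:
#include <stdio.h>
#include <math.h>

/* Textbook complex division after pre-scaling both operands by s. */
static void scaled_div(double xr, double xi, double yr, double yi, double s) {
    xr *= s; xi *= s; yr *= s; yi *= s;
    double d = yr * yr + yi * yi;
    printf("s = %g  ->  %g %+g*i\n", s,
           (xr * yr + xi * yi) / d, (xi * yr - xr * yi) / d);
}

int main(void) {
    double xr = 1e307, xi = 1e-307, yr = 1e204, yi = 1e-204;
    double c = fmax(fabs(yr), fabs(yi));
    scaled_div(xr, xi, yr, yi, 1.0 / c);        /* prints 1e103 +0*i      */
    scaled_div(xr, xi, yr, yi, sqrt(1.0 / c));  /* prints 1e103 -1e-305*i */
    return 0;
}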

---

(*) Example taken from improved_cdiv.pdf, section 4, Robustness of Smith’s algorithm.
This is the improved Smith algorithm (the last line is the improvement, for the u = 0 case):

Code:
function cdiv(xr,xi, yr,yi)
    local flip = (math.abs(yr) < math.abs(yi))
    if flip then xr,xi,yr,yi = xi,xr,yi,yr end
    local u = yi / yr
    local v = 1/(yr + u*yi)
    local w = flip and -v or v
    if u ~= 0 then return (xr + xi*u)*v, (xi - xr*u)*w end
    return (xr + xi/yr*yi)*v, (xi - xr/yr*yi)*w
end

x/y = conj((conj(x)*i) / (conj(y)*i)) -- multiplying a conjugate by i just swaps the real and imaginary parts, which is what the flip does.

Smith's method picks the side with |u| ≤ 1, reducing the chance of overflow.
CAS> simplify(subst((x1+x2*i)*(1-u*i)/(y1+u*y2), u=y2/y1))       →\(\Large{x_1+x_2 i \over y_1+y_2 i}\)

lua> cdiv(1e154, 2e154, 2e154, 1e154)       -- u ~= 0
0.8            0.6000000000000001
lua> cdiv(1e307, 1e-307, 1e204, 1e-204)    -- u == 0
1e+103     -1e-305
11-20-2020, 04:55 PM
Post: #7
RE: HP Prime complex division problem in CAS
CAS is designed for exact computations, not approximate ones: z1/z2 is computed as z1*conj(z2)/norm(z2)^2, and that is why it overflows with large values. (By the way, I wonder where the 1e154 comes from; it seems impossible for a physical measurement.) Fortunately, it's fairly easy to improve.
In the meantime, you can replace z1/z2 by z1*inv(z2) to avoid overflow.
11-20-2020, 05:00 PM
Post: #8
RE: HP Prime complex division problem in CAS
As for maxnorm on complex numbers, this is intended. For exact data, computing a square root (L2 norm) is costly (or even impossible if the integer is too large to be factored), while |z| defined as |x|+|y| is also a norm (L1 norm) and much faster to compute (O(1)).
The Linf norm would work as well.
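
For reference, a small sketch (plain C; the three calls simply spell out the norms discussed above), evaluated for z = 3+4i:
Code:
#include <stdio.h>
#include <math.h>

int main(void) {
    double x = 3.0, y = 4.0;                                      /* z = 3 + 4i */
    printf("L2   sqrt(x^2+y^2) = %g\n", hypot(x, y));             /* 5 */
    printf("L1   |x|+|y|       = %g\n", fabs(x) + fabs(y));       /* 7 */
    printf("Linf max(|x|,|y|)  = %g\n", fmax(fabs(x), fabs(y)));  /* 4 */
    return 0;
}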
11-20-2020, 05:42 PM
Post: #9
RE: HP Prime complex division problem in CAS
Hi Albert

(11-20-2020 04:47 PM)Albert Chan Wrote:  (*) Example taken from improved_cdiv.pdf, section 4, Robustness of Smith’s algorithm.
This is improved Smith algorithm (last line is the improvement, when u=0)

This is interesting reading.
I found out that the algorithm I used in the previous post is called Smith's algorithm.
And it was published a few years before I was born :-)
11-20-2020, 05:47 PM
Post: #10
RE: HP Prime complex division problem in CAS
(11-20-2020 05:00 PM)parisse Wrote:  As for maxnorm on complex numbers, this is intended. For exact data computing a square root (L2 norm) is costly (or even impossible if the integer is too large to be factored), while |z| defined as|x|+|y| is also a norm (L1 norm) and much faster to compute (O(1)).

The definition |z| = |x|+|y| is inconsistent. This rule is *only* applied for a non-vector argument.

CAS> maxnorm(3+4i)      → 7 ???
CAS> maxnorm([3+4i])      → 5

If speed is what we are after, we could just use l1norm:

CAS> l1norm(3+4i)      → 7
CAS> l1norm([3+4i])      → 7
11-20-2020, 06:07 PM
Post: #11
RE: HP Prime complex division problem in CAS
Hi parisse

(11-20-2020 04:55 PM)parisse Wrote:  In the meantime, you can replace z1/z2 by z1*inv(z2) to avoid overflow.

z1*inv(z2) is much easier to use than cdiv(), thanks.


(11-20-2020 04:55 PM)parisse Wrote:  (by the way I wonder where the 1e154 comes from, it seems impossible for a physical measurement).

1e154 is slightly larger than the square root of the largest IEEE 754 64-bit value (sqrt(MAXREAL) ≈ 1.34e154), i.e. about half the exponent range.

Even when dealing with physical phenomena, a large dynamic range is required when performing very large sum-of-products operations.
When you do large-scale calculations you need reliable arithmetic; if you have doubts about the computer's calculations,
you cannot trust the meaning of the results.
Therefore, at least for the four basic arithmetic operations, every number the computer can represent should be handled correctly.

For example, when the sidelobe characteristics of a filter are important, you have to pay attention to degradation caused by limited calculation accuracy.
The figure below is a bit extreme, but it seems that 64-bit floating point is not enough to realize a filter with sidelobe levels beyond about -320 dB.

Difference in characteristics when the same window function is implemented in IEEE 754 64-bit and 80-bit floating point:
[Image: test-64vs80-hosoda11_s.jpg]

Lyuka
11-20-2020, 07:48 PM
Post: #12
RE: HP Prime complex division problem in CAS
In CAS, instead of the complex division z1 / z2, both z1 * inv(z2) and z1 * z2^-1 were OK.

(1.e154+2.e154*i)*(2.e154+1.e154*i)^-1 → 0.8+0.6*i
(1.e154+2.e154*i)*inv(2.e154+1.e154*i) → 0.8+0.6*i

However, the history display converts the inv() form to fractional notation.
As a result, you cannot tell from the history whether division or reciprocal multiplication was used, even though the results differ.
[Image: complex-inverse-multiplication-OK-in-CAS.png]
11-21-2020, 07:13 AM
Post: #13
RE: HP Prime complex division problem in CAS
I was mainly reacting to
Quote:This calculator looks pretty, but it's not a good old Hewlett Packard product, it's like someone in the beginner's programming course made it in practice.
When I program something for Giac, I care about exact computations, and then I make one or two tries with "normal range" approximate data. I do not make extensive tests with something that will (almost) never happen on a calculator (and that's probably the reason why nobody mentioned it in the last 7 years).
Now, of course it's better to have the best possible accuracy and I modified the source code of Giac accordingly.
Next time you find something wrong, I would appreciate if you could report the bug in a more friendly manner.
11-21-2020, 07:20 AM (This post was last modified: 11-21-2020 07:26 AM by parisse.)
Post: #14
RE: HP Prime complex division problem in CAS
(11-20-2020 05:47 PM)Albert Chan Wrote:  The definition |z| = |x|+|y| is inconsistent. This rule is *only* applied for a non-vector argument.

CAS> maxnorm(3+4i)      → 7 ???
CAS> maxnorm([3+4i])      → 5

If speed is what we are after, we could just use l1norm:

CAS> l1norm(3+4i)      → 7
CAS> l1norm([3+4i])      → 7

Indeed, linfnorm==maxnorm is inconsistent. It should return 4 for both. However, I don't want to make such a change, because it breaks my regression tests. You will have to live with this inconsistency.
11-22-2020, 02:59 PM (This post was last modified: 11-22-2020 04:40 PM by lyuka.)
Post: #15
RE: HP Prime complex division problem in CAS
I hope future firmware will at least pass the following simple division test.
Code:
#CAS
// from McLaren's difficult complex division test
cdiv_test():=
BEGIN
  LOCAL c, x, y, z;
  LOCAL g, k;
  PRINT();
  g := MAXREAL / 2;
  FOR k FROM 0.0 TO 2.0 STEP 0.5 DO
    x := 1 + i;
    y := 1 + i * k;
    c := x / y;
    z := (x * g) / (y * g); PRINT(k + ":Ref.-> "+c); 
    z := cdiv(x *g, y * g); PRINT(k+":cdiv-> "+z); 
    z := (x * g) * inv(y * g); PRINT(k+":*inv-> "+z); 
    z := (x * g) / (y * g); //PRINT(k+":#/# -> "+z); // stall
  END;
END;
#END
This is the execution result. The calculation result of (x * g) / (y * g) cannot be printed because the calculator terminates abnormally when trying to print undef-undef * i.
[Image: HP-Prime-CAS-cdiv-test.png]

Anyway, I ported Robert L. Smith and Michael Baudin's "Robust Complex Division".
Code:
#CAS
// This is a workaround for the dynamic range problem of complex division in CAS.
// Conforms to HP Prime software version 2.1.14433 (2020 01 21).
// An implementation by Takayuki HOSODA (aka Lyuka) of the algorithm in the following paper:
// "Improved Complex Division in Scilab", Michael Baudin, Robert L. Smith, Jun. 2011
// http://forge.scilab.org/index.php/p/compdiv/downloads/get/improved_cdiv_v0.3.pdf
// cdiv(x, y) returns complex division x / y.
cdiv(x, y):=
BEGIN
  LOCAL z;
  LOCAL r, t, AB, CD, B, S;
  LOCAL OV2, eps, UNBeps, Be;
  AB := fmax(fabs(RE(x)), fabs(IM(x)));
  CD := fmax(fabs(RE(y)), fabs(IM(y)));
  B := 2;
  S := 1;
  // Machine dependent constants (for HP Prime)
  OV2 :=  8.988465674311581E307;  // MAXREAL / 2; 
  eps := 7.105427357601002E-15;    // 1 / 2^47; 
  UNBeps := 6.263026125028041E-294; // MINREAL * B / eps;
  Be := B / (eps * eps);
  IF (AB >= OV2)   THEN x := x / 2;  S := S * 2;  END; // Scale down x
  IF (CD >= OV2)   THEN y := y / 2;  S := S / 2;  END; // Scale down y
  IF (AB < UNBeps) THEN x := x * Be; S := S / Be; END; // Scale up   x
  IF (CD < UNBeps) THEN y := y * Be; S := S * Be; END; // Scale up   y
  IF cabs(IM(y)) <= cabs(RE(y)) THEN
    r := IM(y) / RE(y); 
    t := 1 / (RE(y) + IM(y) * r);
    IF r == 0 THEN
      z :=  (RE(x) + IM(y) * (IM(x) / RE(y))) * t + (IM(x) - IM(y) * (RE(x) / RE(y))) * t * i;
    ELSE
      z :=  (RE(x) + IM(x) * r) * t + (IM(x) - RE(x) * r) * t * i;
    END;
  ELSE
    r := RE(y) / IM(y); 
    t := 1 / (RE(y) * r + IM(y));
    IF r == 0 THEN
      z :=   (RE(y) * (RE(x) / IM(y)) + IM(x)) * t + (RE(y) * (IM(x) / IM(y)) - RE(x)) * t * i;
    ELSE
      z :=   (RE(x) * r + IM(x)) * t + (IM(x) * r - RE(x)) * t * i;
    END;
  END;
  return S * z;
END;
//fmax(a, b) -- returns the larger value from a and b.
fmax(a, b):=
BEGIN
  IF a < b THEN
    return b;
  ELSE
    return a;
  END;
END;
//fabs(a) -- returns the absolute value of 'a'.
fabs(a):=
BEGIN
  IF a < 0 THEN
    return -a;
  ELSE
    return a;
  END;
END;
// This is a workaround for the dynamic range problem of complex division and abs() in CAS.
// Conforms to HP Prime software version 2.1.14433 (2020 01 21).
// cabs(z) -- returns the magnitude of the complex number z.
cabs(z):=
BEGIN
  LOCAL x, y, t, s;
  x := RE(z);
  y := IM(z);
  IF x < 0 THEN x := -x; END;
  IF y < 0 THEN y := -y; END;
  IF x == 0 THEN return y; END;
  IF y == 0 THEN return x; END;
  IF (y > x) THEN
    t := x; x := y; y := t;
  END;
  t := x - y;
  IF t > y THEN
    t := x / y;
    t := t + sqrt(1 + t * t);
  ELSE
    t := t / y;
    s := t * (t + 2);
    t := t + s / (sqrt(2) + sqrt(2 + s));
    t := t + sqrt(2) + 1;
  END;
  return(x + y / t);
END;
#END

Have fun.
Lyuka
11-22-2020, 06:35 PM
Post: #16
RE: HP Prime complex division problem in CAS
Well, I wonder how slow your programs are, even native. They are certainly interesting in some rare situations but that does not mean these algorithms should be applied for "normal" entries.
11-24-2020, 03:27 AM
Post: #17
RE: HP Prime complex division problem in CAS
      
Hi, all:

Some well-meant comments on the various posts above, plus a similar real-life case that happened to me about a month ago. If interested, read on (all bolds and italics are mine) ...

lyuka Wrote:I found a problem with CAS -- Maybe a bug in HP PRIME -- It is that CAS returns undef-undef*i in somewhat large complex-to-complex divisions, such as:

      (1.e154 + 2.e154 * i) / (2.e154 + 1.e154 * i) → undef-undef * i
[...]
I wrote cdiv() as a workaround for the dynamic range problem of complex division in CAS. It works fine in CAS as follows.

      cdiv(1e307+2e307*i, 2e307+1e307*i) → 0.8+0.6*i


Thomas Okken recently found essentially the same problem when timing my PZER polynomial rootfinder RPN subprogram in various versions of his fantastic Free42 emulator in assorted devices. The checks came out flawless when using Free42 Decimal but not so with Free42 Binary (which has the same exponent range as the Prime). Circa Oct 12 Thomas told me:

"Note that the results below include a very fishy number for dimension = 100 in the binary version. [...] PZER running on Free42 Binary on MacBook Air:

\[\begin{array}{|c|c|c|c|c|c|}
\hline \text{Degree} & \text{Time (sec.)} & \text{Degree} & \text{Time (sec.)} & \text{Degree} & \text{Time (sec.)} \\\hline
3 & 0.0002 & 15 & 0.0048 & 50 & 0.1025 \\\hline
5 & 0.0004 & 20 & 0.0107 & 70 & 0.2186 \\\hline
10 & 0.0014 & 30 & 0.0207 & 100 & 0.0294 \\\hline
\end{array}\]

Indeed, the timing for computing all 100 complex roots of the degree-100 polynomial (0.0294") can't be right, as it's more than 7x faster than the time for the 70-degree polynomial, which is impossible. Thomas did investigate what went wrong (he even created a new feature for Free42 to help him, Stack Trace Mode) and shortly told me this:

"I found the issue: complex division returns NaN when its intermediaries overflow. You can see this in the decimal version, too, for example:

            1e3200 ENTER COMPLEX ENTER ÷
returns
            <NaN> i<NaN>

This affects the test case in the division on line 76 in the PZER program. Using the "stack trace" mode (not yet released), you can see this easily:

            75 RCL× ST L

      T= 1.77431425335ᴇ159
      Z=1.47064417051ᴇ158 -i2.99768090909ᴇ160
      Y= 0.099
      X=-2.91416944651ᴇ158 -i1.75021919599ᴇ159

            76 STO÷ ST Z

      T= 1.77431425335ᴇ159
      Z=<Not a Number> i<Not a Number>
      Y= 0.099
      X=-2.91416944651ᴇ158 -i1.75021919599ᴇ159


That's a bug. I'll fix it in the next release."



I then told Thomas:

"The equivalent operation in the HP-71B (12-digit) version of PZER returns 16.6518259161 + 2.85660713803 i for that division. Those huge E158-E160 values surely do appear because the polynomial's 100th degree means that the guesses are raised to the 100th power while evaluating it , so if the current guess is >1 in absolute value then its 100th power is likely to be very large. This is normal and expected."



Thomas was true to his word and promptly fixed the problem. He then told me:

"I found a more robust complex division algorithm, which appears to be up to the task [...] it gets the dimension-100 results right. Here are the timings from release builds with the new complex division:
            ...
      50       0.1033
      70       0.2233
      100      0.7828


That looks more like the measurements with the old division [...]"



Given all the above, these are my reasoned conclusions:

1) Having something as basic as a complex division return NaN ("undef") or wrong results for arguments in range, when the result is also in range, is an unacceptable bug and should be fixed because, if nothing else, it has the potential to badly affect everything else, either ruining the results or your confidence in their correctness.

It's quite similar to the case of the COMB (Combinations) function when the two parameters and the result are all in range yet an inadequate implementation results in intermediate Overflows.

2) If left unfixed, and/or sticking to an inadequate implementation, many other functionalities can be badly affected, resulting in vastly wrong results or even crashing the programs or applications which use such faulty operations. Also, a robust implementation does not necessarily result in significantly slower code if done properly: Thomas's fixed code has PZER returning the correct roots in essentially the same time as before.

Also, in the PZER example discussed above, Thomas found that not only was the timing seriously amiss, but the resulting array which should hold all 100 computed complex roots came back with every element equal to NaN + iNaN, i.e., not a single root could be computed correctly. Once the robust complex division was in place, all 200 components came out accurate to the full 12 digits.

3) Thinking that detecting such a problem through extensive tests and/or correcting such a problem is unwarranted because "it will (almost) never happen on a calculator" and anyway such values "seem impossible for a physical measurement" or the fixed implementation will be much slower is wrong on several counts:

      First, modern calculators/emulators such as the Prime or Free42 are very fast, capable computing devices, most especially for math and engineering calculations, as can be seen above where my RPN subprogram PZER running on Thomas' Free42 manages to compute all 30 complex roots of a 30-degree complex polynomial in 0.02", two hundredths of a second.

      Second, that kind of "extreme" values can perfectly and do actually happen in all kinds of engineering/statistics/control/etc. computations, which not uncommonly may require, say, having to compute the eigenvalues of sizable matrices. This in turn can result in having to find all roots of a large degree (say 100-degree or more) polynomial, such as the one which failed in the example above. That computation can be reasonably accomplished even on the go in a powerful calculator/emulator such as the Prime or Free42, my PZER does it in 0.78 seconds. Other real-life cases are aplenty.

      Third, the ground-breaking HP engineers of the golden past were keenly aware of the need to consider special cases in everything having to do with advanced math computations, be it finding the roots of polynomials or computing troublesome integrals (most are). For instance, in the issue of HEWLETT-PACKARD JOURNAL, JULY 1984, p.35, they say, discussing the PROOT polynomial rootfinder keyword:

"[...] ZERPOL operated on a finite and [...] relatively small double-precision exponent range [and] could occasionally exhibit overflow/underflow problems that naturally arise during polynomial manipulations. A great deal of ZERPOL coding is dedicated to clever minimization of these problems.

One of the design achievements of the PROOT functions is virtual elimination of these overflow/underflow problems. [...] PROOT operates internally on separated floating-point numbers containing a 15-digit mantissa and an exponent in the range from -50,000 to 50,000 [...] it is easy to see why PROOT is not plagued with overflow/underflow problems [...] Even with this dynamic exponent range , PROOT also contains a scaling procedure designed to eliminate any overflow/underflow problems [...] However, this situation is considered impossible to produce by any user-supplied polynomial. In fact, the rescaling procedure has yet to be used during internal testing of the PROOT function.[...]"



So it's plain that they took every precaution to avoid problems even in the presence of extreme values or conditions, even for situations deemed impossible to produce. Just in case.
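
The "separated floating-point" representation mentioned in that excerpt is easy to picture: keep the mantissa as an ordinary machine number and carry the exponent separately in a wide integer, so intermediate products simply cannot overflow. A minimal sketch of the idea (plain C, purely illustrative, not the actual ZERPOL/PROOT code):
Code:
#include <stdio.h>
#include <math.h>

/* A value stored as m * 2^e, with m kept in [0.5, 1) by frexp(). */
typedef struct { double m; long e; } sepfloat;

static sepfloat sep_make(double x) {
    sepfloat s; int k;
    s.m = frexp(x, &k);
    s.e = k;
    return s;
}

static sepfloat sep_mul(sepfloat a, sepfloat b) {
    sepfloat s; int k;
    s.m = frexp(a.m * b.m, &k);  /* mantissa product stays in (0.25, 1): no overflow */
    s.e = a.e + b.e + k;
    return s;
}

int main(void) {
    /* (1e154)^4 = 1e616 overflows a double, but not this representation. */
    sepfloat x = sep_make(1e154);
    sepfloat p = sep_mul(sep_mul(sep_mul(x, x), x), x);
    printf("mantissa %.15g, exponent 2^%ld (about 1e%ld)\n",
           p.m, p.e, (long)(p.e * 0.30103));
    return 0;
}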

They also demonstrated the accuracy and reliability attained by their clever measures by solving the exact 100-degree polynomial which I used in the example above, where my PZER couldn't find the roots because of the faulty complex division which Thomas Okken discovered. They say:


"PROOT's performance with isolated zeros is illustrated by the 100-degree polynomial [...] Of the 200 real and imaginary components of the calculated roots for this polynomial, about half were found to 12-digit accuracy. Of the rest, the error did not exceed a few counts in the twelfth (or last) resulting digit."



Regards.
V.

  
All My Articles & other Materials here:  Valentin Albillo's HP Collection
 
11-24-2020, 05:56 AM
Post: #18
RE: HP Prime complex division problem in CAS
I couldn’t agree more, Valentin!
Werner

41CV†,42S,48GX,49G,DM42,DM41X,17BII,15CE,DM15L,12C,16CE
11-24-2020, 07:52 PM
Post: #19
RE: HP Prime complex division problem in CAS
Well, can you give more details about these polynomials? I'd like to test them with proot.
My own experience is that above degree 200 or 300 or so, polynomials (with random coefficients) require advanced algorithms (like those implemented in PARI-GP) or multi-precision floats; matrix algorithms no longer work in double precision. This can happen at much lower degrees for some badly conditioned polynomials, like pcoeff(range(20)).
11-27-2020, 08:30 AM
Post: #20
RE: HP Prime complex division problem in CAS
I found a relatively simple workaround for complex division in CAS:
apply a scaling process to Smith's method (1962).

Code:
#CAS
// cdiv(x, y) returns complex division x / y. 
// written by Takayuki HOSODA (aka Lyuka) Rev. 0.3.2 (Nov. 25 2020)
// This is a workaround for the dynamic range problem of complex division in CAS.
// Conforms to HP Prime software version 2.1.14433 (2020 01 21).
// This cdiv() uses Smith's method (1962) with the scaling process from the following paper:
// "Improved Complex Division in Scilab", Michael Baudin, Robert L. Smith, Jun. 2011
// http://forge.scilab.org/index.php/p/compdiv/downloads/get/improved_cdiv_v0.3.pdf
cdiv(x, y):=
BEGIN
  LOCAL z;
  LOCAL r, t, AB, CD, B, S;
  LOCAL OV2, eps, UNBeps, Be;
  AB := MAX(abs(RE(x)), abs(IM(x)));
  CD := MAX(abs(RE(y)), abs(IM(y)));
  B := 2;
  S := 1;
  // Machine dependent constants (for HP Prime)
  OV2 :=  8.988465674311581E307;  // MAXREAL / 2; 
  eps := 7.105427357601002E-15;    // 1 / 2^47; 
  UNBeps := 6.263026125028041E-294; // MINREAL * B / eps;
  Be := B / (eps * eps);
  IF (AB >= OV2)   THEN x := x / 2;  S := S * 2;  END; // Scale down x
  IF (CD >= OV2)   THEN y := y / 2;  S := S / 2;  END; // Scale down y
  IF (AB < UNBeps) THEN x := x * Be; S := S / Be; END; // Scale up   x
  IF (CD < UNBeps) THEN y := y * Be; S := S * Be; END; // Scale up   y
  IF abs(RE(y)) < abs(IM(y)) THEN
    r := RE(y) / IM(y); 
    t := (RE(y) * r  + IM(y));
    z := (RE(x) * r + IM(x)) / t  + (IM(x) * r - RE(x)) / t * i;
  ELSE
    r := IM(y) / RE(y); 
    t := (RE(y) + IM(y) * r);
    z := (RE(x) + IM(x) * r) / t  + (IM(x) - RE(x) * r) / t * i;
  END;
  return S * z;
END;
#END

The cdiv() shown above passes McLaren's difficult complex division test shown below.

Code:
#CAS
// from McLaren's difficult complex division test
cdiv_test():=
BEGIN
  LOCAL c, x, y, z;
  LOCAL g, k;
  PRINT();
  g := MAXREAL / 2;
  FOR k FROM 0.0 TO 2.0 STEP 0.5 DO
    x := 1 + i;
    y := 1 + i * k;
    c := x / y;
    z := (x * g) / (y * g); PRINT(k + ":Ref.-> "+c); 
    z := cdiv(x *g, y * g); PRINT(k+":cdiv-> "+z); 
  END;
END;
#END

For more information about complex division and its implementation in C99, see
"The 'compdiv_robust' algorithm rewritten for C99" and a comparison with Smith's method (1962) using the scaling process used in it.
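
For readers without the links at hand, here is a minimal C99 sketch of Smith's (1962) method itself, the same branch-on-magnitude idea used in the CAS code above (without the pre-scaling step; the names are illustrative):
Code:
#include <complex.h>
#include <math.h>

/* Smith's (1962) complex division: form the ratio whose magnitude is <= 1,
   so intermediates stay in range far longer than with the textbook formula. */
static double complex smith_div(double complex x, double complex y)
{
    double a = creal(x), b = cimag(x);
    double c = creal(y), d = cimag(y);
    if (fabs(c) >= fabs(d)) {
        double r = d / c;                  /* |r| <= 1 */
        double t = 1.0 / (c + d * r);
        return (a + b * r) * t + (b - a * r) * t * I;
    } else {
        double r = c / d;                  /* |r| < 1 */
        double t = 1.0 / (c * r + d);
        return (a * r + b) * t + (b * r - a) * t * I;
    }
}

With this, smith_div(1e154 + 2e154*I, 2e154 + 1e154*I) returns 0.8 + 0.6i, where the textbook formula already returns NaN.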