On Convergence Rates of Root-Seeking Methods
03-12-2018, 11:18 PM
Post: #41
(03-12-2018 04:13 PM)Claudio L. Wrote:  Brent seems like a powerhouse combo of other methods. If I implement Brent, does it make any sense to offer the user a choice to use bisection, etc.?

I don't think bisection would make sense; Brent's method has the same guaranteed convergence when the root is bracketed. Newton's method might, especially if you pass in both the function and its derivative.

The GNU Scientific Library provides six methods: three bracket-based and three derivative-based. They include Steffensen's method, which requires the derivative but converges faster than Newton's. Interestingly, no higher-order methods are included.
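
To make the two families concrete, here is a tiny Python sketch using SciPy's solvers purely as stand-ins for the GSL ones (the test function and bracket are arbitrary): a bracket-based solver needs an interval containing a sign change, while a derivative-based one needs a starting guess and f'.

Code:
from scipy.optimize import brentq, newton

f = lambda x: x**3 - 2*x - 5          # arbitrary test function
fprime = lambda x: 3*x**2 - 2

# Bracket-based: guaranteed to converge once [2, 3] brackets a sign change
root_brent = brentq(f, 2.0, 3.0)

# Derivative-based: usually faster near the root, but needs a decent starting point
root_newton = newton(f, x0=2.0, fprime=fprime)

print(root_brent, root_newton)        # both ~2.0945514815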


Quote:It will probably end up being the default method, unless complex mode is enabled; then something like Muller seems to be the only choice?

The GNU Scientific Library provides two methods, one using the derivative and one without. I've not looked into this, so can't comment further.


Pauli
03-13-2018, 12:43 AM
Post: #42
(03-13-2018 12:05 AM)Mike (Stgt) Wrote:  The HP-19BII was introduced in 1990, so it is not a museum piece. Production of the HP-48SX also started in 1990, yet it got a detailed description. Similar with the HP-42S (introduced in 1988) and the HP-19B (introduced in 1988).

When I wrote 'not mentioned' I had 'not described' in mind.

Ciao.....Mike

Indeed, many "later" models seem to have never gotten the full treatment they deserve; the 19BII is just such a machine IMHO.

I'm not sure if Dave burned out adding new descriptions as new models appeared; at the time of the 19B/19BII, lots of new machines were being introduced fairly often, so I can easily see how it may have seemed like a lot of work for machines that are not so different from their predecessors (in this case, only adding RPN), though the years from then to now would seem to be enough time to catch up.

As for the differing levels of attention given to the 48SX vs. the 19BII, there is no doubt Dave put his effort into the machine of interest to perhaps 99.99% of the membership here, and almost none into the 19BII, which interests maybe 5%; that is arguably sensible.

Of course, since machines like the 19BII were truly 'not described', it has become something of a self-fulfilling prophecy; folks that come here looking for info on the 19BII find none, conclude it's not of interest here, and move on to other places.

--Bob Prosperi
03-14-2018, 08:05 PM
Post: #43
(03-13-2018 11:22 PM)Mike (Stgt) Wrote:  BTW, to compute 7/5*inverse(sqrt(1-1/50)) to 50'000 digits the runtime is 0.81 seconds, about the same as for sqrt(2), but the delta between this and the previous sqrt takes ~10% longer than the root itself.
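
Side note: the quoted expression is algebraically identical to \(\sqrt 2\), which is why its runtime matches that of sqrt(2):
\[\frac{7}{5}\cdot\frac{1}{\sqrt{1-\frac{1}{50}}}=\frac{7}{5}\sqrt{\frac{50}{49}}=\frac{\sqrt{50}}{5}=\sqrt{2}\]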

I've used this one for quick calculations:
http://preccalc.sourceforge.net

It has some limited programmability. I always check newRPL's output against this one and Wolfram Alpha to make sure.
03-14-2018, 08:32 PM
Post: #44
(03-12-2018 11:18 PM)Paul Dale Wrote:  I don't think bisection would make sense; Brent's method has the same guaranteed convergence when the root is bracketed. Newton's method might, especially if you pass in both the function and its derivative.

The GNU Scientific Library provides six methods: three bracket-based and three derivative-based. They include Steffensen's method, which requires the derivative but converges faster than Newton's. Interestingly, no higher-order methods are included.

Thanks for the feedback. If GSL provides Brent, bisection and regula falsi, I should probably provide at least those three to the user, even if it doesn't make sense, just for fun.
For the non-bracketed ones, secant, Newton and Steffensen (I need to research this one, it's the first time I've heard of it) seem fairly vanilla. Perhaps I'll add some of Namir's experiments with Ostrowski, Halley, etc. just to keep it interesting.
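
For reference, a minimal sketch of the classical Steffensen iteration, the derivative-free form that replaces f'(x) with the divided difference (f(x+f(x))-f(x))/f(x); note that GSL's "steffenson" solver is a different, derivative-based variant. The test function and starting point below are just illustrative:

Code:
def steffensen(f, x0, tol=1e-12, max_iter=100):
    """Classical Steffensen iteration: quadratic convergence without a derivative."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:
            return x
        g = (f(x + fx) - fx) / fx      # divided difference approximating f'(x)
        x_new = x - fx / g
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x                           # may not have converged; it fails easily far from a root

# Example: root of x**2 - 2 near 1.5 converges to sqrt(2)
print(steffensen(lambda x: x*x - 2, 1.5))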

(03-12-2018 11:18 PM)Paul Dale Wrote:  The GNU Scientific Library provides two methods, one using the derivative and one without. I've not looked into this, so can't comment further.

I couldn't find any reference to this in the GSL docs (?)

03-14-2018, 09:57 PM (This post was last modified: 03-16-2018 08:27 AM by emece67.)
Post: #45
(03-14-2018 08:32 PM)Claudio L. Wrote:  For the non-bracketed ones, secant, Newton and Steffensen (I need to research this one, it's the first time I've heard of it) seem fairly vanilla. Perhaps I'll add some of Namir's experiments with Ostrowski, Halley, etc. just to keep it interesting.

For non-bracketed methods, you may find this paper interesting. It compares up to 13 different non-bracketed methods (not the high-order, high-complexity ones; the highest-order method in the comparison is Ostrowski-Traub, 4th order), with Newton, Halley & Steffensen among them. In this paper, the Steffensen method failed to converge many, many times (although the author works in the complex plane, not on the real line).

Also, the fractal pictures in the paper are nice :-)

Regards.
03-14-2018, 10:25 PM
Post: #46
(03-14-2018 08:32 PM)Claudio L. Wrote:  I couldn't find any reference to this in the GSL docs (?)

Try the contents on the left: https://www.gnu.org/software/gsl/doc/htm...roots.html


Pauli
03-15-2018, 01:51 PM (This post was last modified: 03-15-2018 02:02 PM by Claudio L..)
Post: #47
(03-14-2018 11:26 PM)Mike (Stgt) Wrote:  So I correct my statement: this Precise Calculator is not too bad, and fast. Sufficient for many problems.

There's one thing preccalc does differently from the rest. I struggled with this at first; then I read the docs and realized what it does: it searches for the solution at dynamic precision, iteratively, until the final result no longer changes at the selected precision, using more than the requested precision for the intermediate steps (as needed for convergence).
I say I struggled because I was trying to get the rounding error to match after several operations, and this one always gave me the exact result, with no rounding error.
If you try asin(acos(atan(tan(cos(sin(9)))))) with 8000 digits, for example (in Deg mode), you'll see a result with 40 or 50 correct digits; then, several seconds later, it goes to exactly 9 and displays the total computation time only once it reaches convergence (on my system that's around 9 seconds). That explains the difference in speed versus Calc: preccalc does the same operation many times, increasing the precision each time, while Calc does each operation only once, at the requested precision. In this case Calc would report some rounding error (different from any other calculator, since it uses an 'a/b' representation of numbers rather than 'a*2^b' or 'a*10^b', so the rounding errors will be very different, but they will be there), while preccalc won't stop searching until it reports the exact final result of 9.
PS: Try 9-asin(acos(atan(tan(cos(sin(9)))))) at 3000 digits and watch it go nuts refining the solution. It seems to have a cutoff, reporting zero when the result is < 10^-(1.3*Prec).
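
For anyone curious, a rough sketch of that increase-precision-until-stable strategy in Python with mpmath (the guard-digit schedule and the give-up cutoff are arbitrary choices here, not preccalc's actual internals):

Code:
from mpmath import mp, mpf, sqrt, nstr

def stable_eval(expr, digits, max_extra=1000):
    """Re-evaluate expr() with more and more guard digits until the first
    `digits` significant digits stop changing between passes; give up past
    max_extra guard digits (mimicking the cutoff described in the PS)."""
    prev, extra = None, 10
    while extra <= max_extra:
        mp.dps = digits + extra        # working precision exceeds the displayed precision
        rounded = nstr(expr(), digits)
        if rounded == prev:
            return rounded
        prev, extra = rounded, extra * 2
    return '0'                         # never stabilised: treat it as an exact zero

# Toy example: exactly 1/3, but a fixed-precision evaluation suffers cancellation
print(stable_eval(lambda: (sqrt(mpf(2))**2 - 2) + mpf(1)/3, 50))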
03-16-2018, 01:49 AM
Post: #48
(03-14-2018 09:57 PM)emece67 Wrote:  For non-bracketed methods, you may find this paper interesting. It compares up to 13 different non-bracketed methods (not the high-order, high-complexity ones; the highest-order method in the comparison is Ostrowski-Traub, 4th order), with Newton, Halley & Steffensen among them. In this paper, the Steffensen method failed to converge many, many times (although the author works in the complex plane, not on the real line).

Also, the fractal pictures in the paper are nice :-)

Regards.

Took me a while to digest the paper; there's a lot of information there. From the tables it seems that, regardless of the theoretical convergence rates, only one method is consistently faster than Newton: Ostrowski-Traub, and not by much (usually only 10 to 30% faster, not what you'd expect from a 4th-order method).
And a few surprises: the author worked with complex roots, and here I thought only a few methods were good at finding complex roots. It seems any method can work, but some do a horrible job (Steffensen, Figure 9; the black area means no solution was found). And I said before I was going to research this method... now it's discarded (thanks!).
It seems the simplest methods are really hard to beat. All that complexity to get a 10% speedup, and only some of the time? Even Halley was faster than Newton in only one table.
A very enlightening paper, thanks for the link.
03-17-2018, 11:59 PM (This post was last modified: 03-18-2018 01:37 AM by emece67.)
Post: #49
(03-16-2018 01:49 AM)Claudio L. Wrote:  It seems the simplest methods are really hard to beat. All that complexity to get a 10% speedup, and only some of the time? Even Halley was faster than Newton in only one table.

In fact, the order of convergence is a (really) bad measure of the "speed" of an iterative root-seeking algorithm.

Suppose we want to estimate the time needed by a root-seeking algorithm to polish a root estimation with \(n_0\) correct digits into one with \(n_t\) correct digits. If the method has order of convergence \(r\), then we expect it will need \(\lceil\log_r(n_t/n_0)\rceil\) iterations to reach that precision. If the method needs \(d\) function evaluations per iteration, then the expected number of function evaluations for such polishing is \(d\lceil\log_r(n_t/n_0)\rceil = d\lceil\ln(n_t/n_0)/\ln r\rceil\). If the computation time is dominated by the time needed for each function evaluation, then the time required for this polishing is proportional to that quantity. Thus, we can conceive of a "speed" \(s\) of the algorithm as a quantity inversely proportional to that time (after some simple manipulations, dropping the ceiling operator and the constant \(\ln(n_t/n_0)\)):
\[s\propto\ln\sqrt[d]{r}=\ln\gamma\]
where \(\gamma=\sqrt[d]{r}\) is the efficiency index of the algorithm.

Putting numbers to this, you can infer that the "powerful" Kung-Traub method, with its amazing 8th-order convergence rate (and 4 function evaluations per iteration), may be only 50% faster than Newton (2nd order, 2 function evaluations per iteration). Add to this that the Kung-Traub iteration is much more complex than Newton's, and you end up concluding that, in practice, Newton may indeed be faster, except when the time required to compute each function evaluation is much longer than the overhead of the algorithm.

Even more, if we accept the Kung-Traub conjecture, which states that the maximum order of convergence attainable by an iterative root-seeking algorithm using \(d\) function evaluations per iteration is \(r=2^{d-1}\), then the maximum speed we can reach with such optimal algorithms is:
\[s_{\max}\propto{d-1\over d}\]
so the speed of such optimal methods (Newton, Ostrowski-Traub and Kung-Traub are optimal with \(d\) equal to 2, 3 and 4 respectively; Halley and the other non-bracketing methods mentioned in this thread are sub-optimal) depends only on the number of function evaluations per iteration. The slowest will be Newton; Ostrowski-Traub will show a 33% speedup and Kung-Traub a 50% speedup over Newton. As the methods get more and more complex, the maximum speedup approaches 100% over Newton (twice the speed, half the time), but no more. Given the large overhead of such complex algorithms, you can see there is no point in using them, except at large digit counts, where the algorithm overhead may be negligible compared with the time required for each function evaluation. For everyday use, only Ostrowski-Traub has any real chance of beating Newton, and it does many times, but still, do not expect speedups greater than 33%.
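
A quick numeric check of those figures (orders of convergence and evaluation counts as given above):

Code:
from math import log

# method: (order of convergence r, function evaluations per iteration d)
methods = {
    'Newton':          (2, 2),
    'Halley':          (3, 3),   # sub-optimal: r < 2**(d-1)
    'Ostrowski-Traub': (4, 3),
    'Kung-Traub':      (8, 4),
}

s_newton = log(2) / 2                  # s is proportional to ln(r)/d = ln(gamma)
for name, (r, d) in methods.items():
    s = log(r) / d
    print(f'{name:15s} gamma = {r**(1/d):.3f}  speed vs Newton: {s/s_newton - 1:+.0%}')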

Also, note that the time required by the algorithm to get from a "distant" root estimate to the vicinity of the root (where the algorithm starts to show its convergence order) is not at all easy to predict from the convergence order; hence the value of the empirical data in Varona's paper.

Thus, I think it is better to spend time optimizing other parts of the code (convergence criteria, computation of the derivative, ...) than trying out exotic algorithms.

Regards.
03-18-2018, 10:10 AM
Post: #50
(03-18-2018 05:52 AM)Mike (Stgt) Wrote:  Maybe you are right, maybe I am wrong, but the lucky choice of a good initial estimate is not reflected in your formula.

Sure, that is why I said this:
(03-17-2018 11:59 PM)emece67 Wrote:  Suppose we want to estimate the time needed by a root-seeking algorithm to polish a root estimation [...]

With "polish" I want to mean that we are near (or very near) the root, so the algorithm can show its convergence order and it will enlarge the number of correct digits in the estimation. With 0 correct digits we are not near the root, my reasoning does not apply in such conditions.

And:
(03-17-2018 11:59 PM)emece67 Wrote:  Also, note that the time required by the algorithm to get from a "distant" root estimate to the vicinity of the root (where the algorithm starts to show its convergence order) is not at all easy to predict from the convergence order; hence the value of the empirical data in Varona's paper.

By which I mean that the order of convergence is not useful as a "speed" measure during this phase of the iterations.

Thus, my point is that the order of convergence is not useful at all for predicting the "speed" of an iterative non-bracketing algorithm, neither far from the root nor while converging onto it.

Regards.
03-18-2018, 03:06 PM (This post was last modified: 03-18-2018 07:34 PM by emece67.)
Post: #51
(03-05-2018 03:52 AM)Paul Dale Wrote:  
(03-02-2018 01:00 PM)Mike (Stgt) Wrote:  to compute square roots up to 100'000 digits and more?

[...]
It might be worth looking into Halley's method, which has cubic convergence but needs the second derivative, which is a constant here. However, I doubt the extra computation would be worthwhile. I think higher-order Householder methods will fail because all of the higher-order derivatives are zero.

You could look for Karatsuba and FFT square roots. I remember them being Newton's method with improvements to the basic arithmetic operations.

I finally computed square roots in Python up to 1 billion bits (~300 million digits), comparing 8 methods. The fastest was always Halley, with speedups as high as 2.5x against the slowest one (Kung-Traub). Compared against Newton or Ostrowski, the speedups are more moderate, around 20%. The simplicity of the Halley method and the fact that the 2nd derivative is constant made it the winner in this case.
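
For illustration (this is a minimal sketch, not the benchmark script itself): for \(f(x)=x^2-a\) the Halley step collapses to a single rational update, which is what makes it so cheap per iteration. An mpmath version might look like this:

Code:
from mpmath import mp, mpf

def halley_sqrt(a, digits):
    """sqrt(a) by Halley's method on f(x) = x**2 - a.  Since f'' is constant,
    the step collapses to x <- x*(x**2 + 3a)/(3*x**2 + a), with cubic convergence."""
    mp.dps = digits + 10               # working precision with a few guard digits
    a = mpf(a)
    x = mpf(float(a) ** 0.5)           # double-precision seed: ~15 correct digits
    good = 15
    while good < mp.dps:
        x = x * (x*x + 3*a) / (3*x*x + a)
        good *= 3                      # the number of correct digits roughly triples
    return x

print(halley_sqrt(17, 1000))           # sqrt(17) to 1000 digits; the run above pushed this much further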

But the real accelerator was the core library used to perform the arithmetic. I started with the bare mpmath library (which uses the default Python machinery for arithmetic). The time needed by this library to perform multiplications and divisions seems to grow quadratically with the number of digits, so the algorithm used for such operations may be the schoolbook one.

Then I switched to mpmath+gmpy (which uses the GMP library). This gave a speedup of up to 35x at small digit counts and up to 1200x at large digit counts. The time required for mul/div grew more slowly than quadratically. This way, computing sqrt(17) to ~300 million digits with the Halley method took less than 20 minutes on a laptop.

The gmpy library (in fact GMP underneath) uses different multiplication/division algorithms depending on the number of digits: at low digit counts it uses the schoolbook method, then switches to Karatsuba, then to different variants of Toom, and finally to FFT.

As expected, Pauli's insight was foolproof ;-)

Regards.

EDIT: there were 1 billion bits, not digits, which is ~300 million digits, not a billion; my mistake when switching to gmpy. I have also fixed the speedup factors.
03-18-2018, 06:34 PM
Post: #52
(03-18-2018 03:06 PM)emece67 Wrote:  I finally computed square roots in Python up to 1 billion digits, comparing 8 methods....

Where/how do you store such a result? Or rather 8 of them?

Are these > 1GB text files?

And assuming so, how do you actually manipulate (e.g. even to simply compare) such large files?

Whatever the answers, kudos for your perseverance and patience.

--Bob Prosperi
03-18-2018, 07:18 PM (This post was last modified: 03-18-2018 07:40 PM by emece67.)
Post: #53
(03-18-2018 06:34 PM)rprosperi Wrote:  
(03-18-2018 03:06 PM)emece67 Wrote:  I finally computed square roots in Python up to 1 billion digits, comparing 8 methods....

Where/how do you store such a result? Or rather 8 of them?

Are these > 1GB text files?

And assuming so, how do you actually manipulate (e.g. even to simply compare) such large files?

Whatever the answers, kudos for your perseverance and patience.

Oops, my fault. There were not a billion digits but 1 billion bits, which is ~300 million decimal digits. I screwed it up when switching to gmpy.

In any case, the program does not store them on disk. After each run it only reports the number of iterations needed, the time spent and the number of correct digits in the result. When running at the maximum precision of ~300 million digits it takes ~2.5 GB of RAM.

No big merit on my part; the program is straightforward. The Python, mpmath, gmpy and GMP developers deserve the credit.

Sorry & regards.
03-18-2018, 08:34 PM
Post: #54
(03-18-2018 07:18 PM)emece67 Wrote:  Oops, my fault. There were not a billion digits but 1 billion bits, which is ~300 million decimal digits. I screwed it up when switching to gmpy.

In any case, the program does not store them on disk. After each run it only reports the number of iterations needed, the time spent and the number of correct digits in the result. When running at the maximum precision of ~300 million digits it takes ~2.5 GB of RAM.

No problem, that is still a LOT of digits.

However, if the result is not stored, how do you know the results are correct?

--Bob Prosperi
03-18-2018, 09:07 PM
Post: #55
(03-18-2018 08:34 PM)rprosperi Wrote:  
(03-18-2018 07:18 PM)emece67 Wrote:  Oops, my fault. There were not a billion digits but 1 billion bits, which is ~300 million decimal digits. I screwed it up when switching to gmpy.

In any case, the program does not store them on disk. After each run it only reports the number of iterations needed, the time spent and the number of correct digits in the result. When running at the maximum precision of ~300 million digits it takes ~2.5 GB of RAM.

No problem, that is still a LOT of digits.

However, if the result is not stored, how do you know the results are correct?

By comparing against the sqrt function in the mpmath library, and also by squaring the result.
03-18-2018, 10:46 PM
Post: #56
(03-18-2018 03:06 PM)emece67 Wrote:  As expected, Pauli's insight was foolproof ;-)

:-)

I was correct that Halley's method was worth trying, but not in my hunch that its complexity would outweigh the benefits.

Pauli
03-28-2018, 11:15 PM
Post: #57
More new stuff.

https://arxiv.org/pdf/1803.10156.pdf
03-29-2018, 08:08 AM
Post: #58
(03-28-2018 11:15 PM)ttw Wrote:  More new stuff.

https://arxiv.org/pdf/1803.10156.pdf

Not exactly new; what the authors call the "corrected Newton method" is exactly ye olde Halley method.

Regards.
03-29-2018, 11:47 PM (This post was last modified: 03-29-2018 11:52 PM by ttw.)
Post: #59
I'm surprised that they didn't recognize Halley's method; both Halley's method and Chebyshev's method are well known. I didn't recognize it because I was careless and more interested in the extended method. There is a method (from the 1960s) which is reminiscent of correlated sampling in Monte Carlo: one chooses another, easier function g(x) and computes roots of f(x)+c*g(x) for values of c approaching zero. Sometimes this makes things easier.
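
Below is a sketch of that continuation idea; the functions, the schedule for c and the use of SciPy's Newton/secant solver as the inner root-finder are just illustrative choices, not the original 1960s formulation:

Code:
from scipy.optimize import newton

def continuation_root(f, g, x0, c_schedule=(1.0, 0.3, 0.1, 0.03, 0.01, 0.0)):
    """Follow a root of f(x) + c*g(x) as c is driven to zero, using each
    solution as the starting guess for the next, harder value of c."""
    x = x0
    for c in c_schedule:
        x = newton(lambda t, c=c: f(t) + c * g(t), x0=x)
    return x

# Toy example: the target f, softened by an easy g with a root nearby
f = lambda x: x**3 - 2*x - 5
g = lambda x: x - 2
print(continuation_root(f, g, x0=2.0))   # ends at the root of f alone, ~2.0946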

In "real life" (equations I was paid for solving), none of the higher order methods usually worked. An exception was in finding roots of orthogonal polynomials for getting Gaussian Integration nodes. Usually, everything was at best linearly convergent.

I always seem to end up using some version of the Levenberg-Marquardt method.

This is also of interest: https://en.wikipedia.org/wiki/Jenkins%E2..._algorithm
https://ac.els-cdn.com/S0898122107006694...c37e83c178