On Convergence Rates of Root-Seeking Methods
03-06-2018, 02:53 AM
Post: #21
RE: On Convergence Rates of Root-Seeking Methods
(03-04-2018 11:58 PM)Mike (Stgt) Wrote: A German proverb says 'no reply is also a reply'.

So is a reply from a non-expert. Probably not the best methods, but lots of information I was not aware of: https://en.wikipedia.org/wiki/Methods_of...uare_roots

a) Bakhshali method: interesting, as the source is an ancient Hindu manuscript;
b) Taylor series. Quoting from Wikipedia:

Quote: As an iterative method, the order of convergence is equal to the number of terms used. With two terms, it is identical to the Babylonian method. With three terms, each iteration takes almost as many operations as the Bakhshali approximation, but converges more slowly.

Despite that remark, that's the one I chose to play with, because it is easy to follow. Also, it can be expanded at will if one wants to try it with more terms. The first three terms:

\(\sqrt{x}= r\left ( 1+\frac{x-r^{2}}{2r^{2}}-\frac{\left ( x-r^{2} \right )^{2}}{8r^{4}} \pm \cdots \right )\)

Equivalently, after a manual simplification:

\(\sqrt{x}\approx \frac{3\left ( r^{2}+2x \right )-\left ( \frac{x}{r} \right )^{2}}{8r}\)

Decimal Basic program and output:

OPTION ARITHMETIC DECIMAL_HIGH ! 1000 digits precision
LET t = TIME
INPUT x
LET r = 10^INT(LOG10(x)/2) + 1 ! initial estimate
PRINT r
DO
   LET a = r
   LET r = (3*(r^2 + 2*x) - (x/r)^2)/(8*r)
   PRINT r
LOOP UNTIL ABS(r - a) < 1E-300
PRINT SQR(x) - r
PRINT TIME - t;" seconds"
END

? 17
2
2.609375
3.8314576666222308153602079071803683679290635266453002417839927016060030965356031813886658895626223976443827895...
4.1222446206376004197199583378751078184770740152292521004797760537939389501951500012131448438220412295740265140...
4.1231056255988785361780116477668858264967274345935977923118742686911491490196513723285709444226805259084101669...
4.1231056256176605498214098559740768302760537393121186747616816254806109415372984751649808177898214772303507785...
4.1231056256176605498214098559740770251471992253736204343986335730949543463376215935878636508106842966843263884...
4.1231056256176605498214098559740770251471992253736204343986335730949543463376215935878636508106842966845440409...
4.1231056256176605498214098559740770251471992253736204343986335730949543463376215935878636508106842966845440409...
8.20271467074563198363135491519254737053821092426895150882866591718992670433259808358492948699826676293788E-931...
2.02 seconds

This is a test for small arguments, as the initial estimation method will cause an error for large arguments (starting around 920).

RPL program:

%%HP: T(3)A(R)F(.);
\<< DUP XPON 2. / ALOG 1. +
   DO
      DUP2 DUP2 SQ SWAP DUP + + 3. * UNROT / SQ -
      OVER 8. * / SWAP DUP2 / 1. - ABS NIP .00000001 <
   UNTIL END
   NIP
\>>

Again, this will cause an error for x = 0. No problem, we know Sqrt(0) = 0. Also, these are just tests :-)
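For readers following along in Python, the same three-term iteration is easy to reproduce with mpmath. This is a minimal sketch, not Gerson's code; the 60-digit precision and the stopping threshold are arbitrary choices for illustration.

from mpmath import mp, mpf, log10, fabs

# 3-term Taylor iteration for sqrt(x):
#   r_new = (3*(r^2 + 2*x) - (x/r)^2) / (8*r)
mp.dps = 60                              # working precision, decimal digits
x = mpf(17)
r = mpf(10) ** int(log10(x) / 2) + 1     # same crude initial estimate as the BASIC version
while True:
    a = r
    r = (3 * (r * r + 2 * x) - (x / r) ** 2) / (8 * r)
    print(r)
    if fabs(r - a) < mpf('1e-50'):
        break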
03-08-2018, 01:04 AM
Post: #22
RE: On Convergence Rates of Root-Seeking Methods
There's a paper somewhere that I can't recall (from before Bailey, Borwein and Plouffe) which discussed the time to compute Pi by various methods. The linear, quadratic, cubic, and quartic methods took about the same time overall. What happens is that the cost of the multi-precision arithmetic dominates, whatever the convergence rate.
A very loose estimate can be obtained by assuming that arithmetic on N digits takes time proportional to N^2. For quadratic convergence up to about 100 digits, digit lengths of (1, 2, 4, 8, 16, 32, 64, 100) can be used, and for cubic, (1, 3, 9, 27, 81, 100). The sums of squares are 15461 and 17381 respectively. Although the quadratic case uses more steps than the cubic, the number of elementary operations is nearly the same.
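To make the arithmetic concrete, here is a quick check in Python; the N^2 cost per step is the rough model assumed above, not a measurement.

# If N-digit arithmetic costs ~N^2, total work is the sum of squared
# precisions visited on the way to 100 digits.
quadratic = [1, 2, 4, 8, 16, 32, 64, 100]   # digits roughly double per step
cubic     = [1, 3, 9, 27, 81, 100]          # digits roughly triple per step

print(sum(n * n for n in quadratic))   # 15461
print(sum(n * n for n in cubic))       # 17381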
03-08-2018, 01:21 AM
(This post was last modified: 03-11-2018 01:09 PM by emece67.)
Post: #23
RE: On Convergence Rates of Root-Seeking Methods
(03-07-2018 10:32 PM)Mike (Stgt) Wrote: It's awesome, at least for those who are not awfully bored by this subject.

Yes, it is awesome! I've also made my own tests, in my case in Python with the mpmath package for the arbitrary-precision arithmetic. I've tested these algorithms (all are generic root-seeking algorithms): Newton-Raphson, Halley, Ostrowsky-Traub and Grau+Díaz-Barrero.
I've not used sliding accuracy (for now), but fixed precisions of 10000, 100000 and 1 million digits, with only two arguments, 2 and 17, and testing only sqrt(). My finding is that the usual parameter used to compare the speed of root-seeking algorithms, the efficiency index \(\gamma=\sqrt[r]{p}\) (where \(p\) is the convergence order of the method and \(r\) the number of function evaluations needed on each iteration), is not useful here. In this case the computation of the function to be solved requires only one multiplication and one subtraction, whereas the algorithm itself needs one or more divisions, and divisions are more time consuming at such large digit counts; the function evaluations are thus cheaper than the other operations the algorithm performs. Defining instead a "division-aware efficiency index" \(\gamma_d=\sqrt[d]{p}\) (where \(d\) is the number of divisions needed on each iteration) gives a reliable predictor of the speed of the algorithm. Computing this index for the above algorithms puts Halley's method on top, which matches the measured speeds.
So the moral is: when using this kind of classic root-seeking algorithm to compute square roots, use one with a high convergence order that needs few divisions and is not too cumbersome (this is the reason I added the Grau+Díaz-Barrero method to my tests). Now I will check whether the Taylor method used by Gerson can be turned into a 4th-order method with only one division. Such a method, if not too complex, may be even faster than Halley. In my tests, the more digits used, the higher the correlation between \(\gamma_d\) and the speed of the method. For the curious, it took 218 s for Halley's algorithm to compute \(\sqrt2\) to 1 million digits (Intel Core-i7@2.7 GHz, Win10).

Regards.
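To make the bookkeeping concrete, here is a tiny sketch of both indices. The orders \(p\) and evaluation counts \(r\) are the standard ones (they reproduce the \(\gamma\) values quoted later in this thread), but the per-iteration division counts \(d\) are assumptions for the square-root case; they depend on how each method is coded, so treat the \(\gamma_d\) numbers as an example of the calculation, not as emece67's measured data.

# gamma = p**(1/r): classic efficiency index (p = order, r = function
# evaluations per iteration).  gamma_d = p**(1/d): the division-aware
# variant proposed above.  Division counts d are illustrative assumptions.
methods = {
    #                  p  r  d
    'Newton-Raphson':  (2, 2, 1),
    'Halley':          (3, 3, 1),
    'Ostrowsky-Traub': (4, 3, 2),
}
for name, (p, r, d) in methods.items():
    print(f"{name:16s} gamma = {p**(1/r):.3f}  gamma_d = {p**(1/d):.3f}")

With these counts, \(\gamma_d\) ranks Halley first (3.0 versus 2.0 for the others), consistent with the timings reported in this thread, while the plain \(\gamma\) would have ranked Ostrowsky-Traub first.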
03-08-2018, 04:17 AM
Post: #24
RE: On Convergence Rates of Root-Seeking Methods
03-08-2018, 05:35 AM
Post: #25
RE: On Convergence Rates of Root-Seeking Methods
I'm new to this subject and I find it very interesting.
My first contact with root-seeking was the HP-15C SOLVE function. I don't know much about the convergence speed of SOLVE, because I don't have an actual HP-15C; I use the emulator for PC, which runs very fast on all kinds of equations. I just got an HP-11C, which doesn't have the SOLVE feature of the 15C. Then I noticed that the 11C Owner's Handbook contains a Newton's Method program, so that was the second root-seeking program I tried. It turned out to be very slow, mostly because of the 11C's slow CPU. I then happened to try another program that uses the Regula Falsi method on the 11C; it is much faster than the Newton's Method program. With the Regula Falsi program you need to provide two reference points, X1 and X2, and while entering them you can check right away whether the two points are far apart or not. When done, give it the tolerance and start the computation; afterwards you can check the accuracy against the value 0. If the answer is not accurate enough, just press R/S again for more accuracy. I'm new to this algorithm; I'll post the program later on.

Gamo
03-08-2018, 08:00 PM
Post: #26
RE: On Convergence Rates of Root-Seeking Methods
(03-06-2018 02:53 AM)Gerson W. Barbosa Wrote: Decimal Basic program and output:

I suggest you place the INPUT line first, i.e. before TIME is saved in t. ;-)

But tell me more about this Decimal Basic. Sounds very interesting.

Dieter
03-08-2018, 08:10 PM
(This post was last modified: 03-08-2018 08:11 PM by Dieter.)
Post: #27
RE: On Convergence Rates of Root-Seeking Methods
(03-08-2018 05:35 AM)Gamo Wrote: I'm new to this subject and I find it very interesting.

First of all: the original post by Namir indeed was about root-seeking methods, i.e. methods for solving f(x)=0. But the last posts deal with a different topic: methods for calculating the square root of a number.

(03-08-2018 05:35 AM)Gamo Wrote: I then happened to try another program that uses the Regula Falsi method

The Newton method usually converges at a speed similar to Regula Falsi and its variants. But the latter has the advantage that the user can provide two guesses that bracket the solution. It may also be faster here and there, because each iteration only requires one call of f(x) instead of two for the Newton method (f(x) and f(x+h), or f(x) and f'(x)).

BTW, Gamo, I just noticed the PM you sent a few days ago. I have now sent a reply – and an 11C program. ;-)

Dieter
03-08-2018, 10:43 PM
(This post was last modified: 03-10-2018 03:59 AM by Gerson W. Barbosa.)
Post: #28
RE: On Convergence Rates of Root-Seeking Methods
(03-08-2018 08:00 PM)Dieter Wrote:
(03-06-2018 02:53 AM)Gerson W. Barbosa Wrote: Decimal Basic program and output:
I suggest you place the INPUT line first, i.e. before TIME is saved in t. ;-)

Previously I had x = 2 there. Then I simply replaced it with INPUT x to test the program with various arguments without having to edit it every time. That's what happens when you're in a hurry. 2.02 seconds seemed a bit excessive to me just for computing the square root of 17 to 930+ places. The 0.02 seconds I get now is more like it :-)

(03-08-2018 08:00 PM)Dieter Wrote: But tell me more about this Decimal Basic. Sounds very interesting.

It was used by Tom L. in another thread. Rexx has also been suggested and appears to be much better, but I haven't tried it yet; Decimal BASIC was easier to find and install.

Decimal BASIC

Gerson.

------------------------------

PS: 0 seconds! (Ok, 0 seconds + something)

OPTION ARITHMETIC DECIMAL_HIGH ! 1000 digits precision
INPUT x
LET r = EXP(LOG(x)/2)
LET t = TIME
PRINT r
DO
   LET a = r
   LET b = r*r
   LET r = (3*b*(b + 2*x) - x*x)/(8*b*r)
   PRINT r
LOOP UNTIL ABS(r - a) < 1E-300
PRINT SQR(x) - r
PRINT TIME - t;" seconds"
END

? 17
4.1231056256176604
4.12310562561766054982140985597407702514719922537352...
...
4.123105625617660549821409855974077025147199225373620434398633573094954346337621593587863650810684296684544040939214141615301420840415868363079548145746906977677023266436240863087790567572385708225521380732563083860309142749804671913529322147978718167815796475906080565496973900766721383689212106708921029005520264976997227788461399259924591373145657819254743622377232515783073400662476891460894993314102436279443386280552637475060905080869257482675403757576927464631666351033096817122919874195864431971054705958485725931943603620656058152613585046428067872150064104914222367522243486737258047037771274998566571218570432100303602606506487154690698281546846459564503441849930597639509078619959043334207783036732466105002383305603648597891517738125149725101393295630516977396156134483704021469549517283774775128332086775432479301964503858945967736521957022356481292823232373091650044755709460165721749143175547451122718361635317492475624065195560022755934398822460451518623945769412122844523427764255913...

0 seconds
03-09-2018, 02:00 PM
(This post was last modified: 03-09-2018 04:31 PM by emece67.)
Post: #29
RE: On Convergence Rates of Root-Seeking Methods
(03-08-2018 01:21 AM)emece67 Wrote: Now I will check whether the Taylor method used by Gerson can be turned into a 4th-order method with only one division. Such a method, if not too complex, may be even faster than Halley.

I've tested this 4th-order Taylor method with only one division. The results are a little disappointing. At 100000 digits it behaves like the other 4th-order method I've tested (Ostrowsky-Traub): the effect of avoiding a division is balanced by the more complex algorithm and also by needing one more iteration. At 1 million digits it gets faster than Ostrowsky-Traub and Newton-Raphson and is comparable to Grau+Díaz-Barrero, but still slower than Halley. The test at 10 million digits is now in progress (but it may take some days to complete).

Regards.
03-11-2018, 12:53 PM
(This post was last modified: 03-11-2018 01:21 PM by Gerson W. Barbosa.)
Post: #30
RE: On Convergence Rates of Root-Seeking Methods
(03-09-2018 02:00 PM)emece67 Wrote: I've tested this 4th-order Taylor method with only one division. The results are a little disappointing.

Decimal Basic is not the right tool for this test, but I did it anyway. It's limited to one thousand digits and the precision cannot be changed. Starting with a 16-digit approximation, the number of correct digits is quadrupled at each iteration. Thus, for 268,435,456 digits only 12 iterations would suffice (log(268435456)/log(4) - 2). Ideally the precision should be set to 64 digits in the first iteration, then to 256 in the second iteration, then to 1024 in the third iteration and so on, but I cannot do that in Decimal Basic.

OPTION ARITHMETIC DECIMAL_HIGH ! 1000 digits precision
INPUT x
LET r = EXP(LOG(x)/2)
LET t = TIME
PRINT r
DO
   LET a = r
   LET b = r*r
   LET r = (b*(b*(b*(20*x - 5*b + 40) + (120 - 30*x)*x) + x*x*(20*x - 40)) + (8 - 5*x)*x*x*x)/(128*b*b*r)
   PRINT r
LOOP UNTIL ABS(r - a) < 1E-256
PRINT SQR(x) - r
PRINT TIME - t;" seconds"
END

? 2
1.414213562373095
1.414213562373095048801688724209698078569671875376948073176679737989479122515585022787243548195807986806093853455270729739546313528476579927594917191322959750977851190546051993044944382000040210704403426034038634388091113054796694581356388229288359585276126491113757787998102600361450207381045186090436903909837008746643103470261398540823114630964920859427528715006182325577955108619595791350406846856939204002376052677211864022775372055809533712618550374084852800300322008357125978760074662894146616142755770477791864670027513055471099322605191320081824714171584273943586976391808542784030087124045096442392382672712897402891678258127461630661480718134771108588125271691324904112941287270529120464653563566882946440308972859172411054988437943106627597053434432708976403774980364280296083588274483068903207890388709331167154011961550566762524377652196553898812817803334976864518182318880250472685835088343766411630030624490292911229596360377146383430463532113483159222912044330867518715241960119192026
1.414213562373095048801688724209698078569671875376948073176679737990732478462107038850387534327641572735013846230912297024924836055850737212644121497099935831413222665927505592755799950501152782060571470109559971605970274534596862014728517418640889198609552329230483763417799908824040833757274424682543742882438438438070453100636427261523604611064397416599897816432831981070187110157176467213455559563413832839571686393092442563251661181110807384589413389540250122001805613277946178199284355998211437151241574470660408495214036434662520605856846458150316934024754446907077683032964699961474721779541328802512682141023525644810450433732147073703801547018934883767816255384956590224751982474241571339154728018897573565000339167513251254677743584922032240364649937930685912128733124895129684082793457798607193077549736046810047246142214579409612589552931387402563540608918821866778873535585289808899991954266205738797046428947561033799943370264539216583375357571767992903973693340385046714018520340468998
1.414213562373095048801688724209698078569671875376948073176679737990732478462107038850387534327641572735013846230912297024924836055850737212644121497099935831413222665927505592755799950501152782060571470109559971605970274534596862014728517418640889198609552329230484308714321450839762603627995251407989687253396546331808829640620615258352395054745750287759961729835575220337531857011354374603408498847160386899970699004815030544027790316454247823068492936918621580578463111596668713013015618568987237235288509264861249497715421833420428568606014682472077143585487415565706967765372022648544701585880162075847492265722600208558446652145839889394437092659180031138824646815708263010059485870400318648034219489727829064104507263688131373985525611732204024509122770022694112757362728049573810896750401836986836845072579936472906076299694138047565482372899718032680247442062926912485905218100445984215059112024944134172853147810580360337107730918286931471017111168391658172688941975871658215212822951848847
2.08969463386289156288276595230574797E-1000
0 seconds
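Gerson's "ideally the precision should be set to 64 digits in the first iteration, then to 256, ..." is straightforward to try in Python with mpmath, where the working precision can be changed between iterations. A minimal sketch (my port of the 4th-order formula above, not Decimal BASIC, with arbitrary guard-digit choices):

from mpmath import mp, mpf

def sqrt4(x, digits=1000):
    """Sliding-precision test of the 4th-order iteration: each pass
    roughly quadruples the number of correct digits, so mp.dps is
    quadrupled (plus guard digits) before every step instead of
    staying at the target precision throughout."""
    mp.dps = 20
    x = mpf(x)
    r = mp.exp(mp.log(x) / 2)              # ~16 correct digits to start
    dps = 20
    while dps < digits + 10:
        dps = min(4 * dps, digits + 10)    # quadruple, plus guard digits
        mp.dps = dps
        b = r * r
        r = (b*(b*(b*(20*x - 5*b + 40) + (120 - 30*x)*x)
                + x*x*(20*x - 40)) + (8 - 5*x)*x*x*x) / (128*b*b*r)
    return r

r = sqrt4(2)
print(r - mp.sqrt(2))                      # difference around 1e-1000

With a 1000-digit target only three sliding-precision passes are needed (80, 320 and 1010 digits), so most of the work happens in the single full-precision step.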
03-11-2018, 02:26 PM
Post: #31
RE: On Convergence Rates of Root-Seeking Methods
I'm very interested in this subject, since I'm about to start coding the solvers for newRPL.
My question here is generic, for all the people who have been doing tests: I'm planning to have several solvers and let the user choose which one to use for their particular problem. The first thing needed is to separate them into two classes of methods: single-point and bracketed. Based on the tests reported in this thread, what methods would be a "must have" in each category?

Bracketed:
* Bisection
<add others here!>

Single starting-point:
* Newton-Raphson
<add others here!>

Keep in mind newRPL is multiprecision but limited to 2000 digits, so a very complex method that is only faster for more than a million digits wouldn't really be useful.
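For reference, the simplest member of the bracketed class looks like this; a minimal Python sketch (the tolerance and the test function are arbitrary examples):

def bisect(f, a, b, tol=1e-12):
    """Bisection: keeps a bracket [a, b] with f(a), f(b) of opposite
    signs, halving it until it is smaller than tol.  Convergence is
    only linear (one bit per iteration), but it is guaranteed."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "initial points must bracket a root"
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:        # root is in the left half
            b, fb = m, fm
        else:                   # root is in the right half
            a, fa = m, fm
    return (a + b) / 2

print(bisect(lambda x: x * x - 17, 4, 5))   # 4.123105625617661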
03-11-2018, 07:17 PM
(This post was last modified: 03-11-2018 07:31 PM by Dieter.)
Post: #32
RE: On Convergence Rates of Root-Seeking Methods
(03-11-2018 02:26 PM)Claudio L. Wrote: Based on the tests reported in this thread, what methods would be a "must have" in each category?

I'd say that a (modified) Secant / Regula Falsi method is a must here. The respective Wikipedia article has the two most common variants; the German version adds a third one. Among these, the Illinois method has been used in several HP software pacs of the Seventies (cf. the HP-65 and HP-41 Standard Pacs).

Pauli may add a few more sophisticated methods, e.g. Brent's – I remember the discussion years ago with regard to the WP34s solver.

Dieter
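For the record, the Illinois variant mentioned here differs from plain Regula Falsi in one detail: when the same endpoint survives two steps in a row, its stored function value is halved, which prevents the one-sided stagnation Regula Falsi is known for. A Python sketch of the idea (my own illustration, not the HP pac code):

def illinois(f, a, b, tol=1e-12, maxit=100):
    """Illinois variant of Regula Falsi: a false-position step on a
    bracket, halving the function value at an endpoint that is
    retained twice in a row."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "initial points must bracket a root"
    side = 0
    for _ in range(maxit):
        x = (a * fb - b * fa) / (fb - fa)   # false-position (secant) point
        fx = f(x)
        if abs(fx) < tol:
            return x
        if fa * fx < 0:                     # root in [a, x]: replace b
            b, fb = x, fx
            if side == -1:
                fa /= 2                     # a retained again: halve f(a)
            side = -1
        else:                               # root in [x, b]: replace a
            a, fa = x, fx
            if side == +1:
                fb /= 2                     # b retained again: halve f(b)
            side = +1
    return x

print(illinois(lambda x: x * x - 17, 4, 5))   # ~4.1231056256176605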
03-11-2018, 08:01 PM
(This post was last modified: 03-11-2018 08:10 PM by emece67.)
Post: #33
RE: On Convergence Rates of Root-Seeking Methods
(03-11-2018 02:26 PM)Claudio L. Wrote: Single starting-point:

I'll add the Ostrowsky-Traub method. It achieves 4th-order convergence with 3 function evaluations on each iteration while staying simple enough. When I wrote a complex solver for the wp34s (it is now the complex solver in its library) I tested several single-starting-point solvers, up to 1024 digits, and IMHO this method gave the best overall performance (see this post here and the next one on the same thread). It has an efficiency index of 1.587, higher than Halley (1.442) and Newton (1.414), and although there are methods with a better efficiency index (Kung-Traub has 1.682), they are more complex and, in practice, often slower because of that complexity.

In any case, keep in mind that many of these single-starting-point methods rely on the computation of the derivative. Many times this derivative is estimated with: \[f'(x)\approx{f(x+\Delta x)-f(x)\over\Delta x}\] and, if you do not use a small enough \(\Delta x\), you end up with, say, a 6-correct-digit estimate of the derivative, which is not adequate to get a new 2000-correct-digit estimate of the root from a previous 500-correct-digit one. My advice is: if you are working with \(d\) digits, use relative increments in \(x\) around \(10^{-d/2}\). In my tests, the way you estimate the derivative may have more impact on the execution time than the algorithm used.

Regards.
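A sketch of the two-step iteration being recommended, in Python with mpmath. The derivative is estimated by the forward difference above with a relative step of \(10^{-d/2}\), as advised; the test function, starting point and fixed iteration count are arbitrary illustration choices, not the wp34s code.

from mpmath import mp, mpf

mp.dps = 1024

def solve_ot(f, x, iters=8):
    """Ostrowsky-Traub: a Newton step to y, then a correction reusing
    f'(x).  Order 4 with three evaluations of f per iteration:
    f(x), f(x+dx) for the derivative estimate, and f(y)."""
    h = mpf(10) ** (-mp.dps // 2)          # relative step ~1e-512, per the post
    for _ in range(iters):
        fx = f(x)
        if fx == 0:
            return x
        dx = x * h if x != 0 else h
        fp = (f(x + dx) - fx) / dx         # forward-difference derivative
        y = x - fx / fp                    # Newton step
        fy = f(y)
        x = y - fy / fp * fx / (fx - 2 * fy)   # Ostrowsky-Traub correction
    return x

print(solve_ot(lambda t: t * t - 17, mpf(4)))  # converges to sqrt(17)

A real solver would of course test for convergence instead of running a fixed number of iterations; the fixed count just keeps the sketch short.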
03-11-2018, 09:50 PM
Post: #34
RE: On Convergence Rates of Root-Seeking Methods
I'd go with Brent's method (or any variant that looks good to you). The problem with high-order methods is that their radius of convergence may be too small to be useful. Brent's method alternates inverse quadratic interpolation, the secant method, and bisection, achieving guaranteed convergence with order about 1.84 in the typical case. There are probably ways to incorporate Newton's method if suitable regions can be found; this kicks things up a little in convergence rate.

A problem with derivatives is that they get rather complicated in several dimensions: the slope becomes a gradient (vector valued) and the second derivative becomes the Hessian matrix. One can look at something like the Levenberg-Marquardt method, which can be useful but requires user-set parameters. I think Brent's method needs little but the function and a couple of points where the function has different signs.
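If newRPL's solver ends up modeled on Brent's, the reference behaviour is easy to probe from Python, where SciPy ships an implementation (SciPy is just a convenient reference here, obviously not usable from newRPL itself):

# Brent's method as implemented in SciPy: it needs only the function
# and a bracketing interval, as noted above.
from scipy.optimize import brentq

root = brentq(lambda x: x * x - 17, 4, 5, xtol=1e-15)
print(root)   # 4.123105625617661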
03-11-2018, 11:26 PM
Post: #35
RE: On Convergence Rates of Root-Seeking Methods
(03-11-2018 08:41 PM)Mike (Stgt) Wrote:
(03-11-2018 12:53 PM)Gerson W. Barbosa Wrote: [...]

0 + something seconds, of course, as stated earlier. But that's too vague, I recognize. Around 3 milliseconds @ 2.60 GHz might be closer.

Apparently, the built-in SQR algorithm in Decimal Basic is not optimal:

-----------------------------------------------------------

OPTION ARITHMETIC DECIMAL_HIGH
LET x = 2
LET s = EXP(LOG(x)/2)
LET t = TIME
FOR i = 1 TO 1000
   LET r = s
   DO
      LET a = r
      LET b = r*r
      LET r = (b*(b*(b*(20*x - 5*b + 40) + (120 - 30*x)*x) + x*x*(20*x - 40)) + (8 - 5*x)*x*x*x)/(128*b*b*r)
   LOOP UNTIL ABS(r - a) < 1E-256
NEXT i
PRINT TIME - t;" seconds"
END

3.14 seconds
Average of 10 runs: 3.013 (min = 2.33; max = 3.25)

-----------------------------------------------------------

OPTION ARITHMETIC DECIMAL_HIGH
LET x = 2
LET t = TIME
FOR i = 1 TO 1000
   LET r = SQR(x)
NEXT i
PRINT TIME - t;" seconds"
END

3.87 seconds
Average of 10 runs: 3.720 (min = 2.54; max = 4.09)

-----------------------------------------------------------
03-11-2018, 11:36 PM
Post: #36
RE: On Convergence Rates of Root-Seeking Methods
Rather than Newton's, which requires the derivative, did you mean the secant method?
Dieter is correct, I'd recommend Brent's method for a bracketed solver. It is guaranteed to find a solution and is quadratically convergent almost always. There was a modification in the 34S to also include Ridders' method in addition to Brent's secant, inverse quadratic and bisection methods. Testing indicated that it was beneficial, although I don't remember the conditions under which it is used.

There is quite a bit of code before the searching can begin. If the initial estimates don't bracket the solution, a bisection step is done before switching to more advanced methods. There might be a forced secant step after the bisection, I'm not sure anymore. If the first two function evaluations are equal, it does a bisection step, and if that produces a third equal value, an interval scan with exponentially increasing steps is done, first to the right and then to the left. When there is only one initial estimate, it starts with the interval scan.

Pauli
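Ridders' method, mentioned above, fits the same bracketed framework; one step looks like this (a sketch from the textbook formula, not the 34S code):

from math import sqrt, copysign

def ridders_step(f, a, fa, b, fb):
    """One Ridders step: evaluate at the midpoint m of the bracket
    [a, b], then place the new point with an exponential correction
    that factors out the curvature.  The square root is real whenever
    fa and fb have opposite signs."""
    m = (a + b) / 2
    fm = f(m)
    s = sqrt(fm * fm - fa * fb)
    x = m + (m - a) * copysign(1.0, fa - fb) * fm / s
    return x, f(x)

f = lambda x: x * x - 17
a, b = 4.0, 5.0
x, fx = ridders_step(f, a, f(a), b, f(b))
print(x, fx)   # x ~ 4.1229 already, after a single step

A full solver wraps this step in the usual re-bracketing logic; the attraction is quadratic convergence per iteration at the cost of two function evaluations, i.e. order about 1.41 per evaluation.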
03-12-2018, 04:08 PM
Post: #37
RE: On Convergence Rates of Root-Seeking Methods
(03-11-2018 09:06 PM)Mike (Stgt) Wrote:
(03-11-2018 02:26 PM)Claudio L. Wrote: * Newton-Raphson

Almost every method has an implementation using some variant of a difference formula to replace the derivatives and come up with a workable algorithm. I'm not planning to reinvent the wheel; I'll research what's already being used and go with it. I even looked at some promising 16th-order methods, but I'm not so sure how tight the convergence radius is; they might need other methods to get the first few digits, and then a few iterations of the high-order method can quickly give you a lot of digits.
03-12-2018, 04:13 PM
Post: #38
RE: On Convergence Rates of Root-Seeking Methods
(03-11-2018 11:36 PM)Paul Dale Wrote: Rather than Newton's, which requires the derivative, did you mean the secant method?

I did mean Newton, with a formula to approximate the derivative; or, if the expression has a derivative the system can determine algebraically, it will use that.

(03-11-2018 11:36 PM)Paul Dale Wrote: Dieter is correct, I'd recommend Brent's method for a bracketed solver. It is guaranteed to find a solution and is quadratically convergent almost always. There was a modification in the 34S to also include Ridders' method in addition to Brent's secant, inverse quadratic and bisection methods. Testing indicated that it was beneficial, although I don't remember the conditions under which it is used.

Brent seems like a powerhouse combo of other methods. If I implement Brent, does it make any sense to offer the user a choice of bisection, etc.? It will probably end up being the default method, unless complex mode is enabled; then something like Muller seems to be the only choice?
03-12-2018, 04:38 PM
Post: #39
RE: On Convergence Rates of Root-Seeking Methods
(03-12-2018 04:08 PM)Claudio L. Wrote: I even looked at some promising 16th-order methods, but I'm not so sure how tight the convergence radius is [...]

I suspect that such high-order methods will not give any benefit at all at such 2000-digit counts. A benefit of the Ostrowsky-Traub method is that it resembles two consecutive steps of the Newton-Raphson one, but the second pass is indeed simpler than a Newton-Raphson step, so you end up with twice the convergence order for less than twice the effort. It can also be used in the complex plane.

Regards.
03-12-2018, 10:16 PM
Post: #40
RE: On Convergence Rates of Root-Seeking Methods
(03-12-2018 09:51 PM)Mike (Stgt) Wrote: BTW, the HP-19BII is not mentioned on MoHPC, where the HP-18B is described. Why?

It is included, here.

--Bob Prosperi