(HP71B) Newton's method
10-01-2023, 03:59 PM
Post: #1
(HP71B) Newton's method
Inspired by Csaba Tizedes' videos about Newton's method.
(09-23-2023 05:19 PM)Csaba Tizedes Wrote: https://www.youtube.com/shorts/1R3IMq227dM
(09-23-2023 05:21 PM)Csaba Tizedes Wrote: Same, without d/dx()

f'(x) ≈ (f(x+ε) - f(x-ε)) / (2ε) → f/f' needed 3 function calls per iteration.
We may reduce this to about 2 calls per iteration.
Redo the same example on the HP-71B, for full convergence.

Quote:
1 DESTROY ALL @ C=0

>RUN
Guess, Accuracy = 10, 1e-6
10
7.85714477037
6.29727032561
5.20041359757
4.4911553721
4.12133235435
4.00984649554
4.00007201239
4.00000000386
4
FNF calls = 19

For comparison, this is using the video's formula for Newton's correction.
There is no gain in rate of convergence ... just more function calls.

>50 D = FNF(X)*R
>60 C = C+3
>RUN
Guess, Accuracy = 10, 1e-6
10
7.8571447704
6.29727032563
5.20041359761
4.49115522316
4.12133220071
4.00984642245
4.0000719974
4.00000000388
4
FNF calls = 27
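For readers without a 71B handy, here is a minimal Python sketch of the basic idea: Newton's method with the central-difference derivative above, i.e. the straightforward 3-calls-per-iteration version (the 71B program that trims this to ~2 calls is the one quoted above and listed in full in post #5). The video's own function is not shown in this post, so the sketch borrows f(x) = 2x^2 + 3x - 12 from post #2 as a stand-in; function and variable names are mine.

# Sketch: Newton's method with f'(x) ~= (f(x+eps) - f(x-eps)) / (2*eps),
# costing 3 function calls per iteration. Stand-in function from post #2.
def newton_fd(f, x, acc=1e-6, eps=1e-3, max_iter=30):
    calls = 0
    for _ in range(max_iter):
        d = f(x) / ((f(x + eps) - f(x - eps)) / (2 * eps))  # Newton correction f/f'
        calls += 3
        x -= d
        print(x)
        if abs(d) < acc:
            break
    return x, calls

f = lambda x: 2*x*x + 3*x - 12
print(newton_fd(f, 5))   # converges toward (sqrt(105)-3)/4 ~= 1.81173769149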
10-01-2023, 04:34 PM
Post: #2
RE: (HP71B) Newton's method
The previous HP-71B code had a hard-coded absolute epsilon (line 20).
It may be safer to use a relative epsilon instead, in case x±ε rounds back to x.

>20 E=(ABS(X)+(X=0))*1E-3

Let's try the example from the thread Namir, Byte and REXX, which also does 3 calls per iteration.

(09-10-2018 01:26 PM)Massimo Gnerucci Wrote: D:\ReginaREXX391w64>regina.exe root.rex

>2 DEF FNF(X)=2*X^2+3*X-12
>RUN
Guess, Accuracy = 5, 1e-6
5
2.69565
1.92511161178
1.81413944914
1.81173817456
1.81173769148
FNF calls = 11

True root = (sqrt(105)-3)/4 = 1.8117376914898995958...
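In Python terms, the relative-epsilon patch amounts to something like the sketch below (not the 71B code itself; the helper name is mine). The (x == 0) term keeps the epsilon nonzero when the guess is exactly 0, and a huge guess gets a proportionally huge epsilon, so x ± ε never rounds back to x.

# Sketch of the relative epsilon from line 20: E=(ABS(X)+(X=0))*1E-3
def rel_eps(x, scale=1e-3):
    # bool (x == 0) acts as 0 or 1 in arithmetic, mirroring the 71B comparison
    return (abs(x) + (x == 0)) * scale

print(rel_eps(5.0))     # 0.005
print(rel_eps(0.0))     # 0.001
print(rel_eps(1e200))   # 1e197: scales with |x|, so x +/- eps != x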
10-01-2023, 09:47 PM
(This post was last modified: 10-02-2023 02:59 PM by Albert Chan.)
Post: #3
RE: (HP71B) Newton's method
(09-23-2023 09:46 AM)carey Wrote: The simple looking transcendental equation, e^-x = x, is unsolvable in closed form (try it!)

Newton form of x = f(x):

0 = f(x) - x
x = x - (f - x) / (f' - 1) = (x*f' - f) / (f' - 1) = (x - f/f') / (1 - 1/f')

W(b=1) example, Newton iteration formula:

x = (x + 1) / (1 + e^x/b)

This setup is nice, with denominator > 0, if b > -1/e = -0.36787944...
Even crazy guesses work. For W(b=1): ∞ → 0 → 0.5 → 0.5663 → ...

>2 B=1 @ DEF FNF(X)=B*EXP(-X)-X ! root = W(B)
>RUN
Guess, Accuracy = B, 1E-6
1
.537882997929
.566987045872
.567143290413
.56714329041
FNF calls = 10

Interestingly, a crazy guess of 1e499 costs only 1 extra FNF call.
Note: this works only with the previous post's relative epsilon patch.
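A quick Python sketch of the same rearranged Newton iteration, x ← (x + 1) / (1 + e^x/b), which solves x·e^x = b, i.e. W(b). Function name and tolerance handling are mine; the iteration formula is the one above.

import math

# Sketch: Newton on 0 = b*exp(-x) - x, rearranged as x <- (x + 1) / (1 + exp(x)/b)
def lambert_w(b, x=1.0, tol=1e-12, max_iter=60):
    for _ in range(max_iter):
        x_new = (x + 1.0) / (1.0 + math.exp(x) / b)
        if abs(x_new - x) <= tol * max(abs(x_new), 1.0):
            return x_new
        x = x_new
    return x

print(lambert_w(1.0))            # ~0.567143290410, matching the run above
print(lambert_w(1.0, x=700.0))   # a crazy guess still walks back: ~0 -> 0.5 -> 0.5663 -> ...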
10-02-2023, 04:04 PM
(This post was last modified: 10-03-2023 12:29 AM by Albert Chan.)
Post: #4
RE: (HP71B) Newton's method
(10-01-2023 09:47 PM)Albert Chan Wrote: W(b=1) example, Newton iteration formula: x = (x + 1) / (1 + e^x/b)

Because of the e^x term in the denominator, this setup is not good for W(huge b).
Convergence is slow and inaccurate, even with a good guess.

>2 B=1E10 @ DEF FNF(X)=B*EXP(-X)-X ! root = W(B)
>RUN
Guess, Accuracy = LN(B), RES*1E-6
23.0258509299
12.0135445721
13.0133779957
14.0128061672
15.0110431755
16.005828047
16.9907276945
17.9478731089
18.8306249224
19.5363990109
19.9283981583
20.0242147693
20.0286862103
20.0286854427
FNF calls = 28

>X*EXP(X) ! expected to be close to B
10000000308.6

It is better to set up the Newton formula with a less curvy f (e^-x is as bad as e^x):

x * e^x = b
ln(x) + x = ln(b)
x = f(x) → f = ln(b) - ln(x) → f' = -1/x

Newton: x = (x - f/f') / (1 - 1/f') = (1 + ln(b/x)) / (1 + 1/x)

>2 B=1E10 @ DEF FNF(X)=LN(X/B)+X ! root = W(B)
>RUN
Guess, Accuracy = LN(B), RES*1E-6
23.0258509299
20.0197856819
20.0286854133
20.0286854133
FNF calls = 8

>X*EXP(X) ! very close to B. No other 12-digit X can get closer!
9999999999.95
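Here is the flattened iteration in the same Python sketch style (names mine): x ← (1 + ln(b/x)) / (1 + 1/x), started from the guess ln(b), intended for large b as in the run above.

import math

# Sketch: Newton on 0 = ln(b) - ln(x) - x, much less curvy than b*exp(-x) - x
def lambert_w_big(b, tol=1e-12, max_iter=60):
    x = math.log(b)                        # guess used in the post
    for _ in range(max_iter):
        x_new = (1.0 + math.log(b / x)) / (1.0 + 1.0 / x)
        if abs(x_new - x) <= tol * abs(x_new):
            return x_new
        x = x_new
    return x

w = lambert_w_big(1e10)
print(w)                   # ~20.0286854133
print(w * math.exp(w))     # back-substitution, ~1e10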
11-12-2024, 12:12 PM
(This post was last modified: 11-13-2024 08:19 PM by Albert Chan.)
Post: #5
RE: (HP71B) Newton's method
An example from Reducing the Number of Slope Evaluations for Newton's Method
I think the interval (x ± h), with h based on the previous correction, is better.
This version also shows details about when the old slope was reused (when the count increases by 3).

1 DESTROY ALL @ C=0
2 DEF FNF(X)=EXP(X)-3*X*X
3 INPUT "Guess, Accuracy = ";X,A @ D=X+(X=0)
10 FOR I=1 TO 30
20 H=ABS(.01*D)
30 F1=FNF(X-H) @ F2=FNF(X+H)
40 R=2*H/(F2-F1) ! reciprocal slope
50 D=(F1+F2)/2*R ! secant root = X-D
60 C=C+2 @ IF ABS(D)<H THEN D=D+FNF(X-D)*R @ C=C+1
70 X=X-D @ DISP X,D,C @ IF ABS(D)<A THEN EXIT I
80 NEXT I

Code: Guess, Accuracy = 3, 1e-6
Code: Guess, Accuracy = 4, 1e-6
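A rough Python transcription of the listing above, for anyone who wants to experiment without a 71B (function and variable names are mine; the logic follows the BASIC line by line).

import math

# Sketch: secant through f(x-h), f(x+h), with h taken from the previous correction;
# when |d| < h, the reciprocal slope r is reused for one extra Newton-style correction.
def solve(f, x, acc=1e-6, max_iter=30):
    calls = 0
    d = x if x != 0 else 1.0              # D=X+(X=0)
    for _ in range(max_iter):
        h = abs(0.01 * d)
        f1, f2 = f(x - h), f(x + h)
        calls += 2
        r = 2 * h / (f2 - f1)             # reciprocal slope
        d = (f1 + f2) / 2 * r             # secant root = x - d
        if abs(d) < h:                    # reuse old slope (count goes up by 3 total)
            d += f(x - d) * r
            calls += 1
        x -= d
        print(x, d, calls)
        if abs(d) < acc:
            break
    return x, calls

solve(lambda x: math.exp(x) - 3*x*x, 3, 1e-6)
solve(lambda x: math.exp(x) - 3*x*x, 4, 1e-6)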
11-13-2024, 01:12 PM
(This post was last modified: 11-13-2024 08:21 PM by Albert Chan.)
Post: #6
RE: (HP71B) Newton's method
A better way to reduce function calls is to straighten the curve.
From the previous post, e^x - 3x^2 = 0. Divide both sides by e^x:

>2 DEF FNF(X)= 1 - 3*X^2/EXP(X)

Code: Guess, Accuracy = 3, 1e-6
Code: Guess, Accuracy = 4, 1e-6
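With the Python solve() sketch from the previous post, this flattening is a one-line change (again, only a sketch, assuming that earlier helper is in scope).

import math

# Dividing by exp(x) leaves the roots unchanged but makes f much straighter.
f_flat = lambda x: 1 - 3*x*x / math.exp(x)
solve(f_flat, 3, 1e-6)   # assumes solve() from the post #5 sketch
solve(f_flat, 4, 1e-6)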
11-13-2024, 08:38 PM
(This post was last modified: 11-18-2024 02:28 PM by Albert Chan.)
Post: #7
RE: (HP71B) Newton's method
(11-12-2024 12:12 PM)Albert Chan Wrote: I think the interval (x ± h), with h based on the previous correction, is better.

It may be better to count the correction using the old slope (LINE 70) as a separate iteration.
In other words, the count increases by 1 if the old slope was reused.
For the same epsilon, this would likely quit earlier, not wasting iterations.

1 DESTROY ALL @ C=0
2 DEF FNF(X)=EXP(X)-3*X*X
3 INPUT "Guess, Accuracy = ";X,A @ D=X+(X=0)
10 FOR I=1 TO 30 @ H=ABS(.01*D)
20 F1=FNF(X-H) @ F2=FNF(X+H)
30 R=2*H/(F2-F1) ! reciprocal slope
40 D=(F1+F2)/2*R ! secant root = X-D
50 C=C+2 @ X=X-D @ DISP X,D,C @ IF ABS(D)<=A THEN EXIT I
60 IF ABS(D)>H THEN 80
70 D=FNF(X)*R @ C=C+1 @ X=X-D @ DISP X,D,C @ IF ABS(D)<=A THEN EXIT I
80 NEXT I

Code: Guess, Accuracy = 3, 1e-6
Code: Guess, Accuracy = 4, 1e-6

Previous post with straightened curve:

>2 DEF FNF(X)= 1 - 3*X^2/EXP(X)

Code: Guess, Accuracy = 3, 1e-6
Code: Guess, Accuracy = 4, 1e-6
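The same idea in Python sketch form (names mine): compared with the post #5 sketch, the old-slope correction is now its own cheap iteration, with the exit test applied after each update.

import math

# Sketch: like the post #5 solver, but the correction that reuses the old slope r
# counts as a separate iteration and gets its own exit test.
def solve2(f, x, acc=1e-6, max_iter=30):
    calls = 0
    d = x if x != 0 else 1.0
    for _ in range(max_iter):
        h = abs(0.01 * d)
        f1, f2 = f(x - h), f(x + h)
        calls += 2
        r = 2 * h / (f2 - f1)             # reciprocal slope
        d = (f1 + f2) / 2 * r
        x -= d
        print(x, d, calls)
        if abs(d) <= acc:
            break
        if abs(d) > h:
            continue
        d = f(x) * r                      # 1 extra call, reusing the old slope
        calls += 1
        x -= d
        print(x, d, calls)
        if abs(d) <= acc:
            break
    return x, calls

solve2(lambda x: math.exp(x) - 3*x*x, 3, 1e-6)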
11-18-2024, 01:22 PM
(This post was last modified: 11-21-2024 09:58 AM by Albert Chan.)
Post: #8
RE: (HP71B) Newton's method
Instead of calculating f(x±h), we can compute f(x+h) and f(x+h2), with x+h and x+h2 closer to the true root.
From the points {x, h=fnh(x)}, {x+h, h2=fnh(x+h)}, we solve for the secant root.
Note: h = fnh(x) is simply a scaled f(x), chosen so that x + h is closer to the true root.

x - ((x+h)-x) / (fnh(x+h)-fnh(x)) * h = x + (-h/(h2-h)) * h = x + s * h

The old s can be used to improve fnh(x), to get closer to the true root. (*)

lua> f = fn'x:exp(x)-3*x*x'
lua> f(3), f(4)
-6.914463076812332      6.598150033144236

The root of f lies in 3 .. 4. For h, we need to scale down f(x) and flip its direction.

10 DEF FNF(X)=EXP(X)-3*X*X
20 INPUT "X,S = ";X,S @ H=X @ C=0
30 FOR I=1 TO 30 @ H=H/X @ IF 1+H*H=1 THEN EXIT I
40 H=FNF(X)*S @ H2=FNF(X+H)*S @ C=C+2
50 IF H2 THEN S2=H/(H-H2) @ H=H*S2 @ S=S*S2
60 X=X+H @ DISP X,H,C @ NEXT I

Code: X,S = 3, -0.1
Code: X,S = 4, -0.1

Here is the previous post's 'flattened' FNF. We just need to switch direction.

>10 DEF FNF(x) = 1 - 3*x^2/exp(x)

Code: X,S = 3, -1
Code: X,S = 4, -1

(*) I had made a mistake replacing the old s with the new s (negated reciprocal slope).
It should have been s = product of the previous s's, and it should converge to one value.
Even if the secant root is off, the secant line slope should be close, and can be reused.

Comment: This simple algorithm's convergence rate is faster than Newton's!
For both examples, the rate of convergence approaches (1+√2) ≈ 2.41421.
Distributed per function call, we have √(1+√2) ≈ 1.55377.
This is close to Secant's rate, φ = (1+√5)/2 ≈ 1.61803.

Per-iteration convergence rate calculations: mpmath with mp.dps = 10000.
Numbers are the log ratio of |relative errors| between iterations.

lua> ok = 3.73307902863
lua> log(abs(1-3.77904928733/ok)) / log(abs(1-3/ok)) -- 1st col, 1st row
2.701295751437436

Code: f(x) = exp(x) - 3*x*x
Code: f(x) = 1 - 3*x*x/exp(x)
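A Python sketch of the listing above (names mine): h is a scaled f(x), and s accumulates the product of the secant correction factors s2, converging to the negated reciprocal slope at the root.

import math

# Sketch: step by h = s*f(x), refine with a secant through {x, x+h},
# then fold the correction factor s2 into both h and s.
def scaled_secant(f, x, s, max_iter=30):
    calls = 0
    h = x                                  # so the first relative-error test passes
    for _ in range(max_iter):
        if 1 + (h / x) ** 2 == 1:          # estimated relative error ~ machine eps
            break
        h = f(x) * s
        h2 = f(x + h) * s
        calls += 2
        if h2 != 0:
            s2 = h / (h - h2)
            h *= s2                        # refined step; secant root = x + h
            s *= s2                        # s tends to the negated reciprocal slope
        x += h
        print(x, h, calls)
    return x, calls

scaled_secant(lambda x: math.exp(x) - 3*x*x, 3, -0.1)
scaled_secant(lambda x: 1 - 3*x*x / math.exp(x), 3, -1.0)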
11-21-2024, 12:08 AM
(This post was last modified: Yesterday 03:08 PM by Albert Chan.)
Post: #9
RE: (HP71B) Newton's method
Same as the previous post, but letting the machine figure out the negated reciprocal slope S.
Since the exit condition is based on estimated relative error, the displayed H is replaced by H/X.

10 DEF FNF(X)=EXP(X)-3*X*X
20 INPUT "X1,X2 = ";X,X2 @ H=FNF(X) @ H2=FNF(X2) @ C=2
30 S=(X2-X)/(H-H2) @ H=H*S @ X=X+H @ H=H/X @ DISP X,H,C
40 FOR I=1 TO 30 @ IF 1+H*H=1 THEN EXIT I
50 H=FNF(X)*S @ S2=H-FNF(X+H)*S @ C=C+2
60 IF S2 THEN S2=H/S2 @ H=H*S2 @ S=S*S2
70 X=X+H @ H=H/X @ DISP X,H,C
80 NEXT I

Code: >run

I name this the Aitken solver, because it is based on the same principle:

{X1, X2 = X1+S*f(X1), X3 = X2+S*f(X2)} → Aitken for the converged X

S = negated reciprocal slope of the previous 2 points {x1,f(x1)}, {x2,f(x2)}
X1 is the secant root of the previous 2 points
X2 is Newton of {X1,f(X1)}, but using (-1/S) for f'(X1)
X3 is Newton of {X2,f(X2)}, but using (-1/S) for f'(X2)
X = aitken(X1,X2,X3) = secant(X1,f(X1), X2,f(X2)). Let X1=X, rinse and repeat.

It is easy to see why we need to multiply by S2 each time, to get the right S.
From X2 = X + H*S, the 2 versions below are equivalent.

< 30 S=(X2-X)/(H-H2) @ ...
> 30 S=(X2-X)/H @ S2=H/(H-H2) @ S=S*S2 @ ...

This is very close to Secant's method:

10 DEF FNF(X)=EXP(X)-3*X*X
20 INPUT "X1,X2 = ";X,X2 @ Y=FNF(X) @ C=1
30 FOR I=1 TO 30 @ Y2=FNF(X2) @ C=C+1
40 H=-(X2-X)/(Y2-Y)*Y @ X=X+H @ H=H/X @ DISP X,H,C
50 VARSWAP X,X2 @ Y=Y2 @ IF 1+H*H=1 THEN EXIT I
60 NEXT I

Code: >run

Note: Secant's method is not quite quadratic convergence, φ ≈ 1.61804.
If we group iterations in f() pairs, like Aitken, φ² = φ+1 ≈ 2.61804.
Both use 1+H*H=1; I should have adjusted the exit condition to match.
Also, the order of X1,X2 matters here. Ideally, we want X2 closer to the root.
The Aitken solver is more Newton-like; the {X1,X2} arguments are just to set up {X,S}.
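A Python sketch of the 'Aitken solver' above (names mine): S is seeded from a secant through the two starting points, then the loop runs exactly as in the post #8 scheme; the printed h is the relative correction H/X, matching the 71B display.

import math

# Sketch: seed s (negated reciprocal slope) from {x1,f(x1)}, {x2,f(x2)},
# then iterate as in post #8, printing x, relative correction, call count.
def aitken_solver(f, x1, x2, max_iter=30):
    h, h2 = f(x1), f(x2)
    calls = 2
    s = (x2 - x1) / (h - h2)              # negated reciprocal slope
    x = x1 + h * s                        # secant root of the two starting points
    h = h * s / x                         # relative correction
    print(x, h, calls)
    for _ in range(max_iter):
        if 1 + h * h == 1:
            break
        h = f(x) * s
        s2 = h - f(x + h) * s
        calls += 2
        if s2 != 0:
            s2 = h / s2
            h *= s2
            s *= s2
        x += h
        h /= x
        print(x, h, calls)
    return x, calls

aitken_solver(lambda x: math.exp(x) - 3*x*x, 3, 4)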