(HP71B) Newton's method
10-01-2023, 03:59 PM
Post: #1
(HP71B) Newton's method
Inspired by Csaba Tizedes' videos about Newton's method:
(09-23-2023 05:19 PM)Csaba Tizedes Wrote:  https://www.youtube.com/shorts/1R3IMq227dM
(09-23-2023 05:21 PM)Csaba Tizedes Wrote:  Same, without d/dx()

https://www.youtube.com/shorts/-8pGxTXSk40

f'(x) ≈ (f(x+ε) - f(x-ε)) / (2ε)      → f/f' needs 3 function calls per iteration.

We may reduce this to about 2 calls per iteration.
Here is the same example redone on the HP71B, run to full convergence.

Code:
1 DESTROY ALL @ C=0
2 DEF FNF(X)=X^4-4*X^3
3 INPUT "Guess, Accuracy = ";X,A
10 FOR I=1 TO 30 @ DISP X
20 E=.00001
30 F1=FNF(X-E) @ F2=FNF(X+E)
40 R=2*E/(F2-F1) ! reciprocal slope
50 D=(F1+F2)/2*R ! secant root = X-D
60 C=C+2 @ IF ABS(D)<E THEN D=D+FNF(X-D)*R @ C=C+1
70 X=X-D @ IF ABS(D)<A THEN EXIT I
80 NEXT I
90 DISP X @ DISP "FNF calls =";C

>RUN
Guess, Accuracy = 10, 1e-6
 10
 7.85714477037
 6.29727032561
 5.20041359757
 4.4911553721
 4.12133235435
 4.00984649554
 4.00007201239
 4.00000000386
 4
FNF calls = 19

For comparison, this run uses the video's formula for Newton's correction.
There is no gain in rate of convergence ... just more function calls.

>50 D = FNF(X)*R
>60 C = C+3
>RUN
Guess, Accuracy = 10, 1e-6
 10
 7.8571447704
 6.29727032563
 5.20041359761
 4.49115522316
 4.12133220071
 4.00984642245
 4.0000719974
 4.00000000388
 4
FNF calls = 27
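
For reference, here is a rough Lua sketch of the same scheme (the name newton2 and the scaffolding are mine, not part of the HP71B program): two FNF calls per iteration, plus one extra call that reuses the old reciprocal slope for a final polish.

Code:
-- Sketch only: central-difference Newton, about 2 f calls per iteration.
local function newton2(f, x, acc)
  local calls, eps = 0, 1e-5
  for i = 1, 30 do
    local f1, f2 = f(x - eps), f(x + eps)
    local r = 2 * eps / (f2 - f1)      -- reciprocal slope
    local d = (f1 + f2) / 2 * r        -- secant root = x - d
    calls = calls + 2
    if math.abs(d) < eps then          -- polish once, reusing the old slope
      d = d + f(x - d) * r
      calls = calls + 1
    end
    x = x - d
    if math.abs(d) < acc then break end
  end
  return x, calls
end

print(newton2(function(x) return x^4 - 4*x^3 end, 10, 1e-6))
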
10-01-2023, 04:34 PM
Post: #2
RE: (HP71B) Newton's method
The previous HP71B code had a hard-coded absolute epsilon (line 20).
It may be safer to use a relative epsilon instead, in case x±ε rounds back to x:

>20 E=(ABS(X)+(X=0))*1E-3
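
As a quick double-precision illustration of why the absolute step can vanish (the numbers are mine; the HP71B shows the same effect at its 12 digits):

Code:
-- An absolute step below half an ulp of x is lost entirely, so
-- f(x+eps) - f(x-eps) = 0 and the slope estimate blows up.
local x, eps = 1e15, 1e-5
print(x + eps == x)                  -- true: absolute step rounds away
print(x + math.abs(x)*1e-3 == x)     -- false: relative step survives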

Let's try the example from the thread Namir, Byte and REXX, which also uses 3 calls per iteration.
(09-10-2018 01:26 PM)Massimo Gnerucci Wrote:  D:\ReginaREXX391w64>regina.exe root.rex
Enter expression (of variable X)
2 * X**2 + 3 * X - 12
Enter guess and accuracy
5 0.001
guess = 5
guess = 2.695652174
guess = 1.925113151
guess = 1.814140206
guess = 1.811738817
Root   = 1.811737692
Number of iterations = 5

>2 DEF FNF(X)=2*X^2+3*X-12
>RUN
Guess, Accuracy = 5, 1e-6
 5
 2.69565
 1.92511161178
 1.81413944914
 1.81173817456
 1.81173769148
FNF calls = 11

True root = (sqrt(105)-3)/4 = 1.8117376914898995958...
10-01-2023, 09:47 PM (This post was last modified: 10-02-2023 02:59 PM by Albert Chan.)
Post: #3
RE: (HP71B) Newton's method
(09-23-2023 09:46 AM)carey Wrote:  The simple looking transcendental equation, e^-x = x, is unsolvable in closed form (try it!)

Newton form of x = f(x):

0 = (f - x)
x = x - (f - x) / (f' - 1)  =  (x*f' - f) / (f' - 1)  =  (x - f/f') / (1 - 1/f')

W(b=1) example, Newton iteration formula:  x = (x + 1) / (1 + e^x/b)

This setup is nice, with the denominator > 0, if b > -1/e = -0.36787944...
Even a crazy guess works. For W(b=1): ∞ --> 0 --> 0.5 --> 0.5663 --> ...

> 2 B=1 @ DEF FNF(X)=B*EXP(-X)-X ! root = W(B)
> RUN
Guess, Accuracy = B, 1E-6
 1
 .537882997929
 .566987045872
 .567143290413
 .56714329041
FNF calls = 10

Interestingly, a crazy guess of 1e499 costs only 1 extra FNF call.
Note: this works only with the previous post's relative-epsilon patch.
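
A minimal Lua sketch of the same iteration (the name lambert_w and the loop cap are mine); for b > -1/e the denominator stays positive, so the iteration is well behaved:

Code:
-- Newton iteration x = (x+1)/(1 + e^x/b) for the root of b*e^-x - x, i.e. W(b)
local function lambert_w(b, x)
  x = x or 0                         -- any reasonable finite guess
  for i = 1, 60 do
    local x2 = (x + 1) / (1 + math.exp(x) / b)
    if math.abs(x2 - x) <= 1e-15 * math.abs(x2) then return x2 end
    x = x2
  end
  return x
end

print(lambert_w(1))                  -- 0.5671432904097838... = W(1)
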
10-02-2023, 04:04 PM (This post was last modified: 10-03-2023 12:29 AM by Albert Chan.)
Post: #4
RE: (HP71B) Newton's method
(10-01-2023 09:47 PM)Albert Chan Wrote:  W(b=1) example, Newton iteration formula:  x = (x + 1) / (1 + e^x/b)

Because of the e^x term in the denominator, this setup is not good for W(huge b).
Convergence is slow and inaccurate, even with a good guess.

>2 B=1E10 @ DEF FNF(X)=B*EXP(-X)-X ! root = W(B)
>RUN
Guess, Accuracy = LN(B), RES*1E-6
 23.0258509299
 12.0135445721
 13.0133779957
 14.0128061672
 15.0110431755
 16.005828047
 16.9907276945
 17.9478731089
 18.8306249224
 19.5363990109
 19.9283981583
 20.0242147693
 20.0286862103
 20.0286854427
FNF calls = 28

>X*EXP(X) ! expected to be close to B
 10000000308.6



It is better to set up the Newton formula with a less curvy f (e^-x is as bad as e^x)

x * e^x = b
ln(x) + x = ln(b)
x = f(x)      → f = ln(b) - ln(x)      → f' = -1/x

Newton: x = (x - f/f') / (1 - 1/f') = (1 + ln(b/x)) / (1 + 1/x)

>2 B=1E10 @ DEF FNF(X)=LN(X/B)+X ! root = W(B)
>RUN
Guess, Accuracy = LN(B), RES*1E-6
 23.0258509299
 20.0197856819
 20.0286854133
 20.0286854133
FNF calls = 8

>X*EXP(X) ! very close to B. No other 12-digit X can get closer!
 9999999999.95
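
The same reformulation as a Lua sketch (names are mine; intended for large b, with the LN(B) starting guess used above):

Code:
-- Newton on ln(x) + x = ln(b):  x = (1 + ln(b/x)) / (1 + 1/x)
local function lambert_w_big(b, x)
  x = x or math.log(b)               -- guess = LN(B), as in the run above
  for i = 1, 60 do
    local x2 = (1 + math.log(b / x)) / (1 + 1 / x)
    if math.abs(x2 - x) <= 1e-15 * math.abs(x2) then return x2 end
    x = x2
  end
  return x
end

local w = lambert_w_big(1e10)
print(w, w * math.exp(w))            -- w*e^w should come back very close to b
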
11-12-2024, 12:12 PM (This post was last modified: 11-13-2024 08:19 PM by Albert Chan.)
Post: #5
RE: (HP71B) Newton's method
An example from Reducing the Number of Slope Evaluations for Newton's Method

I think an interval (x ± h), with h based on the previous correction, is better.
This version also shows when the old slope was reused (when the count increases by 3).

1 DESTROY ALL @ C=0
2 DEF FNF(X)=EXP(X)-3*X*X
3 INPUT "Guess, Accuracy = ";X,A @ D=X+(X=0)
10 FOR I=1 TO 30
20 H=ABS(.01*D)
30 F1=FNF(X-H) @ F2=FNF(X+H)
40 R=2*H/(F2-F1) ! reciprocal slope
50 D=(F1+F2)/2*R ! secant root = X-D
60 C=C+2 @ IF ABS(D)<H THEN D=D+FNF(X-D)*R @ C=C+1
70 X=X-D @ DISP X,D,C @ IF ABS(D)<A THEN EXIT I
80 NEXT I

Code:
Guess, Accuracy = 3, 1e-6
 6.30761737624       -3.30761737624        2
 5.46675063672        .840866739517        4
 4.74557560076        .72117503596         6
 4.19690889791        .54866670285         8
 3.86665484877        .330254049138        10
 3.74748651079        .119168337984        12
 3.73326631876        1.42201920313E-2     14
 3.73307904233        1.8727643326E-4      16
 3.73307902864        1.36920386696E-8     19

Code:
Guess, Accuracy = 4, 1e-6
 3.78319350935        .216806490652        2
 3.7352742822         4.79192271496E-2     4
 3.7330832523         2.19102989768E-3     6
 3.73307902862        4.22367603362E-6     9
 3.73307902864       -1.54581287067E-11    12
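
A rough Lua transcription of this version (the name solve_h is mine); the step h tracks the previous correction, and the count rises by 3 whenever the old slope is reused:

Code:
-- Central-difference Newton; h comes from the previous correction d.
local function solve_h(f, x, acc)
  local d, calls = (x ~= 0) and x or 1, 0   -- D = X + (X=0) in the BASIC
  for i = 1, 30 do
    local h = math.abs(0.01 * d)
    local f1, f2 = f(x - h), f(x + h)
    local r = 2 * h / (f2 - f1)             -- reciprocal slope
    d = (f1 + f2) / 2 * r                   -- secant root = x - d
    calls = calls + 2
    if math.abs(d) < h then                 -- reuse old slope for one more step
      d = d + f(x - d) * r
      calls = calls + 1
    end
    x = x - d
    if math.abs(d) < acc then break end
  end
  return x, calls
end

print(solve_h(function(x) return math.exp(x) - 3*x*x end, 3, 1e-6))
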
11-13-2024, 01:12 PM (This post was last modified: 11-13-2024 08:21 PM by Albert Chan.)
Post: #6
RE: (HP71B) Newton's method
A better way to reduce function calls is to straighten the curve.
From the previous post, e^x - 3x^2 = 0. Divide both sides by e^x:

>2 DEF FNF(X)= 1 - 3*X^2/EXP(X)

Code:
Guess, Accuracy = 3, 1e-6
 3.76823887888       -.768238878882        2
 3.73298203526        3.52568436176E-2     4
 3.73307902863       -9.69933705838E-5     7
 3.73307902863        0                    10

Code:
Guess, Accuracy = 4, 1e-6
 3.7252587096         .274741290395        2
 3.73307492918       -7.81621958153E-3     4
 3.73307902863       -4.09945495303E-6     7
 3.73307902863        0                    10
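
Using the hypothetical solve_h sketch from the previous post, the straightened curve is just a different f:

Code:
-- Same solver, flatter curve: e^x - 3x^2 = 0 divided through by e^x.
print(solve_h(function(x) return 1 - 3*x*x/math.exp(x) end, 3, 1e-6))
print(solve_h(function(x) return 1 - 3*x*x/math.exp(x) end, 4, 1e-6))
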
11-13-2024, 08:38 PM (This post was last modified: 11-18-2024 02:28 PM by Albert Chan.)
Post: #7
RE: (HP71B) Newton's method
(11-12-2024 12:12 PM)Albert Chan Wrote:  I think an interval (x ± h), with h based on the previous correction, is better.
This version also shows when the old slope was reused (when the count increases by 3).

It may be better to count the correction using the old slope (LINE 70) as a separate iteration.
In other words, the count increases by 1 if the old slope was reused.

For the same epsilon, this will likely quit earlier, not wasting iterations.

1 DESTROY ALL @ C=0
2 DEF FNF(X)=EXP(X)-3*X*X
3 INPUT "Guess, Accuracy = ";X,A @ D=X+(X=0)
10 FOR I=1 TO 30 @ H=ABS(.01*D)
20 F1=FNF(X-H) @ F2=FNF(X+H)
30 R=2*H/(F2-F1) ! reciprocal slope
40 D=(F1+F2)/2*R ! secant root = X-D
50 C=C+2 @ X=X-D @ DISP X,D,C @ IF ABS(D)<=A THEN EXIT I
60 IF ABS(D)>H THEN 80
70 D=FNF(X)*R @ C=C+1 @ X=X-D @ DISP X,D,C @ IF ABS(D)<=A THEN EXIT I
80 NEXT I

Code:
Guess, Accuracy = 3, 1e-6
 6.30761737624       -3.30761737624        2
 5.46675063672        .840866739517        4
 4.74557560076        .72117503596         6
 4.19690889791        .54866670285         8
 3.86665484877        .330254049138        10
 3.74748651079        .119168337984        12
 3.73326631876        1.42201920313E-2     14
 3.73307904233        1.8727643326E-4      16
 3.73307902863        1.36971908949E-8     18
Code:
Guess, Accuracy = 4, 1e-6
 3.78319350935        .216806490652        2
 3.7352742822         4.79192271496E-2     4
 3.7330832523         2.19102989768E-3     6
 3.73307902821        4.22409335906E-6     8
 3.73307902863       -4.17325436225E-10    9



The previous post, with the straightened curve:

>2 DEF FNF(X)= 1 - 3*X^2/EXP(X)

Code:
Guess, Accuracy = 3, 1e-6
 3.76823887888       -.768238878882        2
 3.73298203526        3.52568436176E-2     4
 3.73307903755       -9.70022880736E-5     6
 3.73307902863        8.91748978705E-9     7
Code:
Guess, Accuracy = 4, 1e-6
 3.7252587096         .274741290395        2
 3.73307492918       -7.81621958153E-3     4
 3.7330790291        -4.0999245281E-6      6
 3.73307902863        4.69575066737E-10    7
11-18-2024, 01:22 PM (This post was last modified: 11-21-2024 09:58 AM by Albert Chan.)
Post: #8
RE: (HP71B) Newton's method
Instead of calculating f(x±h), we can evaluate f(x+h) and f(x+h2), with x+h and x+h2 closer to the true root.

From the points {x, h=fnh(x)}, {x+h, h2=fnh(x+h)}, we solve for the secant root.
Note: h = fnh(x) is simply a scaled f(x), so that x + h is closer to the true root.

x - ((x+h)-x) / (fnh(x+h)-fnh(x)) * h = x + (-h/(h2-h)) * h = x + s * h

The old s can be used to improve fnh(x), to get closer to the true root (*)

lua> f = fn'x:exp(x)-3*x*x'
lua> f(3), f(4)
-6.914463076812332      6.598150033144236

The root of f lies between 3 and 4. For h, we need to scale down f(x) and flip its direction.

10 DEF FNF(X)=EXP(X)-3*X*X
20 INPUT "X,S = ";X,S @ H=X @ C=0
30 FOR I=1 TO 30 @ H=H/X @ IF 1+H*H=1 THEN EXIT I
40 H=FNF(X)*S @ H2=FNF(X+H)*S @ C=C+2
50 IF H2 THEN S2=H/(H-H2) @ H=H*S2 @ S=S*S2
60 X=X+H @ DISP X,H,C @ NEXT I

Code:
X,S = 3, -0.1
 3.77904928733        .77904928733         2
 3.73056592227       -4.84833650565E-2     4
 3.73307897801        2.51305573645E-3     6
 3.73307902862        5.06081304277E-8     8
Code:
X,S = 4, -0.1
 3.63243460816       -.367565391835        2
 3.7332302642         .100795656045        4
 3.73307902653       -1.5123767276E-4      6
 3.73307902862        2.09164536676E-9     8

Here is the previous post's 'flattened' FNF. We just need to switch direction.

>10 DEF FNF(x) = 1 - 3*x^2/exp(x)

Code:
X,S = 3, -1
 3.73923735544        .739237355436        2
 3.73307901812       -6.15833732419E-3     4
 3.73307902863        1.05123533817E-8     6
Code:
X,S = 4, -1
 3.7289957962        -.271004203799        2
 3.73307908199        4.08328579414E-3     4
 3.73307902862       -5.33668310581E-8     6
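
A rough Lua transcription of the listing above (the name scaled_secant is mine); s starts as a user-supplied negated reciprocal slope estimate and is multiplied by the correction factor s2 every iteration:

Code:
-- Secant steps on scaled f: x(new) = x + h, where h = f(x)*s corrected by s2.
local function scaled_secant(f, x, s)
  local h, calls = x, 0
  for i = 1, 30 do
    h = h / x
    if 1 + h * h == 1 then break end        -- relative correction negligible
    h = f(x) * s                            -- scaled f(x), step toward the root
    local h2 = f(x + h) * s
    calls = calls + 2
    if h2 ~= 0 then
      local s2 = h / (h - h2)               -- secant correction factor
      h = h * s2
      s = s * s2                            -- accumulate s2 into the scale s
    end
    x = x + h
  end
  return x, calls
end

print(scaled_secant(function(x) return math.exp(x) - 3*x*x end, 3, -0.1))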

(*) I had made a mistake replacing the old s with the new s (negated reciprocal slope).

It should have been s = the product of the previous s's, which should converge to a single value.
Even if the secant root is off, the secant line's slope should be close, and can be reused.

Comment: this simple algorithm's convergence rate is faster than Newton's!
For both examples, the rate of convergence approaches (1+√2) ≈ 2.41421
Distributed per function call, we have √(1+√2) ≈ 1.55377
This is close to Secant's rate, φ = (1+√5)/2 ≈ 1.61803

Per-iteration convergence rate calculations: mpmath with mp.dps = 10000.
Numbers are the log ratio of |relative errors| between iterations.

lua> ok = 3.73307902863
lua> log(abs(1-3.77904928733/ok)) / log(abs(1-3/ok)) -- 1st col, 1st row
2.701295751437436

Code:
f(x) = exp(x) - 3*x*x           f(x) = 1 - 3*x*x/exp(x)
x,s = 3,-0.1  x,s = 4,-0.1      x,s = 3,-1    x,s = 4,-1
2.70130       1.36973           3.93626       2.58454
1.66101       2.79900           3.07258       2.64935
2.48047       2.10556           2.45092       2.51723
2.26705       2.36375           2.45940       2.45177
2.38094       2.37393           2.42749       2.43010
2.39471       2.40055           2.42055       2.42066
2.40703       2.40795           2.41668       2.41689
2.41106       2.41171           2.41526       2.41532
2.41294       2.41316           2.41464       2.41467
11-21-2024, 12:08 AM (This post was last modified: Yesterday 03:08 PM by Albert Chan.)
Post: #9
RE: (HP71B) Newton's method
Same as the previous post, but with the machine figuring out the negated reciprocal slope S.
Since the exit condition is based on estimated relative error, the output H is replaced by H/X.

10 DEF FNF(X)=EXP(X)-3*X*X
20 INPUT "X1,X2 = ";X,X2 @ H=FNF(X) @ H2=FNF(X2) @ C=2
30 S=(X2-X)/(H-H2) @ H=H*S @ X=X+H @ H=H/X @ DISP X,H,C
40 FOR I=1 TO 30 @ IF 1+H*H=1 THEN EXIT I
50 H=FNF(X)*S @ S2=H-FNF(X+H)*S @ C=C+2
60 IF S2 THEN S2=H/S2 @ H=H*S2 @ S=S*S2
70 X=X+H @ H=H/X @ DISP X,H,C
80 NEXT I

Code:
>run
X1,X2 = -1,0
-.275321257596       -2.63212055884        2
-.46545523822         .408490366015        4
-.458955061365       -1.41629919827E-2     6
-.458962267537        1.57010121882E-5     8
-.458962267537        0                    10
>run
X1,X2 = 1,2
 .934926430466       -6.96028772038E-2     2
 .910115652211       -2.72611268631E-2     4
 .910007572616       -1.18767797141E-4     6
 .910007572492       -1.36722374521E-10    8
>run
X1,X2 = 3,4
 3.51170436248        .145713963836        2
 3.72474392675        5.71957612275E-2     4
 3.73306774577        2.22975300337E-3     6
 3.73307902864        3.02240229434E-6     8
 3.73307902863       -1.77449684916E-12    10

I name this the Aitken solver, because it is based on the same principle.

{X1, X2 = X1+S*f(X1), X3 = X2+S*f(X2)} --> Aitken for converged X

S = negated reciprocal slope of the previous 2 points {x1,f(x1)}, {x2,f(x2)}

X1 is the secant root of the previous 2 points
X2 is a Newton step from {X1,f(X1)}, but using (-1/S) for f'(X1)
X3 is a Newton step from {X2,f(X2)}, but using (-1/S) for f'(X2)

X = aitken(X1,X2,X3) = secant(X1,f(X1), X2,f(X2)). Let X1=X, rinse and repeat.

It is easy to see why we need to multiply by S2 each time, to get the right S.
From X2 = X + H*S, the 2 versions below are equivalent.

< 30 S=(X2-X)/(H-H2) @ ...
> 30 S=(X2-X)/H @ S2=H/(H-H2) @ S=S*S2 @ ...
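
Here is an approximate Lua version of the Aitken solver above (names are mine); the two starting arguments only set up the initial {X, S}, after which each pass costs two f calls:

Code:
-- Aitken-style solver: secant start, then reuse/refine the slope factor S.
local function aitken_solve(f, x1, x2)
  local h, h2 = f(x1), f(x2)
  local s = (x2 - x1) / (h - h2)            -- negated reciprocal slope
  local x = x1 + h * s                      -- secant root of the 2 start points
  local calls = 2
  h = h * s / x                             -- estimated relative error
  for i = 1, 30 do
    if 1 + h * h == 1 then break end
    h = f(x) * s
    local s2 = h - f(x + h) * s
    calls = calls + 2
    if s2 ~= 0 then
      s2 = h / s2
      h = h * s2
      s = s * s2                            -- accumulate slope corrections
    end
    x = x + h
    h = h / x
  end
  return x, calls
end

print(aitken_solve(function(x) return math.exp(x) - 3*x*x end, 3, 4))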



This is very close to Secant's method

10 DEF FNF(X)=EXP(X)-3*X*X
20 INPUT "X1,X2 = ";X,X2 @ Y=FNF(X) @ C=1
30 FOR I=1 TO 30 @ Y2=FNF(X2) @ C=C+1
40 H=-(X2-X)/(Y2-Y)*Y @ X=X+H @ H=H/X @ DISP X,H,C
50 VARSWAP X,X2 @ Y=Y2 @ IF 1+H*H=1 THEN EXIT I
60 NEXT I

Code:
>run
X1,X2 = -1,0
-.275321257596       -2.63212055884        2
-.588196207259        1                    3
-.43936481325         .373365255266        4
-.457108107413       -.286777017778        5
-.458991546782        4.27605555479E-2     6
-.45896222444         4.03980312106E-3     7
-.458962267536       -6.37944512844E-5     8
-.458962267537        9.39007171672E-8     9
>run
X1,X2 = 1,2
 .934926430466       -6.96028772038E-2     2
 .91725948208        -1.18040809506        3
 .910111544594       -2.72657632129E-2     4
 .910008015202       -7.96857473484E-3     5
 .910007572514       -1.14254082314E-4     6
 .910007572487       -4.86495824593E-7     7
>run
X1,X2 = 3,4
 3.51170436248        .145713963836        2
 3.68065825616       -8.67621282961E-2     3
 3.74559850253        6.24450644924E-2     4
 3.73246065052        1.38788855967E-2     5
 3.73307193242       -3.35556623094E-3     6
 3.73307903269        1.65649364591E-4     7
 3.73307902864        1.9009014104E-6      8
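
For comparison, a plain secant method sketch in Lua (illustrative names, same 1+H*H=1 exit test as above):

Code:
-- Classic secant method; the newest point is kept in x2.
local function secant_solve(f, x1, x2)
  local x, y, calls = x1, f(x1), 1
  for i = 1, 30 do
    local y2 = f(x2)
    calls = calls + 1
    if y2 == y then break end               -- guard against a zero denominator
    local h = -(x2 - x) / (y2 - y) * y      -- step from x to the secant root
    x = x + h
    h = h / x                               -- estimated relative error
    x, x2, y = x2, x, y2                    -- newest point becomes x2
    if 1 + h * h == 1 then break end
  end
  return x2, calls
end

print(secant_solve(function(x) return math.exp(x) - 3*x*x end, 3, 4))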

Note:
Secant's method is not quite quadratic convergence: φ ≈ 1.61803
If we group iterations in f() pairs, like Aitken, φ² = φ+1 ≈ 2.61803.

Both use 1+H*H=1. I should have adjusted the exit condition to match.
Also, the order of X1,X2 matters here. Ideally, we want X2 closer to the root.

The Aitken solver is more Newton-like. The {X1,X2} arguments are just to set up {X,S}.