Automatic differentiation using dual numbers
06-18-2022, 02:09 PM (This post was last modified: 06-21-2022 05:20 AM by Thomas Klemm.)
Post: #1
Automatic differentiation using dual numbers
TLDR;
A dual number has the form \(x + {x}' \varepsilon\) where \(x, {x}' \in \mathbb{R}\) and \(\varepsilon \ne 0\), but \(\varepsilon^2 = 0\).
Think of \(\varepsilon\) similar to the imaginary unit \(i\) but with a different rule.
They can be used to calculate derivatives.
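For example, squaring \(x + \varepsilon\) shows how the dual part picks up the derivative of \(x^2\):

\(
(x + \varepsilon)^2 = x^2 + 2x\varepsilon + \varepsilon^2 = x^2 + 2x\varepsilon
\)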

Motivation

An older thread about calculators and numerical differentiation, and an even older one about what the first CAS pocket calculator was, made me wonder whether there was ever a calculator that used automatic differentiation with dual numbers.

So I thought I'd give it a try by abusing the complex numbers of the HP-42S.
The advantage is that addition and subtraction as well as multiplication by a constant are already handled.
Complex numbers can also be stored and recalled.

Most of the functions simply unwrap the complex number, evaluate the function \(f\) and its derivative \(f'\), and rewrap it into a complex number:
Code:
01▸LBL "f"          # z
02 COMPLEX          # y x
03 X<>Y             # x y
04 f                # f(x) y
05 X<>Y             # y f(x)
06 LASTX            # x y f(x)
07 f'               # f'(x) y f(x)
08 ×                # f'(x)*y f(x)
09 COMPLEX          # f(z)
10 RTN              #
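
This is just the first-order Taylor expansion written with dual numbers: the dual part \(y\) of the argument is multiplied by \(f'(x)\):

\(
f(x + y\varepsilon) = f(x) + f'(x)\,y\,\varepsilon
\)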

But then there are more complicated cases like * and / or Y↑X.
In the simplest case, the stack is retained except for the T register.
But in other cases, the stack is lost entirely.
Also LASTX isn't handled.

We end up with a somewhat crippled RPN calculator, but programming a function is still easy.
Instead of saving the entire stack before entering and restoring it on exit, I suggest storing the values in variables only when needed later.

Only Y↑X needs two additional registers. All the other operations use only the stack.
Also remember that this function works only with a constant (i.e. non-complex) exponent.
Otherwise, use LN and E↑X instead.
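
For reference, these are the rules implemented by the *, / and Y↑X routines below; they all follow from \(\varepsilon^2 = 0\) (with a constant exponent \(n\)):

\(
\begin{align}
(x + y\varepsilon)(u + v\varepsilon) &= xu + (xv + uy)\varepsilon \\
\frac{x + y\varepsilon}{u + v\varepsilon} &= \frac{x}{u} + \left(\frac{y}{u} - \frac{xv}{u^2}\right)\varepsilon \\
(x + y\varepsilon)^n &= x^n + n\,x^{n-1}y\,\varepsilon \\
\end{align}
\)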

Only after writing the program for the HP-42S did I notice that Ángel Martin had already written a module for the HP-41C.
I haven't tried it yet; I've only read the excellent documentation.
Thus, I strongly recommend giving it a try.

Custom Menu

These are my assignments:

LN    E↑X   SINH  ASINH COSH  ACOSH
*     /     1/X   Y↑X   SQRT  X↑2
SIN   ASIN  COS   ACOS  TAN   ATAN


Program

Code:
00 { 468-Byte Prgm }
01▸LBL "DUAL"       #
02▸LBL "*"          # w z
03 COMPLEX          # v u z
04 X<> ST Z         # z u v
05 COMPLEX          # y x u v
06 RCL× ST Z        # u*y x u v
07 R↓               # x u v u*y
08 STO× ST Z        # x u x*v u*y
09 ×                # x*u x*v u*y
10 X<> ST Z         # u*y x*v x*u
11 +                # x*v+u*y x*u
12 COMPLEX          # z*w
13 RTN              #
14▸LBL "/"          # w z
15 COMPLEX          # v u z
16 X<>Y             # u v z
17 STO÷ ST Z        # u v z/u
18 ÷                # v/u z/u
19 X<>Y             # z/u v/u
20 COMPLEX          # y/u x/u v/u
21 RCL ST Z         # v/u y/u x/u
22 RCL× ST Z        # xv/u^2 y/u x/u
23 -                # y/u-xv/u^2 x/u
24 COMPLEX          # z/w
25 RTN              #
26▸LBL "1/X"        # z
27 COMPLEX          # y x
28 X<>Y             # x y
29 1/X              # 1/x y
30 X<>Y             # y 1/x
31 LASTX            # x y 1/x
32 X↑2              # x^2 y 1/x
33 ÷                # y/x^2 1/x
34 +/-              # -y/x^2 1/x
35 COMPLEX          # 1/z
36 RTN              #
37▸LBL "X↑2"        # z
38 COMPLEX          # y x
39 X<>Y             # x y
40 X↑2              # x^2 y
41 X<>Y             # y x^2
42 RCL× ST L        # x*y x^2
43 STO+ ST X        # 2*x*y x^2
44 COMPLEX          # z^2
45 RTN              #
46▸LBL "SQRT"       # z
47 COMPLEX          # y x
48 X<>Y             # x y
49 SQRT             # sqrt(x) y
50 STO÷ ST Y        # sqrt(x) y/sqrt(x)
51 X<>Y             # y/sqrt(x) sqrt(x)
52 2                # 2 y/sqrt(x) sqrt(x)
53 ÷                # y/(2*sqrt(x)) sqrt(x)
54 COMPLEX          # sqrt(z)
55 RTN              #
56▸LBL "Y↑X"        # n z
57 STO 00           # n -> R 00
58 R↓               # z
59 COMPLEX          # y x
60 X<>Y             # x y
61 STO 01           # x -> R 01
62 RCL 00           # n x y
63 STO× ST Z        # n x n*y
64 1                # 1 n x n*y
65 -                # n-1 x n*y
66 Y↑X              # x^{n-1} n*y
67 STO× ST Y        # x^{n-1} n*x^{n-1}*y
68 RCL× 01          # x^n n*x^{n-1}*y
69 X<>Y             # n*x^{n-1}*y x^n
70 COMPLEX          # z^n
71 RTN              #
72▸LBL "E↑X"        # z
73 COMPLEX          # y x
74 X<>Y             # x y
75 E↑X              # e^x y
76 STO× ST Y        # e^x e^x*y
77 X<>Y             # e^x*y e^x
78 COMPLEX          # e^z
79 RTN              #
80▸LBL "LN"         # z
81 COMPLEX          # y x
82 X<>Y             # x y
83 STO÷ ST Y        # x y/x
84 LN               # ln(x) y/x
85 X<>Y             # y/x ln(x)
86 COMPLEX          # ln(z)
87 RTN              #
88▸LBL "SIN"        # z
89 COMPLEX          # y x
90 X<>Y             # x y
91 SIN              # sin(x) y
92 X<>Y             # y sin(x)
93 LASTX            # x y sin(x)
94 COS              # cos(x) y sin(x)
95 ×                # cos(x)*y sin(x)
96 COMPLEX          # sin(z)
97 RTN              #
98▸LBL "ASIN"       # z
99 COMPLEX          # y x
100 X<>Y            # x y
101 ASIN            # asin(x) y
102 X<>Y            # y asin(x)
103 RCL ST Y        # asin(x) y asin(x)
104 COS             # sqrt(1-x^2) y asin(x)
105 ÷               # y/sqrt(1-x^2) asin(x)
106 COMPLEX         # asin(z)
107 RTN             #
108▸LBL "COS"       # z
109 COMPLEX         # y x
110 X<>Y            # x y
111 COS             # cos(x) y
112 X<>Y            # y cos(x)
113 LASTX           # x y cos(x)
114 SIN             # sin(x) y cos(x)
115 +/-             # -sin(x) y cos(x)
116 ×               # -sin(x)*y cos(x)
117 COMPLEX         # cos(z)
118 RTN             #
119▸LBL "ACOS"      # z
120 COMPLEX         # y x
121 X<>Y            # x y
122 ACOS            # acos(x) y
123 X<>Y            # y acos(x)
124 RCL ST Y        # acos(x) y acos(x)
125 SIN             # sqrt(1-x^2) y acos(x)
126 ÷               # y/sqrt(1-x^2) acos(x)
127 +/-             # -y/sqrt(1-x^2) acos(x)
128 COMPLEX         # acos(z)
129 RTN             #
130▸LBL "TAN"       # z
131 COMPLEX         # y x
132 X<>Y            # x y
133 TAN             # tan(x) y
134 ENTER           # tan(x) tan(x) y
135 X↑2             # tan^2(x) tan(x) y
136 1               # 1 tan^2(x) tan(x) y
137 +               # 1+tan^2(x) tan(x) y y
138 R↑              # y 1+tan^2(x) tan(x) y
139 ×               # (1+tan^2(x))*y tan(x)
140 COMPLEX         # tan(z)
141 RTN             #
142▸LBL "ATAN"      # z
143 COMPLEX         # y x
144 X<>Y            # x y
145 ATAN            # atan(x) y
146 X<>Y            # y atan(x)
147 RCL ST Y        # atan(x) y atan(x)
148 COS             # 1/sqrt(1+x^2) y atan(x)
149 X↑2             # 1/(1+x^2) y atan(x)
150 ×               # y/(1+x^2) atan(x)
151 COMPLEX         # atan(z)
152 RTN             #
153▸LBL "SINH"      # z
154 COMPLEX         # y x
155 X<>Y            # x y
156 SINH            # sinh(x) y
157 X<>Y            # y sinh(x)
158 LASTX           # x y sinh(x)
159 COSH            # cosh(x) y sinh(x)
160 ×               # cosh(x)*y sinh(x)
161 COMPLEX         # sinh(z)
162 RTN             #
163▸LBL "ASINH"     # z
164 COMPLEX         # y x
165 X<>Y            # x y
166 ASINH           # asinh(x) y
167 X<>Y            # y asinh(x)
168 RCL ST Y        # asinh(x) y asinh(x)
169 COSH            # sqrt(x^2+1) y asinh(x)
170 ÷               # y/sqrt(x^2+1) asinh(x)
171 COMPLEX         # asinh(z)
172 RTN             #
173▸LBL "COSH"      # z
174 COMPLEX         # y x
175 X<>Y            # x y
176 COSH            # cosh(x) y
177 X<>Y            # y cosh(x)
178 LASTX           # x y cosh(x)
179 SINH            # sinh(x) y cosh(x)
180 ×               # sinh(x)*y cosh(x)
181 COMPLEX         # cosh(z)
182 RTN             #
183▸LBL "ACOSH"     # z
184 COMPLEX         # y x
185 X<>Y            # x y
186 ACOSH           # acosh(x) y
187 X<>Y            # y acosh(x)
188 RCL ST Y        # acosh(x) y acosh(x)
189 SINH            # sqrt(x^2-1) y acosh(x)
190 ÷               # y/sqrt(x^2-1) acosh(x)
191 COMPLEX         # acosh(z)
192 RTN             #
193▸LBL "TANH"      # z
194 COMPLEX         # y x
195 X<>Y            # x y
196 TANH            # tanh(x) y
197 ENTER           # tanh(x) tanh(x) y
198 X↑2             # tanh^2(x) tanh(x) y
199 1               # 1 tanh^2(x) tanh(x) y
200 X<>Y            # tanh^2(x) 1 tanh(x) y
201 -               # 1-tanh^2(x) tanh(x) y y
202 R↑              # y 1-tanh^2(x) tanh(x) y
203 ×               # (1-tanh^2(x))*y tanh(x)
204 COMPLEX         # tanh(z)
205 RTN             #
206▸LBL "ATANH"     # z
207 COMPLEX         # y x
208 X<>Y            # x y
209 ATANH           # atanh(x) y
210 X<>Y            # y atanh(x)
211 RCL ST Y        # atanh(x) y atanh(x)
212 COSH            # 1/sqrt(1-x^2) y atanh(x)
213 X↑2             # 1/(1-x^2) y atanh(x)
214 ×               # y/(1-x^2) atanh(x)
215 COMPLEX         # atanh(z)
216 RTN             #
217▸LBL "NROOT"     # n z
218 1/X             # 1/n z
219 STO 00          # 1/n -> R 00
220 R↓              # z
221 COMPLEX         # y x
222 X<>Y            # x y
223 STO 01          # x -> R 01
224 ABS             # |x| y
225 RCL 00          # 1/n |x| y
226 STO× ST Z       # 1/n |x| y/n
227 1               # 1 1/n |x| y/n
228 -               # 1/n-1 |x| y/n
229 Y↑X             # |x|^{1/n-1} y/n
230 STO× ST Y       # |x|^{1/n-1} |x|^{1/n-1}*y/n
231 RCL× 01         # x^{1/n} |x|^{1/n-1}*y/n
232 X<>Y            # |x|^{1/n-1}*y/n x^{1/n}
233 COMPLEX         # z^{1/n}
234 END

Examples

Arithmetic

\(
(3 + 4\varepsilon)(5 + 6\varepsilon)
\)

3 ENTER 4 COMPLEX
5 ENTER 6 COMPLEX
XEQ "*"

15 i38

Evaluation of a function and its derivative

\(
\begin{align}
f(x) = \frac{1}{\sqrt{3 + \sin(x)}}
\end{align}
\)

Evaluate \(f(2)\) and \(f'(2)\).

2 ENTER 1 COMPLEX
XEQ "SIN"
3 +
XEQ "SQRT"
XEQ "1/X"

0.505767 i0.026920

Polynomial

Write a program to evaluate the following polynomial:

\(
f(x) = 2x^3 + 3x^2 + 5x + 7
\)

Code:
00 { 35-Byte Prgm }
01▸LBL "f(x)"
02 RCL "x"
03 2
04 ×
05 3
06 +
07 RCL "x"
08 XEQ "*"
09 5
10 +
11 RCL "x"
12 XEQ "*"
13 7
14 +
15 END

Hint: Please note the use of the ordinary multiplication × with a constant in line 04.
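This works because multiplying a dual number by a real constant simply scales both parts, which is exactly what the built-in × does with our complex representation:

\(
a\,(x + y\varepsilon) = ax + ay\varepsilon
\)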

Finding a root with Newton's algorithm

\(
\begin{align}
x \mapsto x - \frac{f(x)}{f'(x)}
\end{align}
\)

The program is straightforward since both the function and its derivative can be calculated with a single call to "f(x)":
Code:
00 { 34-Byte Prgm }
01▸LBL "NEWTON"
02 1
03 COMPLEX
04 STO "x"
05 XEQ "f(x)"
06 COMPLEX
07 ÷
08 RCL "x"
09 COMPLEX
10 X<> ST Z
11 -
12 END

Now we can use it to find the root of the previous polynomial:

-1 XEQ "NEWTON" R/S R/S R/S …

-1.6
-1.459479553903345724907063197026022
-1.445652028784242179107592346048789
-1.445528468451557138940594968587292
-1.445528458679522363979375907703971
-1.445528458679522302862795910270397
-1.445528458679522302862795910270395

-1.44552845867952230286279591027039479019
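
As a check of the first step: the dual evaluation at \(x_0 = -1\) gives \(f(-1) = 3\) and \(f'(-1) = 5\), so

\(
x_1 = -1 - \frac{3}{5} = -1.6
\)

which is the first value in the list above.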



\(e^x - 5 = 0\)

Code:
00 { 19-Byte Prgm }
01▸LBL "f(x)"
02 RCL "x"
03 XEQ "E↑X"
04 5
05 -
06 END

1 XEQ "NEWTON" R/S R/S R/S …

1.839397205857211607977618850807305
1.633963151762016740112346092531373
1.609736212513373931404520946197064
1.609437956921145415593181982892169
1.609437912434101364149332898839731
1.609437912434100374600759333226677
1.609437912434100374600759333226188

1.60943791243410037460075933322618763952



\(2x - \sin(x) - 1 = 0\)

Code:
00 { 26-Byte Prgm }
01▸LBL "f(x)"
02 RCL "x"
03 2
04 ×
05 RCL "x"
06 XEQ "SIN"
07 -
08 1
09 -
10 END

1 XEQ "NEWTON" R/S R/S R/S …

0.891395995328754051864551902982202
0.8878657494003519361186921183484038
0.8878622115744122839022415445850894
0.8878622115708660240357050746759603
0.8878622115708660240357015114947114
0.8878622115708660240357015114947116
0.887862211570866024035701511494712

0.88786221157086602403570151149471173497



Find a root of \(x^x = \pi\):

Code:
00 { 28-Byte Prgm }
01▸LBL "f(x)"
02 RCL "x"
03 XEQ "LN"
04 RCL "x"
05 XEQ "*"
06 XEQ "E↑X"
07 PI
08 -
09 END

Hint: As already mentioned, we cannot use Y↑X here since the exponent is not a constant.

2 XEQ "NEWTON" R/S R/S R/S …

1.873252698249433854127306813867173
1.854459717903199817846714313283606
1.854106089961332624497916868416055
1.854105967921040960682023166413
1.854105967921026432748370718616169
1.854105967921026432748370718410293

1.85410596792102643274837071841029324542



Solve Leonardo di Pisa's equation: \(x^3+2x^2+10x-20=0\)

Code:
00 { 34-Byte Prgm }
01▸LBL "f(x)"
02 RCL "x"
03 2
04 +
05 RCL "x"
06 XEQ "*"
07 10
08 +
09 RCL "x"
10 XEQ "*"
11 20
12 -
13 END

1 XEQ "NEWTON" R/S R/S R/S …

1.411764705882352941176470588235294
1.369336470588235294117647058823529
1.368808188617531954512471834907757
1.368808107821374524808003343189187
1.368808107821372635227414330022359
1.368808107821372635227414330021326

1.36880810782137263522741433002132553954


Resources
06-18-2022, 05:35 PM
Post: #2
RE: Automatic differentiation using dual numbers
Is there a way to calculate an extremum of an odd-powered function when the extremum lies at a negative number, such as an equation involving something like 3ROOT(X)?
06-18-2022, 07:25 PM
Post: #3
RE: Automatic differentiation using dual numbers
To calculate an extremum we have to find solutions of \(f'(x) = 0\).

We can use the SOLVER with the program f'(x) for the example \(f(x) = x^3 - 2x^2 + 2\):
Code:
00 { 62-Byte Prgm }
01▸LBL "f'(x)"
02 MVAR "x"
03 RCL "x"
04 1
05 COMPLEX
06 STO "x"
07 XEQ "f(x)"
08 COMPLEX
09 STO "x"
10 RTN
11▸LBL "f(x)"
12 RCL "x"
13 2
14 -
15 RCL "x"
16 XEQ "X↑2"
17 XEQ "*"
18 2
19 +
20 END

SOLVER

Select Solve Program
f'(x)

1 x
x=1

2 x
x=2

x
1.333333333333333333333333333333333
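
We can check this by hand: \(f'(x) = 3x^2 - 4x = x(3x - 4)\), which vanishes at \(x = 0\) and at \(x = \tfrac{4}{3} \approx 1.3333\).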



I must admit that I don't fully understand the second part, which involves \(\sqrt[3]{x}\).
As there is no built-in function, we'd use:
Code:
RCL "x"
3
1/X
XEQ "Y↑X"

For negative values we'd have to use:
Code:
RCL "x"
+/-
3
1/X
XEQ "Y↑X"
+/-

Do you have a specific example in mind?
06-18-2022, 08:30 PM
Post: #4
RE: Automatic differentiation using dual numbers
Really any function that would involve something like 3Root(X). The calculator uses 1/3 and produces a complex number, and f’(X)=0 fails. If there was a yROOT(x) key, it would work.
06-18-2022, 08:32 PM
Post: #5
RE: Automatic differentiation using dual numbers
Meanwhile, I remembered an earlier thread: Derivatives on HP 42S.
The Complex-Step Derivative Approximation only works for analytic functions.
And as the name suggests, it is an approximation.

The main advantage is that cancellation is avoided compared to the classical finite-difference approximations.
Implementation is easy if the functions and operations are overloaded for complex values.
This is the case with the HP-15C and HP-42S.

Here in automatic differentiation, complex numbers are used primarily as a structure to keep track of the values of \(f(x)\) and \(f'(x)\).
The complex variants of the functions are not used.

In addition, the functions do not have to be analytic.
But they should be differentiable.

We can even define ABS:

\(
\left| \left< u, {u}' \right>\right| = \left< \left| u \right|, {u}' \, \text{sign}(u)\right> \; (u \ne 0)
\)
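
For example, with \(u = -3\) and \({u}' = 2\):

\(
\left| \left< -3, 2 \right>\right| = \left< 3, -2 \right>
\)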

Thus you can program your own variant of a 3ROOT function and its derivative.

With automatic differentiation, the value of the derivative is exact.
Of course, rounding errors can also occur.
However, these do not depend on an arbitrarily small increment such as \(h = 10^{-5}\).
06-18-2022, 09:21 PM (This post was last modified: 06-18-2022 10:55 PM by Thomas Klemm.)
Post: #6
RE: Automatic differentiation using dual numbers
Consider the function:

\(
\begin{align}
f(x) &= \sqrt[n]{x} \\
\\
f'(x) &= \frac{x^{\frac{1}{n} - 1}}{n} \\
\end{align}
\)

With negative input, the following works properly only for odd \(n\):
Code:
217▸LBL "NROOT"      # n z
218 1/X              # 1/n z
219 STO 00           # 1/n -> R 00
220 R↓               # z
221 COMPLEX          # y x
222 X<>Y             # x y
223 STO 01           # x -> R 01
224 ABS              # |x| y
225 RCL 00           # 1/n |x| y
226 STO× ST Z        # 1/n |x| y/n
227 1                # 1 1/n |x| y/n
228 -                # 1/n-1 |x| y/n
229 Y↑X              # |x|^{1/n-1} y/n
230 STO× ST Y        # |x|^{1/n-1} |x|^{1/n-1}*y/n
231 RCL× 01          # x^{1/n} |x|^{1/n-1}*y/n
232 X<>Y             # |x|^{1/n-1}*y/n x^{1/n}
233 COMPLEX          # z^{1/n}
234 END

Examples

\(n = 3\)



\(x = 8\)

8 ENTER 1 COMPLEX
3
XEQ "NROOT"

2.000000 i0.083333


\(
\begin{align}
f(8) &= 2 \\
\\
f'(8) &= \frac{1}{12} \\
\end{align}
\)



\(x = -8\)

-8 ENTER 1 COMPLEX
3
XEQ "NROOT"

-2.000000 i0.083333


\(
\begin{align}
f(-8) &= -2 \\
\\
f'(-8) &= \frac{1}{12} \\
\end{align}
\)

Is this what you wanted?
06-18-2022, 09:36 PM
Post: #7
RE: Automatic differentiation using dual numbers
Better, thanks. Approximate value with small i value, avoids throwing an error. The program that works for positive real roots does not generate a complex result (small i value).
06-18-2022, 10:14 PM (This post was last modified: 06-18-2022 10:27 PM by Thomas Klemm.)
Post: #8
RE: Automatic differentiation using dual numbers
Example

Find the second negative maximum of \(\sin\left(\sqrt[3]{x}\right)\).

Code:
00 { 59-Byte Prgm }
01▸LBL "f'(x)"
02 MVAR "x"
03 RCL "x"
04 1
05 COMPLEX
06 STO "x"
07 XEQ "f(x)"
08 COMPLEX
09 STO "x"
10 RTN
11▸LBL "f(x)"
12 RCL "x"
13 3
14 XEQ "NROOT"
15 XEQ "SIN"
16 END

SOLVER

Select Solve Program
f'(x)

-150 x
x=-150

-50 x
x=-50

x
-104.6461837960118930922325633514673

Exact value: \(\left(-\frac{3\pi}{2}\right)^3 \approx -104.6462\)
06-19-2022, 02:23 AM
Post: #9
RE: Automatic differentiation using dual numbers
Nice! Dumb question…what if an nRoot(x) appeared within a larger equation such as 3Root(x) +X^2 -3*(x/5). (I am away from my calculator)
06-19-2022, 04:45 AM
Post: #10
RE: Automatic differentiation using dual numbers
(06-19-2022 02:23 AM)lrdheat Wrote:  what if an nRoot(x) appeared within a larger equation such as 3Root(x) +X^2 -3*(x/5)

Example

Find the minimum of \(\sqrt[3]{x}+x^2-\frac{3x}{5}\).

We can keep f'(x) and simply adjust f(x):
Code:
11▸LBL "f(x)"
12 RCL "x"
13 3
14 XEQ "NROOT"
15 RCL "x"
16 XEQ "X↑2"
17 +
18 RCL "x"
19 3
20 ×
21 5
22 ÷
23 -
24 END

SOLVER

Select Solve Program
f'(x)

-1 x
x=-1

0 x
x=0

x
-0.1952345627907014530751112228253494

-0.1952345627907014530751112228253494266891 from Solve[D[Surd[x,3]+x^2-3x/5,x]=0]
06-19-2022, 10:50 AM (This post was last modified: 07-11-2022 06:00 PM by Albert Chan.)
Post: #11
RE: Automatic differentiation using dual numbers
Applying f to the dual number {g, g'} produces {f(g), (f(g))' = f'(g) * g'}.

To emphasize its "symbolic" nature, I use f and g for variables.

r = x^(1/n)
r' = x^(1/n-1)/n = r / (x*n)

Code:
D = {}  -- note: f1, f2 = f, f'
function D.new(f1,f2) return setmetatable({f1,f2 or 0}, D) end
function D.surd(f,n) local r=surd(f[1],n); return D.new(r, f[2]*r/(f[1]*n)) end
function D.__add(f,g) return D.new(f[1]+g[1], f[2]+g[2]) end
function D.__sub(f,g) return D.new(f[1]-g[1], f[2]-g[2]) end
function D.__mul(f,g) return D.new(f[1]*g[1], (f[2]*g[1]+f[1]*g[2])) end
function D.__div(f,g) return D.new(f[1]/g[1], (f[2]*g[1]-f[1]*g[2]) / g[1]^2) end
function D.__unm(f)   return D.new(-f[1], -f[2]) end -- negate
function D.log1p(f) return D.new(log1p(f[1]), f[2]/(f[1]+1)) end
function D.expm1(f) local r=expm1(f[1]); return D.new(r, (r+1)*f[2]) end
function D.newton(x,f) local r=-f[1]/f[2]; x[1]=x[1]+r; return x[1],f[1] end
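
These definitions rely on surd, log1p and expm1 being available as globals; they are not part of standard Lua, so presumably they come from the environment the snippet was written for. A minimal sketch of what such helpers might look like (an assumption, not the original code):

Code:
-- Hypothetical stand-ins for the helpers assumed above (not standard Lua):
function surd(x, n)                  -- real n-th root; keeps the sign of x, so
  local r = math.abs(x)^(1/n)        -- only meaningful for odd n when x < 0
  return x < 0 and -r or r
end
function log1p(x) return math.log(1+x) end -- naive; loses accuracy for tiny |x|
function expm1(x) return math.exp(x)-1 end -- naive; loses accuracy for tiny |x|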

Example: solve ³√x + x² - 3/5*x = 0, guess x = -1

lua> x = D.new(-1,1)
lua> for k=1,6 do print(D.newton(x, D.surd(x,3) + x*x - D.new(3/5)*x)) end
-0.7352941176470589     0.6
-0.6875931861095272     0.07925115704554603
-0.6858395921931165     0.0027133429258336395
-0.6858371853881947     3.7138631565070135e-006
-0.6858371853836587     6.999345547598068e-012
-0.6858371853836587     0

Note: outputs are (extrapolated x, f(x)); shift the 2nd column up one row to get the points (x, f(x)).
06-19-2022, 11:25 AM
Post: #12
RE: Automatic differentiation using dual numbers
More Examples

From an older thread:
Find the minimum of \(\sqrt[3]{x}(x+4)\).

Code:
11▸LBL "f(x)"
12 RCL "x"
13 3
14 XEQ "NROOT"
15 RCL "x"
16 4
17 +
18 XEQ "*"
19 END

SOLVER

Select Solve Program
f'(x)

-4 x
x=-4

0 x
x=0

x
-1

-1 from Solve[D[Surd[x,3](x+4),x]=0]



From message #52 of hp 35s not very impressive:
Numerically find a root of \(\sin(\cos(x)+x)=1\) for \(x \in [0, \pi]\).

Code:
11▸LBL "f(x)"
12 RCL "x"
13 XEQ "COS"
14 RCL+ "x"
15 XEQ "SIN"
16 1
17 -
18 END


1 XEQ "NEWTON" R/S R/S R/S …

1.096185496720379254068448344585068
1.175888114785594876983050355722717
1.242050787237047966508965780229636
1.297040015847465356429262431517609
1.342780395438986475883038203450154
1.380849045692581522517271141667455
1.412545052274972359330312135368247
1.438942302919224509400448646856456
1.460930716999602583570957867391215
1.479249022592746437337540456509254
1.494511170421290703644758554012951
1.507227829978447782058849865750785
1.517824006751892972090594565180035
1.526653552641516102517803677349702
1.534011159566718715000530836754409
1.540142297317008509515873034239361
1.54525146226124394188595647414023
1.549509032291505688761007830343301
1.553056968300240276217842622504177
1.556013559062516129214210739762696
1.558477371631839185818134801079616
1.560530541211757536883595443592954
1.562241511486046176440960584008961
1.563667317515769249502272196990959
1.564855487741827589822012469653988
1.5658456287488595130191653943798
1.566670745763970142235788700312658
1.567358342992564331745470195675238
1.567931340518708971144634527031736
1.568408838362052654583316328637151
1.568806753176465137666022447896715
1.569138348823290027153250798000587
1.569414678510544465261158813819254
1.569644953239255937574657672473129
1.569836848840342335174954121320105
1.569996761837675236006348072590862
1.570130022666718605870849598706333
1.570241073356391686024755750742321
1.570333615597093564376103876143107
1.570410734130611068186649461486481
1.570474999574977213095175517708114
1.570528554111814785112668043102819
1.570573182892434548623033555727806
1.570610373542906751998078209406487
1.57064136575160961719034182259993
1.570667192592178057949124917149796
1.570688714959325685163863462493728
1.570706650265258827155523702630545
1.570721596353569424513573787698215
1.570734051427489393921579858910183
1.570744430654950406742411512076897
1.570753080010948337990727107957772
1.570760287808507356277721295289301
1.570766294312730998804956970621129
1.570771299748680808303996882049845
1.570775470908511415841082922756551
1.57077894663999092675862234031804
1.5707818432162089638123982822793
1.570784256952696171292749185689969
1.570786266649663336458847535382422
1.570787943608822423042808769861741
1.57078933476841216946339413363803
1.570790483678136851887361443677913
1.570791540762559672731902800264383
1.57079249648789454488183911785543
1.57079249648789454488183911785543
1.57079249648789454488183911785543

1.57079632679489661923132169163975144209...

The exact root is \(\frac{\pi}{2}\).

We notice the slow convergence since the function is very flat at the root.
Obviously we stop getting closer once \(f(x)=0\).



From Max of (sin(x))^(e^x):
Find the maximum of \(\sin(x)^{e^x}\) between \(13\) and \(15\).

Code:
11▸LBL "f(x)"
12 RCL "x"
13 XEQ "SIN"
14 XEQ "LN"
15 RCL "x"
16 XEQ "E↑X"
17 XEQ "*"
18 XEQ "E↑X"
19 END

SOLVER

Select Solve Program
f'(x)

14.13 x
x=14.13

14.14 x
x=14.14

x
14.13716694115406957308189522475776

14.1371669411540695730818952247577629788...

The exact solution is \(\frac{9\pi}{2}\).
06-19-2022, 02:40 PM (This post was last modified: 07-11-2022 05:28 PM by Albert Chan.)
Post: #13
RE: Automatic differentiation using dual numbers
(06-18-2022 02:09 PM)Thomas Klemm Wrote:  Find a root of : \(x^x = \pi\)

Code:
00 { 28-Byte Prgm }
01▸LBL "f(x)"
02 RCL "x"
03 XEQ "LN"
04 RCL "x"
05 XEQ "*"
06 XEQ "E↑X"
07 PI
08 -
09 END

Hint: As already mentioned we can not use Y↑X here since the exponent is not a constant.

Removing the constant-exponent limitation is not hard.

r = f^g
r' = (exp(ln(f)*g))' = r * (ln(f)*g)' = r * (ln(f)*g' + (f'/f)*g)

Code:
function D.__pow(f,g)
    local r = pow(f[1],g[1])
    return D.new(r, r*(log(f[1])*g[2] + f[2]/f[1]*g[1]))
end
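
As a quick check against the first Newton step below: with \(f = g = x\) at \(x = 2\) we get \(r = 4\) and \(r' = 4(\ln 2 + 1) \approx 6.7726\), so \(x_1 = 2 - \frac{4 - \pi}{6.7726} \approx 1.8733\), in agreement with the first output line.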

We add the above pow rule to the D code from post #11:

lua> x = D.new(2,1)
lua> for k=1,6 do print(D.newton(x, x^x - D.new(pi))) end
1.8732526982494337      0.8584073464102069
1.8544597179031999      0.09913010983767823
1.8541060899613326      0.001798101943065511
1.854105967921041        6.201137425776437e-007
1.8541059679210263      7.416289804496046e-014
1.8541059679210263      -4.440892098500626e-016

Note: outputs are (extrapolated x, f(x)); shift the 2nd column up one row to get the points (x, f(x)).
06-19-2022, 08:31 PM (This post was last modified: 06-19-2022 09:38 PM by Thomas Klemm.)
Post: #14
Fixed Point Iteration
We have seen that in some cases fixed-point iteration doesn't converge to the solution of \(x = f(x)\).
Also the convergence can be slow.

Examples

Dottie Number

\(
\begin{align}
\cos(x) - x = 0
\end{align}
\)

Code:
11▸LBL "f(x)"
12 RCL "x"
13 XEQ "COS"
14 RCL- "x"
15 END

1 XEQ "NEWTON" R/S R/S R/S …

0.7503638678402438930349423066821769
0.7391128909113616703605852909048903
0.7390851333852839697601251208568043
0.7390851332151606416617026256850264
0.7390851332151606416553120876738734
0.7390851332151606416553120876738734

Compared to this the fixed-point iteration is rather slow.

…and you hit cosine, and you hit cosine, and you hit cosine.

1 COS COS COS …

0.5403023058681397174009366074429766
0.8575532158463934157441062727619899
0.6542897904977791499709664713278083
0.7934803587425655918260542309902841
0.7013687736227565244719705270529776
0.7394118536802670853384982393894295
0.7388650109363127073601308869127267




(10-04-2020 04:21 PM)Eddie W. Shore Wrote:  Be aware, some equations cannot be solved in this manner, such as x = π / sin x and x = ln(1 / x^4).

\(
\begin{align}
4 \ln(x) + x = 0
\end{align}
\)

Code:
11▸LBL "f(x)"
12 RCL "x"
13 XEQ "LN"
14 4
15 ×
16 RCL+ "x"
17 END

1 XEQ "NEWTON" R/S R/S R/S …

0.8
0.8154290342094731705108633935398897
0.8155534109294744392403815755194143
0.815553418808960626155252543632822
0.815553418808960657772727325308559
0.8155534188089606577727273253085595



\(
\begin{align}
\sin(x) x - \pi = 0
\end{align}
\)

Code:
11▸LBL "f(x)"
12 RCL "x"
13 XEQ "SIN"
14 RCL "x"
15 XEQ "*"
16 PI
17 -
18 END

6 XEQ "NEWTON" R/S R/S R/S …

6.878955081396972443178487244060421
6.76408551570638856537720551093318
6.76604838451268027345470086566581
6.766048790452746205491186938760874
6.766048790452763690963998120299702
6.766048790452763690963998120332144
6.766048790452763690963998120332144


Conclusion

Newton's method allows us to transform an equation into a fixed-point iteration that usually converges much faster.
Using automatic differentiation allows us to do this with only minor additional effort.
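
For instance, for the Dottie number Newton's step turns \(x = \cos(x)\) into the fixed-point iteration

\(
\begin{align}
x \mapsto x + \frac{\cos(x) - x}{1 + \sin(x)}
\end{align}
\)

which converges quadratically instead of linearly.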
06-20-2022, 12:52 PM (This post was last modified: 07-11-2022 05:43 PM by Albert Chan.)
Post: #15
RE: Automatic differentiation using dual numbers
(05-17-2022 01:31 PM)Albert Chan Wrote:  lua> n, pv, pmt, fv = 10, 50, -30, 100
lua> g = loan_rate2(n,pv,pmt,fv)
lua> g()
-0.2844446981069045        0.015555301893095532
lua> g()
-0.28443599888025756      8.699226646944535e-006
lua> g()
-0.28443599888025595      1.6279605736411871e-015

Perhaps we can extend the idea of automatic differentiation to higher derivatives?
Let's check if we can repeat the above Halley's method rate iterations.

Since we need 3 numbers, let's call it a T (for triple) number.

Code:
T = {} -- note: f1, f2, f3 = f, f', f''

function T.new(f1,f2,f3) return setmetatable({f1, f2 or 0, f3 or 0}, T) end
function T.__add(f,g) return T.new(f[1]+g[1], f[2]+g[2], f[3]+g[3]) end
function T.__sub(f,g) return T.new(f[1]-g[1], f[2]-g[2], f[3]-g[3]) end
function T.__div(f,g) return T.rcp(g) * f end
function T.__unm(f) return T.new(-f[1], -f[2], -f[3]) end -- negate

-- (f*g)'  = f*g' + f'*g
-- (f*g)'' = f*g'' + f'*g' + f'*g' + f''*g

function T.__mul(f,g)
    local z = f[1]*g[3] + 2*f[2]*g[2] + f[3]*g[1]
    return T.new(f[1]*g[1], f[1]*g[2] + f[2]*g[1], z)
end

-- (1/f)'  = -f' / f^2
-- (1/f)'' = -(f^2 * f'' - (2*f*f') * f') / f^4

function T.rcp(f)
    return T.new(1/f[1], -f[2]/f[1]^2, (2*f[2]^2-f[1]*f[3]) / f[1]^3)
end

-- log(1+f)'  = f' / (1+f)
-- log(1+f)'' = ((1+f)*f'' - f'*f') / (1+f)^2

function T.log1p(f)
    local y = 1+f[1]
    return T.new(log1p(f[1]), f[2]/y, (y*f[3] - f[2]^2) / (y*y))
end

-- expm1(f)'  = exp(f) * f'
-- expm1(f)'' = exp(f) * f'' + exp(f)*f' * f'

function T.expm1(f)
    local x = expm1(f[1])
    local y, z = f[2]+f[2]*x, f[3]+f[3]*x
    return T.new(x, y, z + f[2]*y)
end

function T.halley(x,f)
    local r = -f[1]/f[2]
    r = r / (1 + 0.5*r*f[3]/f[2])
    x[1] = x[1] + r
    return x[1], f[1]
end

It works! :-)

lua> n, pv, pmt, fv = T.new(10), T.new(50), T.new(-30), T.new(100)
lua> function f(x) return ((pv+fv)/T.expm1(T.log1p(x)*n) + pv)*x + pmt end
lua> x = T.new(pmt[1]/fv[1], 1) -- small edge
lua> for k=1,3 do print(T.halley(x, f(x))) end
-0.2844446981069045      1.3080888941077191
-0.28443599888025756    0.0007214129592014729
-0.28443599888025595    1.3500311979441904e-013

Note: outputs are (extrapolated x, f(x)); shift the 2nd column up one row to get the points (x, f(x)).
06-20-2022, 05:31 PM
Post: #16
The strange cousin of the complex numbers -- the dual numbers
Just stumbled upon this video by Michael Penn:

The strange cousin of the complex numbers -- the dual numbers

We can follow the lecture with our HP-42S.

1:39:

\(
\begin{align}
(2 + 3 \varepsilon)(5 - \varepsilon) = 10 + 13 \varepsilon
\end{align}
\)

2 ENTER 3 COMPLEX
5 ENTER -1 COMPLEX
XEQ "*"

10 i13

7:07:

\(
\begin{align}
\frac{3 + 2 \varepsilon}{2 + 5 \varepsilon} = \frac{3}{2} - \frac{11}{4} \varepsilon
\end{align}
\)

3 ENTER 2 COMPLEX
2 ENTER 5 COMPLEX
XEQ "/"

1.5 -i2.75

We can also use the matrix representation instead of using complex numbers:

2 3
0 2

5 -1
0 5

*

10 13
0 10


3 2
0 3

2 5
0 2

/

1.5 -2.75
0 1.5

The advantage is that we now get "*", "/" and "1/X" for free.
We'd probably write a constructor (similar to COMPLEX) that simplifies entering values.
And then another function to display them.
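
The representation is the obvious 2×2 analogue of what we do with complex numbers: \(x + {x}'\varepsilon\) becomes an upper triangular matrix, with \(\varepsilon\) nilpotent:

\(
x + {x}'\varepsilon =
\begin{bmatrix}
x & {x}' \\
0 & x \\
\end{bmatrix}
,\qquad
\varepsilon =
\begin{bmatrix}
0 & 1 \\
0 & 0 \\
\end{bmatrix}
,\qquad
\varepsilon^2 =
\begin{bmatrix}
0 & 0 \\
0 & 0 \\
\end{bmatrix}
\)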

Unfortunately function application on a matrix is implemented element-wise.
Otherwise we could simply use the ordinary functions like EXP or SIN on these dual numbers.
Thus we still have to implement that ourselves.

Hint: If you want to copy and paste the matrices into Free42, quote this post and do it from the input form.
The separator has to be a tab character, which is transmogrified into a space character by the forum software.
06-20-2022, 09:50 PM
Post: #17
RE: Automatic differentiation using dual numbers
Perhaps it is better to think of a dual number as a container holding symbolic variables.

That's why, even with complicated expressions, the derivative only has slight rounding errors:
the symbolic variables are really just stored numerically.

This is why a variable is {x, 1}:   f(x) = x → f'(x) = 1
A plain constant is simply {c, 0}:  f(x) = c → f'(x) = 0

(06-20-2022 05:31 PM)Thomas Klemm Wrote:  \(
\begin{align}
(2 + 3 \varepsilon)(5 - \varepsilon) = 10 + 13 \varepsilon
\end{align}
\)

2 ENTER 3 COMPLEX
5 ENTER -1 COMPLEX
XEQ "*"

10 i13

Think of this as getting {f*g, (f*g)'}, forget about ε

(f*g) = 2*5 = 10
(f*g)' = f'*g + f*g' = 3*5 + 2*-1 = 13

Again, do not think of them as numbers; think of them as symbolic, stored numerically.

---

If we wanted to extend the container to hold more derivatives, ε is a bad idea.
Example: we want to carry 2 derivatives instead of 1.

CAS> expand((1+2*ε+3*ε^2) * (3+ε+4*ε^2))

3 + 7*ε + 15*ε^2 + 11*ε^3 + 12*ε^4

Doing this symbolically (recommended), we get the correct answer.

{f, f', f''} = {1,2,3}
{g,g',g''} = {3,1,4}

(f*g) = 1*3 = 3
(f*g)' = f'*g + f*g' = 2*3 + 1*1 = 7
(f*g)'' = f''*g + f'*g' + f'*g' + f*g'' = 3*3 + 2*(2*1) + 1*4 = 17
06-21-2022, 02:19 AM
Post: #18
RE: Automatic differentiation using dual numbers
(06-20-2022 09:50 PM)Albert Chan Wrote:  … forget about ε … ε is a bad idea

Au contraire!
It's just that you misinterpreted the meaning of the coefficient of \(\varepsilon^2\).
Based on the Taylor series we have:

\(
\begin{align}
f(x + \varepsilon) = f(x) + {f}'(x) \cdot \varepsilon + \frac{1}{2} {f}''(x) \cdot \varepsilon^2
\end{align}
\)

Given your example:
Quote:{f, f', f''} = {1,2,3}
{g,g',g''} = {3,1,4}

… we have to calculate:

\(
\begin{align}
\left(1 + 2 \varepsilon + \frac{3}{2} \varepsilon^2 \right)\left(3 + \varepsilon + 2 \varepsilon^2 \right)
&= 3 + 7 \varepsilon + \frac{17}{2} \varepsilon^2 + \frac{11}{2} \varepsilon^3 + 3 \varepsilon^4 \\
&= 3 + 7 \varepsilon + \frac{17}{2} \varepsilon^2 \\
\end{align}
\)

… since \(\varepsilon^3 = 0\).
With this interpretation we get the correct result.



If we use the matrix representation we have to take that into account.
These 3 matrices form the basis:

\(
I =
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{bmatrix}
\)

\(
\varepsilon =
\begin{bmatrix}
0 & 1 & 0 \\
0 & 0 & 1 \\
0 & 0 & 0 \\
\end{bmatrix}
\)

\(
\varepsilon^2 =
\begin{bmatrix}
0 & 0 & 1 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
\end{bmatrix}
\)

And then \(\{f, f', f''\}\) is represented by:

\(
f \cdot I + {f}' \cdot \varepsilon + \tfrac{1}{2}{f}'' \cdot \varepsilon^2 =
\begin{bmatrix}
f & {f}' & \tfrac{1}{2}{f}'' \\
0 & f & {f}' \\
0 & 0 & f \\
\end{bmatrix}
\)

Indeed we have:

\(
\begin{bmatrix}
1 & 2 & \frac{3}{2} \\
0 & 1 & 2 \\
0 & 0 & 1 \\
\end{bmatrix}
\cdot
\begin{bmatrix}
3 & 1 & 2 \\
0 & 3 & 1 \\
0 & 0 & 3 \\
\end{bmatrix}
=
\begin{bmatrix}
3 & 7 & \frac{17}{2} \\
0 & 3 & 7 \\
0 & 0 & 3 \\
\end{bmatrix}
\)

This can be extended in an analogous way for higher-order derivatives.
06-21-2022, 10:56 AM
Post: #19
RE: Automatic differentiation using dual numbers
(06-21-2022 02:19 AM)Thomas Klemm Wrote:  Au contraire!
It's just that you misinterpreted the meaning of the coefficient of \(\varepsilon^2\).
Based on the Taylor series we have:

\(
\begin{align}
f(x + \varepsilon) = f(x) + {f}'(x) \cdot \varepsilon + \frac{1}{2} {f}''(x) \cdot \varepsilon^2
\end{align}
\)

Thank you for clearing this up!
To confirm my understanding, I redo the expm1 and log1p rules using epsilon.

Quote:-- expm1(f)' = exp(f) * f'
-- expm1(f)'' = exp(f) * f'' + exp(f)*f' * f'

Let ε2 = ε^2/2!, and assume (ε3, ε4 , ...) ≈ 0

expm1(x) = x + x^2/2! + x^3/3! + ...

expm1(f(x+ε))
≈ expm1(f + f'*ε + f''*ε2)
= exp(f) * exp(f'*ε) * exp(f''*ε2) - 1
≈ exp(f) * (1 + f'*ε + f'^2*ε2) * (1 + f''*ε2) - 1
≈ exp(f) * (1 + f'*ε + f'^2*ε2 + f''*ε2) - 1
= expm1(f) + (exp(f)*f')*ε + (exp(f)*(f'' + f'^2))*ε2       ✔️

Quote:-- log(1+f)' = f' / (1+f)
-- log(1+f)'' = ((1+f)*f'' - f'*f') / (1+f)^2

log1p(x) = x - x^2/2 + x^3/3 - ...

log1p(f(x+ε))
≈ log(1 + f + f'*ε + f''*ε2)
= log1p(f) + log1p(ε*(f' + f''/2*ε)/(1+f))
≈ log1p(f) + ε*(f' + f''/2*ε)/(1+f) - ε2*(f'/(1+f))^2
= log1p(f) + (f'/(1+f))*ε + (((1+f)*f'' - f'^2)/(1+f)^2)*ε2       ✔️
06-22-2022, 10:36 PM (This post was last modified: 06-23-2022 05:07 AM by Thomas Klemm.)
Post: #20
Automatic differentiation using trial numbers
As already mentioned, we can extend the idea of automatic differentiation to higher derivatives.
If we represent the trial numbers by matrices, we can again use the HP-42S.

Basis

We use the following basis:

1 0 0
0 1 0
0 0 1
STO "1"

0 1 0
0 0 1
0 0 0
STO "e"

0 0 1
0 0 0
0 0 0
STO "e↑2"

Constructors

We can use "TRIAL" to enter a trial number of the form \(x +\varepsilon\):

7
XEQ "TRIAL"

7 1 0
0 7 1
0 0 7

Or then, if we want to specify the 3 coefficients we can use "NEW":

1
ENTER
2
ENTER
3
XEQ "NEW"

1 2 3
0 1 2
0 0 1

Display

We can use "SHOW" to display the 3 coefficients.
I recommend to use FIX 3:

XEQ "SHOW"

2.000 3.000 5.000
x: [ 3×3 Matrix ]

This function is also called at the end of "TRIAL" and "NEW":
Code:
00 { 71-Byte Prgm }
01▸LBL "TRIAL"
02 RCL× "1"
03 RCL+ "e"
04 GTO 00
05▸LBL "NEW"
06▸LBL 01
07 RCL× "e↑2"
08 X<>Y
09 RCL× "e"
10 +
11 X<>Y
12 RCL× "1"
13 +
14▸LBL "SHOW"
15▸LBL 00
16 CLA
17 EDIT
18 ARCL ST X
19 ├" "
20 →
21 ARCL ST X
22 ├" "
23 →
24 ARCL ST X
25 EXITALL
26 AVIEW
27 END

Addition and Subtraction

Since adding a number to a matrix is applied element-wise, we cannot simply add constants.
Instead, we have to multiply the constant by the "1" matrix.

Multiplication and Division

However, we can use the ordinary multiplication and division for both constants and trial numbers.

Thus without much programming we can already calculate first and second derivatives of polynomials or even rational functions.

Example

\(
\begin{align}
p(x) = 2x^3 + 3x^2 + 5x + 7
\end{align}
\)

Code:
00 { 38-Byte Prgm }
01▸LBL "p(x)"
02 RCL "x"
03 2
04 ×
05 3
06 RCL× "1"
07 +
08 RCL× "x"
09 5
10 RCL× "1"
11 +
12 RCL× "x"
13 7
14 RCL× "1"
15 +
16 END

Let's assume that we want to evaluate the polynomial at \(x = -1\) and calculate its 1st and 2nd derivative:

-1
XEQ "TRIAL"
STO "x"
XEQ "p(x)"
XEQ "SHOW"

3.000 5.000 -3.000
x: [ 3×3 Matrix ]

Thus we get:

\(
\begin{align}
p(-1) &= 3 \\
{p}'(-1) &= 5 \\
\tfrac{1}{2}{p}''(-1) &= -3 \\
\end{align}
\)
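
These values agree with direct differentiation: \({p}'(x) = 6x^2 + 6x + 5\) gives \({p}'(-1) = 5\), and \({p}''(x) = 12x + 6\) gives \(\tfrac{1}{2}{p}''(-1) = -3\).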

Halley's Method

It's straightforward to implement Halley's method:

\(
\begin{align}
x_{{n+1}}=x_{n}-{\frac{2f(x_{n})f'(x_{n})}{2{[f'(x_{n})]}^{2}-f(x_{n})f''(x_{n})}}
\end{align}
\)

However we rather use this formula:

\(
\begin{align}
x_{n+1}
&=x_{n}-{\frac {f(x_{n})}{f'(x_{n})-{\frac {f(x_{n})}{f'(x_{n})}}{\frac {f''(x_{n})}{2}}}} \\
&=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}\left[1-{\frac {f(x_{n})}{f'(x_{n})}}\cdot {\frac {f''(x_{n})}{2f'(x_{n})}}\right]^{-1} \\
\end{align}
\)

Code:
00 { 56-Byte Prgm }#
01▸LBL "HALLEY"    #
02 STO 03          # x -> R03
03 RCL× "1"        # x*1
04 RCL+ "e"        # x*1+e
05 STO "x"         # x*1+e -> x
06 XEQ "p(x)"      #
07 EDIT            # matrix -> register
08 STO 00          # f -> R00
09 →               # f'
10 STO 01          # f' -> R01
11 →               # f"/2
12 STO 02          # f"/2 -> R02
13 EXITALL         #
14 RCL 00          # f
15 RCL÷ 01         # f/f'
16 1               # 1 f/f'
17 RCL ST Y        # f/f' 1 f/f'
18 RCL× 02         # f/f'*f"/2 1 f/f'
19 RCL÷ 01         # f/f'*f"/2/f' 1 f/f'
20 -               # 1-f/f'*f"/2/f' f/f'
21 ÷               # f/f'/[1-f/f'*f"/2/f']
22 STO- 03         # x-dx -> x
23 RCL 03          # x
24 END             #

Now we can use it to find the root of the aforementioned polynomial:

-1 XEQ "HALLEY" R/S R/S …

-1.441176470588235294117647058823529
-1.445528443543314459809602861040273
-1.445528458679522302862795272237467
-1.445528458679522302862795910270395
-1.445528458679522302862795910270395

We notice that the rate of convergence is cubic compared to the quadratic convergence of Newton's method.

Dottie Number

Code:
00 { 19-Byte Prgm }
01▸LBL "f(x)"
02 RCL "x"
03 XEQ "COS"
04 RCL- "x"
05 END

1 XEQ "HALLEY"

0.7408739950803435700746289353295154
0.7390851338775818843562183300927446
0.7390851332151606416553120877075439
0.7390851332151606416553120876738734
0.7390851332151606416553120876738734


Time Value of Money

Consider the following parameters:
Quote:n, pv, pmt, fv = T.new(10), T.new(50), T.new(-30), T.new(100)

Code:
00 { 46-Byte Prgm }
01▸LBL "TVM"
02 10
03 STO "n"
04 50
05 RCL× "1"
06 STO "pv"
07 -30
08 RCL× "1"
09 STO "pmt"
10 100
11 RCL× "1"
12 STO "fv"
13 END

We want to find the root \(x\) of the following function:
Quote:function f(x) return ((pv+fv)/T.expm1(T.log1p(x)*n) + pv)*x + pmt end

Code:
00 { 50-Byte Prgm }
01▸LBL "f(x)"
02 RCL "x"
03 XEQ "LN1+X"
04 RCL× "n"
05 XEQ "E↑X-1"
06 RCL "pv"
07 RCL+ "fv"
08 X<>Y
09 ÷
10 RCL+ "pv"
11 RCL× "x"
12 RCL+ "pmt"
13 END

-0.3 XEQ "HALLEY" R/S R/S …
-0.284444698106904440553487332229463
-0.2844359988802575605180815267548689
-0.2844359988802559255723584032186149
-0.284435998880255925572358403218615
-0.284435998880255925572358403218615


Implementation

I haven't implemented all the standard functions (yet), but I hope it is still useful to get an idea of how this can be done.

Code:
00 { 397-Byte Prgm }#
01▸LBL "HALLEY"     #
02 STO 03           # x -> R03
03 RCL× "1"         #
04 RCL+ "e"         #
05 STO "x"          #
06 XEQ "f(x)"       #
07 XEQ 02           # matrix -> register
08 RCL 00           # f
09 RCL÷ 01          # f/f'
10 1                # 1 f/f'
11 RCL ST Y         # f/f' 1 f/f'
12 RCL× 02          # f/f'*f"/2 1 f/f'
13 RCL÷ 01          # f/f'*f"/2/f' 1 f/f'
14 -                # 1-f/f'*f"/2/f' f/f'
15 ÷                # f/f'/[1-f/f'*f"/2/f']
16 STO- 03          # x-dx -> x
17 RCL 03           # x
18 RTN              #
19▸LBL "X↑2"        #
20 ENTER            #
21 ×                #
22 RTN              #
23▸LBL "E↑X"        # e^f(x) + y e^f(x) f'(x) + 1/2 y^2 e^f(x) (f''(x) + f'(x)^2)
24 XEQ 02           # matrix -> register
25 RCL 02           # x"/2
26 RCL 01           # x' x"/2
27 X↑2              # x'^2 x"/2
28 2                # 2 x'^2 x"/2
29 ÷                # x'^2/2 x"/2
30 +                # x'^2/2+x"/2
31 RCL× "e↑2"       # (x'^2/2+x"/2)*e^2
32 RCL 01           # x' (x'^2/2+x"/2)*e^2
33 RCL× "e"         # x'*e (x'^2/2+x"/2)*e^2
34 +                # x'*e+(x'^2/2+x"/2)*e^2
35 RCL "1"          # 1 x'*e+(x'^2/2+x"/2)*e^2
36 +                # 1+x'*e+(x'^2/2+x"/2)*e^2
37 RCL 00           # x 1+x'*e+(x'^2/2+x"/2)*e^2
38 E↑X              # exp(x) 1+x'*e+(x'^2/2+x"/2)*e^2
39 ×                # exp(x)[1+x'*e+(x'^2/2+x"/2)*e^2]
40 RTN              #
41▸LBL "LN"         # log(f(x)) + (y f'(x))/f(x) - (y^2 (f'(x)^2 - f(x) f''(x)))/(2 f(x)^2)
42 XEQ 02           # matrix -> register
43 RCL 01           # x'
44 RCL÷ 00          # x'/x
45 RCL× "e"         # x'/x*e
46 LASTX            # x'/x x'/x*e
47 X↑2              # [x'/x]^2 x'/x*e
48 2                # 2 [x'/x]^2 x'/x*e
49 ÷                # [x'/x]^2/2 x'/x*e
50 RCL 02           # x"/2 [x'/x]^2/2 x'/x*e
51 RCL÷ 00          # x"/x/2 [x'/x]^2/2 x'/x*e
52 -                # -x"/x/2+[x'/x]^2/2 x'/x*e
53 RCL× "e↑2"       # (-x"/x/2+[x'/x]^2/2)*e^2 x'/x*e
54 -                # x'/x*e+(x"/x/2-[x'/x]^2/2)*e^2
55 RCL 00           # x x'/x*e+(x"/x/2-[x'/x]^2/2)*e^2
56 LN               # log(x) x'/x*e+(x"/x/2-[x'/x]^2/2)*e^2
57 RCL× "1"         # log(x)*1 x'/x*e+(x"/x/2-[x'/x]^2/2)*e^2
58 +                # log(x)*1+x'/x*e+(x"/x/2-[x'/x]^2/2)*e^2
59 RTN              #
60▸LBL "E↑X-1"      # (e^f(x) - 1) + y e^f(x) f'(x) + 1/2 y^2 e^f(x) (f''(x) + f'(x)^2)
61 XEQ 02           # matrix -> register
62 RCL 02           # x"/2
63 RCL 01           # x' x"/2
64 X↑2              # x'^2 x"/2
65 2                # 2 x'^2 x"/2
66 ÷                # x'^2/2 x"/2
67 +                # x'^2/2+x"/2
68 RCL× "e↑2"       # (x'^2/2+x"/2)*e^2
69 RCL 01           # x' (x'^2/2+x"/2)*e^2
70 RCL× "e"         # x'*e (x'^2/2+x"/2)*e^2
71 +                # x'*e+(x'^2/2+x"/2)*e^2
72 RCL 00           # x x'*e+(x'^2/2+x"/2)*e^2
73 E↑X              # exp(x) x'*e+(x'^2/2+x"/2)*e^2
74 ×                # exp(x)[x'*e+(x'^2/2+x"/2)*e^2]
75 RCL 00           # x exp(x)[x'*e+(x'^2/2+x"/2)*e^2]
76 E↑X-1            # exp(x)-1 exp(x)[x'*e+(x'^2/2+x"/2)*e^2]
77 RCL× "1"         # [exp(x)-1]*1 exp(x)[x'*e+(x'^2/2+x"/2)*e^2]
78 +                # [exp(x)-1]*1+exp(x)[x'*e+(x'^2/2+x"/2)*e^2]
79 RTN              #
80▸LBL "LN1+X"      # log(f(x) + 1) + (y f'(x))/(f(x) + 1) + (y^2 ((f(x) + 1) f''(x) - f'(x)^2))/(2 (f(x) + 1)^2)
81 XEQ 02           # matrix -> register
82 1                # 1
83 RCL+ 00          # 1+x
84 RCL 01           # x' 1+x
85 RCL÷ ST Y        # x'/(1+x) 1+x
86 RCL× "e"         # x'/(1+x)*e 1+x
87 LASTX            # x'/(1+x) x'/(1+x)*e 1+x
88 X↑2              # [x'/(1+x)]^2 x'/(1+x)*e 1+x
89 2                # 2 [x'/(1+x)]^2 x'/(1+x)*e 1+x
90 ÷                # [x'/(1+x)]^2/2 x'/(1+x)*e 1+x 1+x
91 RCL 02           # x"/2 [x'/(1+x)]^2/2 x'/(1+x)*e 1+x
92 RCL÷ ST T        # x"/(1+x)/2 [x'/(1+x)]^2/2 x'/(1+x)*e
93 -                # -x"/(1+x)/2+[x'/(1+x)]^2/2 x'/(1+x)*e
94 RCL× "e↑2"       # (-x"/(1+x)/2+[x'/(1+x)]^2/2)*e^2 x'/(1+x)*e
95 -                # x'/(1+x)*e+(x"/(1+x)/2-[x'/(1+x)]^2/2)*e^2
96 RCL 00           # x x'/(1+x)*e+(x"/(1+x)/2-[x'/(1+x)]^2/2)*e^2
97 LN1+X            # log(1+x) x'/(1+x)*e+(x"/(1+x)/2-[x'/(1+x)]^2/2)*e^2
98 RCL× "1"         # log(1+x)*1 x'/(1+x)*e+(x"/(1+x)/2-[x'/(1+x)]^2/2)*e^2
99 +                # log(1+x)*1+x'/(1+x)*e+(x"/(1+x)/2-[x'/(1+x)]^2/2)*e^2
100 RTN             #
101▸LBL "SIN"       # sin(f(x)) + y f'(x) cos(f(x)) + 1/2 y^2 (f''(x) cos(f(x)) - f'(x)^2 sin(f(x)))
102 XEQ 02          # matrix -> register
103 RCL 00          # x
104 1               # 1 x
105 →REC            # cos(x) sin(x)
106 RCL 01          # x' cos(x) sin(x)
107 X↑2             # x'^2 cos(x) sin(x)
108 RCL× ST Z       # sin(x)*x'^2 cos(x) sin(x)
109 -2              # -2 sin(x)*x'^2 cos(x) sin(x)
110 ÷               # -sin(x)*x'^2/2 cos(x) sin(x)
111 RCL 02          # x"/2 -sin(x)*x'^2/2 cos(x) sin(x)
112 RCL× ST Z       # cos(x)*x"/2 -sin(x)*x'^2/2 cos(x) sin(x)
113 +               # cos(x)*x"/2-sin(x)*x'^2/2 cos(x) sin(x)
114 RCL× "e↑2"      # (cos(x)*x"/2-sin(x)*x'^2/2)*e^2 cos(x) sin(x)
115 X<>Y            # cos(x) (cos(x)*x"/2-sin(x)*x'^2/2)*e^2 sin(x)
116 RCL× 01         # cos(x)*x' (cos(x)*x"/2-sin(x)*x'^2/2)*e^2 sin(x)
117 RCL× "e"        # cos(x)*x'*e (cos(x)*x"/2-sin(x)*x'^2/2)*e^2 sin(x)
118 +               # cos(x)*x'*e+(cos(x)*x"/2-sin(x)*x'^2/2)*e^2 sin(x)
119 X<>Y            # sin(x) cos(x)*x'*e+(cos(x)*x"/2-sin(x)*x'^2/2)*e^2
120 RCL× "1"        # sin(x)*1 cos(x)*x'*e+(cos(x)*x"/2-sin(x)*x'^2/2)*e^2
121 +               # sin(x)*1+cos(x)*x'*e+(cos(x)*x"/2-sin(x)*x'^2/2)*e^2
122 RTN             #
123▸LBL "COS"       # cos(f(x)) - y f'(x) sin(f(x)) + 1/2 y^2 (f'(x)^2 (-cos(f(x))) - f''(x) sin(f(x)))
124 XEQ 02          # matrix -> register
125 RCL 00          # x
126 1               # 1 x
127 →REC            # cos(x) sin(x)
128 RCL 01          # x' cos(x) sin(x)
129 X↑2             # x'^2 cos(x) sin(x)
130 RCL× ST Y       # cos(x)*x'^2 cos(x) sin(x)
131 2               # 2 cos(x)*x'^2 cos(x) sin(x)
132 ÷               # cos(x)*x'^2/2 cos(x) sin(x)
133 RCL 02          # x"/2 cos(x)*x'^2/2 cos(x) sin(x)
134 RCL× ST T       # sin(x)*x"/2 cos(x)*x'^2/2 cos(x) sin(x)
135 +               # sin(x)*x"/2+cos(x)*x'^2/2 cos(x) sin(x)
136 RCL× "e↑2"      # (sin(x)*x"/2+cos(x)*x'^2/2)*e^2 cos(x) sin(x)
137 +/-             # -(sin(x)*x"/2+cos(x)*x'^2/2)*e^2 cos(x) sin(x)
138 X<>Y            # cos(x) -(sin(x)*x"/2+cos(x)*x'^2/2)*e^2 sin(x)
139 RCL× "1"        # cos(x)*1 -(sin(x)*x"/2+cos(x)*x'^2/2)*e^2 sin(x)
140 +               # cos(x)*1-(sin(x)*x"/2+cos(x)*x'^2/2)*e^2 sin(x)
141 X<>Y            # sin(x) cos(x)*1-(sin(x)*x"/2+cos(x)*x'^2/2)*e^2
142 RCL× 01         # sin(x)*x' cos(x)*1-(sin(x)*x"/2+cos(x)*x'^2/2)*e^2
143 RCL× "e"        # sin(x)*x'*e cos(x)*1-(sin(x)*x"/2+cos(x)*x'^2/2)*e^2
144 -               # cos(x)*1-sin(x)*x'*e-(sin(x)*x"/2+cos(x)*x'^2/2)*e^2
145 RTN             #
146▸LBL 02          # matrix -> register
147 EDIT            # f
148 STO 00          # f -> R00
149 →               # f'
150 STO 01          # f' -> R01
151 →               # f"/2
152 STO 02          # f"/2 -> R02
153 EXITALL         #
154 RTN             #
155▸LBL "TRIAL"     #
156 RCL× "1"        #
157 RCL+ "e"        #
158 GTO 00          #
159▸LBL "NEW"       #
160▸LBL 01          #
161 RCL× "e↑2"      #
162 X<>Y            #
163 RCL× "e"        #
164 +               #
165 X<>Y            #
166 RCL× "1"        #
167 +               #
168▸LBL "SHOW"      #
169▸LBL 00          #
170 CLA             #
171 EDIT            #
172 ARCL ST X       #
173 ├" "            #
174 →               #
175 ARCL ST X       #
176 ├" "            #
177 →               #
178 ARCL ST X       #
179 EXITALL         #
180 AVIEW           #
181 END             #