Automatic differentiation using dual numbers
06-18-2022, 02:09 PM
(This post was last modified: 06-21-2022 05:20 AM by Thomas Klemm.)
Post: #1
Automatic differentiation using dual numbers
TL;DR

A dual number has the form \(x + {x}' \varepsilon\) where \(x, {x}' \in \mathbb{R}\) and \(\varepsilon \ne 0\), but \(\varepsilon^2 = 0\). Think of \(\varepsilon\) as similar to the imaginary unit \(i\), but with a different rule. Dual numbers can be used to calculate derivatives.

Motivation

An older thread about calculators and numerical differentiation, and an even older one about what the 1st CAS pocket calculator was, made me wonder whether there was ever a calculator using automatic differentiation with dual numbers. So I thought I'd give it a try, abusing the complex numbers of the HP-42S. The advantage is that addition and subtraction, as well as multiplication by a constant, are already handled. Complex numbers can also be stored and recalled.

Most of the functions simply unwrap the complex number, evaluate the function \(f\) and its derivative \(f'\), and rewrap the result into a complex number:

Code:

01▸LBL "f" # z

But then there are more complicated cases like * and / or Y↑X. In the simplest case, the stack is retained except for the T register. But in other cases, the stack is lost entirely. Also, LASTX isn't handled. We end up with a somewhat crippled RPN calculator, but programming a function is still easy. Instead of saving the entire stack before entering and restoring it on exit, I suggest storing the values in variables only when needed later. Only Y↑X needs two additional registers. All the other operations use only the stack. Also remember that this function works only with a constant (i.e. non-complex) exponent. Otherwise, use LN and E↑X instead.

Only after writing the program for the HP-42S did I notice that Ángel Martin had already written a module for the HP-41C. I haven't tried it yet, just read the excellent documentation. Thus, I strongly recommend giving it a try.
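The unwrap–evaluate–rewrap idea is easy to model outside the calculator. Here is a minimal Python sketch of dual numbers (the names `Dual`, `dsin`, `dsqrt`, `dinv` are mine, not from the HP-42S program), reproducing the \(f(x) = 1/\sqrt{3+\sin x}\) example that appears below:

```python
import math

class Dual:
    """Dual number a + b*eps (eps^2 = 0); b carries the derivative."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (f*g)' = f'*g + f*g'
        return Dual(self.a * other.a, self.b * other.a + self.a * other.b)
    __rmul__ = __mul__

def dsin(z):   # sin'(x) = cos(x)
    return Dual(math.sin(z.a), z.b * math.cos(z.a))

def dsqrt(z):  # sqrt'(x) = 1/(2*sqrt(x))
    r = math.sqrt(z.a)
    return Dual(r, z.b / (2.0 * r))

def dinv(z):   # (1/x)' = -1/x^2
    return Dual(1.0 / z.a, -z.b / (z.a * z.a))

# f(x) = 1/sqrt(3 + sin(x)) at x = 2, seeded with x' = 1
z = dinv(dsqrt(dsin(Dual(2.0, 1.0)) + 3))
print(f"{z.a:.6f} {z.b:.6f}")  # ≈ 0.505767 0.026920, as on the 42S
```

Each function applies its chain-rule factor to the derivative slot, which is exactly what the RPN routines above do with the imaginary part.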
Custom Menu

These are my assignments:

LN E↑X SINH ASINH COSH ACOSH * / 1/X Y↑X SQRT X↑2 SIN ASIN COS ACOS TAN ATAN

Program

Code:

00 { 468-Byte Prgm }

Examples

Arithmetic

\( (3 + 4\varepsilon)(5 + 6\varepsilon) \)

3 ENTER 4 COMPLEX
5 ENTER 6 COMPLEX
XEQ "*"

15 i38

Evaluation of a function and its derivative

\( f(x) = \frac{1}{\sqrt{3 + \sin(x)}} \)

Evaluate \(f(2)\) and \(f'(2)\).

2 ENTER 1 COMPLEX
XEQ "SIN"
3 +
XEQ "SQRT"
XEQ "1/X"

0.505767 i0.026920

Polynomial

Write a program to evaluate the following polynomial:

\( f(x) = 2x^3 + 3x^2 + 5x + 7 \)

Code:

00 { 35-Byte Prgm }

Hint: Please note the use of the ordinary multiplication × with a constant in line 04.

Finding a root with Newton's algorithm

\( x \mapsto x - \frac{f(x)}{f'(x)} \)

The program is straightforward since both the function and its derivative can be calculated with a single call to "f(x)":

Code:

00 { 34-Byte Prgm }

Now we can use it to find the root of the previous polynomial:

-1
XEQ "NEWTON"
R/S R/S R/S …

-1.6
-1.459479553903345724907063197026022
-1.445652028784242179107592346048789
-1.445528468451557138940594968587292
-1.445528458679522363979375907703971
-1.445528458679522302862795910270397
-1.445528458679522302862795910270395
-1.44552845867952230286279591027039479019

\(e^x - 5 = 0\)

Code:

00 { 19-Byte Prgm }

1
XEQ "NEWTON"
R/S R/S R/S …

1.839397205857211607977618850807305
1.633963151762016740112346092531373
1.609736212513373931404520946197064
1.609437956921145415593181982892169
1.609437912434101364149332898839731
1.609437912434100374600759333226677
1.609437912434100374600759333226188
1.60943791243410037460075933322618763952

\(2x - \sin(x) - 1 = 0\)

Code:

00 { 26-Byte Prgm }

1
XEQ "NEWTON"
R/S R/S R/S …

0.891395995328754051864551902982202
0.8878657494003519361186921183484038
0.8878622115744122839022415445850894
0.8878622115708660240357050746759603
0.8878622115708660240357015114947114
0.8878622115708660240357015114947116
0.887862211570866024035701511494712
0.88786221157086602403570151149471173497

Find a root of: \(x^x = \pi\)

Code:

00 { 28-Byte Prgm }

Hint: As already mentioned, we cannot use Y↑X here since the exponent is not a constant.

2
XEQ "NEWTON"
R/S R/S R/S …

1.873252698249433854127306813867173
1.854459717903199817846714313283606
1.854106089961332624497916868416055
1.854105967921040960682023166413
1.854105967921026432748370718616169
1.854105967921026432748370718410293
1.85410596792102643274837071841029324542

Solve Leonardo di Pisa's equation: \(x^3+2x^2+10x−20=0\)

Code:

00 { 34-Byte Prgm }

1
XEQ "NEWTON"
R/S R/S R/S …

1.411764705882352941176470588235294
1.369336470588235294117647058823529
1.368808188617531954512471834907757
1.368808107821374524808003343189187
1.368808107821372635227414330022359
1.368808107821372635227414330021326
1.36880810782137263522741433002132553954

Resources
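The whole Newton loop above fits in a few lines of Python; the pair `(v, d)` plays the role of the dual number, and a Horner recurrence propagates the derivative alongside the value (my own sketch, not a transcription of the 42S program):

```python
# p(x) = 2x^3 + 3x^2 + 5x + 7, evaluated together with p'(x)
# by running the product rule through Horner's scheme
def p_dual(x):
    v, d = 2.0, 0.0               # the constant 2 has derivative 0
    for c in (3.0, 5.0, 7.0):
        v, d = v*x + c, d*x + v   # (v*x + c)' = v'*x + v
    return v, d

x = -1.0
for _ in range(8):
    v, d = p_dual(x)
    x -= v / d                    # one call yields both f and f'
print(x)  # ≈ -1.4455284586795223
```

The first step from −1 gives −1 − 3/5 = −1.6, matching the 42S trace above.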
06-18-2022, 05:35 PM
Post: #2
RE: Automatic differentiation using dual numbers
Is there a way to calculate an extremum for an odd-powered root function when the extremum would be at a negative number, such as an equation involving something like 3ROOT(X)?
06-18-2022, 07:25 PM
Post: #3
RE: Automatic differentiation using dual numbers
To calculate an extremum we have to find solutions of \(f'(x) = 0\).
We can use the SOLVER with the program f'(x) for the example \(f(x) = x^3 - 2x^2 + 2\):

Code:

00 { 62-Byte Prgm }

SOLVER
Select Solve Program
f'(x)
1 x
x=1
2 x
x=2
x
1.333333333333333333333333333333333

I must admit that I don't fully understand the second part which involves \(\sqrt[3]{x}\). As there is no built-in function we'd use:

Code:

RCL "x"

For negative values we'd have to use:

Code:

RCL "x"

Do you have a specific example in mind?
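The same idea can be sketched in Python: evaluate f together with f' (as a dual-style pair) and hand the derivative to a root finder, here a simple bisection standing in for the 42S SOLVER (the names are mine):

```python
# f(x) = x^3 - 2x^2 + 2; the pair (v, d) yields f and f' together
def f_dual(x):
    v2, d2 = x*x, 2.0*x           # x^2 and its derivative
    # x^3 = v2*x, so (x^3)' = d2*x + v2 = 3x^2
    return v2*x - 2.0*v2 + 2.0, (d2*x + v2) - 2.0*d2

lo, hi = 1.0, 2.0                 # f'(1) = -1 < 0 < 4 = f'(2)
for _ in range(60):               # bisect on the derivative
    mid = (lo + hi) / 2.0
    if f_dual(mid)[1] < 0.0:
        lo = mid
    else:
        hi = mid
x_ext = (lo + hi) / 2.0
print(x_ext)  # ≈ 4/3, the extremum the SOLVER reports
```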
06-18-2022, 08:30 PM
Post: #4
RE: Automatic differentiation using dual numbers
Really any function that would involve something like 3Root(X). The calculator uses the exponent 1/3 and produces a complex number for negative arguments, so solving f'(X)=0 fails. If there were a yROOT(x) key, it would work.
06-18-2022, 08:32 PM
Post: #5
RE: Automatic differentiation using dual numbers
Meanwhile, I remembered an earlier thread: Derivatives on HP 42S.
The Complex-Step Derivative Approximation only works for analytic functions. And as the name suggests, it is an approximation. The main advantage is that cancellation is avoided compared to the classical finite-difference approximations. Implementation is easy if the functions and operations are overloaded for complex values. This is the case with the HP-15C and HP-42S.

Here in automatic differentiation, complex numbers are used primarily as a structure to keep track of the values of \(f(x)\) and \(f'(x)\). The complex variants of the functions are not used. In addition, the functions do not have to be analytic, but they should be differentiable. We can even define ABS:

\( \left| \left< u, {u}' \right>\right| = \left< \left| u \right|, {u}' \, \text{sign}(u)\right> \quad (u \ne 0) \)

Thus you can program your own variant of a 3ROOT function and its derivative. With automatic differentiation, the value of the derivative is exact. Of course, rounding errors can also occur. However, these do not depend on an arbitrarily small increment such as \(h = 10^{-5}\).
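The complex-step trick itself is easy to demonstrate in Python: for an analytic f, the imaginary part of \(f(x + ih)\) divided by \(h\) approximates \(f'(x)\), and because nothing is subtracted, \(h\) can be made tiny with no cancellation (the test function here is my own arbitrary choice):

```python
import cmath, math

def f(z):                          # an analytic test function
    return cmath.exp(z) / cmath.sqrt(z)

x, h = 1.5, 1e-20                  # h can be absurdly small: no cancellation
deriv_cs = f(x + 1j*h).imag / h    # complex-step derivative
deriv_exact = math.exp(x) * (x**-0.5 - 0.5 * x**-1.5)
print(deriv_cs, deriv_exact)       # agree to machine precision
```

A forward difference \((f(x+h)-f(x))/h\) with \(h = 10^{-20}\) would return exactly 0; the complex step does not suffer from this.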
06-18-2022, 09:21 PM
(This post was last modified: 06-18-2022 10:55 PM by Thomas Klemm.)
Post: #6
RE: Automatic differentiation using dual numbers
Consider the function:
\( \begin{align} f(x) &= \sqrt[n]{x} \\ f'(x) &= \frac{x^{\frac{1}{n} - 1}}{n} \end{align} \)

With negative input the following works properly only for odd N:

Code:

217▸LBL "NROOT" # n z

Examples

\(n = 3\), \(x = 8\):

8 ENTER 1 COMPLEX
3
XEQ "NROOT"

2.000000 i0.083333

\( f(8) = 2, \quad f'(8) = \frac{1}{12} \)

\(x = -8\):

-8 ENTER 1 COMPLEX
3
XEQ "NROOT"

-2.000000 i0.083333

\( f(-8) = -2, \quad f'(-8) = \frac{1}{12} \)

Is this what you wanted?
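The same NROOT rule as a Python sketch on (value, derivative) pairs: for odd n the real root is \(\operatorname{sign}(x)\,|x|^{1/n}\), and its derivative \(|x|^{1/n-1}/n\) is positive on both sides of zero (function name is mine):

```python
import math

def nroot_dual(x, dx, n):
    """Real n-th root of the dual pair (x, dx); intended for odd n, x != 0."""
    r = math.copysign(abs(x) ** (1.0 / n), x)
    dr = dx * abs(x) ** (1.0 / n - 1.0) / n
    return r, dr

print(nroot_dual(8.0, 1.0, 3))   # ≈ (2.0, 1/12)
print(nroot_dual(-8.0, 1.0, 3))  # ≈ (-2.0, 1/12), same positive slope
```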
06-18-2022, 09:36 PM
Post: #7
RE: Automatic differentiation using dual numbers
Better, thanks. An approximate value with a small i value avoids throwing an error. The program that works for positive real roots does not generate a complex result (small i value).
06-18-2022, 10:14 PM
(This post was last modified: 06-18-2022 10:27 PM by Thomas Klemm.)
Post: #8
RE: Automatic differentiation using dual numbers
Example
Find the second negative maximum of \(\sin\left(\sqrt[3]{x}\right)\).

Code:

00 { 59-Byte Prgm }

SOLVER
Select Solve Program
f'(x)
-150 x
x=-150
-50 x
x=-50
x
-104.6461837960118930922325633514673

Exact value: \(\left(-\frac{3\pi}{2}\right)^3 \approx -104.6462\)
06-19-2022, 02:23 AM
Post: #9
RE: Automatic differentiation using dual numbers
Nice! Dumb question…what if an nRoot(x) appeared within a larger equation such as 3Root(x) +X^2 -3*(x/5). (I am away from my calculator)
06-19-2022, 04:45 AM
Post: #10
RE: Automatic differentiation using dual numbers
(06-19-2022 02:23 AM)lrdheat Wrote: what if an nRoot(x) appeared within a larger equation such as 3Root(x) +X^2 -3*(x/5)

Example

Find the minimum of \(\sqrt[3]{x}+x^2-\frac{3x}{5}\).

We can keep f'(x) and simply adjust f(x):

Code:

11▸LBL "f(x)"

SOLVER
Select Solve Program
f'(x)
-1 x
x=-1
0 x
x=0
x
-0.1952345627907014530751112228253494

-0.1952345627907014530751112228253494266891 from Solve[D[Surd[x,3]+x^2-3x/5,x]=0]
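In Python, the same minimum can be located by root-finding on the derivative. Here the derivative that automatic differentiation would deliver is written out directly, and a bisection stands in for the SOLVER (my sketch, not the 42S program):

```python
# f(x) = cbrt(x) + x^2 - 3x/5; for x != 0 the derivative of the real
# cube root sign(x)*|x|^(1/3) is |x|^(-2/3)/3 on both sides of zero
def fprime(x):
    return abs(x) ** (-2.0 / 3.0) / 3.0 + 2.0*x - 0.6

lo, hi = -1.0, -0.01              # f'(-1) < 0 < f'(-0.01)
for _ in range(60):
    mid = (lo + hi) / 2.0
    if fprime(mid) < 0.0:
        lo = mid
    else:
        hi = mid
x_min = (lo + hi) / 2.0
print(x_min)  # ≈ -0.19523456279070145
```

On \(x < 0\) this f' is strictly increasing, so the bracketed root is unique.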
06-19-2022, 10:50 AM
(This post was last modified: 07-11-2022 06:00 PM by Albert Chan.)
Post: #11
RE: Automatic differentiation using dual numbers
Applying f to the dual number {g, g'} produces {f(g), (f(g))' = f'(g) * g'}.
To emphasize its "symbolic" nature, I use f and g for variables.

r = x^(1/n)
r' = x^(1/n-1)/n = r / (x*n)

Code:

D = {} -- note: f1, f2 = f, f'

Example, solve (³√x + x² - 3/5*x = 0), guess x = -1

lua> x = D.new(-1,1)
lua> for k=1,6 do print(D.newton(x, D.surd(x,3) + x*x - D.new(3/5)*x)) end
-0.7352941176470589      0.6
-0.6875931861095272      0.07925115704554603
-0.6858395921931165      0.0027133429258336395
-0.6858371853881947      3.7138631565070135e-006
-0.6858371853836587      6.999345547598068e-012
-0.6858371853836587      0

Note: outputs are (extrapolated x, f(x)); shift up 2nd column to get points (x, f(x))
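Albert's Lua module isn't shown in full here, but its `D.newton` convention can be imitated in a Python sketch (my own names; `surd` mimics the real odd root, and each line prints the extrapolated x next to f at the previous x):

```python
import math

def surd(x, n):                    # real n-th root for odd n
    return math.copysign(abs(x) ** (1.0 / n), x)

def f_dual(x):                     # f(x) = surd(x,3) + x^2 - 3/5*x as (f, f')
    v = surd(x, 3) + x*x - 0.6*x
    d = abs(x) ** (-2.0 / 3.0) / 3.0 + 2.0*x - 0.6
    return v, d

x = -1.0
for _ in range(6):
    v, d = f_dual(x)
    x -= v / d                     # Newton extrapolation
    print(x, v)                    # (extrapolated x, f at previous x)
```

The first printed pair is −0.7352941…, 0.6 — the same as the Lua trace above.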
06-19-2022, 11:25 AM
Post: #12
RE: Automatic differentiation using dual numbers
More Examples
From an older thread: Find the minimum of \(\sqrt[3]{x}(x+4)\).

Code:

11▸LBL "f(x)"

SOLVER
Select Solve Program
f'(x)
-4 x
x=-4
0 x
x=0
x
-1

-1 from Solve[D[Surd[x,3](x+4),x]=0]

From message #52 of "hp 35s not very impressive": Numerically find a root of \(\sin(\cos(x)+x)=1\) for \(x \in [0, \pi]\).

Code:

11▸LBL "f(x)"

1
XEQ "NEWTON"
R/S R/S R/S …

1.096185496720379254068448344585068 1.175888114785594876983050355722717 1.242050787237047966508965780229636 1.297040015847465356429262431517609 1.342780395438986475883038203450154 1.380849045692581522517271141667455 1.412545052274972359330312135368247 1.438942302919224509400448646856456 1.460930716999602583570957867391215 1.479249022592746437337540456509254 1.494511170421290703644758554012951 1.507227829978447782058849865750785 1.517824006751892972090594565180035 1.526653552641516102517803677349702 1.534011159566718715000530836754409 1.540142297317008509515873034239361 1.54525146226124394188595647414023 1.549509032291505688761007830343301 1.553056968300240276217842622504177 1.556013559062516129214210739762696 1.558477371631839185818134801079616 1.560530541211757536883595443592954 1.562241511486046176440960584008961 1.563667317515769249502272196990959 1.564855487741827589822012469653988 1.5658456287488595130191653943798 1.566670745763970142235788700312658 1.567358342992564331745470195675238 1.567931340518708971144634527031736 1.568408838362052654583316328637151 1.568806753176465137666022447896715 1.569138348823290027153250798000587 1.569414678510544465261158813819254 1.569644953239255937574657672473129 1.569836848840342335174954121320105 1.569996761837675236006348072590862 1.570130022666718605870849598706333 1.570241073356391686024755750742321 1.570333615597093564376103876143107 1.570410734130611068186649461486481 1.570474999574977213095175517708114 1.570528554111814785112668043102819 1.570573182892434548623033555727806 1.570610373542906751998078209406487 1.57064136575160961719034182259993 1.570667192592178057949124917149796 1.570688714959325685163863462493728 1.570706650265258827155523702630545 1.570721596353569424513573787698215 1.570734051427489393921579858910183 1.570744430654950406742411512076897 1.570753080010948337990727107957772 1.570760287808507356277721295289301 1.570766294312730998804956970621129 1.570771299748680808303996882049845 1.570775470908511415841082922756551 1.57077894663999092675862234031804 1.5707818432162089638123982822793 1.570784256952696171292749185689969 1.570786266649663336458847535382422 1.570787943608822423042808769861741 1.57078933476841216946339413363803 1.570790483678136851887361443677913 1.570791540762559672731902800264383 1.57079249648789454488183911785543 1.57079249648789454488183911785543 1.57079249648789454488183911785543 1.57079632679489661923132169163975144209...

The exact root is \(\frac{\pi}{2}\). We notice the slow convergence since the function is very flat at the root. Obviously we stop getting closer once \(f(x)=0\).

From "Max of (sin(x))^(e^x)": Find the maximum of \(\sin(x)^{e^x}\) between \(13\) and \(15\).

Code:

11▸LBL "f(x)"

SOLVER
Select Solve Program
f'(x)
14.13 x
x=14.13
14.14 x
x=14.14
x
14.13716694115406957308189522475776

14.1371669411540695730818952247577629788...

The exact solution is \(\frac{9\pi}{2}\).
06-19-2022, 02:40 PM
(This post was last modified: 07-11-2022 05:28 PM by Albert Chan.)
Post: #13
RE: Automatic differentiation using dual numbers
(06-18-2022 02:09 PM)Thomas Klemm Wrote: Find a root of: \(x^x = \pi\)

Removing the constant exponent limitation is not hard.

r = f^g
r' = (exp(ln(f)*g))' = r * (ln(f)*g)' = r * (ln(f)*g' + (f'/f)*g)

Code:

function D.__pow(f,g)

We add the above pow rule here.

lua> x = D.new(2,1)
lua> for k=1,6 do print(D.newton(x, x^x - D.new(pi))) end
1.8732526982494337       0.8584073464102069
1.8544597179031999       0.09913010983767823
1.8541060899613326       0.001798101943065511
1.854105967921041        6.201137425776437e-007
1.8541059679210263       7.416289804496046e-014
1.8541059679210263       -4.440892098500626e-016

Note: outputs are (extrapolated x, f(x)); shift up 2nd column to get points (x, f(x))
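The pow rule translates directly to a Python sketch (names are mine), redoing the \(x^x = \pi\) root:

```python
import math

def d_pow(f, g):
    """r = f^g on dual pairs: r' = r * (ln(f)*g' + (f'/f)*g)."""
    r = f[0] ** g[0]
    return r, r * (math.log(f[0]) * g[1] + f[1] / f[0] * g[0])

x = 2.0
for _ in range(6):
    v, d = d_pow((x, 1.0), (x, 1.0))   # x^x and its derivative
    x -= (v - math.pi) / d             # Newton on x^x - pi = 0
print(x)  # ≈ 1.8541059679210263
```

The first step reproduces the Lua trace: 2 − (4 − π)/(4(ln 2 + 1)) = 1.87325…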
06-19-2022, 08:31 PM
(This post was last modified: 06-19-2022 09:38 PM by Thomas Klemm.)
Post: #14
Fixed Point Iteration
We have seen that in some cases fixed-point iteration doesn't converge to the solution of \(x = f(x)\).
Also, the convergence can be slow.

Examples

Dottie Number

\( \cos(x) - x = 0 \)

Code:

11▸LBL "f(x)"

1
XEQ "NEWTON"
R/S R/S R/S …

0.7503638678402438930349423066821769
0.7391128909113616703605852909048903
0.7390851333852839697601251208568043
0.7390851332151606416617026256850264
0.7390851332151606416553120876738734
0.7390851332151606416553120876738734

Compared to this the fixed-point iteration is rather slow: …and you hit cosine, and you hit cosine, and you hit cosine.

1
COS COS COS …

0.5403023058681397174009366074429766
0.8575532158463934157441062727619899
0.6542897904977791499709664713278083
0.7934803587425655918260542309902841
0.7013687736227565244719705270529776
0.7394118536802670853384982393894295
0.7388650109363127073601308869127267
…

(10-04-2020 04:21 PM)Eddie W. Shore Wrote: Be aware, some equations cannot be solved in this manner, such as x = π / sin x and x = ln(1 / x^4).

\( 4 \ln(x) + x = 0 \)

Code:

11▸LBL "f(x)"

1
XEQ "NEWTON"
R/S R/S R/S …

0.8
0.8154290342094731705108633935398897
0.8155534109294744392403815755194143
0.815553418808960626155252543632822
0.815553418808960657772727325308559
0.8155534188089606577727273253085595

\( \sin(x) x - \pi = 0 \)

Code:

11▸LBL "f(x)"

6
XEQ "NEWTON"
R/S R/S R/S …

6.878955081396972443178487244060421
6.76408551570638856537720551093318
6.76604838451268027345470086566581
6.766048790452746205491186938760874
6.766048790452763690963998120299702
6.766048790452763690963998120332144
6.766048790452763690963998120332144

Conclusion

Newton's method allows us to transform an equation into a fixed-point iteration that usually converges much faster. Using automatic differentiation allows us to do this with only minor additional effort.
06-20-2022, 12:52 PM
(This post was last modified: 07-11-2022 05:43 PM by Albert Chan.)
Post: #15
RE: Automatic differentiation using dual numbers
(05-17-2022 01:31 PM)Albert Chan Wrote: lua> n, pv, pmt, fv = 10, 50, -30, 100

Perhaps we can extend the idea of automatic differentiation to higher derivatives? Let's check if we can repeat the above Halley's method rate iterations. Since we need 3 numbers, let's call it a T (for triple) number.

Code:

T = {} -- note: f1, f2, f3 = f, f', f''

It works!

lua> n, pv, pmt, fv = T.new(10), T.new(50), T.new(-30), T.new(100)
lua> function f(x) return ((pv+fv)/T.expm1(T.log1p(x)*n) + pv)*x + pmt end
lua> x = T.new(pmt[1]/fv[1], 1) -- small edge
lua> for k=1,3 do print(T.halley(x, f(x))) end
-0.2844446981069045      1.3080888941077191
-0.28443599888025756     0.0007214129592014729
-0.28443599888025595     1.3500311979441904e-013

Note: outputs are (extrapolated x, f(x)); shift up 2nd column to get points (x, f(x))
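A first-derivative-only version of the same TVM root fits in a few lines of Python — duals plus Newton instead of Albert's triples plus Halley, with the dual rules for log1p and expm1 written out by hand (a sketch under those assumptions, not his module):

```python
import math

n, pv, pmt, fv = 10.0, 50.0, -30.0, 100.0

def f_dual(x):
    u  = n * math.log1p(x)             # u,  u' = n/(1+x)
    du = n / (1.0 + x)
    e  = math.expm1(u)                 # e,  e' = exp(u)*u'
    de = math.exp(u) * du
    g  = (pv + fv) / e + pv            # g,  g' = -(pv+fv)*e'/e^2
    dg = -(pv + fv) * de / (e * e)
    return g*x + pmt, dg*x + g         # product rule for g*x + pmt

x = pmt / fv                           # = -0.3, the "small edge" guess
for _ in range(8):
    v, d = f_dual(x)
    x -= v / d
print(x)  # ≈ -0.28443599888025595
```

Newton needs a few more iterations than Halley, but each one is cheaper.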
06-20-2022, 05:31 PM
Post: #16
The strange cousin of the complex numbers -- the dual numbers
Just stumbled upon this video by Michael Penn:
The strange cousin of the complex numbers -- the dual numbers We can follow the lecture with our HP-42S. 1:39: \( \begin{align} (2 + 3 \varepsilon)(5 - \varepsilon) = 10 + 13 \varepsilon \end{align} \) 2 ENTER 3 COMPLEX 5 ENTER -1 COMPLEX XEQ "*" 10 i13 7:07: \( \begin{align} \frac{3 + 2 \varepsilon}{2 + 5 \varepsilon} = \frac{3}{2} - \frac{11}{4} \varepsilon \end{align} \) 3 ENTER 2 COMPLEX 2 ENTER 5 COMPLEX XEQ "/" 1.5 -i2.75 We can also use the matrix representation instead of using complex numbers: 2 3 0 2 5 -1 0 5 * 10 13 0 10 3 2 0 3 2 5 0 2 / 1.5 -2.75 0 1.5 The advantage is that we now get "*", "/" and "1/X" for free. We'd probably write a constructor (similar to COMPLEX) that simplifies entering values. And then another function to display them. Unfortunately function application on a matrix is implemented element-wise. Otherwise we could simply use the ordinary functions like EXP or SIN on these dual numbers. Thus we still have to implement that by ourselves. Hint: If you want to copy and paste the matrices into Free42, quote this post and do it from the input form. The separator has to be a tabulator character which is transmogrified into a space character by the software of this forum. |
06-20-2022, 09:50 PM
Post: #17
RE: Automatic differentiation using dual numbers
Perhaps it is better to think of a dual number as a container, holding symbolic variables.

That's why even with complicated expressions, the derivative only has slight rounding errors — the symbolic variables are just stored numerically.

This is why a variable is {x, 1}: f(x) = x → f'(x) = 1
A plain constant is simply {c, 0}: f(x) = c → f'(x) = 0

(06-20-2022 05:31 PM)Thomas Klemm Wrote: \( (2 + 3 \varepsilon)(5 - \varepsilon) = 10 + 13 \varepsilon \)

Think of this as getting {f*g, (f*g)'}, forget about ε:

(f*g) = 2*5 = 10
(f*g)' = f'*g + f*g' = 3*5 + 2*-1 = 13

Again, do not think of them as numbers; think of them as symbolic, stored numerically.

---

If we wanted to extend the container to hold more derivatives, ε is a bad idea. Example: we want to carry 2 derivatives instead of 1.

CAS> expand((1+2*ε+3*ε^2) * (3+ε+4*ε^2))
3 + 7*ε + 15*ε^2 + 11*ε^3 + 12*ε^4

Do this symbolically (recommended), and we get the correct answer:

{f, f', f''} = {1,2,3}
{g, g', g''} = {3,1,4}

(f*g) = 1*3 = 3
(f*g)' = f'*g + f*g' = 2*3 + 1*1 = 7
(f*g)'' = f''*g + f'*g' + f'*g' + f*g'' = 3*3 + 2*(2*1) + 1*4 = 17
06-21-2022, 02:19 AM
Post: #18
RE: Automatic differentiation using dual numbers
(06-20-2022 09:50 PM)Albert Chan Wrote: … forget about ε … ε is a bad idea

Au contraire!

It's just that you misinterpreted the meaning of the coefficient of \(\varepsilon^2\). Based on the Taylor series we have:

\( f(x + \varepsilon) = f(x) + {f}'(x) \cdot \varepsilon + \frac{1}{2} {f}''(x) \cdot \varepsilon^2 \)

Given your example:

Quote: {f, f', f''} = {1,2,3} …

we have to calculate:

\( \begin{align} \left(1 + 2 \varepsilon + \frac{3}{2} \varepsilon^2 \right)\left(3 + \varepsilon + 2 \varepsilon^2 \right) &= 3 + 7 \varepsilon + \frac{17}{2} \varepsilon^2 + \frac{11}{2} \varepsilon^3 + 3 \varepsilon^4 \\ &= 3 + 7 \varepsilon + \frac{17}{2} \varepsilon^2 \end{align} \)

… since \(\varepsilon^3 = 0\). With this interpretation we get the correct result.

If we use the matrix representation we have to take that into account. These 3 matrices form the basis:

\( I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix} \)

\( \varepsilon = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ \end{bmatrix} \)

\( \varepsilon^2 = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{bmatrix} \)

And then \(\{f, f', f''\}\) is represented by:

\( f \cdot I + {f}' \cdot \varepsilon + \tfrac{1}{2}{f}'' \cdot \varepsilon^2 = \begin{bmatrix} f & {f}' & \tfrac{1}{2}{f}'' \\ 0 & f & {f}' \\ 0 & 0 & f \\ \end{bmatrix} \)

Indeed we have:

\( \begin{bmatrix} 1 & 2 & \frac{3}{2} \\ 0 & 1 & 2 \\ 0 & 0 & 1 \\ \end{bmatrix} \cdot \begin{bmatrix} 3 & 1 & 2 \\ 0 & 3 & 1 \\ 0 & 0 & 3 \\ \end{bmatrix} = \begin{bmatrix} 3 & 7 & \frac{17}{2} \\ 0 & 3 & 7 \\ 0 & 0 & 3 \\ \end{bmatrix} \)

This can be extended in an analogous way for higher-order derivatives.
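The corrected interpretation is quick to verify numerically in Python — store \(f''/2\) in the upper-right slot and multiply the 3×3 matrices:

```python
def trial_mat(f, fp, fpp):
    """3x3 matrix for {f, f', f''}; note the 1/2 on the f'' coefficient."""
    return [[float(f), float(fp), fpp / 2.0],
            [0.0, float(f), float(fp)],
            [0.0, 0.0, float(f)]]

def matmul3(A, B):                 # plain 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

P = matmul3(trial_mat(1, 2, 3), trial_mat(3, 1, 4))
print(P[0])  # [3.0, 7.0, 8.5] -> f*g = 3, (f*g)' = 7, (f*g)'' = 2*8.5 = 17
```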
06-21-2022, 10:56 AM
Post: #19
RE: Automatic differentiation using dual numbers
(06-21-2022 02:19 AM)Thomas Klemm Wrote: Au contraire!

Thank you for clearing this up! To confirm my understanding, I redo the expm1 and log1p rules, using epsilon.

Quote: -- expm1(f)' = exp(f) * f'

Let ε2 = ε^2/2!, and assume (ε3, ε4, ...) ≈ 0

expm1(x) = x + x^2/2! + x^3/3! + ...

expm1(f(x+ε)) ≈ expm1(f + f'*ε + f''*ε2)
             = exp(f) * exp(f'*ε) * exp(f''*ε2) - 1
             ≈ exp(f) * (1 + f'*ε + f'^2*ε2) * (1 + f''*ε2) - 1
             ≈ exp(f) * (1 + f'*ε + f'^2*ε2 + f''*ε2) - 1
             = expm1(f) + (exp(f)*f')*ε + (exp(f)*(f'' + f'^2))*ε2 ✔️

Quote: -- log(1+f)' = f' / (1+f)

log1p(x) = x - x^2/2 + x^3/3 - ...

log1p(f(x+ε)) ≈ log(1 + f + f'*ε + f''*ε2)
             = log1p(f) + log1p(ε*(f' + f''/2*ε)/(1+f))
             ≈ log1p(f) + ε*(f' + f''/2*ε)/(1+f) - ε2*(f'/(1+f))^2
             = log1p(f) + (f'/(1+f))*ε + (((1+f)*f'' - f'^2)/(1+f)^2)*ε2 ✔️
06-22-2022, 10:36 PM
(This post was last modified: 06-23-2022 05:07 AM by Thomas Klemm.)
Post: #20
Automatic differentiation using trial numbers
As already mentioned we can extend the idea of automatic differentiation to higher derivatives.
If we represent the trial numbers by matrices we can again use the HP-42S.

Basis

We use the following basis:

1 0 0
0 1 0
0 0 1
STO "1"

0 1 0
0 0 1
0 0 0
STO "e"

0 0 1
0 0 0
0 0 0
STO "e↑2"

Constructors

We can use "TRIAL" to enter a trial number of the form \(x + \varepsilon\):

7
XEQ "TRIAL"

7 1 0
0 7 1
0 0 7

Or then, if we want to specify the 3 coefficients we can use "NEW":

1 ENTER 2 ENTER 3
XEQ "NEW"

1 2 3
0 1 2
0 0 1

Display

We can use "SHOW" to display the 3 coefficients. I recommend using FIX 3:

XEQ "SHOW"

2.000 3.000 5.000
x: [ 3×3 Matrix ]

This function is also called at the end of "TRIAL" and "NEW":

Code:

00 { 71-Byte Prgm }

Addition and Subtraction

Since adding a number to a matrix is done on each element, we cannot simply add constants. Instead we have to multiply the constant by the "1" matrix.

Multiplication and Division

However, we can use the ordinary multiplication and division both for constants and trial numbers. Thus without much programming we can already calculate first and second derivatives of polynomials or even rational functions.
Example

\( p(x) = 2x^3 + 3x^2 + 5x + 7 \)

Code:

00 { 38-Byte Prgm }

Let's assume that we want to evaluate the polynomial at \(x = -1\) and calculate its 1st and 2nd derivative:

-1
XEQ "TRIAL"
STO "x"
XEQ "p(x)"
XEQ "SHOW"

3.000 5.000 -3.000
x: [ 3×3 Matrix ]

Thus we get:

\( \begin{align} p(x) &= 3 \\ {p}'(x) &= 5 \\ \tfrac{1}{2}{p}''(x) &= -3 \end{align} \)

Halley's Method

It's straightforward to implement Halley's method:

\( x_{n+1} = x_{n} - \frac{2 f(x_{n}) f'(x_{n})}{2 [f'(x_{n})]^{2} - f(x_{n}) f''(x_{n})} \)

However, we rather use this formula:

\( \begin{align} x_{n+1} &= x_{n} - \frac{f(x_{n})}{f'(x_{n}) - \frac{f(x_{n})}{f'(x_{n})} \cdot \frac{f''(x_{n})}{2}} \\ &= x_{n} - \frac{f(x_{n})}{f'(x_{n})} \left[1 - \frac{f(x_{n})}{f'(x_{n})} \cdot \frac{f''(x_{n})}{2 f'(x_{n})}\right]^{-1} \end{align} \)

Code:

00 { 56-Byte Prgm }

Now we can use it to find the root of the aforementioned polynomial:

-1
XEQ "HALLEY"
R/S R/S …

-1.441176470588235294117647058823529
-1.445528443543314459809602861040273
-1.445528458679522302862795272237467
-1.445528458679522302862795910270395
-1.445528458679522302862795910270395

We notice that the rate of convergence is cubic compared to the quadratic convergence of Newton's method.
Dottie Number

Code:

00 { 19-Byte Prgm }

1
XEQ "HALLEY"

0.7408739950803435700746289353295154
0.7390851338775818843562183300927446
0.7390851332151606416553120877075439
0.7390851332151606416553120876738734
0.7390851332151606416553120876738734

Time Value of Money

Consider the following parameters:

Quote: n, pv, pmt, fv = T.new(10), T.new(50), T.new(-30), T.new(100)

Code:

00 { 46-Byte Prgm }

We want to find the root \(x\) of the following function:

Quote: function f(x) return ((pv+fv)/T.expm1(T.log1p(x)*n) + pv)*x + pmt end

Code:

00 { 50-Byte Prgm }

-0.3
XEQ "HALLEY"
R/S R/S …

-0.284444698106904440553487332229463
-0.2844359988802575605180815267548689
-0.2844359988802559255723584032186149
-0.284435998880255925572358403218615
-0.284435998880255925572358403218615

Implementation

I haven't implemented all standard functions (yet) but I hope it is still useful to get an idea of how this can be done.

Code:

00 { 397-Byte Prgm }
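Outside the calculator, the triple-number Halley iteration can be sketched in a few lines of Python; the Horner recurrence now carries (f, f', f'') and the first Halley formula above does the step (my own sketch, using the same polynomial):

```python
def p_triple(x):                   # p(x) = 2x^3 + 3x^2 + 5x + 7
    v, d, h = 2.0, 0.0, 0.0        # (f, f', f'')
    for c in (3.0, 5.0, 7.0):
        # (v*x+c)' = d*x + v,  (v*x+c)'' = h*x + 2*d
        v, d, h = v*x + c, d*x + v, h*x + 2.0*d
    return v, d, h

x = -1.0
for _ in range(5):
    f, fp, fpp = p_triple(x)
    x -= 2.0*f*fp / (2.0*fp*fp - f*fpp)   # Halley step
print(x)  # ≈ -1.4455284586795223
```

The first step from −1 gives −1 − 30/68 = −1.44117647…, matching the trace above; convergence is visibly cubic.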