Looking for TVM contributions
06-14-2024, 06:40 PM
(This post was last modified: 06-24-2024 10:54 AM by Albert Chan.)
Post: #42
RE: Looking for TVM contributions
(06-14-2024 03:23 PM)robve Wrote: I suspected that the slope would be close to zero for it to jump that far from the root. NPV iterating to infinity is not a rounding issue ... go figure! NPV = (fv-pmt/x) * ((1+x)^-n-1) + (pv+fv) NPV(n>1, x→∞) ≈ (fv-0) * (0-1) + fv = -fv + (pv+fv) = pv // =0, for previous post example Update: it may be better to use another NPV version. Time-reversal to get NFV, then scale it back down for NPV NFV = (pv+pmt/x) * ((1+x)^n-1) + (pv+fv) NPV = NFV / (1+x)^n = (pv+pmt/x) * (1-(1+x)^-n) + (pv+fv)/(1+x)^n NPV(n>1, x→∞) = (pv+pmt/x) --> pv Quote:After some experimentation in C code with double precision and in SHARP BASIC, I am not sure if others noticed this as well, but I noticed that Newton suffers from instability when TVM rates converge to zero. Secant's method have this issue too. Both methods need accurate f(ε) calculation. It is not noticed because solved rate is not small enough, not hitting f(ε) issue yet. Also, with 2 very close points, f'(ε) estimated from secant slope is likely worse. After I use {f(0), f'(0), f''(0)} to estimate {f(ε), f'(ε)}, this issue mostly goes away. BTW: small rate branch is primarily designed in case edge rate has wrong sign. Next iteration(s) may pass through f(ε), before it reaches true rate on the other side. Getting true tiny rate solution is really a bonus. If you like to try Newton's method, use this instead (convergence test using f(i), not i) Convergence depends on ULP, not quadratic convergence, thus will never get stuck. https://www.hpmuseum.org/forum/thread-16...#pid187993 Note that K=1+B*I is removed. It just seems to introduce unnecessary error. Assuming (pv+pmt) and (fv-pmt) is exact, this may be better. (B=1 → B=0) Code: function tvm_begin(n,i,pv,pmt,fv, verbose) Quote:Another practical downside of Newton on these vintage machines is that the step computations are more costly to perform than Secant. 
This negates the benefit of Newton's theoretically quadratic convergence versus Secant, which theoretically converges with order equal to the golden ratio 1.618..., still very good.

Per function call, Secant's method may beat Newton's. However, f(i) and f'(i) share the expensive expm1(log1p(i)*n) factor. My guess is that f'(i) costs very little extra ... say 1/4 of a function call. Secant's method needs to calculate the slope too; it is not free. Besides getting f(i) and the slope, there is bookkeeping cost too. Per timing, with fewer loops, my guess is Newton beats Secant.

Of course, this is all guessing. We will know once the code is done.