Newton Method
05-11-2016, 07:54 PM
Post: #1
Newton Method
Can anyone point me to any examples of the Newton or Newton-Raphson method for solving f(x)=0 ? I'm looking for something that can run on the early HP programmables, like the HP-65 or 67.
There are Solve for f(x)=0 programs in the Standard Pacs, but I don't know if they employ the Newton method. Thanks!

Regards, Bob
05-11-2016, 08:40 PM
(This post was last modified: 05-11-2016 08:49 PM by Dieter.)
Post: #2
RE: Newton Method
(05-11-2016 07:54 PM)bshoring Wrote: Can anyone point me to any examples of the Newton or Newton-Raphson method for solving f(x)=0 ? I'm looking for something that can run on the early HP programmables, like the HP-65 or 67.

That's quite straightforward. Here is a short example that I just tried (slightly modified) on the 34s. It's not elegant, but it does its job. It should run on the 67/97 as well as the 65 (with some steps unmerged). The key is the evaluation of f'(x), which is calculated via [f(x+h) - f(x)]/h where h is x/10000 (resp. 1/10000 if x=0).

Code: 001 LBL A

Place your f(x) code at label E. The argument x is expected in the X-register.

Example: f(x) = x^3 - x^2 - x + 0,5

Using Horner's method this can be coded as

Code: LBL E

Enter your first guess for a root of f(x) and press A. Starting at x=0, this is what you get in FIX 6:

Code: 0 [A] => 0,499995

Pressing [x<>y] shows the last correction term, i.e. an estimate for the accuracy of the current approximation. The other two roots of this function can be found starting at x=-1 resp. x=2.

Dieter
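For readers without an HP-65/67 at hand, the scheme Dieter describes (Newton iteration with a forward-difference derivative, step h = x/10000 or 1/10000 at x = 0) can be sketched in Python. The RPN listing above is the real program; this is only a translation of the idea:

```python
def newton(f, x, tol=1e-10, max_iter=100):
    """Newton's method with a forward-difference derivative.

    The step h is x/10000, falling back to 1/10000 when x is zero,
    mirroring the approach described for the 34s/67/65 program.
    """
    for _ in range(max_iter):
        h = x / 10000 if x != 0 else 1e-4
        slope = (f(x + h) - f(x)) / h      # approximate f'(x)
        dx = f(x) / slope                  # Newton correction term
        x -= dx
        if abs(dx) < tol:                  # |correction| ~ accuracy estimate
            return x
    return x

# Dieter's example f(x) = x^3 - x^2 - x + 0.5, coded via Horner's method
f = lambda x: ((x - 1) * x - 1) * x + 0.5
root = newton(f, 0)
```

Started at 0 it lands on the root near 0.403 (the first iterate is the 0.499995 shown in FIX 6 above); starting guesses of -1 and 2 find the other two roots, as described in the post.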
05-12-2016, 12:10 AM
Post: #3
RE: Newton Method
It's working great! Thanks so much.
Regards, Bob
05-12-2016, 08:47 PM
Post: #4
RE: Newton Method
Dieter,
A better way to calculate h is using something like h = 0.001*(1+ABS(x)), which does not require testing the value of x.

Namir
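In a high-level language the two step-size rules differ by a single line. A sketch (my own illustration, not Namir's code) with a forward-difference slope to compare them:

```python
def h_namir(x):
    # step size independent of whether x is zero
    return 0.001 * (1 + abs(x))

def h_relative(x):
    # Dieter's rule: proportional to x, with a fallback at zero
    return x / 10000 if x != 0 else 1e-4

def fwd_diff(f, x, h):
    # forward-difference estimate of f'(x)
    return (f(x + h) - f(x)) / h

# e.g. f(x) = x^2 at x = 3: both estimates come out close to f'(3) = 6
```

For smooth, slowly varying functions the two rules give nearly identical derivative estimates; the differences only matter in the edge cases discussed below in the thread.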
05-12-2016, 09:17 PM
Post: #5
RE: Newton Method
(05-12-2016 08:47 PM)Namir Wrote: A better way to calculate h is using something like:

Hmmm... I do not think this is "better". Consider a function whose derivative changes rapidly near zero, e.g. sin(10^6 · x). Since the smallest possible h is 0,001, it may become much larger than x itself. That's why I prefer defining h as a certain fraction of x.

(05-12-2016 08:47 PM)Namir Wrote: which does not require testing the value of x.

Well, that test is done in two steps. And thus even shorter than "ABS 1 +". ;-)

Dieter
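Dieter's objection is easy to reproduce numerically (my own illustration, not from the thread): for f(x) = sin(10^6 · x) near a small x, a fixed h of about 0.001 spans roughly 159 periods of the function, while a relative h stays well inside one:

```python
import math

f = lambda x: math.sin(1e6 * x)
x = 1e-7
true_slope = 1e6 * math.cos(1e6 * x)     # exact f'(x)

# relative step: h = x/10000 -> 1e-11, well inside one period
h_rel = x / 10000
est_rel = (f(x + h_rel) - f(x)) / h_rel

# absolute step: h = 0.001*(1+|x|) -> ~0.001, spanning ~159 periods
h_abs = 0.001 * (1 + abs(x))
est_abs = (f(x + h_abs) - f(x)) / h_abs
```

The relative-step estimate matches the true slope to several digits, while the absolute-step estimate is off by orders of magnitude.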
05-13-2016, 01:59 PM
(This post was last modified: 05-13-2016 02:03 PM by Namir.)
Post: #6
RE: Newton Method
The short equation to calculate h was suggested by my Numerical Analysis professor, Brice Carnahan, at the University of Michigan. I have used it for over 30 years and it works like a charm! Carnahan is the co-author of "Applied Numerical Methods", a book that uses FORTRAN code. I bought his book, in Paris, many months before I attended his class at the University of Michigan.

The equation for calculating h is very convenient in high-level programming languages.

Namir
05-14-2016, 06:36 AM
Post: #7
RE: Newton Method
Thanks all for the great ideas!
Regards, Bob
05-14-2016, 07:49 PM
Post: #8
RE: Newton Method
(05-13-2016 01:59 PM)Namir Wrote: The short equation to calculate h was suggested by my Numerical Analysis professor, Brice Carnahan, at the University of Michigan. I have used it for over 30 years and it works like a charm! Carnahan is the co-author of "Applied Numerical Methods", a book that uses FORTRAN code. I bought his book, in Paris, many months before I attended his class at the University of Michigan.

Namir, I plugged your formula into the above program. Using the example equation given above, with guesses of 0 & -1, it reached the same result with the same number of iterations (5). However, with a guess of 1, yours converged faster, cutting the number of iterations from 25 to 18. Interesting!

Thanks!
Regards, Bob
05-14-2016, 10:23 PM
(This post was last modified: 05-14-2016 10:33 PM by Dieter.)
Post: #9
RE: Newton Method
(05-14-2016 07:49 PM)bshoring Wrote: Namir, I plugged your formula into the above program. Using the example equation given above, with guesses of 0 & -1, it reached the same result with the same number of iterations (5). However with a guess of 1, yours converged faster, cutting the number of iterations from 25 to 18. Interesting!

At x=1 the function in the example has a local minimum, i.e. at that point the derivative f'(x) is zero. If f'(x) were calculated exactly, the Newton method would cause a division by zero and the iteration would stop with an error. In our case f'(x) is approximated by a small value: h=0,002 yields f' = 0,004... and h=0,0001 leads to f' = 0,0002. Dividing f(1) by such small values results in a first approximation that is waaaayyyy off: ~125 with h=0,002 resp. ~2500 with h=0,0001 ... and +infinity with an exact derivative. ;-)

In other words: starting with x=1 should throw an error due to division by zero. Since the derivative is not exact, the algorithm continues with a very large value, which is why so many iterations are required.

Dieter
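The numbers Dieter quotes are easy to check. A sketch using the example function and the two step-size rules from the posts above:

```python
f = lambda x: ((x - 1) * x - 1) * x + 0.5   # x^3 - x^2 - x + 0.5
x = 1.0                                      # local minimum: f'(1) = 0 exactly

# Namir's step: h = 0.001*(1+|1|) = 0.002 -> slope estimate ~ 0.004
h1 = 0.001 * (1 + abs(x))
slope1 = (f(x + h1) - f(x)) / h1
x_new1 = x - f(x) / slope1        # first approximation, ~ 126

# Dieter's step: h = 1/10000 -> slope estimate ~ 0.0002
h2 = x / 10000
slope2 = (f(x + h2) - f(x)) / h2
x_new2 = x - f(x) / slope2        # first approximation, ~ 2500
```

Both first approximations are far from any root, which is exactly why so many iterations follow, and the smaller h overshoots further because its slope estimate is closer to the true zero derivative.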
05-15-2016, 02:00 AM
(This post was last modified: 05-15-2016 12:23 PM by Namir.)
Post: #10
RE: Newton Method
Hello Dieter,
Last year I was reading "Practical Numerical Methods for Chemical Engineers", third edition, by Richard Davis. When this author discusses Newton's method he uses the following very interesting variant on page 195:

x(i) = x(i-1) - f(x(i-1)) * f'(x(i-1)) / ([f'(x(i-1))]^2 + delta)

Davis uses delta = 1e-5 to protect against the slope f'(x) (approximated using any one of our techniques) being zero. The expression [f'(x(i-1))]^2 + delta is always positive. I am guessing that the delta term also stabilizes convergence as f'(x) gets very small too.

The above equation requires a few more calculations and probably one more storage register/variable, but adds more stability.
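A Python sketch of Davis's update (my own translation, with the derivative approximated by a forward difference; the test function and delta value are the ones discussed in this thread):

```python
def davis_newton(f, x, delta=1e-5, tol=1e-10, max_iter=200):
    """Newton variant after Davis: x -= f*f' / (f'^2 + delta).

    The denominator is always positive, so a zero or tiny
    derivative can no longer cause a division by zero.
    """
    for _ in range(max_iter):
        h = 0.001 * (1 + abs(x))            # Namir's step-size rule
        fx = f(x)
        slope = (f(x + h) - fx) / h         # forward-difference f'(x)
        dx = fx * slope / (slope * slope + delta)
        x -= dx
        if abs(dx) < tol:
            return x
    return x

f = lambda x: ((x - 1) * x - 1) * x + 0.5   # x^3 - x^2 - x + 0.5
root = davis_newton(f, 2)                    # converges to the root near 1.45
```

Away from trouble spots, slope^2 dominates delta and the update behaves like ordinary Newton; only where the derivative is tiny does delta take over and damp the step.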
05-15-2016, 01:11 PM
Post: #11
RE: Newton Method
(05-15-2016 02:00 AM)Namir Wrote: Last year I was reading "Practical Numerical Methods for Chemical Engineers", third edition, by Richard Davis. When this author discusses Newton's method he uses the following very interesting variant on page 195:

If I read this formula correctly, it calculates the new x-approximation this way:

Code: x_new = x - f(x) · f'(x) / (f'(x)^2 + delta)

If f'(x) becomes zero, this means that the correction term becomes zero as well, so that x does not change from this point on. So the iteration will infinitely continue with the same x. Or it will stop (since x_new = x) and return x although not f(x) but f'(x) is zero.

Is this how it is supposed to work? Or did I get something wrong here?

Dieter
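Dieter's reading can be checked directly. With an exact derivative, f'(1) = 0 makes the Davis correction vanish, so the iteration stalls at x = 1 even though f(1) = -0.5 (my own illustration, using the exact derivative rather than a difference quotient):

```python
f = lambda x: ((x - 1) * x - 1) * x + 0.5    # f(1) = -0.5
fprime = lambda x: (3 * x - 2) * x - 1       # f'(x) = 3x^2 - 2x - 1, f'(1) = 0
delta = 1e-5

x = 1.0
dx = f(x) * fprime(x) / (fprime(x) ** 2 + delta)   # numerator is 0, so dx = 0
x_new = x - dx                                      # x never moves
```

So the stall only disappears in practice because the approximate derivative is never exactly zero, as discussed earlier in the thread.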
05-15-2016, 01:57 PM
(This post was last modified: 05-15-2016 02:01 PM by Namir.)
Post: #12
RE: Newton Method
Davis' formula prevents division-by-zero overflow. That's one aspect. The second aspect is that when the derivative becomes (much) smaller than delta, the value of delta becomes more relevant and the correction to guess x is non-zero, unless f(x) is also zero. I am not saying that Davis' method is fool-proof for each and every f(x). It's an enhancement to the original Newton's algorithm. Of course, how well it works depends on the actual f(x), its roots, and the guesses you supply.

I don't use Davis' formula myself. I am just sharing my find, because I thought it had some merit.

Namir
08-24-2016, 12:08 PM
Post: #13
RE: Newton Method
(05-11-2016 07:54 PM)bshoring Wrote: Can anyone point me to any examples of the Newton or Newton-Raphson method for solving f(x)=0 ? I'm looking for something that can run on the early HP programmables, like the HP-65 or 67.

I have an excellent article/program explicitly relevant to your inquiry, but acquired from JSTOR and thus with redistribution limitations.

Code: A Short Program for Simpson's or Gazdar's* Rule-Integration on Handheld Programmable

"A handheld programmable calculator with as few as fifty steps of memory and eight storage registers can be used to solve almost any problem one usually meets in elementary texts on calculus or numerical analysis. There are, however, some exceptional problems that offer considerable difficulty in writing a short program to solve them within the limited memory. One such problem is that of numerical integration by Simpson's rule ... As an illustration, we give here a program for the HP-25 ... The program can also compute an improper integral ..."

If I'm misreading the redistribution release for this article, please advise via PM.

BEST!
SlideRule
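For anyone curious about the rule the quoted article implements, composite Simpson's rule fits in a few lines in any language. This is a generic textbook sketch, not the HP-25 program from the article:

```python
def simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b] with n subintervals (n even)."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)  # 4,2,4,...,2,4 weights
    return total * h / 3

# Simpson's rule is exact for cubics: the integral of x^3 on [0,1] is 1/4
```

Fifty program steps is indeed tight for this on an HP-25, since the weights alternate and the loop bookkeeping costs steps of its own.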
08-24-2016, 12:20 PM
Post: #14
RE: Newton Method
08-24-2016, 08:37 PM
(This post was last modified: 08-24-2016 08:38 PM by Csaba Tizedes.)
Post: #15
RE: Newton Method
(08-24-2016 12:20 PM)Pekis Wrote: Hello,

Yes, yes, I really like it!

Csaba
08-25-2016, 04:47 AM
Post: #16
RE: Newton Method
Interesting responses.

Just as an FYI: Newton's method is shown in the 19C & 55 Math application manuals. The secant method was used in the 67 standard pack. 30-some years ago I had a class called Numerical Techniques; we used FORTRAN. The class covered Newton-Raphson, but for the life of me I can't remember the book.
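For comparison with the Newton programs above, the secant method used in the 67 Standard Pac needs no derivative at all: it takes the slope through the last two iterates instead. A generic sketch (not the Pac's listing):

```python
def secant(f, x0, x1, tol=1e-10, max_iter=100):
    """Secant method: slope from the two most recent points."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        dx = f1 * (x1 - x0) / (f1 - f0)   # secant correction term
        x0, f0 = x1, f1                    # shift the point pair
        x1 = x1 - dx
        f1 = f(x1)
        if abs(dx) < tol:
            return x1
    return x1

f = lambda x: ((x - 1) * x - 1) * x + 0.5   # example function from this thread
root = secant(f, 0, 0.1)
```

It converges a little slower than Newton per step, but each step needs only one new function evaluation, which matters on a machine where f(x) is the expensive part.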
08-25-2016, 05:00 AM
(This post was last modified: 08-25-2016 05:01 AM by Namir.)
Post: #17
RE: Newton Method
(08-25-2016 04:47 AM)Duane Hess Wrote: Interesting responses.

Was it "Applied Numerical Methods" by Carnahan, Luther, and Wilkes? This book has a lot of good FORTRAN code. Carnahan was my professor in 1979 at the University of Michigan. I just ran into him at JFK this past June 28 when we were both flying to Nice, France.
08-25-2016, 04:19 PM
Post: #18
RE: Newton Method
(08-25-2016 05:00 AM)Namir Wrote:
(08-25-2016 04:47 AM)Duane Hess Wrote: Interesting responses.

I thought that rang a bell, but not quite. I used "Applied Numerical Analysis" by Gerald; the hardcover book has a distinctive red, white and blue cover. I'm surprised to see the copyright date is 1970, though I used it at Norwich Univ also about 1979. Good Fortran examples included, and also good coverage of how to deal with instabilities and boundary conditions.

I also seem to recall that many (most it seems, but that may only be my memory) of the homework assignments were situations that resulted in these instabilities. It was irritating at the time, but turned out to be good preparation for real-life problems.

--Bob Prosperi
08-27-2016, 03:09 AM
Post: #19
RE: Newton Method
Great discussion and resources! Thanks everyone!
Regards, Bob
08-27-2016, 06:31 AM
Post: #20
RE: Newton Method
Namir:
Hi. I've been thinking and thinking of the name of the book, Googling, not getting anywhere. The title you suggested, and Carnahan along with other authors, rang a bell. It seems the name was very similar to "Applied Numerical Methods" but had "with FORTRAN" or some other addition implying digital computation. However, I recall FORTRAN being in the title.

It was a paperback, about 3/8-inch thick, smaller size (i.e. 5" or 6" x 7" to 8"), with white, green & grey on the cover. Seems the bottom half of the cover had green/grey or grey/white "streaks", sorta like sloppily painted, semi-runny vertical lines. Don't recall the top half, other than that's where the title was, either in white or black letters.

The intent of the book (in my opinion) was to give a reasonable math-based, somewhat "how-to" description showing the logic & pitfalls of various numerical methods, specifically in a manner useful to a FORTRAN programmer. Further, I don't believe the author intended to make mathematicians/statisticians out of programmers, but to give a reasonable foundation where a FORTRAN programmer could intelligently assist a math/stat researcher or an engineer with digital computation.

The course was called Numerical Techniques (as I recall) at Kearney State College. The stat department had a full-blown Numerical Analysis class for stat majors. The techniques class historically had small enrollment & was considered for elimination. The class was applied in nature. When I took it they changed the class to 1/2 numerical techniques & 1/2 advanced FORTRAN programming, and it was dually listed under stat & comp-sci. It lasted a few more years and was eliminated, partially due to low enrollment, but mainly because, after faculty moved on to better things, there was no one who could fill the intent of the class well.