Newton Method
05-11-2016, 07:54 PM
Post: #1
Newton Method
Can anyone point me to any examples of the Newton or Newton-Raphson method for solving f(x)=0 ? I'm looking for something that can run on the early HP programmables, like the HP-65 or 67.

There are Solve for f(x)=0 programs in the Standard packs, but I don't know if they employ the Newton method.

Thanks!


Regards,
Bob
05-11-2016, 08:40 PM (This post was last modified: 05-11-2016 08:49 PM by Dieter.)
Post: #2
RE: Newton Method
(05-11-2016 07:54 PM)bshoring Wrote:  Can anyone point me to any examples of the Newton or Newton-Raphson method for solving f(x)=0 ? I'm looking for something that can run on the early HP programmables, like the HP-65 or 67.

That's quite straightforward. Here is a short example that I just tried (slightly modified) on the 34s. It's not elegant, but it does its job. It should run on the 67/97 as well as the 65 (with some steps unmerged). The key is the evaluation of f'(x), which is calculated via [f(x+h) - f(x)]/h, where h is x/10000 (or 1/10000 if x=0).

Code:
001 LBL A
002 STO 0   ' x
003 LBL 1
004 RCL 0
005 GSB E
006 STO 1   ' store f(x)
007 RCL 0
008 x=0?
009 e^x     ' turn x = 0 into 1
010 EEX
011 4
012 /
013 STO 2   ' store h = x/10000
014 RCL 0
015 +
016 GSB E   ' get f(x+h)
017 RCL 1
018 -       ' f(x+h) - f(x)
019 RCL 2
020 /       ' ... / h ~= f'(x)
021 RCL 1
022 x<>y
023 /       ' f(x) / f'(x)
024 STO- 0  ' adjust x
025 RCL 0
026 R/S     ' show new x
027 GTO 1   ' next iteration

028 LBL E
... your
... f(x)
... here
... RTN

R0: x
R1: f(x)
R2: h

Place your f(x) code at label E. The argument x is expected in the X-register.
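For readers following along on a computer, the same scheme can be sketched in Python (my own transcription, not part of the original program; function names and the tolerance are my choices):

```python
def newton(f, x, tol=1e-9, max_iter=100):
    """Newton iteration with a forward-difference slope, mirroring the
    RPN program above: h = x/10000, or 1/10000 if x is zero."""
    for _ in range(max_iter):
        fx = f(x)
        h = (x if x != 0 else 1.0) / 10000.0   # step for the difference quotient
        slope = (f(x + h) - fx) / h            # [f(x+h) - f(x)] / h ~= f'(x)
        step = fx / slope                      # f(x) / f'(x)
        x -= step                              # adjust x
        if abs(step) < tol:                    # correction term small enough?
            break
    return x

# Root of f(x) = x^3 - x^2 - x + 0.5, starting at 0 (converges near 0.403032):
newton(lambda x: x**3 - x**2 - x + 0.5, 0.0)
```

As in the RPN version, a zero slope (e.g. starting exactly at a local extremum) would cause a division error here.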

Example: f(x) = x^3 - x^2 - x + 0,5

Using Horner's method this can be coded as

Code:
LBL E
ENTER
ENTER
ENTER
1
-
x
1
-
x
,
5
+
RTN
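For comparison, the same Horner evaluation as a Python sketch (my transcription; the keystrokes above compute the nesting ((x-1)·x - 1)·x + 0,5):

```python
def horner(coeffs, x):
    """Evaluate a polynomial by Horner's rule, with coefficients
    given from the highest power down."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

horner([1.0, -1.0, -1.0, 0.5], 2.0)   # f(2) = 8 - 4 - 2 + 0.5 = 2.5
```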

Enter your first guess for a root of f(x) and press A.
Starting at x=0, this is what you get in FIX 6:

Code:
0 [A] => 0,499995

[R/S] => 0,400000

[R/S] => 0,403030

[R/S] => 0,403032

[R/S] => 0,403032

Pressing [x<>y] shows the last correction term, i.e. an estimate for the accuracy of the current approximation.

The other two roots of this function can be found starting at x = -1 or x = 2.

Dieter
05-12-2016, 12:10 AM
Post: #3
RE: Newton Method
It's working great! Thanks so much.


Regards,
Bob
05-12-2016, 08:47 PM
Post: #4
RE: Newton Method
Dieter,

A better way to calculate h is using something like:

h=0.001*(1+ABS(x))

which does not require testing the value of x.
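In Python terms (a sketch; the constant 0.001 is from the formula above):

```python
def step_size(x):
    """Namir's step size h = 0.001*(1 + |x|): always positive,
    so no special case is needed at x = 0."""
    return 0.001 * (1.0 + abs(x))

step_size(0.0)    # 0.001
step_size(100.0)  # 0.101
```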

Namir
05-12-2016, 09:17 PM
Post: #5
RE: Newton Method
(05-12-2016 08:47 PM)Namir Wrote:  A better way to calculate h is using something like:

h=0.001*(1+ABS(x))

Hmmm.... I do not think this is "better". Consider a function where the derivative changes rapidly near zero, e.g. sin(10^6 · x). Since the smallest possible h is 0,001, it may become much larger than x itself. That's why I prefer defining h as a certain fraction of x.

(05-12-2016 08:47 PM)Namir Wrote:  which does not require testing the value of x.

Well, that's done in two steps. And thus even shorter than "ABS 1 +". ;-)

Dieter
05-13-2016, 01:59 PM (This post was last modified: 05-13-2016 02:03 PM by Namir.)
Post: #6
RE: Newton Method
The short equation to calculate h was suggested by my Numerical Analysis professor, Brice Carnahan, at the University of Michigan. I have used it for over 30 years and it works like a charm! Carnahan is the co-author of "Applied Numerical Methods", a book that uses FORTRAN code. I bought his book, in Paris, many months before I attended his class at the University of Michigan.

The equation for calculating h is very convenient in high-level programming languages.

Namir
05-14-2016, 06:36 AM
Post: #7
RE: Newton Method
Thanks all for the great ideas!


Regards,
Bob
05-14-2016, 07:49 PM
Post: #8
RE: Newton Method
(05-13-2016 01:59 PM)Namir Wrote:  The short equation to calculate h was suggested by my Numerical Analysis professor, Brice Carnahan, at the University of Michigan. I have used it for over 30 years and it works like a charm! Carnahan is the co-author of "Applied Numerical Methods" a book that uses FORTRAN code. I bought his book, in Paris, many months before I attended his class at the University of Michigan.

The equation for calculating h is very convenient in high-level programming languages.

Namir

Namir, I plugged your formula into the above program. Using the example equation given above, with guesses of 0 & -1, it reached the same result with the same number of iterations (5). However with a guess of 1, yours converged faster, cutting the number of iterations from 25 to 18. Interesting!

Thanks!


Regards,
Bob
05-14-2016, 10:23 PM (This post was last modified: 05-14-2016 10:33 PM by Dieter.)
Post: #9
RE: Newton Method
(05-14-2016 07:49 PM)bshoring Wrote:  Namir, I plugged your formula into the above program. Using the example equation given above, with guesses of 0 & -1, it reached the same result with the same number of iterations (5). However with a guess of 1, yours converged faster, cutting the number of iterations from 25 to 18. Interesting!

At x=1 the function in the example has a local minimum, i.e. at that point the derivative f'(x) is zero. If f'(x) were calculated exactly, the Newton method would cause a division by zero and the iteration would stop with an error.

In our case f'(x) is approximated by a small value: h=0,002 yields f'=0,004... and h=0,0001 leads to f' = 0,0002. Dividing f(1) by such small values results in a first approximation that is waaaayyyy off: ~125 with h=0,002 or ~2500 with h=0,0001 ...and +infinity with an exact derivative. ;-)

In other words: starting with x=1 should throw an error due to division by zero. Since the derivative is not exact, the algorithm continues with a very large value. Which is why so many iterations are required.

Dieter
05-15-2016, 02:00 AM (This post was last modified: 05-15-2016 12:23 PM by Namir.)
Post: #10
RE: Newton Method
Hello Dieter,

Last year I was reading "Practical Numerical Methods for Chemical Engineers", third edition, by Richard Davis. When this author discusses Newton's method he uses the following very interesting variant on page 195:

x(i) = x(i-1) - f(x(i-1)) * f'(x(i-1)) / ([f'(x(i-1))]^2 + delta)

Davis uses delta = 1e-5 to protect against the slope f'(x) (approximated using any one of our techniques) being zero. The expression [f'(x(i-1))]^2 + delta is always positive. I am guessing that the delta term also stabilizes convergence as f'(x) gets very small too.

The above equation requires a few more calculations and probably one more storage register/variable, but adds more stability.
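Here is a hedged Python sketch of Davis' variant (the forward-difference slope and its step size are my choices for illustration, not from the book):

```python
def newton_davis(f, x, delta=1e-5, tol=1e-9, max_iter=200):
    """Davis' regularized Newton step: the divisor f'(x)^2 + delta is
    always positive, so a zero slope cannot raise a division error."""
    for _ in range(max_iter):
        fx = f(x)
        h = 0.001 * (1.0 + abs(x))        # step for the difference quotient
        slope = (f(x + h) - fx) / h       # approximate f'(x)
        step = fx * slope / (slope * slope + delta)
        x -= step                         # regularized correction
        if abs(step) < tol:
            break
    return x
```

Note that when slope is exactly zero, the step is zero as well, so the iteration stalls rather than diverges.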
05-15-2016, 01:11 PM
Post: #11
RE: Newton Method
(05-15-2016 02:00 AM)Namir Wrote:  Last year I was reading "Practical Numerical Methods for Chemical Engineers", third edition, by Richard Davis. When this author discusses Newton's method he uses the following very interesting variant on page 195:

x(i) = x(i-1) - f(x(i-1)) * f'(x(i-1)) / ([f'(x(i-1))]^2 + delta)

If I read this formula correctly, it calculates the new x-approximation this way:

Code:
                f(x) · f'(x)
x_new  =  x  -  ------------
                f'(x)² + Δ

If f'(x) becomes zero, this means that the correction term becomes zero as well, so that x does not change from this point on. So the iteration will continue indefinitely with the same x. Or it will stop (since x_new = x) and return x even though it is f'(x), not f(x), that is zero. Is this how it is supposed to work? Or did I get something wrong here?

Dieter
05-15-2016, 01:57 PM (This post was last modified: 05-15-2016 02:01 PM by Namir.)
Post: #12
RE: Newton Method
Davis' formula prevents division-by-zero overflow. That's one aspect. The second aspect is that when the derivative becomes (much) smaller than delta, the value of delta becomes more relevant and the correction to the guess x is non-zero, unless f(x) is also zero. I am not saying that Davis' method is fool-proof for each and every f(x). It's an enhancement to the original Newton algorithm. Of course, how well it works depends on the actual f(x), its roots, and the guesses you supply.

I don't use Davis' formula myself. I am just sharing my find, because I thought it had some merit.

Namir
08-24-2016, 12:08 PM
Post: #13
RE: Newton Method
(05-11-2016 07:54 PM)bshoring Wrote:  Can anyone point me to any examples of the Newton or Newton-Raphson method for solving f(x)=0 ? I'm looking for something that can run on the early HP programmables, like the HP-65 or 67.
There are Solve for f(x)=0 programs in the Standard packs, but I don't know if they employ the Newton method.
Thanks!

I have an excellent article/program explicitly relevant to your inquiry, but it was acquired from JSTOR and thus has redistribution limitations.
Code:
A Short Program for Simpson's or Gazdar's* Rule-Integration on Handheld Programmable
Calculators
Author(s): Abdus Sattar Gazdar
Source: The Two-Year College Mathematics Journal, Vol. 9, No. 3 (Jun., 1978), pp. 182-185 Published 
by: Mathematical Association of America
Stable URL: http://www.jstor.org/stable/3026695
Accessed: 31/12/2014 07:29
Quoting from the article:
"A handheld programmable calculator with as few as fifty steps of memory and eight storage registers can be used to solve almost any problem one usually meets in elementary texts on calculus or numerical analysis.
There are, however, some exceptional problems that offer considerable difficulty in writing a short program to solve them within the limited memory. One such problem is that of numerical integration by Simpson's rule ... As an illustration, we give here a program for the HP-25 ... The program can also compute an improper integral ...
"

If I'm misreading the redistribution release for this article, please advise via PM.

BEST!
SlideRule
08-24-2016, 12:20 PM
Post: #14
RE: Newton Method
Hello,

Don't forget the secant (quasi-Newton) method, fast and efficient :-)
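For the curious, a minimal secant-method sketch in Python (names and tolerances are mine):

```python
def secant(f, x0, x1, tol=1e-9, max_iter=100):
    """Secant method: like Newton, but the slope comes from the last
    two iterates, so no derivative (exact or numerical) is needed."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        slope = (f1 - f0) / (x1 - x0)     # slope through the last two points
        x2 = x1 - f1 / slope              # secant step
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1                   # shift the two-point window
        x1, f1 = x2, f(x2)
    return x1
```

Only one new function evaluation per iteration is needed, which matters on a slow calculator.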
08-24-2016, 08:37 PM (This post was last modified: 08-24-2016 08:38 PM by Csaba Tizedes.)
Post: #15
RE: Newton Method
(08-24-2016 12:20 PM)Pekis Wrote:  Hello,

Don't forget the secant (quasi-Newton) method, fast and efficient :-)

Yes, yes, I really like it! ;-)

Csaba
08-25-2016, 04:47 AM
Post: #16
RE: Newton Method
Interesting responses.

Just as a FYI:

Newton's method is shown in the 19C & 55 Math application manuals. The secant method was used in the 67 standard pack.

30-some years ago I had a class called Numerical Techniques; we used FORTRAN. The class covered Newton-Raphson, but for the life of me I can't remember the book.
08-25-2016, 05:00 AM (This post was last modified: 08-25-2016 05:01 AM by Namir.)
Post: #17
RE: Newton Method
(08-25-2016 04:47 AM)Duane Hess Wrote:  Interesting responses.

Just as a FYI:

Newton's method is shown in the 19C & 55 Math application manuals. The secant method was used in the 67 standard pack.

30-some years ago I had a class called Numerical Techniques; we used FORTRAN. The class covered Newton-Raphson, but for the life of me I can't remember the book.

Was it "Applied Numerical Methods" by Carnahan, Luther, and Wilkes? This book has a lot of good FORTRAN code. Carnahan was my professor in 1979 at the University of Michigan. I just ran into him at JFK this past June 28 when we were both flying to Nice, France.
08-25-2016, 04:19 PM
Post: #18
RE: Newton Method
(08-25-2016 05:00 AM)Namir Wrote:  
(08-25-2016 04:47 AM)Duane Hess Wrote:  Interesting responses.

Just as a FYI:

Newton's method is shown in the 19C & 55 Math application manuals. The secant method was used in the 67 standard pack.

30-some years ago I had a class called Numerical Techniques; we used FORTRAN. The class covered Newton-Raphson, but for the life of me I can't remember the book.

Was it "Applied Numerical Methods" by Carnahan, Luther, and Wilkes? This book has a lot of good FORTRAN code. Carnahan was my professor in 1979 at the University of Michigan. I just ran into him at JFK this past June 28 when we were both flying to Nice, France.

I thought that rang a bell, but not quite. I used "Applied Numerical Analysis" by Gerald; the hardcover book has a distinctive red, white and blue cover. I'm surprised to see the copyright date is 1970, though I used it at Norwich Univ also about 1979. Good Fortran examples included, and also good coverage of how to deal with instabilities and boundary conditions.

I also seem to recall that many (most it seems, but that may only be my memory) of the homework assignments were situations that resulted in these instabilities. Was irritating at the time, but turned out to be good preparation for real life problems.

--Bob Prosperi
08-27-2016, 03:09 AM
Post: #19
RE: Newton Method
Great discussion and resources! Thanks everyone!


Regards,
Bob
08-27-2016, 06:31 AM
Post: #20
RE: Newton Method
Namir:

Hi. I've been thinking and thinking of the name of the book, and Googling, but not getting anywhere. The title you suggested, and Carnahan along with other authors, rang a bell.

It seems the name was very similar to "Applied Numerical Methods" but had "with FORTRAN" or some other addition implying digital computation. However, I recall FORTRAN being in the title.

It was a paperback, about 3/8-inch thick, smaller size (i.e. 5" or 6" x 7" to 8"), had white, green & grey on the cover. Seems the bottom half of the cover had green/grey or grey/white "streaks". Sorta like sloppy painted semi-runny vertical lines. Don't recall the top half, other than that's where the title was, either in white or black letters.

The intent of the book (in my opinion) was to give a reasonable math based somewhat "how-to" description showing the logic & pitfalls of various numerical methods specifically in a manner useful to a FORTRAN programmer. Further, I don't believe the author intended to make mathematicians/statisticians out of programmers, but to give a reasonable foundation where a FORTRAN programmer could intelligently assist a math/stat researcher or an engineer with digital computation.

The course was called Numerical Techniques (as I recall) at Kearney State College. The stat department had a full-blown Numerical Analysis class for stat majors. The techniques class historically had small enrollment and was considered for elimination. The class was applied in nature. When I took it they changed the class to 1/2 numerical techniques and 1/2 advanced FORTRAN programming, dually listed under stat and comp-sci. It lasted a few more years and was then eliminated, partially due to low enrollment, but mainly because, after faculty moved on to better things, there was no one left who could fulfill the intent of the class well.