# HP Forums

Full Version: Newton or Secant?
I have always been partial to Newton's root-seeking method. For many decades it has been my go-to method. I use Bisection in combination with, or instead of, Newton's method. I have always regarded the Secant method, which replaces Newton's derivative with a rough approximation, as mathematically inferior to Newton's method. I have to admit that I was surprised to see at least one HP calculator Stat Pac (and maybe the Solver too?) use the Secant method.

Recently, I decided to sit down and compare Newton and Secant up close and personal. The Secant method requires one function call per iteration. Newton's method requires at least two function calls (one for the function and one for the approximation of the slope) per iteration. How the two methods compare depends on the function whose root you seek and on the initial guesses (and how close they are to the true root). The tests I conducted showed that the Secant method does very well, requiring fewer function calls than Newton's method to attain a refined guess for the root. In addition, I saw several cases where the Secant method provided the final refined guess for the root in fewer iterations--a DOUBLE WIN!
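A comparison along these lines can be sketched as follows. This is an illustrative test, not Namir's actual one: the cubic f, the tolerances, and the starting guesses are arbitrary choices. It counts function evaluations for Newton's method with a forward-difference slope (two calls per iteration) versus the secant method (one call per iteration).

```python
calls = 0  # global counter of function evaluations

def f(x):
    global calls
    calls += 1
    return x**3 - 2*x - 5  # classic test cubic; root near 2.0945515

def newton_numeric(x, tol=1e-12, h=1e-7, max_iter=100):
    """Newton's method with a forward-difference slope: 2 calls per iteration."""
    for _ in range(max_iter):
        fx = f(x)                    # call 1: the function value
        slope = (f(x + h) - fx) / h  # call 2: approximate derivative
        step = fx / slope
        x -= step
        if abs(step) < tol:
            break
    return x

def secant(x0, x1, tol=1e-12, max_iter=100):
    """Secant method: one call per iteration after the first point."""
    f0 = f(x0)
    for _ in range(max_iter):
        f1 = f(x1)
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0 = f1
        if abs(x1 - x0) < tol:
            break
    return x1

calls = 0
root_newton = newton_numeric(3.0)
calls_newton = calls

calls = 0
root_secant = secant(3.0, 2.5)
calls_secant = calls

print(f"Newton: root={root_newton:.10f}, calls={calls_newton}")
print(f"Secant: root={root_secant:.10f}, calls={calls_secant}")
```

On this example the secant method reaches the same refined root with fewer total function calls, matching the observation above; of course, the outcome depends on the function and the guesses.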

So, as a new convert, I say don't discard the Secant method. Give it a try and see if it does well for the problem you are solving. Only if it does not cut the mustard should you fall back on Newton, Halley, or one of my many new algorithms (<-- cheap plug :-)).

Namir
The original HP-34C solver was based on the secant method. I'd be surprised if all subsequent (HP) solvers weren't basically the same.

Newton and Halley are great if you know the derivative(s).
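For what it's worth, here is a small sketch of that case (my own illustration, not code from any HP solver): Newton and Halley steps when the derivatives are known analytically, applied to an arbitrary test cubic.

```python
# Illustrative sketch: Newton vs. Halley iteration with analytic
# derivatives. The cubic and its starting point are arbitrary examples.
f   = lambda x: x**3 - 2*x - 5
df  = lambda x: 3*x**2 - 2      # first derivative
d2f = lambda x: 6*x             # second derivative

def newton_step(x):
    # x - f/f' : quadratic convergence near a simple root
    return x - f(x) / df(x)

def halley_step(x):
    # x - 2*f*f' / (2*f'^2 - f*f'') : cubic convergence near a simple root
    fx, dfx = f(x), df(x)
    return x - 2*fx*dfx / (2*dfx**2 - fx*d2f(x))

x_newton = x_halley = 3.0
for _ in range(4):
    x_newton = newton_step(x_newton)
    x_halley = halley_step(x_halley)

print(x_newton, x_halley)  # Halley's cubic rate usually lands closer per step
```

Halley needs the second derivative as well, so the trade-off against the secant method's single function call per iteration is even sharper.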

I'm quite partial to Brent's method.

Pauli
There are several texts on the subject and there should be some stuff on the internet. Basically, Newton's method is faster per step: if the approximation is near enough to a root, the number of correct digits roughly doubles each iteration (order 2), while the secant method multiplies it by the golden ratio, about 1.618. On the other hand, Newton's method nominally takes 2 function evaluations per step while the secant method only uses 1 function evaluation. It's possible that the derivative used in Newton's method can be had for little cost, which would be an advantage for Newton's method. Normally the convergence region of the secant method is wider than that of Newton's.
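Those rates can be seen numerically. The sketch below (an arbitrary example, f(x) = x² − 2 with root √2) prints the number of correct digits after each step: for Newton it roughly doubles, for the secant method it grows by a factor of about 1.618.

```python
# Illustrating the convergence orders: Newton roughly doubles the number
# of correct digits per step; the secant method multiplies it by ~1.618.
from math import log10, sqrt

f = lambda x: x*x - 2   # arbitrary example; root is sqrt(2)
root = sqrt(2)

x = 1.0
newton_digits = []
for _ in range(5):
    x = x - f(x) / (2*x)            # Newton step; f'(x) = 2x
    err = abs(x - root)
    if err > 0:                     # guard: error may hit exactly zero
        newton_digits.append(round(-log10(err), 1))

x0, x1 = 1.0, 2.0
f0, f1 = f(x0), f(x1)
secant_digits = []
for _ in range(7):
    x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)  # secant step
    f0, f1 = f1, f(x1)
    err = abs(x1 - root)
    if err > 0:
        secant_digits.append(round(-log10(err), 1))

print("Newton correct digits per step:", newton_digits)
print("Secant correct digits per step:", secant_digits)
```
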

There is another (poorer) method that is sometimes called the secant method, but its proper name is Regula Falsi. In this case, one does not use the last two iterates as the secant method does, but rather the last two iterates whose function values have different signs.
(04-03-2017 12:40 AM)ttw Wrote: [ -> ]There is another (poorer) method that is sometimes called the secant method, but its proper name is Regula Falsi. In this case, one does not use the last two iterates as the secant method does, but rather the last two iterates whose function values have different signs.

Regula Falsi is a root-bracketing method, which can run into trouble when only one end of the root-bracketing interval converges to the root. I found an interesting article on the Internet titled "A family of regula falsi root-finding methods" by Sérgio Galdino. This article shows several remedies for Regula Falsi's problem. Each remedy allows the root-bracketing interval to properly shrink around the root.
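One well-known member of that family is the Illinois variant. The sketch below is my own illustration (not code from Galdino's article): whenever the same endpoint survives two iterations in a row, its stored function value is halved, which forces the stagnant end of the bracket to move and lets the interval shrink on both sides.

```python
def illinois(f, a, b, tol=1e-12, max_iter=200):
    """Illinois variant of regula falsi on a bracketing interval [a, b]."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("root must be bracketed: f(a) and f(b) must differ in sign")
    c = b
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)  # false-position (secant) point
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fc * fb < 0:
            a, fa = b, fb   # sign change between c and b: bracket moved normally
        else:
            fa *= 0.5       # endpoint a retained again: halve fa to unstick it
        b, fb = c, fc
    return c

# Example use on an arbitrary test cubic bracketed on [2, 3]:
r = illinois(lambda x: x**3 - 2*x - 5, 2.0, 3.0)
print(r)
```

Plain regula falsi on the same cubic would keep one endpoint frozen for many iterations; the halving trick is what restores superlinear shrinkage of the bracket.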
Reference URL's
• HP Forums: https://www.hpmuseum.org/forum/index.php