The Museum of HP Calculators

HP Forum Archive 15

[ Return to Index | Top of Index ]

New root-seeking algorithm
Message #1 Posted by Namir on 25 Nov 2005, 11:21 p.m.

I presented this new root-seeking algorithm (I call it the Quartile algorithm) at the HHC2005 conference. It is a root-bracketing method that improves on the Bisection method:

Given: Root bracketing guesses A and B
       Tolerance Tolr

P-Code:

Let alpha = 0.25
Let FA = F(A)
Let FB = F(B)
Do
  If ABS(FA) < ABS(B) Then
    Xm = A + alpha * (B - A)
  Else
    Xm = A + (1 - alpha) * (B - A)
  End

  Let Fm = F(Xm)
  If Fm * FA > 0 Then
    A = Xm
    FA = Fm
  Else
    B = Xm
    FB = Fm
  End
Loop Until Abs(A - B) < Tolr
Root = (A + B) / 2

J. P. Moreau has implemented the Quartile algorithm in BASIC, C++, and other languages. The BASIC code is found here

      
Re: New root-seeking algorithm
Message #2 Posted by Marcus von Cube, Germany on 26 Nov 2005, 5:05 a.m.,
in response to message #1 by Namir

Namir,

Quote:
Do
  If ABS(FA) < ABS(B) Then

I assume you mean:

Do
  If ABS(FA) < ABS(FB) Then

Can you elaborate a little on the heuristics behind the algorithm?

Marcus

            
Re: New root-seeking algorithm
Message #3 Posted by Namir on 26 Nov 2005, 7:22 a.m.,
in response to message #2 by Marcus von Cube, Germany

You are correct. Here is the corrected p-code:

P-Code:

Let alpha = 0.25
Let FA = F(A)
Let FB = F(B)
Do
  If ABS(FA) < ABS(FB) Then
    Xm = A + alpha * (B - A)
  Else
    Xm = A + (1 - alpha) * (B - A)
  End

  Let Fm = F(Xm)
  If Fm * FA > 0 Then
    A = Xm
    FA = Fm
  Else
    B = Xm
    FB = Fm
  End
Loop Until Abs(A - B) < Tolr
Root = (A + B) / 2
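For reference, a minimal runnable sketch of the corrected p-code in Python (the function and argument names, and the tolerance default, are my own, not from the post):

```python
def quartile_root(f, a, b, tol=1e-10, alpha=0.25):
    """Quartile root-bracketing method; alpha = 0.5 reduces to Bisection."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must bracket a root")
    while abs(a - b) >= tol:
        # Probe closer to the endpoint whose function value is smaller in magnitude.
        if abs(fa) < abs(fb):
            xm = a + alpha * (b - a)
        else:
            xm = a + (1 - alpha) * (b - a)
        fm = f(xm)
        # Keep the half-interval that still brackets the root.
        if fm * fa > 0:
            a, fa = xm, fm
        else:
            b, fb = xm, fm
    return (a + b) / 2

# Example: root of x^2 - 2 between 1 and 2, i.e. an approximation of sqrt(2)
print(quartile_root(lambda x: x * x - 2, 1.0, 2.0))
```

Since the kept sub-interval is at most (1 - alpha) times the old one, the bracket shrinks by at least a factor of 0.75 per step, so the loop always terminates.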

                  
Re: New root-seeking algorithm
Message #4 Posted by Namir on 26 Nov 2005, 2:08 p.m.,
in response to message #3 by Namir

The two most common root-bracketing methods are:

1. The Bisection method. This method compares the signs of the function values.
2. The Regula Falsi (False Position) method. This method uses the function values as weights to determine the new guess.

The Quartile method compares the absolute function values to determine the next guess. Thus, its approach lies somewhere in between the above two methods. If you look at the p-code and change alpha to 0.5, you get the Bisection method.

Thus, the Bisection method is a special case of the Quartile method, where the new guess is systematically taken as the middle of the root-bracketing interval.

Edited: 26 Nov 2005, 2:09 p.m.

                        
Re: New root-seeking algorithm
Message #5 Posted by Patrick on 3 Dec 2005, 10:59 p.m.,
in response to message #4 by Namir

Sorry, Namir, I don't mean to be impolite, but so what?

It is relatively simple to develop umpteen variations on a theme, the question is whether or not one of these variations is actually better in some way.

Are there any analyses which motivate the choice of alpha = 0.25? In what categories of problems does this algorithm work better than others? How much better? Does it converge more frequently than others? Is it faster?

                              
Re: New root-seeking algorithm
Message #6 Posted by Namir on 10 Dec 2005, 9:24 a.m.,
in response to message #5 by Patrick

The Bisection method is a reliable method despite its slowness. Often, complex root-seeking implementations use a combination of Newton's method (or similar/better methods) with Bisection as a backup plan in case Newton's method yields guesses that stray away.

The Quartile method is a root-bracketing method that is better than Bisection. The value for alpha can be in the range of 0.25 to 0.3. The Quartile algorithm uses a slightly different approach--it compares the absolute values of the function at the ends of the interval [A, B]. The Quartile method is not a variant of Bisection; Bisection is a special case of the Quartile method (setting alpha to 0.5).

The method of False Position is the next step up among root-bracketing methods; it uses the function values as weights. This algorithm does not systematically shrink the interval containing the root. To stop iterating, you compare the current new guess with the last new guess.
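A sketch of False Position as just described (names and defaults are illustrative); note the stopping test on successive guesses rather than on the interval width, since one endpoint may never move:

```python
def false_position(f, a, b, tol=1e-10, max_iter=200):
    """False Position (Regula Falsi): the new guess is the secant-line
    root, i.e. the endpoints weighted by their function values."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must bracket a root")
    x_old = a
    for _ in range(max_iter):
        x = (a * fb - b * fa) / (fb - fa)  # function values act as weights
        fx = f(x)
        # Stop on successive guesses: the bracket need not shrink to zero.
        if abs(x - x_old) < tol:
            return x
        if fx * fa > 0:
            a, fa = x, fx
        else:
            b, fb = x, fx
        x_old = x
    return x
```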

If you are going to resort to the Bisection method as the main or backup method, then you can replace it with the Quartile method to have fewer iterations. The reduction in iterations depends on the function and the initial root-bracketing interval.

Newton's method is popular in root solving. It uses calls to the function and its first derivative. Halley's method is generally better than Newton's; it calls the function and its first two derivatives. The Householder method is even faster, but calls the function and its first three derivatives! By using function calls to approximate the derivatives, I found that Halley's method is in general the optimum method (considering iterations and function calls).
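Halley's iteration with central-difference derivative approximations might be sketched as follows (the step-size recipe h = 0.01*(1 + |x|) is borrowed from the Householder pseudo-code later in the thread; all names and defaults are my own):

```python
def halley(f, x, tol=1e-10, max_iter=100):
    """Halley's method: x -= 2*f*f' / (2*f'^2 - f*f''),
    with f' and f'' approximated by central differences
    (three function calls per iteration)."""
    for _ in range(max_iter):
        h = 0.01 * (1 + abs(x))
        f0 = f(x)
        fp, fm = f(x + h), f(x - h)
        d1 = (fp - fm) / (2 * h)           # ~ f'(x)
        d2 = (fp - 2 * f0 + fm) / (h * h)  # ~ f''(x)
        diff = 2 * f0 * d1 / (2 * d1 * d1 - f0 * d2)
        x -= diff
        if abs(diff) < tol:
            return x
    return x
```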

Patrick, I welcome your suggestions for root-seeking algorithms (both variants of other methods and original algorithms). I think the subject is fascinating and I am interested in all suggestions.

Namir

Edited: 10 Dec 2005, 9:28 a.m.

                                    
Re: New root-seeking algorithm
Message #7 Posted by ECL on 10 Dec 2005, 10:26 p.m.,
in response to message #6 by Namir

Namir,

I too am interested in root methods. I am not familiar w/ Householder method.

I am dealing with simple methods (modified false position, Newton-Raphson...). I prefer non-symbolic derivative evaluation since I primarily use the 33s, although this can be overcome with a divided-difference approach.

                                          
Re: New root-seeking algorithm
Message #8 Posted by Namir on 10 Dec 2005, 10:39 p.m.,
in response to message #7 by ECL

The Householder method is relatively new. Like Newton's and Halley's methods, it is based on the Taylor expansion. The Householder method includes up to the third derivative.

Attributes of the method:

+ Very fast convergence rate
+ Requires first, second, and third function derivatives!
+ Using derivative approximation methods results in five function calls per iteration
+ Compared to other methods, this method has fewer iterations but a high number of function calls
+ Basic Householder iteration uses:

X = X - f(X) * {[f'(X)^2 - f(X) f''(X) / 2] / [f'(X)^3 - f(X) f'(X) f''(X) + f'''(X) f(X)^2 / 6]}

where:
  f(x) is the target function (f(x) = 0)
  f'(x) is the first derivative
  f''(x) is the second derivative
  f'''(x) is the third derivative

Pseudo-code:

Obtain initial guess x and Tolerance
Obtain or set value for MaxIterations
Clear Diverge flag
Set IterCount = 0
Loop
  Increment IterCount
  If IterCount > MaxIterations Then Set Diverge flag and exit loop
  h = 0.01 * (1 + |x|)
  f0 = f(x)
  fp1 = f(x + h)
  fp2 = f(x + 2h)
  fm1 = f(x - h)
  fm2 = f(x - 2h)
  D1 = (fp1 - fm1) / (2h)
  D2 = (fp1 - 2 f0 + fm1) / h^2
  D3 = (fp2 - 2 fp1 + 2 fm1 - fm2) / (2 h^3)
  diff = f0 * (D1^2 - f0 * D2 / 2) / (D1^3 - f0 * D1 * D2 + D3 * f0^2 / 6)
  x = x - diff
Until |diff| < Tolerance

Return root as x and Diverge flag
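A runnable Python rendering of the pseudo-code above (the function name and defaults are mine; the divergence flag is returned alongside the root):

```python
def householder(f, x, tol=1e-10, max_iter=100):
    """Householder's method, with the first three derivatives approximated
    by central differences -- five function calls per iteration."""
    diverge = False
    for _ in range(max_iter):
        h = 0.01 * (1 + abs(x))
        f0 = f(x)
        fp1, fp2 = f(x + h), f(x + 2 * h)
        fm1, fm2 = f(x - h), f(x - 2 * h)
        d1 = (fp1 - fm1) / (2 * h)                          # ~ f'
        d2 = (fp1 - 2 * f0 + fm1) / (h * h)                 # ~ f''
        d3 = (fp2 - 2 * fp1 + 2 * fm1 - fm2) / (2 * h**3)   # ~ f'''
        diff = f0 * (d1 * d1 - f0 * d2 / 2) / (
            d1**3 - f0 * d1 * d2 + d3 * f0 * f0 / 6)
        x -= diff
        if abs(diff) < tol:
            break
    else:
        diverge = True  # loop exhausted without meeting the tolerance
    return x, diverge
```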

Edited: 11 Dec 2005, 8:38 a.m.

