I am happy to post two new root-seeking algorithms: the Super Secant and Hyper Secant methods. Both are based on the legacy secant method (itself a rough approximation of Newton's method) and perform multiple guess refinements per iteration.
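For context, the classic secant method replaces Newton's derivative with the slope through the last two guesses; here is a minimal Python sketch of that baseline (illustrative only, not the new methods):

Code:

# Classic secant method: Newton's iteration with the derivative
# replaced by the slope through the last two guesses.
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:                      # flat secant line: give up
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

# Example: root of x^2 - 2 near 1.5
print(secant(lambda x: x * x - 2.0, 1.0, 2.0))  # ~1.41421356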

Click here to download a ZIP file containing a PDF document and an Excel file that show how these algorithms compare with the Newton, Halley, and Ostrowski methods.

Enjoy!

Namir

Nice work! I'll have to read it slowly; so far I've only had a glance.

May I make some remarks about formatting and packaging? Both are meant for your next works, since you publish a lot.

For the format, could you use justified text alignment? It is more pleasant to read.

For the packaging: while the file is obviously intended for someone with a computer, reading a PDF on other platforms is not so rare. Would it be possible to post both the package (Excel + PDF) and the PDF on its own, so one can at least access the PDF directly?

If you want to keep everything in one package, it is also possible to embed files inside a PDF; a free tool that does this is the pretty neat pdftk server (the manual is here).

These are just suggestions, not criticism. What matters is the content of the file.

(04-02-2017 09:54 PM)pier4r Wrote: [ -> ]For the format, could you use justified text alignment? It is more pleasant to read. [...]

I write the document in MS Word and, when I am ready to publish, save it as a PDF. So you are suggesting I use justified text alignment in Word?

Namir

(04-02-2017 09:54 PM)pier4r Wrote: [ -> ]For the format, could you use justified text alignment? It is more pleasant to read. [...]

I tested justified text alignment on the Word document and I agree with you that it looks nicer!

Namir

Have you tried any of these new methods on an HP-67?

Excellent work, Namir!

I'm looking forward to hearing your presentation in September.

Thanks again, Namir, for your work.

As you know, I'm quite partial to the Ostrowski method. When I read your paper I was surprised by its mediocre performance, so I decided to check what the problem was.

In your code, the derivative is approximated as the ratio of two increments, but the constant you use (0.01 in the computation of h as h = 0.01 * (1 + Abs(X))) is far too large for the Ostrowski method; you need a much smaller one.

Changing it to h = 3.0e-7 * (1 + Abs(X)) (a nice value if the floats are 64 bits, as I think they are in VB), you will see (*) that the Ostrowski method's numbers in the tables turn red in all test cases except two: Custom1 (it no longer fails; it is now 17-53, the same number of iterations but 2 more function evaluations than your Hyper-Secant method) and equation 6 with x0 = 1 (now 15-45, second behind Halley).

Perhaps the other methods in this comparison would also benefit from such a change in the computation of h.

Your approach in the Hyper-Secant method looks really interesting to me.

Regards.

(*) I've performed these computations in Python with a precision of 15 digits; in VB the results may be different. In any case, my Python code returned the very same results (number of iterations and total function calls) for all test cases in your table, so I am confident about my statement regarding the change in the computation of h. Also, the Python code does some sanity checks (so as not to divide by 0, and so on) anticipating problems such as the derivative going to 0. I'm not sure whether the VB code may have problems of that kind when the constant is changed.
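A minimal Python sketch of that kind of routine, assuming the forward-difference derivative and the smaller constant described above (the test function is a placeholder; this is an illustration, not emece67's actual code):

Code:

def ostrowski(f, x, tol=1e-10, max_iter=100):
    """Ostrowski's method with a forward-difference derivative.

    Uses h = 3.0e-7 * (1 + |x|) and guards against division by zero,
    as described above. Returns (root, iterations, function_calls).
    """
    nfe = 0
    for it in range(1, max_iter + 1):
        last_x = x
        h = 3.0e-7 * (1.0 + abs(x))
        f0 = f(x); nfe += 1
        if f0 == 0.0:
            return x, it, nfe
        fp = f(x + h); nfe += 1
        deriv = (fp - f0) / h
        if deriv == 0.0:                   # sanity check: flat slope
            raise ZeroDivisionError("derivative estimate is zero")
        z = x - f0 / deriv                 # Newton step
        fz = f(z); nfe += 1
        if fz == 0.0:
            return z, it, nfe
        denom = f0 - 2.0 * fz
        if denom == 0.0:                   # sanity check
            raise ZeroDivisionError("f(x) - 2*f(z) is zero")
        x = z - fz * (x - z) / denom       # Ostrowski correction
        if abs(x - last_x) < tol:
            return x, it, nfe
    return x, max_iter, nfe

# Example (placeholder function): root of x^3 - x - 2 near x0 = 1.5
print(ostrowski(lambda x: x**3 - x - 2.0, 1.5))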

(04-04-2017 08:43 AM)emece67 Wrote: [ -> ]In your code, the derivative is approximated as the ratio of two increments, but the constant you use (0.01 in the computation of h as h = 0.01 * (1 + Abs(X))) is far too large for the Ostrowski method; you need a much smaller one. [...]

Cesar,

Thank you so much for your comments. I used h = 0.01 * (1 + |x|) for fear that much smaller values would cause computational errors; by this I mean the limited accuracy of vintage calculators may give a slope of zero if h is way too small. Obviously I was wrong. I think replacing 0.01 with a smaller value in all the methods should be interesting.

I think I am going to compare how repeatedly reducing 0.01 by a factor of 10 affects at least the Newton method (and maybe Halley and Ostrowski too), looking at the number of iterations and the number of function evaluations needed to reach a refined root guess for a given tolerance value.
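A minimal Python sketch of what that experiment could look like (the test function and constants are illustrative stand-ins, not code from the paper):

Code:

# Sweep the finite-difference constant c in h = c*(1 + |x|) downward by
# factors of 10 and record how Newton's method responds.
def newton_fd(f, x, c, tol=1e-10, max_iter=200):
    nfe = 0
    for it in range(1, max_iter + 1):
        last_x = x
        h = c * (1.0 + abs(x))
        f0 = f(x); fp = f(x + h); nfe += 2
        if fp == f0:                   # flat slope estimate: stop
            break
        x = x - f0 * h / (fp - f0)     # Newton step with FD derivative
        if abs(x - last_x) < tol:
            return x, it, nfe
    return x, max_iter, nfe

fx = lambda x: x**3 - x - 2.0          # placeholder test function
for k in range(2, 10):                 # c = 1e-2 down to 1e-9
    c = 10.0 ** (-k)
    root, iters, nfe = newton_fd(fx, 1.5, c)
    print(f"c=1e-{k}  iterations={iters}  f-calls={nfe}  root={root:.10f}")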

Namir

Another possibility is to use variable step lengths. I don't have a quick rule of thumb, but there are some in the various references. The idea is to automatically adjust the step length as the computation proceeds. This is often done with the Levenberg-Marquardt method for multidimensional problems.

If error estimation is easy, one simply increases the step length until things stop working, then backs off. (There are good discussions on the Wiki for the Nelder-Mead creeping-simplex optimization method.) I used to lengthen by 3 and shrink by 2, as the exact numbers don't matter; one eventually gets 3^n/2^m with n stretches and m shrinks.
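A toy Python illustration of that stretch-by-3/shrink-by-2 idea on a one-dimensional minimization (everything here is illustrative, not a production rule):

Code:

# Toy stretch-by-3 / shrink-by-2 step-length rule for a 1-D minimization:
# grow the step while it keeps improving, then back off when it stops.
def adaptive_step(g, x, step=0.1, grow=3.0, shrink=2.0, max_iter=200):
    gx = g(x)
    for _ in range(max_iter):
        moved = False
        for direction in (+1.0, -1.0):
            while g(x + direction * step) < gx:   # stretch while improving
                x += direction * step
                gx = g(x)
                step *= grow
                moved = True
        step /= shrink                            # back off
        if step < 1e-12 and not moved:
            break
    return x

# Example: minimum of (x - 2)^2 is at x = 2
print(adaptive_step(lambda x: (x - 2.0) ** 2, 0.0))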

I also found a new reference:

http://citeseerx.ist.psu.edu/viewdoc/dow...1&type=pdf
You will also need to modify the stopping criterion, as simply relying on the difference between the last two root estimates is not adequate. The convergence is sometimes so fast that at the 2nd or 3rd iteration, even though Abs(X - LastX) is greater than Toler, the function already evaluates to 0.

I ended up with:

Code:

' Ostrowski
' Count the number of function evaluations in detail
Dim Tnfe As Long
Tnfe = 0
R = 2
C = C + 2
X = [A2].Value
Do
    LastX = X
    h = 0.0000003 * (1 + Abs(X))
    F0 = Fx(sFx, X)
    Tnfe = Tnfe + 1
    ' Early exit if root found
    If F0 = 0 Then
        Exit Do
    End If
    Fp = Fx(sFx, X + h)
    Tnfe = Tnfe + 1
    ' Early exit if root found
    If Fp = 0 Then
        Cells(R, C) = X + h
        Cells(R, C + 1) = Fp
        R = R + 1
        Exit Do
    End If
    Deriv1 = (Fp - F0) / h
    Z = X - F0 / Deriv1
    Fz = Fx(sFx, Z)
    Tnfe = Tnfe + 1
    ' Early exit if root found
    If Fz = 0 Then
        Cells(R, C) = Z
        Cells(R, C + 1) = Fz
        R = R + 1
        Exit Do
    End If
    X = Z - Fz * (X - Z) / (F0 - 2 * Fz)
    Cells(R, C) = X
    Cells(R, C + 1) = Fx(sFx, X)
    R = R + 1
Loop Until Abs(X - LastX) < Toler Or R > 1000
Cells(R + 1, C) = "Fx Calls="
Cells(R + 1, C + 1) = Tnfe

I can confirm that the other methods also benefit from decreasing the constant in the computation of h (your Hyper-Secant included; I've not tested the Super-Secant).

Hi Namir,

Here is an HP-41 program that uses quadratic interpolation to find a root of f(x) = 0.

It takes 3 guesses in registers X, Y, Z and, if flag F02 is clear, returns a root x in the X-register and f(x) in the Y-register (which should be a "small" number).

It should also find double roots.

If F02 is set, "SLV2" tries to find an extremum instead.

In both cases, R00 must be initialized with the function name.

Here is the listing:

01 LBL "SLV2"
02 STO 01
03 RDN
04 STO 02
05 X<>Y
06 STO 03
07 XEQ IND 00
08 STO 06
09 RCL 02
10 XEQ IND 00
11 STO 05
12 LBL 01
13 VIEW 01
14 RCL 01
15 XEQ IND 00
16 STO 04
17 RCL 02
18 RCL 03
19 -
20 *
21 ENTER
22 STO 07
23 RCL 01
24 RCL 03
25 -
26 STO 08
27 STO 10
28 ST* Z
29 RCL 05
30 *
31 ST* 08
32 -
33 RCL 02
34 RCL 01
35 -
36 STO 09
37 ST- 10
38 ST* Z
39 RCL 06
40 *
41 ST* 09
42 -
43 STO 06
44 *
45 RCL 07
46 RCL 10
47 *
48 RCL 08
49 -
50 RCL 09
51 +
52 2
53 /
54 STO 10
55 X^2
56 +
57 FC? 02
58 X<0?
59 GTO 02
60 SQRT
61 RCL 10
62 SIGN
63 *
64 RCL 10
65 +
66 GTO 03
67 LBL 02
68 RCL 10
69 CHS
70 RCL 06
71 LBL 03
72 X#0?
73 /
74 RCL 01
75 +
76 X<> 01
77 X<> 02
78 STO 03
79 RCL 04
80 X<> 05
81 STO 06
82 RCL 01
83 RCL 02
84 X#Y?
85 GTO 01
86 RCL 04
87 X<>Y
88 END

( 113 bytes / SIZE 011 )

Number of function evaluations = 2 + number of iterations.
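For readers without an HP-41, the same general technique (fit a parabola through the three latest points, step to its real root nearest the newest point, or to its vertex when the discriminant is negative or an extremum is sought) can be sketched in Python; this is an illustration of the idea, not a line-for-line port of the listing above:

Code:

import math

def quad_root(f, x0, x1, x2, tol=1e-12, max_iter=100, extremum=False):
    # Fit a parabola through the three most recent points; step to its
    # real root nearest x2, or to its vertex if no real root exists
    # (or if an extremum is wanted). Returns (x, f(x)).
    f0, f1, f2 = f(x0), f(x1), f(x2)
    for _ in range(max_iter):
        if x1 == x0 or x2 == x1 or x2 == x0:
            break                              # degenerate geometry
        d01 = (f1 - f0) / (x1 - x0)
        d12 = (f2 - f1) / (x2 - x1)
        a = (d12 - d01) / (x2 - x0)            # curvature coefficient
        b = d12 + a * (x2 - x1)                # parabola slope at x2
        disc = b * b - 4.0 * a * f2
        if extremum or disc < 0.0:
            if a == 0.0:
                break
            x_new = x2 - b / (2.0 * a)         # step to the vertex
        else:
            denom = b + math.copysign(math.sqrt(disc), b)
            if denom == 0.0:
                break
            x_new = x2 - 2.0 * f2 / denom      # nearest parabola root
        if abs(x_new - x2) < tol:
            return x_new, f(x_new)
        x0, f0, x1, f1 = x1, f1, x2, f2
        x2, f2 = x_new, f(x_new)
    return x2, f2

# Example: root of cos(x) - x from guesses 0, 0.5, 1
print(quad_root(lambda x: math.cos(x) - x, 0.0, 0.5, 1.0))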

Best wishes,

JM.

(04-05-2017 09:47 PM)JMBaillard Wrote: [ -> ]Here is an HP-41 program that uses quadratic interpolation to find a root of f(x) = 0. [...]

Thank you JM. A few years ago, at one of the HHC conferences, I presented an algorithm that used inverse quadratic Lagrangian interpolation to find the root of a function. I came to realize that there was some "politics" among mathematicians: for some reason, many avoided methods like inverse quadratic Lagrangian interpolation for root finding.
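For reference, one step of inverse quadratic Lagrangian interpolation is short enough to sketch in Python (a minimal illustration of the textbook formula, not the HHC algorithm itself):

Code:

# One step of inverse quadratic (Lagrange) interpolation: treat x as a
# function of y, interpolate through three points, and evaluate at y = 0.
def iqi_step(x0, f0, x1, f1, x2, f2):
    # Requires f0, f1, f2 pairwise distinct.
    return (x0 * f1 * f2 / ((f0 - f1) * (f0 - f2))
          + x1 * f0 * f2 / ((f1 - f0) * (f1 - f2))
          + x2 * f0 * f1 / ((f2 - f0) * (f2 - f1)))

# Example: estimate the root of x^2 - 2 from three nearby points
f = lambda x: x * x - 2.0
print(iqi_step(1.0, f(1.0), 1.5, f(1.5), 2.0, f(2.0)))  # ~1.4095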

Namir

There are a couple more root finders that I have used. One is Brent's algorithm (inverse quadratic interpolation with bisection) and the other is the "Illinois" algorithm (which I heard of long before the published work). There is another modification of Newton's method that raises its effective convergence rate (to Sqrt(8)); the idea is to evaluate f(x) and f'(x) at different x's. (http://www.sciencedirect.com/science/art...30)There's also Chebyshev's method, which (like Halley's) comes from a Taylor series; Chebyshev used the series and Halley used the continued fraction for the series. Also, a guy named Galindo did rather well with bunches of tests and algorithms.
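The Illinois algorithm in particular is compact enough to sketch in Python (assuming a bracketed root; an illustration, not ttw's code):

Code:

# Illinois algorithm: regula falsi with the retained endpoint's function
# value halved whenever that endpoint survives two steps in a row.
def illinois(f, a, b, tol=1e-12, max_iter=100):
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    side = 0
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)    # false-position point
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:                      # root in [a, c]: replace b
            b, fb = c, fc
            if side == -1:
                fa *= 0.5                    # Illinois modification
            side = -1
        else:                                # root in [c, b]: replace a
            a, fa = c, fc
            if side == +1:
                fb *= 0.5                    # Illinois modification
            side = +1
    return c

# Example: root of x^3 - x - 2 in [1, 2]
print(illinois(lambda x: x**3 - x - 2.0, 1.0, 2.0))  # ~1.5213797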

(04-06-2017 01:51 AM)ttw Wrote: [ -> ]... The idea is to evaluate f(x) and f'(x) at different x's. (http://www.sciencedirect.com/science/art...30)There's also Chebyshev's method ... [...]

Your link is confusing my browser! Can you please post a link that works? I would like to read the material you are pointing to.

(04-06-2017 05:11 AM)DMaier Wrote: [ -> ]The URL ran into the following text. It should be:

http://www.sciencedirect.com/science/art...5913002930

Thank you very much for the link. The article itself has more PDF links to other free articles. I also Google-searched for the pdf-for-purchase articles mentioned in its reference section and found even more PDF articles to download. So thank you again for the link, as it led to a bonanza of articles!

Namir