On Convergence Rates of Root-Seeking Methods
03-06-2018, 02:53 AM
Post: #21
RE: On Convergence Rates of Root-Seeking Methods
(03-04-2018 11:58 PM)Mike (Stgt) Wrote:  A German proverb says 'no reply is also a reply'.
/M.

So is a reply from a non-expert.

Probably not the best methods, but lots of information I was not aware of:

https://en.wikipedia.org/wiki/Methods_of...uare_roots

a) Bakhshali method: interesting as the source is an ancient Hindu manuscript;

b) Taylor Series: quoting from Wikipedia:

Quote:As an iterative method, the order of convergence is equal to the number of terms used. With two terms, it is identical to the Babylonian method. With three terms, each iteration takes almost as many operations as the Bakhshali approximation, but converges more slowly.

Despite that remark, that's the one I chose to play with, because it is easy to follow. Also, it can be expanded at will if one wants to try it with more terms.

The first three terms:

\(\sqrt{x}= r\left ( 1+\frac{x-r^{2}}{2r^{2}}-\frac{\left ( x-r^{2} \right )^{2}}{8r^{4}} \pm \cdots \right )\)

Equivalently, after a manual simplification:

\(\sqrt{x}\approx \frac{3\left ( r^{2}+2x \right )-\left ( \frac{x}{r} \right )^{2}}{8r}\)
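(The simplification is just algebra: put the three terms over the common denominator \(8r^{3}\),

\[r\left ( 1+\frac{x-r^{2}}{2r^{2}}-\frac{\left ( x-r^{2} \right )^{2}}{8r^{4}} \right )=\frac{8r^{4}+4r^{2}\left ( x-r^{2} \right )-\left ( x-r^{2} \right )^{2}}{8r^{3}}=\frac{3r^{4}+6r^{2}x-x^{2}}{8r^{3}}\]

and then divide numerator and denominator by \(r^{2}\) to get the expression above.)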

Decimal Basic program and output:

OPTION ARITHMETIC DECIMAL_HIGH ! 1000 digits precision
LET t = TIME
INPUT x
LET r = 10^INT(LOG10(x)/2) + 1 ! initial estimate
PRINT r
DO 
   LET a = r
   LET r = (3*(r^2 + 2*x) - (x/r)^2)/(8*r)
   PRINT r
LOOP UNTIL ABS(r - a) < 1E-300
PRINT SQR(x) - r
PRINT TIME - t;" seconds"
END


? 17
 2 
 2.609375
 3.831457666622230815360207907180368367929063526645300241783992701606003096535603​1813886658895626223976443827895... 
 4.122244620637600419719958337875107818477074015229252100479776053793938950195150​0012131448438220412295740265140...
 4.123105625598878536178011647766885826496727434593597792311874268691149149019651​3723285709444226805259084101669...
 4.123105625617660549821409855974076830276053739312118674761681625480610941537298​4751649808177898214772303507785...
 4.123105625617660549821409855974077025147199225373620434398633573094954346337621​5935878636508106842966843263884...
 4.123105625617660549821409855974077025147199225373620434398633573094954346337621​5935878636508106842966845440409...
 4.123105625617660549821409855974077025147199225373620434398633573094954346337621​5935878636508106842966845440409...
 8.202714670745631983631354915192547370538210924268951508828665917189926704332598​08358492948699826676293788E-931...
2.02 seconds


This is a test for a small argument, as the initial estimation method will cause an error for large arguments (starting around 920).

RPL program

%%HP: T(3)A(R)F(.);
\<< DUP XPON 2. / ALOG 1. +                    @ initial estimate: 10^(exponent/2) + 1
  DO
    DUP2 DUP2 SQ SWAP DUP + + 3. *             @ 3*(r^2 + 2x)
    UNROT / SQ -                               @ ... - (x/r)^2
    OVER 8. * /                                @ ... / (8r) -> new estimate
    SWAP DUP2 / 1. - ABS NIP .00000001 <       @ relative change below 1E-8?
  UNTIL
  END NIP
\>>


Again, this will cause an error for x = 0. No problem, we know Sqrt(0) = 0. Also, these are just tests :-)
03-08-2018, 01:04 AM
Post: #22
RE: On Convergence Rates of Root-Seeking Methods
There's a paper that I don't remember somewhere (before Bailey, Borwein and Plouffe) which discussed the time to compute Pi by various methods. The linear, quadratic, cubic, and quartic methods took about the same time overall. What happens is that the multi-precision computations dominate the convergence rate.
A very loose estimate can be obtained by assuming that arithmetic on N digits takes time proportional to N^2. For quadratic convergence up to about 100 digits, digit lengths of (1, 2, 4, 8, 16, 32, 64, 100) can be used, and for cubic, (1, 3, 9, 27, 81, 100). The sums of squares are 15461 and 17381 respectively. Although the quadratic case uses 8 iterations and the cubic 6, the number of elementary operations is nearly the same.
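A quick check of that arithmetic (plain Python; the digit schedules and the N^2 cost model are taken straight from the paragraph above):

# Loose cost model: one arithmetic pass at N digits costs ~N^2.
# Quadratic convergence doubles the correct digits per iteration,
# cubic convergence triples them; both are capped at 100 digits.
quadratic = [1, 2, 4, 8, 16, 32, 64, 100]
cubic = [1, 3, 9, 27, 81, 100]
print(sum(n*n for n in quadratic))  # 15461
print(sum(n*n for n in cubic))      # 17381 -- nearly the same total work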
03-08-2018, 01:21 AM (This post was last modified: 03-11-2018 01:09 PM by emece67.)
Post: #23
RE: On Convergence Rates of Root-Seeking Methods
(03-07-2018 10:32 PM)Mike (Stgt) Wrote:  It's awesome, at least for those who are not awfully bored by this subject.

Yes, it is awesome!

I've also made my own tests, in my case in Python with the mpmath package for arbitrary-precision arithmetic. I've tested these algorithms (all of them generic root-seeking algorithms):
  • Halley (3rd order convergence, 1 division)
  • Grau+Díaz-Barrero (6th order, 2 divisions)
  • Newton-Raphson (2nd order, 1 division)
  • Ostrowski-Traub (4th order, 2 divisions)
  • Kung-Traub (8th order, 3 divisions, but more complex)
  • Potra-Ptak (3rd order, 2 divisions)

I've not used sliding accuracy (for now), but fixed precisions of 10000, 100000 and 1 million digits, with only 2 arguments: 2 and 17. Tested only sqrt().

My findings are that, as in this case the computation of the function to be solved requires only one multiplication and one subtraction while the algorithm requires one or more divisions, and divisions are more time-consuming at such large digit counts, the usual parameter used to compare the speed of root-seeking algorithms, the efficiency index \(\gamma=\sqrt[r]{p}\) (where \(p\) is the convergence order of the method and \(r\) the number of function evaluations needed on each iteration), is not useful here: the function evaluations are simpler than the other operations the algorithm needs.

Defining instead a "division-aware efficiency index" as \(\gamma_d=\sqrt[d]{p}\) (where \(d\) is the number of divisions needed on each iteration), one gets a reliable predictor of the speed of the algorithm. Computing this index for the above algorithms one gets:
  • Halley: 3
  • Grau+Díaz-Barrero: 2.45
  • Newton-Raphson: 2
  • Ostrowski-Traub: 2
  • Kung-Traub: 2
  • Potra-Ptak: 1.7
so Halley is the fastest, then Grau+Díaz-Barrero, then Newton-Raphson, Ostrowski-Traub and Kung-Traub, with Potra-Ptak the slowest. And this is exactly what I've found, except that Kung-Traub is slower than expected (but still faster than Potra-Ptak) because, I think, of its higher complexity.
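For the record, these values are just \(p^{1/d}\) with the orders and division counts listed above (plain Python):

# division-aware efficiency index: gamma_d = p**(1/d)
methods = {"Halley": (3, 1), "Grau+Diaz-Barrero": (6, 2),
           "Newton-Raphson": (2, 1), "Ostrowski-Traub": (4, 2),
           "Kung-Traub": (8, 3), "Potra-Ptak": (3, 2)}
for name, (p, d) in methods.items():
    print(f"{name}: {p ** (1/d):.2f}")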

So the moral is: when using this kind of classic root-seeking algorithm to compute square roots, use one with a high convergence order, few divisions and not too much complexity (this is the reason I added Grau+Díaz-Barrero to my tests).

Now I will check if the Taylor method used by Gerson can be used to get a 4th order method with only one division. Such a method, if not too complex, may be even faster than Halley.

In my tests, the more digits used, the higher the correlation between \(\gamma_d\) and the speed of the method.

For the curious, it took 218 s for Halley's algorithm to compute \(\sqrt2\) to 1 million digits (Intel Core-i7@2.7 GHz, Win10).
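For reference, a minimal sketch of such a Halley test in Python/mpmath (illustrative only, not my actual test script; the seed and stopping tolerance are arbitrary choices):

from mpmath import mp, mpf

mp.dps = 1000                  # fixed working precision, in decimal digits
x = mpf(17)
r = mpf(float(x) ** 0.5)       # double-precision seed, ~16 correct digits
while True:
    # Halley step for f(r) = r^2 - x: cubic convergence, one division
    r_new = r * (r*r + 3*x) / (3*r*r + x)
    if abs(r_new - r) < mpf(10) ** (10 - mp.dps):
        break
    r = r_new
print(r_new)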

Regards.
03-08-2018, 04:17 AM
Post: #24
RE: On Convergence Rates of Root-Seeking Methods
(03-08-2018 03:07 AM)Mike (Stgt) Wrote:  
(03-08-2018 01:04 AM)ttw Wrote:  There's a paper that I don't remember somewhere ...

Until you find it again you may take this one.
BTW, the code snippets here are Brent-Salamin and Borwein&Borwein.

/M.

The first one is the paper I remembered. Thanks.
03-08-2018, 05:35 AM
Post: #25
RE: On Convergence Rates of Root-Seeking Methods
I'm new to this subject and I find it very interesting.

My first contact with root-seeking was the HP-15C SOLVE function.
I don't know much about the convergence speed of SOLVE because I don't have an actual HP-15C, but use the PC emulator, which always runs very fast on all kinds of equations.

I just got the HP-11C, which doesn't have the SOLVE feature of the 15C. Then I noticed that the 11C Owner's Handbook has a Newton's Method program, so that was the second root-seeking program I tried. It turned out to be very slow, and I noticed the reason is the 11C's slow CPU itself.

I happened to try another program that uses the "Regula Falsi method".
I try on 11C this method is much faster than the Newton's Method with this Regula Falsi program you need to provide reference points to X1 and X2 while providing the reference point this can be check right away if that two points is far apart or not when done give it the tolerant and start to compute when done can check for the accuracy against 0 value. If the answer not accurate enough just press R/S again for better more accuracy.

I'm new to this algorithm. I'll post the program later on.

Gamo
03-08-2018, 08:00 PM
Post: #26
RE: On Convergence Rates of Root-Seeking Methods
(03-06-2018 02:53 AM)Gerson W. Barbosa Wrote:  Decimal Basic program and output:

OPTION ARITHMETIC DECIMAL_HIGH ! 1000 digits precision
LET t = TIME
INPUT x
...
PRINT TIME - t;" seconds"
END

I suggest you place the INPUT line first, i.e. before TIME is saved in t. ;-)

But tell me more about this Decimal Basic. Sounds very interesting.

Dieter
03-08-2018, 08:10 PM (This post was last modified: 03-08-2018 08:11 PM by Dieter.)
Post: #27
RE: On Convergence Rates of Root-Seeking Methods
(03-08-2018 05:35 AM)Gamo Wrote:  I'm new to this subject and I found this subject is very interesting.

First of all: the original post by Namir indeed was about root-seeking methods, i.e. methods for solving f(x)=0. But the last posts deal with a different topic: methods for calculating the square root of a number.

(03-08-2018 05:35 AM)Gamo Wrote:  I happen to try other program that use "Regula Falsi method"
I try on 11C this method is much faster than the Newton's Method with this Regula Falsi program you need to provide reference points to X1 and X2 while providing the reference point this can be check right away if that two points is far apart or not when done give it the tolerant and start to compute when done can check for the accuracy against 0 value.

Wow, that was quite a long sentence. Even without a single period or comma. ;-)

Anyway, the Newton method usually converges at a speed similar to Regula Falsi and its variants. But the latter have the advantage that the user can provide two guesses that bracket the solution. They may also be faster here and there because each iteration requires only one call of f(x) instead of two for the Newton method (f(x) and f(x+h), or f(x) and f'(x)).
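A minimal sketch of plain Regula Falsi (hypothetical Python, just to show the one-evaluation-per-iteration structure; x1 and x2 must bracket the root):

def regula_falsi(f, x1, x2, tol=1e-10, max_iter=100):
    f1, f2 = f(x1), f(x2)                     # both evaluated only at setup
    assert f1 * f2 < 0, "the two guesses must bracket the root"
    for _ in range(max_iter):
        x = (x1 * f2 - x2 * f1) / (f2 - f1)   # secant line's zero crossing
        fx = f(x)                             # one f(x) call per iteration
        if abs(fx) < tol:
            return x
        if f1 * fx < 0:                       # root lies in [x1, x]
            x2, f2 = x, fx
        else:                                 # root lies in [x, x2]
            x1, f1 = x, fx
    return x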

BTW, Gamo, I just noticed the PM you sent a few days ago. I now sent a reply – and an 11C program. ;-)

Dieter
03-08-2018, 10:43 PM (This post was last modified: 03-10-2018 03:59 AM by Gerson W. Barbosa.)
Post: #28
RE: On Convergence Rates of Root-Seeking Methods
(03-08-2018 08:00 PM)Dieter Wrote:  
(03-06-2018 02:53 AM)Gerson W. Barbosa Wrote:  Decimal Basic program and output:

OPTION ARITHMETIC DECIMAL_HIGH ! 1000 digits precision
LET t = TIME
INPUT x
...
PRINT TIME - t;" seconds"
END

I suggest you place the INPUT line first, i.e. before TIME is saved in t. ;-)

Previously I had x = 2 there. Then I simply replaced it with INPUT x to test the program with various arguments without having to edit it every time. That's what happens when you're in a hurry. 2.02 seconds seemed a bit excessive to me just for computing the square root of 17 to 930+ places. The 0.02 seconds I get now is more like it :-)

(03-08-2018 08:00 PM)Dieter Wrote:  But tell me more about this Decimal Basic. Sounds very interesting.

It was used by Tom L. in another thread. Rexx has also been suggested, but Decimal Basic was easier to find and install. Rexx appears to be much better, but I haven't tried it yet.

Decimal BASIC

Gerson.

------------------------------

PS:

0 seconds! (Ok, 0 seconds + something)

OPTION ARITHMETIC DECIMAL_HIGH ! 1000 digits precision
INPUT x
LET r = EXP(LOG(x)/2)
LET t = TIME
PRINT r
DO
   LET a = r
   LET b = r*r
   LET r = (3*b*(b + 2*x) - x*x)/(8*b*r)
   PRINT r
LOOP UNTIL ABS(r - a) < 1E-300
PRINT SQR(x) - r
PRINT TIME - t;" seconds"
END




? 17
4.1231056256176604
4.12310562561766054982140985597407702514719922537352...

...

4.123105625617660549821409855974077025147199225373620434398633573094954346337621​59358786365081068429668454404093921414161530142084041586836307954814574690697767​70232664362408630877905675723857082255213807325630838603091427498046719135293221​47978718167815796475906080565496973900766721383689212106708921029005520264976997​22778846139925992459137314565781925474362237723251578307340066247689146089499331​41024362794433862805526374750609050808692574826754037575769274646316663510330968​17122919874195864431971054705958485725931943603620656058152613585046428067872150​06410491422236752224348673725804703777127499856657121857043210030360260650648715​46906982815468464595645034418499305976395090786199590433342077830367324661050023​83305603648597891517738125149725101393295630516977396156134483704021469549517283​77477512833208677543247930196450385894596773652195702235648129282323237309165004​47557094601657217491431755474511227183616353174924756240651955600227559343988224​60451518623945769412122844523427764255913


...

0 seconds
03-09-2018, 02:00 PM (This post was last modified: 03-09-2018 04:31 PM by emece67.)
Post: #29
RE: On Convergence Rates of Root-Seeking Methods
(03-08-2018 01:21 AM)emece67 Wrote:  Now I will check if the Taylor method used by Gerson can be used to get a 4th order method with only one division. Such a method, if not too complex, may be even faster than Halley.

I've tested this 4th order Taylor with only 1 division. The results are a little disappointing. At 100000 digits it behaves like the other 4th order method I've tested (Ostrowski-Traub): the effect of avoiding a division is balanced by the more complex algorithm and by the need for one more iteration. At 1 million digits it gets faster than Ostrowski-Traub and Newton-Raphson, and comparable to Grau+Díaz-Barrero, but is still slower than Halley.

The test at 10 million digits is now in progress (but it may take some days to complete).

Regards.
03-11-2018, 12:53 PM (This post was last modified: 03-11-2018 01:21 PM by Gerson W. Barbosa.)
Post: #30
RE: On Convergence Rates of Root-Seeking Methods
(03-09-2018 02:00 PM)emece67 Wrote:  I've tested this 4th order Taylor with only 1 division. The results are a little disappointing.

Decimal Basic is not the right tool for this test, but I did it anyway. It's limited to 1000 digits and the precision cannot be changed.
Starting with a 16-digit approximation, the number of correct digits is quadrupled at each iteration. Thus, for 268,435,456 digits only 12 iterations would suffice (log(268435456)/log(4) - 2). Ideally the precision would be set to 64 digits in the first iteration, then to 256 in the second iteration, then to 1024 in the third iteration and so on, but I cannot do that in Decimal Basic.
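(In an environment with adjustable precision the idea would look something like this sketch, in Python/mpmath rather than Decimal Basic since mp.dps can be changed on the fly; it reuses the 4th-order step from the program below and quadruples the working precision on each pass. Illustrative only:)

from mpmath import mp, mpf

x, target = 2, 4096                # illustrative target digit count
digits = 16
r = mpf(float(x) ** 0.5)           # double-precision seed, ~16 correct digits
while digits < target:
    digits = min(4 * digits, target)
    mp.dps = digits + 10           # slide the precision up, with guard digits
    b = r * r                      # one 4th-order step per precision level
    r = (b*(b*(b*(20*x - 5*b + 40) + (120 - 30*x)*x) + x*x*(20*x - 40))
         + (8 - 5*x)*x*x*x) / (128*b*b*r)
print(r)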

OPTION ARITHMETIC DECIMAL_HIGH ! 1000 digits precision
INPUT x
LET r = EXP(LOG(x)/2)
LET t = TIME
PRINT r
DO
   LET a = r
   LET b = r*r
   LET r = (b*(b*(b*(20*x - 5*b + 40) + (120 - 30*x)*x) + x*x*(20*x - 40)) + (8 - 5*x)*x*x*x)/(128*b*b*r)
   PRINT r
LOOP UNTIL ABS(r - a) < 1E-256
PRINT SQR(x) - r
PRINT
PRINT TIME - t;" seconds"
PRINT
END

? 2
1.414213562373095
1.414213562373095048801688724209698078569671875376948073176679737989479122515585022787243548195807986806093853455270729739546313528476579927594917​19132295975097
78511905460519930449443820000402107044034260340386343880911130547966945813563882​29288359585276126491113757787998102600361450207381045186090436903909837008746643​
10347026139854082311463096492085942752871500618232557795510861959579135040684685​69392040023760526772118640227753720558095337126185503740848528003003220083571259​
78760074662894146616142755770477791864670027513055471099322605191320081824714171​58427394358697639180854278403008712404509644239238267271289740289167825812746163​
06614807181347711085881252716913249041129412872705291204646535635668829464403089​72859172411054988437943106627597053434432708976403774980364280296083588274483068​
90320789038870933116715401196155056676252437765219655389881281780333497686451818​23188802504726858350883437664116300306244902929112295963603771463834304635321134​
83159222912044330867518715241960119192026

1.414213562373095048801688724209698078569671875376948073176679737990732478462107​03885038753432764157273501384623091229702492483605585073721264412149709993583141​
32226659275055927557999505011527820605714701095599716059702745345968620147285174​1864088919860955232923048
3763417799908824040833757274424682543742882438438438070
45310063642726152360461106439741659989781643283198107018711015717646721345555956​34138328395716863930924425632516611811108073845894133895402501220018056132779461​
78199284355998211437151241574470660408495214036434662520605856846458150316934024​75444690707768303296469996147472177954132880251268214102352564481045043373214707​
37038015470189348837678162553849565902247519824742415713391547280188975735650003​39167513251254677743584922032240364649937930685912128733124895129684082793457798​
60719307754973604681004724614221457940961258955293138740256354060891882186677887​35355852898088999919542662057387970464289475610337999433702645392165833753575717​
67992903973693340385046714018520340468998

1.414213562373095048801688724209698078569671875376948073176679737990732478462107​03885038753432764157273501384623091229702492483605585073721264412149709993583141​
32226659275055927557999505011527820605714701095599716059702745345968620147285174​18640889198609552329230484308714321450839762603627995251407989687253396546331808​
82964062061525835239505474575028775996172983557522033753185701135437460340849884​71603868999706990048150305440277903164542478230684929369186215805784631115966687​
13013015618568987237235288509264861249497715421833420428568606014682472077143585​48741556570696776537202264854470158588016207584749226572260020855844665214583988​
93944370926591800311388246468157082630100594858704003186480342194897278290641045​07263688131373985525611732204024509122770022694112757362728049573810896750401836​
98683684507257993647290607629969413804756548237289971803268024744206292691248590​52181004459842150591120249441341728531478105803603371077309182869314710171111683​
91658172688941975871658215212822951848847


2.08969463386289156288276595230574797E-1000

0 seconds
03-11-2018, 02:26 PM
Post: #31
RE: On Convergence Rates of Root-Seeking Methods
I'm very interested in this subject, since I'm about to start coding the solvers for newRPL.
My question here is generic, for all the people who have been doing tests:

I'm planning to have several solvers and let the user choose which one to use for their particular problem.
The first thing needed is to separate them into 2 classes of methods: single-point and bracketed.

Based on the tests reported in this thread, what methods would be a "must have" in each category?

Bracketed:
* Bisection
<add others here!>


Single starting-point:
* Newton-Raphson
<add others here!>

Keep in mind newRPL is multiprecision but limited to 2000 digits, so a very complex method that is only faster for more than a million digits wouldn't really be useful.
03-11-2018, 07:17 PM (This post was last modified: 03-11-2018 07:31 PM by Dieter.)
Post: #32
RE: On Convergence Rates of Root-Seeking Methods
(03-11-2018 02:26 PM)Claudio L. Wrote:  Based on the tests reported in this thread, what methods would be a "must have" in each category?

Bracketed:
* Bisection
<add others here!>

I'd say that a (modified) Secant / Regula Falsi method is a must here. The respective Wikipedia article has the two most common variants; the German version adds a third one. Among these, the Illinois method has been used in several HP software pacs of the Seventies (cf. HP65 and HP41 Standard Pac).
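For reference, the Illinois variant is a small tweak to plain Regula Falsi: whenever the same endpoint survives two iterations in a row, its stored function value is halved, which keeps the bracket from stalling on one side. A sketch (hypothetical Python, same shape as the Regula Falsi sketch earlier in the thread):

def illinois(f, x1, x2, tol=1e-10, max_iter=100):
    f1, f2 = f(x1), f(x2)
    assert f1 * f2 < 0, "the two guesses must bracket the root"
    kept = 0                                   # which endpoint was kept last
    for _ in range(max_iter):
        x = (x1 * f2 - x2 * f1) / (f2 - f1)
        fx = f(x)
        if abs(fx) < tol:
            return x
        if f1 * fx < 0:                        # root in [x1, x]: x2 replaced
            x2, f2 = x, fx
            if kept == 1:                      # x1 kept twice in a row
                f1 /= 2
            kept = 1
        else:                                  # root in [x, x2]: x1 replaced
            x1, f1 = x, fx
            if kept == 2:                      # x2 kept twice in a row
                f2 /= 2
            kept = 2
    return x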

Pauli may add a few more sophisticated methods, e.g. Brent's – I remember the discussion years ago with regard to the WP34s solver.

Dieter
03-11-2018, 08:01 PM (This post was last modified: 03-11-2018 08:10 PM by emece67.)
Post: #33
RE: On Convergence Rates of Root-Seeking Methods
(03-11-2018 02:26 PM)Claudio L. Wrote:  Single starting-point:
* Newton-Raphson
<add others here!>

I'll add the Ostrowski-Traub method. It achieves 4th order convergence with 3 function evaluations on each iteration, while remaining simple enough.

When I wrote a complex solver for the wp34s (it is now the complex solver in its library) I made tests with some different single-starting-point solvers, up to 1024 digits, and IMHO this method gave the best overall performance (see this post here and the next one on the same thread). It has an efficiency index of 1.587, higher than Halley (1.442) and Newton (1.414), and although there are methods with a better efficiency index (Kung-Traub has 1.682), they are more complex and, in practice, in many cases slower because of this complexity.

In any case, keep in mind that many of these single-starting-point methods rely on the computation of the derivative. Many times this derivative is estimated with:
\[f'(x)\approx{f(x+\Delta x)-f(x)\over\Delta x}\]
and, if you do not use a small enough \(\Delta x\), you end up with, say, a 6-correct-digit estimate of the derivative, which is not adequate to get a new 2000-correct-digit estimate of the root from a previous 500-correct-digit one. My advice is: if you are working with \(d\) digits, use relative increments in \(x\) around \(10^{-d/2}\). In my tests, the way you estimate the derivative may have more impact on the execution time than the algorithm used.
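A sketch of that advice (Python/mpmath again, illustrative; the guard for x = 0 is my own addition):

from mpmath import mp, mpf

mp.dps = 1000                                  # working with d = 1000 digits

def derivative(f, x):
    # forward difference with a relative step of 10^(-d/2), so roughly
    # the first d/2 digits of the difference quotient are correct
    h = (abs(x) if x else mpf(1)) * mpf(10) ** (-mp.dps // 2)
    return (f(x + h) - f(x)) / h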

Regards.
03-11-2018, 09:50 PM
Post: #34
RE: On Convergence Rates of Root-Seeking Methods
I'd go with Brent's method (or any variant that looks good to you). The problem with high-order methods is that their radius of convergence may be too small to be useful. Brent's method alternates inverse quadratic interpolation, the secant method, and bisection, achieving convergence of order about 1.84 while still guaranteeing a solution. There are probably ways to incorporate Newton's method if suitable regions can be found; this kicks the convergence rate up a little. A problem with derivatives is that they get rather complicated in several dimensions: the slope becomes a gradient (vector-valued) and the second derivative becomes the Hessian matrix. One can look at something like the Levenberg-Marquardt method, which can be useful but requires user-set parameters. I think Brent's method needs little but the function and a couple of points where the function has different signs.
03-11-2018, 11:26 PM
Post: #35
RE: On Convergence Rates of Root-Seeking Methods
(03-11-2018 08:41 PM)Mike (Stgt) Wrote:  
(03-11-2018 12:53 PM)Gerson W. Barbosa Wrote:  [...]

0 seconds

If all your routines run within 0 seconds, even when digits are set to the max, and the timer returns only coarse values, you may repeat your method 100..1000 times to get some benchmark figures to compare.

0 + something seconds of course, as stated earlier. But that's too vague, I recognize. Around 3 milliseconds @ 2.60 GHz might be closer.

Apparently, the built-in SQR algorithm in Decimal Basic is not optimal:

-----------------------------------------------------------

OPTION ARITHMETIC DECIMAL_HIGH 
LET x = 2
LET s = EXP(LOG(x)/2)
LET t = TIME
FOR i = 1 TO 1000
   LET r = s
   DO 
      LET a = r
      LET b = r*r   
      LET r = (b*(b*(b*(20*x - 5*b + 40) + (120 - 30*x)*x) + x*x*(20*x - 40)) + (8 - 5*x)*x*x*x)/(128*b*b*r)
   LOOP UNTIL ABS(r - a) < 1E-256
NEXT i
PRINT TIME - t;" seconds"
PRINT
PRINT 
END

 3.14  seconds

 Average of 10 runs: 3.013 (min =  2.33; max = 3.25)

-----------------------------------------------------------

OPTION ARITHMETIC DECIMAL_HIGH 
LET x = 2
LET t = TIME
FOR i = 1 TO 1000
   LET r = SQR(x)
NEXT i
PRINT TIME - t;" seconds"
END

  3.87  seconds  

 Average of 10 runs: 3.720 (min =  2.54; max = 4.09)

-----------------------------------------------------------
03-11-2018, 11:36 PM
Post: #36
RE: On Convergence Rates of Root-Seeking Methods
Rather than Newton's method, which requires the derivative, did you mean the secant method?

Dieter is correct, I'd recommend Brent's method for a bracket solver. It is guaranteed to get a solution and is quadratically convergent almost always. There was a modification in the 34S to also include Ridders' method in addition to Brent's secant, inverse quadratic and bisection methods. Testing indicated that it was beneficial, although I don't remember the conditions under which it is used.

There is quite a bit of code before the searching can begin. If the initial estimates don't bracket the solution, a bisection step is done before switching to more advanced methods. There might be a forced secant step after the bisection, I'm not sure anymore. If the first two function evaluations are equal, it does a bisection step, and if that produces a third equal value, an exponentially increasing interval scan is done, stepping right and then left. When there is only one initial estimate, it starts with the interval scan.
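For anyone who wants a reference implementation to test against, SciPy ships Brent's method as scipy.optimize.brentq (double precision only, so it checks the logic rather than the multiprecision behaviour):

from scipy.optimize import brentq

# bracketed solve of f(t) = t^2 - 17: f(4) < 0 < f(5)
root = brentq(lambda t: t*t - 17, 4.0, 5.0)
print(root)        # 4.123105625617661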


Pauli
03-12-2018, 04:08 PM
Post: #37
RE: On Convergence Rates of Root-Seeking Methods
(03-11-2018 09:06 PM)Mike (Stgt) Wrote:  
(03-11-2018 02:26 PM)Claudio L. Wrote:  * Newton-Raphson

For all root-finding functions on HP calculators I must only enter the function. Newton-Raphson also needs its first derivative, other methods need the second derivative in addition, some even higher derivatives. Should the user enter them, or will your planned solver find them on its own?

Ciao.....Mike

Almost every method has an implementation using some variant of difference formulae to replace the derivatives and come up with a workable algorithm. I'm not planning to reinvent the wheel; I'll research what's already being used and go with it. I even looked at some promising 16th order methods, but I'm not so sure how tight the convergence radius is: they might need other methods to get the first few digits, then a few iterations of those methods can quickly give you a lot of digits.
03-12-2018, 04:13 PM
Post: #38
RE: On Convergence Rates of Root-Seeking Methods
(03-11-2018 11:36 PM)Paul Dale Wrote:  Rather than Newton's method, which requires the derivative, did you mean the secant method?
I did mean Newton, with a formula to approximate the derivative, or, if the expression has a derivative the system can determine algebraically, using that.

(03-11-2018 11:36 PM)Paul Dale Wrote:  Dieter is correct, I'd recommend Brent's method for a bracket solver. It is guaranteed to get a solution and is quadratically convergent almost always. There was a modification in the 34S to also include Ridders' method in addition to Brent's secant, inverse quadratic and bisection methods. Testing indicated that it was beneficial, although I don't remember the conditions under which it is used.

There is quite a bit of code before the searching can begin. If the initial estimates don't bracket the solution, a bisection step is done before switching to more advanced methods. There might be a forced secant step after the bisection, I'm not sure anymore. If the first two function evaluations are equal, it does a bisection step, and if that produces a third equal value, an exponentially increasing interval scan is done, stepping right and then left. When there is only one initial estimate, it starts with the interval scan.


Pauli

Brent seems like a powerhouse combo of other methods. If I implement Brent, does it make any sense to offer the user a choice of bisection, etc.? It will probably end up being the default method, unless complex mode is enabled; then something like Muller's method seems to be the only choice?
03-12-2018, 04:38 PM
Post: #39
RE: On Convergence Rates of Root-Seeking Methods
(03-12-2018 04:08 PM)Claudio L. Wrote:  I even looked at some promising 16th order methods, but I'm not so sure how tight the convergence radius is: they might need other methods to get the first few digits, then a few iterations of those methods can quickly give you a lot of digits.

I suspect that such high-order methods will not give any benefit at all at digit counts around 2000.

A benefit of the Ostrowski-Traub method is that it resembles 2 consecutive steps of the Newton-Raphson method, but the 2nd pass is indeed simpler than a full Newton-Raphson step, so you end up with twice the convergence order for less than twice the effort.

It can also be used in the complex plane.
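A sketch of one such iteration (the textbook form of Ostrowski's method as I understand it; hypothetical Python, with fp the derivative):

def ostrowski_step(f, fp, x):
    # Newton step first: y = x - f(x)/f'(x)
    fx = f(x)
    y = x - fx / fp(x)
    # then a cheap correction reusing f(x); only f(y) is newly evaluated,
    # giving 4th order with 3 evaluations and 2 divisions per iteration
    fy = f(y)
    return y - fy * (x - y) / (fx - 2*fy)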

Regards.
03-12-2018, 10:16 PM
Post: #40
RE: On Convergence Rates of Root-Seeking Methods
(03-12-2018 09:51 PM)Mike (Stgt) Wrote:  BTW, the HP-19BII is not mentioned on MoHPC, where the HP-18B is described. Why?

It is included, here.

--Bob Prosperi