On Convergence Rates of Root-Seeking Methods
03-17-2018, 11:59 PM (This post was last modified: 03-18-2018 01:37 AM by emece67.)
RE: On Convergence Rates of Root-Seeking Methods
(03-16-2018 01:49 AM)Claudio L. Wrote:  It seems the simplest methods are really hard to beat. All that complexity to get a 10% speedup only some times? Even Halley was faster than Newton on only one table.

In fact, the order of convergence is a (really) bad measure for the "speed" of an iterative root-seeking algorithm.

Suppose we want to estimate the time needed for a root-seeking algorithm to polish a root estimate with \(n_0\) correct digits into one with \(n_t\) correct digits. If the method has order of convergence \(r\), each iteration multiplies the number of correct digits roughly by \(r\), so we expect it to need \(\lceil\log_r(n_t/n_0)\rceil\) iterations to reach that precision. If the method needs \(d\) function evaluations per iteration, then the expected number of function evaluations to carry out such polishing is \(d\lceil\log_r(n_t/n_0)\rceil\). If the computation time is dominated by the time needed for each function evaluation, then the time required for this polishing is proportional to that quantity. Thus, we can define a "speed" \(s\) of the algorithm as a quantity inversely proportional to that time (after some simple manipulation, dropping the ceiling operator and the constant factor \(\ln(n_t/n_0)\)):
\[s\propto\ln\sqrt[d]r=\ln\gamma\]
where \(\gamma=\sqrt[d]r\) is the efficiency index of the algorithm.
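As a quick sanity check on that bookkeeping, here is a minimal Python sketch; the digit counts (4 correct digits polished up to 1000) are just made-up example values:

```python
from math import ceil, log

def evals_needed(r, d, n0, nt):
    """Function evaluations to polish n0 correct digits into nt correct
    ones with a method of order r costing d evaluations per iteration."""
    iterations = ceil(log(nt / n0, r))
    return d * iterations

# Example: polishing 4 correct digits up to 1000
for name, r, d in [("Newton", 2, 2), ("Kung-Traub", 8, 4)]:
    print(f"{name:10s} {evals_needed(r, d, 4, 1000):3d} evaluations")
```

With these numbers Newton takes 8 iterations (16 evaluations) and Kung-Traub 3 iterations (12 evaluations); the exact ratio wobbles because of the ceiling, but the asymptotic trend is the \(\ln\gamma\) figure above.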

Putting numbers on this, you find that the "powerful" Kung-Traub method, with its impressive 8th-order convergence (and 4 function evaluations per iteration), may be only 50 % faster than Newton (2nd order, 2 function evaluations per iteration). Add to this that the Kung-Traub iteration is much more complex than the Newton one and you end up concluding that, in practice, Newton may well be faster, unless the time required for each function evaluation is much longer than the overhead of the algorithm itself.
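For reference, a few lines of Python compute \(\ln\gamma\) for the methods mentioned in this thread and normalize it to Newton (taking \(d=3\) for Halley assumes one evaluation each of \(f\), \(f'\) and \(f''\) per step, which is my bookkeeping, not something fixed in the thread):

```python
from math import log

# (order r, evaluations per iteration d)
methods = {"Newton": (2, 2), "Halley": (3, 3),
           "Ostrowski-Traub": (4, 3), "Kung-Traub": (8, 4)}

newton = log(2) / 2                           # ln(gamma) for Newton
for name, (r, d) in methods.items():
    speed = log(r) / d                        # ln(gamma) = ln(r)/d
    print(f"{name:16s} gamma = {r ** (1 / d):.3f}   "
          f"relative speed = {speed / newton:.2f}")
```

This prints relative speeds of 1.00, 1.06, 1.33 and 1.50 respectively.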

Even more, if we accept the Kung-Traub conjecture, which states that the maximum order of convergence attainable by an iterative root-seeking method using \(d\) function evaluations per iteration is \(r=2^{d-1}\), then the maximum speed we can reach with such optimal algorithms is:
\[s_\max\propto{d-1\over d}\]
so the speed of such an optimal method (Newton, Ostrowski-Traub and Kung-Traub are optimal with \(d\) equal to 2, 3 and 4 respectively; Halley and the other non-bracketing methods mentioned in this thread are sub-optimal) depends only on the number of function evaluations per iteration. The slowest is Newton; Ostrowski-Traub shows a 33 % speed-up over it and Kung-Traub a 50 % one. As the methods get more and more complex, the maximum speed-up approaches 100 % above Newton (twice the speed, half the time), but no more. Given the large overhead of such complex algorithms, there is no point in using them except at large digit counts, where the algorithm overhead may be negligible against the time required for each function evaluation. For everyday use, only Ostrowski-Traub has any real chance of beating Newton, and it often does, but even then do not expect speed-ups greater than 33 %.
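To make that asymptote explicit: under the conjecture the speed of an optimal \(d\)-evaluation method relative to Newton is \(2(d-1)/d\), so the best possible speed-up is \((d-2)/d\), which creeps toward, but never reaches, 100 %. A few lines of Python show this (the values of \(d\) are arbitrary examples):

```python
# Best possible speed-up over Newton for an optimal method using d
# function evaluations per iteration, assuming r = 2**(d - 1)
for d in (2, 3, 4, 6, 10, 100):
    speedup = (d - 2) / d                    # = 2*(d-1)/d - 1
    print(f"d = {d:3d}: at most {speedup:6.1%} faster than Newton")
```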

Also, note that the time an algorithm needs to move from a "distant" initial estimate into the vicinity of the root (where it starts to show its convergence order) is not at all easy to predict from the convergence order, hence the valuable information in Varona's paper.

Thus, I think it is better to spend time optimizing other parts of the code (convergence criteria, computation of the derivative, ...) than on trying out exotic algorithms.
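As an illustration of where that effort goes, here is a bare-bones Newton polisher in Python; the forward-difference step \(h\) and the stopping test are precisely the kind of details I mean (both choices below are just a sketch, not a recommendation):

```python
def newton_polish(f, x0, tol=1e-12, max_iter=50):
    """Refine a root estimate x0 of f with Newton's method, using a
    forward-difference derivative (2 function evaluations per step)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        h = 1e-7 * (abs(x) + 1e-7)              # derivative step: worth tuning
        dfx = (f(x + h) - fx) / h
        step = fx / dfx
        x -= step
        if abs(step) <= tol * (abs(x) + tol):   # convergence criterion: likewise
            break
    return x

# Example: the root of x^2 - 2 near 1.5
print(newton_polish(lambda x: x * x - 2, 1.5))  # ~1.41421356...
```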

Regards.