On Convergence Rates of Root-Seeking Methods
03-05-2018, 11:39 AM
Post: #21
 Mike (Stgt) Senior Member Posts: 331 Joined: Jan 2014
RE: On Convergence Rates of Root-Seeking Methods
(03-05-2018 03:52 AM)Paul Dale Wrote:
Quote:Would a CORDIC algorithm for so many digits outperform simple Newton (iterating a = .5 * (a + z / a))?

Not a chance. CORDIC is linear in the number of digits, Newton quadratic. CORDIC might be usable to provide a sufficiently accurate starting point for Newton's method however.

Hello Paul!

Thank you for this statement about CORDIC and Newton; it saves me a lot of time I would otherwise have spent digging deeper into CORDIC.

Ciao.....Mike
03-05-2018, 02:13 PM
Post: #22
 ttw Member Posts: 94 Joined: Jun 2014
RE: On Convergence Rates of Root-Seeking Methods
Very often, the convergence rate is asymptotic: it holds either for very large N or for a guess that is close enough. Newton's method often converges linearly until the iterate is much closer to one root than to any other; only then does it become quadratic.
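This is easy to watch numerically. A sketch of mine (not from the thread) using Python's decimal module: count the correct digits of sqrt(2) after each Newton step and watch the count roughly double once the iterate is close.

```python
from decimal import Decimal, getcontext

getcontext().prec = 200
z = Decimal(2)
ref = z.sqrt()                    # reference value, correct to 200 digits
a = Decimal(1)                    # deliberately rough starting guess
digits = []
for _ in range(10):
    a = (a + z / a) / 2           # Newton step for f(a) = a^2 - z
    err = abs(a - ref)
    digits.append(getcontext().prec if err == 0 else int(-err.log10()))
print(digits)                     # correct-digit counts, roughly doubling
```

The count saturates at the working precision of 200 digits.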

A few "better" estimates are available. For example, one can use Gauss-Seidel relaxation for linear equations or (with an even number of elements) a Red-Black relaxation. Red-Black has the same asymptotic convergence rate as Gauss-Seidel but a better average rate; thus it converges more quickly at the beginning.

For those unfamiliar with Red-Black: If the matrix represents a diffusion process, one can treat an even number of points as being colored like a checkerboard. The red sites depend only on themselves and the black sites and vice versa. By alternately solving for the red sites in terms of the black and vice versa, one gets quicker convergence.
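The checkerboard description maps directly onto array slices. A small 1-D diffusion sketch (mine, not from the post; the grid size and sweep count are arbitrary): even ("red") interior sites are updated from their odd ("black") neighbours, then vice versa, each half-sweep fully vectorised.

```python
import numpy as np

n = 33
u = np.zeros(n)
u[-1] = 1.0                       # boundary conditions u(0) = 0, u(1) = 1

for _ in range(3000):
    # red half-sweep: even interior sites from odd neighbours
    u[2:-1:2] = 0.5 * (u[1:-2:2] + u[3::2])
    # black half-sweep: odd interior sites from the freshly updated red sites
    u[1:-1:2] = 0.5 * (u[0:-2:2] + u[2::2])

exact = np.linspace(0.0, 1.0, n)  # steady state is the linear ramp
print(np.max(np.abs(u - exact)))
```

Because each colour sees the other colour's latest values, this is Gauss-Seidel in effect, yet each half-sweep has no sequential dependency.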
03-05-2018, 10:46 PM (This post was last modified: 03-05-2018 11:53 PM by Valentin Albillo.)
Post: #23
 Valentin Albillo Member Posts: 72 Joined: Feb 2015
RE: On Convergence Rates of Root-Seeking Methods
(03-02-2018 01:00 PM)Mike (Stgt) Wrote:  Hi, Namir

what would you, as a root-finding expert, recommend for computing square roots to 100'000 digits and more?

Sorry to intrude. You may want to have a look at this PDF document:

AMS Journals - Computing the square root of 2 to 1,000,000 digits

where they discuss the methods used in the past to compute square roots to many thousand places and the particular one they used to compute the square root of 2 to in excess of one million digits.

Also, to check your very own implementation simply google "square root of 2 to 1 million digits" and you'll immediately find a link to the full result.

Regards.
V.
.
03-06-2018, 12:50 AM
Post: #24
 Mike (Stgt) Senior Member Posts: 331 Joined: Jan 2014
RE: On Convergence Rates of Root-Seeking Methods
(03-05-2018 03:52 AM)Paul Dale Wrote:  It might be worth looking into Halley's method, which has cubic convergence

Typically the gain in convergence order is offset by more complex computations, so overall there is no benefit, as iterating a = .5 * (a + z / a) is pretty fast, at least for common accuracies. But Halley's method is not too demanding in this case. And indeed, iterating
q = a * a
a *= (3 * z + q) / (z + 3 * q)

is faster than Newton for some input values using ooREXX under Win7. Under z/VM, using a (very old) REXX compiler, Newton is faster in all cases. The best performance boost I get is from 'sliding' accuracy: limiting the digits calculated to a minimum and increasing them in each iteration according to the observed convergence.
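Mike's two iterations side by side in Python's decimal module (my port, not his REXX; the 990-digit tolerance is an arbitrary choice): Newton roughly doubles the correct digits per step, Halley triples them, at the cost of an extra multiplication or two.

```python
from decimal import Decimal, getcontext

getcontext().prec = 1000
z = Decimal(2)
eps = Decimal(10) ** -990         # stop when a*a is within 1e-990 of z

def newton_steps(a):
    n = 0
    while abs(a * a - z) > eps:
        a = (a + z / a) / 2                  # 2nd order, one division per step
        n += 1
    return n

def halley_steps(a):
    n = 0
    while abs(a * a - z) > eps:
        q = a * a
        a *= (3 * z + q) / (z + 3 * q)       # 3rd order, one division per step
        n += 1
    return n

print(newton_steps(Decimal(1)), halley_steps(Decimal(1)))
```

Starting from a = 1, Halley needs noticeably fewer passes; whether that wins on the wall clock depends, as Mike says, on the cost of the extra multiplications on the given platform.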

Quote:You could look for Karatsuba and FFT square roots. I remember them being Newton's method with improvements to the basic arithmetic operations.

It looks as if the only way to outperform Newton for square roots at high accuracy is to avoid the many-many-digit divide.

Ciao.....Mike
03-06-2018, 01:28 AM
Post: #25
 Mike (Stgt) Senior Member Posts: 331 Joined: Jan 2014
RE: On Convergence Rates of Root-Seeking Methods
(03-05-2018 10:46 PM)Valentin Albillo Wrote:  You may want to have a look at this PDF document:

AMS Journals - Computing the square root of 2 to 1,000,000 digits

where they discuss the methods used in the past to compute square roots to many thousand places and the particular one they used to compute the square root of 2 to in excess of one million digits.

Thank you for this link. The statement is quite interesting: 'The calculation of the square root of 2 [...] was carried out on Columbia University's IBM 360/91 computer at odd times spread out over more than a year. [...] The calculation, which took about 47.5 hours ...'
In other words, our mainframe is fully booked.

Quote:Also, to check your very own implementation simply google "square root of 2 to 1 million digits" and you'll immediately find a link to the full result.

Well, I'd compare the square of the result with the input, the same way as they did at Columbia University (your above-mentioned link). It is not only the square root of 2 I am looking for. I need square roots as fast as possible here:
Code:
e = 10 ** -(digits() - fuzz() - 1)
a = 1
b = 1 / 2
r = 1
s = b
x = a
DO i = 1 UNTIL k < e
   b = Sqrt(x * b)
   x = a
   a = (a + b) / 2
   c = (a - b) ** 2
   r = r + r
   k = c * r
   s = s - k
end
say 2 * a**2 / s
and there:
Code:
e = 10 ** -((digits() - fuzz()) % 4 + 1)
y = sqrt(2)
a = 6 - 4 * y
y = y - 1
do i = 3 by 2 until y < e
   y = sqrt(sqrt(1 - y ** 4))
   y = (1 - y) / (1 + y)
   a = a * (1 + y) ** 4 - 2 ** i * y * (1 + y + y ** 2)
end
say 1 / a

Do you see the point?
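The point, presumably: both snippets compute pi (Mike identifies them in post #31 as Brent-Salamin and Borwein&Borwein), and their inner loops are dominated by full-precision square roots. A Python decimal sketch of the first one, the Gauss-Legendre / Brent-Salamin AGM iteration (my port, for illustration):

```python
from decimal import Decimal, getcontext

getcontext().prec = 120

# AGM iteration: one full-precision sqrt per pass, digits roughly double
a, b = Decimal(1), 1 / Decimal(2).sqrt()
t, p = Decimal(1) / 4, Decimal(1)
for _ in range(8):
    # the t update uses the old a; (a - b)/2 equals a_old - a_new
    a, b, t, p = (a + b) / 2, (a * b).sqrt(), t - p * ((a - b) / 2) ** 2, 2 * p
pi_approx = (a + b) ** 2 / (4 * t)
print(str(pi_approx)[:52])
```

Eight passes already exceed 100 correct digits; the sqrt() in each pass is where the time goes, which is why its speed matters so much.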

Ciao.....Mike
03-06-2018, 02:53 AM
Post: #26
 Gerson W. Barbosa Senior Member Posts: 946 Joined: Dec 2013
RE: On Convergence Rates of Root-Seeking Methods
(03-04-2018 11:58 PM)Mike (Stgt) Wrote:  A german proverb says 'no reply is also a reply'.
/M.

So is a reply from a non-expert.

Probably not the best methods, but lots of information I was not aware of:

https://en.wikipedia.org/wiki/Methods_of...uare_roots

a) Bakhshali method: interesting as the source is an ancient Hindu manuscript;

b) Taylor Series: quoting from Wikipedia:

Quote:As an iterative method, the order of convergence is equal to the number of terms used. With two terms, it is identical to the Babylonian method. With three terms, each iteration takes almost as many operations as the Bakhshali approximation, but converges more slowly.

Despite that remark, that's the one I chose to play with, because it is easy to follow. Also, it can be expanded at will if one wants to try it with more terms.

The first three terms:

$$\sqrt{x}= r\left ( 1+\frac{x-r^{2}}{2r^{2}}-\frac{\left ( x-r^{2} \right )^{2}}{8r^{4}} \pm \cdots \right )$$

Equivalently, after a manual simplification:

$$\sqrt{x}\approx \frac{3\left ( r^{2}+2x \right )-\left ( \frac{x}{r} \right )^{2}}{8r}$$

Decimal Basic program and output:

OPTION ARITHMETIC DECIMAL_HIGH ! 1000 digits precision
LET t = TIME
INPUT x
LET r = 10^INT(LOG10(x)/2) + 1 ! initial estimate
PRINT r
DO
LET a = r
LET r = (3*(r^2 + 2*x) - (x/r)^2)/(8*r)
PRINT r
LOOP UNTIL ABS(r - a) < 1E-300
PRINT SQR(x) - r
PRINT TIME - t;" seconds"
END

? 17
2
2.609375
3.831457666622230815360207907180368367929063526645300241783992701606003096535603​1813886658895626223976443827895...
4.122244620637600419719958337875107818477074015229252100479776053793938950195150​0012131448438220412295740265140...
4.123105625598878536178011647766885826496727434593597792311874268691149149019651​3723285709444226805259084101669...
4.123105625617660549821409855974076830276053739312118674761681625480610941537298​4751649808177898214772303507785...
4.123105625617660549821409855974077025147199225373620434398633573094954346337621​5935878636508106842966843263884...
4.123105625617660549821409855974077025147199225373620434398633573094954346337621​5935878636508106842966845440409...
4.123105625617660549821409855974077025147199225373620434398633573094954346337621​5935878636508106842966845440409...
8.202714670745631983631354915192547370538210924268951508828665917189926704332598​08358492948699826676293788E-931...
2.02 seconds

This is a test for a small argument, as the initial estimation method will cause an error for large arguments (starting around 920).

RPL program

%%HP: T(3)A(R)F(.);
\<< DUP XPON 2. / ALOG 1. +
DO DUP2 DUP2 SQ SWAP DUP + + 3. * UNROT / SQ - OVER 8. * / SWAP DUP2 / 1. - ABS NIP .00000001 <
UNTIL
END NIP
\>>

Again, this will cause an error for x = 0. No problem, we know Sqrt(0) = 0. Also, these are just tests :-)
03-07-2018, 10:32 PM (This post was last modified: 03-08-2018 02:25 AM by Mike (Stgt).)
Post: #27
 Mike (Stgt) Senior Member Posts: 331 Joined: Jan 2014
RE: On Convergence Rates of Root-Seeking Methods
It's awesome, at least for those who are not awfully bored by this subject.

Triggered by Gerson W. Barbosa's pointer to 'Methods of computing square roots' I tested several algorithms (Bakhshali, Goldschmidt, Halley's method, and reciprocal square roots with Newton and with Halley) against 'classic Newton', i.e. the Babylonian or Heron's method, which is mathematically the same as Newton's, except that the Babylonians did only one iteration, if I am informed correctly.

Beforehand, my conclusion: you have to test on your own system, because the overall performance of a method depends strongly on the implementation of the algorithm and on the speed of the basic math functions available on the platform you use. Besides the choice of method, the strongest effect on my two systems comes from 'sliding accuracy': starting with only a few digits (18..25) and increasing the count with every iteration according to the observed convergence (which is the same as, or very close to, the theoretical rate). Minor but not negligible effects result from coding details, e.g. 0.5 * a instead of a / 2, or a *= ... instead of a = a * ...
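The 'sliding accuracy' trick can be sketched in Python's decimal module (my reconstruction, not Mike's REXX; the precision schedule and the float seed are assumptions): since Newton doubles the correct digits each pass, the working precision can double each pass too, so the early iterations are nearly free.

```python
from decimal import Decimal, getcontext

def sqrt_sliding(z, ndigits):
    """Newton square root with a growing ('sliding') working precision."""
    z = Decimal(z)
    prec = 20                                # cheap low-precision start
    getcontext().prec = prec
    a = Decimal(repr(float(z) ** 0.5))       # ~16 correct digits from the FPU
    while prec < ndigits + 10:
        prec = min(2 * prec, ndigits + 10)   # Newton doubles the correct digits,
        getcontext().prec = prec             # so the precision may double too
        a = (a + z / a) / 2
    a = (a + z / a) / 2                      # one final full-precision polish
    getcontext().prec = ndigits
    return +a                                # unary + rounds to ndigits

print(sqrt_sliding(2, 100))
```

The float seed assumes z fits in a double; a decimal-point shift of the kind Mike describes later would normalize arbitrary inputs into range.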

What I did: I have only two systems with (virtually) unlimited numbers of digits, ooREXX under Win7 and REXX on z/VM with an old compiler. I took the methods and their algorithms from the above-mentioned Wiki page, or, for Halley's method, derived the iteration formula with the help of an HP40G's CAS (a virtual calculator, of course). Then I tested the convergence with limited accuracy (2000..5000 digits) for a few arguments only (2, 3, 1.001, 81, .0081, 3.6); next I compared the time elapsed for the different methods with 9000 digits, from 0.1 to 10 in steps of .1, to get some more sample points. The same again, but with 'sliding accuracy' up to 20000 digits. The same again for 3rd and 4th roots too (but not for the whole bunch or flock of methods).

What I observed: the outcome depends on the argument, alas in such a way that I could weep, my tears running crosswise down my back. No, kidding aside, it's not funny. For several arguments, single methods take one loop more to attain the requested accuracy; the timing also varies, by up to 30% and more. But to be honest, I take the elapsed 'wall clock time', not a convincing metric on a multi-process, virtually parallel, time-slice system, and a few readings are obviously off, disturbed by the screen saver changing the picture or by this and that I do not know. The most perturbing observation concerns the 4th root, where sqrt(sqrt()) with Newton and sliding accuracy is fastest in the range 0.1 .. 5.1, and from then on (tested up to 1000 with different step widths) it is 3.5 .. 5 times slower than before. For the square root and 3rd root the timing only varied within 30%.
Reason enough to end testing with ooREXX; I continued on the mainframe only. It runs under Hercules, and even though it is a "single-user host" there are disconnected machines (not connected to a terminal) in the background doing some housekeeping. To prevent effects on the elapsed-time readings I dedicated a processor to my user ID. The fluctuating performance of Win7 still has its impact. Anyway, my impression is that the differences in timing are lower. An impression only, until I compiled the 4th-root routine and found the same several-times-slower execution of sqrt(sqrt()) from 5.2 on. WTF!

What I did not: I have not yet taken the results for a 'serious' evaluation, I mean spreadsheets, graphs, etc. For that I should take 3..5 measurements to find out the "noise in the wall clock".
Also, I did not yet dig deeper into Karatsuba and FFT square roots. I have not yet found papers I can grasp with my math education slowly falling into oblivion. In addition, on my platforms I will never be able to compete with PiFast, which was also used to compute sqrt(2) to some degree.

What I will do: for a limited range, sqrt(sqrt()) using sliding accuracy is the fastest 4th root tested. To extend this advantage to the full range of arguments I will normalize the input, and correspondingly the output, with a fast decimal-point shift (similar to the input adjustment for Wilkes, Wheeler and Gill's method). For the 3rd root I will probably use Halley's method; for the square root, maybe the reciprocal one. I'll have to take a closer look first.

Why I do this: for fun.

Ciao.....Mike

EDIT: the slower execution of sqrt(sqrt()) from 5.2 on (from 5.1212510529 on, to be precise) was my fault. I gave the first sqrt() the wrong initial guess, which led to a delta between the first iteration and the guess of .ge. 0.95, which rounds to 1; this was not foreseen as input for the accuracy adjustment, which fell back to a safe value, the full number of digits requested, so the first sqrt() ran from iteration 2 on with full accuracy. Mea culpa, mea maxima culpa. But my faith in REXX is restored. Whew!
So my above-mentioned to-do is obsolete. One more Whew!
03-08-2018, 01:04 AM
Post: #28
 ttw Member Posts: 94 Joined: Jun 2014
RE: On Convergence Rates of Root-Seeking Methods
There's a paper that I don't remember somewhere (before Bailey, Borwein and Plouffe) which discussed the time to compute Pi by various methods. The linear, quadratic, cubic, and quartic methods took about the same time overall. What happens is that the cost of the multi-precision arithmetic dominates the effect of the convergence rate.
A very loose estimate can be obtained by assuming that arithmetic on N digits takes time proportional to N^2. For quadratic convergence up to about 100 digits, digit lengths of (1,2,4,8,16,32,64,100) can be used, and for cubic, (1,3,9,27,81,100). The sums of squares are 15461 and 17381 respectively. Although the quadratic case uses 7 iterations and the cubic only 5, the numbers of elementary operations are nearly the same.
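The estimate reproduced in a few lines (the cubic schedule here includes the 9-digit stage, which is what makes the quoted 17381 come out):

```python
# Assume N-digit arithmetic costs ~N^2 and sum the squared digit lengths
# along each precision schedule.
quadratic = [1, 2, 4, 8, 16, 32, 64, 100]   # digits double per iteration
cubic = [1, 3, 9, 27, 81, 100]              # digits triple per iteration

def cost(schedule):
    return sum(n * n for n in schedule)

print(cost(quadratic), cost(cubic))   # 15461 17381
```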
03-08-2018, 01:21 AM (This post was last modified: 03-11-2018 01:09 PM by emece67.)
Post: #29
 emece67 Senior Member Posts: 353 Joined: Feb 2015
RE: On Convergence Rates of Root-Seeking Methods
(03-07-2018 10:32 PM)Mike (Stgt) Wrote:  It's awesome, at least for those who are not awfully bored by this subject.

Yes, it is awesome!

I've also made my own tests, in my case in Python with the mpmath package for the arbitrary-precision arithmetic. I've tested these algorithms (all are generic root-seeking algorithms):
• Halley (3rd order convergence, 1 division)
• Grau+Díaz-Barrero (6th order, 2 divisions)
• Newton-Raphson (2nd order, 1 division)
• Ostrowski-Traub (4th order, 2 divisions)
• Kung-Traub (8th order, 3 divisions, but more complex)
• Potra-Ptak (3rd order, 2 divisions)

I've not used sliding accuracy (for now), but fixed precisions of 10000, 100000 and 1 million digits, with only 2 arguments: 2 and 17. I tested only sqrt().

My finding is that the usual parameter used to compare the speed of root-seeking algorithms, the efficiency index $$\gamma=\sqrt[r]p$$ (where $$p$$ is the convergence order of the method and $$r$$ the number of function evaluations needed in each iteration), is not useful here. In this case the evaluation of the function to be solved requires only one multiplication and one subtraction, while the algorithm itself requires one or more divisions, and divisions are far more time-consuming at such large digit counts; the function evaluations are thus simpler than the other operations the algorithm needs.

Defining instead a "division-aware efficiency index" as $$\gamma_d=\sqrt[d]p$$ (where $$d$$ is the number of divisions needed in each iteration), one gets a reliable predictor of the speed of the algorithm. Computing this index for the above algorithms gives:
• Halley: 3
• Grau+Díaz-Barrero: 2.45
• Newton-Raphson: 2
• Ostrowski-Traub: 2
• Kung-Traub: 2
• Potra-Ptak: 1.7
so Halley is the fastest, then Grau+Díaz-Barrero, then Newton-Raphson, Ostrowski-Traub and Kung-Traub, with Potra-Ptak the slowest. And this is exactly what I've found, with only Kung-Traub slower than expected (but still faster than Potra-Ptak) because, I think, of its higher complexity.
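The index tabulated above in a few lines of Python (accented names ASCII-fied in the keys):

```python
# Division-aware efficiency index gamma_d = p ** (1/d), with p the convergence
# order and d the number of divisions per iteration.
methods = {
    "Halley":            (3, 1),
    "Grau+Diaz-Barrero": (6, 2),
    "Newton-Raphson":    (2, 1),
    "Ostrowski-Traub":   (4, 2),
    "Kung-Traub":        (8, 3),
    "Potra-Ptak":        (3, 2),
}
gamma_d = {name: p ** (1 / d) for name, (p, d) in methods.items()}
for name, g in sorted(gamma_d.items(), key=lambda kv: -kv[1]):
    print(f"{name:18} {g:.2f}")
```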

So the moral is: when using this kind of classic root-seeking algorithm to compute square roots, use one with a high convergence order, few divisions, and a formula that is not too cumbersome (this is the reason I added Grau+Díaz-Barrero to my tests).

Now I will check if the Taylor method used by Gerson can be used to get a 4th order method with only one division. Such a method, if not too complex, may be even faster than Halley.

In my tests, the more digits used, the higher the correlation between $$\gamma_d$$ and the speed of the method.

For the curious, it took 218 s for Halley's algorithm to compute $$\sqrt2$$ to 1 million digits (Intel Core-i7@2.7 GHz, Win10).

Regards.

César - Information must flow.
03-08-2018, 02:40 AM
Post: #30
 Mike (Stgt) Senior Member Posts: 331 Joined: Jan 2014
RE: On Convergence Rates of Root-Seeking Methods
(03-08-2018 01:21 AM)emece67 Wrote:  Defining instead a "division-aware efficiency index" as $$\gamma_d=\sqrt[d]p$$ (where $$d$$ is the number of divisions needed in each iteration), one gets a reliable predictor of the speed of the algorithm.

With your formula, how would you estimate the efficiency of division-free methods?

/M.
03-08-2018, 03:07 AM
Post: #31
 Mike (Stgt) Senior Member Posts: 331 Joined: Jan 2014
RE: On Convergence Rates of Root-Seeking Methods
(03-08-2018 01:04 AM)ttw Wrote:  There's a paper that I don't remember somewhere ...

Until you find it again you may take this one.
BTW, the code snippets here are Brent-Salamin and Borwein&Borwein.

/M.
03-08-2018, 04:17 AM
Post: #32
 ttw Member Posts: 94 Joined: Jun 2014
RE: On Convergence Rates of Root-Seeking Methods
(03-08-2018 03:07 AM)Mike (Stgt) Wrote:
(03-08-2018 01:04 AM)ttw Wrote:  There's a paper that I don't remember somewhere ...

Until you find it again you may take this one.
BTW, the code snippets here are Brent-Salamin and Borwein&Borwein.

/M.

The first one is the paper I remembered. Thanks.
03-08-2018, 05:35 AM
Post: #33
 Gamo Member Posts: 196 Joined: Dec 2016
RE: On Convergence Rates of Root-Seeking Methods
I'm new to this subject and I find it very interesting.

My first time to try the Root-Seeking feature is from the HP-15C SOLVE function.
I don't know much about the convergence speed of the SOLVE function because I don't have an actual HP-15C; I use the PC emulator, which always runs very fast on all kinds of equations.

I just got the HP-11C, which doesn't have the SOLVE feature of the 15C. Then I noticed that the 11C Owner's Handbook has a Newton's Method program, so this was the second time I tried a root-seeking program. It turned out to be very slow, and I noticed the reason is the 11C's slow CPU itself.

I happen to try other program that use "Regula Falsi method"
I try on 11C this method is much faster than the Newton's Method with this Regula Falsi program you need to provide reference points to X1 and X2 while providing the reference point this can be check right away if that two points is far apart or not when done give it the tolerant and start to compute when done can check for the accuracy against 0 value. If the answer not accurate enough just press R/S again for better more accuracy.

I'm new to this algorithm; I'll post the program later on.

Gamo
03-08-2018, 08:00 PM
Post: #34
 Dieter Senior Member Posts: 1,888 Joined: Dec 2013
RE: On Convergence Rates of Root-Seeking Methods
(03-06-2018 02:53 AM)Gerson W. Barbosa Wrote:  Decimal Basic program and output:

OPTION ARITHMETIC DECIMAL_HIGH ! 1000 digits precision
LET t = TIME
INPUT x
...
PRINT TIME - t;" seconds"
END

I suggest you place the INPUT line first, i.e. before TIME is saved in t. ;-)

Dieter
03-08-2018, 08:10 PM (This post was last modified: 03-08-2018 08:11 PM by Dieter.)
Post: #35
 Dieter Senior Member Posts: 1,888 Joined: Dec 2013
RE: On Convergence Rates of Root-Seeking Methods
(03-08-2018 05:35 AM)Gamo Wrote:  I'm new to this subject and I found this subject is very interesting.

First of all: the original post by Namir was indeed about root-seeking methods, i.e. methods for solving f(x)=0. But the last posts deal with a different topic: methods for calculating the square root of a number.

(03-08-2018 05:35 AM)Gamo Wrote:  I happen to try other program that use "Regula Falsi method"
I try on 11C this method is much faster than the Newton's Method with this Regula Falsi program you need to provide reference points to X1 and X2 while providing the reference point this can be check right away if that two points is far apart or not when done give it the tolerant and start to compute when done can check for the accuracy against 0 value.

Wow, that was quite a long sentence. Even without a single period or comma. ;-)

Anyway, the Newton method usually converges at a similar speed to Regula Falsi and its variants. But the latter has the advantage that the user can provide two guesses that bracket the solution. It may be faster here and there because each iteration requires only one call of f(x), instead of two for the Newton method (f(x) and f(x+h), or f(x) and f'(x)).
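A minimal sketch of the idea (mine, not Gamo's 11C program): regula falsi keeps a bracketing pair and replaces one endpoint with the secant line's x-intercept each pass, so only one new evaluation of f is needed per iteration.

```python
def regula_falsi(f, x1, x2, tol=1e-10, max_iter=100):
    f1, f2 = f(x1), f(x2)
    assert f1 * f2 < 0, "initial points must bracket a root"
    x = x1
    for _ in range(max_iter):
        x = x2 - f2 * (x2 - x1) / (f2 - f1)   # secant x-intercept
        fx = f(x)
        if abs(fx) < tol:
            break
        if f1 * fx < 0:
            x2, f2 = x, fx                    # root lies in [x1, x]
        else:
            x1, f1 = x, fx                    # root lies in [x, x2]
    return x

print(regula_falsi(lambda x: x * x - 2.0, 1.0, 2.0))
```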

BTW, Gamo, I just noticed the PM you sent a few days ago. I now sent a reply – and an 11C program. ;-)

Dieter
03-08-2018, 09:20 PM
Post: #36
 Mike (Stgt) Senior Member Posts: 331 Joined: Jan 2014
RE: On Convergence Rates of Root-Seeking Methods
(03-08-2018 08:10 PM)Dieter Wrote:  First of all: the original post by Namir indeed was about root-seeking methods, i.e. methods for solving f(x)=0. But the last posts deal with a different topic, they are about methods for calculating square roots of a number.

Could you please explain in detail the difference between root-seeking, i.e. solving f(x)=0, and calculating square roots, for example solving x^2-z=0 with z being a number? Fair enough, up to now the focus was on positive real numbers only.

Ciao.....Mike

BTW, I would be so glad if I could "calculate" square roots like my father did with a pencil on a piece of paper, but he always stopped after 4..5 digits. A few digits more and "my" HP41 stops too. But -- did this calculator calculate or compute it?
03-08-2018, 10:43 PM (This post was last modified: 03-10-2018 03:59 AM by Gerson W. Barbosa.)
Post: #37
 Gerson W. Barbosa Senior Member Posts: 946 Joined: Dec 2013
RE: On Convergence Rates of Root-Seeking Methods
(03-08-2018 08:00 PM)Dieter Wrote:
(03-06-2018 02:53 AM)Gerson W. Barbosa Wrote:  Decimal Basic program and output:

OPTION ARITHMETIC DECIMAL_HIGH ! 1000 digits precision
LET t = TIME
INPUT x
...
PRINT TIME - t;" seconds"
END

I suggest you place the INPUT line first, i.e. before TIME is saved in t. ;-)

Previously I had x = 2 there. Then I simply replaced it with INPUT x to test the program with various arguments without having to edit it every time. That's what happens when you're in a hurry. 2.02 seconds seemed a bit of an excessive time just for computing the square root of 17 to 930+ places. The 0.02 seconds I get now is more like it :-)

Decimal BASIC was used by Tom L. in another thread. Rexx has also been suggested, but the former was easier to find and install. The latter appears to be much better, but I haven't tried it yet.

Decimal BASIC

Gerson.

------------------------------

PS:

0 seconds! (Ok, 0 seconds + something)

OPTION ARITHMETIC DECIMAL_HIGH ! 1000 digits precision
INPUT x
LET r = EXP(LOG(x)/2)
LET t = TIME
PRINT r
DO
LET a = r
LET b = r*r
LET r = (3*b*(b + 2*x) - x*x)/(8*b*r)
PRINT r
LOOP UNTIL ABS(r - a) < 1E-300
PRINT SQR(x) - r
PRINT TIME - t;" seconds"
END

? 17
4.1231056256176604
4.12310562561766054982140985597407702514719922537352...

...

4.123105625617660549821409855974077025147199225373620434398633573094954346337621​59358786365081068429668454404093921414161530142084041586836307954814574690697767​70232664362408630877905675723857082255213807325630838603091427498046719135293221​47978718167815796475906080565496973900766721383689212106708921029005520264976997​22778846139925992459137314565781925474362237723251578307340066247689146089499331​41024362794433862805526374750609050808692574826754037575769274646316663510330968​17122919874195864431971054705958485725931943603620656058152613585046428067872150​06410491422236752224348673725804703777127499856657121857043210030360260650648715​46906982815468464595645034418499305976395090786199590433342077830367324661050023​83305603648597891517738125149725101393295630516977396156134483704021469549517283​77477512833208677543247930196450385894596773652195702235648129282323237309165004​47557094601657217491431755474511227183616353174924756240651955600227559343988224​60451518623945769412122844523427764255913

...

0 seconds
03-08-2018, 11:16 PM (This post was last modified: 03-09-2018 04:55 AM by Mike (Stgt).)
Post: #38
 Mike (Stgt) Senior Member Posts: 331 Joined: Jan 2014
RE: On Convergence Rates of Root-Seeking Methods
(03-08-2018 10:43 PM)Gerson W. Barbosa Wrote:  It was used by Tom L. in another thread. Rexx has also been suggested, but that was easier to find and install. The latter appears to be much better, but I haven't tried it yet.

Decimal BASIC

Just stumbled on Calc, a C-style arbitrary-precision calculator, which obviously was used for some serious tasks (if you count prime numbers as serious). I did not test it yet.

Quote:The latter appears to be much better

What do you exactly mean by 'much better'?

Ciao.....Mike

EDIT: As I do not have Win10 yet (where you could run Linux programs without installing cygwin), I tested the above-mentioned Calc under LUbuntu under VMware Workstation Player. Installation of calc was very simple: I opened an LXterminal window, entered calc and got a message like 'not installed yet, to install type ...'. I did so and -- will I really get the calc I want? -- a moment later I could type 2+3*4 and got 14 as the reply. 2^6972593-1 starts output after about 12 seconds, runs for some 10 seconds, and the last 16 digits are the same as published here -- so with this test I am convinced that I will use this tool when I really need arbitrary precision. Why? Yesterday I changed my 2^n-1 routine in REXX for a 20% speed-up and it still took 2h19 for that same Mersenne prime. I am a big fan of REXX, even though it is not for real big number crunching.
/M.
03-09-2018, 02:00 PM (This post was last modified: 03-09-2018 04:31 PM by emece67.)
Post: #39
 emece67 Senior Member Posts: 353 Joined: Feb 2015
RE: On Convergence Rates of Root-Seeking Methods
(03-08-2018 01:21 AM)emece67 Wrote:  Now I will check if the Taylor method used by Gerson can be used to get a 4th order method with only one division. Such a method, if not too complex, may be even faster than Halley.

I've tested this 4th-order Taylor with only 1 division. The results are a little disappointing. At 100000 digits it behaves like the other 4th-order method I've tested (Ostrowski-Traub): the effect of avoiding a division is balanced by the more complex algorithm and also by needing one more iteration. At 1 million digits it gets faster than Ostrowski-Traub and Newton-Raphson, and comparable to Grau+Díaz-Barrero, but still slower than Halley.

The test at 10 million digits is now in progress (but it may take some days to complete).

Regards.

César - Information must flow.
03-10-2018, 11:16 PM
Post: #40
 Mike (Stgt) Senior Member Posts: 331 Joined: Jan 2014
RE: On Convergence Rates of Root-Seeking Methods

(03-08-2018 08:10 PM)Dieter Wrote:  First of all: the original post by Namir indeed was about root-seeking methods, i.e. methods for solving f(x)=0. But the last posts deal with a different topic, ...

For a real hands-on experience you may either pick an HP-34C from Antony Nixon (you also need MultiCalc.zip) or take the HP-15C from here (both are for Windoze only). After installation, enter this program on the HP-34C:
Code:
001-25,13,11  LBL A
002-   15  3  x^2
003-   24  0  RCL 0
004-      41  -
005-   25 12  RTN

respectively on the HP-15C
Code:
001-42,21,11  LBL A
002-   43 11  x^2
003-45,30, 0  RCL- 0
004-   43 32  RTN

Switch back from PRGM mode to RUN mode, then
2 STO 0
0 SOLVE A

and take a closer look at the result. Do you know it? Yes -- it is

$$\frac{7}{5}\left ( 1-\frac{1}{50}\right )^{-\frac{1}{2}}$$
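That closed form is just $$\sqrt2$$ in disguise, as befits solving x^2 - 2 = 0. An exact check with Python's fractions module (my illustration, not from the post):

```python
from fractions import Fraction

# (7/5) * (1 - 1/50) ** (-1/2), squared, should be exactly 2:
# (49/25) / (49/50) = (49 * 50) / (25 * 49) = 2
value_squared = Fraction(7, 5) ** 2 / (1 - Fraction(1, 50))
print(value_squared)   # 2
```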

About the two "emulators": Tony's offers the nice feature (for the inquisitive...) of a look behind the curtains: the firmware, all data registers, the entered user program in human-readable form, observing the execution of your program with the 'current line' highlighted, and more. In contrast, the virtual HP-15C runs at maximum speed and comes with the original user manual. There, on p. 180 ff., you may read more details about SOLVE. I learned that it takes two arguments, in X and Y. At first I entered only one and still had the result from a previous test in Y -- and wondered why it took only four iterations to get the answer so accurate.

Ciao.....Mike