Re: HP-15C Mini-challenge: My Original Solutions & Comments Message #55 Posted by Valentin Albillo on 16 July 2006, 7:33 p.m., in response to message #1 by Valentin Albillo
Hi all,
Thanks again for your interesting and highly imaginative efforts to solve this mini-challenge,
even if at times they were somewhat derivative (or 'integrative', come to think of it ...).
I hope you enjoyed it all; here are my original solutions, with a thorough explanation of how
one can empirically arrive at the correct idea, plus the theory behind it.
As stated in the challenge's description, the task is to find a way to use the
well-known series:
Pi = 4 * (1 - 1/3 + 1/5 - 1/7 + ... )
to compute Pi to 10 correct places while keeping program size and running time small. A direct
approach seems doomed to failure as this series converges so incredibly slowly that millions
of terms must be added up to get no more than 6 or 7 correct digits, let alone 10. To clearly
demonstrate it, this simple 15-step HP-15C program, which will serve as the basis for my
solutions, will add up any specified even number of terms from the series:
01*LBL A 06 0 11 STO 0
02 STO I 07*LBL 0 12 RCL/I
03 STO+I 08 DSE I 13 +
04 4 09 RCL 0 14 DSE I
05 STO 0     10 CHS      15 GTO 0
To improve accuracy, the program begins adding up the smallest terms and goes
backwards until it reaches the largest term, 1. Upon running it, you'll
see that, as expected, the convergence is awfully slow. Let's try
to add 4 terms, then 44, then 444:
4 , GSB A -> 2.895238096 (barely one correct digit)
44 , GSB A -> 3.118868314 (barely three correct digits)
444, GSB A -> 3.139340404 (barely four correct digits)
This last result took almost 7 minutes, yet we've got no more than four
not-so-correct digits, so the situation seems hopeless. At this point, it seems we can do no better
than try some relatively complicated techniques, such as the Euler-Maclaurin formula
or extrapolation mechanisms for the summation of infinite alternating series such as
this one. This would incur a serious penalty in vastly increased complexity
and program size, as seen in several working programs posted by contributors.
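For reference, the basic program's backward summation can be replayed in a few lines of Python; the name leibniz_sum is mine, and double precision stands in for the 15C's 10-digit arithmetic, so the last digit or two may differ from the calculator's display:

```python
def leibniz_sum(n):
    """Add n terms of 4*(1 - 1/3 + 1/5 - ...), smallest term first,
    mirroring the backward loop of the 15C program above."""
    total = 0.0
    for k in range(n - 1, -1, -1):   # k = n-1 down to 0
        total += (-1) ** k * 4.0 / (2 * k + 1)
    return total

# The same painfully slow convergence seen on the 15C:
print(leibniz_sum(4))    # 15C shows 2.895238096 (10-digit rounding)
print(leibniz_sum(44))   # 15C shows 3.118868314
print(leibniz_sum(444))  # 15C shows 3.139340404
```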
A bit of sleuthing:
However, math is full of surprises and serendipitous findings are bound to happen when
and where you least expect them, as we'll immediately see. Let's use our basic program
to add up exactly 50 terms:
50, GSB A -> 3.121594653
Now, this has a fairly large error, as we're getting 3.12+ instead of 3.14+, so that
the 3rd digit is already 2 units off. But don't you notice something truly eerie?
Yes, we get a "2" where a "4" should be. But the following three digits (159) are correct !
Then we get another wrong digit, a "4" which should be a "2", but then the next three
digits (653) are once again correct !!
Let's align our value and the correct Pi value and carefully examine the differences:
Sum -> 3.121594653
PI -> 3.141592653 (58979...)
----------------------
+2 -2
which, in absolute values means:
+0.02 -0.000002
Let's see if this is just a weird coincidence, or else it also happens for other
numbers of terms being added up. Let's try 100 terms, for instance:
100, GSB A -> 3.131592904
3.141592654
------------------
+1 -25
+0.01 -0.00000025
and we see that our initial impression does hold, because after one wrong digit, the
subsequent four digits (1592) are indeed correct; then come another couple of wrong digits, and
once again a correct digit follows.
Let's call these two 'corrections' C1 and C2 (i.e: +0.02 and -0.000002 for 50 terms,
+0.01 and -0.00000025 for 100 terms, respectively) and see how they relate to the number
of terms being used. A little insight or a little data-fitting will allow
us to issue the following plausible, tentative hypothesis, where N is the number of terms:
C1 = 1/N
C2 = -0.25/N^3 = -1/(4N^3)
which do indeed work for N = 50 and N = 100 terms. Now we'll put our hypothesis to the test,
by using it to predict the values of C1 and C2 for N=200 terms:
Prediction for N = 200 -> C1 = 1/200 = 0.005
C2 = -1/(4*200^3) = -0.000000031
and we'll now check if they agree with actual results, by running our basic program with
200 as the input value:
200, GSB A -> 3.136592685
3.141592654
-----------------
+5 -31
which indeed do exactly agree with our predicted corrections, +0.005 and -0.000000031.
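The whole check can also be replayed numerically in a few lines of Python; leibniz_sum is again my name for the basic program's summation, run in double precision rather than the 15C's 10 digits:

```python
import math

def leibniz_sum(n):
    # n terms of 4*(1 - 1/3 + 1/5 - ...), smallest term first
    return sum((-1) ** k * 4.0 / (2 * k + 1) for k in range(n - 1, -1, -1))

for n in (50, 100, 200):
    s = leibniz_sum(n)
    c1 = 1.0 / n
    c2 = -1.0 / (4 * n ** 3)
    # After both corrections the residual error shrinks to roughly 5/(16*n^5)
    print(n, abs(s + c1 + c2 - math.pi))
```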
At this point, we can be fairly sure that our empirical finding holds, and can then
proceed to make use of it by simply computing one or both correction terms, C1 and
C2, and using them to refine the sum provided by our basic program, as follows:
First version, using just one correction term, C1 = 1/N:
Just two little changes to our basic program will compute and add the correction
term C1, resulting in a program just a single step longer, at 16 steps, yet much faster and
more accurate:
01*LBL A
02 STO I
03 STO+I 50, GSB A -> 3.141594653 in 55"
04 1/X
05 4 error = 2E-6, actually all digits are correct except the "4" !
06 STO 0     error = 2.5E-7, actually all digits are correct except the "4" !
07 X<>Y 100, GSB A -> 3.141592904 in 1'50"
08*LBL 0
09 DSE I error = 2.5E-7
10 RCL 0
11 CHS 400, GSB A -> 3.141592658 in 7'45"
12 STO 0
13 RCL/I error = 4E-9
14 +
15 DSE I
16 GTO 0
so this simple version, with just the one correction term C1 does achieve a 10-digit correct
value (within 4 ulps) while using just 400 terms, in less than 8 minutes. That's many orders
of magnitude better than the basic program could achieve, but we can do still much better:
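In Python terms (pi_c1 is my name, not part of the original program), this first version amounts to adding 1/N to the bare sum:

```python
import math

def pi_c1(n):
    """First version: n-term Leibniz sum plus the single correction C1 = 1/n."""
    s = sum((-1) ** k * 4.0 / (2 * k + 1) for k in range(n - 1, -1, -1))
    return s + 1.0 / n

# The error shrinks like 1/(4*n^3), matching the 15C results above:
for n in (50, 100, 400):
    print(n, abs(pi_c1(n) - math.pi))
```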
Second version, using two correction terms, C1 = 1/N and C2 = -1/(4N^3):
A few stack manipulations will allow us to compute and use both correction
terms, C1 and C2 while using just 5 additional steps, for a very
small total of just 21 steps:
01*LBL A
02 STO I
03 STO+I 40, GSB A -> 3.141592651 in 40" (error = 3E-9)
04 1/X
05 ENTER
06 ENTER 50, GSB A -> 3.141592653 in 50" (error = 1E-9)
07 3
08 Y^X
09 4 62, GSB A -> 3.141592654 in 60" (error = 0)
10 STO 0
11 /
12 -
13*LBL 0
14 DSE I
15 RCL 0
16 CHS
17 STO 0
18 RCL/I
19 +
20 DSE I
21 GTO 0
so this improved version needs to add up just 62 terms to return
a value with all 10 digits correct, within 60 seconds.
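The second version, reduced to its arithmetic essence in Python (pi_c1c2 is my name for it):

```python
import math

def pi_c1c2(n):
    """Second version: n-term Leibniz sum plus both corrections,
    C1 = 1/n and C2 = -1/(4*n^3)."""
    s = sum((-1) ** k * 4.0 / (2 * k + 1) for k in range(n - 1, -1, -1))
    return s + 1.0 / n - 1.0 / (4 * n ** 3)

# Just 62 terms already give pi to all 10 digits a 15C can display:
print(pi_c1c2(62))
```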
Here's a table summarizing the different
degrees of approximation using 0, 1, and 2 correction terms, for
up to 60 terms added up:
N bare series +C1 +C1 +C2 t
------------------------------------------------------
10 3.041839619 3.141839619 3.141589619 10"
20 3.091623807 3.141623807 3.141592557 20"
30 3.108268567 3.141601900 3.141592641 30"
40 3.116596557 3.141596557 3.141592651 40"
50 3.121594653 3.141594653 3.141592653 50"
60 3.124927144 3.141593811 3.141592653 60"
Further empirical confirmation:
As we've been able to indeed get 10 correct digits by using our empirically
discovered corrections, we can be fairly confident that they are no
mere coincidences but hold for greater numbers of terms added up and
thus greater precision. To test this, just out of curiosity, these
are the results for N = 500, 5000, 50000, 500000, and 5 million terms
added up:
N = 500 terms added up
3.13959265558978323858464...
3.14159265358979323846264...
+2 -2 +10 -122
N = 5,000 terms added up
3.14139265359179323836264339547950...
3.14159265358979323846264338327950...
+2 -2 +10 -122
N = 50,000 terms added up
3.14157265358979523846264238327950410419716...
3.14159265358979323846264338327950288419716...
+2 -2 +10 -122
N = 500,000 terms added up
3.14159065358979324046264338326950288419729139937510...
3.14159265358979323846264338327950288419716939937510...
+2 -2 +10 -122
N = 5,000,000 terms added up
3.14159245358979323846464338327950278419716939938730582097494...
3.14159265358979323846264338327950288419716939937510582097494...
+2 -2 +10 -122
Notice in particular the values for N = 5,000,000 terms: the 7th decimal
is already in error by 2 units. But the next 13 digits are all correct !
Then, the following digit is also 2 units wrong. But the next 12
digits are again correct !! All in all, among the first 47 digits,
only 3 of them are a few units wrong !
In other words: granted, the original series converges incredibly slowly, but the errors
when you stop at N terms are extremely predictable and easy to compute,
so you can increase your accuracy 3-fold or 5-fold by using just one
or two easily derived correction terms.
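These multi-precision checks can be reproduced exactly with rational arithmetic using only the Python standard library; here is a sketch for N = 500 terms (the asserted digit string is the one shown above):

```python
from fractions import Fraction
from decimal import Decimal, getcontext

getcontext().prec = 30  # enough digits to see the correction pattern
N = 500
# Exact 500-term partial sum of 4*(1 - 1/3 + 1/5 - ...)
s = 4 * sum(Fraction((-1) ** k, 2 * k + 1) for k in range(N))
d = Decimal(s.numerator) / Decimal(s.denominator)
print(d)  # 3.13959265558978323858464...
```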
Final notes
This empirical discovery, once made, can be substantiated by theory,
which yields a nifty expression: an asymptotic
approximation to Pi based on the sum of the original
series truncated to N terms plus a 'correction' series (the
asymptotic component) in negative powers of N (1/N, 1/N^3, etc.)
whose coefficients are the so-called Euler numbers.
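To illustrate, here is a sketch using the first unsigned Euler numbers 1, 1, 5, 61, 1385, 50521; the alternating-sign form 2*sum (-1)^m |E_2m| / (2N)^(2m+1), valid for an even number of terms N, is my reading of that expansion, and the helper name is mine. With it, a mere 50 terms give pi essentially to double precision:

```python
import math

# First unsigned Euler numbers |E_0|, |E_2|, ..., |E_10|
EULER = [1, 1, 5, 61, 1385, 50521]

def pi_asymptotic(n):
    """n-term Leibniz sum (n even) plus the asymptotic correction series
    2 * sum_m (-1)^m * |E_2m| / (2n)^(2m+1)."""
    s = sum((-1) ** k * 4.0 / (2 * k + 1) for k in range(n - 1, -1, -1))
    corr = 2 * sum((-1) ** m * e / (2 * n) ** (2 * m + 1)
                   for m, e in enumerate(EULER))
    return s + corr

# With just 50 terms the residual drops to rounding level:
print(abs(pi_asymptotic(50) - math.pi))
```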
Similar phenomena occur for constants other than Pi, for example
similarly truncating the series:
Ln(2) = 1 - 1/2 + 1/3 - 1/4 + 1/5 - ...
results in:
Sum = 0.69314708055995530941723212125817656807551613436025525...
Ln(2) = 0.69314718055994530941723212145817656807550013436025525...
1 -1 2 -16
and another asymptotic series can be theoretically substantiated, the required
coefficients now being the so-called "tangent numbers" instead: 1, -1, 2, -16, ...
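The analogous correction for Ln(2) can be checked numerically too. The form below, with tangent-number coefficients 1, 1, 2, 16 attached to the powers 1/(2N), 1/(2N)^2, 1/(2N)^4, 1/(2N)^6 with alternating signs (for even N), is my reading of the digit pattern above, and the helper name is mine:

```python
import math

def ln2_corrected(n):
    """n-term alternating harmonic sum (n even) plus tangent-number
    corrections: S + 1/(2n) - 1/(2n)^2 + 2/(2n)^4 - 16/(2n)^6."""
    s = sum((-1) ** k / (k + 1) for k in range(n - 1, -1, -1))
    t = 2 * n
    return s + 1 / t - 1 / t ** 2 + 2 / t ** 4 - 16 / t ** 6

# The residual left over is roughly the next term, 272/(2n)^8:
print(abs(ln2_corrected(50) - math.log(2)))
```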
Thanks for your interest and many excellent posted contributions, hope you
enjoyed yourselves while working them out.
Best regards from V.