# HP Forums

Full Version: The Four Meanings of "Accurate to 3 Places"
I recently saw in the article's forum some very interesting information from Joe Horn about decimal-to-fraction conversion and the meaning of the term "accurate to n places". I had recently implemented fraction display in my RPN program for the HP39gs (before I saw that post), using the PPC ROM algorithm with a slight modification.

I was comparing the results of my own implementation with the original PPC ROM DF routine (running on a 41 emulator) and the HP48G (→Q function).
If I store '3' in R07 (41/PPC ROM) and use FIX 3 display on the HP48G, I get the same result for SQRT(84): 999/109 (with an actual error slightly less than .0000138), not the expected 614/67 fraction (row 14) according to the article. I wonder why...

Additionally, I would like to experiment more with the continued-fraction method. The implementation doesn't seem difficult, but (being lazy) does anyone happen to have the algorithm already in (Sys)RPL, RPN, or some other simple format?

Thank you / Best regards.

Andres

(12-29-2017 10:06 PM)acapde Wrote: [ -> ]I was comparing the results of my own implementation with the original PPC ROM DF routine (running on a 41 emulator) and the HP48G (→Q function).
If I store '3' in R07 (41/PPC ROM) and use FIX 3 display on the HP48G, I get the same result for SQRT(84): 999/109 (with an actual error slightly less than .0000138), not the expected 614/67 fraction (row 14) according to the article. I wonder why...

The →Q function in the HP 48 uses the same simplistic continued-fraction algorithm as the DF routine in the PPC ROM. Hence the same outputs.
Thank you Joe. I'll analyze the routine further to see if I can get better control of the accuracy.

Andres.-
(12-29-2017 10:06 PM)acapde Wrote: [ -> ]Additionally, I would like to experiment more with the continued-fraction method. The implementation doesn't seem difficult, but (being lazy) does anyone happen to have the algorithm already in (Sys)RPL, RPN, or some other simple format?

In 1994, Branimir Buljan asked me via email how the "Continued Fraction by Fast Recursion Algorithm" works. Everything below is an excerpt from my reply to him. Please note that this posting is ONLY intended to elucidate the continued-fraction algorithm itself. This is NOT the best way to code it. The code below introduces round-off errors due to executing 1/FP(real), and the algorithm itself misses ALL intermediate convergents. Avoiding these problems takes more code; cf. my PDQ Algorithm posting for details.
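For what it's worth, the 999/109-versus-614/67 puzzle from earlier in the thread can be reproduced in a few lines of Python (my own sketch, not from the original posts): the plain continued-fraction recursion jumps from 55/6 straight to the convergent 999/109, skipping the intermediate convergent 614/67 = (11*55+9)/(11*6+1), even though 614/67 already approximates sqrt(84) with an error below 10^-3.

```python
import math

def convergents(x, n):
    """First n convergents of x via the plain continued-fraction recursion."""
    prev, cur = (0, 1), (1, 0)
    out = []
    for _ in range(n):
        a = int(x)                              # integer part
        prev, cur = cur, (a * cur[0] + prev[0], # fast recursion:
                          a * cur[1] + prev[1]) # new = a*cur + prev
        out.append(cur)
        f = x - a
        if f == 0:
            break                               # exact fraction found
        x = 1 / f                               # 1/FP(x)
    return out

x = math.sqrt(84)
print(convergents(x, 3))    # [(9, 1), (55, 6), (999, 109)]
# 614/67 = (11*55 + 9)/(11*6 + 1) is an intermediate convergent that the
# plain recursion skips, although its error is already below 1e-3:
print(abs(614 / 67 - x))    # about 9.7e-4
```

This is exactly the "misses ALL intermediate convergents" behavior described above: 614/67 lies between the convergents 55/6 and 999/109 and is never generated.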

-----

Key the following two little programs into your HP48:

SETUP: << (0,1) (1,0) >>

ITER8: << ROT IP LASTARG FP DUP { INV } IFT 4 ROLLD OVER * ROT + >>

To use, type any decimal number, press SETUP once, then press ITER8 (pronounced "iterate") repeatedly, carefully watching the stack. You'll see the "fast recursion formula" in action.

For example, press PI (with flag -2 set, of course) and then SETUP:

3: 3.14159265359
2: (0,1)
1: (1,0)

Level 1 contains the current approximation in (numerator, denominator) format. As you can see, it's 1/0 at setup (representing infinity).

Press ITER8:

3: 7.06251330592
2: (1,0)
1: (3,1)

This means that 3/1 is the first approximation of pi. The first approximation is always less than the input decimal. The second approximation is always greater than the input decimal:

Press ITER8:

3: 15.9965944095
2: (3,1)
1: (22,7)

So 22/7 is the next approximation of pi, and is greater than pi, as expected, but closer than the previous approximation. Each time you press ITER8, the previous approximation goes up to level 2. This is necessary for the algorithm, but it's also nice to visually compare levels 1 & 2.

The algorithm is easy: take the integer part (IP) of level 3, multiply it by level 1, and add level 2. That's the new approximation. The old approximation goes to level 2, and level 3 gets replaced with 1/FP(x) where x is the former contents of level 3.

So the next iteration, by this definition, would be equal to:

3: 1/0.9965944095
2: (22,7)
1: 15*(22,7)+(3,1)

Press ITER8 and see it:

3: 1.00341722818
2: (22,7)
1: (333,106)

So 333/106 is the next approximation of pi, and is less than pi, as expected, and even closer than the previous approximation. Each press of ITER8 will produce the next approximation, alternating less/greater than pi, and always getting closer to it. Eventually (after 9 iterations in this case) the fraction will be equal to the input as far as the HP 48 is concerned (12 digits max).

The HP 48 ->Q function uses a similar technique, iterating until the difference between the input and the approximation equals zero (rounded to the current display format).

If level 3 ever becomes equal to 0, then stop iterating; the exact fraction has been found. (Running ITER8 further will just perform a SWAP).
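For readers without an HP 48 handy, the SETUP/ITER8 pair can be translated into Python roughly as follows (my sketch, not Joe's code; the "stack" is modeled as a tuple of levels 3, 2, 1):

```python
import math

def SETUP(x):
    # (0,1) (1,0) pushed above the input, as in the RPL SETUP program
    return (x, (0, 1), (1, 0))

def ITER8(stack):
    x, prev, cur = stack
    a = int(x)                       # IP of level 3
    new = (a * cur[0] + prev[0],     # IP * level 1 + level 2
           a * cur[1] + prev[1])
    f = x - a                        # FP of level 3
    x = 1 / f if f else 0.0          # 1/FP(x); 0 signals "exact, stop"
    return (x, cur, new)

s = SETUP(math.pi)
for _ in range(4):
    s = ITER8(s)
    print(s)                         # watch the convergents appear in level 1
```

Running this prints the same succession of states as the stack listings above: (3,1), (22,7), (333,106), then (355,113) in level 1.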
This recursion is essentially the Gauss map, x → 1/x − ⌊1/x⌋ (where ⌊⌋ denotes floor). It drifts rather quickly because the error of a continued-fraction approximation p/q is of order 1/q^2 (actually much better), so the accuracy of the continued-fraction expansion quickly exceeds the floating-point precision of the calculator. The algorithm's greatest strength (good accuracy) is exactly what makes it tricky.

Try Sqrt(2), which has the continued fraction (1; 2, 2, 2, 2, ...), where the semicolon separates the first (integer) part from the rest. When done with symbolic arithmetic, it works fine. When done in floating point, the computation deteriorates.
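The deterioration is easy to see in any binary floating point, not just on a calculator. Here is a quick Python illustration (mine, not from the thread) of the same floor-and-invert iteration:

```python
import math

# Iterate a = floor(x); x = 1/(x - a) on sqrt(2) in double precision.
# Exactly, every coefficient after the first would be 2; in floating
# point the terms drift away from 2 after roughly 15-20 steps, because
# each division amplifies the accumulated rounding error.
x = math.sqrt(2)
terms = []
for _ in range(30):
    a = math.floor(x)
    terms.append(a)
    f = x - a
    if f == 0:
        break
    x = 1 / f
print(terms)   # starts 1, 2, 2, 2, ... then degrades
```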

HP50g program

<< INV DUP FLOOR DUP UNROT - EVAL >>

Sqrt(2) gives 0, 1, 2, 2, 2, ...
Sqrt(2.) gives the same, but deteriorates after 15 steps or so.

Exact mode works well for algebraic stuff. Floating point is always tricky.
Thank you Joe and TTW, this is very good info.

Happy New Year!
(12-30-2017 08:10 PM)ttw Wrote: [ -> ]This recursion is essentially the Gauss map, x → 1/x − ⌊1/x⌋ (where ⌊⌋ denotes floor). It drifts rather quickly because the error of a continued-fraction approximation p/q is of order 1/q^2 (actually much better), so the accuracy of the continued-fraction expansion quickly exceeds the floating-point precision of the calculator. The algorithm's greatest strength (good accuracy) is exactly what makes it tricky.

Hi, ttw:

On a machine that can compute an exact modulus, the error from the x = 1/fraction(x) step can be avoided entirely.

Code:
```
def genCF(n, d=1):
    "generate continued fraction of n/d (n can be float)"
    while d:
        yield int(n/d)
        n, d = d, n%d
```

>>> list(genCF(2**0.5))
[1, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2,
2, 2, 2, 2, 2,
2, 2, 2, 2, 2,
1, 1, 1, 2, 7,
1, 2, 33, 2, 7,
5, 2, 1, 1, 16, 2]

The above looks bad (the exact coefficients of sqrt(2) are [1, 2, 2, ...]), but it isn't really:
genCF actually generated the correct coefficients.

Python cannot represent 2**0.5 exactly; it stores it as 12738103345051546 / 2**53 ~ 1.4142135623730951.

>>> list(genCF(12738103345051546, 2**53))   ### all done in integers
... # exact coefficients as above

You can also confirm this IEEE fraction in Mathematica; it will generate the same coefficients.

Why the accuracy? Because with this algorithm, the modulus operation is exact.
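Both claims can be checked directly in Python (a self-contained sketch of mine, restating genCF for completeness): float.as_integer_ratio recovers the exact dyadic fraction behind 2**0.5, and because every intermediate value is exactly representable, the float run and the all-integer run of genCF step through identical quotients and remainders.

```python
from fractions import Fraction

def genCF(n, d=1):
    """Generate the continued-fraction coefficients of n/d (n may be a float)."""
    while d:
        yield int(n / d)
        n, d = d, n % d

# The IEEE-754 double nearest sqrt(2) is an exact dyadic rational
# (as_integer_ratio returns it in lowest terms):
num, den = (2 ** 0.5).as_integer_ratio()
assert Fraction(num, den) == Fraction(12738103345051546, 2 ** 53)

# The float run and the all-integer run produce the same coefficient
# stream, since n % d is exact in both cases:
assert list(genCF(2 ** 0.5)) == list(genCF(12738103345051546, 2 ** 53))
print(list(genCF(2 ** 0.5))[:6])   # [1, 2, 2, 2, 2, 2]
```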
Reference URL's
• HP Forums: http://www.hpmuseum.org/forum/index.php