# HP Forums

Full Version: A quick precision test
Pages: 1 2 3 4
(06-05-2014 02:00 AM)Paul Dale Wrote: [ -> ]
(06-05-2014 01:18 AM)Claudio L. Wrote: [ -> ]I'd love to test some corner cases to see how the algorithms behave (like that case that needs 1600 digits). Do you have any links to good material on this subject? I did some tests of my own, but I don't know what the corner cases are. I'd like some literature listing cases I can use to stress the algorithms and check the results.

I doubt that the 1600 bits (not digits) are actually required in reality. I suspect, rather, that a proof was devised showing this amount is sufficient for all possible input values, rather than the absolute worst case being found. There has been a fair bit of work on actually locating the worst cases in the past few years.

For binary: http://hal.inria.fr/inria-00072594/PS/RR-4044.ps
For decimal: http://perso.ens-lyon.fr/damien.stehle/d...malexp.pdf

There are plenty of others a search away.

The example at the bottom of the second page of the second paper is interesting: forty-seven correct decimal digits are required to round correctly towards zero -- 34 plus 9 guards won't get this right, and this is only a 16-digit input; a 34-digit one would be worse.

As I've said many times, "numerical analysis is hard".

- Pauli

I will never understand this obsession with correct rounding. When rounding, you are introducing an error anyway, so why obsess so much about it?
Case in point:
If I use 9 extra digits for internal calcs, then my result will only be affected by double rounding if my number is of the form:
nnnnn|499999999|5
where nnnnn are the digits we want to keep, 499999999 are my extra digits, and the final 5 belongs to the real number: it is not stored anywhere and we don't really know it; that digit could be 5 or more.
So when rounding the higher-precision number, we'd turn the 499999999... into 500000000..., causing the last digit to go off by one (insert your horror face here!).
But the magnitude of the error is:
with "correct" rounding, e = -0.4999999995 ULP
and with our "bad" result, e = +0.5000000005 ULP

The last digit is not what we expected, so what? The number is still accurate to within ~0.5 ULP, and it will only happen 1 time in 10^9 (when all 9 extra digits are exactly 499999999).
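For the curious, that double-rounding scenario is easy to reproduce with Python's standard decimal module. This is an illustrative sketch only (the digit pattern is the point, not any particular machine's format): the true value follows the nnnnn|499999999|5 pattern, and rounding through 9 guard digits flips the last kept digit.

```python
from decimal import Decimal, Context, ROUND_HALF_UP

# nnnnn|499999999|5: 5 digits to keep, 9 guard digits, and one more
# true digit that a real machine never stores.
true_val = Decimal('1.23454999999995')

one_step = Context(prec=5, rounding=ROUND_HALF_UP).plus(true_val)   # 1.2345 (correctly rounded)
guarded  = Context(prec=14, rounding=ROUND_HALF_UP).plus(true_val)  # round to 5 + 9 guard digits first
two_step = Context(prec=5, rounding=ROUND_HALF_UP).plus(guarded)    # 1.2346 (double rounding)

print(one_step, two_step)
```

The one-step result carries an error of about -0.5 ULP, the double-rounded one about +0.5 ULP: the last digit differs, but the magnitude of the error is essentially the same.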

Honestly, I don't lose much sleep over it; just because a mathematician doesn't get his way every now and then doesn't mean we should all go crazy about it.

But I'll test the algorithms and see what happens.

Claudio
(06-05-2014 11:03 AM)Claudio L. Wrote: [ -> ]The last digit is not what we expected, so what? The number is still accurate to within ~0.5 ULP

This is exactly the point: there is no way to be absolutely sure the error does not exceed half a ULP. Not even HP can or will guarantee this level of accuracy. A nice overview of this can be found in the HP-15C Advanced Functions Handbook.

That's why my personal accuracy target for "machine precision" is 0,6 ULP, or 1 unit in the (n+1)st significant digit. This is what can be expected on a 15C or 41C for the trig functions, and if it's good enough there, it's good enough for me as well.

BTW, there are much larger errors in other functions on regular HP calculators, which can be off by several ULP; the error may even spoil the last two digits. So ±0,6 or sometimes even ±1 ULP is fine by me.

Dieter
(06-05-2014 11:03 AM)Claudio L. Wrote: [ -> ]If I use 9 extra digits for internal calcs, then my result will only be affected by double rounding if my number is of the form:
nnnnn|499999999|5

There are other rounding cases. What about nnnnm|500000000|1 when m is odd and you are rounding to even? Or nnnnn|000000000|1 when you are rounding towards +infinity? Assuming you support rounding modes, of course.
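Both kinds of trap can be reproduced with Python's standard decimal module. A sketch, with illustrative digit patterns chosen so that the stored guard digits round the "wrong" way (in the half-even case, the last kept digit is even, which is where rounding-to-even actually sticks):

```python
from decimal import Decimal, Context, ROUND_HALF_EVEN, ROUND_CEILING

def round_sig(x, prec, mode):
    # Round x to prec significant digits under the given rounding mode.
    return Context(prec=prec, rounding=mode).plus(x)

# Half-even trap: the guard digits hold an exact half, the true value is just above it.
t1 = Decimal('1.23445000000001')                # nnnnn|500000000|1 pattern
exact1  = round_sig(t1, 5, ROUND_HALF_EVEN)     # 1.2345 -- the true remainder exceeds 1/2
double1 = round_sig(round_sig(t1, 14, ROUND_HALF_EVEN), 5, ROUND_HALF_EVEN)  # 1.2344

# Round-towards-+infinity trap: the guard digits are all zero, true value just above.
t2 = Decimal('1.23450000000001')                # nnnnn|000000000|1 pattern
exact2  = round_sig(t2, 5, ROUND_CEILING)       # 1.2346
double2 = round_sig(round_sig(t2, 14, ROUND_HALF_EVEN), 5, ROUND_CEILING)    # 1.2345

print(exact1, double1, exact2, double2)
```

In both cases the intermediate rounding to 14 digits erases the tiny remainder that should have decided the final digit.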

Agreed, the chance of being incorrect - if your internal result is correct - is low but it is still there. Correct rounding is a worthwhile, if somewhat quixotic, goal. One ULP is far more achievable.

Are you sure all of your guard digits are correct, or even all the digits in your result? In general they (the guard digits) won't all be. There are many pitfalls for the unwary here; even changing the order of operations can cause problems. You need to be amazingly careful to get all digits correct, if indeed that is even possible (which in general it isn't).

As an example (which isn't related to rounding), what does COS(1.57079632679489661923132169163975144) equal? Is it 2.09858469968755291048739331660132323e-36 or something different? I cheated here and downloaded your (rather nice, BTW) newRPL demo, and it gets the last thirteen digits incorrect. TAN of the same value loses fourteen digits, and that answer is a huge number rather than something very tiny.

My original 34S test case was COS(1.570796326794896619231321691639751), where newRPL gets the last ten digits incorrect -- the 34S gets more digits incorrect here. With more memory available, I'd fix this.

- Pauli
FYI - a quick test (27x x^2) done in the shop with cheapo Canon X Mark I Pro:
Code:
```
674530.4707
When subtracting 674530
0.4707399243
```
A little bit better than wp34 with single precision.
(06-05-2014 08:49 PM)pito Wrote: [ -> ]FYI - a quick test done in the shop with cheapo Canon X Mark I Pro:
Code:
```
674530.4707
When subtracting 674530
0.4707399243
```
A little bit better than wp34 with single precision.

No idea what you're referring to. BTW, after some 40 posts, would you please learn to use <Quote> to indicate the post you're responding to? Thanks in advance.

d:-/
(06-05-2014 12:06 PM)Paul Dale Wrote: [ -> ]As an example (which isn't related to rounding), what does COS(1.57079632679489661923132169163975144) equal? Is it 2.09858469968755291048739331660132323e-36 or something different? I cheated here and downloaded your (rather nice BTW) newRPL demo and it gets the last thirteen digits incorrect. TAN of the same value loses fourteen digits and this answer is a huge number rather than something very tiny.

My original 34S test case was COS(1.570796326794896619231321691639751) where newRPL gets the last ten digits incorrect -- the 34S gets more digits incorrect here. With more memory available, I'd fix this.

- Pauli

Two things: first, the demo came out with a bug that returns the TAN of PI/2-arg instead of TAN(arg) (among several other bugs I've already fixed); I'll publish a new demo this weekend, now with DOLIST [end of shameless advertising]. Second, I ran:
COS(1.57079632679489661923132169163975144)

and I got
2.098584699687552910487472296153908

What makes you think those digits are incorrect? I did the same with preccalc (from SourceForge, a very nice calc for Windows), and it gives me the exact same value.
I've also increased the precision to many more digits (keeping the argument at 36 digits, the way you have it) and still get the same result.

newRPL does lose digits, of course, it's part of the implementation, but in this case your angle differs from PI/2 by about 10^-35, so I'm losing 35 digits -- though you can only see them at maximum precision (2016 digits).
For instance, I ran it at 2016-35 = 1981 digits of precision and it still gave me results identical to preccalc, down to the very last digit.
If I run it at 2016 digits, I can clearly see the last 35 digits are garbage.

I've also checked with Wolfram Alpha, and... exact same number! (I checked up to 36 digits only, the others are left to the reader to compare...)

hint: Alpha won't give you more than the number of digits you input, so it will show only 5 digits unless you add lots of trailing zeros to the input argument.

I also tried your COS() case with 34 digits and once more, it agrees with preccalc and Alpha on all 34 digits.

I think you are taking for granted as "correct" some results that are perhaps not so good?

Claudio
(06-05-2014 09:07 PM)walter b Wrote: [ -> ]
(06-05-2014 08:49 PM)pito Wrote: [ -> ]FYI - a quick test done in the shop with cheapo Canon X Mark I Pro:
Code:
```
674530.4707
When subtracting 674530
0.4707399243
```
A little bit better than wp34 with single precision.

No idea what you're referring to. BTW, after some 40 posts, would you please learn to use <Quote> to indicate the post you're responding? Thanks in advance.

d

Dear Walter, I've just added some info to the topic I started.
BTW, you contributed to this thread with 4 absolutely unrelated postings
(06-05-2014 09:13 PM)pito Wrote: [ -> ]Dear Walter, I've just added an info to the topic I've started.
BTW, you contributed to this thread with 4 absolutely unrelated postings

Franz, is that you? Are you nymshifting now?

Mods, feel free to delete this. But it just has to be said.
(06-05-2014 12:06 PM)Paul Dale Wrote: [ -> ]There are other rounding cases. What about nnnnm|500000000|1 when m is odd and you are rounding to even? Or nnnnn|000000000|1 when you are rounding towards +infinity? Assuming you support rounding modes of course.

Yes, there's one case for each rounding mode, so it's still a 1 in 10^9 chance no matter which mode you select.

(06-05-2014 12:06 PM)Paul Dale Wrote: [ -> ]Agreed, the chance of being incorrect - if your internal result is correct - is low but it is still there. Correct rounding is a worthwhile, if somewhat quixotic, goal. One ULP is far more achievable.

Are you sure all of your guard digits are correct, or even all the digits in your result? In general they (the guard digits) won't all be. There are many pitfalls for the unwary here; even changing the order of operations can cause problems. You need to be amazingly careful to get all digits correct, if indeed that is even possible (which in general it isn't).

No, I cannot guarantee that my guard digits are all correct, and that's one more reason not to obsess about rounding.
As long as the most significant of my guard digits is good, there's only a 0.5 ULP worst-case error (if I round the last digit up or down incorrectly).

On the other hand, when I implemented the transcendentals, I approached it this way:
* The user should get 2000 good digits; that was my target. Since the library works in groups of 9 digits, you actually get 2007 (a multiple of nine).
* To have guard digits available, I calculated all the CORDIC constants with 2016 digits (9 digits more).
* In the CORDIC loop, I set the precision 9 digits higher again (2025) for all intermediate operations. It may seem dumb, because the constants only have 2016 digits, but it actually helps me get a few more good digits.

With this, I can get transcendentals correct to the last digit that will be displayed to the user in all the test cases I tried (except the ones where you lose precision, like your COS(near PI/2) case, but that comes from the argument, not the algorithm).
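The value of those extra intermediate digits shows up even in a toy case with Python's decimal module (a sketch unrelated to newRPL's internals): computing (1/3)*3 entirely at the target precision misses exact 1, while carrying 9 guard digits through the intermediate steps and rounding once at the end recovers it.

```python
from decimal import Decimal, localcontext

def third_times_three(prec):
    # Perform every intermediate operation at the given working precision.
    with localcontext() as ctx:
        ctx.prec = prec
        return Decimal(1) / 3 * 3

with localcontext() as ctx:
    ctx.prec = 34                             # the precision the "user" sees
    no_guard   = +third_times_three(34)       # 0.9999999999999999999999999999999999
    with_guard = +third_times_three(34 + 9)   # 9 guard digits: rounds back to exactly 1

print(no_guard, with_guard)
```

The unary plus re-rounds to the active context, which is what a final "round to user precision" step does in a calculator.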

Claudio
(06-05-2014 09:08 PM)Claudio L. Wrote: [ -> ]I've also checked with Wolfram Alpha, and... exact same number! (I checked up to 36 digits only, the others are left to the reader to compare...)

When I do this on Wolfram Alpha, which is usually my gold standard for accuracy, I get the numbers I quoted. This is the query I used.

Quote:hint: Alpha won't give you more than the number of digits you input, so it will show only 5 digits unless you add lots of trailing zeros to the input argument.

It will if you ask for them.

- Pauli
(06-05-2014 09:25 PM)Claudio L. Wrote: [ -> ]With this, I can get transcendentals correct to the last digit that will be displayed to the user in all the test cases I tried (except the ones where you lose precision, like your COS(near PI/2) case, but that comes from the argument, not the algorithm).

The COS example I gave can be calculated correctly. The argument isn't the issue -- it is represented exactly in the floating point number system. It really is the algorithm causing the loss. Witness that several available libraries handle the equivalent bad case properly for binary arithmetic; GNU's libc and MPFR are just two.

- Pauli
(06-05-2014 10:09 PM)Paul Dale Wrote: [ -> ]
(06-05-2014 09:08 PM)Claudio L. Wrote: [ -> ]I've also checked with Wolfram Alpha, and... exact same number! (I checked up to 36 digits only, the others are left to the reader to compare...)

When I do this on Wolfram Alpha, which is usually my gold standard for accuracy I get the numbers I quoted. This is the query I used.

Quote:hint: Alpha won't give you more than the number of digits you input, so it will show only 5 digits unless you add lots of trailing zeros to the input argument.

It will if you ask for them

- Pauli

Huh? The result given by Alpha is different if you ask "with 72 decimal digits" than if you add 36 trailing zeroes to the number.
I just did:
COS(1.57079632679489661923132169163975144) with 72 decimal digits
(I didn't know you could write that in "human words"; that's cool! If only it gave good answers...)
And I also did
COS(1.57079632679489661923132169163975144000000000000000000000000000000000000) (36 zeroes, for a total of 72 decimal digits, so it should be the same... right?)

The result in each case is:
2.098584699687552910487393316601323231 × 10^-36
2.09858469968755291048747229615390820 × 10^-36

What's going on? Is it making the digits up, or what? The second answer agrees with newRPL and preccalc, which is why I "conveniently" chose to believe that one. I say "conveniently" because it agrees with my code, not because I know for a fact that it is correct.

I used to trust the results from Alpha blindly (and from the past few posts, I can tell you did too). Now I just don't know what to believe, and neither should you from now on.

Claudio
(06-05-2014 10:20 PM)Paul Dale Wrote: [ -> ]
(06-05-2014 09:25 PM)Claudio L. Wrote: [ -> ]With this, I can get transcendentals correct to the last digit that will be displayed to the user on all the test cases I tried (except the ones where you lose precision, like your COS(near PI) case, but that comes from the argument, not the algorithm).

The COS example I gave can be calculated correctly. The argument isn't the issue -- it is represented exactly in the floating point number system. It really is the algorithm causing the loss. Witness that there are several libraries available that handle the equivalent bad case properly for binary arithmetic GNU's libc, MPFR are just two.

- Pauli

In my case it is not the algorithm, it's the argument. CORDIC does not converge for angles > 1 radian, so I compute that cos() as sin(PI/2-alpha). Here's where I lose the digits: my PI constant only has 2016 digits, so if alpha = PI/2-1e-35, the subtraction PI/2-(PI/2-1e-35) gives me 1e-35 with only 2016-35 accurate digits. That's where I'm losing my 35 digits. I could get them back only by storing PI/2 with twice the system precision, so PI/2-1ULP would still give me (2016*2-2016) = 2016 good digits.
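This cancellation can be reproduced with Python's decimal module. The pi/2 constants below are just stand-ins typed to a given number of digits (not newRPL's actual storage); the point is that the subtraction eats exactly as many leading digits as the argument shares with the constant.

```python
from decimal import Decimal, getcontext

getcontext().prec = 80

# pi/2 typed out to 70 decimals, and a shorter copy truncated to 39 decimals.
HALF_PI_LONG  = Decimal('1.5707963267948966192313216916397514420985846996875529104874722961539082')
HALF_PI_SHORT = Decimal('1.570796326794896619231321691639751442098')

x = Decimal('1.57079632679489661923132169163975144')   # the 36-digit COS argument

# cos(x) = sin(pi/2 - x) ~ pi/2 - x for a difference this small, so the
# subtraction decides everything -- and it cancels the first 35 digits:
print(HALF_PI_LONG - x)    # 2.0985846996875529104874722961539082E-36 (35 digits survive)
print(HALF_PI_SHORT - x)   # 2.098E-36 (only 4 digits survive)
```

With a 2016-digit constant the same subtraction leaves 2016-35 trustworthy digits, which is exactly the loss described above.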

Claudio
(06-06-2014 12:57 AM)Claudio L. Wrote: [ -> ]I used to trust the results from Alpha blindly (and from the past few posts, I can tell you too). Now I just don't know what to believe, and neither should you from now on.
Claudio

Paul,
I've got to give you credit: when you said it's hard to write good numeric code, you were right. If the mighty Wolfram can't get it right, what's left for the rest of us?

Claudio
(06-06-2014 12:57 AM)Claudio L. Wrote: [ -> ]I used to trust the results from Alpha blindly (and from the past few posts, I can tell you too). Now I just don't know what to believe, and neither should you from now on.

I'm very surprised by this too, and I'm definitely going to be less trusting in the future. Can anyone try this out in Mathematica?

I also use http://keisan.casio.com/ for high precision computations but it doesn't cope with these cases well.

The UNIX program bc is also capable of computing simple functions, although it can be slow. At 1000 decimals it gives:

Code:
```
.0000000000000000000000000000000000020985846996875529104874722961539\
08203143104499314017412671058533991074041716259787756261727399478831\
81185207919839671783270457963357452998367829339194961906347965703281\
47240869600249270055638757772742607382068677735487482252910132430921\
25011163621763952597207721771071265357158278110764553758510887962258\
51742560721740180609129907245224513459993921421070558798154580236042\
32407372510759903342617788153853840878295245182108915941345297357152\
95467887428299279732274687499923779112476604887692015658454432511395\
20186523310121703812758838897823845975508622927202662531025498360451\
74355215955818970707014736785295006140539706462425805627474361779111\
46786377286992186246904485553797769184501523663646490051717437459809\
22322920642489618488130423650012447886418342843748598070677109909502\
89199115883068012988455235118715782492326453234782257817862953771152\
40579453052914192224813788402907439508961163082003815873154380098742\
1749306835713073531394568393273816656830251173885
```

The digits don't change when working to 10000 digits. The code calculates cos(x) = sin(x + pi/2) and uses 1.2 times the digits (1200 or 12000 respectively). As for sin(x), it uses a Taylor series with a couple of additional digits.

This matches the values you're getting and your result from Wolfram.

- Pauli
(06-06-2014 01:03 AM)Claudio L. Wrote: [ -> ]In my case it is not the algorithm, it's the argument. CORDIC does not converge for angles > 1 radian, so I compute that cos() as sin(PI/2-alpha).

Sounds like the algorithm, as in you need a different way to compute cos.
cos(x) = sin(x + pi/2) is generally considered an okay transformation, from memory. Taylor series also converge well if you range-reduce the argument suitably -- aim for pi/4 or less so that the first term in the series is the most significant.
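For what it's worth, the sin(pi/2 - x) plus Taylor-series route is only a few lines in Python's decimal module. This is a hedged sketch, not newRPL's implementation: it assumes 0 <= x <= pi/2 and stores pi/2 to only ~70 digits, so it inherits exactly the near-pi/2 weakness discussed in this thread.

```python
from decimal import Decimal, localcontext

# pi/2 to 70 decimals; a real implementation would store far more and
# range-reduce arbitrary arguments modulo 2*pi.
HALF_PI = Decimal('1.5707963267948966192313216916397514420985846996875529104874722961539082')

def cos_taylor(x, prec):
    """cos(x) via sin(pi/2 - x) and a Taylor series, for 0 <= x <= pi/2."""
    with localcontext() as ctx:
        ctx.prec = prec + 5                  # a few working guard digits
        y = HALF_PI - x                      # the weak spot when x is near pi/2
        term, total, n = y, y, 1
        while True:
            term = -term * y * y / ((2 * n) * (2 * n + 1))
            if total + term == total:        # next term no longer affects the sum
                break
            total += term
            n += 1
        result = total
    with localcontext() as ctx:
        ctx.prec = prec
        return +result                       # round back to the caller's precision

print(cos_taylor(Decimal(1), 30))            # 0.540302305868139717400936607443
```

At the 36-digit argument from this thread it returns 2.098584699687552910487472296153908e-36, with the trailing digits limited by the length of the stored constant rather than by the series.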

- Pauli
(06-06-2014 12:57 AM)Claudio L. Wrote: [ -> ]..
I used to trust the results from Alpha blindly (and from the past few posts, I can tell you too). Now I just don't know what to believe, and neither should you from now on.

Claudio

This is with N trailing zeros after the last "4":
Code:
```
Block[{$MinPrecision = 120}, N[Cos[1.57079632679489661923132169163975144], 130]]

none
0.*10^-35

0
2.09858469968759412260293728493047158318234204531375910645959950911100\
455087648653787132935200077099580084905028343200684*10^-36

00
2.09858469968759412260293728493047158318234204531375910645959950911100\
455087648653787132935200077099580084905028343200684*10^-36

000
2.09858469968759412260293728493047158318234204531375910645959950911100\
455087648653787132935200077099580084905028343200684*10^-36

0000
2.09858469968755291048739331660132323148430199128683081113869140777373\
772224262464875860587211945559288771556897645713391*10^-36

00000
2.09858469968755291048739331660132323148430199128683081113869140777373\
772224262464875860587211945559288771556897645713391*10^-36

..
000000000000000
2.09858469968755291048747229615389016692898991768376891330235232863694\
116237367172368967960981143453674385805543780051439*10^-36

Update
000000000000000000000000000000000000000000
2.09858469968755291048747229615390820314310449931401630854573253771804\
786486140357601175963329108761735269031324560904414*10^-36
```
(06-06-2014 11:16 AM)pito Wrote: [ -> ]
(06-06-2014 12:57 AM)Claudio L. Wrote: [ -> ]..
I used to trust the results from Alpha blindly (and from the past few posts, I can tell you too). Now I just don't know what to believe, and neither should you from now on.

Claudio

This is with N trailing zeros after the last "4":
Code:
```
Block[{$MinPrecision = 120}, N[Cos[1.57079632679489661923132169163975144], 130]]

none
0.*10^-35

0
2.09858469968759412260293728493047158318234204531375910645959950911100\
455087648653787132935200077099580084905028343200684*10^-36

00
2.09858469968759412260293728493047158318234204531375910645959950911100\
455087648653787132935200077099580084905028343200684*10^-36

000
2.09858469968759412260293728493047158318234204531375910645959950911100\
455087648653787132935200077099580084905028343200684*10^-36

0000
2.09858469968755291048739331660132323148430199128683081113869140777373\
772224262464875860587211945559288771556897645713391*10^-36

00000
2.09858469968755291048739331660132323148430199128683081113869140777373\
772224262464875860587211945559288771556897645713391*10^-36

..
000000000000000
2.09858469968755291048747229615389016692898991768376891330235232863694\
116237367172368967960981143453674385805543780051439*10^-36
```

I put the "good" digits in BOLD.
The last one seems to give 33 good digits (well, the last one is off by 1 due to rounding, but let's not start on that subject again...)

I think it was Paul's error: he asked Alpha "with 120 decimal digits"; he should've written "with 120 correctly calculated and correctly rounded decimal digits". See? It was human error, not Alpha's.

Claudio
Quote:
(06-06-2014 12:04 PM)Claudio L. Wrote: [ -> ]The last one seems to give good 33 digits (well, the last one off by 1 per rounding, but let's not start on that subject again...)
I've added a result (see above) - maybe you'll get the 34th digit.

2.098584699687552910487472296153890166928989917683768..
2.098584699687552910487472296153908203143104499314016..
(06-06-2014 12:04 PM)Claudio L. Wrote: [ -> ]I think it was Paul's error: he asked Alpha "with 120 decimal digits", he should've written "with 120 decimal correctly calculated and correctly rounded digits". See? it was human error, not Alpha.

I understand my mistake now.

Alpha padded the result with semi-random digits just like I asked for.

- Pauli