Not a new discovery, but nicely written up and probably new to many here.

http://randomascii.wordpress.com/2014/10...intillion/
"The worst-case error for the fsin instruction for small inputs is actually about 1.37 quintillion units in the last place, leaving fewer than four bits correct. For huge inputs it can be much worse..."
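The quoted article's point is that SIN of inputs near pi is really a test of argument reduction: for x close to pi, sin(x) ≈ pi − x, so the result depends entirely on how many digits of pi the implementation carries internally (fsin uses only a 66-bit pi). A minimal Python sketch of this, assuming a reasonably accurate system libm:

```python
import math
from decimal import Decimal, getcontext

# Near pi, sin(x) ~= pi - x, so sin(math.pi) should come out as
# (true pi minus the closest double to pi), about 1.2246e-16.
# A good libm gets this because it reduces the argument with far
# more bits of pi than fsin's hard-wired 66 bits.

getcontext().prec = 50
PI = Decimal('3.14159265358979323846264338327950288419716939937510')  # pi to 50 digits

residual = PI - Decimal(math.pi)   # exactly pi - (the double stored in math.pi)
print(float(residual))             # ~1.2246e-16
print(math.sin(math.pi))           # should agree to near full double precision
```

If the two printed values agree, the libm's argument reduction is sound; fsin applied to the same input gets only a handful of bits right.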

Yep, it doesn't matter how deeply you debug your code or logic circuits; at least one bug always remains hidden, waiting to be discovered in the field.

I remember using a WANG PC with a 66 MHz Pentium inside at my office, and many jokes were circulating among the IT people in those days.

One joke was like this:

Question: Intel created the 80286, followed by the 80386 and then the 80486; so why isn't the Pentium called the 80586?

Answer: Because when they tried to add 486 + 100 on the new processor, they got 584.9999992913.

The latest version of Free42 uses the Intel decimal floating-point library in double precision, and it gets this right: SIN(PI) is correct to all 34 digits.

WP-34S' SIN(PI) in double precision is correct to 17 digits.

Cheers, Werner

(10-18-2014 04:40 PM)Werner Wrote: WP-34S' SIN(PI) in double precision is correct to 17 digits.

Mine gives -1.15 x 10^-34, which seems like more than 17 digits of precision. Am I missing something?

Yes, you are.

Free42: SIN(3.14159 26535 89793 23846 26433 83279 503) =

-1.15802 83060 06248 94179 02505 54076 922 e-34, correct to 34 digits

WP34S: SIN(3.14159 26535 89793 23846 26433 83279 503) =

-1.15802 83060 06248 91773 57874 54453 501 e-34, correct to 17 digits

Werner
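The reference values above can be checked independently. Since the 34-digit input x sits just above pi, sin(x) = −sin(x − pi) ≈ pi − x (the cubic correction term is around 10^-103, far below 34 digits), so subtracting x from pi taken to enough digits reproduces the exact answer. A sketch with Python's decimal module, with pi hard-coded to 70 decimal places:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50

# pi to 70 decimal places, well beyond the 34-digit input
PI = Decimal('3.1415926535897932384626433832795028841971693993751058209749445923078164')

# the 34-significant-digit approximation of pi keyed into the calculators
x = Decimal('3.141592653589793238462643383279503')

# sin(pi + d) = -sin(d) ~= -d for tiny d, so sin(x) ~= PI - x
sin_x = PI - x
print(sin_x)   # ~ -1.158028306006248941790250554076922E-34
```

The leading digits agree with Free42's result, confirming it is correct to all 34 places, while the WP-34S value diverges after the 17th significant digit.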