|Re: Trigonometrics for Really Big Input|
Message #13 Posted by Rodger Rosenbaum on 1 Mar 2007, 1:30 p.m.,
in response to message #12 by Gunnar Degnbol
On James Gosling's blog page, he quotes extensively from Joe Darcy, his local floating-point God.
"The implementation challenge is that sin/cos are implemented using argument reduction whereby any input is mapped into a corresponding input in the [-pi/4, pi/4] range. Since the period of sin/cos is pi and pi is transcendental, this amounts to having to compute a remainder from the division by a transcendental number, which is non-obvious. A few years after the x87 was designed, people figured out how to do this division as if by an exact value of pi."
He also says:
"The fix for this problem was figured out quite a long time ago. In the excellent paper 'The K5 transcendental functions' by T. Lynch, A. Ahmed, M. Schulte, T. Callaway, and R. Tisdale a technique is described for doing argument reduction as if you had an infinitely precise value for pi."
It sounds to me like Darcy is suggesting that a new technique has been discovered since the x87 was designed. I was eager to learn of this new technique.
I downloaded this paper from IEEEXplore and the only place where the authors mention range reduction for the sin function is on page 264, first column, where they say:
"The trigonometric functions reduce the domain of theta in [-2^63,2^63] to a range of (-pi/4,pi/4) by subtracting integer multiples of pi/2 from the input operand. Pi/2 is represented with up to about 256 bits depending on how much precision is required. The multiple precision arithmetic made handling such a precise value of pi straightforward and efficient."
I don't see anything new here. What they have done is to use an approximation to pi/2 carrying a large number of significant bits, up to a maximum of about 256.
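To see why so many bits are needed, here is a small Python sketch (my own illustration, not code from the paper) of what goes wrong when you reduce a large argument with only a double-precision approximation of pi/2:

```python
import math

# Reducing x = 1e20 with a double-precision pi/2.  fmod itself is
# computed exactly for its double inputs, but math.pi/2 differs from
# the true pi/2 by roughly 6e-17.  That error is multiplied by the
# ~6e19 multiples of pi/2 being subtracted, leaving an error of
# thousands of radians in the residue.
x = 1e20
r_naive = math.fmod(x, math.pi / 2)

# The residue lands in [0, pi/2) as fmod guarantees, but its value --
# the only thing that matters for sin(x) -- is pure noise.
print(r_naive)
```

This is exactly the loss the long reduction constant prevents: the constant must carry enough bits that its error, multiplied by the number of periods removed, is still negligible at the working precision.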
The Saturn-processor-based HP machines use a 31-decimal-digit range reduction constant. Since these return 12-digit results, the largest input argument to the SIN function for which we ought to expect correct 12-digit results is about 10^(31-12) = 10^19. On my HP50, SIN(1E20) (in radian mode, of course) returns a result that is correct to 12 digits.
The K5 SIN function presumably can return a double-precision result (53-bit significand) to the user. If a 256-bit range reduction constant is used, then the SIN function should return full accuracy on the K5 for input arguments as large as 2^(256-53) = 2^203 (about 1.286E61). It would be interesting to see how the K5 SIN function performs when given a substantially larger argument, say 1E90.
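The technique itself is easy to sketch. The following Python fragment (my own illustration, not K5 or HP code) reduces the argument against an 80-significant-digit decimal value of pi/2 -- the analogue of the 256-bit and 31-digit constants discussed above -- and then evaluates an ordinary double-precision sin or cos on the small residue:

```python
import math
from decimal import Decimal, getcontext

# pi/2 to 80 significant digits: the "range reduction constant".
PI_OVER_2 = Decimal(
    "1.5707963267948966192313216916397514420985846996875529104874722961539082031431045"
)

def hp_sin(x):
    """Sine of a float x via extended-precision range reduction.

    The argument is reduced modulo pi/2 in Decimal arithmetic, so even
    very large inputs lose nothing in the reduction step; a plain
    double-precision sin/cos is then applied to the small residue.
    """
    getcontext().prec = 100            # comfortably beyond the constant's length
    d = Decimal(x)                     # exact: binary double -> decimal
    k = int((d / PI_OVER_2).to_integral_value(rounding="ROUND_HALF_EVEN"))
    r = float(d - k * PI_OVER_2)       # residue in roughly [-pi/4, pi/4]
    # sin(k*pi/2 + r) depends on the quadrant k mod 4.
    return [math.sin, math.cos,
            lambda t: -math.sin(t), lambda t: -math.cos(t)][k % 4](r)
```

With an 80-digit constant, roughly the first 80 - 17 decimal digits of the argument's magnitude are covered, mirroring the 2^(256-53) accounting above; hp_sin(1e20) still reduces cleanly where a naive double-precision reduction would return noise.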
This paper was published in 1995, so when Joe Darcy says "The fix for this problem was figured out quite a long time ago," apparently he doesn't realize that the "fix" (as described in this paper, anyway) was figured out well before 1995. Maybe he meant to refer to some other paper describing a truly new technique.
He said "Since the period of sin/cos is pi and pi is transcendental, this amounts to having to compute a remainder from the division by a transcendental number, which is non-obvious. A few years after the x87 was designed, people figured out how to do this division as if by an exact value of pi."
Non-obvious to whom? Certain other people figured it out no later than the early '80s (actually, I'm sure it was done well before that). Maybe he should have used HP calculators. The HP71 was doing this sort of thing (extended-precision range reduction constant) in 1983.
I haven't obtained the ACM SIGNUM paper, but the abstract doesn't suggest that anything other than "reduction by division" (with a large range reduction constant) is being done. Multiple-precision division is required, of course, and maybe the mechanics of doing that are the substance of this paper.
Were Payne and Hanek really the first to describe how to do it correctly? I wonder.