Re: More on approximating the cumulative normal distribution Message #3 Posted by Les Wright on 28 June 2006, 10:36 a.m., in response to message #2 by Valentin Albillo
Thanks!
Another forum contributor directed me to an older version of SPSS TableCurve2D, which is impressively diverse and fast given that it is over a decade old. Regrettably, user-defined functions are restricted to a maximum of 10 parameters, and the Numerical Recipes (NR) approximation that intrigues me is at that limit already.
I should share a little more about what I like about the NR routine and why I want to improve on it if I can. For esoteric reasons, I am preoccupied with accuracy and, hopefully, precision, in the upper tail of the distribution, where the function values get quite tiny. The reported 1.2e-7 relative error does seem to hold when I plot the error curve in Maple, and the fact that it is a relative error, not an absolute one, really helps the accuracy for large arguments.
For example, on my HP48G, my implementation of the NR exponentiated polynomial approximation gives erfcc(33.82) = 3.01532436445E-499, whereas 0 1 33.82 SQRT * UTPN 2 * gives 3.01532450586E-499. This actually impresses and intrigues me greatly. The Hastings approximations quoted in Abramowitz and Stegun boast respectable absolute errors, but in the upper tail this is meaningless: tiny minus tiny is still tiny in absolute terms, yet it can be huge in proportional terms when you divide tiny by tiny and get something not tiny at all.
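For anyone who wants to play along, the NR routine in question (erfcc, the exponentiated rational Chebyshev fit from Numerical Recipes) can be sketched in a few lines of Python; the ten coefficients below are the published NR values. Note that ordinary IEEE doubles underflow near 1E-308, so this sketch cannot reach x = 33.82 the way the HP48G's 12-digit, 3-exponent-digit reals can; it is checked instead at moderate arguments against the library erfc:

```python
import math

def erfcc(x):
    """Numerical Recipes erfcc: complementary error function via a
    Chebyshev fit of the form t*exp(-x^2 + poly(t)), t = 1/(1 + |x|/2).
    Claimed relative (fractional) error everywhere below 1.2e-7."""
    z = abs(x)
    t = 1.0 / (1.0 + 0.5 * z)
    ans = t * math.exp(-z * z - 1.26551223 + t * (1.00002368 +
        t * (0.37409196 + t * (0.09678418 + t * (-0.18628806 +
        t * (0.27886807 + t * (-1.13520398 + t * (1.48851587 +
        t * (-0.82215223 + t * 0.17087277)))))))))
    return ans if x >= 0.0 else 2.0 - ans

def upper_tail(x):
    """Upper-tail standard normal probability Q(x) = erfc(x/sqrt(2))/2,
    i.e. what UTPN(0, 1, x) returns on the HP48 -- here via erfcc."""
    return 0.5 * erfcc(x / math.sqrt(2.0))
```

Note that erfcc(x) = 2 * UTPN(0, 1, x*SQRT(2)), which is exactly the RPN comparison above; the exp(...) form is what lets the relative error stay flat even where the function value itself is astronomically small.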
For example, 26.2.17 in Abramowitz and Stegun computes the cumulative normal distribution with an absolute max error of 7.5E-8. Using the complement of this to estimate the upper tail probability on my HP48G, I get roughly 2.12E-482 for an input of 47, compared with roughly 1.78E-482 for 0 1 47 UTPN. (This latter value matches up with the results in Maple and Mathematica, as well as the relevant continued fraction expansion out to a few terms, so I do trust it.) Right order of magnitude, and a tiny absolute error, but the relative error is an abysmal 1.9E-1. Abramowitz and Stegun 26.2.17 fares even worse deeper in that upper tail, from what I understand from Rodger Rosenbaum, who has done the number crunching.
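To make the absolute-versus-relative distinction concrete, here is a sketch of A&S 26.2.17 (Hastings' five-term fit, with the p and b coefficients as printed in the handbook), compared against a library-erfc reference. Doubles underflow long before x = 47, so the check is at x = 6, where the pattern is already visible: the absolute error is far inside the 7.5E-8 bound, while the relative error is already a few parts per thousand and climbing.

```python
import math

# A&S 26.2.17: P(x) ~ 1 - Z(x)*(b1*t + b2*t^2 + ... + b5*t^5),
# t = 1/(1 + p*x), stated absolute error |eps(x)| < 7.5e-8 for x >= 0.
P_COEF = 0.2316419
B = (0.319381530, -0.356563782, 1.781477937, -1.821255978, 1.330274429)

def upper_tail_as(x):
    """Upper-tail Q(x) = 1 - P(x), taken as the complement of A&S 26.2.17
    (i.e. Z(x) times the polynomial in t, evaluated by Horner's rule)."""
    t = 1.0 / (1.0 + P_COEF * x)
    poly = 0.0
    for b in reversed(B):
        poly = t * (b + poly)
    z = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    return z * poly

def upper_tail_exact(x):
    """Reference value via the library erfc: Q(x) = erfc(x/sqrt(2))/2."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

x = 6.0
approx, exact = upper_tail_as(x), upper_tail_exact(x)
abs_err = abs(approx - exact)   # comfortably within the 7.5e-8 bound
rel_err = abs_err / exact       # already a few parts per thousand
```

A quick limit argument shows why the tail behaviour is built in: as x grows, the polynomial collapses to b1*t ~ b1/(p*x), while the true tail behaves like Z(x)/x, so the ratio drifts toward b1/p, roughly 1.38, i.e. a relative error approaching about 38 percent, consistent with the ~19 percent seen at x = 47.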
Thanks for your feedback. I am learning that I need to understand a bit more about minimax curve fitting and Chebyshev fits using transformed inputs. I am no computer scientist or mathematician, so this may take a while. At least I will have fun in the process.
Cheers,
Les