(01-29-2021 11:27 PM)Paul Dale Wrote: I'd expect something like Newton's method for Q⁻¹. Calculating the CDF gives the PDF at the same time, so the first derivative is free. Likely, there will be an attempt to get a decent initial estimate and possibly some bounding or correction applied to avoid shooting off somewhere where it shouldn't be.
A higher order method is also viable. Second and higher order derivatives also come for free. Thus, Halley's method is also possible. This quickly becomes a balancing game between code size and speed of convergence.
Taking your advice, I tried a nice erf⁻¹(x) approximation from Sergei Winitzki (2008), "A handy approximation for the error function and its inverse".
As expected, Halley's method is not that complicated here, because erf''(x) / erf'(x) = -2x
Code:
from mpmath import *

def erf_inv(y, k=2/sqrt(pi), b=4.33, verbal=False):
    x = log1p(-y*y)/2                   # Winitzki seed, max_rel_err = 0.002
    x = sign(y) * sqrt(sqrt(b*b + x*((2-pi)*b + x)) - (b+x))
    while 1:
        f = erf(x) - y
        h = -f/(k*exp(-x*x) + x*f)      # Halley correction
        if verbal: print repr(x), float(h)
        if x+h*h == x: return x+h
        x += h

def icdf(p): return sqrt(2) * erf_inv(2*(p-.5))
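As a quick sanity check (my own sketch, not part of the program above), the identity behind the cheap Halley correction can be spot-checked numerically with central differences, using only math.erf from the standard library:

```python
# Spot-check erf''(x) / erf'(x) == -2x with central differences (stdlib only).
from math import erf

def d1(x, h=1e-6):
    # central difference estimate of erf'(x)
    return (erf(x + h) - erf(x - h)) / (2*h)

def d2(x, h=1e-4):
    # central second-difference estimate of erf''(x)
    return (erf(x + h) - 2*erf(x) + erf(x - h)) / (h*h)

for x in (0.3, 1.0, 1.7):
    assert abs(d2(x) / d1(x) + 2*x) < 1e-5
```

Since erf'(x) = (2/sqrt(pi))·exp(-x²), differentiating once more gives erf''(x) = -2x·erf'(x), which is what the ratio above confirms.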
>>> p = 0.7
>>> erf_inv(2*(p-.5), verbal=1)
mpf('0.37083921155017374') -3.20529566034e-05
mpf('0.37080715859357027') -1.2418308831e-14
mpf('0.37080715859355784')
>>> sqrt(2) * _ # icdf(0.7)
mpf('0.52440051270804067')
>>> p = 0.99
>>> erf_inv(2*(p-.5), verbal=1)
mpf('1.6433866997434172') 0.00158965243267
mpf('1.6449763521760914') 4.95709478826e-09
mpf('1.6449763571331861')
>>> sqrt(2) * _ # icdf(0.99)
mpf('2.3263478740408399')
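For anyone without mpmath, here is a hedged Python 3 port of the same routine using only math.erf from the standard library (plain doubles, so it tops out around 16 digits); the constant B ≈ 2/(pi·a) with Winitzki's a ≈ 0.147:

```python
# Stand-alone double-precision sketch: Winitzki seed + Halley refinement.
from math import erf, exp, log1p, sqrt, pi, copysign

K = 2 / sqrt(pi)     # erf'(0)
B = 4.33             # ~ 2/(pi*a), a ~ 0.147 (Winitzki 2008)

def erf_inv(y):
    x = log1p(-y * y) / 2                       # L/2 in Winitzki's formula
    x = copysign(sqrt(sqrt(B*B + x*((2 - pi)*B + x)) - (B + x)), y)
    while True:
        f = erf(x) - y
        h = -f / (K * exp(-x*x) + x*f)          # Halley correction
        if x + h*h == x:                        # step below half an ulp: done
            return x + h
        x += h

def icdf(p):
    # inverse normal CDF via erf_inv
    return sqrt(2) * erf_inv(2*(p - 0.5))
```

With this, icdf(0.7) and icdf(0.99) agree with the mpmath values above (0.52440051270804..., 2.32634787404083...) to roughly full double precision.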