10-22-2015, 07:45 PM
There's this fast inverse square root method that is often used in 3D graphics programs to calculate a normalized vector:
Code:
float Q_rsqrt( float number )
{
    long i;
    float x2, y;
    const float threehalfs = 1.5F;

    x2 = number * 0.5F;
    y  = number;
    i  = * ( long * ) &y;                       // evil floating point bit level hacking
    i  = 0x5f3759df - ( i >> 1 );               // what the fuck?
    y  = * ( float * ) &i;
    y  = y * ( threehalfs - ( x2 * y * y ) );   // 1st iteration
//  y  = y * ( threehalfs - ( x2 * y * y ) );   // 2nd iteration, this can be removed
    return y;
}
While it's not precisely known how the magic number 0x5f3759df was determined, the idea of the algorithm already appears in a draft paper by Prof. W. Kahan and K.C. Ng, written in May 1986:
Quote: B. sqrt(x) by Reciproot Iteration -- sqrt implementation in fdlibm
(1) Initial approximation
Let x0 and x1 be the leading and the trailing 32-bit words of
a floating point number x (in IEEE double format) respectively
(see section A). By performing shifts and subtracts on x0 and y0,
we obtain a 7.8-bit approximation of 1/sqrt(x) as follows.
k := 0x5fe80000 - (x0>>1);
y0:= k - T2[63&(k>>14)]. ... y ~ 1/sqrt(x) to 7.8 bits
Here k is a 32-bit integer and T2[] is an integer array
containing correction terms. Now magically the floating
value of y (y's leading 32-bit word is y0, the value of
its trailing word y1 is set to zero) approximates 1/sqrt(x)
to almost 7.8-bit.
Since Kahan was the primary architect behind the IEEE 754-1985 standard for floating-point computation, I wonder whether he already had that trick in mind when defining the representation.
Though the HP-71B was the first handheld to implement the IEEE floating-point standard, I doubt that this trick could be used there: since it was also HP's first calculator based on the Saturn processor, its floating-point numbers are BCD-coded.
But then the approximation \(\log(1+x)\approx x\) can't be used since \(x\) isn't \(\ll 1\).
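To spell out why the binary format is essential here (my own back-of-the-envelope derivation): for \(x = 2^e(1+m)\) with \(m\in[0,1)\), the IEEE 754 single-precision bit pattern read as an integer is
\[ I_x = 2^{23}\,(e + B + m), \qquad B = 127, \]
and with \(\log_2(1+m)\approx m+\sigma\) for a small tuning constant \(\sigma\),
\[ I_x \approx 2^{23}\,\bigl(\log_2 x - \sigma + B\bigr), \]
i.e. the bit pattern is an affine function of the logarithm. For \(y = 1/\sqrt{x}\) we have \(\log_2 y = -\tfrac12\log_2 x\), which translates into
\[ I_y \approx \tfrac32\,2^{23}(B-\sigma) - \tfrac12\,I_x, \]
and \(\tfrac32\cdot 2^{23}(B-\sigma)\) is the magic constant: \(\sigma=0\) gives 0x5F400000, while \(\sigma\approx 0.045\) gives 0x5F3759DF. A BCD mantissa offers no such linear relationship between digit pattern and logarithm.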
Cheers
Thomas