evil floating point bit level hacking
10-22-2015, 07:45 PM
Post: #1
evil floating point bit level hacking
There's this fast inverse square root method that is often used in 3D graphics programs to calculate a normalized vector:
Code:
float Q_rsqrt( float number )
{
    long i;
    float x2, y;
    const float threehalfs = 1.5F;

    x2 = number * 0.5F;
    y  = number;
    i  = * ( long * ) &y;                       // evil floating point bit level hacking
    i  = 0x5f3759df - ( i >> 1 );               // what the fuck? 
    y  = * ( float * ) &i;
    y  = y * ( threehalfs - ( x2 * y * y ) );   // 1st iteration
//    y  = y * ( threehalfs - ( x2 * y * y ) );   // 2nd iteration, this can be removed

    return y;
}

While it's not precisely known how the magic number 0x5f3759df was determined, the idea of the algorithm already appears in a draft paper by Prof. W. Kahan and K.C. Ng, written in May 1986:
Quote:B. sqrt(x) by Reciproot Iteration

(1) Initial approximation

Let x0 and x1 be the leading and the trailing 32-bit words of
a floating point number x (in IEEE double format) respectively
(see section A). By performing shifts and subtracts on x0 and y0,
we obtain a 7.8-bit approximation of 1/sqrt(x) as follows.

k := 0x5fe80000 - (x0>>1);
y0:= k - T2[63&(k>>14)]. ... y ~ 1/sqrt(x) to 7.8 bits

Here k is a 32-bit integer and T2[] is an integer array
containing correction terms. Now magically the floating
value of y (y's leading 32-bit word is y0, the value of
its trailing word y1 is set to zero) approximates 1/sqrt(x)
to almost 7.8-bit.
-- sqrt implementation in fdlibm

Since Kahan was the primary architect behind the IEEE 754-1985 standard for floating-point computation, I wonder whether he already had that trick in mind when defining the representation:
[Image: 618px-IEEE_754_Single_Floating_Point_Format.svg.png]

Though the HP-71B was the first handheld to implement the IEEE floating-point standard, I doubt that this trick could be used there: since it was also HP's first calculator based on the Saturn processor, its floating-point numbers are BCD-coded.

But then the approximation \(\log(1+x)\approx x\) can't be used since \(x\) isn't \(\ll 1\).
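For completeness, here is a sketch of how that approximation leads to the magic constant (my derivation, following the usual analysis; \(\sigma\) is a small tuning term). Write \(x = 2^e(1+m)\) with \(0 \le m < 1\); the bit pattern of \(x\) read as an integer is \(I_x = 2^{23}(e + 127 + m)\), and using \(\log_2(1+m) \approx m + \sigma\) gives

\[ I_x \approx 2^{23}\left(\log_2 x + 127 - \sigma\right). \]

Since \(\log_2 y = -\tfrac{1}{2}\log_2 x\) for \(y = 1/\sqrt{x}\), the corresponding bit pattern is

\[ I_y \approx \tfrac{3}{2}\,2^{23}\,(127 - \sigma) - \tfrac{1}{2} I_x, \]

which is exactly the form of the line `i = 0x5f3759df - ( i >> 1 )`, and with \(\sigma \approx 0.0450\) the leading constant \(\tfrac{3}{2}\,2^{23}\,(127 - \sigma)\) comes out to 0x5f3759df. On a BCD machine the significand digits don't line up with the fractional part of a binary logarithm, so no analogous constant exists.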

Cheers
Thomas
10-22-2015, 09:29 PM
Post: #2
RE: evil floating point bit level hacking
Given that Kahan is the "father of floating point," his having discovered this trick would not surprise me "one bit".

Graph 3D | QPI | SolveSys
06-09-2024, 03:06 PM
Post: #3
RE: evil floating point bit level hacking
I was digging into the fast inverse square root again last night, and made the personal discovery that Kahan was connected to this.

The algorithm reminded me so much of the CORDIC algorithm, in that it is equal parts a mixture of maths and computer science hacking.

And it ties in with the TVM stuff too: Kahan had a hand in improving the algorithm for the HP-27 and later machines, and Newton's method is used both in the fast inverse square root and, often, to solve for \(i\).
06-09-2024, 05:18 PM
Post: #4
RE: evil floating point bit level hacking
.
Kahan!!!

[Image: attachment.php?thumbnail=12642]