Day 335 – Surprise and Wonder

31 – 100 Algorithms

Algorithms.
It sounds like such a sturdy, workmanlike word. The sound conjures visions of dusty books, careful craftsmanship and transparent function.

Oh, how wrong first impressions can be…

Algorithms can be whimsical. Magical. Mercurial.

They can hide their function through misdirection and obfuscation.

They can be wonderful, brilliant and beautiful. Even when their meaning can barely be glimpsed through the cracks of their lines. Or especially then.

Way, way back in late 2011 I wrote a post to illustrate how even the most well-known algorithm can become unrecognisable when it takes an unfamiliar form. Fibonacci number calculation is probably the best-known algorithm of them all, because in its traditional form it is the textbook example of recursion. But in a different form it is both much more efficient and, at the same time, much more cryptic.
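That old post isn’t reproduced here, but to give a flavour of what an “unfamiliar form” can look like, here is a sketch of one such variant in C: the fast-doubling formulation. This is my own illustration of the idea, not necessarily the version from that 2011 post.

#include <stdint.h>

/* Fast-doubling Fibonacci, based on the identities
 *   F(2k)   = F(k) * (2*F(k+1) - F(k))
 *   F(2k+1) = F(k)^2 + F(k+1)^2
 * It walks the bits of n from the top down, taking O(log n) steps instead
 * of the exponential blow-up of the naive recursive version.
 * (Results fit in 64 bits only up to F(93).) */
uint64_t fib( unsigned n )
{
	uint64_t a = 0, b = 1;              /* a = F(k), b = F(k+1), starting at k = 0 */

	for ( int bit = 31; bit >= 0; --bit )
	{
		uint64_t c = a * ( 2 * b - a ); /* F(2k)   */
		uint64_t d = a * a + b * b;     /* F(2k+1) */

		if ( n & ( 1u << bit ) )        /* consume one bit of n: k becomes 2k or 2k+1 */
		{
			a = d;
			b = c + d;
		}
		else
		{
			a = c;
			b = d;
		}
	}

	return a;                           /* k has become n, so a holds F(n) */
}

Efficient, yes. But you would be hard pressed to recognise the familiar “add the two previous numbers” in there.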

There is a series of books filled with great examples of incomprehensible hacks and approximations. When I was at university, I found a book in the library called “Graphics Gems”, and it did its name more than justice. The first thing most people think of when I mention computers is precise calculation engines that do not make mistakes. The eye-opener of this book was that sometimes you can get a “good-enough” answer by doing a not-quite-right calculation.

I had never considered approximation as a valid strategy in computing.

It turns out that by sacrificing a little precision (in some cases up to 10%) you can often make certain calculations a lot faster than if you were aiming for the exact answer. Linear approximations are a perfect example: by pretending a curved graph is really nothing more than a handful of straight lines, you can calculate values along that graph much more quickly.
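As a rough sketch of that idea (my own toy example, not one lifted from Graphics Gems): approximate a sine curve with a small table of exact values and straight lines between them.

#include <math.h>

#define SEGMENTS 64
static const float TWO_PI = 6.28318530f;
static float sin_table[ SEGMENTS + 1 ];

/* Fill the table once with exact values of sin(x) over one full period. */
void sin_table_init( void )
{
	for ( int i = 0; i <= SEGMENTS; ++i )
		sin_table[ i ] = (float) sin( (double) TWO_PI * i / SEGMENTS );
}

/* Pretend the curve between neighbouring table entries is a straight line.
 * With 64 segments the error stays around 0.1% -- a "good-enough" answer
 * from a not-quite-right calculation.  x must lie in [0, TWO_PI). */
float sin_approx( float x )
{
	float pos = x * SEGMENTS / TWO_PI;
	int   i   = (int) pos;              /* which segment we landed in */
	float t   = pos - i;                /* how far along that segment */

	return sin_table[ i ] + t * ( sin_table[ i + 1 ] - sin_table[ i ] );
}

On a slow processor, a table lookup and one multiply like this could easily beat a full library sine routine.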

A lot of the hacks I read about back then were invented out of necessity. When processors were slow and didn’t have efficient built-in mathematical units, any little tweak that could speed up a calculation was worth considering. Today, processors are so fast that in most cases we do not bother with that kind of hack any more.

Except for when we do.

John Carmack (you may know him as the lead programmer of the game Quake III) used a seemingly incomprehensible hack to calculate “1 / square-root-of(x)”. I won’t try to explain why this is useful; apparently with every formula I lose half my remaining readers (if I hadn’t lost them already by this point). Just accept that in 3D graphics there is a purpose for this calculation.

The fast-inverse-square-root algorithm looks as follows for the coders in the audience (an interesting note follows for the non-coders):

float Q_rsqrt( float number )
{
	long i;
	float x2, y;
	const float threehalfs = 1.5F;
 
	x2 = number * 0.5F;
	y  = number;
	i  = * ( long * ) &y;                       // reinterpret the bits of the float as an integer
	i  = 0x5f3759df - ( i >> 1 );               // the magic: a shift and a subtraction from a magic constant
	y  = * ( float * ) &i;                      // reinterpret those bits back into a float
	y  = y * ( threehalfs - ( x2 * y * y ) );   // one round of Newton's method to sharpen the guess
 
	return y;
}

To non-coders all of this may look like a magic incantation. The more interesting fact is that one line looks like magic to coders as well: the one reading “i = 0x5f3759df - ( i >> 1 );”. You see that weird string of digits that doesn’t look like anything else, “0x5f3759df”? That’s what we call a magic number. Not because it does something magical to the processor, but because coders cannot readily understand why it works.

You cannot pluck that number out of thin air and have it work. Someone did some very deep numerical analysis to discover that this exact number gives a very good first guess at the answer, which a single round of Newton’s method then sharpens. And even though it looks like a lot of instructions for the computer, it is about four times faster than doing the real, proper calculation.
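If you want to see just how good the approximation is, a small test harness like this one (my own, not part of the Quake III source) will print the relative error for a few inputs:

#include <stdio.h>
#include <math.h>

int main( void )
{
	float inputs[] = { 0.25f, 1.0f, 2.0f, 10.0f, 123.456f };

	for ( int i = 0; i < 5; ++i )
	{
		float x     = inputs[ i ];
		float fast  = Q_rsqrt( x );             /* the hack above          */
		float exact = 1.0f / sqrtf( x );        /* the real, proper answer */

		printf( "x = %8.3f  fast = %.6f  exact = %.6f  error = %.4f%%\n",
		        x, fast, exact, 100.0f * fabsf( fast - exact ) / exact );
	}

	return 0;
}

The relative error stays below roughly 0.2% across the range, which is exactly the kind of loss nobody can see on screen.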

The reason we could play Quake III at acceptable frame-rates when it was released was that some mad genius somewhere came up with “0x5f3759df” and determined that the loss of precision would be too small for anyone to detect with the naked eye.

I love magic like that.
