I know I was hacking sqrt stuff circa 1992-4 & I thought at the time that this sort of trick was unremarkable; I thought I had seen it before somewhere else. I wasn't doing 1/sqrt(x), but sqrt(x), so there are slight differences. I remember this stuff specifically because I was upset about how many cycles it took to move numbers back & forth between the integer & floating point units of the ill-fated Intel (N90??) processor of the day. It may well be that this trick was used in the Intel library sqrt routine from that era.

There's also a cool IBM trick for computing abs(z), where z=x+iy, which converges cubically (?). It works by rotating the vector onto the x-axis in steps, while preserving the length of the vector. There is a paper in one of the IBM journals from the 1980's. With the multiplicity of fp units in today's processors, it should work pretty well.

At 10:50 AM 4/14/2011, Robert Munafo wrote:
A definite No on Knuth. Volume II introduces floating-point, and its index lists many pages with exercises mentioning square root, but I looked at them all and none seem to match this. At any rate, all published editions (including my 1997 3rd edition) use the classic base-agnostic MIX virtual machine for all floating-point algorithm examples. MIX is either base 64 or base 100 (i.e. not base 2), and the hack wouldn't work even if you corrupted the base-64 machine implementation into a 31-bit-wide base-2 one (because the floating-point representation still has to be interpreted as base 64). The next ("ultimate") editions of Knuth's books, if he manages to finish them, will use MMIX [1], which uses IEEE floating-point, but I think it's unlikely Knuth would have held on to anecdotes about this right-shift hack just to include them 40 years later...
Any early computer expressing the float as Sign, Exponent excess 2^N, Implicit leading mantissa bit, Mantissa would be a good candidate for this hack -- but have you found any actual specific evidence for any of those machines in your list?
Note that a right shift isn't quite enough, as the 1997 Blinn article shows: in order to divide the exponent by two you need to add back half of the exponent offset (IEEE binary32 is excess-127) after shifting, which is what the "AsInteger(1.0)>>1" part does. Since AsInteger(1.0) is 0.01111111.00000000000000000000000 (written sign.exponent.mantissa) = 1.0*2^(127-127), the right shift gives 0.00111111.10000000000000000000000 = 1.5*2^(63-127) = 1.5*2^-64. So we're also adding 0.5 to the mantissa, to fill in for half of the power of 2 that would have been lost by shifting a bit out of the exponent. In the case where the (biased) exponent field is odd, the shift drops its low bit into the mantissa's top position and the added 0.5 carries it back into the exponent; in the other cases the 0.5 just stays in the mantissa, and in both cases this is exactly what we want.
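To make that concrete with a worked example: AsInteger(4.0f) = 0x40800000 (sign 0, exponent 10000001 = 129, mantissa 0); shifting right gives 0x20400000, and adding 0x1FC00000 (= AsInteger(1.0f)>>1) produces 0x40000000, which is exactly 2.0f, because the odd low bit of the exponent carries back as described. For x = 2.0f (an even exponent field of 128) the same steps give 0x20000000 + 0x1FC00000 = 0x3FC00000 = 1.5f, versus the true sqrt(2) = 1.41421..., an error of about 6%.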
- Robert
[1] http://www-cs-faculty.stanford.edu/~knuth/mmix.html
On Thu, Apr 14, 2011 at 10:31, Henry Baker <hbaker1@pipeline.com> wrote:

I don't have my Knuth volumes handy. Perhaps this trick is in one of the exercises?
Current additional guesses:
IBM 704/709/7040/7090 Fortran libraries
PDP-6/10 Fortran libraries
CDC 6600 Fortran libraries
Cray-1 Fortran libraries
_not_ the IBM 360 (not a binary exponent)
Kahan's original SANE IEEE floating-point SW for the Motorola 68000, used in the original Apple Macintosh
Various DSPs (Digital Signal Processors) from the 1980's: Analog Devices, Texas Instruments, ATT (Bell Labs), etc.
At 11:35 PM 4/13/2011, Robert Munafo wrote:
I've just traced it back to 1997 ("Floating-point tricks" by Blinn, see below) where one finds the following:
float ASqrt(float x) {
  int i = (AsInteger(x) >> 1) + (AsInteger(1.0f) >> 1);
  return AsFloat(i);
}
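The snippet assumes AsInteger/AsFloat bit-cast helpers that aren't shown above. A self-contained version you could compile to try it out might look like this (the memcpy-based casts and the little test driver are my own filling-in, not taken from Blinn's article):

#include <math.h>
#include <stdio.h>
#include <string.h>

/* Bit-cast helpers in the spirit of Blinn's AsInteger/AsFloat
   (these particular definitions are assumed, not quoted from the paper). */
static unsigned int AsInteger(float f) {
  unsigned int i;
  memcpy(&i, &f, sizeof i);   /* reinterpret the float's bits as an integer */
  return i;
}

static float AsFloat(unsigned int i) {
  float f;
  memcpy(&f, &i, sizeof f);   /* reinterpret the integer's bits as a float */
  return f;
}

/* The approximation: halve the exponent, add back half the bias.
   Only meaningful for positive, normal x. */
static float ASqrt(float x) {
  return AsFloat((AsInteger(x) >> 1) + (AsInteger(1.0f) >> 1));
}

int main(void) {
  const float tests[] = { 0.5f, 2.0f, 4.0f, 10.0f, 100.0f };
  for (int k = 0; k < 5; k++) {
    printf("x = %8g   ASqrt = %-10g   sqrtf = %-10g\n",
           tests[k], ASqrt(tests[k]), sqrtf(tests[k]));
  }
  return 0;
}

Since the estimate always errs high (by up to about 6%), one Newton step y = (y + x/y)/2 cuts that to roughly 0.2%, and a second step gets within a few parts per million.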
Here are the breadcrumbs I followed:
[...] 7b. http://rufus.hackish.org/~rufus/FPtricks.pdf ("Floating-point tricks" by Jim Blinn, relevant section is at the top of page 83)
-- Robert Munafo -- mrob.com Follow me at: fb.com/mrob27 - twitter.com/mrob_27 - mrob27.wordpress.com - youtube.com/user/mrob143 - rilybot.blogspot.com