I'll bite.  Let's derive a Newton iteration for x^2, the inverse function of sqrt(x).  Given y, find x such that

    f(x) = y - sqrt(x) = 0.

Newton's rule gives

    x_{i+1} = x_i - f(x_i)/f'(x_i)
            = x_i - (y - sqrt(x_i)) / (-1/(2*sqrt(x_i)))
            = x_i + 2*sqrt(x_i)*(y - sqrt(x_i))
            = sqrt(x_i)*(2*y - sqrt(x_i))

So (leaving aside the problem of how to compute sqrt(x)!), we would need to figure out how many iterations are required to get a decent approximation to x^2.  Since Newton's method converges quadratically near the root, the number of correct bits roughly doubles per iteration, so a handful of iterations suffice from a reasonable seed.

BTW, I have an analogous problem with my proposed "asinh" representation for numbers.  Operations that are traditionally simple, like x^2, aren't simple in that representation, so a lot of "simple" operations are going to involve Newton and/or other approximations.

If you look carefully at asinh(x), the "high order" bits form the "exponent" of x, while the "low order" bits form the "mantissa" of x.  Asinh(x) is thus seen to be a perfectly smooth floating-point system without the "bumps" introduced by the arbitrary split between the exponent & mantissa of the usual floating-point systems.  I.e., asinh(x) is already a representation with "gradual underflow" a la Kahan.

The asinh "number system" deserves an entry in a 21st Century HAKMEM ("HakMem" ??).

BTW, asinh isn't just my proposal; astronomers use it as an "improvement" for measuring magnitudes:

http://www.sdss.org/dr7/algorithms/photometry.html#asinh
http://adsabs.harvard.edu/cgi-bin/bib_query?1999AJ....118.1406L

At 05:46 PM 4/14/2011, Marc LeBrun wrote:
Suppose you go t'other way: given flonum x approximate x^2 by left shifting it (assume no overflow). How close? Can you get it even closer using only a fixed number of "linear" operations (add, shift, bitwise logical ops etc)--short, of course, of "morally" multiplying via shift-and-add etc?
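To make the iteration count above concrete, here is a sketch in Python of the Newton step x_{i+1} = sqrt(x_i)*(2*y - sqrt(x_i)).  The seed choice (a power of two with roughly doubled exponent) is my own; any seed with 0 < x_0 < 4*y^2 works, since that keeps every iterate positive.

```python
import math

def square_via_newton(y, iters=8):
    """Approximate y*y by Newton's method on f(x) = y - sqrt(x),
    i.e. the iteration x_{i+1} = sqrt(x_i)*(2*y - sqrt(x_i)).
    Treats sqrt as a given primitive, as in the text.
    """
    assert y > 0
    # Seed: a power of two with doubled exponent, so that
    # 0 < x0 <= y*y < 4*y*y; this keeps every iterate positive,
    # since x_{i+1} = y*y - (y - sqrt(x_i))**2 <= y*y.
    _, e = math.frexp(y)            # y = m * 2**e with 0.5 <= m < 1
    x = math.ldexp(1.0, 2 * e - 2)  # 2**(2e-2) <= y*y
    for _ in range(iters):
        r = math.sqrt(x)
        x = r * (2.0 * y - r)       # = y*y - (y - r)**2
    return x
```

From this one-bit seed, about six iterations already reach double precision, matching the bits-double-per-step estimate.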
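A quick illustration (my framing, not a standard API) of the exponent/mantissa reading of asinh: divide asinh(x) by ln 2, and for large x the integer part tracks the binary exponent of x while the fractional part plays the mantissa role; near zero the whole thing shades off linearly, which is the gradual underflow mentioned above.

```python
import math

def asinh_split(x):
    """Split asinh(x)/ln(2) into an integer ("exponent"-like) part and
    a fractional ("mantissa"-like) part.  For large x,
    asinh(x) ~ ln(2x), so the integer part is about log2(x) + 1;
    near zero, asinh(x) ~ x, so the representation degrades smoothly
    instead of hitting a hard underflow cliff.
    """
    t = math.asinh(x) / math.log(2.0)
    e = math.floor(t)
    return int(e), t - e
```

For example, asinh_split(1024.0) has integer part 11, i.e. log2(1024) + 1, whereas a plain log2 representation would blow up at zero.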
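On LeBrun's question: one concrete reading of "left shifting" is to double the bit pattern of the float as an integer (a one-bit left shift) and rebias, which doubles the exponent field and lets the shifted mantissa bits act as a piecewise-linear stand-in for squaring the significand.  A sketch, assuming IEEE-754 doubles and positive, normal inputs with no overflow:

```python
import struct

def shift_square(x):
    """Approximate x*x by integer ops on the IEEE-754 bit pattern of a
    double: 2*i - B doubles the biased exponent (B is the bit pattern
    of 1.0), and the shifted mantissa bits serve as a piecewise-linear
    approximation to squaring the significand.  The result never
    exceeds the true square; worst-case relative error is about 11%,
    at the middle of a binade.  Assumes x > 0, normal, no overflow.
    """
    (i,) = struct.unpack("<Q", struct.pack("<d", x))
    B = 0x3FF0000000000000  # bits of 1.0 (exponent bias 1023 << 52)
    (y,) = struct.unpack("<d", struct.pack("<Q", 2 * i - B))
    return y
```

It is exact on powers of two (shift_square(2.0) == 4.0); for 3.0 it returns 8.0 against the true 9.0, which is the worst case.  Whether a fixed number of further adds/shifts/logical ops can beat that 11% without "morally" multiplying is exactly the open part of the question.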