I was considering a way to represent any real number by a binary fraction in ( -1, +1 ).
Obviously not! Grr, hasty edit. I wanted a function that mapped (-1 .. +1) to (-oo .. +oo), to map binary fractions to numbers "near" a lot of reals.

As a floating point representation it has a couple of cute properties, besides representing a larger range of numbers: it's easy to get the representation for -x, 2^x, log2(x) and 1/x. But it's impossible to either add or multiply with it, which I think counts against it.

Also, the fractalliness makes me worry that the precision would be too low at certain points, so that you'd have to add bits that would be wasted most of the time. But at the resolution I've looked at (1e-6 over the range 0 .. 0.5) the ratio between the peak and average slope is less than 8, i.e. three extra bits. The spikiness gets worse as you zoom in, and it's an interesting problem to locate peaks in the differences of 2^52 numbers... but I'm not sure I'm that interested.

Saying "binary fraction" sort of gives away how that implementation works, if it wasn't obvious.

Eugene Salamin <gene_salamin at yahoo.com> writes (about f(1/3)):
The function terminates due to the finite precision of floating point. When the recursion finally unwinds, the resulting calculation consists of starting with x = 0, and successively substituting x = 2^(-x) as many times as the bit precision of floating point. Because the slope of 2^(-x) has absolute value less than 1, the sequence converges to the intersection point of y = x and y = 2^(-x).
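(For concreteness, a small Python sketch of both points. The direct iteration below is exactly the calculation Eugene describes; the decode() routine is only a guess at the bit-to-instruction convention, to make the "binary fraction" reading concrete -- the actual definition is in the code linked at the end of this message.)

    # Eugene's observation, verbatim: start at 0 and keep substituting
    # x = 2**(-x); the sequence settles at the intersection of y = x and
    # y = 2**(-x), about 0.641186.
    x = 0.0
    for _ in range(60):              # roughly the bit precision of a double
        x = 2.0 ** (-x)
    print(x, 2.0 ** (-x))            # both ~0.641186: a fixed point

    def decode(t, max_bits=60):
        # Guessed decoder: peel the bits of the fraction t in (-1, +1) and
        # read them as a string of "negate" / "raise 2 to the" instructions
        # applied to a starting value of 0.  The convention used here
        # (0-bit -> negate, 1-bit -> 2**x, most significant bit first) is an
        # assumption for illustration only.
        sign = -1.0 if t < 0 else 1.0
        frac = abs(t)
        bits = []
        while frac != 0.0 and len(bits) < max_bits:
            frac *= 2.0
            bit = int(frac)
            bits.append(bit)
            frac -= bit
        x = 0.0
        for b in bits:
            x = 2.0 ** x if b else -x
        return sign * x

    print(decode(1/3))               # ~0.641186 again: 1/3 = 0.0101..._2,
                                     # so the instructions pair up as
                                     # "negate, then 2**" over and over
    print(decode(0.5))               # 1.0

Under that guessed convention, any repeating bit pattern turns into a fixed-point iteration like the one above, which is the transcendental-equation point below.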
Yes. In general the procedure turns the bits of the input number into a string of either "-x" or "2^x" instructions (and finally 0). So any "repeating decimal" sets up a transcendental equation to be solved. And I'm pretty sure it converges for any x with |x| < 1 -- at least if you calculate with enough digits.

Another point against this function is that I can't see a non-ugly extension of it to complex numbers, so feh.

My handwave that the function reaches all the reals is that more bits always give you larger numbers and finer resolution everywhere; so if infinite strings of decimal digits work, why not infinite strings of +/-( 2^ +/-( 2^ ... +/-( 2^0 ) ... ) )?

"Payton, Paul" <paul.payton@lmco.com> writes:
I don't know what function it is, but it produces a fractal. Also, the length of the curve in the range [-.5,+.5] appears to approach a limit. I tried your function with various granularities...
     10000 -- 2.48434421186375
     50000 -- 2.52759048548038
    100000 -- 2.54491491266804
    500000 -- 2.58282460188065
Yes, but of course the worst possible case is the Hamming distance (|dx| + |dy|). From (-.5,-1) to (.5,1) that's 1 + 2 = 3.0. For the case where half the steps are (+d,+0) and the other half are (+d,+d), the length would be 1+sqrt(2).

More pictures, a more bulletproof version of the function, and an inverse function (binary search, argh!) here: http://www.tiac.net/~sw/2010/03/Superbola

--Steve
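(Paul's table is the straightforward polyline measurement: sample the curve on an evenly spaced grid and add up the segment lengths. A generic sketch in Python, with math.sin standing in just so the snippet runs -- substitute the real function from the link above to reproduce numbers like his. The same grid of samples also gives the peak-to-average slope ratio mentioned earlier.)

    import math

    def arc_length(f, a, b, n):
        # Polyline length of y = f(x) sampled at n+1 evenly spaced points
        # on [a, b] -- the kind of measurement behind Paul's table.
        xs = [a + (b - a) * i / n for i in range(n + 1)]
        ys = [f(x) for x in xs]
        return sum(math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
                   for i in range(n))

    def peak_to_average_slope(f, a, b, n):
        # Ratio of the largest to the mean absolute slope over the grid
        # (cf. the "less than 8, three extra bits" remark above).  The
        # common step size cancels out of the ratio.
        xs = [a + (b - a) * i / n for i in range(n + 1)]
        ys = [f(x) for x in xs]
        dys = [abs(ys[i + 1] - ys[i]) for i in range(n)]
        return max(dys) / (sum(dys) / n)

    f = math.sin    # placeholder; swap in the real function from the link
    for n in (10000, 50000, 100000, 500000):
        print(n, arc_length(f, -0.5, 0.5, n))
    print(peak_to_average_slope(f, 0.0, 0.5, 500000))

Nothing here is specific to the function under discussion; it only assumes f can be evaluated pointwise.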