[math-fun] more asinh representation
After thinking about it some more, the "binary" version of asinh doesn't seem to help very much in calculating/manipulating the representation. So it seems easier to stick with the traditional definition, which still has the advantage that for small x, asinh(x) ~ x.

To extract the binary exponent from asinh(|x|), multiply by 1/ln(2) ~ 1.442695041, convert to an integer, and subtract 1.

So to compete with IEEE single precision floating point, I suggest the representation:

asinhIEEE(x) = round(asinh(x*2^126)*2^23) as a 32-bit fixed point number.

The "sign bit" is the high-order bit, the "exponent" is the next 8 bits, and the "mantissa" is 23 bits.

The largest IEEE single precision float is ~2^128 ~ 3.4*10^38, so asinhIEEE(3.4*10^38) ~ 1,482,700,732 (easily representable in 31 bits).

The smallest IEEE single precision normalized float is ~2^-126 ~ 1.18*10^-38, so asinhIEEE(1.18*10^-38) ~ 7,416,212 (a 23-bit number).

The smallest IEEE single precision denormalized float is ~2^-149 ~ 1.4*10^-45, so asinhIEEE(1.4*10^-45) ~ 1.

The "binary exponent" of |x| can be extracted as asinhIEEE(|x|)/2^23/ln(2) - 127, i.e., shift to the right 23 bits, multiply by 1/ln(2), and subtract 127.

So it appears that asinh can smoothly represent the IEEE "gradual underflow"/denorms.
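The proposed encoding is easy to check numerically. Here is a minimal sketch, assuming the definition above (the function name asinh_ieee is mine, not from the post):

```python
import math

# Sketch of the proposed representation:
# encode x as round(asinh(x * 2^126) * 2^23), stored as 32-bit fixed point.
def asinh_ieee(x):
    return round(math.asinh(x * 2.0**126) * 2.0**23)

# Largest single-precision float, ~2^128 ~ 3.4*10^38:
print(asinh_ieee(3.4e38))      # ~1.48e9, fits comfortably in 31 bits
# Smallest normalized float, ~2^-126 ~ 1.18*10^-38:
print(asinh_ieee(1.18e-38))    # a 23-bit number
# Smallest denormalized float, ~2^-149 ~ 1.4*10^-45:
print(asinh_ieee(1.4e-45))     # ~1
```

These values match the three example points quoted in the post, which is consistent with the claim that the whole single-precision range, denorms included, lands inside 31 bits.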
That's why this group is called "math-fun" rather than "engineering-fun"!

Some of the more important applications don't involve calculations at all, but are primarily "standard/exchange format" related. E.g., the traditional phone system uses a hacked-up log+linear representation for digital audio; log+linear systems have been proposed for digital video encoding. I believe that the asinh representation is as compact or more compact than many of these standards and/or proposals.

I've also been working on calculational issues. For very small numbers, the representation is the usual fixed point binary format. For very large numbers, the representation is straight logarithmic. The only problems come with numbers in between and mixed uses. Logarithmic arithmetic is well covered by the An Wang system from the 1960's.

At 11:56 AM 8/12/2006, franktaw@netscape.net wrote:
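The two asymptotic regimes mentioned above are easy to see numerically. A quick check (my own illustration, not from the post): for small x, asinh(x) ~ x, so the code behaves like fixed point; for large x, asinh(x) ~ ln(2x), so it behaves logarithmically.

```python
import math

# Small-x regime: asinh(x) is essentially x (fixed-point behavior).
for x in (1e-8, 1e-6):
    print(x, math.asinh(x))

# Large-x regime: asinh(x) is essentially ln(2x) (logarithmic behavior).
for x in (1e8, 1e10):
    print(math.asinh(x), math.log(2 * x))
```

In each regime the two printed columns agree to well beyond single precision, which is why only the in-between range and mixed uses pose problems.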
This is all very well, but the point of a representation is that it not only lets you represent values, it expedites calculating with them. Storing the asinh makes addition difficult, and multiplication very difficult.
Franklin T. Adams-Watters
-----Original Message----- From: Henry Baker <hbaker1@pipeline.com>
After thinking about it some more, the "binary" version of asinh doesn't seem to help very much in calculating/manipulating the representation. So it seems easier to stick with the traditional definition, which still has the advantage that for small x, asinh(x)~x.
To extract the binary exponent from asinh(|x|), multiply by 1/ln(2)~1.442695041, convert to an integer, and subtract 1.
So to compete with IEEE single precision floating point, I suggest the representation:
asinhIEEE(x)=round(asinh(x*2^126)*2^23) as a 32-bit fixed point number.
The "sign bit" is the high-order bit, the "exponent" is the next 8 bits, and the "mantissa" is 23 bits.
The largest IEEE single precision float is ~2^128 ~ 3.4*10^38, so
asinhIEEE(3.4*10^38) ~ 1,482,700,732 (easily representable in 31 bits)
The smallest IEEE single precision normalized float is ~2^-126 ~ 1.18*10^-38, so
asinhIEEE(1.18*10^-38) ~ 7,416,212 (a 23-bit number)
The smallest IEEE single precision denormalized float is ~2^-149 ~ 1.4*10^-45, so
asinhIEEE(1.4*10^-45) ~ 1
The "binary exponent" of |x| can be extracted as
asinhIEEE(|x|)/2^23/ln(2)-127, i.e., shift to the right 23 bits, multiply by 1/ln(2) and subtract 127.
So it appears that asinh can smoothly represent the IEEE "gradual underflow"/denorms.
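The exponent-extraction recipe in the quoted message can be sketched directly. A minimal check, assuming the encoding above (both function names are mine):

```python
import math

def asinh_ieee(x):
    return round(math.asinh(x * 2.0**126) * 2.0**23)

# Extract the "binary exponent" as described: shift right 23 bits
# (divide by 2^23), multiply by 1/ln(2), truncate to an integer,
# and subtract 127.
def binary_exponent(x):
    return int((asinh_ieee(abs(x)) / 2.0**23) / math.log(2)) - 127

print(binary_exponent(1.5))    # → 0
```

For values away from exact powers of two this recovers floor(log2(|x|)), e.g. 1 for x = 3.0 and -1 for x = 0.75; at exact powers of two the truncation can be sensitive to rounding in the last bit.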
participants (2)
- franktaw@netscape.net
- Henry Baker