Re: [math-fun] Beyond Floats: Next Gen Computer Arithmetic
If I understood Gustafson's talk correctly, IEEE doubles aren't good enough. He showed some examples where IEEE quads (128 bits) weren't good enough.

Is there any way you could convert to Maxima bigfloats, do the operation, and convert back? Bigfloats can be set to any precision you want.

Alternatively, could you convert to rationals, do the operation, and convert back? (This should be relatively easy -- if slow -- in most Common Lisp implementations.)

I'm not interested in speed, but in precision of the *representation*, as divorced from the precision of the operation itself.

I'll talk about this again in another post about "asinh numbers", which Yonemoto (Gustafson's programmer) dismissed out of hand to me as requiring table lookup.

At 11:03 PM 3/25/2017, you wrote:
For every operation, my implementation converts posits to IEEE doubles, performs the operation, and converts the result back to a posit by figuring out the number of fraction bits based on the exponent, and rounding the scaled fraction to an integer.
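A minimal Python sketch of the conversion step described in that quoted paragraph, assuming the classic posit<32,2> layout (sign, regime, 2 exponent bits, fraction) from Gustafson's 2017 formulation; it does not treat the extreme regimes near minpos/maxpos exactly, and it uses Python's default tie-breaking rather than the posit rounding rule:

    import math

    NBITS, ES = 32, 2   # posit<32,2>, the size discussed in this thread

    def double_to_posit32(x: float) -> int:
        """Round an IEEE double to the nearest posit<32,2> bit pattern (simplified)."""
        if x == 0.0:
            return 0
        if math.isnan(x) or math.isinf(x):
            return 1 << (NBITS - 1)              # NaR / "unsigned infinity"
        sign = x < 0.0
        x = abs(x)

        m, exp2 = math.frexp(x)                  # x = m * 2**exp2 with 0.5 <= m < 1
        E = exp2 - 1                             # so x = 2**E * (1 + f), f = 2*m - 1 in [0, 1)
        k, e = E >> ES, E & ((1 << ES) - 1)      # regime value and exponent field

        # Regime bit string: k >= 0 -> (k+1) ones then a zero; k < 0 -> (-k) zeros then a one.
        if k >= 0:
            regime_bits, regime_len = (1 << (k + 2)) - 2, k + 2
        else:
            regime_bits, regime_len = 1, -k + 1

        # "Figure out the number of fraction bits based on the exponent."
        nf = NBITS - 1 - regime_len - ES
        assert nf >= 0, "sketch skips the saturation cases near minpos/maxpos"

        # "Round the scaled fraction to an integer"; '+' lets a carry ripple into the exponent/regime.
        scaled = round((2.0 * m - 1.0) * (1 << nf))
        bits = (regime_bits << (ES + nf)) + (e << nf) + scaled

        # Negative values are the two's complement of the positive encoding.
        return ((1 << NBITS) - bits) & 0xFFFFFFFF if sign else bits

For example, double_to_posit32(1.0) gives 0x40000000 and double_to_posit32(-1.0) gives 0xC0000000.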
What about these asinh numbers, is there a reference?
Sorry, didn't see your earlier post.

At any rate, a limited-precision real encoding must support efficient addition and multiplication, at least.

Suppose we propose a fixed-width binary representation S = {k in Z : 0 <= k < 2^w} of the real numbers. We must also propose a mapping m such that real number a is represented as m(a) in S. Our encoding must support addition and multiplication:

(a + b)' = m(m^-1(a) + m^-1(b))
(ab)' = m(m^-1(a) m^-1(b))

If the right sides can be simplified, or optimized using binary operations, this may be a useful representation. For asinh numbers, this seems not to be the case; we have

(a + b)' = asinh(sinh(a') + sinh(b'))
(ab)' = asinh(sinh(a') sinh(b'))

Unless I am missing some magic simplification or algorithm, asinh addition and multiplication require at least an evaluation of a power series, which seems much more costly than a floating-point addition or multiplication.
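For concreteness, a naive Python sketch of exactly that decode/operate/re-encode scheme, with m(x) = round(alpha*asinh(beta*x)) in the scaling notation of the follow-up below; the alpha and beta values here are arbitrary placeholders, and this is the costly path being objected to, not an optimized native algorithm:

    import math

    # Hypothetical scaling constants: m(x) = round(ALPHA * asinh(BETA * x)).
    ALPHA, BETA = 2.0 ** 20, 2.0 ** -20

    def m(x: float) -> int:                 # encode: real -> integer code
        return round(ALPHA * math.asinh(BETA * x))

    def m_inv(k: int) -> float:             # decode: integer code -> real
        return math.sinh(k / ALPHA) / BETA

    def add(a: int, b: int) -> int:         # (a + b)' = m(m^-1(a) + m^-1(b))
        return m(m_inv(a) + m_inv(b))

    def mul(a: int, b: int) -> int:         # (ab)'    = m(m^-1(a) * m^-1(b))
        return m(m_inv(a) * m_inv(b))

Each add or mul here pays for two sinh evaluations and one asinh, which is precisely the cost in question.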
Re "a limited-precision real encoding must support efficient addition and multiplication, at least": Recall that asinh#(x)=round(alpha*asinh(beta*x)). In my previous example, if alpha=1/asinh(beta), then the asinh # for 1 is also 1. If beta ~ 2^(-46), then for every integer |i|<2^32, asinh#(i)=i, so the "center" of the mapping asinh#() is the linear identity function. Thus, if |i|,|j| are both somewhat less than 2^32, then +,-,* are all the standard integer operations. These cases are trivially recognized: the high order bits are either all 0's or all 1's (in 2's complement representation). If |i|,|j|<2^64 are both somewhat larger than 2^32, then asinh#(i*j)=asinh#(i)+asinh#(j), because asinh#() in that region is logarithmic. "Somewhat less" and "somewhat larger" need some work to pin down, but this can be done with straightforward algebra. The other cases are much harder, but could be broken down into perhaps 16-32 cases in total, depending upon the number of terms required in the approximating series. Difficult to understand, but if you can generate the HW mechanically, you can check every case mechanically. As I mentioned before, these "asinh" numbers with this particular alpha & beta wouldn't be appropriate to replace IEEE floats; you'd want to something more like asinh#'(x) = 2^(-32)*asinh#(x*2^32). I.e., we want the *linear portion* of asinh#(x) to be a very small section quite close to zero, and roughly of the same size as IEEE denormalized numbers. At 01:16 PM 3/26/2017, David Wilson wrote:
At 01:16 PM 3/26/2017, David Wilson wrote:
Unless I am missing some magic simplification or algorithm, asinh addition and multiplication require at least an evaluation of a power series, which seems much more costly than a floating-point addition or multiplication.
Chip HW cost & chip HW speed are more dependent upon bus sizes and speeds than functional units. You can have immensely complicated functional units, but they don't require much more power or area than relatively simple functional units *of the same word size*.

Thus, even complex functional units that have to do case analysis on 50-100 different cases can still be quite doable, so long as you have tools to mechanically design & check all of these cases.

I admit that the standard unary & binary ops for asinh arithmetic might be substantially more complicated than for IEEE arithmetic, but once you've designed it, and if you can deeply pipeline it (e.g., for GPUs), you can get phenomenal throughput.
At 12:18 PM 3/26/2017, David Wilson wrote:
What about these asinh numbers, is there a reference?
Here are some 20-year-old (1997) postings to math-fun:

From hbaker@netcom.com Mon Nov 3 10:34:08 1997
Message-Id: <199711031833.KAA06500@netcomsv.netcom.com>
Date: Mon, 03 Nov 1997 11:32:10 -0800
From: hbaker@netcom.com (Henry Baker)
Subject: Q: Companding with ulaw, Alaw, etc. (range compression)

(A copy of this message has also been posted to the following newsgroups: comp.compression.research, comp.compression, comp.dsp, sci.engr.television.advanced, sci.engr.television.broadcast, sci.math)

In audio and video compression, various efforts are made to reduce the number of bits required to represent sounds and images, based on the fact that ears and eyes are essentially logarithmic in response. Thus, telephone 'companding' reduces 16-bit (or so) audio to 8-bit quasi-logarithmic coding, and digital video coding does the same sort of thing.

In A-law sound companding (and also in digital TV companding, I believe), there is a point at which the companding becomes linear -- i.e., it is no longer logarithmic near zero. I seem to recall that both of these systems switch abruptly from logarithmic to linear, and therefore may not have a continuous derivative, though they are continuous themselves.

What I am wondering is why companding functions don't use a perfectly good _existing_ mathematical function which is well-behaved and smooth. I suggest the use of the inverse hyperbolic sine function -- asinh(x). This function is probably not well known to many EE people, although it is merely a 90-degree rotation in the complex plane of the arcsin function that they probably know somewhat better.

For real x, we have the expansion asinh(x) = sgn(x) ln[|x| + sqrt(1 + x^2)], so for x >> 1, we have asinh(x) ~~ sgn(x) ln[2|x|], and for x << 1, we have asinh(x) ~~ sgn(x) ln[1 + |x|] ~~ x. So asinh(x) _is_ logarithmic for arguments of large absolute value, linear for arguments of small absolute value, and smooth in between, so this function is ideal for use as a companding function.

Consider an asinh(x) approximation to the A-law companding function. It should be a function f(x) = asinh(Bx)/A, where A, B are suitably chosen constants. Since f(1) = 1, we must have A = asinh(B). We can now choose B such that f(x) matches the A-law function at some point. If we choose B ~~ 119, then f(x) will approximate the A-law function closely at the point where the A-law function switches from logarithmic to linear. On the other hand, if we choose B ~~ 87.6, then the 'breakpoint' where f(x) = asinh(Bx)/A switches from linear to logarithmic behavior will be near the right place. (Note that f(x) = asinh(Bx)/A switches from linear to logarithmic at about Bx = 1, or x = 1/B.)

So a good voice companding function would be f(x) = asinh(Bx)/asinh(B), for B in the range 80-120. For many systems, asinh(x) is computed by table lookup or table lookup plus interpolation, so the computation cost should be no more than the companding functions used today. The additional mathematical properties of the asinh function could be used for more closed-form analysis, as well as eliminating the potential problems caused by any abrupt transition from the linear to the logarithmic function.

Has any standards body ever considered using such an elegant method for companding? Or do standards always have to have these weird breakpoints with magic constants in them?
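A quick numerical comparison of the two curves (a sketch: the continuous form of A-law with the standard A = 87.6, against f(x) = asinh(Bx)/asinh(B) with B = 87.6; the sample points are arbitrary):

    import math

    A = 87.6                                     # standard A-law parameter

    def alaw(x: float) -> float:
        # Continuous form of the ITU-T A-law companding curve, for |x| <= 1.
        ax = abs(x)
        y = A * ax / (1 + math.log(A)) if ax < 1 / A else (1 + math.log(A * ax)) / (1 + math.log(A))
        return math.copysign(y, x)

    def asinh_compand(x: float, B: float = 87.6) -> float:
        # The smooth alternative proposed above: f(x) = asinh(B*x) / asinh(B).
        return math.asinh(B * x) / math.asinh(B)

    # Compare the two near and above the A-law linear/logarithmic breakpoint x = 1/A.
    for x in (0.001, 1 / A, 0.05, 0.5, 1.0):
        print(f"x = {x:7.5f}   A-law = {alaw(x):.4f}   asinh = {asinh_compand(x):.4f}")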
-------------------------------------------

From rcs@cheltenham.cs.arizona.edu Sun Nov 9 06:46:49 1997
Message-Id: <v01540b00b08b85640c92@[10.0.2.15]>
Date: Sun, 9 Nov 1997 07:43:38 -0800
To: math-fun@cs.arizona.edu
From: hbaker@netcom.com (Henry G. Baker)
Subject: 'Hyperbolic' floating point numbers?

The usual floating point numbers can be characterized as a crude approximation to a logarithmic/exponential notation. However, there seems to be a yearning for a representation which is _linear_ for numbers near zero, and _exponential_ for numbers far away from zero. For example, 'gradual underflow' in IEEE floats and the A-law 'companding' function used in representing telephone speech signals both provide for a linear portion near zero.

My question is this: what is wrong with the hyperbolic sine function? Or more precisely, why not encode the number x with round(asinh(Bx)/A), where B, A are scaling constants, and 'round()' rounds to the nearest integer? For numbers |Bx| << 1, asinh(Bx)/A is linear, while for numbers |Bx| >> 1, asinh(Bx)/A is logarithmic. So we choose B, A in such a way to position the transition between the linear and the logarithmic regions -- i.e., where 'gradual underflow' takes place.

One nice thing about using hyperbolic numbers is that the _sign_ of the number doesn't have to be specially treated, since sgn(asinh(x)) = sgn(x). Another nice thing about using hyperbolic sines is that we have the usual hyperbolic trig addition/doubling formulas to work with.

I haven't yet gone into any deep investigation of clever methods for doing arithmetic calculations on hyperbolic numbers, because I wanted to find out if someone has already suggested such a thing before. I'd appreciate any references to any work along these lines. Thanks in advance.

----------------------------

Astronomers use 'asinh magnitudes':
https://arxiv.org/abs/astro-ph/9903081
https://arxiv.org/pdf/astro-ph/9903081.pdf
A Modified Magnitude System that Produces Well-Behaved Magnitudes, Colors, and Errors Even for Low Signal-to-Noise Ratio Measurements
Robert Lupton (1,2), Jim Gunn (2), Alex Szalay (3) ((1) for the SDSS consortium, (2) Princeton University Observatory, (3) The Johns Hopkins University)
(Submitted on 4 Mar 1999)
We describe a modification of the usual definition of astronomical magnitudes, replacing the usual logarithm with an inverse hyperbolic sine function; we call these modified magnitudes 'asinh magnitudes'. For objects detected at signal-to-noise ratios of greater than about five, our modified definition is essentially identical to the traditional one; for fainter objects (including those with a formally negative flux) our definition is well behaved, tending to a definite value with finite errors as the flux goes to zero.
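For illustration, a small sketch comparing the paper's asinh magnitude, m = -(2.5/ln 10)(asinh(x/(2b)) + ln b), with the classical -2.5 log10(x); the softening parameter b below is an arbitrary choice (in the paper it is tied to the flux measurement noise):

    import math

    def classical_mag(x: float) -> float:
        # Traditional astronomical magnitude; diverges as the flux ratio x -> 0.
        return -2.5 * math.log10(x)

    def asinh_mag(x: float, b: float = 1e-10) -> float:
        # Lupton, Gunn & Szalay (1999): m = -(2.5/ln 10) * (asinh(x / (2b)) + ln b).
        # b is a softening parameter; the default here is just for illustration.
        return -2.5 / math.log(10) * (math.asinh(x / (2 * b)) + math.log(b))

    for x in (1.0, 1e-6, 1e-10, 0.0, -1e-10):    # x = flux / zero-point flux
        classical = classical_mag(x) if x > 0 else float('nan')
        print(f"x = {x: .1e}   classical = {classical:8.3f}   asinh = {asinh_mag(x):8.3f}")

For bright objects the two agree (e.g. x = 1e-6 gives 15.0 either way); at zero or slightly negative flux the asinh magnitude stays finite, as the abstract says.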
On Sun, Mar 26, 2017 at 7:08 AM, Henry Baker <hbaker1@pipeline.com> wrote:
If I understood Gustafson's talk correctly, IEEE doubles aren't good enough. He showed some examples where IEEE quads (128 bits) weren't good enough.
IEEE doubles are good enough to emulate 32-bit posits.
Is there any way you could convert to Maxima bigfloats, do the operation, and convert back?
Bigfloats can be set to any precision you want.
I could have used any of the available multi-precision libraries, but for the 32-bit posit experiment that wouldn't be necessary.
The question is, how many of the existing algorithms depend on the fact that the relative precision is stable across almost all of the representable range?

Leo
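For reference, a sketch of how the precision of a posit<32,2> tapers away from magnitudes near 1, compared with the 52 fraction bits an IEEE double keeps across essentially its whole range (the sampled exponents are arbitrary):

    def posit32_fraction_bits(E: int) -> int:
        # Fraction bits available in a posit<32,2> for a value with binary exponent E
        # (sketch; the extreme regimes where the count hits zero are not treated carefully).
        k = E >> 2                               # regime value; es = 2
        regime_len = k + 2 if k >= 0 else -k + 1
        return max(0, 32 - 1 - regime_len - 2)

    for E in (-120, -60, -16, -4, 0, 4, 16, 60, 120):
        print(f"scale 2^{E:4d}: {posit32_fraction_bits(E):2d} fraction bits (an IEEE double keeps 52)")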
participants (3)
- David Wilson
- Henry Baker
- Leo Broukhis