At 12:18 PM 3/26/2017, David Wilson wrote:
What about these asinh numbers? Is there a reference?
Here are some 20-year-old (1997) postings to math-fun:

From hbaker@netcom.com Mon Nov 3 10:34:08 1997
Message-Id: <199711031833.KAA06500@netcomsv.netcom.com>
Date: Mon, 03 Nov 1997 11:32:10 -0800
From: hbaker@netcom.com (Henry Baker)
Subject: Q: Companding with ulaw, Alaw, etc. (range compression)

(A copy of this message has also been posted to the following newsgroups: comp.compression.research, comp.compression, comp.dsp, sci.engr.television.advanced, sci.engr.television.broadcast, sci.math)

In audio and video compression, various efforts are made to reduce the number of bits required to represent sounds and images, based on the fact that ears and eyes are essentially logarithmic in response. Thus, telephone 'companding' reduces 16-bit (or so) audio to 8-bit quasi-logarithmic coding, and digital video coding does the same sort of thing.

In A-law sound companding (and also in digital TV companding, I believe), there is a point at which the companding becomes linear -- i.e., it is no longer logarithmic near zero. I seem to recall that both of these systems switch abruptly from logarithmic to linear, and therefore may not have a continuous derivative, though the functions themselves are continuous.

What I am wondering is: why don't companding functions use a perfectly good _existing_ mathematical function that is well-behaved and smooth? I suggest the inverse hyperbolic sine function, asinh(x). This function is probably not well known to many EE people, although it is merely a 90-degree rotation in the complex plane of the arcsin function that they probably know somewhat better.

For real x, we have asinh(x) = sgn(x) ln[|x| + sqrt(1 + x^2)], so for |x| >> 1 we have asinh(x) ~~ sgn(x) ln[2|x|], and for |x| << 1 we have asinh(x) ~~ sgn(x) ln[1 + |x|] ~~ x. So asinh(x) _is_ logarithmic for arguments of large absolute value, linear for arguments of small absolute value, and smooth in between, so this function is ideal for use as a companding function.
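As an editorial illustration of the claim above, this sketch (plain Python, standard library only) compares the proposed curve asinh(Bx)/asinh(B) against the standard piecewise A-law curve. The A-law formula and the constant 87.6 are the usual telephony values, supplied here for comparison; they are not part of the posting itself.

```python
import math

def asinh_compand(x, B=87.6):
    """Smooth companding curve from the posting: f(x) = asinh(B*x)/asinh(B).
    Linear for |B*x| << 1, logarithmic for |B*x| >> 1, odd in x, and f(1) = 1."""
    return math.asinh(B * x) / math.asinh(B)

def a_law(x, A=87.6):
    """Standard A-law curve (piecewise linear/logarithmic), for comparison."""
    ax = abs(x)
    if ax < 1.0 / A:
        y = A * ax / (1.0 + math.log(A))
    else:
        y = (1.0 + math.log(A * ax)) / (1.0 + math.log(A))
    return math.copysign(y, x)

# The two curves track each other to within a few percent over the full
# range, but the asinh curve is smooth everywhere, with no breakpoint at
# x = 1/A where the derivative jumps.
for x in (0.001, 0.01, 0.1, 0.5, 1.0):
    print(f"x = {x:6g}   asinh-law = {asinh_compand(x):.4f}   A-law = {a_law(x):.4f}")
```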
Consider an asinh(x) approximation to the A-law companding function. It should be a function f(x) = asinh(Bx)/A, where A, B are suitably chosen constants. Since f(1) = 1, we must have A = asinh(B). We can now choose B such that f(x) matches the A-law function at some point. If we choose B ~~ 119, then f(x) will approximate the A-law function closely at the point where the A-law function switches from logarithmic to linear. On the other hand, if we choose B ~~ 87.6, then the 'breakpoint' where f(x) = asinh(Bx)/A switches from linear to logarithmic behavior will be near the right place. (Note that f(x) = asinh(Bx)/A switches from linear to logarithmic at about Bx = 1, or x = 1/B.) So a good voice companding function would be f(x) = asinh(Bx)/asinh(B), for B in the range 80-120.

For many systems, asinh(x) is computed by table lookup, or table lookup plus interpolation, so the computation cost should be no more than that of the companding functions used today. The additional mathematical properties of the asinh function could be used for more closed-form analysis, as well as eliminating the potential problems caused by any abrupt transition from the linear to the logarithmic function.

Has any standards body ever considered using such an elegant method for companding? Or do standards always have to have these weird breakpoints with magic constants in them?

-------------------------------------------

From rcs@cheltenham.cs.arizona.edu Sun Nov 9 06:46:49 1997
Message-Id: <v01540b00b08b85640c92@[10.0.2.15]>
Date: Sun, 9 Nov 1997 07:43:38 -0800
To: math-fun@cs.arizona.edu
From: hbaker@netcom.com (Henry G. Baker)
Subject: 'Hyperbolic' floating point numbers?

The usual floating point numbers can be characterized as a crude approximation to a logarithmic/exponential notation. However, there seems to be a yearning for a representation which is _linear_ for numbers near zero, and _exponential_ for numbers far away from zero.
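One way to make such a linear-near-zero, logarithmic-far-away representation concrete is to quantize asinh of the value and invert with sinh. This is an editorial sketch; the constants B = 87.6 and the 8-bit-style code range are illustrative choices, not specified in the postings.

```python
import math

def encode(x, B=87.6, levels=127):
    """Map x in [-1, 1] to an integer code in [-levels, levels]:
    round(levels * asinh(B*x) / asinh(B)).  Because asinh is odd,
    the sign of x needs no special treatment."""
    return round(levels * math.asinh(B * x) / math.asinh(B))

def decode(n, B=87.6, levels=127):
    """Invert the encoding with sinh, the inverse of asinh."""
    return math.sinh(math.asinh(B) * n / levels) / B

# Quantization error is roughly constant in relative terms in the
# logarithmic region (|B*x| >> 1) and roughly constant in absolute
# terms in the linear region (|B*x| << 1).
for x in (0.0001, 0.001, 0.01, 0.1, 0.9):
    n = encode(x)
    print(f"x = {x:8g}   code = {n:4d}   decoded = {decode(n):.6f}")
```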
For example, 'gradual underflow' in IEEE floats and the A-law 'companding' function used in representing telephone speech signals both provide for a linear portion near zero.

My question is this: what is wrong with the hyperbolic sine function? Or more precisely, why not encode the number x as round(asinh(Bx)/A), where B, A are scaling constants and round() rounds to the nearest integer? For numbers |Bx| << 1, asinh(Bx)/A is linear, while for numbers |Bx| >> 1, asinh(Bx)/A is logarithmic. So we choose B, A so as to position the transition between the linear and the logarithmic regions -- i.e., where 'gradual underflow' takes place.

One nice thing about using hyperbolic numbers is that the _sign_ of the number doesn't have to be treated specially, since sgn(asinh(x)) = sgn(x). Another nice thing about using hyperbolic sines is that we have the usual hyperbolic trig addition/doubling formulas to work with.

I haven't yet gone into any deep investigation of clever methods for doing arithmetic calculations on hyperbolic numbers, because I wanted to find out whether someone has already suggested such a thing. I'd appreciate references to any work along these lines. Thanks in advance.

----------------------------

Astronomers use 'asinh magnitudes':
https://arxiv.org/abs/astro-ph/9903081
https://arxiv.org/pdf/astro-ph/9903081.pdf
A Modified Magnitude System that Produces Well-Behaved Magnitudes, Colors, and Errors Even for Low Signal-to-Noise Ratio Measurements
Robert Lupton (1,2), Jim Gunn (2), Alex Szalay (3) ((1) for the SDSS consortium, (2) Princeton University Observatory, (3) The Johns Hopkins University)
(Submitted on 4 Mar 1999)
We describe a modification of the usual definition of astronomical magnitudes, replacing the usual logarithm with an inverse hyperbolic sine function; we call these modified magnitudes 'asinh magnitudes'. For objects detected at signal-to-noise ratios of greater than about five, our modified definition is essentially identical to the traditional one; for fainter objects (including those with a formally negative flux) our definition is well behaved, tending to a definite value with finite errors as the flux goes to zero.
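For reference, the modified magnitude in this paper has (up to notation) the form m = -(2.5/ln 10)[asinh(x/2b) + ln b], where x is the flux in units of a zero-magnitude source and b is a softening parameter tied to the noise level. The sketch below checks the two key properties claimed in the abstract; the value of b used here is purely illustrative.

```python
import math

def asinh_mag(x, b=1e-10):
    """Asinh magnitude in the form given by Lupton, Gunn & Szalay (1999):
    m = -(2.5/ln 10) * (asinh(x / (2*b)) + ln b).
    x = f/f0 is the flux relative to a zero-magnitude source; the
    softening parameter b here is an illustrative value, not the paper's."""
    return -2.5 / math.log(10) * (math.asinh(x / (2 * b)) + math.log(b))

def classical_mag(x):
    """Traditional magnitude; undefined for x <= 0."""
    return -2.5 * math.log10(x)

# High signal-to-noise (x >> b): the two definitions agree.
print(classical_mag(1e-6), asinh_mag(1e-6))
# Zero or even formally negative flux: the asinh magnitude stays finite.
print(asinh_mag(0.0), asinh_mag(-1e-11))
```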