[math-fun] Calculating asinh(x)
Of perhaps only historical interest are the digit-at-a-time CORDIC methods based on tables and trigonometric addition formulæ: https://en.wikipedia.org/wiki/CORDIC (which claims bombers as recent as the B-58 used analog navigation computers). Here's my take on Sinh and ArcSinh.

For illustration, simulate a fixed table of floating point "rotation" matrices

  {{Cosh[2^n], -I Sinh[2^n]}, {I Sinh[2^n], Cosh[2^n]}}

via the function

  cish[n_] := cish[n] = {{Cosh@#, -I Sinh@#}, {I Sinh@#, Cosh@#}} &[2^n]

here left unfloated, again for illustration. In practice, we'd only store the two real floats Cosh[2.^n], Sinh[2.^n], or possibly Tanh[2.^(n-1)], Sinh[2.^n]. Then to compute Sinh[r]:

  In[53]:= CISh[r_Real] := Block[{L, s}, {L, s} = RealDigits[r, 2];
             MapIndexed[MatrixPower[cish[s - #2[[1]]], #1] &, L /. {a__, 0 ..} :> {a}]]

I.e., use the bits of r's mantissa as exponents of successive "rotation" matrices (which incidentally commute). s is the exponent. The /. {a__, 0 ..} :> {a} just drops trailing 0s from the mantissa bits. MapIndexed automatically assigns {1}, {2}, ... to #2 while assigning consecutive mantissa bits to #1. E.g., to compute Sinh[69./128]:

  In[56]:= CISh[69./128]

  Out[56]= {{{Cosh[1/2], -I Sinh[1/2]}, {I Sinh[1/2], Cosh[1/2]}},
            {{1, 0}, {0, 1}}, {{1, 0}, {0, 1}}, {{1, 0}, {0, 1}},
            {{Cosh[1/32], -I Sinh[1/32]}, {I Sinh[1/32], Cosh[1/32]}},
            {{1, 0}, {0, 1}},
            {{Cosh[1/128], -I Sinh[1/128]}, {I Sinh[1/128], Cosh[1/128]}}}

(Recall that 69 = octal(105) = binary(1000101).) Again, in practice, these matrices would be simply pairs of "reals". Multiplying them,

  In[58]:= Dot @@ %% // FullSimplify

  Out[58]= {{Cosh[69/128], -I Sinh[69/128]}, {I Sinh[69/128], Cosh[69/128]}}

gets our Sinh, with the Cosh to boot. Using Tanh[t/2] instead of Cosh[t] is attractive because, as with real rotations, we can replace each matrix multiplication with three lossless skews:

  {{1, -I Tanh[t/2]}, {0, 1}} . {{1, 0}, {I Sinh[t], 1}} . {{1, -I Tanh[t/2]}, {0, 1}}

For ArcSinh, e.g.
to recover 69/128 from Sinh[69/128] (a float, remember), we negate the Sinh:

  %58 /. Sinh@x_ :> -Sinh@x

  Out[76]//MatrixForm= {{Cosh[69/128], I Sinh[69/128]}, {-I Sinh[69/128], Cosh[69/128]}}

(we also need the Cosh, which will cost us a √), and then scan our original cish table large-to-small, only multiplying by the matrices which don't overdeplete:

  In[61]:= FullSimplify[%% . cish[0]]

  Out[61]= {{Cosh[59/128], -I Sinh[59/128]}, {I Sinh[59/128], Cosh[59/128]}}

(Oops, 0 overdepleted.)

  In[62]:= FullSimplify[%%% . cish[-1]]

  Out[62]= {{Cosh[5/128], I Sinh[5/128]}, {-I Sinh[5/128], Cosh[5/128]}}

  In[63]:= FullSimplify[% . cish[-2]]

  Out[63]= {{Cosh[27/128], -I Sinh[27/128]}, {I Sinh[27/128], Cosh[27/128]}}

Oops, -2 overdepleted.

  In[66]:= FullSimplify[%%%% . cish[-5]]

  Out[66]= {{Cosh[1/128], I Sinh[1/128]}, {-I Sinh[1/128], Cosh[1/128]}}

  In[68]:= FullSimplify[%% . cish[-7]]

  Out[68]= {{1, 0}, {0, 1}}

The correct depleters were -1, -5, and -7:

  In[69]:= Total[2^{-1, -5, -7}]

  Out[69]= 69/128

which is the ArcSinh of Sinh[69/128].

--rwg

Date: 2017-04-03 09:46
From: Mike Speciner <ms@alum.mit.edu>
To: math-fun <math-fun@mailman.xmission.com>
Reply-To: math-fun <math-fun@mailman.xmission.com>

For what it's worth, I looked up how I compute asinh in my javascript library, based on the observations...

  asinh(x) = sgn(x) ln(|x| + sqrt(x^2+1)), which works best when |x| is "large"

  asinh(x) = atanh(x/sqrt(1+x^2)), which works best when |x| is "small", using

  atanh(x) = ln((1+x)/(1-x))/2 = (ln(1+x) - ln(1-x))/2

So, trying to get good precision...

  Math.atanh = function(x) { return .5*(Math.log1p(x)-Math.log1p(-x)); }

  Math.asinh = function(x) {
    var s = Math.sqrt(1+x*x);
    return s < Math.SQRT2 ? Math.atanh(x/s) : Math.sgn(x)*Math.log(Math.abs(x)+s);
  }

On 03-Apr-17 12:32, Henry Baker wrote:
y = asinh(x) = log(x + sqrt(1+x^2))
the "obvious" formula.
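For concreteness, here is the obvious formula as a quick JavaScript sketch (illustrative only, not from the thread; the built-in Math.asinh serves merely as a reference value):

```javascript
// The "obvious" formula: asinh(x) = log(x + sqrt(1 + x^2)).
function asinhObvious(x) {
  return Math.log(x + Math.sqrt(1 + x * x));
}

console.log(asinhObvious(1)); // ~0.8813735870195430 = log(1 + sqrt(2))
```

For large negative x, the sum x + sqrt(1+x^2) cancels catastrophically, which is one reason to look beyond the obvious formula.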
We have several choices:
* Calculate using the obvious log formula
* Calculate using Newton's method on sinh(y)
* More complicated methods
One more complicated method is to factor the above expression as:
y = asinh(x) = f(g(x)), where f(x) = log(x) and g(x) = x+sqrt(1+x^2)
Note that if y=g(x), then x=(y-1/y)/2.
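A quick numeric spot-check of this identity (a hypothetical snippet, not from the thread):

```javascript
// If y = g(x) = x + sqrt(1 + x^2), then 1/y = sqrt(1 + x^2) - x,
// so (y - 1/y)/2 recovers x.
const x = 0.75;
const y = x + Math.sqrt(1 + x * x);
console.log((y - 1 / y) / 2); // ~0.75
```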
So we could recover y = g(x) from x by applying Newton's method to this relation (avoiding the square root), and then apply the log function.
The iteration to solve x = (y-1/y)/2 for y is
y -> 2*(x+(y-x)/(1+y^2))
with an initial guess of 2*x.
This iteration has quadratic convergence, and 6 iterations provide fpprec=128 precision.
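A sketch of this inversion in JavaScript (my notation: the iterate y approximates g(x) for the given x, so the step reads y -> 2*(x + (y - x)/(1 + y^2)), converging to x + sqrt(1 + x^2); the fixed iteration count is a guess adequate for double precision, and the sketch assumes x > 0, using oddness otherwise):

```javascript
// Newton's method on x = (y - 1/y)/2 recovers y = g(x) = x + sqrt(1 + x^2)
// without computing a square root; then asinh(x) = log(y).
// Sketch assumes x > 0 (use asinh(-x) = -asinh(x) for x < 0).
function asinhNewtonG(x) {
  let y = 2 * x; // initial guess; g(x) ~ 2x for large x
  for (let i = 0; i < 60; i++) {
    // Newton step for f(y) = (y - 1/y)/2 - x, algebraically simplified:
    y = 2 * (x + (y - x) / (1 + y * y));
  }
  return Math.log(y);
}

console.log(asinhNewtonG(1)); // ~0.8813735870195430
```

(For small x the early steps roughly double y until it nears the fixed point, after which convergence is quadratic; hence the generous iteration count in this sketch.)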
It remains to be seen if this is faster and/or more precise than the obvious formula.
The other possibility is to get a very good approximation, and finish it off with a very small number (1, 2, 3?) of Newton iterations to invert y=sinh(x).
The iteration is x -> x-tanh(x)+y/cosh(x), with quadratic convergence:
x0+eps -> x0+eps^2*tanh(x0)/2
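A sketch of that finisher in JavaScript (illustrative: a deliberately crude starting guess stands in for the "very good approximation", with four Newton steps rather than 1-3):

```javascript
// Newton's method to invert y = sinh(x):
// x -> x - (sinh(x) - y)/cosh(x) = x - tanh(x) + y/cosh(x).
// An initial error eps shrinks to ~eps^2 * tanh(x0)/2 per step.
function asinhNewtonSinh(y) {
  // Crude starting guess (hypothetical; a real implementation would
  // start from a much better low-precision approximation).
  let x = Math.sign(y) * Math.log(2 * Math.abs(y) + 1);
  for (let i = 0; i < 4; i++) {
    x = x - Math.tanh(x) + y / Math.cosh(x);
  }
  return x;
}

console.log(asinhNewtonSinh(3)); // ~1.8184464592 = asinh(3)
```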
Bill Gosper