On Tue, May 8, 2012 at 6:26 PM, Bill Gosper <billgosper@gmail.com> wrote:
On Sun, May 6, 2012 at 6:55 PM, Bill Gosper <billgosper@gmail.com> wrote:
The Minsky recurrence handily generates long sequences of a*sin(n*ω+φ). [...]
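[The definitions elided above can be plausibly reconstructed from the integer versions Julian posts below, simply by dropping the Rounds; take these as a sketch, not rwg's verbatim code. The cheb state is {Sin[n ω + ϕ], Sin[(n-1) ω + ϕ], 2 Cos[ω]}; the minsky state is {x, y, δ, ∂} with δ*∂ = 4 Sin[ω/2]^2; "m1nsk" is the ∂ = 1 special case with the ∂ dropped from the state:]

chebsin[ω_, ϕ_: 0, a_: 1] := {a*Sin[ϕ], a*Sin[ϕ - ω], 2*Cos[ω]}
stepcheb[{x_, y_, δ_}] := {δ*x - y, x, δ}
minsksin[ω_, ϕ_: 0, ∂_: 2 Sin[ω/2], a_: 1] := {a*Sin[ϕ], -(a/2) ∂ Cos[ϕ + ω/2] Csc[ω/2], (4 Sin[ω/2]^2)/∂, ∂}
stepminsk[{x_, y_, δ_, ∂_}] := {#, y + ∂*#, δ, ∂} &[x - δ*y]
stepm1nsk[{x_, y_, δ_}] := {#, y + #, δ} &[x - δ*y]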
So let's run cheb vs minsky a few million steps and see who's better. Using ω := π*GoldenRatio (which will figure in a later email):
[linearly growing misbehavior for both]
The explanation: These recurrences don't compute Sin[n*π*GoldenRatio], but rather Sin[n*N[π*GoldenRatio]], which is off by
In[161]:= SetPrecision[N[π*GoldenRatio], 22] - π*GoldenRatio
Out[161]= -3.34860*10^-16
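[Aside, not in the original message: that Δω error accumulates linearly in the phase, n*Δω, so at n ~ 10^6 one expects value errors of order]

N[999999*(SetPrecision[N[π*GoldenRatio], 22] - π*GoldenRatio)]
(* ~ -3.3*10^-10, the same order as the ~10^-10 loss In[176] measures below *)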
For cheb, this gets intractable for ω near a multiple of 2π. E.g., for the vertices of a (near) megagon:
In[218]:= Timing[Nest[stepcheb, chebsin[N[π/999999], 0], 999999]]
Out[218]= {2.29889, {-5.83135*10^-6, -2.68975*10^-6, 2.}}
(Loss: ~10^10 lsb, an lsb being a unit in the last machine-precision place, ~10^-16.) Whereas minsky, even with ten times as many steps of a ten-times-finer angle:
In[217]:= Timing[ Nest[stepm1nsk, Drop[minsksin[N[π/9999999], 0, 1], -1], 9999999]]
Out[217]= {40.412, {-7.25373*10^-14, 3.1831*10^6, 9.86961*10^-14}}
(Loss: ~100 lsb. The 3.1831*10^6 in the middle slot is the y component's Csc[ω/2]/2 amplitude in the ∂ = 1 variant, not an error.)
We are doomed to linearly creeping error. But hey, we can tack on a dozen
extra digits and promise not to drive past a trillion. But first, let's drive just 999 to see how long it will take. (This is where it hits the fan.)
In[166]:= Nest[stepcheb, chebsin[N[π*GoldenRatio, 33]], 999]
Out[166]= {0.*10^121, 0.*10^121, 0.72474978016096023991729327494997}
WAH?
In[167]:= Nest[stepm1nsk, Drop[minsksin[N[π*GoldenRatio, 33], 0, 1], -1], 999]
Out[167]= {0.*10^434, 0.*10^434, 1.27525021983903976008270672505003}
WHA?? This is too gross:
In[168]:= Nest[stepm1nsk, Drop[minsksin[N[π*GoldenRatio, 288], 0, 1], -1], 999]
Out[168]= {0.*10^179, 0.*10^179, 1.2752502198390397600827067250500262012782689119888029070996864225197529875051020528956075414713141786836315750218641670724930286314676936047459492180699315909580232204640529914388966962377507409450195194836649322155710620883799800203515282171832547847397767897725566968950314579880515131\ 8}
"Precision Tracking" (precision trashing, interval arithmetic) *totally* destroys the computation. Whereas the actual loss during the fixed machine precision iteration was negligible. --rwg
Julian to the rescue:
On Mon, May 7, 2012 at 12:50 PM, Julian Ziegler Hunts <julianj.zh@gmail.com> wrote:
You probably know this, but in case you haven't gotten around to it yet:
And, as you suggested,
minsksinint[ω_, ϕ_: 0, ∂_: 2 Sin[ω/2], a_: 1] := {Round[a*Sin[ϕ]], Round[-(a/2) ∂ Cos[ϕ + ω/2] Csc[ω/2]], (4 Sin[ω/2]^2)/∂, ∂}
stepminskint[{x_, y_, δ_, ∂_}] := {Round[#], Round[y + ∂*#], δ, ∂} &[x - δ*y]
stepm1nskint[{x_, y_, δ_}] := {Round[#], Round[y + #], δ} &[x - δ*y]
chebsinint[ω_, ϕ_: 0, a_: 1] := {Round[a*Sin[ϕ]], Round[a*Sin[ϕ - ω]], 2*Cos[ω]}
stepchebint[{x_, y_, δ_}] := {Round[δ*x - y], x, δ}
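[A quick sanity check of the integer versions (mine, not Julian's): with a modest amplitude the iterates should track Round[a*Sin[n*ω]] to within an lsb or so:]

With[{ω = N[π/7, 30], a = 10^6},
 Table[Nest[stepchebint, chebsinint[ω, 0, a], n][[1]] - Round[a*Sin[n*ω]], {n, 0, 5}]]
(* expect small integers, 0s and ±1s *)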
rwg>You madperson. Why did you change ε to the other flavor of δ (i.e. ∂)??
j> It's not a delta. There's no agreed-upon name for ∂ <http://en.wikipedia.org/wiki/%E2%88%82>.
j>works much better [...]
I.e., choose a large integer amplitude a and use big integers (like in the "good" old days) instead of floats. First, for comparison, the float version:
In[175]:= Timing[ Nest[stepm1nsk, Drop[minsksin[N[π*GoldenRatio], 0, 1], -1], 999999]]
Out[175]= {4.08617, {0.918646, 0.748074, 1.27525}}
In[176]:= Sin[999999*π*GoldenRatio] - SetPrecision[%[[2, 1]], 33]
Out[176]= 1.14393260781121554602005*10^-10
Loss =~ 10^6 lsb (1.14*10^-10, against a machine lsb of ~10^-16).
Integers, with a:=10^69:
In[184]:= Timing[ Nest[stepm1nskint, Drop[minsksinint[N[π*GoldenRatio, 69], 0, 1, 10^69], -1], 999999]]
Out[184]= {6.74437, {918645523252150530219574122157946569212458324866848819204746126477690, 748073906145249981381508759485701971803310022978410476968586624220216, 1.27525021983903976008270672505002620127826891198880290709968642251975}}
In[185]:= N[Round[Sin[999999*π*GoldenRatio]*10^69 - %[[2, 1]]]]
Out[185]= -214.
I.e., loss = 214 lsb.
Way more accurate. Less than twice as slow. But the error still scales (almost) linearly:
In[186]:= Timing[Nest[stepm1nskint, Drop[minsksinint[N[π*GoldenRatio, 69], 0, 1, 10^69], -1],9999999]]
Out[186]= {68.2078, {748998425544230349596759154767033168207054569581106778235375691401471, 858748004778120235089047360826898766127851223417284925293705386758193, 1.27525021983903976008270672505002620127826891198880290709968642251975}}
In[187]:= N[Round[Sin[9999999*π*GoldenRatio]*10^69 - %[[2, 1]]]]
Out[187]= -1220.
This is egregious rubbish. Julian actually said

j> works much better: the error is sqrt or less, and (with a=10^60, and 100
j> digits of precision on the multipliers) it takes less than twice as long
j> as normal with machine precision to compute.

which I censored in disbelief, too stupid to see how reducing the amplitude accuracy could actually suppress the linear phase drift, and mistaking a random walk for linear growth. He later explained

j> Yes, the loss is still linear, but the linear component is the result of
j> the angle being approximate. Give the angle 40 extra digits (without
j> changing the amplitude) and this goes away until past the point where
j> it's impossible to compute, leaving us with sqrt-ish error.
j> Julian

It turns out you need nothing like 40 digits of cushion. Changing 100 to 69 in

In[86]:= Table[(Print[#]; #) &[Round[10^60*Sin[π*GoldenRatio*10^n]] - Nest[stepchebint, chebsinint[N[π*GoldenRatio, 100], 0, 10^60], 10^n][[1]]], {n, 9}]

Out[86]= {1, -1, -9, -2, 98, 38, 1248, 410, 11484}

makes ZERO difference! But with integers, we avoid the deadly precision trashing; the dozen-digit cushion idea works fine.
And way better than I thought. With sqrt-ish growth, a dozen-digit cushion (10^12 lsb of headroom) is good for ~(10^12)^2 = 10^24 steps, not 10^12.
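[Checking the sqrt-ish claim against Out[86] (my arithmetic, not from the thread): |error|/Sqrt[10^n] stays O(1) across all nine decades:]

N[Abs[{1, -1, -9, -2, 98, 38, 1248, 410, 11484}]/Sqrt[10^Range[9]], 2]
(* {0.32, 0.10, 0.28, 0.020, 0.31, 0.038, 0.39, 0.041, 0.36} -- bounded, not linear *)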
--rwg
I don't actually recall a case where deliberately reducing accuracy in one part of a calculation significantly enhances the overall accuracy. And I'm surprised that (cheaper) cheb competes with minsky except for ω near 2 n π. --rwg