Re: [math-fun] Fast factoring ?? (really denesting)
On Fri, Dec 19, 2008 at 12:20 PM, <rwg@sdf.lonestar.org> wrote:
Amen. I've now tried ~ 10^8 cases of (a^(1/5)-b^(1/5))^(1/3) without further success, and for sqrt, only
  sqrt(4^(1/5) - 3^(1/5)) = (- 2^(3/5) 3^(4/5) + 3^(3/5) + 2 2^(2/5) 3^(2/5) - 2^(4/5) 3^(1/5) + 2^(1/5))/5.
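For whatever it's worth, that denesting spot-checks in floating point; a quick sketch in Python (not the thread's Macsyma, and only a float comparison, not a proof):

```python
# Float check of the quoted denesting of sqrt(4^(1/5) - 3^(1/5)).
lhs = (4 ** 0.2 - 3 ** 0.2) ** 0.5
rhs = (-(2 ** 0.6) * 3 ** 0.8 + 3 ** 0.6
       + 2 * 2 ** 0.4 * 3 ** 0.4
       - 2 ** 0.8 * 3 ** 0.2
       + 2 ** 0.2) / 5
print(lhs, rhs)
```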
And, of course, these can only give pentanomials. --rwg ALGORISMIC MICROGLIAS
I have no technical knowledge on denesting, but here is how I look at the problem:
Since sqrt( a^(1/5) + b^(1/5) ) = a^(-2/5) * sqrt( a + (a^4*b)^(1/5) ), I'll consider only the form: sqrt( a + b^(1/5) ). If sqrt( a + b^(1/5) ) could be denested, I'd expect it to be denested into the form:
x0 + x1*b^(1/5) + x2*b^(2/5) + x3*b^(3/5) + x4*b^(4/5).
Notice that in the (comparatively easy) 4th root case, your conjecture catches

  sqrt(161 - 12 5^(1/4)) = 2 5^(3/4) - 3 sqrt(5) + 4 5^(1/4) + 6

but not

  sqrt(2) sqrt(31 - 4 3^(1/4) 5^(1/4)) = 3^(1/4) 5^(3/4) - 2 sqrt(5) + 3^(3/4) 5^(1/4) + 2 sqrt(3).
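Both 4th-root examples check numerically (a Python sketch, floats only):

```python
# c = 5^(1/4), d = 3^(1/4); compare both sides of each denesting.
c = 5 ** 0.25
lhs1 = (161 - 12 * c) ** 0.5
rhs1 = 2 * c ** 3 - 3 * 5 ** 0.5 + 4 * c + 6

d = 3 ** 0.25
lhs2 = 2 ** 0.5 * (31 - 4 * d * c) ** 0.5
rhs2 = d * c ** 3 - 2 * 5 ** 0.5 + d ** 3 * c + 2 * 3 ** 0.5
print(lhs1 - rhs1, lhs2 - rhs2)
```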
That's why I don't think you could find anything longer than pentanomial. OTOH, sqrt( a^(1/7) + b^(1/7) ) is a better bet for finding hexanomials or longer.
Indeed, but the only one I've found so far is perversely pentanomial:

  sqrt(7 (2 - 2^(1/7)))/2^(1/14) = -1 + 2 2^(1/7) + 2^(3/7) + 2^(5/7) - 2^(6/7).
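This one also spot-checks (Python sketch; 2^(1/7) as a float):

```python
# s = 2^(1/7); note s^7 = 2, so squaring the right side should give 7 s^6 - 7.
s = 2 ** (1 / 7)
lhs = (7 * (2 - s)) ** 0.5 / 2 ** (1 / 14)
rhs = -1 + 2 * s + s ** 3 + s ** 5 - s ** 6
print(lhs, rhs)
```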
Just my 2 cents.
Warut
You didn't propose 6th roots. Did you have reason to suspect that they are as infertile as they seem to be? --rwg PRECHRISTMAS PETRARCHISMS ALGORISMIC MICROGLIAS
In fact, I didn't mean x0, x1, ... to be rational numbers. It could be sqrt(2) if it could disappear after squaring. Of course, for the fifth root case, sqrt(2) could appear in x0 iff it appears in x1, ..., x4, too.

I used to think that 7th roots would have more chance than 6th roots due to more variables (i.e., more flexible), but now I believe I was wrong and that pentanomial may be the limit.

Later, Warut
Oops, found this in my "notes":

  sqrt(2^(4/3) - sqrt(3) 5^(1/6))
    = 5^(1/3)/sqrt(6) - 5^(5/6)/(3 sqrt(2)) + 2^(1/6) sqrt(5)/3 - 2^(5/6) 5^(1/6)/3 + 2^(1/6)/sqrt(3)
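A float check of my reading of that flattened 2-D display; note the constant under the radical comes out as 2^(4/3) = 2 2^(1/3), not 4 (Python sketch):

```python
# w = 5^(1/6); the pentanomial in w (with sqrt(2), sqrt(3) coefficients)
# should square to 2^(4/3) - sqrt(3) w.
w = 5 ** (1 / 6)
lhs = (2 ** (4 / 3) - 3 ** 0.5 * w) ** 0.5
rhs = (5 ** (1 / 3) / 6 ** 0.5
       - 5 ** (5 / 6) / (3 * 2 ** 0.5)
       + 2 ** (1 / 6) * 5 ** 0.5 / 3
       - 2 ** (5 / 6) * w / 3
       + 2 ** (1 / 6) / 3 ** 0.5)
print(lhs, rhs)
```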
It seems so incidental that one or two coeffs keep vanishing. It's like a carnival sucker game. It looks so solvable, but something is secretly forbidding it.
[Sorry about that double post: Wireless kept dropping when I hit Send.]

Some months ago, mention was made here of Ben Franklin remarking that he was able to learn math only after becoming a vegetarian. I wondered how he managed without fish. I wouldn't think(!) of attacking a hard problem without a fish dinner, a pot of tea, and a decaKochel of Mozart. Here's an excerpt from Franklin's autobiography, as quoted in the March 2002 Access to Energy. (Tyron wrote the book that turned Franklin veggie):

"I believe I have omitted mentioning that, in my first voyage from Boston, being becalm'd off Block Island, our people set about catching cod, and hauled up a great many. Hitherto I had stuck to my resolution of not eating animal food, and on this occasion consider'd with my master Tyron, the taking of every fish as a kind of unprovoked murder, since none of them had, or ever could do us any injury that might justify the slaughter.

"All this seemed very reasonable. But I had formerly been a great lover of fish, and, when this came hot out of the frying-pan, it smelt admirably well. I balanc'd some time between principle and inclination, till I recollected that, when the fish were opened, I saw smaller fish taken out of their stomachs; then thought I, 'If you eat one another, I don't see why we mayn't eat you.' So I din'd upon cod very heartily, and continued to eat with other people, returning only now and then occasionally to a vegetable diet. So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for everything one has a mind to do." --rwg

Of course, Franklin's proposal to use PISCATOR APRICOTS for bait was no help at all. Luckily, someone had PISCATORY CYRTOPIAS
happy holidays everyone! http://www.stetson.edu/~efriedma/xmas/2008/puzzle.html erich
n(x) := the largest n s.t. binom(n,2) has no prime divisor > x. Apparently

   x   n(x)
   2   2
   3   9
   5   16
   7   4375
  11   9801
  13   9801
  17   336141
  19   11859211
  23   11859211
  29   18085705
  31   370256250
  37   370256250
  41   45105689161

These are highly untrustworthy, but at least suggestive. (Should they all be square??) Anybody know more? (Ditto for n^2-1 instead of binom(n,2).) --rwg

ON ONE'S TOES NOSE TO NOSE ALGORISMIC MICROGLIAS
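A brute-force recount of the small cases (Python sketch; exhaustive only up to the stated bound, using plain trial division). It reproduces n(3) = 9 and n(7) = 4375, but for x = 5 it finds 81, since binom(81,2) = 3240 = 2^3 3^4 5; so the 16 above is indeed one of the untrustworthy entries:

```python
from math import comb

def n_of_x(x, bound):
    # largest n <= bound with binom(n, 2) free of prime divisors > x
    best = None
    for n in range(2, bound + 1):
        m = comb(n, 2)
        for p in range(2, x + 1):   # composites divide nothing after their prime factors are gone
            while m % p == 0:
                m //= p
        if m == 1:
            best = n
    return best

print(n_of_x(3, 100), n_of_x(5, 1000), n_of_x(7, 5000))
```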
Your conjecture is a fairly straightforward consequence of Stormer's theorem that there are only a finite number of pairs of consecutive n-smooth numbers (the theorem also provides an algorithm for finding them all, although I don't know if this algorithm is tractable).

----- Original Message -----
From: <rwg@sdf.lonestar.org>
To: "math-fun" <math-fun@mailman.xmission.com>
Sent: Wednesday, December 24, 2008 4:30 AM
Subject: [math-fun] Prove it's defined?
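Stormer's theorem in miniature: for {2, 3, 5} the consecutive smooth pairs really do run out, the last being (80, 81) (Python sketch, exhaustive only to the bound shown):

```python
def smooth(m, primes=(2, 3, 5)):
    # strip the given primes; m was smooth iff nothing is left
    for p in primes:
        while m % p == 0:
            m //= p
    return m == 1

# n such that n and n+1 are both 5-smooth; binom(n+1, 2) is then 5-smooth too
pairs = [n for n in range(1, 10 ** 5) if smooth(n) and smooth(n + 1)]
print(pairs)
```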
_______________________________________________
math-fun mailing list
math-fun@mailman.xmission.com
http://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun
Mma 7.0 just startled me by turning the Fourier series for the line

  (Pi/3) (4 Pi - 3 t + I Sqrt[3] t), 0 <= t <= 2 Pi,

into

  L(t) := LerchPhi[E^(-I t), 2, 2/3]/E^((2 I t)/3) + E^((I t)/3) LerchPhi[E^(I t), 2, 1/3].

But L(t + 2 Pi) = E^(2 Pi I/3) L(t). I.e., translating by 2 Pi *rotates* by 120 degrees! Eh? Sure enough, plotting L(t), 0 < t < 6 Pi, draws a perfect equilateral triangle. There seems to be such a relation among n-1 Lerchs for each regular n-gon. Some simple consequence of n-secting the series? Psychoanalytic continuation. --rwg

PS, Veit Elser's difference map algorithm, http://en.wikipedia.org/wiki/Difference_map_algorithm , has become only the second entity to solve the 82% Arnold Dozenegger disk packing puzzle completely unaided. (Not counting Emma Cohen, who got massive clues from Emma Cohen.) Also, it's clear that Rod Stephenson's clustering algorithm will do it, probably in ~1 hr --way longer than Veit's, who clearly has something dangerous.

-------
Merriam-Webster's Unabridged:
Main Entry: prince albert
Usage: usually capitalized P&A
Etymology: after Prince Albert Edward (later Edward VII king of England)
[...] 2 : a man's house slipper with a low counter and goring on each side

ALGORISMIC MICROGLIAS
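The 120-degree rotation can be spot-checked straight from the series definition LerchPhi[z, s, a] = Sum[z^k/(a+k)^s, {k, 0, Infinity}] (Python sketch with truncated sums):

```python
import cmath

def lerchphi(z, s, a, N=3000):
    # truncated Lerch transcendent; fine here since |z| = 1 and s = 2
    return sum(z ** k / (a + k) ** s for k in range(N))

def L(t):
    return (lerchphi(cmath.exp(-1j * t), 2, 2 / 3) / cmath.exp(2j * t / 3)
            + cmath.exp(1j * t / 3) * lerchphi(cmath.exp(1j * t), 2, 1 / 3))

t = 1.0
ratio = L(t + 2 * cmath.pi) / L(t)   # should be exp(2 pi i/3), a 120-degree rotation
print(ratio)
```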
Interesting! I encountered the LerchPhi function recently, too. Take the standard Gregory series for Pi/4 = 1 - 1/3 + 1/5 - ... and introduce powers of Sinc into each term:

Define
  f[k_, x_] := Sum[ Sinc[(2n-1)x]^k * (-1)^(n-1)/(2n-1), {n, 1, Infinity}]

Then f[0, x] = Pi/4 for all x. MMA 7 expresses f[1, x], f[2, x] etc., in terms of the Lerch function.

I can prove that f[1, x] = Sum[ Sinc[(2n-1)x] * (-1)^(n-1)/(2n-1) ] equals Pi/4 for x in [-Pi/2, Pi/2]. This means that, for those x, we can multiply each term of the Gregory series by Sinc[(2n-1)x] without changing the sum. I conjecture that for k = 1, 2, 3, ..., f[k, x] equals Pi/4 for x in [-Pi/(2k), Pi/(2k)].

(Was Lerch in the Addams family, or was it the Munsters?)

Bob Baillie
--------------------
rwg@sdf.lonestar.org wrote:
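Baillie's provable k = 1 case spot-checks by brute force (Python sketch, truncated sum):

```python
import math

def f(k, x, N=100000):
    # partial sums of f[k, x] = Sum Sinc((2n-1)x)^k (-1)^(n-1)/(2n-1)
    total = 0.0
    for n in range(1, N + 1):
        m = 2 * n - 1
        total += (math.sin(m * x) / (m * x)) ** k * (-1) ** (n - 1) / m
    return total

val = f(1, 0.7)          # 0.7 lies inside [-Pi/2, Pi/2]
print(val, math.pi / 4)
```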
Mma tends to Lerch munstrously when polylog would suffice,
and polylogs have more relations. For f[1,x], Macsyma and I get
  - sum((-1)^n sin((2 n - 1) x)/(2 n - 1)^2, n, 1, inf)/x
      = (log(- %i %e^(%i x))^2 - log(- %i %e^(- %i x))^2)/(8 x)
and you may have better luck with trilogs for f[2], etc. (Caution:
7.0 has a d/dk(Li[k](...)) numerics bug.)
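The log^2 closed form for f[1, x] agrees with the partial sums (Python sketch; principal-branch logs, x inside (-Pi/2, Pi/2)):

```python
import cmath
import math

x = 1.3
series = -sum((-1) ** n * math.sin((2 * n - 1) * x) / (2 * n - 1) ** 2
              for n in range(1, 200001)) / x
closed = (cmath.log(-1j * cmath.exp(1j * x)) ** 2
          - cmath.log(-1j * cmath.exp(-1j * x)) ** 2) / (8 * x)
print(series, closed)   # both should be Pi/4 on this interval
```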
Speaking of psychoanalytic, the moment Mma Lerched, I inadvertently
recalled the Addams counterpart while trying to retrieve "Lurch".
Somehow error-correcting this has absolutely erased the Addams
name from my brain. All that remain are trespassers Grimace and
Hamburglar.
--rwg
Why, oh, why didn't they name their daughters Desicca?
PS (With the help of my laptop overheating) I forgot to add that the 7.0
Sinc doc exhibits, in effect, this strangissimo sequence:
In[6]:= Table[2/Pi*Integrate[Product[Sinc[x/k], {k, 1, 2*n - 1, 2}],
{x, 0, Infinity}], {n, 8}]
Out[6]= {1, 1, 1, 1, 1, 1, 1, 467807924713440738696537864469/467807924720320453655260875000}
(unattributed, but A068214.)
This may be simply an artifact of excessive haste. Specifically,
3636.98 mph. (Actually, the last term is
1 - 491^7/(2^3 3^12 5^6 7^7 11^6 13^6).)
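The claimed factored form matches Out[6] exactly in rational arithmetic (Python sketch):

```python
from fractions import Fraction

# the last entry of Out[6] above
out6_last = Fraction(467807924713440738696537864469,
                     467807924720320453655260875000)
# the deficit in factored form: 491^7 / (2^3 3^12 5^6 7^7 11^6 13^6)
deficit = Fraction(491 ** 7,
                   2 ** 3 * 3 ** 12 * 5 ** 6 * 7 ** 7 * 11 ** 6 * 13 ** 6)
print(1 - deficit == out6_last)
```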
Old Life mail exchange with CNWH. (The Pulsar is a big oscillator with a small period.)
--------------------
From: <rwg@TSUNAMI.macsyma.com>
Subject: Jodrell Bank [Was: Rudy Rucker drops out]
To: life@cs.arizona.edu
In-Reply-To: <199409212147.AA17854@leibniz.cs.arizona.edu>
Message-Id: <"19940922091151.5.rwg@TSUNAMI"@SWEATHOUSE.macsyma.com>

(In New Jersey is rumored to be the Long Branch Beach Branch of the Red Bank Bank.)

    Date: Wed, 21 Sep 1994 14:47 PDT
    From: "Richard Schroeppel" <rcs@cs.arizona.edu>
    Rudy Rucker has quit the Life list, due to volume.

Well, he gets to miss me asking JHC this historical question: I believe you named the Pulsar during the excitement surrounding the astronomical discovery, and prior to an accepted physical theory. Reading popular accounts at the time, I was irritated by the unanimity that pulsars were small because c*period was. I always wondered how they summarily ruled out a systolic, synchronized oscillation in an extended object, precisely exemplified by your Pulsar, which is several periods across. (Prior to the acceptance of the neutron star model) did you challenge any astronomers with this?

From: "John Conway" <conway@math.Princeton.EDU>
Received: by ginger.princeton.edu (4.1/Math-Client) id AA14267; Thu, 22 Sep 94 09:43:57 EDT
Date: Thu, 22 Sep 94 09:43:57 EDT
Message-Id: <9409221343.AA14267@ginger.princeton.edu>
To: life@cs.arizona.edu, rwg@TSUNAMI.macsyma.com
Subject: Re: Jodrell Bank [Was: Rudy Rucker drops out]

Sorry to hear of Rudy's disappearance. In answer to your question about the Pulsar - it was found a few years after the first discovery of the real pulsars, but they were still fairly hot news. After reading some Scientific-American-type article that mentioned some numerical name for an interesting pulsar, I called "our" one the Cambridge Pulsar, CP48-56-64 or whatever the three populations are. Your remark about synchronized oscillation did not occur to me, so I didn't bug any real astronomers with it!
John Conway
---------------

I was about to say that a better reason would have been: Because it's a stupid question! Suppose the Sun got something in its eye and started blinking at one Hertz. Earth would be thrown into half-second paroxysms of daylight and darkness... No! The Sun's radius is > 2 light seconds, so we'd just see expanding light and dark bands on its face, essentially invisible at interstellar distances. A blinking extended source would need to be flat and face-on to us ...

Wrong again! Imagine a "point source" of intense bursts of neutrinos, say, which "illuminate" some large object from behind (from our viewpoint). Then the object will appear to flash pretty much all at once, regardless of its size and shape. Admittedly, the *cause* of the flashing would be small, but the transducer could be large. Observers well off our line of sight, however, would see the flashing muted by banding.

The astronomers turned out to be right, but they should have admitted they were betting on Occam. --rwg

Thirty Hertz? That's just a planet that uses lots of AC.

PHYSICAL ASTRONOMER SPHERICAL ASTRONOMY PLATINOUS PULSATION
Only if the point source is far away from the object it's illuminating - otherwise there'd still be significant path-length differences (imagine the center of the Sun doing the blinking - the center would appear to blink at quite a different time than the limb). But then, if that's the case, the point source must be either very tightly beamed and/or be emitting ridiculous amounts of power. So I think the astronomers had it right. But maybe I'm still missing something. --Joshua Zucker
It's very easy to "shew that"

  integrate(floor(sqrt(t)), t, 0, x)
    = floor(sqrt(x)) (x - (floor(sqrt(x)) + 1) (2 floor(sqrt(x)) + 1)/6),

but a little surprising when you look at it.

  integrate(sqrt(floor(t)), t, 0, x)
    = x sqrt(floor(x)) - floor(x)^(3/2) + zeta(- 1/2) - hurwitz_zeta(- 1/2, floor(x)).

--rwg
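Both integrals spot-check; for hurwitz_zeta(-1/2, a) I use a small Euler-Maclaurin continuation of my own, not Macsyma's (Python sketch):

```python
import math

def hz(a, N=2000):
    # Euler-Maclaurin continuation of hurwitz_zeta(-1/2, a)
    B = a + N
    return (sum(math.sqrt(a + k) for k in range(N))
            - (2 / 3) * B ** 1.5 + 0.5 * B ** 0.5 - B ** -0.5 / 24)

# first identity at x = 17.3, where floor(sqrt(x)) = 4
x1, m1 = 17.3, 4
direct1 = sum(k * ((k + 1) ** 2 - k ** 2) for k in range(1, m1)) + m1 * (x1 - m1 * m1)
closed1 = m1 * (x1 - (m1 + 1) * (2 * m1 + 1) / 6)

# second identity at x = 7.6, where floor(x) = 7
x2, m2 = 7.6, 7
direct2 = sum(math.sqrt(k) for k in range(1, m2)) + (x2 - m2) * math.sqrt(m2)
closed2 = x2 * math.sqrt(m2) - m2 ** 1.5 + hz(1.0) - hz(float(m2))
print(direct1, closed1, direct2, closed2)
```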
The generating functions (gfs) for the sum of divisors of n, 2n-1, and 2n+1 are respectively

(d6)  [sum(n q^n/(1 - q^n), n, 1, inf),
       sum((2 n - 1) q^n/(1 - q^(2 n - 1)), n, 1, inf),
       sum((2 n + 1) q^n/(1 - q^(2 n + 1)), n, 0, inf)]

Check:
(c7) taylor(%, q, 0, 9)
(d7)/T/ [q + 3 q^2 + 4 q^3 + 7 q^4 + 6 q^5 + 12 q^6 + 8 q^7 + 15 q^8 + 13 q^9 + ...,
         q + 4 q^2 + 6 q^3 + 8 q^4 + 13 q^5 + 12 q^6 + 14 q^7 + 24 q^8 + 18 q^9 + ...,
         1 + 4 q + 6 q^2 + 8 q^3 + 13 q^4 + 12 q^5 + 14 q^6 + 24 q^7 + 18 q^8 + 20 q^9 + ...]

What is the gf for the sum of divisors of 3n+1? Ans: http://gosper.org/3n+1.png . Should this be obvious? --rwg

COMMERCIALIST MICROCLIMATES RHINOPLASTIES RELATIONSHIPS DIMESTORE DOSIMETER MOUNTAINEER ENUMERATION
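The middle gf spot-checks coefficient-by-coefficient (Python sketch expanding the Lambert series):

```python
# expand sum_{n>=1} (2n-1) q^n/(1 - q^(2n-1)) to order N and compare with sigma(2m-1)
N = 30
coeff = [0] * (N + 1)
for n in range(1, N + 1):
    step = 2 * n - 1
    e = n
    while e <= N:            # q^n/(1 - q^step) = q^n + q^(n+step) + q^(n+2 step) + ...
        coeff[e] += step
        e += step

def sigma(m):
    return sum(d for d in range(1, m + 1) if m % d == 0)

check = [coeff[m] == sigma(2 * m - 1) for m in range(1, N + 1)]
print(all(check))
```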
I may have sent a bogus Mathematica 7.0 bug report. SumConvergence[(n + 1)!/((n + 2)*(n + I + 1)!), n,...] abstains for all four advertised Methods, and
HypergeometricPFQ[{1, 2, 2}, {3, I + 2}, 1] gives ComplexInfinity
Macsyma promptly says

(c37) (hyper_f[3,2]([1,2,2],[3,%i+2]),%% = dfloat(%%))

(d37) hyper_f[3, 2]([1, 2, 2], [3, %i + 2]) = 0.66246689227311d0 - 1.48553272123584d0 %i
via a 3x3 matrix transformation. These terms decrease like the harmonic series, but a denominator (n+I)! provides oscillation, so I bet on Macsyma. But further analysis shows the oscillation periods increasing exponentially, which is reminiscent of the proof of harmonic divergence by taking exponentially bigger gulps, except here we have alternating "sign". I.e., the nth partial sum may go like C + A n^i, with C possibly (d37). Macsyma's formula comes from path invariant matrix products along two edges of an infinite rectangle, where the other two edges generally vanish, but maybe not in this case. Could these oscillations ever-so-gradually grow or shrink by some higher order effect of taking doubly exponentially many terms?
ComplexInfinity seems unlikely, but do I owe Wolfram's bugchasers an apology for claiming convergence?
Obviously yes.
A simpler and probably equivalent problem might be sum_n n!/(n+i+1)!, which Macsyma claims is simply -1/Gamma(i) = 0.56960764103668d0 - 1.83074439659052d0 * %i
= (1-i)/(i+1)! = :z
simply by the 2F1[1] formula, but gets the same result by the 3x3 transformation.
The latter sum telescopes, and differs from the former by a convergent series. The telescoped partial sums orbit z at eventually unit radius and exponentially increasing period.
--rwg
(c6) ZETA(S) = ('INTEGRATE((THETA[3](0,%E^-T)^1-1)*T^(S/2-1),T,0,INF))/GAMMA(S/2)/2

(d6) zeta(s) = integrate((theta[3](0, %e^-t) - 1) t^(s/2 - 1), t, 0, inf)/(2 gamma(s/2))

(c8) DFLOAT(EVAL(SUBST([S = 2*%PI,NOUNIFY('INTEGRATE) = QUAD_INF],D6)))

(d8) 1.01407286015004d0 = 1.01407286015011d0

  EZ(s) := 2 zeta(s) + 2 sum(sum(1/(k^2 + j^2)^(s/2), j, -inf, inf), k, 1, inf)
         = integrate((theta[3](0, %e^-t)^2 - 1) t^(s/2 - 1), t, 0, inf)/gamma(s/2).

E.g., In:= N[List @@ % /. s -> Pi]  Out= {EpsteinZeta[3.14159], 8.27511, 8.27511}.

(c12) \e\z(S)+2*SUM(SUM(SUM((I^2+J^2+K^2)^-(S/2),I,-INF,INF),J,-INF,INF),K,1,INF) = ('INTEGRATE((THETA[3](0,%E^-T)^3-1)*T^(S/2-1),T,0,INF))/GAMMA(S/2);

(d12) EZ(s) + 2 sum(sum(sum(1/(k^2 + j^2 + i^2)^(s/2), i, -inf, inf), j, -inf, inf), k, 1, inf)
         = integrate((theta[3](0, %e^-t)^3 - 1) t^(s/2 - 1), t, 0, inf)/gamma(s/2),

etc. (First two proved only for even integer s. Last one(s) completely untested.) --rwg
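The EpsteinZeta[Pi] value 8.27511 reproduces from the raw double sum (Python sketch; a truncated square of lattice points, so only a couple of digits):

```python
import math

# sum over (j, k) != (0, 0) of (j^2 + k^2)^(-s/2) at s = Pi
s = math.pi
R = 500
total = 0.0
for j in range(-R, R + 1):
    for k in range(-R, R + 1):
        if j or k:
            total += (j * j + k * k) ** (-s / 2)
print(total)
```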
Is it clear to anyone why

  epsteinzeta(2 s) := sum(sum(1/(k^2 + j^2)^s, k, -inf, inf), j, -inf, inf)    [the primed sum omitting j = k = 0]

has the negative integers as real roots, and a proper superset of zeta's complex roots, all with realpart 1/2? The first few epstein roots are

  0.50000000000000 +  6.020948904697597 I,
  0.50000000000000 + 10.243770304166555 I,
  0.50000000000000 + 12.988098012312423 I,
  0.50000000000000 + 14.13472514173469  I,  Z
  0.50000000000000 + 16.34260710458722  I,
  0.50000000000000 + 18.29199319612353  I,
  0.50000000000000 + 21.02203963877156  I,  Z
  0.50000000000000 + 21.45061134398345  I,
  0.50000000000000 + 23.27837652045958  I,
  0.50000000000000 + 25.01085758014479  I,  Z

with those marked with the sign of Zorro matching Zeta's. Does Epstein[1-s] come out in Epstein[s]? --rwg

(No answers so far on:
the theta-integral identities quoted above.) Last one tests out so far. d/ds the integral for some weird identities. Also, subtracting %e^-t (sqrt(%pi/t) - 1) t^s from the integrand extends convergence across the critical strip. Check out the zeta(0) and "zeta(1)" limits.
I.e. epzeta(2*s)/zeta(s) apparently has no poles. (And the complex 0s seem more evenly spaced.)
Empirically, it's merely

Gamma(s) epzeta(2 s)/pi^s = epzeta(2 - 2 s) Gamma(1 - s) pi^(s - 1).

http://www.stephenwolfram.com/publications/articles/physics/83-properties1/4... seems to give an enormous generalization of this.
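That reflection can at least be confirmed numerically; a sketch in Python/mpmath, again assuming the classical factorization epzeta(2 s) = 4 zeta(s) beta(s) (which supplies the analytic continuation to complex s):

```python
# Check Gamma(s)*epzeta(2s)/pi^s == epzeta(2-2s)*Gamma(1-s)*pi^(s-1)
# at an arbitrary complex point.
from mpmath import mp, mpf, mpc, pi, gamma, zeta

mp.dps = 25

def dbeta(s):  # Dirichlet beta, in Hurwitz-zeta form valid for all s
    return mpf(4)**(-s)*(zeta(s, mpf(1)/4) - zeta(s, mpf(3)/4))

def epzeta(s2):  # sum' 1/(j^2+k^2)^(s2/2), analytically continued
    return 4*zeta(s2/2)*dbeta(s2/2)

s = mpc('0.3', '0.7')  # arbitrary test point off the real axis
lhs = gamma(s)*epzeta(2*s)/pi**s
rhs = epzeta(2 - 2*s)*gamma(1 - s)*pi**(s - 1)
print(abs(lhs - rhs))  # ~ 0
```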
--rwg
E.g.,

Integrate[(-1 + (1 - Sqrt[Pi/t])/E^t + EllipticTheta[3, 0, E^-t])/Sqrt[t], {t, 0, Infinity}] == Sqrt[Pi] (1 + 2 EulerGamma - Log[4])
YOW! That Wolfram paper (Summary: The Casimir Jedi Force makes vacuums suck even worse than you thought) says, in effect,

epzeta(2 s)/zeta(s) = zeta(s, 1/4)/2^(2 s - 3) + 4 (1/2^s - 1) zeta(s),

so we have surprising identities like

'integrate((theta[3](0, %e^-t)^2 - 1)*t^z, t, 0, inf) = z! zeta(z + 1) (zeta(z + 1, 1/4)/2^(2 z - 1) + (1/2^(z - 1) - 4) zeta(z + 1)).

Note that for z=1, this reduces to 2 pi^2 Catalan/3.

Furthermore, he gives closed forms for the quadruple and sextuple sums, so

Integrate[t^z*(-1 + EllipticTheta[3, 0, E^(-t)]^4), {t, 0, Infinity}] == z!*8*(1 - 4^-z)*Zeta[z + 1]*Zeta[z]

and (not yet tested)

Integrate[t^z*(-1 + EllipticTheta[3, 0, E^(-t)]^6), {t, 0, Infinity}] == z!*16*(1 - 2^-z + 4^(1 - z))*Zeta[z + 1]*Zeta[z - 2]. --rwg
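The double- and quadruple-sum evaluations pass a numerical spot check; a Python/mpmath sketch at z = 2 (jtheta standing in for EllipticTheta, zeta(s, a) for the Hurwitz zeta; the Jacobi-transformed theta3 helper is an assumption, to keep the q-series fast near t = 0):

```python
# z = 2 spot check of the double-sum (theta_3^2) and quadruple-sum (theta_3^4) formulas.
from mpmath import mp, mpf, pi, exp, sqrt, zeta, jtheta, quad, inf, factorial

mp.dps = 15

def theta3(t):  # theta_3(0, e^-t), via Jacobi's transformation for small t
    if t < 1:
        return sqrt(pi/t)*theta3(pi**2/t)
    return jtheta(3, 0, exp(-t))

z = 2
lhs2 = quad(lambda t: (theta3(t)**2 - 1)*t**z, [0, 1, inf])
rhs2 = factorial(z)*zeta(z + 1)*(zeta(z + 1, mpf(1)/4)/2**(2*z - 1)
                                 + (mpf(2)**(1 - z) - 4)*zeta(z + 1))
lhs4 = quad(lambda t: (theta3(t)**4 - 1)*t**z, [0, 1, inf])
rhs4 = factorial(z)*8*(1 - mpf(4)**(-z))*zeta(z + 1)*zeta(z)
print(lhs2, rhs2)  # both ~ 9.3179
print(lhs4, rhs4)  # both ~ 29.66
```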
_______________________________________________ math-fun mailing list math-fun@mailman.xmission.com http://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun
Furthermore, he gives closed forms for the quadruple and sextuple sums,
No, octuple.
so Integrate[t^z*(-1 + EllipticTheta[3, 0, E^(-t)]^4), {t, 0, Infinity}]== z!*8*(1 - 4^-z)*Zeta[z + 1]*Zeta[z]
and (not yet tested)
And indeed mistranscribed. Should be the octuple sum Integrate[t^z*(-1 + EllipticTheta[3, 0, E^(-t)]^8), {t, 0, Infinity}]== z!*16*(1 - 2^-z + 4^(1 - z))*Zeta[z + 1]*Zeta[z - 2]. So, can we only do Theta^2^n? --rwg
Evidently. Check out the roots of the Theta_3^3 (triple sum) case:

0.75 + 5.22263810510 I,
0.75 + 9.64480205514 I,
0.75 + 15.64123740772 I,
0.75 + 18.30688168298 I,
0.75 + 20.59763699317 I,
0.95152180948 + 22.24999013509 I,
0.75 + 27.70929450516 I,
0.75 + 34.00853726804 I, ...

The poles of 1/|this fcn| in the critical strip are so acicular as to evade detection by 7.0's Plot3D. We need a pole-zero plotter for complex fcns that actually calls FindRoot in every mesh cell. --rwg
The first integral seems to have a problem when the upper limit x crosses the square of an integer: The integrand jumps, but the integral is continuous. (No problem yet.) But the RHS expression will have a big jump, as sqrt(x) crosses an integer value, and floor(sqrt(x)) jumps, and the positive product in the RHS jumps. What am I missing?

Rich

---------------------
Quoting rwg@sdf.lonestar.org:
It's very easy to "shew that"
integrate(floor(sqrt(t)), t, 0, x) = floor(sqrt(x))*(x - (floor(sqrt(x)) + 1)*(2*floor(sqrt(x)) + 1)/6),

but a little surprising when you look at it.
integrate(sqrt(floor(t)), t, 0, x) = x*sqrt(floor(x)) - floor(x)^(3/2) + zeta(-1/2) - hurwitz_zeta(-1/2, floor(x)).

--rwg
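Both closed forms (as reconstructed above) check out numerically at a generic point; a Python/mpmath sketch (the test point x = 7.3 is arbitrary):

```python
# Check both floor-integral identities at x = 7.3 by summing the step pieces exactly.
from mpmath import mp, mpf, sqrt, floor, zeta

mp.dps = 20
x = mpf('7.3')

# integrate(floor(sqrt(t)), t, 0, x): the integrand is k on [k^2, (k+1)^2)
n = int(floor(sqrt(x)))
lhs1 = sum(k*((k + 1)**2 - k**2) for k in range(n)) + n*(x - n**2)
rhs1 = n*(x - (n + 1)*(2*n + 1)/mpf(6))

# integrate(sqrt(floor(t)), t, 0, x): the integrand is sqrt(k) on [k, k+1)
m = int(floor(x))
lhs2 = sum(sqrt(k) for k in range(m)) + sqrt(m)*(x - m)
rhs2 = x*sqrt(m) - mpf(m)**mpf('1.5') + zeta(mpf('-0.5')) - zeta(mpf('-0.5'), m)
print(lhs1, rhs1)  # both 9.6
print(lhs2, rhs2)
```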
is an addendum to http://mathworld.wolfram.com/KochSnowflake.html . Summary: Exact Fourier series and exact spacefill. --rwg WOLF SNAKE SNOWFLAKE
[BCC PRS at the Computer History Museum]

Hi Peter, reading some Macsyma functions just now, I was momentarily puzzled by the line

?*plotnum\-3d\-limit\*:false$

PLOTNUM0 and PLOTNUM1 are the number of x and y subdivisions in a 3D plot. Why would anyone add a feature to limit them? And why would its name be so obscurely and inconveniently punctuated? In fact this controls one of those evil disabilities inflicted on the cheaper version of a product that cost extra to have removed.

The lucky gnurds queueing up for time on the RLE-PDP1 were indebted to the unlucky tools who chose instead to hang out at the Civil Engineering Dept IBM1401, which, a tool told me, had a 407 E8 that cost $tens of thousands less because it was only 2/3 the speed of a regular 407. By virtue of a plugboard with an extra relay circuit to discard every third cycle! Yank the relays and be fully upgraded. Except during scheduled visits by the IBM field service representative.

It amazes me that IBM was able to maintain its hypnotic sway over market opinion while pulling stunts like that.

Perhaps someone at the Computer History Museum can propose an earlier example of such a deliberate disfeature. Perhaps even provide a name for the practice. (Failing which: disfeaturement.)

geb trats --rwg
EDGAR T IRONS DENIGRATORS
This practice is very much alive in current business, whether a pure software product, or software embedded in some device. The name given to such practice is "crippleware".

Gene
Re Crippleware:

A significant source of crippleware is marketing brain damage, but there are also plausible explanations for crippleware:

1. I heard of a truck company in Indiana that "detuned" any engine that exceeded the advertised horsepower. They claimed that the increased horsepower put an unplanned stress on the transmission so that it would fail earlier than expected.

2. The southern NASCAR-type mechanics during the Vietnam war would sometimes "hotrod" the battle tanks, making their MTBF less than 100 miles. With these and other problems, a good fraction of the tanks spent their time either towing other tanks, or being towed themselves.

3. The wear in some mechanical devices goes up dramatically (more than linearly) with increased speed, so that the lifetime is severely compromised.

4. If you are careful, you can utilize low-octane gas in an older car which was designed for high-octane gas. However, it requires a relatively skillful driver to avoid damaging the engine.

5. The IBM 1403 chain printer had its character set sequence cleverly designed so that the probability of a large subset of hammers being fired simultaneously was essentially zero. However, if you studied the chain and then ordered the printer to print a certain sequence of characters, you could easily break the chain, which would wreak havoc on surrounding portions of the printer. If this happened too often, IBM would have been forced to include software to disallow those dangerous character sequences. I guess if IBM had copyrighted those sequences, then such software would have been the first "DRM" software? ("DRM" = Digital Rights Management, used to prevent the copying of copyrighted materials.)

6. The Xerox Alto "personal" computers were built from "standard" 74xx-type chips. However, they hadn't used the best "timing" tools, so some of the paths were too tight for reliable operation. As a result, some microprogramming operations worked only on a subset of the Altos. Due to the requirements for synching with things like the display, one couldn't simply slow down the clock. In the design of many chips today, "timing closure" is one of the last tasks to be done after verifying that the data paths work logically. "Timing closure" then tells you how fast you can run the chip without violating various setup time requirements. Circuits that are too close to the edge timingwise may fail when the temperature goes up a little.

7. The whole "digital revolution" is the recognition that standardizing signal definitions and timings means that when predicting the future digital behavior of a circuit, the details of the digital mapping to the analog circuitry can be safely ignored. From time to time, you can "improve" a circuit in some dimension by "hacking" it (by taking advantage of the details of a particular circuit), but some of these improvements probably won't survive the next series of job layoffs or the next product cycle.
These are not examples of crippleware; they are (except for the IBM case) examples of designing the software so that the hardware is operated without damage. This is what is expected of a reputable manufacturer.
Crippleware consists of making a special effort, doing extra software development work, to reduce the performance of the product below its capability.
Gene
We almost had government mandated (hard)crippleware when congresscreatures Gore and Waxman introduced bills to mandate a Marantz fidelity restricting chip into digital tape recorders to protect the CD industry. What averted this was not CD writers, but rather Sony's purchase of CBS Records. --rwg
ALGORISMIC MICROGLIAS
Henry Baker wrote
5. The IBM 1403 chain printer had its character set sequence cleverly designed so that the probability of a large subset of hammers being fired simultaneously was essentially zero. However, if you studied the chain and then ordered the printer to print a certain sequence of characters, you could easily break the chain, which would wreak havoc on surrounding portions of the printer. If this happened too often, IBM would have been forced to include software to disallow those dangerous character sequences. I guess if IBM had copyrighted those sequences, then such software would have been the first "DRM" software? ("DRM" = Digital Rights Management, used to prevent the copying of copyrighted materials.)
There were even programs to play recognizable tunes on these printers. (Anchors Aweigh? Sailor's Hornpipe?) Their very existence was remarkable, due to the incredible cost of cpu time--hundreds of 1960s dollars/hr. To save precious seconds, the printer had a very fast form feed controlled by a loop of perforated tape. Depending on which bit position you chose in your format statement, the printer could skip to the next sixth, quarter, third, half, or whole page. (How did it skip 1/4 if there were 66 lines?) There were also unpunched bit positions, accessible with a computed, e.g. accidentally overwritten, format string, whose semantics was "skip to new box of paper". These machines were rugged, withstanding, besides the hammering chain, the kicking of low paid operators screaming imprecations. --rwg

The MIT AI Lab got a later model with a spinning drum embossed with the entire character set repeated 120 (132?) times. The row of hammers struck the paper from behind, knocking it into the ribbon and drum at the appropriate millisecond. The rear access doors were featureless rounded rectangles, flush with the back of the printer, and held shut by magnets. It was an IQ test to get them open. Pushing on one around its edges revealed a "soft spot", a deliberate gap in the jamb rabbet opposite the otherwise indistinguishable hinged edge. Punching the soft spots popped open the doors. A good sized magnet might also have worked. Also perhaps the huge double suction cups used for lifting the raised floor panels under which ran all the power and data cables (and refrigerated air). And perhaps not, as I think the surface of those doors was somewhat textured, bearing also the helpful (but non-OEM) legend: "To open, see instructions inside."
6. The Xerox Alto "personal" computers were built from "standard" 74xx-type chips. However, their designers hadn't used the best "timing" tools, so some of the paths were too tight for reliable operation. As a result, some microprogramming operations worked only on a subset of the Altos. Due to the requirements for synching with things like the display, one couldn't simply slow down the clock. In the design of many chips today, "timing closure" is one of the last tasks to be done after verifying that the data paths work logically. "Timing closure" then tells you how fast you can run the chip without violating various setup time requirements. Circuits that are too close to the edge timingwise may fail when the temperature goes up a little.
7. The whole "digital revolution" is the recognition that standardizing signal definitions and timings means that, when predicting the future digital behavior of a circuit, the details of the digital mapping to the analog circuitry can be safely ignored. From time to time, you can "improve" a circuit in some dimension by "hacking" it (by taking advantage of the details of a particular circuit), but some of these improvements probably won't survive the next series of job layoffs or the next product cycle.
At 09:58 AM 1/1/2009, Eugene Salamin wrote: rwg wrote
The lucky gnurds queueing up for time on the RLE-PDP1 were indebted to the unlucky tools who chose instead to hang out at the Civil Engineering Dept IBM1401, which, a tool told me, had a 407 E8 that cost $tens of thousands less because it was only 2/3 the speed of a regular 407. By virtue of a plugboard with an extra relay circuit to discard every third cycle! Yank the relays and be fully upgraded. Except during scheduled visits by the IBM field service representative.
It amazes me that IBM was able to maintain its hypnotic sway over market opinion while pulling stunts like that.
Perhaps someone at the Computer History Museum can propose an earlier example of such a deliberate disfeature. Perhaps even provide a name for the practice. (Failing which: disfeaturement.)
geb trats --rwg
EDGAR T IRONS DENIGRATORS _______________________________________________ This practice is very much alive in current business, whether a pure software product, or software embedded in some device. The name given to such practice is "crippleware".
Gene
I added the Fourier series, at the behest of DanA. The derivation was fairly slick and is probably how Lagrange got his series solution to Kepler's problem. (http://www.tweedledum.com/rwg/pizza.htm, which I am redoing.) --rwg
http://www.tweedledum.com/rwg/pizza.html
_______________________________________________ math-fun mailing list math-fun@mailman.xmission.com http://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun
http://www.tweedledum.com/rwg/pizza.html I just uploaded a simplification of the last formula, an acceleration of the Bessel-Fourier series for Kepler's equation, with a presumably similar one accelerating the cycloid series. --rwg
INCONSISTENT NONSCIENTIST
Merriam-Webster:

simpson's rule
Function: noun
Usage: usually capitalized S
Etymology: after Thomas Simpson, died 1761, English mathematician
: a method used especially by naval architects for computing the approximate area bounded by a curve by adding the areas of a series of figures formed from an odd number of equally spaced ordinates to the curve and parabolas drawn through the points where these ordinates cut the curve

(I always wondered what they did in Course XIII.) This is a complicated description of the 1,4,2,4,...,2,4,1 method. However, I have to think that it was not Thomas, but rather Homer, who authored the "3/8 rule", 1,3,3,2,3,3,2,...,3,3,1. E.g., let's compute

pi = [3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, 14, 2, 1, 1, 2, 2, 2, 2, 1, 84, 2, 1, 1, 15, 3, 13, 1, 4, 2, 6, 6, 99, 1, 2, ...]

by averaging 4/(1+x^2) over [0,1]. With nine samples, Thomas gives

cf(''(makelist(4/(1+x^2),x,(0..8)/8).[1,4,2,4,2,4,2,4,1]/24))

[3, 7, 15, 1, 186, 2, 6, 1, 1, 4, 21, 1, 73, 11]

and with *ten*, Homer gives

cf(''(makelist(4/(1+x^2),x,(0..9)/9).[1,3,3,2,3,3,2,3,3,1]/24))

[3, 7, 15, 1, 127, 4, 2, 18, 6, 1, 78, 1, 4],

and, for comparably many samples, never beats Thomas. Is there any reason to use Homer's rule besides being stuck with 6 n + 4 samples?
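For anyone without Macsyma handy, the two cf(...) experiments above are easy to replay in Python -- a sketch, comparing the weighted averages directly against pi instead of via continued fractions:

```python
import math

# Sample f(x) = 4/(1+x^2) on [0,1] and apply each composite rule's weights.
f = lambda x: 4 / (1 + x * x)

# Thomas (Simpson, 1,4,2,...,4,1): nine samples, weights summing to 24.
w_thomas = [1, 4, 2, 4, 2, 4, 2, 4, 1]
thomas = sum(w * f(k / 8) for k, w in enumerate(w_thomas)) / 24

# Homer (3/8 rule, 1,3,3,2,...,3,3,1): ten samples, weights also summing to 24.
w_homer = [1, 3, 3, 2, 3, 3, 2, 3, 3, 1]
homer = sum(w * f(k / 9) for k, w in enumerate(w_homer)) / 24

# Homer's extra sample notwithstanding, Thomas wins.
print(abs(thomas - math.pi), abs(homer - math.pi))
```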
Interestingly, the period 6 pattern 41, 216, 27, 272, 27, 216, 41+41, 216, ..., 41, which exactly integrates heptics,

expand([41, 216, 27, 272, 27, 216, 41]/840 .makelist(a*x^7+b*x^6,x,(0..6)/6))

a/8 + b/7

overtakes Thomas on the pi example at nineteen samples:

cf(''([1,4,2,4,2,4,2,4,2,4,2,4,2,4,2,4,2,4,1]/6/9 .makelist(4/(1+x^2),x,(0..18)/18)))

[3, 7, 15, 1, 291, 2, 1, 4, 5, 1, 29, 2, 8, 1, 1, 1, 1, 2, 1, 1, 1, 8, 2, 672, 1, 2, 2, 2, 4, 89, 1, 1, 1, 1, 1, 2, 4, 22, 1, 11, 2, 1, 1, 1, 1, 6]

cf(''([41,216,27,272,27,216,41+41,216,27,272,27,216,41+41,216,27,272,27,216,41]/840/3 .makelist(4/(1+x^2),x,(0..18)/18)))

[3, 7, 15, 1, 292, 1, 1, 4, 1, 37, 5, 3, 112, 1, 10, 2, 7, 2, 23, 1, 1, 3, 1, 1, 2, 1, 2, 2, 2, 2, 10, 1, 3, 2, 11, 1, 6, 3, 5, 4, 1, 2, 6, 2, 3, 1, 2]

But something really weird is going on. The period 4 weighting pattern 7, 32, 12, 32, 7+7, 32, 12, ..., 32, 7, which exactly integrates quintics, has "exactly" (modulo double precision) -2 times Thomas's error with 201 samples! Perhaps this will make sense in the morning. Meanwhile, two applications of a 0th order Euler-Maclaurin expansion give an interesting formula for Thomas's error:

(d296) 'integrate(g(k),k,0,1) - ('sum(4*g(k/n+1/(2*n))+2*g(k/n),k,0,n-1)+g(1)-g(0))/(6*n)
     = ('sum('integrate((1/6-x)*'diff(g(x/n+i/n)+g((1-x)/n+i/n),x,1),x,0,1/2),i,0,n-1))/n

(directly verifiable by integration by parts). The individual integrals on the right are small. Can someone explain why?
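The heptic claim is quick to check in exact rational arithmetic -- a Python sketch (the 41, 216, 27, 272, 27, 216, 41 weights are the 7-point closed Newton-Cotes rule):

```python
from fractions import Fraction

# The period-6 weights, normalized by 840, on abscissae 0, 1/6, ..., 1.
w = [41, 216, 27, 272, 27, 216, 41]

def apply_rule(p):
    """Weighted average of p at k/6, k = 0..6, as an exact Fraction."""
    return sum(Fraction(wk, 840) * p(Fraction(k, 6)) for k, wk in enumerate(w))

# x^7 and x^6 integrate to 1/8 and 1/7 over [0,1], so the rule sends
# a*x^7 + b*x^6 to a/8 + b/7 exactly, as the expand() above shows.
print(apply_rule(lambda x: x**7), apply_rule(lambda x: x**6))
```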
Testing with g(x):=4/(1+x^2), n = 1..4,

(c346) makelist(subst(lambda([[l]],funmake_no_simp("+",l)),"[",lhs(e) = dfloat(rhs(e))),e,block([listarith:False],makelist(expand(eval(''(subst(makelist,nounify(sum),d296)))),n,1,4)))

(d346) [%pi - 47/15 = + 0.00825932025646d0,
        %pi - 8011/2550 = 3.35550919902339d-4 - 3.11524781089349d-4,
        %pi - 829597/264069 = 3.48105078317297d-5 + 5.09316464365226d-5 - 8.48695005182994d-5,
        %pi - 152916620159/48674874300 = 7.09747270259224d-6 + 1.325010624272d-5 + 4.8192196463089d-6 - 2.5015667505753d-5]

(c347) resimplify(dfloat(%))

(d347) [0.00825932025646d0 = 0.00825932025646d0,
        2.40261388126939d-5 = 2.402613881299d-5,
        8.72653749706131d-7 = 8.72653749952849d-7,
        1.51131086312262d-7 = 1.51131085868171d-7]

(The inexactitudes are floating point artifacts.) --rwg

DAMN YOUR ROUND YAM   PEDIATRIC PATRICIDE   SCHEMATIC CATECHISM
ALLIGATOR WEED WATER GLADIOLE   TRANSPORTEE PATERNOSTER PENETRATORS
RHINOPLASTIES RELATIONSHIPS   SUPERIMPOSED PSEUDOPRIME
COMMERCIALIST MICROCLIMATES   GRAPHITE EARTH PIG

(Hmm, the spellchecker mailsquirrel is having a nosebleed.)
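The n = 2 entry of (d346) is easy to reproduce: five-sample composite Simpson on 4/(1+x^2) gives exactly 8011/2550 -- a Python sketch in exact rationals:

```python
import math
from fractions import Fraction

# Composite Simpson with two panels: samples at 0, 1/4, 1/2, 3/4, 1,
# weights 1,4,2,4,1 over 12, applied to f(x) = 4/(1+x^2).
f = lambda x: 4 / (1 + x * x)
approx = sum(wk * f(Fraction(k, 4)) for k, wk in enumerate([1, 4, 2, 4, 1])) / 12

print(approx)             # 8011/2550, the fraction in (d346)
print(math.pi - approx)   # ~2.4026e-5, matching (d347)
```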
Misc thoughts ...

Maybe 1/(1+x) would be a better test function, since 4/(1+x^2) has only even-exponent terms in the power series?

Try just using one cycle for the whole integral: 11, 141, 1331, etc. Is the convergence interesting? Does it?

What happens if you stick with Simpson's 141, but move around the cut points (the abscissae)?

Rich
If I'm integrating over a circle, so that there is no boundary, then one point is as good as another, and I would expect the best approximation to be to give equal weights to each point. Now if instead I integrate over an interval, there are some boundary effects, but deep within the interval, why would I want to do otherwise than to weight the points equally?

-- Gene
On 3/26/09, Eugene Salamin <gene_salamin@yahoo.com> wrote:
If I'm integrating over a circle, so that there is no boundary, then one point is as good as another, and I would expect the best approximation to be to give equal weights to each point. Now if instead I integrate over an interval, there are some boundary effects, but deep within the interval, why would I want to do otherwise than to weight the points equally?
-- Gene
Two reasons I might (tentatively) suggest: (i) you might want to be able to predict the error a priori, subject to some assumptions about the integrand (e.g. analyticity); (ii) you might want to ensure that some smaller class be integrated exactly, or at least to within working precision (e.g. quadratic polynomials).

I can't say that I've ever been very convinced by these arguments either --- if left to my own devices and in a hurry, I usually fall back on Romberg's method (which at a pinch can also be tweaked to serve for ordinary differential equations).

Fred Lunnon
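Romberg's method is just the trapezoid rule with Richardson extrapolation layered on top; a minimal Python sketch:

```python
import math

def romberg(f, a, b, depth=6):
    """Trapezoid estimates under successive interval halving, then
    Richardson extrapolation to kill the h^2, h^4, ... error terms."""
    h = b - a
    table = [[(f(a) + f(b)) * h / 2]]
    for i in range(1, depth):
        h /= 2
        # The refined trapezoid sum reuses the previous estimate and
        # only evaluates f at the new midpoints (odd multiples of h).
        mids = sum(f(a + (2 * k + 1) * h) for k in range(2 ** (i - 1)))
        row = [table[-1][0] / 2 + mids * h]
        for j in range(1, i + 1):
            row.append(row[j - 1] + (row[j - 1] - table[-1][j - 1]) / (4 ** j - 1))
        table.append(row)
    return table[-1][-1]

print(romberg(lambda x: 4 / (1 + x * x), 0.0, 1.0))  # converges to pi
```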
If we evaluate the integrand at n points, and are free to choose both the locations and weights, we have 2n degrees of freedom. In Gaussian quadrature, the choice is made so that all polynomials of degree 2n-1 or less are integrated exactly. Scaling the interval to [-1,+1], the locations are the n roots of the n-th Legendre polynomial, P[n](x[i]) = 0, i = 1..n. The weights are w[i] = 2 / ( (1 - x[i]^2) (P[n]'(x[i]))^2 ), where P[n]' is the derivative of P[n].

I've worked out some examples comparing Simpson's rule with Gaussian quadrature over [-1,+1], using the same number n of integrand evaluations:

f(x) = exp(x), n = 3: Simpson's error = 0.012, GQ error = 6.5e-5.
f(x) = exp(x), n = 13: Simpson's error = 1.0e-5, GQ error = 1.1e-34.
f(x) = cos((pi/2) x), n = 3: Simpson's error = 0.047, GQ error = 6.9e-4.
f(x) = cos((pi/2) x), n = 13: Simpson's error = 2.6e-5, GQ error = 1.1e-29.

-- Gene
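Gene's first row replicates in a few lines of Python -- three-point Gauss-Legendre (nodes 0 and +-sqrt(3/5), weights 8/9 and 5/9) versus three-point Simpson on exp over [-1,1]:

```python
import math

exact = math.exp(1) - math.exp(-1)   # integral of exp over [-1,1]

# Simpson with 3 samples at -1, 0, 1 (h = 1):
simpson = (math.exp(-1) + 4 * math.exp(0) + math.exp(1)) / 3

# 3-point Gauss-Legendre: nodes 0, +-sqrt(3/5), weights 8/9, 5/9.
r = math.sqrt(3 / 5)
gauss = 8 / 9 * math.exp(0) + 5 / 9 * (math.exp(r) + math.exp(-r))

print(abs(simpson - exact), abs(gauss - exact))  # cf. Gene's 0.012 and 6.5e-5
```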
rwg> ... = ('sum('integrate((1/6-x)*'diff(g(x/n+i/n)+g((1-x)/n+i/n),x,1),x,0,1/2),i,0,n-1))/n,
rwg> (directly verifiable by integration by parts). The individual integrals on the right are small. Can someone explain why?
It doesn't matter. They're large compared with their sums:
Testing with g(x):=4/(1+x^2), n = 1..4, [...]

%pi - 8011/2550 = 3.35550919902339d-4 - 3.11524781089349d-4,
%pi - 829597/264069 = 3.48105078317297d-5 + 5.09316464365226d-5 - 8.48695005182994d-5,
%pi - 152916620159/48674874300 = 7.09747270259224d-6 + 1.325010624272d-5 + 4.8192196463089d-6 - 2.5015667505753d-5]

(c347) resimplify(dfloat(%))

(d347) [0.00825932025646d0 = 0.00825932025646d0,
        2.40261388126939d-5 = 2.402613881299d-5,
        8.72653749706131d-7 = 8.72653749952849d-7,
        1.51131086312262d-7 = 1.51131085868171d-7]
So this form of the error is unenlightening.

rcs> Maybe 1/(1+x) would be a better test function, since 4/(1+x^2) has
rcs> only even-exponent terms in the power series?

Before even trying it, I'll say no, since we're not 0-centered. E.g., we could pretend we did 1/(1 + i x) and took the realpart afterwards. OK, now let's compute

'integrate(1/(x+1),x,0,1) = log(2) = K(0, 1, 2, 3, 1, 6, 3, 1, 1, 2, 1, 1, 1, 1, 3, 10, 1, 1, 1, 2, 1, 1, 1, 1, 3, 2, 3, 1, 13, 7, 4, 1, 1, 1, 7, 2, 4, ...)

by averaging 1/(1+x) over [0,1]. With nine samples, Thomas gives

cf(''(makelist(1/(1+x),x,(0..8)/8).[1,4,2,4,2,4,2,4,1]/24))

[0, 1, 2, 3, 1, 6, 4, 1, 1, 2, 2, 5, 3, 1, 1, 1, 1, 1, 2]

and with *ten*, Homer gives

cf(''(makelist(1/(1+x),x,(0..9)/9).[1,3,3,2,3,3,2,3,3,1]/24))

[0, 1, 2, 3, 1, 6, 5, 8, 2, 1, 2, 1, 8, 2, 1, 1, 5]

Worse, as predicted. On the other hand, the bizarreness with 7 32 12 32 7 giving -2 times the error with 1 4 1 may well depend on which function we're integrating, so 1/(1+x) will be good to test.

rcs> Try just using one cycle for the whole integral: 11, 141, 1331, etc.

I think "etc" needs defining. I calculated 7 32 12 32 7, <skipped>, and 41 216 27 272 27 216 41 to maximize the degree of the polynomial they integrate in one cycle (or any number).

rcs> Is the convergence interesting? Does it?

I don't think so. Recall in my previous msg that it took three periods of 41 216 27 272 27 216 41 (19 samples) to overtake 19 samples of 1 4 1. (And infinitely many periods of 7 32 12 32 7 ???) Maximizing the degree of approximation only pays off in the long run, if at all. With additional knowledge of the integrand, it should be possible to find 10 weights, say, that outperform both 1 3 3 2 3 ... 3 1 and whatever pattern exactly integrates x^10 + b x^9 + ... .

rcs> What happens if you stick with Simpson's 141, but move around the cut points (the abscissae)?

Quodlibet. I guess you're asking how to move them for optimal approximation. As Webster's hints, the 1 4 1 really does integrate a piecewise parabolic fit, presupposing equal steps. For unequal steps you'd need different weights, although you could probably tailor the abscissae for specific integrands. That would be cheating.

Gene> If I'm integrating over a circle, so that there is no boundary, then one point is as good as another,

Not quite.
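The nine-sample Thomas value for log 2 can be checked the same way -- a Python sketch, including the first few continued fraction terms:

```python
import math

f = lambda x: 1 / (1 + x)
w = [1, 4, 2, 4, 2, 4, 2, 4, 1]
thomas = sum(wk * f(k / 8) for k, wk in enumerate(w)) / 24

def cf(x, n):
    """First n continued fraction terms of x."""
    terms = []
    for _ in range(n):
        a = math.floor(x)
        terms.append(a)
        x = 1 / (x - a)
    return terms

print(thomas - math.log(2))   # small, but larger than the 4/(1+x^2) error
print(cf(thomas, 6))          # agrees with log 2's expansion this far
```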
Suppose we seek the average value (= 1/rt3) of 1/(2+sin(2(t+f) pi)), a smooth, period 1 fn(t) phase-shifted by the fraction f of a period. Then we can write the equal weight average exactly:

F(f) := (sum(1/(sin(%pi*(2*k/n+2*f))+2),k,0,n-1))/n
      = ((sqrt(3)+2)^n-(2-sqrt(3))^n)/(sqrt(3)*(-2*cos(%pi*(2*f+1/2)*n)+(sqrt(3)+2)^n+(2-sqrt(3))^n))

(Anybody want to hire me to make their CAS do these?) which does depend (slightly) on f. (And, ironically, is free of rt3.) For n = 1..6,

1/(2+sin(2 f pi)), 4/(7+cos(4 f pi)), 15/(26-sin(6 f pi)), 56/(97-cos(8 f pi)), 209/(362+sin(10 f pi)), 780/(1351+cos(12 f pi)),

showing a dependence on f declining exponentially with n. Simpson's rule in this case is just F(f)/3 + 2 F(f+1/(2n))/3, which is only slightly better, but has 2n samples, so is really worse!

Gene> and I would expect the best approximation to be to give equal weights to each point. Now if instead, I integrate over an interval, there are some boundary effects, but deep within the interval, why would I want to do otherwise than to weight the points equally?

This is a devilish question. But I don't think you can find any way of fading from 1,4,2,4,2,..., to 3,3,3,3,3,... and back to ...,2,4,2,4,1 that will exactly integrate cubics. Simpson's is geared to polynomials vs periodics. --rwg
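The closed form for F(f) is easy to spot-check numerically -- a Python sketch with n = 3, f = 0.2, also compared against the 15/(26 - sin(6 f pi)) entry from the list:

```python
import math

def F(f, n):
    """Equal-weight average of 1/(2 + sin(2 pi (k/n + f))) over k = 0..n-1."""
    return sum(1 / (2 + math.sin(2 * math.pi * (k / n + f)))
               for k in range(n)) / n

def F_closed(f, n):
    """rwg's closed form for the same average."""
    p, m = (2 + math.sqrt(3)) ** n, (2 - math.sqrt(3)) ** n
    return (p - m) / (math.sqrt(3) * (p + m - 2 * math.cos(math.pi * (2 * f + 0.5) * n)))

f, n = 0.2, 3
print(F(f, n), F_closed(f, n), 15 / (26 - math.sin(6 * f * math.pi)))
```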
When I first got the mechanical assembly puzzle (http://www.spacecubes.com/more.php?FORMAT=2d), I doubted its (unassisted) feasibility. Having lost all my copies, I sank about $65 into some plumbing supplies and a circular saw, with marginal success: (http://gosper.org/spacecube.jpg). With handheld tools and work, the circular concavities are so imprecise that the pieces are not interchangeable, and the thing collapses if you breathe on it. But with greater precision (e.g., a bleepload of filing and tweaking) it promises a modicum of stiffness and a drench of difficulty. Due to the square vs rectangular design, there are no slippery little flats, which seem critical to solving the commercial version. I can do this one on a tablecloth or rug or with a bunch of books for scaffolding, but can't imagine doing it on a smooth surface or no surface, which might make fun for two people. (Thanks Marian Aiken and Charlie Perumattan for the loan of your power drills.) --rwg
Mma 7.0: Plot[Im[Log[(x + I)!]], {x, 22.6316, 22.6317}, PlotRange -> All] produces http://gosper.org/glitch.png --rwg http://www.youtube.com/watch?v=Jp9BSW38bXg&fmt=18
Through cognizance, credulousness, or indifference, no one challenged this April Foolishness. It's just the imaginary part going negative. --rwg Did anyone besides DanA have trouble viewing http://gosper.org/spacecube.jpg ?
products. To get prod(trig(t/k^n),n), choose the undetermined coefficients in a small polynomial p(x) to annihilate remainder(p(x^k),p(x)). E.g., for a cubic and k=2, (c1) (x^3+a*x^2+b*x+c,remainder(subst(x^2,x,%%),%%,x)) (d1) (2 a c + b + (- 3 a - a + 1) b + a + a ) x 2 2 3 2 + ((2 b - a - a) c - 2 a b + (a + a ) b) x 2 3 2 + c + (- 2 a b + a + a + 1) c giving three eqns and 3 unknowns, with numerous solutions, [[b = 0, c = 0, a = - 1], [c = 0, b = 1, a = - 2], [c = 0, b = 1, a = 1], sqrt(3) %i - 1 sqrt(3) %i - 1 [a = 0, b = 0, c = 0], [c = --------------, b = --------------, 2 sqrt(3) %i + 1 sqrt(3) %i 1 sqrt(3) %i + 1 sqrt(3) %i + 1 a = ---------- + -], [c = - --------------, b = --------------, 2 2 2 sqrt(3) %i - 1 1 sqrt(3) %i a = - - ----------], [c = 1, b = - 1, a = - 1], 2 2 2 3 6 5 4 3 b - b [c = - 1, b = root_of(b - 2 b - b - 6 b , b), a = -------], 2 b [a = 0, c = 0, b = - 1], [a = 0, b = 0, c = - 1], [c = %i, b = - 1, a = - %i], [c = - %i, b = - 1, a = %i]], each yielding a product identity. The most interesting are from the root_of sextic which could (should?) have reduced to quadratic surds. One of these gives 2 2 p(x ) 3 (- sqrt(7) %i - 1) x (sqrt(7) %i - 1) x ----- = x + --------------------- + ------------------ + 1. p(x) 2 2 Now substitute x= exp(i t/2^n) and the lhs takes the form f(n-1)/f(n). After some simplification, product(2*cos(3*t/2^n)+sqrt(7)*sin(t/2^n)-cos(t/2^n),n,1,inf) = (2*sin(3*t)+sin(t))/sqrt(7)+cos(t) inf /===\ | | 3 t t t 2 sin(3 t) + sin(t) | | (2 cos(---) + sqrt(7) sin(--) - cos(--)) = ------------------- + cos(t) | | n n n sqrt(7) n = 1 2 2 2 Likewise for -sqrt(7). Tayloring a few terms, closedform(taylor(%,t,0,6)) 2 3 4 5 6 t (55 sqrt(7)) t t (487 sqrt(7)) t t 1 + sqrt(7) t - -- - --------------- + -- + ---------------- - --- + . . . 2 42 24 840 720 2 3 4 5 6 t (55 sqrt(7)) t t (487 sqrt(7)) t t = 1 + sqrt(7) t - -- - --------------- + -- + ---------------- - --- + . . . 
(As of 7.0, Mathematica still won't Series expand Sums and Products!) In vain hope of brevity, we skip the more routine solutions and try for prod(trig(t/5^n),n), which I don't recall seeing. (x^2+a*x+b,remainder(subst(x^5,x,%%),%%,x)) gives the two equations

0 = a*(b^2 - 3*a^2*b + a^4)*(5*b^2 - 5*a^2*b + a^4 - 1)
0 = b*(b^4 - 10*a^2*b^3 + 15*a^4*b^2 - 7*a^6*b + 2*a^2*b + a^8 - a^4 - 1)

The very simple case a=2, b=1 gives the square of the identity

cos(t/2) = product(4*cos(t/5^n)^2 - 2*cos(t/5^n) - 1, n, 1, inf)

The case a=2i, b=-1 gives the square of the identity

cos(t/2) + sin(t/2) = product(1 + 2*sin(t/5^n) - 4*sin(t/5^n)^2, n, 1, inf)

a=-2, b=1 gives the square of

2*sin(t/2)/t = product((4*cos(t/5^n)^2 + 2*cos(t/5^n) - 1)/5, n, 1, inf)

The peculiar solution a=sqrt(3i), b=1 gives

4*(sqrt(3) - sqrt(2))*cos(t/2 + %pi/4)*sin(t/2) + 1 = product(4*sin(t/5^n)^2*(sqrt(3)*cos(t/5^n - %pi/4) - 1) + 2*cos(4*t/5^n) - 1, n, 1, inf)

Likewise for -sqrt(3), but not -sqrt(2). There are two root_of quartics saying b can be any 10th root of 1. Choosing the first led, via almost two days of simplifying and debugging, to

product(2*cos(t/5^n+3*%pi/20)*(2*cos(3*t/5^n-3*%pi/20)-(sqrt(5)-1)*cos(t/5^n-%pi/20)),n,1,inf) = sin(t)+cos(t),

a puzzling result, given that the individual factors,

product(cos(t/5^n+3*%pi/20)/cos(3*%pi/20),n,1,inf) and <cofactor>

appear not to telescope, even though there were hints during the derivation that they would.
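The a=2, b=1 case above is easy to check outside a CAS: the factor 4*cos(theta)^2 - 2*cos(theta) - 1 equals cos(5u)/cos(u) at u = theta/2 (expand cos(5u) = cos(u)*(16*cos(u)^4 - 20*cos(u)^2 + 5)), so the partial products collapse to cos(t/2)/cos(t/(2*5^N)). A Python sketch (not Macsyma; the truncation depth 20 is an arbitrary choice of mine):

```python
import math

def quint_prod(t, terms=20):
    # Partial product of (4 cos^2(t/5^n) - 2 cos(t/5^n) - 1), n = 1..terms.
    # Each factor is cos(5u)/cos(u) at u = t/(2*5^n), so the partial product
    # telescopes to cos(t/2)/cos(t/(2*5^terms)) -> cos(t/2).
    p = 1.0
    for n in range(1, terms + 1):
        c = math.cos(t / 5**n)
        p *= 4*c*c - 2*c - 1
    return p
```

The same telescoping explains the a=-2 case, with cos(5u)/cos(u) replaced by sin(5u)/(5 sin(u)).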
One nice simplifier subproblem was

%e^-(%i*t)*((sqrt(5)-1)*%e^(%i*t+%i*%pi/10)/2+%e^(2*%i*t)+%e^(%i*%pi/5))/(%e^(%i*%pi/5)+(sqrt(5)-1)*%e^(%i*%pi/10)/2+1) = sqrt(2)*cos(t/2-%pi/4)*cos(t/2+3*%pi/20)/cos(3*%pi/20)

which FullSimplify cannot even verify for special cases, let alone derive. --rwg
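The sqrt(7) doubling product above can at least be checked numerically when no simplifier cooperates; a Python sketch (50 terms is overkill, since the factors approach 1 geometrically):

```python
import math

SQRT7 = math.sqrt(7)

def lhs(t, terms=50):
    # partial product of 2 cos(3t/2^n) + sqrt(7) sin(t/2^n) - cos(t/2^n)
    p = 1.0
    for n in range(1, terms + 1):
        u = t / 2**n
        p *= 2*math.cos(3*u) + SQRT7*math.sin(u) - math.cos(u)
    return p

def rhs(t):
    return (2*math.sin(3*t) + math.sin(t)) / SQRT7 + math.cos(t)
```

A consistency check on the closed form itself: the product formula forces rhs(t) = lhs-factor(t/2) * rhs(t/2), which holds, e.g., at t = pi: -1 = sqrt(7) * (-1/sqrt(7)).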
Off-list Rich asked if maybe the identities in my crankout message weren't fairly obvious when converted back to finite products of polynomials. Well, obvious modulo solving for some undetermined coefficients. While hand simulating an algorithm that ought to evaluate most of these products, I constructed the following screw case:

prod(8*cos(t/2^n-5*%pi/12)*cos(t/2^n+%pi/12)*cos(t/2^n+%pi/3),n,1,inf) = (sqrt(2)*sin(2*t-%pi/12)+1)*(sqrt(2)*sin(4*t-%pi/12)+1)/(6*cos(t-5*%pi/12)*cos(2*t-5*%pi/12))

The prodand can *not* (I think) be written as trig(2^(n+1))/trig(2^n). But the rhs, which is of the form f(t) f(2t), reveals the trick*: Do the odd and even terms separately, making the lhs *two* products which actually *can* be put in the form trig(4^(n+1))/trig(4^n). So we have to try bisecting and trisecting (etc., 'til when?) products that resist telescopy. This might even handle such grotesqueries as

prod((cos(t/2^n)+cos(%pi*2^n/5))/(cos(%pi*2^n/5)+1),n,1,inf) = (1-1/sqrt(5))*cos(t)+1/sqrt(5)

And speaking of cos(pi/5), can anyone prove this little goodie?:

sum(k*fib(k+1)*binom(-n,n-k),k,1,n) = n

This is large integers summing to a small one. E.g., for n=9, 495 - 2448 + 6615 - 12870 + 19800 - 25740 + 27027 - 25740 + 12870 = 9 Note all the suggestive coincidences. But they don't last! 1584 - 9790 + 32670 - 77792 + 147147 - 234234 + 320320 - 388960 + 393822 - 369512 + 184756 = 11 Replacing fib(k) with q^k gave no hint of a degenerate bibasic sum.
--rwg INTERMORAINIC RECRIMINATION (feuding glaciologists) *(divulgent infinite product)
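rwg's little goodie is easy to check numerically (a proof is another matter). A Python sketch, using binomial(-n, m) = (-1)^m * C(n+m-1, m); the helper names are mine:

```python
from math import comb

def fib(k):
    # fib(1) = fib(2) = 1
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

def neg_binom(n, m):
    # binomial(-n, m) for integer m >= 0
    return (-1)**m * comb(n + m - 1, m)

def goodie(n):
    # sum(k * fib(k+1) * binomial(-n, n-k), k, 1, n)
    return sum(k * fib(k + 1) * neg_binom(n, n - k) for k in range(1, n + 1))
```

The k = n and k = n-1 terms for n = 9 are 9*55*1 = 495 and 8*34*(-9) = -2448, matching the example above.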
An interesting variation on the Pitch Drop Experiment http://www.physics.uq.edu.au/physics_museum/pitchdrop.shtml might be a sealed barrel or jug half-full of pitch rolling down a gentle incline. I'm trying a similar experiment with bottles of "gel-caps" (or "soft-gels") of oil-soluble vitamins (e.g., E, A, or lycopene) which adhere when left undisturbed, but gradually separate under very light force, making little clicks and rattles minutes after their bottle is tilted. The problem is to get the capsules detaching one or two at a time--too steep an incline "liquefies" the whole inventory and the bottle accelerates. Likewise too full or too empty a bottle, so a math-fun tie-in is to find, given the bottle weight and density of the "fluid", the fill depth which minimizes the height of the center of gravity. --rwg
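For the math-fun tie-in, a standard fact worth recalling: at the optimal fill the center of gravity sits exactly at the fluid surface (if it sat above, draining would lower it; if below, adding fluid would). A numerical sketch for a thin-walled upright cylinder; all masses and dimensions below are made-up illustrative values, not anything measured from actual bottles:

```python
import math

# Assumed model: container of mass M (kg), height H (m), CoG at H/2,
# uniform cross-section A (m^2); "fluid" of density RHO (kg/m^3); fill depth d.
M, H, A, RHO = 0.1, 0.2, 0.003, 800.0

def cog(d):
    # height of the combined center of gravity at fill depth d
    m_f = RHO * A * d
    return (M * H / 2 + m_f * d / 2) / (M + m_f)

# ternary search for the minimizing depth on [0, H] (cog is unimodal)
lo, hi = 0.0, H
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if cog(m1) < cog(m2):
        hi = m2
    else:
        lo = m1
d_star = (lo + hi) / 2

# closed form from setting cog(d) = d: RHO*A*d^2 + 2*M*d - M*H = 0
d_closed = (math.sqrt(M * M + RHO * A * M * H) - M) / (RHO * A)
```

The "CoG at the surface" condition gives the quadratic solved in d_closed.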
Where's the Pitch Drop Webcam ? If people are willing to watch cheese ripen (http://cheddarvision.tv/), they'd certainly be fascinated by pitch drops. Here's why we don't have a Glass Drop Experiment: http://en.wikipedia.org/wiki/Glass At 02:38 AM 4/12/2009, rwg@sdf.lonestar.org wrote:
mms://drop.physics.uq.edu.au/PitchDropLive On Mon, Apr 13, 2009 at 12:21 PM, Henry Baker <hbaker1@pipeline.com> wrote:
_______________________________________________ math-fun mailing list math-fun@mailman.xmission.com http://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun
-- Mike Stay - metaweta@gmail.com http://math.ucr.edu/~mike http://reperiendi.wordpress.com
I give up. After jacking the angle under the vitamin E a fraction of a degree every few hours, a few pills broke loose about 15 min after the last jack (15.0 deg, almost sliding), and a few seconds later a few more, and then more, and the bottle took off. But not rattling a "liquefied" inventory, as I expected. Apparently the pills spread out along the wall of the bottle to form a cylindrical crescent, thereby raising the center of gravity enough to roll nonviscously. I think this would also happen with a drum of pitch if the incline were too steep. A sort of time-bomb. --rwg PS, corrections to my last bicycle gears mail: P D C B ***>*** E-P F G ... . The *only* purpose of the [...] But the hardest part is to figure out how ***to*** reduce the [...] STREPTOMYCIN PYCNOMETRIST
We have a 27-bit unsigned word. Minimizer chooses one of the binom(27,10) combinations of 17 ones and 10 zeros. Maximizer then chooses the max of the 27 cyclic permutations. Value =? --rwg HORSEWOMEN HOMEOWNERS SOKEMANRIES NOISEMAKERS STRATOSPHERIC ORCHESTRA PITS
110110110110110110110110100 in binary, or 6*(8^8 - 1)/(8-1) - 2 = 14380468 in decimal. Uh --- why is that hard? WFL On 4/30/09, rwg@sdf.lonestar.org <rwg@sdf.lonestar.org> wrote:
I'm not sure that's correct. Wouldn't 110110110110110110110101010 be better? I haven't put any real effort into optimizing this, so I'm sure it's not the best, but I'm pretty sure it's better than Fred's. Fred could rebut me by playing Maximizer and permuting it to something bigger than his example. On Thu, Apr 30, 2009 at 3:49 AM, Fred lunnon <fred.lunnon@gmail.com> wrote:
Because that's not the answer :) We can solve the general problem recursively. Suppose there are n 1's and k 0's. The minimizer is effectively dividing the 1's into exactly k groups, and the maximizer is choosing a cyclic order for the sizes of these groups (note that the maximizer always chooses a cyclic permutation for which the high bit is 1 and the low bit is 0). I claim that with optimal play the only group sizes are [n/k] = floor(n/k) and [n/k]+1. First, one of the groups must have size at least [n/k], or there would be fewer than n 1's. Second, the minimizer can ensure that no group has size greater than [n/k]+1, and therefore will do so. Finally, suppose some group has size x < [n/k], which means that some other group has size [n/k]+1. After the maximizer plays there will be a group of size [n/k]+1 to the left (higher) than the group of size x, so the sizes will be ..... [n/k]+1 [n/k] [n/k] ... [n/k] x .... but then the minimizer could instead replace the [n/k]+1 group with one of size [n/k] and the x group with one of size x+1 <= [n/k], resulting in a strictly lower score for the maximizer. It follows that the minimizer always creates (n mod k) groups of size [n/k]+1 and (-n mod k) groups of size [n/k]. So... represent the groups of size [n/k]+1 by '1' and the groups of size [n/k] by '0', and we have the original problem with (n mod k) 1's and (-n mod k) 0's. For the specific posed question, we have (n,k) = (17,10). The minimizer chooses groups of size 1 and 2, reducing to the (7, 3) problem. For this, in turn, the minimizer chooses groups of size 2 and 3, reducing to the (1, 2) problem which has answer 100. Working backwards, 100 => 322 => 1110110110 => 2221221221 => 110110110101101101011011010 J.P. On Thu, Apr 30, 2009 at 3:49 AM, Fred lunnon <fred.lunnon@gmail.com> wrote:
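J.P.'s recursion is easy to mechanize and to check against brute force for small cases. A Python sketch (the function names are mine):

```python
from itertools import combinations

def solve(n, k):
    # Optimal-play value with n ones and k zeros, as a bit string: the
    # minimizer splits the ones into k groups of size n//k or n//k + 1,
    # then the arrangement of the two group sizes is the same game again.
    q, m = divmod(n, k)
    if m == 0:
        return ("1" * q + "0") * k
    sub = solve(m, k - m)               # '1' = big group, '0' = small group
    big, small = "1" * (q + 1) + "0", "1" * q + "0"
    return "".join(big if c == "1" else small for c in sub)

def brute(n, k):
    # min over all placements of the k zeros of the max over cyclic rotations
    length, best = n + k, None
    for zeros in combinations(range(length), k):
        bits = ["1"] * length
        for z in zeros:
            bits[z] = "0"
        s = "".join(bits)
        v = max(int(s[i:] + s[:i], 2) for i in range(length))
        best = v if best is None else min(best, v)
    return best
```

solve(17, 10) reproduces 110110110101101101011011010 via the chain 100 => 1110110110 shown above.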
J.P.Grossman> Because that's not the answer :)
Elegant. But then (17/27,reverse(makelist(floor((k+1)*%%)-floor(k*%%),k,0,26))) [1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0] (Doug Hofstadter calls these "eta sequences".)
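In Python, the Beatty-difference ("eta") sequence for 17/27, reversed, does reproduce J.P.'s optimal string exactly; a sketch of rwg's Macsyma one-liner, using integer arithmetic to avoid float rounding at the floors:

```python
def eta(p, q):
    # first differences of floor(k*p/q), k = 0..q-1 (exact integer floors)
    return [((k + 1) * p) // q - (k * p) // q for k in range(q)]

eta_string = "".join(str(d) for d in reversed(eta(17, 27)))
```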
It's a simplification of a problem where the function to be minimaxed is not quite monotone on the bit strings. --rwg POLYESTERS PRESYSTOLE PROSELYTES PTERYLOSES
On 5/6/09, rwg@sdf.lonestar.org <rwg@sdf.lonestar.org> wrote:
By now I should know better than to tangle with one of RWG's innocent-looking gotchas ... WFL
I thought the appearance of 2s (or 3s ...) in the coefficients of factor(x^n-1) was equivalent to >= three (or four ...) distinct primes dividing n, but it seems they can't all be 3 mod 4 since n=231 fails. Puzzle: find a trinomial with maximal |coefficient| < some coefficient of a(n irreducible) factor. --rwg RESINACEOUS COENURIASES DRACONTIASES STENOCARDIAS Biodiversity dept: Dracontiasis and coenuriasis are unbelievably icky parasitisms, but that's technically wrong since ick is a parasitism of fish! Main Entry:dracontiasis : infestation with or disease caused by the Guinea worm Main Entry:guinea worm Function:noun Usage:often capitalized G : a slender nematode worm (Dracunculus medinensis) attaining a length of several feet, occurring as an adult in the subcutaneous tissues of man and various mammals in parts of Africa and other warm countries, and having a larva that develops in small freshwater crustaceans (as cyclops) and when ingested with drinking water passes through the intestinal wall and tissues to lodge beneath the skin of a mammalian host and there mature Webster spares us their reproductive revels, which are worthy of an Aliens movie. Main Entry:coenurus Pronunciation:s**n(y)*r*s, s*- Function:noun Inflected Form:plural coenuri \-*r*, -(*)r*\ Etymology:New Latin, from coen- + -urus : a complex tapeworm larva growing interstitially in vertebrate tissues and consisting of a large fluid-filled sac from the inner wall of which numerous scolices develop see GID, MULTICEPS; Main Entry:gid Pronunciation:*gid Function:noun Inflected Form:-s Etymology:back-formation from 1giddy : a disease principally affecting sheep that is caused by the presence in the brain of the coenurus of a tapeworm (Multiceps multiceps) of the dog and related carnivores and is characterized by cerebral disturbances, dilated pupils, dizziness and circling movements, emaciation, and usually death called also sturdy, turn-sick, waterbrain
Puzzle: find a trinomial with maximal |coefficient| < some coefficient of a(n irreducible) factor.
x^70-x^35+1 has irreducible factor x^48-x^47+x^46+x^43-x^42+2*x^41-x^40+x^39+x^36-x^35+x^34-x^33+x^32 -x^31-x^28-x^26-x^24-x^22-x^20-x^17+x^16-x^15+x^14-x^13+x^12+x^9 -x^8+2*x^7-x^6+x^5+x^2-x+1 Note the coefficient of x^41 and also of x^7 is 2.
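Edwin's factor is (after x -> -x) the 105th cyclotomic polynomial, the smallest-index cyclotomic with a coefficient of absolute value 2. That can be checked from scratch with exact integer polynomial arithmetic; a self-contained Python sketch (polynomials as ascending coefficient lists, helper names mine):

```python
def pmul(a, b):
    # polynomial product over Z
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] += ai * bj
    return r

def pdivmod(a, b):
    # long division; exact over Z when b is monic
    r, db = list(a), len(b) - 1
    q = [0] * (len(a) - db)
    for i in range(len(q) - 1, -1, -1):
        c = r[i + db]
        q[i] = c
        for j, bj in enumerate(b):
            r[i + j] -= c * bj
    return q, r[:db]

def x_n_minus_1(n):
    return [-1] + [0] * (n - 1) + [1]

# Phi_105 by the Moebius formula:
# Phi_105 = (x^105-1)(x^3-1)(x^5-1)(x^7-1) / ((x-1)(x^15-1)(x^21-1)(x^35-1))
num = [1]
for n in (105, 3, 5, 7):
    num = pmul(num, x_n_minus_1(n))
den = [1]
for n in (1, 15, 21, 35):
    den = pmul(den, x_n_minus_1(n))
phi105, rem = pdivmod(num, den)
assert not any(rem)

f = [c * (-1) ** i for i, c in enumerate(phi105)]    # Edwin's factor, Phi_105(-x)
trinomial = [1] + [0] * 34 + [-1] + [0] * 34 + [1]   # x^70 - x^35 + 1
quot, rem2 = pdivmod(trinomial, f)
```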
On Fri, May 8, 2009 at 1:04 AM, <rwg@sdf.lonestar.org> wrote:
2*x^5 - 5*x^2 + 3 = (2*x^3 + 4*x^2 + 6*x + 3)*(x-1)^2 5 < 6. Jim Buddenhagen
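Jim's example is quick to verify by expanding; a minimal Python sketch (ascending coefficient lists):

```python
def pmul(a, b):
    # polynomial product, ascending coefficient lists
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] += ai * bj
    return r

cubic = [3, 6, 4, 2]             # 2x^3 + 4x^2 + 6x + 3
square = pmul([-1, 1], [-1, 1])  # (x - 1)^2
product = pmul(cubic, square)
```

The trinomial's largest |coefficient| is 5; the cubic factor carries a 6.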
Edwin Clark>
Right, equivalently x^70+x^35+1 = (x^(3*5*7)-1)/(x^(5*7)-1). More surprising (to me), your factor/.x->-x, 1+x+x^2-x^5-x^6 -2*x^7 -x^8-x^9+x^12+x^13+x^14+x^15+x^16+x^17-x^20-x^22-x^24-x^26 -x^28+x^31+x^32+x^33+x^34+x^35+x^36-x^39-x^40 -2*x^41 -x^42-x^43+x^46+x^47+x^48, divides x^245 + x^70 + 1. I.e., (x^2+x+1)*(x^5-x^4+x^2-x+1) = x^7+x^2+1.
JamesB>
Wow, I missed that one! --rwg GASTROPTOSES STORAGE SPOTS SIDECOUPLES PEDICULOSES NEOCLASSIC CALCINOSES
Rich just pointed out to me that
product(2*cos(3*t/2^n)+sqrt(7)*sin(t/2^n)-cos(t/2^n),n,1,inf) = (2*sin(3*t)+sin(t))/sqrt(7)+cos(t)
is equivalent to prod((x^2^k-%e^(3*%i*%pi/7))*(x^2^k-%e^(5*%i*%pi/7))*(x^2^k+%e^(6*%i*%pi/7)),k,0,n-1) = (x^2^n+%e^(3*%i*%pi/7))*(x^2^n+%e^(5*%i*%pi/7))*(x^2^n-%e^(6*%i*%pi/7))/((x+%e^(3*%i*%pi/7))*(x+%e^(5*%i*%pi/7))*(x-%e^(6*%i*%pi/7))) n - 1 3 i pi 5 i pi 6 i pi /===\ k ------ k ------ k ------ | | 2 7 2 7 2 7 | | (x - e ) (x - e ) (x + e ) = | | k = 0 3 i pi 5 i pi 6 i pi n ------ n ------ n ------ 2 7 2 7 2 7 (x + e ) (x + e ) (x - e ) ----------------------------------------------- 3 i pi 5 i pi 6 i pi ------ ------ ------ 7 7 7 (x + e ) (x + e ) (x - e ) ! Trying to derive Rich's observation in Mma 7.0, In[1]:= Factor[((x*(Sqrt[7]*I - 1))/2) + ((x^2*(-Sqrt[7]*I - 1))/2) + x^3 + 1, Extension -> (-1)^(1/7)] Out[1]= -(I ((-93222968909115529994920 + 20924612493340656725134 I) + (86082923884130960518872 + 357456564149362034379738 I) (-1)^( 1/7) + (279230634648179756081536 - 254577319249669754726972 I) (-1)^( 2/7) - (128909861890973390508704 + 19261800404123111281744 I) (-1)^( 3/7) - (262911491903882522699168 + 1717802482722402797698 I) (-1)^( 4/7) + (45621554544051365632984 + 273756075804862968415245 I) (-1)^( 5/7) - (81719779538053027762566 - 18851209912033826322536 I) Sqrt[ 7] + (103154450766576283430054 + 105678145987231257343688 I) (-1)^(1/7) Sqrt[ 7] + (59292997013846884998120 - 16002776632415384530112 I) (-1)^(2/7) Sqrt[ 7] - (23383123842825850913776 + 81972872431225606590752 I) (-1)^(3/7) Sqrt[ 7] - (87232069186717876328194 - 60789769565100139284672 I) (-1)^(4/7) Sqrt[ 7] + (13861719860507037285844 + 61026548391231898078632 I) (-1)^(5/7) Sqrt[7] - 9489147790175370672293 I x) ((-231095869671025050999320 - 29770809283501669680960 I) + (80599420984539853865720 + 866974483270444659023770 I) (-1)^( 1/7) + (699268986371722210933416 - 527606990314207937182708 I) (-1)^( 2/7) - (253276297158528052883264 + 96224042577750374127857 I) (-1)^( 3/7) - (633197087550431243870744 + 67151907184397843513304 I) (-1)^( 4/7) + (77313257087604747957600 + 638476440045894637928148 
I) (-1)^( 5/7) - (190459539326979985517456 - 20473275286207017234552 I) Sqrt[ 7] + (200807251268965307405146 + 255243231270689268830856 I) (-1)^(1/7) Sqrt[ 7] + (161654254346412719028592 - 8037913225998504703144 I) (-1)^(2/7) Sqrt[ 7] - (33708196756104193101296 + 216156877780788618401216 I) (-1)^(3/7) Sqrt[ 7] - (208290422612047683845544 - 118883493814378106221976 I) (-1)^(4/7) Sqrt[ 7] + (19030177672653028736736 + 139410137615409039410592 I) (-1)^(5/7) Sqrt[7] + 9489147790175370672293 I x) ((-247532495673239390006288 - 74063007060472274128561 I) + (3995463777580813664096 + 1096160552562377585772601 I) (-1)^( 1/7) + (920251889666913795020568 - 652529069234524320640525 I) (-1)^( 2/7) - (295647348213202062228832 + 95401146337229889537207 I) (-1)^( 3/7) - (797133471283395593852552 + 152391977716105965086363 I) (-1)^( 4/7) + (74489256912699605780152 + 822156194011111601909433 I) (-1)^( 5/7) - (245298890246438957961048 - 13932194855161190302192 I) Sqrt[ 7] + (242227830288253477199700 + 312940817233212567451072 I) (-1)^(1/7) Sqrt[ 7] + (196234992398398299759540 + 24422733428319569334824 I) (-1)^(2/7) Sqrt[ 7] - (22473251874752216955528 + 290128839578604404880864 I) (-1)^(3/7) Sqrt[ 7] - (264124256011717110056402 - 148905955741245735693992 I) (-1)^(4/7) Sqrt[ 7] + (19312095402862888770284 + 168958679078555069590216 I) (-1)^(5/7) Sqrt[7] + 9489147790175370672293 I x))/ 854440119369967104087688437612993743725942514691531406700822737757 whose individual factors seem beyond its ability to simplify! Macsyma just croaks "Quotient by zero", which is not surprising since there are many nonobvious zeros lurking: 6 %i %pi 5 %i %pi -------- -------- 7 7 (sqrt(7) %i - 1) %e - (sqrt(7) %i + 1) %e (d8) --------------------------------------------------------- 2 4 %i %pi -------- 7 + %e - 1 (c9) EXPAND(DFLOAT(%)); (d9) 1.11022302462516d-16 %i --rwg Drat! The lycopene just took off and crashed from the same incline where the vitamin E hasn't budged.
Converting to "standard form" (the quotient for two values of t recovers the finite version),

prod(8*cos((t/(2^k))-((3*%pi)/7))*sin((t/(2^k))-((5*%pi)/14))*sin((t/(2^k))-((3*%pi)/14)),k,1,inf) = -((8*sin(t-((3*%pi)/7))*cos(t-((5*%pi)/14))*cos(t-((3*%pi)/14)))/(sqrt(7)))

Now it looks obvious, and it sort of is. If we rewrite as Prod cos cos cos = sin sin sin, then we have cos cos cos = (sin 2) (sin 2) (sin 2)/sin sin sin and the phases simply permute when doubled mod 2pi. Analogously

prod(8*cos(x/(-2)^k-2*%pi/9)*cos(x/(-2)^k+%pi/9)*cos(x/(-2)^k+4*%pi/9),k,1,inf) = -8*sin(x-2*%pi/9)*sin(x+%pi/9)*sin(x+4*%pi/9)/sqrt(3)

And, of course

prod(32*sin(t/(-2)^n-5*%pi/22)*cos(t/(-2)^n-2*%pi/11)*sin(t/(-2)^n-%pi/22)*cos(t/(-2)^n+%pi/11)*cos(t/(-2)^n+4*%pi/11),n,1,inf) = -32*cos(t-5*%pi/22)*sin(t-2*%pi/11)*cos(t-%pi/22)*sin(t+%pi/11)*sin(t+4*%pi/11)/sqrt(11)

Ooh, this is fun.
prod(16*sin(t/2^n+%pi/30)*cos(t/2^n+%pi/15)*cos(t/2^n+2*%pi/15)*cos(t/2^n+4*%pi/15),n,1,inf) = 16*cos(t+%pi/30)*sin(t+%pi/15)*sin(t+2*%pi/15)*sin(t+4*%pi/15)

prod(-16*cos(t/(-2)^n-2*%pi/15)*sin(t/(-2)^n-%pi/30)*cos(t/(-2)^n+%pi/15)*cos(t/(-2)^n+4*%pi/15),n,1,inf) = -16*sin(t-2*%pi/15)*cos(t-%pi/30)*sin(t+%pi/15)*sin(t+4*%pi/15)

But this isn't the whole story, as it won't produce a naked t on the rhs (which we've seen), nor products with (+-3)^n, e.g. --rwg SOCIAL BRETHREN BRONCHIAL TREES CAMPHOR TREE PERCHROMATE UNRESTRAINED SATURNINE RED TRAINED NURSE
[...] But this isn't the whole story, as it won't produce a naked t on the rhs (which we've seen), nor products with (+-3)^n, e.g. Aha, but
(c69) (sin(3*x)/sin(x),%%=trigcontract(factor(trigreduce(trigsimp(trigexpand(%%))))))
(d69) sin(3*x)/sin(x) = 4*cos(x - %pi/6)*cos(x + %pi/6)

so

prod((2*cos(t/3^k+%pi/4)+1)*(2*sin(t/3^k+%pi/4)-1),k,1,inf) = sqrt(2)*sin(t)+1

and

prod((2*sin(t/(-3)^k-3*%pi/14)-1)*(2*sin(t/(-3)^k+%pi/14)-1)*(2*cos(t/(-3)^k+%pi/7)-1),k,1,inf) = -8*sin(t/2-3*%pi/7)*sin(t/2+%pi/7)*sin(t/2+2*%pi/7)/sqrt(7)

--rwg
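The first 3^k identity telescopes cleanly: with sin(3x) = sin(x)(2 cos(2x) + 1) and cos(3x) = cos(x)(2 cos(2x) - 1), the k-th factor pair collapses to (sin(t/3^(k-1)) + 1/sqrt(2))/(sin(t/3^k) + 1/sqrt(2)). A quick numerical check in Python:

```python
import math

def trip_prod(t, terms=40):
    # partial product of (2 cos(t/3^k + pi/4) + 1)(2 sin(t/3^k + pi/4) - 1);
    # it telescopes to (sin t + 1/sqrt(2)) / (sin(t/3^terms) + 1/sqrt(2))
    p = 1.0
    for k in range(1, terms + 1):
        u = t / 3**k + math.pi / 4
        p *= (2*math.cos(u) + 1) * (2*math.sin(u) - 1)
    return p
```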
I've been telling people that the ovoid hole in the Arnold puzzle is a disappointingly nondescript image of an ellipse under a homographic transformation, but on prodding from Veit, found a surprisingly simple form, a/((b*i+c)*(i*sin(t)+d*cos(t)-1/4)+1), where a, b, c, and d are classified under the technology export act. But it occurred to me that my rattleback (a solid plastic lifesize statue of a dead banana slug, with the puzzling ability to spin only counterclockwise on a flat surface) might be a segment of a torus, which takes seven points to determine. And maybe even the Arnold cavity is a toric section. Alas, it isn't, but it is so close that the laser program would describe the exact same polygon, so I could claim it *is* a toric section. ("I planned it all along.") Unfortunately, the "minor" radius is nearly twice the major, so it's an ATRESIC (spindle) TORUS and needs a RESUSCITATOR. --rwg
On 4/15/09, rwg@sdf.lonestar.org <rwg@sdf.lonestar.org> wrote:
... But it occurred to me that my rattleback (a solid plastic lifesize statue of a dead banana slug, with the puzzling ability to spin only counterclockwise on a flat surface) might be a segment of a torus, which takes seven points to determine.
If it really is bananoid, it's more likely a Dupin cyclide, with freedom 9. These are quite easy to test for: in Lie-sphere (hexaspherical) coordinates they are quadric reguli generated by three spheres. Incidentally, regarding the mysterious Berger cyclides with 6 systems of circles, I'm tempted to conclude that they were probably just some kind of myth-translation. With the exception of trivial special cases such as (double) planes and spheres, or (super-dense) lines and circles, every Dupin cyclide is either a torus, circular cone, or cylinder, or else some Moebius (conformal) inversion of one. In particular, a general Dupin cyclide also possesses just two Villarceau circle systems, along with its obvious two generator circle systems. WFL
It rectifies time! I.e., the time-reversal of the rotation is unphysical.
As mentioned here a few years ago, and also in http://mathworld.wolfram.com/CassiniOvals.html, Cassini's ovals *are* toric sections. Pic: http://www.tweedledum.com/rwg/cassini.html . --rwg SECTIONED TORUS DEUTEROTONICS
On Apr 16, 2009, at 7:06 AM, rwg@sdf.lonestar.org wrote:
The rectification also takes time -- it is not instantaneous. Have you tried rotating in the unstable sense on a very low friction surface? I expect this will increase the reversal time. Veit
The rectification also takes time -- it is not instantaneous. Have you tried rotating in the unstable sense on a very low friction surface? I expect this will increase the reversal time.
Veit
Spoilsport. Yeah, clearly in the limit of zero friction, it will spin both ways. As usual, friction supplies the time arrow. I have to believe this thing was discovered rather than invented. Reject surfboard? Lopsided kayak? Anyone know? --Bill
Bill asked:
I have to believe this thing was discovered rather than invented. Reject surfboard? Lopsided kayak? Anyone know?
According to Wikipedia: "The antiquarian word "celt" (the "c" is pronounced as "s") describes adze-, axe-, chisel- and hoe-shaped lithic tools and weapons. The first modern descriptions of these celts were published in the 1890s when Sir Gilbert Thomas Walker FRS wrote his 'On a curious dynamical property of celts' for the Proceedings of the Cambridge Philosophical Society in Cambridge, England, and 'On a dynamical top' for the Quarterly Journal of Pure and Applied Mathematics in Somerville, Mass."
Gene's Gaussian quadrature observations largely trivialize Simpson, but we still have the mystery of how 1 4 1 (3rd order) beats 7 32 12 32 7 (5th order) on 4/(1+x^2).
rcs>Maybe 1/(1+x) would be a better test function, since 4/(1+x^2) has
only even-exponent terms in the power series? Before even trying it, I'll say no, since we're not 0-centered. E.g., we could pretend we did 1/(1 + i x) and took the realpart afterwards. OK, now let's compute

integrate(1/(x + 1), x, 0, 1) = log(2) = K(0, 1, 2, 3, 1, 6, 3, 1, 1, 2, 1, 1, 1, 1, 3, 10, 1, 1, 1, 2, 1, 1, 1, 1, 3, 2, 3, 1, 13, 7, 4, 1, 1, 1, 7, 2, 4, ...)
by averaging 1/(1+x) over [0,1]. With nine samples, Thomas gives cf(''(makelist(1/(1+x),x,(0..8)/8).[1,4,2,4,2,4,2,4,1]/24))
[0, 1, 2, 3, 1, 6, 4, 1, 1, 2, 2, 5, 3, 1, 1, 1, 1, 1, 2]
and with *ten*, Homer gives cf(''(makelist(1/(1+x),x,(0..9)/9).[1,3,3,2,3,3,2,3,3,1]/24))
[0, 1, 2, 3, 1, 6, 5, 8, 2, 1, 2, 1, 8, 2, 1, 1, 5]
Worse, as predicted. On the other hand, the bizarreness with 7 32 12 32 7 giving -2 times the error with 1 4 1 may well depend on which function we're integrating, so 1/(1+x) will be good to test.
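The nine- versus ten-point experiment above is easy to replay outside Macsyma. Here is a Python translation (function names are mine) using exact rationals, checking that both estimates of log 2 share the continued-fraction prefix [0, 1, 2, 3, 1, 6] and that ten samples do worse than nine:

```python
import math
from fractions import Fraction

# Exact-rational replay: 9-point composite Simpson ("Thomas") and
# 10-point composite 3/8 rule ("Homer") for integrate(1/(1+x), x, 0, 1) = log(2).

def estimate(xs, ws):
    return sum(w / (1 + x) for x, w in zip(xs, ws)) / 24

thomas = estimate([Fraction(k, 8) for k in range(9)], [1, 4, 2, 4, 2, 4, 2, 4, 1])
homer = estimate([Fraction(k, 9) for k in range(10)], [1, 3, 3, 2, 3, 3, 2, 3, 3, 1])

def cf(x, n):
    """First n partial quotients of the continued fraction of x."""
    out = []
    for _ in range(n):
        q = math.floor(x)
        out.append(q)
        x = 1 / (x - q)
    return out

print(cf(float(thomas), 6))  # matches the prefix of log(2)'s cf
print(cf(float(homer), 6))
err_thomas = abs(float(thomas) - math.log(2))
err_homer = abs(float(homer) - math.log(2))
print(err_thomas < err_homer)  # True: ten samples do worse, as predicted
```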
Sho nuff. I'd've bet my Heine this was just experimenter malfeasance, but, by Gauss, it's real. First, just to prove 7 32 12 32 7 really integrates quintics,

(c41) sum(a[k]*x^k,k,0,6)
(d41) a[6] x^6 + a[5] x^5 + a[4] x^4 + a[3] x^3 + a[2] x^2 + a[1] x + a[0]
(c42) block([fancy_display:false],print(expand(integrate(%,x,0,1) < makelist(''%,x,(0..4)/4).[7,32,12,32,7]/90)),0)$
      a[6]/7 + a[5]/6 + a[4]/5 + a[3]/4 + a[2]/3 + a[1]/2 + a[0]
   <  55 a[6]/384 + a[5]/6 + a[4]/5 + a[3]/4 + a[2]/3 + a[1]/2 + a[0]

(Note that the sides differ by a[6]/(384*7).) Here is 7 32... giving more than triple the absolute error for 32 samples of 4/(1+x^2):

(c22) dfloat([makelist(4/(1+x^2),x,(0..32)/32) . ([32,12,32,7+7,32,12,32,7+7,32,12,32,7+7,32,12,32],append([7],%%,[14],%%,[7])/90/4/2),makelist(4/(1+x^2),x,(0..32)/32) . ([4,2,4,1+1,4,2,4,1+1,4,2,4,1+1,4,2,4],append([1],%%,[2],%%,[1])/12/4/2)]-%pi)
(d22) [1.18244525282307d-10, - 3.69566599545123d-11]

And for 64 samples, the error ratio gets slightly worse!

(c24) dfloat([makelist(4/(1+x^2),x,(0..64)/64) . ([32,12,32,7+7,32,12,32,7+7,32,12,32,7+7,32,12,32],append(%%,[14],%%),append([7],%%,[14],%%,[7])/90/4/2/2),makelist(4/(1+x^2),x,(0..64)/64) . ([4,2,4,1+1,4,2,4,1+1,4,2,4,1+1,4,2,4],append(%%,[2],%%),append([1],%%,[2],%%,[1])/12/4/2/2)]-%pi)
(d24) [1.84785520218611d-12, - 5.76871883595232d-13]
(c25) d22[1]/d22[2] # %[1]/%[2]
(d25) - 3.19954577680578d0 # - 3.20323325635104d0

Now try 1/(1+x).

(c26) dfloat([makelist(1/(1+x),x,(0..64)/64).([32,12,32,7+7,32,12,32,7+7,32,12,32,7+7,32,12,32],append(%%,[14],%%),append([7],%%,[14],%%,[7])/90/4/2/2),makelist(1/(1+x),x,(0..64)/64).([4,2,4,1+1,4,2,4,1+1,4,2,4,1+1,4,2,4],append(%%,[2],%%),append([1],%%,[2],%%,[1])/12/4/2/2)]-log(2))
(d26) [3.61832785955584d-12, 1.86150950209907d-9]

7 32 ... did about as well, but Simpson (1 4 1) did far worse! So the question is not why 7 32 ... is lousy, but rather why is 1 4 1 so "lucky" on 4/(1+x^2).
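The (c22)/(d22) comparison translates directly to Python (all names here are mine): composite Boole (7 32 12 32 7) versus composite Simpson (1 4 1) on 33 samples of 4/(1+x^2), whose integral over [0,1] is pi.

```python
import math

# 33 samples x_k = k/32 of 4/(1+x^2); compare composite Simpson and Boole.
f = lambda x: 4 / (1 + x * x)
xs = [k / 32 for k in range(33)]

# composite Simpson, h = 1/32: weights h/3 * [1,4,2,4,...,2,4,1]
ws_simpson = [1] + [4 if k % 2 else 2 for k in range(1, 32)] + [1]
simpson = sum(w * f(x) for x, w in zip(xs, ws_simpson)) / (3 * 32)

# composite Boole, h = 1/32: weights 2h/45 * [7,32,12,32,14,32,12,32,14,...,32,7]
ws_boole = [7] + [{1: 32, 2: 12, 3: 32, 0: 14}[k % 4] for k in range(1, 32)] + [7]
boole = sum(w * f(x) for x, w in zip(xs, ws_boole)) * 2 / (45 * 32)

err_boole, err_simpson = boole - math.pi, simpson - math.pi
print(err_boole, err_simpson, err_boole / err_simpson)
# per the transcript: ~1.18e-10, ~-3.70e-11, ratio ~ -3.2
```

The ratio near -3.2 reproduces (d25): the 5th-order rule loses to the 3rd-order one on this particular integrand.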
Obviously, we look at

(c47) ('integrate(1/(1+x^3),x,0,1),%% = expand(apply_nouns(%%)))
(d47) integrate(1/(x^3 + 1), x, 0, 1) = log(2)/3 + sqrt(3) %pi/9
(c48) dfloat([makelist(1/(1+x^3),x,(0..64)/64).([32,12,32,7+7,32,12,32,7+7,32,12,32,7+7,32,12,32],append(%%,[14],%%),append([7],%%,[14],%%,[7])/90/4/2/2),makelist(1/(1+x^3),x,(0..64)/64).([4,2,4,1+1,4,2,4,1+1,4,2,4,1+1,4,2,4],append(%%,[2],%%),append([1],%%,[2],%%,[1])/12/4/2/2)]-log(2)/3-sqrt(3)*%pi/9)
(d48) [1.22610255282041d-12, 2.60732344048442d-9]

Was Rich right about even:odd after all?

(d55) integrate(1/(x^4 + 1), x, 0, 1) = sqrt(2) (log(2 sqrt(2) + 3) + %pi)/8
(c56) dfloat([makelist(1/(1+x^4),x,(0..64)/64) . ([32,12,32,7+7,32,12,32,7+7,32,12,32,7+7,32,12,32],append(%%,[14],%%),append([7],%%,[14],%%,[7])/90/4/2/2),makelist(1/(1+x^4),x,(0..64)/64) . ([4,2,4,1+1,4,2,4,1+1,4,2,4,1+1,4,2,4],append(%%,[2],%%),append([1],%%,[2],%%,[1])/12/4/2/2)]-rhs(%))
(d56) [6.0729199446996d-14, 1.98681771035325d-9]

No, there's something magic about 1/(1+x^2) and 1 4 1 ! --rwg

PS, to its credit, Mma 7.0 gets

integrate(1/(x^n + 1), x, 0, 1) = (psi[0](1/(2 n) + 1/2) - psi[0](1/(2 n)))/(2 n)
                                = (psi[0](1/n) - psi[0](1/(2 n)) - log(2))/n

by specializing a 2F1.
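The Mma closed form can be sanity-checked with a homemade digamma (upward recurrence plus the standard asymptotic series; all names here are mine, and the n = 1, 2, 3 special values are the ones quoted above):

```python
import math

def psi0(x):
    """Digamma: shift x above 10, then use the asymptotic expansion."""
    s = 0.0
    while x < 10:
        s -= 1 / x
        x += 1
    return s + math.log(x) - 1/(2*x) - 1/(12*x**2) + 1/(120*x**4) - 1/(252*x**6)

def closed1(n):  # (psi0(1/(2n) + 1/2) - psi0(1/(2n))) / (2n)
    return (psi0(1/(2*n) + 0.5) - psi0(1/(2*n))) / (2*n)

def closed2(n):  # (psi0(1/n) - psi0(1/(2n)) - log 2) / n
    return (psi0(1/n) - psi0(1/(2*n)) - math.log(2)) / n

print(abs(closed1(1) - math.log(2)))                               # n=1: log 2
print(abs(closed1(2) - math.pi/4))                                 # n=2: pi/4
print(abs(closed1(3) - (math.log(2)/3 + math.sqrt(3)*math.pi/9)))  # n=3: (d47)
print(abs(closed1(5) - closed2(5)))                                # the two forms agree
```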
From: "rwg@sdf.lonestar.org" <rwg@sdf.lonestar.org>
To: math-fun <math-fun@mailman.xmission.com>
Sent: Saturday, March 28, 2009 1:30:31 AM
Subject: Re: [math-fun] Simpsons' rules

Gene>If I'm integrating over a circle, so that there is no boundary, then one point is as good as another,

Not quite. Suppose we seek the average value (= 1/rt3) of 1/(2+sin(2(t+f) pi)), a smooth, period 1 fn(t) phase-shifted by the fraction f of a period. Then we can write the equal weight average exactly:

F(f) := (sum(1/(sin(%pi*(2*k/n+2*f))+2),k,0,n-1))/n
      = ((sqrt(3)+2)^n-(2-sqrt(3))^n)/(sqrt(3)*(-2*cos(%pi*(2*f+1/2)*n)+(sqrt(3)+2)^n+(2-sqrt(3))^n))

(Anybody want to hire me to make their CAS do these?) which does depend (slightly) on f. (And, ironically, is free of rt3.) For n = 1..6,

1/(2 + sin(2 f pi)), 4/(7 + cos(4 f pi)), 15/(26 - sin(6 f pi)), 56/(97 - cos(8 f pi)), 209/(362 + sin(10 f pi)), 780/(1351 + cos(12 f pi)),

showing a dependence on f declining exponentially with n. Simpson's rule in this case is just F(f)/3 + 2 F(f + 1/(2 n))/3, which is only slightly better, but has 2n samples, so is really worse!

Gene>and I would expect the best approximation to be to give equal weights to each point. Now if instead, I integrate over an interval, there are some boundary effects, but deep within the interval, why would I want to do otherwise than to weight the points equally?

This is a devilish question. But I don't think you can find any way of fading from 1,4,2,4,2,..., to 3,3,3,3,3,... and back to ...,2,4,2,4,1 that will exactly integrate cubics. Simpson's is geared to polynomials vs periodics.
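A quick Python check of the equal-weight closed form (names mine): compare the raw n-point average against the exact expression and against the n = 4 entry of the list, and watch the f-dependence die off.

```python
import math

# n-point equal-weight average of 1/(2 + sin(2 pi (t + f))) over a period.
def avg(n, f):
    return sum(1 / (math.sin(math.pi * (2*k/n + 2*f)) + 2) for k in range(n)) / n

# the closed form quoted above
def closed(n, f):
    p, m = (math.sqrt(3) + 2)**n, (2 - math.sqrt(3))**n
    return (p - m) / (math.sqrt(3) * (p + m - 2*math.cos(math.pi*(2*f + 0.5)*n)))

f = 0.3
print(abs(avg(4, f) - 56/(97 - math.cos(8*math.pi*f))))  # ~1e-16: the n=4 entry
print(abs(avg(6, f) - closed(6, f)))                     # ~1e-16
print(abs(avg(40, f) - 1/math.sqrt(3)))                  # exponentially small in n
```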
--rwg

Integrating over the unit circle, let f(x) = sum(a[n] exp(i n x), n = -infinity..infinity). The exact integral is 2 pi a[0]. The uniform weight, N-point approximation is

F[N] = (2 pi / N) sum(f(x0 + 2 pi k / N), k = 1..N)
     = (2 pi / N) sum(a[n] exp(i n x0) exp(2 pi i k n / N), k = 1..N, n = -infinity..infinity)

Summing over k gives 0 unless n is a multiple of N. So

F[N] = 2 pi sum(a[N m] exp(i N m x0), m = -infinity..infinity) = 2 pi a[0] + (error).

Assuming that the Fourier coefficients fall off rapidly, the error is bounded by |a[N]| + |a[-N]|. If the function has a discontinuity, the a[n] fall off as 1/n, and it would be better to unwrap the circle and use Gaussian quadrature.

-- Gene
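Gene's aliasing argument in executable form (a sketch, all names mine): the uniform N-point average kills every Fourier mode except multiples of N, so it integrates cos(3x) exactly at N = 8, aliases it at N = 3, and converges exponentially for the smooth 1/(2 + sin x).

```python
import math

def uniform_avg(g, N, x0=0.37):
    """Equal-weight N-point average of g over one period, offset x0."""
    return sum(g(x0 + 2*math.pi*k/N) for k in range(1, N+1)) / N

g = lambda x: math.cos(3*x)
print(abs(uniform_avg(g, 8)))                     # ~0: 3 is not a multiple of 8
print(abs(uniform_avg(g, 3) - math.cos(3*0.37)))  # ~0: mode 3 aliases to cos(3 x0)

f = lambda x: 1 / (2 + math.sin(x))               # smooth; average = 1/sqrt(3)
err = lambda N: abs(uniform_avg(f, N) - 1/math.sqrt(3))
print(err(8), err(16))                            # exponential decay with N
```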
You didn't propose 6th roots. Did you have reason to suspect that they are as infertile as they seem to be? --rwg ALGORISMIC MICROGLIAS
Oops, found this in my "notes":

sqrt(4^(2/3) - sqrt(3) 5^(1/6))
  = 5^(1/3)/sqrt(6) - 5^(5/6)/(3 sqrt(2)) + 2^(1/6) sqrt(5)/3 - 2^(5/6) 5^(1/6)/3 + 2^(1/6)/sqrt(3)
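Reading the flattened display as the identity below, it checks to machine precision (Python translation, names mine):

```python
import math

# sqrt(4^(2/3) - sqrt(3) 5^(1/6)) denested into a 6th-root pentanomial
lhs = math.sqrt(4**(2/3) - math.sqrt(3) * 5**(1/6))
rhs = (5**(1/3)/math.sqrt(6) - 5**(5/6)/(3*math.sqrt(2))
       + 2**(1/6)*math.sqrt(5)/3 - 2**(5/6)*5**(1/6)/3
       + 2**(1/6)/math.sqrt(3))
print(abs(lhs - rhs))  # ~1e-16
```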
In fact, I didn't mean x0, x1, ... to be rational numbers. It could be sqrt(2) if it could disappear after squaring. Of course, for the fifth root case, sqrt(2) could appear in x0 iff it appears in x1, ..., x4, too.
I used to think that 7th roots would have more chance than 6th roots due to more variables (i.e., more flexible), but now I believe I was wrong and that pentanomial may be the limit.
It seems so incidental that one or two coeffs keep vanishing. It's like a carnival sucker game. It looks so solvable, but something is secretly forbidding it.
Later, Warut
participants (17)
- Allan Wechsler
- David Wilson
- Edwin Clark
- Erich Friedman
- Eugene Salamin
- Fred lunnon
- Hans Havermann
- Henry Baker
- J.P. Grossman
- James Buddenhagen
- Joshua Zucker
- Mike Stay
- rcs@xmission.com
- Robert Baillie
- rwg@sdf.lonestar.org
- Veit Elser
- Warut Roonguthai