[math-fun] What comes after tetration?
Ak[_, 1] = 2; Ak[0, x_] := x + 2; Ak[n_, x_] := Ak[n - 1, Ak[n, x - 1]]

In[3]:= TableForm[Table[Ak[n, x], {n, 0, 3}, {x, 1, 6 - n}]]

Out[3]//TableForm=
2   4   5   6   7   8
2   4   6   8   10
2   4   8   16
2   4   16

(Rows: n = 0, 1, 2, 3; columns start at x = 1.)

Let's try bumping the row length.

In[4]:= TableForm[Table[Ak[n, x], {n, 0, 3}, {x, 1, 7 - n}]]

During evaluation of In[4]:= $RecursionLimit::reclim2: Recursion depth of 1024 exceeded during evaluation of 1-1. >>

One minus one? It's trying for 65536 by adding up 2s recursively. Let's give it a hint:

In[2]:= Ak[1, x_] := 2 x

In[3]:= TableForm[Table[Ak[n, x], {n, 0, 3}, {x, 1, 7 - n}]]

Out[3]//TableForm=
2   4   5   6    7    8    9
2   4   6   8    10   12
2   4   8   16   32
2   4   16  65536

Trying for one more x,

In[5]:= TableForm[Table[Short[Ak[n, x]], {n, 0, 3}, {x, 1, 8 - n}]]

During evaluation of In[5]:= $RecursionLimit::reclim2: Recursion depth of 1024 exceeded during evaluation of 2-1. >>

Out[5]//TableForm=
2   4   5   6    7    8    9   10
2   4   6   8    10   12   14
2   4   8   16   32   64
2   4   16  65536   2 (2 (2 (2 (2 (2 (... [hundreds of nested "2 (...)" factors, ending in MaxFormatDepthExceeded; elided] ...))))))

Time for another hint:

In[7]:= Ak[2, x_] := 2^x

In[8]:= TableForm[Table[Short[Ak[n, x]], {n, 0, 3}, {x, 1, 8 - n}]]

Out[8]//TableForm=
2   4   5   6    7    8    9   10
2   4   6   8    10   12   14
2   4   8   16   32   64
2   4   16  65536   20035299304068464649790723<<19677>>72339445587895905719156736

(Insanely) trying for another row,

In[9]:= TableForm[Table[Short[Ak[n, x]], {n, 0, 4}, {x, 1, 8 - n}]]

$RecursionLimit::reclim2: Recursion depth of 1024 exceeded during evaluation of 3-1. >>

(See nice try <http://gosper.org/ack.png>. Hey, where's the diagonal scroll bar?)

--rwg
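One way to get the whole table without hand-feeding further hints (a sketch only; the lowercase ak and its closed forms are mine, not rwg's, though the 2x and 2^x rules are exactly the hints used above) is to give the recursion closed forms through tetration, so only pentation and above still recurse:

(* Sketch: rwg's recursion with closed forms supplied for n = 1, 2, 3,
   so only the rows at and above pentation still recurse deeply. *)
ClearAll[ak];
ak[_, 1] = 2;
ak[0, x_] := x + 2;                (* add 2 *)
ak[1, x_] := 2 x;                  (* multiply (the first hint) *)
ak[2, x_] := 2^x;                  (* exponentiate (the second hint) *)
ak[3, x_] := Nest[2^# &, 1, x];    (* tetrate: a tower of x 2s *)
ak[n_, x_] := ak[n - 1, ak[n, x - 1]]

With these rules, Table[ak[n, x], {n, 0, 3}, {x, 1, 8 - n}] reproduces the Out[8] values instantly, and row n = 4 is what comes after tetration, namely pentation with base 2: ak[4, 1] = 2, ak[4, 2] = 4, ak[4, 3] = 65536, while ak[4, 4] would be a tower of 65536 2s, far too large to evaluate.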
Suppose X, Y, Z, W are real random variables with a joint distribution such that each one has a finite mean and standard deviation. Suppose that all pairs of these random variables have the same correlation coefficient:*

    R = rho(X,Y) = rho(X,Z) = rho(X,W) = rho(Y,Z) = rho(Y,W) = rho(Z,W).

Find the minimum possible value of R.

—Dan

____________________________________
* The definition of the correlation coefficient rho(U,V) of two random variables U and V is the expected product of their standardizations:

    rho(U,V) = E( ((U - mu_U)/sigma_U) * ((V - mu_V)/sigma_V) ),

where E is expectation, mu denotes mean, and sigma denotes standard deviation.
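As a quick illustration of the footnoted definition (my example, not part of the puzzle): for a bivariate normal with covariance matrix {{4, 1}, {1, 9}} the standard deviations are 2 and 3, so the expected product of standardizations should be 1/(2*3) = 1/6, and it is:

(* Illustration only: rho(U,V) as the expected product of standardizations. *)
dist = MultinormalDistribution[{0, 0}, {{4, 1}, {1, 9}}];
{muU, muV} = Mean[dist]; {sU, sV} = StandardDeviation[dist];
Expectation[((u - muU)/sU) ((v - muV)/sV), {u, v} \[Distributed] dist]
(* 1/6, the same as Correlation[dist][[1, 2]] *)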
On 03/11/2015 21:01, Dan Asimov wrote:
Suppose X, Y, Z, W are real random variables with a joint distribution such that each one has a finite mean and standard deviation.
Suppose that all pairs of these random variables have the same correlation coefficient:*
R = rho(X,Y) = rho(X,Z) = rho(X,W) = rho(Y,Z) = rho(Y,W) = rho(Z,W) .
Find the minimum possible value of R.
WLOG X, Y, Z, W all have mean 0 and stddev 1, so E(XX) = E(YY) = ... = 1. We are given that E(XY) = E(XZ) = ..., and we want to know how small that common value can be. (That value will be the correlation coefficient.)

We can think of X, Y, Z, W as vectors in some Hilbert space, with E(XY) etc. being their inner products. And then of course we can look at just the space spanned by X, Y, Z, W, which we can identify with (a subspace of) R^4. So the question is: given four real 4-dimensional vectors, each of norm 1 and with all their pairwise inner products equal, how small (i.e., how large and negative) can the inner products be?

Put our vectors in a single array A; then the matrix of inner products is At.A, which we require to have 1s on the diagonal and k everywhere else, i.e. the form (1 - k)I + kM, where I is the identity and M is the 4x4 matrix that's all ones, and we want k as negative as possible. Well, At.A has to be positive semidefinite, and for the negative values of k we care about that happens iff k >= -1/3. (For -1/3 <= k <= 0 the matrix is diagonally dominant, hence PSD. For k < -1/3 the all-1s vector is an eigenvector with negative eigenvalue 1 + 3k.)

And any positive semidefinite matrix has a factorization of the form At.A, so in fact the value -1/3 is attained. I therefore claim that R = -1/3 is the best possible.

-- g
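A quick machine check of the bound (a sketch in Mathematica, using the notation above; not from the original thread): build the 4x4 Gram matrix with 1s on the diagonal and k everywhere else, and watch positive semidefiniteness fail exactly below k = -1/3.

(* Sketch: Gram matrix of four unit vectors with common inner product k. *)
gram[k_] := (1 - k) IdentityMatrix[4] + k ConstantArray[1, {4, 4}];
Eigenvalues[gram[k]]                             (* 1 + 3 k, and 1 - k three times *)
PositiveSemidefiniteMatrixQ[gram[-1/3]]          (* True *)
PositiveSemidefiniteMatrixQ[gram[-1/3 - 1/100]]  (* False *)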
Yes, you nailed it, Gareth. —Dan
Though I would add that only R^3 is necessary to find the maximum common angular separation of 4 vectors. Taking every other vertex of the cube [-1,1]^3 gets an example. —Dan
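To make the example explicit (a one-line check, not part of Dan's message): scale those four alternate cube vertices to unit length and compute their Gram matrix; every off-diagonal entry is -1/3.

(* Sketch: every-other-vertex of [-1,1]^3, normalized; the Gram matrix is v.Transpose[v]. *)
v = {{1, 1, 1}, {1, -1, -1}, {-1, 1, -1}, {-1, -1, 1}}/Sqrt[3];
v.Transpose[v]   (* 1s on the diagonal, -1/3 everywhere else *)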
On 04/11/2015 23:37, Dan Asimov wrote:
Though I would add that only R^3 is necessary to find the maximum common angular separation of 4 vectors.
Taking every other vertex of the cube [-1,1]^3 gets an example.
I came very close to just writing "Obviously it's -1/3 because the right picture is a regular tetrahedron" but thought that might be not quite rigorous enough :-). (Not quite rigorous enough to be sure I wasn't completely wrong, as well as not quite rigorous enough to satisfy Dan.)

-- g
Is -1/sqrt3 wrong? It's smaller than -1/3.

--Rich
I guess a rigorous proof does require a lemma. Which is perhaps a bit surprising.

____________________________________________________________________
Lemma:
------
Suppose vectors v_1, ..., v_n belong to some Euclidean space and have all pairwise angular separations equal to a constant, call it theta. Then the maximum such theta can be realized in R^(n-1).
____________________________________________________________________

Proof left as an exercise.

—Dan
On Nov 4, 2015, at 3:37 PM, Gareth McCaughan <gareth.mccaughan@pobox.com> wrote:
(Not quite rigorous enough to be sure I wasn't completely wrong, as well as not quite rigorous enough to satisfy Dan.)
(:-)>
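One possible route to the lemma (a sketch only, and not necessarily the intended exercise solution): at the extremal angle the Gram matrix of the unit vectors drops rank, so the configuration already fits in one fewer dimension.

% Sketch of a proof of the lemma; my argument, not necessarily Dan's intended one.
Let $v_1,\dots,v_n$ be unit vectors with $\langle v_i,v_j\rangle = \cos\theta = c$ for all $i \neq j$.
Their Gram matrix is $G = (1-c)I + cJ$ (with $J$ the all-ones matrix), whose eigenvalues are
\[
  1+(n-1)c \ \ (\text{eigenvector } (1,\dots,1)) \qquad\text{and}\qquad 1-c \ \ (\text{multiplicity } n-1).
\]
Since $G \succeq 0$, we must have $c \ge -\tfrac{1}{n-1}$, so the largest achievable $\theta$
has $c = -\tfrac{1}{n-1}$. At that value $1+(n-1)c = 0$, so $\operatorname{rank} G \le n-1$ and
the $v_i$ span a subspace of dimension at most $n-1$; hence the extremal configuration can be
realized in $\mathbb{R}^{n-1}$ (for $n=4$, the regular tetrahedron picture above).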
participants (4):
- Bill Gosper
- Dan Asimov
- Gareth McCaughan
- rcs@xmission.com