[math-fun] complex hermitian matrix
--I'm not buying Veit's claims:

The definition of "random hermitian matrix" is key. If I take it to mean "pick the eigenvalues from a symmetric real probability distribution, then conjugate the resulting diagonal matrix D by a random unitary U (Haar measure) to get H = U^(-1) D U = hermitian" -- which is a perfectly good probability distribution -- then the proposed solution prob(all eigenvalues > 0) = 2^(-N) is right, regardless of what Veit says.

--Veit: The claim 2^(-N) assumes much more: that the eigenvalues are independently distributed. In fact they are very far from independent, making the actual probability much smaller: c^(-N^2). This result is asymptotic, for large N (the limit of interest in statistical mechanics). The number c is well known, but it is not 2.

As Andy pointed out, the probability measure should be invariant under arbitrary unitary transformations, i.e. M -> U M U^(-1). But the Hermitian matrices live in an N^2-dimensional space, while a unitary conjugation orbit has only N(N-1) dimensions. The extra N dimensions correspond to the eigenvalues of M.

Wigner had the idea of using the maximum-entropy probability distribution, constrained by just two properties: the expectation values of Tr M and Tr M^2. If we want the expectation value of Tr M to be zero, then our probability distribution is simply the Gaussian e^(-Tr M^2) times the unitary-invariant measure.

--that would yield 2^(-N) as above!

If you marginalize this distribution onto just the eigenvalues (i.e. integrate out the unitary transformations) you get, say in the case of N = 3 (unnormalized),

dP = e^(-E1^2-E2^2-E3^2) (E1-E2)^2 (E2-E3)^2 (E3-E1)^2 dE1 dE2 dE3.

It's the product over all eigenvalue pairs -- their differences squared -- that ruins the independence of the eigenvalue distribution. BTW, this very same distribution seems to perfectly model the distribution of Riemann zeta function zeros, but nobody understands why!
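The two notions of "random hermitian matrix" being argued over here are easy to compare numerically. Below is a minimal Monte Carlo sketch, assuming Python with numpy (the seed and sample size are arbitrary choices, not from the thread). Construction (a) is Warren's: i.i.d. symmetric eigenvalues conjugated by a Haar-random unitary, where conjugation never changes the spectrum, so prob(all > 0) is exactly 2^(-N). Construction (b) puts Gaussians on the matrix entries, the unitarily invariant ensemble Veit describes.

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 3, 200_000

# (a) Warren's construction: H = U^(-1) D U with D = diag of i.i.d. draws from
# a symmetric distribution.  Conjugation never changes the spectrum, so U need
# not even be built: each eigenvalue is positive independently with probability 1/2.
d = rng.normal(size=(trials, N))
print("construction (a):", np.mean((d > 0).all(axis=1)), " 2^-N =", 2.0**-N)

# (b) Veit's ensemble: Gaussian weight on the entries (unitarily invariant).
# Symmetrize a complex Gaussian matrix and test its spectrum directly.
hits = 0
for _ in range(trials):
    g = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    h = (g + g.conj().T) / 2
    hits += bool((np.linalg.eigvalsh(h) > 0).all())
print("construction (b):", hits / trials)
```

Construction (a) should print close to 1/8 for N = 3; construction (b) prints something noticeably smaller, which is the disagreement the rest of the thread chases down.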
On Thu, Nov 22, 2012 at 11:41 AM, Warren Smith <warren.wds@gmail.com> wrote:
As Andy pointed out, the probability measure should be invariant under arbitrary unitary transformations, i.e. M -> U M U^(-1).
No, I'm saying something much stronger. The probability measure should be invariant under arbitrary translations, M -> MN, with N hermitian. This, together with the requirements that the entire group has measure 1 and that open sets are measurable, uniquely determines a measure.

Andy
But Hermitian matrices do not make a group. (AB)* = B* A* = BA, which need not equal AB. -- Gene
On Thu, Nov 22, 2012 at 12:28 PM, Eugene Salamin <gene_salamin@yahoo.com> wrote:
But Hermitian matrices do not make a group. (AB)* = B* A* = BA, which need not equal AB.
You're right; I was thinking of unitary matrices. The set of Hermitian matrices forms neither a group nor a compact set, so it's not obvious what, if anything, is a natural measure on them.

Andy
Consider the case N = 2. A general Hermitian 2 x 2 matrix may be written as

H = [r,      p + qi]
    [p - qi, s     ]

where p, q, r, s are arbitrary real numbers.

Let's experiment. Take p, q, r, s to be random real numbers. Since multiplication by a positive scalar c gives eigenvalues of cH with the same signs, we can take the random reals to be in the interval [-1,1]. The eigenvalues of H are

lambda1 = (r + s)/2 + (1/2)*((s - r)^2 + 4*p^2 + 4*q^2)^(1/2)
lambda2 = (r + s)/2 - (1/2)*((s - r)^2 + 4*p^2 + 4*q^2)^(1/2)

After a million random choices for p, q, r, s in [-1,1] -- using the Maple command rand(-10^10..10^10)()/10.0^10 to generate the random p, q, r, s -- I get the following frequencies:

two positive eigenvalues:  0.0489030
one positive eigenvalue:   0.9021480
zero positive eigenvalues: 0.0489490

So either I'm doing something wrong, or Maple is, or the claim prob(all eigenvalues > 0) = 2^(-N) is wrong.

---Edwin
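Edwin's experiment is straightforward to replicate outside Maple. Here is a sketch of the same million draws in Python/numpy (a port under the same sampling assumptions; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# p, q, r, s uniform on [-1, 1], as in the Maple run
p, q, r, s = rng.uniform(-1.0, 1.0, size=(4, n))

root = 0.5 * np.sqrt((s - r)**2 + 4*p**2 + 4*q**2)
lam1 = 0.5*(r + s) + root
lam2 = 0.5*(r + s) - root

npos = (lam1 > 0).astype(int) + (lam2 > 0).astype(int)
for k in (2, 1, 0):
    print(k, "positive eigenvalues:", np.mean(npos == k))
```

Since this samples the same distribution, the frequencies should land near Edwin's 0.0489 / 0.9021 / 0.0489 -- in particular, well below 1/4 for two positive eigenvalues.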
Hmm, so I wonder what eigenvalue-sign frequencies would arise if, instead of the uniform distribution on [-1,1], one used the standard normal, independently for p, q, r, s.

(Also: is there a simple explanation for why Edwin found almost identical frequencies for two as for zero positive eigenvalues?)

--Dan
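Dan's first question takes one line to try; here is a sketch in Python/numpy (mine, not from the thread; seed arbitrary). One caveat worth flagging: all four coordinates ~ N(0,1) is not quite Wigner's e^(-Tr H^2) weight, because Tr H^2 = r^2 + s^2 + 2p^2 + 2q^2, so under that weight r, s should have variance 1/2 and p, q variance 1/4; the second block tries that normalization as well.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

def sign_freqs(p, q, r, s):
    root = 0.5 * np.sqrt((s - r)**2 + 4*p**2 + 4*q**2)
    lam1, lam2 = 0.5*(r + s) + root, 0.5*(r + s) - root
    npos = (lam1 > 0).astype(int) + (lam2 > 0).astype(int)
    return [float(np.mean(npos == k)) for k in (2, 1, 0)]

# Dan's variant: all four coordinates independently standard normal
print(sign_freqs(*rng.normal(size=(4, n))))

# Wigner's weight e^(-Tr H^2) = e^(-r^2-s^2-2p^2-2q^2):
# r, s get standard deviation sqrt(1/2); p, q get standard deviation 1/2
r, s = rng.normal(0.0, np.sqrt(0.5), size=(2, n))
p, q = rng.normal(0.0, 0.5, size=(2, n))
print(sign_freqs(p, q, r, s))
```

Dan's second question is answered by Edwin just below.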
If H is Hermitian with eigenvalues lambda1 and lambda2, then -H is Hermitian with eigenvalues -lambda1 and -lambda2. This accounts for the identical frequencies for two as for zero positive eigenvalues.

--Edwin
On 11/24/2012 1:16 PM, Dan Asimov wrote:
Hmm, so I wonder what eigenvalue-sign frequencies would arise if instead of the uniform distribution on [-1,1], one used the standard normal, independently for p, q, r, s.
(Also: is there a simple explanation for why Edwin found almost identical frequencies for two as for zero positive eigenvalues?)
Of course that's easy. If e is an eigenvalue, then there is another matrix with a sign change such that -e is an eigenvalue. Under any reasonable measure these two matrices are equally probable, so having 0 positive eigenvalues out of 2 is as probable as having 2 out of 2.

What is puzzling is why these are not of probability 1/4. The implication is that if one eigenvalue e1 > 0, then the other eigenvalue is more likely to satisfy e2 < 0. But suppose you generate the random Hermitian matrices by choosing the eigenvalues and then a random unitary matrix to rotate them into a random basis. In that case you would clearly get the same measure for e1, e2 as for e1, -e2. I realize this is not a proof, because the rotation doesn't necessarily give you a Hermitian matrix, only a normal matrix. But I don't think that affects the conclusion that if you restrict the random matrices to the Hermitian ones you must still get equal measure for e1, e2 and e1, -e2.

It is not correct to argue that because the eigenvalues are not independent this condition doesn't hold. It is only necessary that the *signs* be independent.

Brent
It seems that the question whether both eigenvalues are positive can be reduced to "when is lambda2 positive?": if p and/or q are too large, the discriminant gets too large and lambda2 goes negative.

So maybe we should try to take |p+qi| with uniform distribution in [0,1], and not p and q separately? Note that 4p^2 + 4q^2 = 4|p+iq|^2 is symmetric in p and q.

Christoph
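Christoph's variant is also quick to test. A sketch in Python/numpy (mine; modulus uniform on [0,1], r and s uniform on [-1,1] as before -- note only the modulus enters the eigenvalue formula, so the phase never needs to be drawn):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000

r, s = rng.uniform(-1.0, 1.0, size=(2, n))
m = rng.uniform(0.0, 1.0, size=n)            # modulus |p + qi|, per Christoph

# a uniform phase would recover p and q, but only m enters the eigenvalues:
root = 0.5 * np.sqrt((s - r)**2 + 4*m**2)    # 4p^2 + 4q^2 = 4m^2
lam1, lam2 = 0.5*(r + s) + root, 0.5*(r + s) - root

npos = (lam1 > 0).astype(int) + (lam2 > 0).astype(int)
print([float(np.mean(npos == k)) for k in (2, 1, 0)])
```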
What we are asking is: when is a Hermitian matrix positive definite? By Sylvester's criterion <http://en.wikipedia.org/wiki/Sylvester%27s_criterion> a Hermitian matrix is positive definite iff the leading principal minors are all positive. So for the 2x2 matrix

H = [r,      p + qi]
    [p - qi, s     ]

this becomes simply r > 0 and rs - (p^2 + q^2) > 0.

It should be possible to calculate the exact probability. Since I am lazy I will only do it for q = 0, which is itself interesting. In the case q = 0, H is real symmetric and the conditions become r > 0 and rs > p^2. The solution set is the region of (r,s,p)-space with r > 0, s > 0, |p| < sqrt(rs). If we consider the part of this solid lying in the box |r| <= n, |s| <= n, |p| <= n, then, if my calculations are correct, its volume is 8*n^3/9, while the volume of the whole box is (2*n)^3 = 8*n^3. The quotient is 1/9. Note that this holds no matter how large n is, so it is natural to take 1/9 as the probability that a random 2x2 symmetric matrix has two positive eigenvalues.

Note that 1/9 < 1/4. So perhaps it is not surprising that this probability is also < 1/4 for 2x2 Hermitian matrices, and that Veit is right.

--Edwin
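Edwin's 1/9 checks out both ways: the exact volume is the integral of 2*sqrt(r*s) over the positive quadrant of the box, and a Monte Carlo estimate agrees. A sketch in Python/numpy (mine; by the scaling argument above, n = 1 suffices):

```python
import numpy as np

rng = np.random.default_rng(3)

# Edwin's q = 0 case: r, s, p uniform on [-1, 1], condition r > 0 and r*s > p^2
r, s, p = rng.uniform(-1.0, 1.0, size=(3, 1_000_000))
print(np.mean((r > 0) & (r * s > p**2)), " 1/9 =", 1/9)

# Exact: volume = int_0^1 int_0^1 2*sqrt(r*s) dr ds = 2*(2/3)**2 = 8/9,
# against box volume (2*1)^3 = 8, giving the fraction (8/9)/8 = 1/9.
```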
You can make life a lot simpler by using normal distributions on the matrix elements instead of uniform ones.

First off, suppose you are interested in (the different problem of) generating random vectors [x y] with a distribution that is rotationally symmetric. You would choose independent normal distributions on x and y (rather than uniform) because, well, e^(-x^2) e^(-y^2) = e^(-x^2-y^2).

The most general probability distribution on 2x2 Hermitian matrices that is symmetric with respect to arbitrary unitary transformations is (in your notation)

dP = f(H) dp dq dr ds

where the function f is invariant, i.e. f(U H U^(-1)) = f(H) for arbitrary unitary U. A "natural" choice is f(H) = e^(-Tr H^2) = e^(-r^2-s^2-2p^2-2q^2). This is also Wigner's choice, as the distribution having maximum entropy over all distributions having a given expectation value for Tr H^2.

If you are interested in eigenvalue distributions you would prefer a different set of coordinates from p, q, r, s. Two coordinates can be the eigenvalues e1 and e2, leaving two additional "angular" coordinates. It's a relatively straightforward exercise to compute the Jacobian of the transformation, integrate out the angular coordinates, and arrive at (up to a normalization constant)

dP' = f(H) (e1-e2)^2 de1 de2

The (e1-e2)^2 is the famous "level repulsion" factor, which explains the statistics of energy level spacings and apparently also the spacing of zeroes of the zeta function. Since f(H) = e^(-e1^2-e2^2), the problem of calculating the probability that both e1 and e2 are positive is just a matter of doing some Gaussian integrals ...

-Veit
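Those Gaussian integrals close the loop. A sketch with sympy (my computation, not from the thread, so worth double-checking): integrating dP' over the positive quadrant and dividing by the integral over the whole plane gives prob(both positive) = 1/4 - 1/(2*pi), about 0.091.

```python
import sympy as sp

e1, e2 = sp.symbols('e1 e2', real=True)
w = sp.exp(-e1**2 - e2**2) * (e1 - e2)**2      # Veit's dP', unnormalized

Z    = sp.integrate(w, (e1, -sp.oo, sp.oo), (e2, -sp.oo, sp.oo))  # whole plane
Zpos = sp.integrate(w, (e1, 0, sp.oo), (e2, 0, sp.oo))            # both positive

prob = sp.simplify(Zpos / Z)
print(prob, float(prob))    # expect 1/4 - 1/(2*pi), about 0.0908
```

Note this is the answer for the Gaussian weight; Edwin's uniform-entries experiment is a different measure, so his 0.0489 need not match it.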
On 11/25/2012 10:34 AM, Veit Elser wrote:
But why not just take e1 and e2 to be independently ~ N(0,1)?

Brent
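The (e1-e2)^2 repulsion factor is exactly what that proposal drops: independent normals give sign-independent eigenvalues and hence P(both > 0) = 1/4, while Veit's coupled dP' gives the smaller 1/4 - 1/(2*pi). A small rejection-sampling sketch in Python/numpy (mine; seed and sample size arbitrary) makes the difference visible:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2_000_000

# Brent's proposal: e1, e2 independent N(0,1) -> signs are independent coins
e = rng.normal(size=(n, 2))
print("independent:    ", np.mean((e > 0).all(axis=1)))

# Veit's dP' ~ e^(-e1^2-e2^2) (e1-e2)^2: reweight the same Gaussian pairs by
# accepting each with probability proportional to (e1-e2)^2 (rejection sampling)
w = (e[:, 0] - e[:, 1])**2
keep = rng.uniform(0.0, w.max(), size=n) < w
print("with repulsion: ", np.mean((e[keep] > 0).all(axis=1)),
      " 1/4 - 1/(2*pi) =", 0.25 - 1/(2*np.pi))
```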
participants (8): Andy Latto, Dan Asimov, Eugene Salamin, meekerdb, Pacher Christoph, Veit Elser, W. Edwin Clark, Warren Smith