[math-fun] Question about differential equations
Can someone help me out with a problem involving differential equations?

The equation u"(x) = (V(x) - E) u(x), with V(x+P) = V(x), is the Schroedinger wave equation for an electron in a 1-dimensional crystal. V(x) is the periodic potential, and E is the energy of the electron. If u1(x) and u2(x) are linearly independent solutions, so are u1(x+P) and u2(x+P). Since the solution space is 2-dimensional, it follows that

    [u1(x+P)]   [A B] [u1(x)]
    [u2(x+P)] = [C D] [u2(x)]

for some constants A, B, C, D. Call the matrix M. The absence of a u'(x) term implies that the Wronskian is constant, from which it follows that det(M) = 1. A change of basis in the solution space induces a similarity transformation on M. The differential equation is real, so either u1 and u2 are real, or u2 can be chosen to be the complex conjugate of u1. Either way, tr(M) = A+D is real. So the eigenvalues of M are either (exp(+it), exp(-it)) or (r, 1/r), with r and t real. In the first case, an electron of energy E can exist in the crystal; in the second case, the energy is forbidden. Thus the band structure is characterized by the function tr(M(E)). The allowed energy bands are given by -2 <= tr(M(E)) <= 2, and energies E for which this inequality fails lie in the gaps between bands.

Now for my question. Suppose I'm only interested in the energy bands, and I don't care about the actual wave function u(x). Then, given some value of E, is there a way to calculate tr(M(E)) without having to solve the differential equation?

Gene

__________________________________
Do you Yahoo!?
Yahoo! Mail Address AutoComplete - You start. We finish.
http://promotions.yahoo.com/new_mail
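[Whether or not a shortcut exists, tr(M(E)) can always be computed numerically by integrating the two basis solutions across one period and forming u1(P) + u2'(P). A minimal Python sketch, assuming a classical RK4 integrator; the step count and the free-particle check are illustrative choices, not from the thread:]

```python
import math

def trace_M(E, V, P, n=2000):
    """tr M(E) for u''(x) = (V(x) - E) u(x): RK4-integrate the basis
    solutions u1(0) = 1, u1'(0) = 0 and u2(0) = 0, u2'(0) = 1 across
    one period P; then tr M = u1(P) + u2'(P)."""
    h = P / n

    def f(x, y):
        # y = (u, u'); the ODE as a first-order system
        return (y[1], (V(x) - E) * y[0])

    def integrate(y):
        x = 0.0
        for _ in range(n):
            k1 = f(x, y)
            k2 = f(x + h / 2, (y[0] + h / 2 * k1[0], y[1] + h / 2 * k1[1]))
            k3 = f(x + h / 2, (y[0] + h / 2 * k2[0], y[1] + h / 2 * k2[1]))
            k4 = f(x + h, (y[0] + h * k3[0], y[1] + h * k3[1]))
            y = (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
                 y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
            x += h
        return y

    u1 = integrate((1.0, 0.0))  # (u1(P), u1'(P))
    u2 = integrate((0.0, 1.0))  # (u2(P), u2'(P))
    return u1[0] + u2[1]

def allowed(E, V, P):
    """E lies in an allowed band iff -2 <= tr M(E) <= 2."""
    return abs(trace_M(E, V, P)) <= 2.0
```

[For V = 0 this reproduces tr M = 2 cos(sqrt(E) P), a convenient sanity check.]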
Quoting Eugene Salamin <gene_salamin@yahoo.com>:
Now for my question. Suppose I'm only interested in the energy bands, and I don't care about the actual wave function u(x). Then, given some value of E, is there a way to calculate tr(M(E)) without having to solve the differential equation?
As far as I know, which may not be all that much, there is no shortcut. However, depending on your potential, you may be able to develop a good approximation and relate your answer to the stability chart of the Mathieu functions. That would work if your potential were close to a cosine and splitting the coefficient matrix would give a new equation for the coefficients of the desired solution in terms of Mathieu functions. But if the new coefficient were small, a couple of terms of the power series for the new solution might suffice. I've done this for other equations, but never tried it in the Mathieu environment. - hvm ------------------------------------------------- www.correo.unam.mx UNAMonos Comunicándonos
--- mcintosh@servidor.unam.mx wrote:
Quoting Eugene Salamin <gene_salamin@yahoo.com>:
Now for my question. Suppose I'm only interested in the energy bands, and I don't care about the actual wave function u(x). Then, given some value of E, is there a way to calculate tr(M(E)) without having to solve the differential equation?
As far as I know, which may not be all that much, there is no shortcut.
I had come to the same conclusion myself. If a simple cosine potential leads to complicated Mathieu function solutions, then the appearance of Mathieu functions in tr(M) seems unavoidable, and the full machinery of solving the differential equation should be required just to bring these Mathieu functions into existence.
However, depending on your potential, you may be able to develop a good approximation and relate your answer to the stability chart of the Mathieu functions. That would work if your potential were close to a cosine and splitting the coefficient matrix would give a new equation for the coefficients of the desired solution in terms of Mathieu functions. But if the new coefficient were small, a couple of terms of the power
series for the new solution might suffice.
I've done this for other equations, but never tried it in the Mathieu environment.
- hvm
Could you elaborate a bit more about this "splitting"?

Assuming I can't avoid solving the differential equation, rewrite it as a first order matrix differential equation. Let U(x) be the column vector [u(x) u'(x)]. Then U'(x) = K(x) U(x) with

    K(x) = [   0   0]
           [V(x)-E 1].

The wave function can be written symbolically as U(x) = G(x) U(0), but because the commutator [K(x1),K(x2)] is nonzero, the expression of G(x) in terms of V(t), 0<=t<=x, is nontrivial. Indeed, we know that if V(x) is a cosine, G(x) has Mathieu functions.

If V(x) is approximated by a step function, then we can integrate over each step: U(x2) = G0(x2-x1) U(x1), with

    G0(s) = exp(sK) = cosh(s sqrt(V-E)) + (sinh(s sqrt(V-E))/sqrt(V-E)) K.

Then, for 0 < x1 < ... < xn < x, we have the approximation

    G(x) = G0(x-xn) ... G0(x2-x1) G0(x1-0).

If we take as basis in the solution space the functions u1(x), u2(x) such that u1(0) = 1 = u2'(0), u1'(0) = 0 = u2(0), then, in the notation of my previous message, M = G(P).

Gene
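[For piecewise-constant V the propagator G0 is available in closed form, since K^2 = (V - E) I when the corrected K(x) = [0 1; V(x)-E 0] from later in the thread is used. A minimal sketch under that assumption, with 2x2 matrices as nested tuples; the test values are illustrative:]

```python
import math

def G0(s, VmE):
    """exp(s K) over an interval of length s on which V - E equals the
    constant VmE, with K = [[0, 1], [VmE, 0]]. Since K^2 = VmE * I, this is
    cosh(s*sqrt(VmE)) I + (sinh(s*sqrt(VmE))/sqrt(VmE)) K, which turns
    into cos/sin when VmE < 0."""
    if VmE > 0:
        w = math.sqrt(VmE)
        c, d = math.cosh(w * s), math.sinh(w * s) / w
    elif VmE < 0:
        w = math.sqrt(-VmE)
        c, d = math.cos(w * s), math.sin(w * s) / w
    else:
        c, d = 1.0, s  # exp(sK) = I + sK when V = E
    return ((c, d), (VmE * d, c))

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def trace_M_steps(E, steps):
    """steps = [(length, V), ...] covering one period, left to right;
    later intervals multiply on the left, as in G(x) = G0(x-xn) ... G0(x1-0)."""
    M = ((1.0, 0.0), (0.0, 1.0))
    for s, V in steps:
        M = matmul(G0(s, V - E), M)
    return M[0][0] + M[1][1]
```

[With the basis u1(0) = 1 = u2'(0), u1'(0) = 0 = u2(0), this M is exactly the G(P) above.]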
Quoting Eugene Salamin <gene_salamin@yahoo.com>:
I had come to the same conclusion myself. If a simple cosine potential leads to complicated Mathieu function solutions, then the appearance of Mathieu functions in tr(M) seems unavoidable, and the full machinery of solving the differential equation should be required just to bring these Mathieu functions into existence.
Are Mathieu functions such bad little critters? Mainly they just don't get used much unless you specialize in the things we are talking about. They're just lumpy sines and cosines.
However, depending on your potential, you may be able to develop a good approximation and [...] I've done this for other equations, but never tried it in the Mathieu environment.
Could you elaborate a bit more about this "splitting"?
It is described in some notes on complex variable theory which I am using in my class. Look at http://delta.cs.cinvestav.mx/mcintosh/oldweb/pothers.html and then at the first item, "Complex Analysis." Section 8 goes into systems of linear differential equations, while section 8.4.3 talks about the sum rule for logarithms. This is somewhat related to the Campbell-Hausdorff formula; W. Magnus has a very nice commutator identity which isn't discussed there. Further down, "Resonance in the Dirac Harmonic Oscillator" splits off the mass from the rest of the Hamiltonian, and was the most interesting of the "other equations" which I mentioned. Still further along, "Periodic Potentials in One Dimension" talks about some Mathieu-like situations, including the use of the Dirac equation. All of these items have options to either just look at them, or to copy them.
Assuming I can't avoid solving the differential equation, rewrite it as a first order matrix differential equation. Let U(x) be the column vector [u(x) u'(x)]. Then U'(x) = K(x) U(x) with
K(x) = [   0   0]   <--- 1,1 element is 1
       [V(x)-E 1].
The wave function can be written symbolically as U(x) = G(x) U(0), but because the commutator [K(x1),K(x2)] is nonzero, the expression of G(x) in terms of V(t), 0<=t<=x, is nontrivial. Indeed, we know that if V(x) is a cosine, G(x) has Mathieu functions.
This shows up in my treatment where G is written as a sum: the nice half is solved, but then it must be used to transform the second half. Sometimes the process can be repeated, usually not. That is, it always can be, but the result may not be pleasant. With the Dirac Harmonic Oscillator you get some nice spirals.
If V(x) is approximated by a step function, then we can integrate over each step. U(x2) = G0(x2-x1) U(x1), with [...]
Authors have suggested this, and I have done it myself. Even though the half intervals are readily soluble, you may need more intervals, and then the hyperbolic trigonometry becomes oppressive, and the convergence may not be as good as for other methods. But I don't know that anyone has made a really systematic study. It works well for the Kronig-Penney combs, where your lattice is made up of delta functions, and that is good enough to deduce bands.

- hvm
Quoting mcintosh@servidor.unam.mx:
^ easy to forget the twiddle!

- hvm
--- mcintosh@servidor.unam.mx wrote:
Quoting mcintosh@servidor.unam.mx:
^
easy to forget the twiddle!
- hvm
It's a very nice paper, and spells out much of what I said in these messages.

I programmed this method in Maple for the case of a symmetric square well potential, and plotted T(E) vs. E, where E is the energy and T is the trace of the propagator matrix. T(E) is continuous and oscillatory; nothing special happens when T(E) crosses between |T| < 2, where the energy is allowed, and |T| > 2, where it is forbidden. When the potential acts as a barrier, T(E) swings well beyond +-2, so that the allowed energy band is narrow. When the energy lies above all of the potential, T(E) ranges nearly, but not exactly, between -2 and +2. When an extremum goes beyond +-2, we have a small energy gap. But it may also happen that an extremum falls short of +-2, in which case we have a small momentum gap.

This method we have been discussing appears capable of analysing in great detail the electronic structure of one dimensional crystals, at least under the constraint of a given fixed potential. We have relied upon the theorem that the solution space is two dimensional, corresponding to the two arbitrary constants in the general solution of the ODE. How do we handle the three dimensional crystal? The Schroedinger equation becomes a PDE, and the general solution possesses arbitrary functions.

I am not aware of any "momentum gaps" in solid state physics, and I will try to see whether this phenomenon is possible in three dimensions.

Gene
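[For a square barrier the trace even has a closed form: multiplying the barrier and well propagators gives, for 0 < E < V0, T(E) = 2 cosh(ka)cos(kb) + ((kappa^2 - k^2)/(kappa k)) sinh(kappa a) sin(kb), with kappa = sqrt(V0-E) and k = sqrt(E). A minimal sketch; V0, a, b, and the scan energies are illustrative choices, not the parameters of the Maple computation:]

```python
import math

def T(E, V0=10.0, a=1.0, b=1.0):
    """Trace of the propagator across one period for V = V0 on [0, a] and
    V = 0 on [a, a + b], valid in the tunnelling regime 0 < E < V0."""
    kappa = math.sqrt(V0 - E)  # decay rate inside the barrier
    k = math.sqrt(E)           # wavenumber in the well
    return (2 * math.cosh(kappa * a) * math.cos(k * b)
            + (kappa ** 2 - k ** 2) / (kappa * k)
            * math.sinh(kappa * a) * math.sin(k * b))

# Coarse scan: |T(E)| <= 2 marks allowed energies, |T(E)| > 2 forbidden ones.
bands = [(E, abs(T(E)) <= 2.0) for E in (1.0, 3.0, 5.0, 7.0, 9.0)]
```

[At E = V0/2 the cross term vanishes (kappa = k), which makes a handy check on the formula.]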
Quoting Eugene Salamin <gene_salamin@yahoo.com>:
This method we have been discussing appears capable of analysing in great detail the electronic structure of one dimensional crystals, at least under the constraint of a given fixed potential. We have relied upon the theorem that the solution space is two dimensional, corresponding to the two arbitrary constants in the general solution of the ODE. How do we handle the three dimensional crystal? The Schroedinger equation becomes a PDE, and the general solution possesses arbitrary functions.
I am not aware of any "momentum gaps" in solid state physics, and I will try to see whether this phenomenon is possible in three dimensions.
The results might be a little easier to visualize in two dimensions, where the complexity inherent in three dimensions is already present. You get Brillouin zones and Fermi levels and such like. I am not sure whether there are separable potentials to get nice, even if unrealistic, models. Generally, I think, matrix Hamiltonians are used rather than the partial differential Schroedinger equation. I'm not competent to discuss this in any further detail, but I do know that bands and gaps and all that are calculable and work out quite nicely.

- hvm
Quoting mcintosh@servidor.unam.mx:
You get Brillouin zones and Fermi levels and such like. I am not sure ...
I got to thinking about what this has to do with the Mathieu equation. I guess that it has to do with the band edges: the place where the cosine in that trace formula is +1 or -1 is where the phase shift across the unit cell is zero or 180 degrees; and the Brillouin zone boundaries tell where you can get reflection from the differently oriented crystal planes, so it would be pretty much the same thing.

- hvm
I have to retract my statement concerning the existence of "momentum gaps". Plotting tr(M(E)) at higher resolution in the neighborhood of the maxima shows that the maximum is actually > 2, so we have the usual energy gap.

Gene
--- mcintosh@servidor.unam.mx wrote:
Quoting Eugene Salamin <gene_salamin@yahoo.com>:
I had come to the same conclusion myself. If a simple cosine potential leads to complicated Mathieu function solutions, then the appearance of Mathieu functions in tr(M) seems unavoidable, and the full machinery of solving the differential equation should be required just to bring these Mathieu functions into existence.
Are Mathieu functions such bad little critters? Mainly they just don't get used much unless you specialize in the things we are talking about. They're just lumpy sines and cosines.
I'm just speaking complexity theoretically here. The machinery needed to go from rational functions to logarithms is integration.
Assuming I can't avoid solving the differential equation, rewrite it as a first order matrix differential equation. Let U(x) be the column vector [u(x) u'(x)]. Then U'(x) = K(x) U(x) with
K(x) = [   0   0]   <--- 1,1 element is 1
       [V(x)-E 1].
That was indeed a typo. I should have said

    K(x) = [   0   1]
           [V(x)-E 0].
The wave function can be written symbolically as U(x) = G(x) U(0), but because the commutator [K(x1),K(x2)] is nonzero, the expression of G(x) in terms of V(t), 0<=t<=x, is nontrivial. Indeed, we know that if V(x) is a cosine, G(x) has Mathieu functions.
This shows up in my treatment where G is written as a sum: the nice half is solved, but then it must be used to transform the second half. Sometimes the process can be repeated, usually not. That is, it always can be, but the result may not be pleasant. With the Dirac Harmonic Oscillator you get some nice spirals.
If V(x) is approximated by a step function, then we can integrate over each step. U(x2) = G0(x2-x1) U(x1), with [...]
Authors have suggested this, and I have done it myself. Even though the half intervals are readily soluble, you may need more intervals, and then the hyperbolic trigonometry becomes oppressive, and the convergence may not be as good as for other methods. But I don't know that anyone has made a really systematic study. It works well for the Kronig-Penney combs, where your lattice is made up of delta functions, and that is good enough to deduce bands.
- hvm
For Kronig-Penney, or more generally, when V(x) is stepwise constant plus delta functions (finitely many of each), my method exactly integrates the Schroedinger equation. An alternative to increasing the number of intervals is to approximate V(x) as piecewise linear. Then the hyperbolic/trig functions in G0 are replaced by Airy functions. Better yet, use a professional numerical differential equation solver.

Gene
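[For the pure Kronig-Penney comb V(x) = g * sum_n delta(x - nP), the exact integration collapses to one free flight plus one delta kick per cell: the delta leaves u continuous and jumps u' by g*u, and the trace of the product works out in closed form. A sketch, with illustrative strength g and period P; for E > 0:]

```python
import math

def trace_KP(E, g=5.0, P=1.0):
    """tr M(E) for V(x) = g * sum_n delta(x - nP), E > 0.
    M = [[1, 0], [g, 1]] (delta kick) times the free-flight matrix
    [[cos(kP), sin(kP)/k], [-k sin(kP), cos(kP)]] with k = sqrt(E);
    the trace of the product is 2 cos(kP) + (g/k) sin(kP)."""
    k = math.sqrt(E)
    return 2.0 * math.cos(k * P) + (g / k) * math.sin(k * P)
```

[Setting g = 0 recovers the free-particle trace 2 cos(kP), and the g/k term shows why the gaps shrink at high energy.]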
As I remember from a quantum mechanics course I took in the 1940s, there is a way of estimating energy levels that does not solve the wave equation. I learned it only for discrete energy levels, but I'd suppose it applies to bands. It is based on the fact that the energy level is a minimum of an energy integral over all functions satisfying certain conditions. The method consists of representing a hypothetical wave function by a linear combination of terms satisfying boundary conditions. You then choose coefficients to minimize the energy which is a sum of products. As I recall it was used to estimate the energy levels of the helium atom and the ionized hydrogen molecule. You get an upper bound, and sometimes it was quite close to the experimental value.
Quoting John McCarthy <jmc@steam.Stanford.EDU>:
As I remember from a quantum mechanics course I took in the 1940s, there is a way of estimating energy levels that does not solve the wave equation. I learned it only for discrete energy levels, but I'd suppose it applies to bands.
It is based on the fact that the energy level is a minimum of an energy integral over all functions satisfying certain conditions. The method consists of representing a hypothetical wave function by a linear combination of terms satisfying boundary conditions. You then choose coefficients to minimize the energy which is a sum of products. As I recall it was used to estimate the energy levels of the helium atom and the ionized hydrogen molecule. You get an upper bound, and sometimes it was quite close to the experimental value.
I'd guess that this is the Rayleigh-Ritz principle. In terms of matrices, XtMX/XtX is less than the largest eigenvalue, so vary X and pick the biggest result; a more extensive search gives better results. Usually you pick a unit X and look at XtMX; the inequality reverses, and you get successively lower upper bounds. Think of looking for the longest semiaxis on an ellipsoid.

A variant is the Courant minimax principle, which says that the minimum over hyperplanes of the longest semiaxis in the hyperplane is an eigenvector, whose eigenvalue is the semiaxis. As I recall, these calculations are harder to manage; you are looking for saddle points on the ellipsoid, in terms of distance from the center. Use positive definite matrices so as not to worry about hyperboloids, but I'd say the critical pointology is the same.

In terms of band theory, there is a continuum of eigenfunctions, typically non-normalizable, but I wouldn't say that there isn't a relevant variational principle. Moreover, you are looking for band edges, not wave functions in the interior of the band; these have distinguishing characteristics which may help in searching for them. But I don't recall the details, just sitting here. It has to do with the Floquet multiplier being exactly 1 or -1 (that is, the trace being 2 or -2); for -1 you get subharmonics (in vibration theory).

- hvm
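[The matrix version of the Rayleigh quotient is easy to sketch. A toy example (the 2x2 "Hamiltonian" and the trial vectors are purely illustrative) showing that the quotient always lands between the extreme eigenvalues, and that minimizing over trials approaches the smallest eigenvalue from above:]

```python
def rayleigh(M, x):
    """Rayleigh quotient x^T M x / x^T x for a symmetric matrix M."""
    n = len(x)
    Mx = [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(x[i] * Mx[i] for i in range(n)) / sum(xi * xi for xi in x)

# Toy Hamiltonian with known eigenvalues 1 and 3.
H = [[2.0, 1.0], [1.0, 2.0]]

# Each trial gives an upper bound on the smallest eigenvalue; the trial
# (1, -1) happens to be the exact ground state of this H.
trials = [(1.0, 0.0), (1.0, -0.5), (1.0, -1.0)]
best = min(rayleigh(H, x) for x in trials)
```

[This is the discrete analogue of choosing coefficients in a trial wave function to minimize the energy integral, as in the helium and H2+ calculations mentioned above.]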
participants (3)

- Eugene Salamin
- John McCarthy
- mcintosh@servidor.unam.mx