Solving Hermite’s Equation

Back to Contents

    I want to show you how to solve the second-order ordinary differential equation

(Eq’n 1)

that mathematicians associate with Charles Hermite (1822 Dec 24 – 1901 Jan 14) and I want to do it directly. I do not want to use educated guessing, the method that we usually employ in solving differential equations, but I want to use a process more akin to what we use to solve simple algebraic equations. For convenience I have written the differential operator as

(Eq’n 2)

I now want to determine the algebraic form of the function u=u(x) that will solve Equation 1.
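
For orientation, a conventional way of writing Hermite's equation and the differential operator is the following; the article's Equation 1 may normalize λ differently, since its eigenvalues later come out as λ = n + 1/2 rather than λ = n:

```latex
% Hermite's equation in a conventional normalization (the article's
% \lambda may carry a shift of 1/2 relative to this one):
\frac{d^{2}u}{dx^{2}} - 2x\,\frac{du}{dx} + 2\lambda\,u = 0,
\qquad D \equiv \frac{d}{dx}.
```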

    If Equation 1 did not involve differential operators, I could treat it as an ordinary quadratic equation, complete the square, and extract the square root. But differential operators don’t work that way, so I will have to do something different. In particular I shall use a technique that I discovered (or, more likely, rediscovered) while composing a tutorial essay on the quantum mechanical theory of an harmonic oscillator.

    Start by eliminating the middle term from Equation 1. To that end rewrite the solution function as a product,

(Eq’n 3)

and substitute that product into the left side of Equation 1 to get

(Eq’n 4)

I made no assumptions in contriving Equation 3 because I can always represent a function of one variable as a product of two other functions of that variable, even if those functions happen to consist of some function and its reciprocal.

    I want to determine the algebraic form of w(x) and I would like w(x) to have such an algebraic form that the second-order derivatives get eliminated from Equation 4, but that plan involves my knowing the explicit algebraic form of the second-order derivative of ψ, which I won’t have until I have solved Hermite’s Equation. Alternately, I can require that w(x) have such a form that it makes the terms in dψ/dx add up to a net zero. In that case I don’t need to know the explicit form of dψ/dx before I solve Hermite’s Equation and I get

(Eq’n 5)

which equation I solve by dividing it by w, multiplying it by dx, and integrating it. That procedure yields

(Eq’n 6)

whose antilogarithm gives me

(Eq’n 7)
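
The procedure just described (divide by w, multiply by dx, integrate, take the antilogarithm) can be sketched explicitly, assuming the conventional middle term -2x(du/dx) in Equation 1:

```latex
% Demand that the coefficient of d\psi/dx vanish, then separate variables:
2\frac{dw}{dx} - 2x\,w = 0
\;\Longrightarrow\;
\frac{dw}{w} = x\,dx
\;\Longrightarrow\;
\ln w = \frac{x^{2}}{2} + C
\;\Longrightarrow\;
w = e^{x^{2}/2},
```

with the integration constant absorbed into ψ.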

    Substituting from Equation 7 into Equation 3, substituting the result of that move into Equation 1, and then dividing out w yields

(Eq’n 8)

That equation has the same form as does Schrödinger’s Equation describing a simple harmonic oscillator (a small mass attached to a spring that’s attached to an effectively immovable support). Because I want a purely mathematical treatment in this case I have left out the physical factors (e.g. Planck’s constant, the mass, and the spring constant). Also, strictly speaking, Equation 8 represents the negative of Schrödinger’s Equation. Properly I should multiply it by minus one, but I want to obtain the strictly correct solution of Equation 1, so I will leave Equation 8 unmodified. To solve that equation I first define two operators:

(Eq’n 9)

and

(Eq’n 10)

If I had multiplied Equation 8 by minus one, those operators would appear with their terms reversed and the A*-operator would appear as the negative of what you see in Equation 10. I will return to that fact when I have my solution of Equation 8.
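
For comparison, the ladder operators in the conventional sign convention, of which the article's A* is the negative, take the form:

```latex
% Conventional ladder operators and the factorization they produce:
A = x + \frac{d}{dx}, \qquad
A^{*} = x - \frac{d}{dx}, \qquad
A A^{*} = x^{2} + 1 - \frac{d^{2}}{dx^{2}}.
```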

    I use those operators to rewrite Equation 8 as

(Eq’n 11)

in which I have the commutator

(Eq’n 12)

A commutator merely tells us what we must add to or subtract from a product-like combination of two mathematical entities when we commute one of those entities past the other. I can also reverse the order of the A-operators and rewrite Equation 8 as

(Eq’n 13)

Comparing that equation with Equation 11 demonstrates that the order in which we apply operators to a function makes a difference in the outcome; thus the need for commutators. Subtracting Equation 13 from Equation 11 yields the commutator

(Eq’n 14)
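
This commutator can be checked computationally. The sketch below assumes the conventional signs A = x + d/dx and A* = x - d/dx (the article's A* is the negative of this, so its commutator may differ in sign); acting on functions of the form p(x)·exp(-x²/2), the operators reduce to operations on the polynomial part alone, and [A, A*] comes out as multiplication by the constant 2.

```python
# Assumed conventional signs: A = x + d/dx (lowering), A* = x - d/dx
# (raising).  On p(x)*exp(-x^2/2) they act on the polynomial part as
#   A : p -> p'         A* : p -> 2x*p - p'
# Polynomials are coefficient lists, lowest degree first.

def deriv(p):
    """Derivative of a polynomial given as a coefficient list."""
    return [i * c for i, c in enumerate(p)][1:] or [0]

def sub(p, q):
    """Difference of two coefficient lists, padding the shorter one."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def lower(p):
    """A applied to p(x)*exp(-x^2/2), acting on the polynomial part."""
    return deriv(p)

def raise_(p):
    """A* applied to p(x)*exp(-x^2/2), acting on the polynomial part."""
    return sub([0] + [2 * c for c in p], deriv(p))

def commutator(p):
    """(A A* - A* A) applied to p."""
    return sub(lower(raise_(p)), raise_(lower(p)))

p = [3, 0, -1, 4]             # 3 - x^2 + 4x^3
print(commutator(p))          # → [6, 0, -2, 8], i.e. 2*p
```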

    In the next step I devise one more commutator. I need the ability to commute trios of the A-operators. So I have

(Eq’n 15)

I have commuted the A*-operator with the commutator [x, d/dx] as if that latter commutator equaled a constant: for a proof and verification see the Appendix. Using the same reasoning, I also get

(Eq’n 16)

    Apply the A*-operator to Equation 11. Using the commutator of Equation 15, I get

(Eq’n 17)

which yields, via a little algebraic rearranging,

(Eq’n 18)

That result tells me that if ψ represents a solution of Equation 11, then A*ψ also represents a solution of Equation 11 with the value of λ augmented by one. If I apply the A*-operator to Equation 11 n times, I get

(Eq’n 19)

Thus, if ψ solves Equation 11, then so does (A*)ⁿψ. In like manner if I apply the A-operator to Equation 13 n times, I get (using the commutator of Equation 16)

(Eq’n 20)

Because the operator changes the function to which I apply it, I can define the set of functions ψn by

(Eq’n 21)

so that we have

(Eq’n 22)

and

(Eq’n 23)

Because they change the function to a higher or lower order, physicists and mathematicians call A* a raising operator and call A a lowering operator.
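
The raising action can be made concrete. On functions p(x)·exp(-x²/2) the conventional raising operator x - d/dx acts on the polynomial part as p → 2xp - p', and starting from p = 1 it generates the Hermite polynomials; the article's sign convention for A* would generate (-1)ⁿHₙ instead. A sketch, assuming those conventional signs:

```python
# On p(x)*exp(-x^2/2) the raising operator x - d/dx sends the
# polynomial part p to 2x*p - p'.  Starting from p = 1 (psi_0's
# polynomial part), repeated raising reproduces the Hermite
# polynomials H_n.  Coefficient lists, lowest degree first.

def deriv(p):
    """Derivative of a polynomial given as a coefficient list."""
    return [i * c for i, c in enumerate(p)][1:] or [0]

def raise_poly(p):
    """Polynomial part of (x - d/dx) applied to p(x)*exp(-x^2/2)."""
    two_x_p = [0] + [2 * c for c in p]
    d = deriv(p)
    d = d + [0] * (len(two_x_p) - len(d))
    return [a - b for a, b in zip(two_x_p, d)]

hermites = [[1]]                       # H_0 = 1
for _ in range(4):
    hermites.append(raise_poly(hermites[-1]))

for n, h in enumerate(hermites):
    print(n, h)
# H_1 = 2x, H_2 = 4x^2 - 2, H_3 = 8x^3 - 12x, H_4 = 16x^4 - 48x^2 + 12
```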

    So now I know that I can start with any function that solves Equation 8 and, by applying the raising or lowering operators repeatedly, generate all of the other solutions of that equation. An examination of Equation 8 shows me that I must include ψ=0 among the solutions of the equation, trivial though it may appear. That fact gives me a minimum solution and thus necessitates the existence of a solution ψ0 such that

(Eq’n 24)

stands true to mathematics. I can rewrite that equation by expressing the lowering operator explicitly to get

(Eq’n 25)

Solving that equation takes only a short series of quick, easy steps:

    1. Subtract xψ0 from both sides,

    2. Divide by ψ0,

    3. Multiply by dx and integrate,

    4. Take the antilogarithm,

(Eq’ns 26)

Et voilà!

    (Given the number of great mathematicians who were either French (such as Charles Hermite) or Francophile (e.g. the Russians prior to 1917), the absence of that phrase in mathematical presentations astonishes me.)
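
Equation 24's condition can also be checked numerically: with the conventional lowering operator A = x + d/dx (the article's own signs may differ), the Gaussian ψ₀ = exp(-x²/2) is annihilated. A finite-difference sketch:

```python
import math

# Numerical check that psi_0 = exp(-x^2/2) satisfies A psi_0 = 0 for
# the assumed conventional lowering operator A = x + d/dx.  The
# derivative is approximated by a central finite difference.

def psi0(x):
    return math.exp(-x * x / 2)

def lowered(x, h=1e-5):
    """(x + d/dx) applied to psi_0, with d/dx taken numerically."""
    dpsi = (psi0(x + h) - psi0(x - h)) / (2 * h)
    return x * psi0(x) + dpsi

for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    print(x, lowered(x))     # every value is ~0, up to discretization error
```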

    But that solution looks wrong. It makes no reference to the value of λ. However, if I substitute that solution into Equation 8, I find that it solves the equation if and only if λ=1/2. That result puts too great a restriction on the value of λ: indeed, Hermite’s Equation places no restriction on the value of λ. Equation 19, though, implies that other solutions will work for different values of λ over the whole range of values conforming to λ=n+1/2. Equation 8 then becomes

(Eq’n 27)

Modifying Equation 21 by expressing the raising operator explicitly yields

(Eq’n 28)

    But the solutions of Equation 27 still restrict the values of λ to an integer plus one-half. For other values of λ we must use a linear superposition of the basic solutions,

(Eq’n 29)

in which the coefficients Ci represent constants. Each term in that sum satisfies Equation 8 separately and the whole superposition produces a value of λ as a kind of average of the value of λ inherent in each term. Thus the entire set of functions ψn(x) provides the basis for a function that will satisfy Equation 8 for any value of λ.

    Finally I multiply Equation 28 by Equation 7 to get the full solution, the set of functions that solve Equation 1:

(Eq’n 30)

But that formula does not represent the standard solution of Hermite’s Equation, although it does represent a legitimate set of solutions. In working out my solution I used the negative of the raising operator that usually gets used in solving the Schrödinger version of Hermite’s Equation, so for every instance of the raising operator in my solution I must multiply the solution by minus one to obtain the standard solution. Multiplying Equation 30 by (-1)ⁿ thus produces the basic form of the standard Hermite polynomials

(Eq’n 31)

I can simplify that a bit by multiplying it by 1 = exp[+x²/2]exp[-x²/2] and commuting the second factor in that expression with the operator. I do that in order to simplify the operator. I know that

(Eq’n 32)

For this derivation I can use that fact and make use of the fact that

(Eq’n 33)

with U = V = exp[-x²/2] in this case. For convenience I define

(Eq’n 34)

and

(Eq’n 35)

I thus have

(Eq’n 36)

I express Wₘ₋₁ as

(Eq’n 37)

and repeat the process. After m repetitions and re-multiplication by the factors I left behind, I get

(Eq’n 38)

That, of course, displays Rodrigues’ formula for the standard Hermite polynomials.
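
Rodrigues' formula can be verified mechanically: each application of d/dx to q(x)·exp(-x²) sends the polynomial part q to q' - 2xq, so Hₙ = (-1)ⁿ exp(x²)(d/dx)ⁿ exp(-x²) can be computed with pure polynomial arithmetic and compared against the standard recurrence H_{n+1} = 2xHₙ - Hₙ'. A sketch:

```python
# Check Rodrigues' formula H_n = (-1)^n e^{x^2} (d/dx)^n e^{-x^2}
# against the standard recurrence H_{n+1} = 2x H_n - H_n'.  Each
# d/dx applied to q(x)e^{-x^2} sends the polynomial part q to
# q' - 2x q.  Coefficient lists, lowest degree first.

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:] or [0]

def sub(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def rodrigues(n):
    q = [1]                                          # e^{-x^2} = 1 * e^{-x^2}
    for _ in range(n):
        q = sub(deriv(q), [0] + [2 * c for c in q])  # q -> q' - 2x q
    return [(-1) ** n * c for c in q]                # strip the (-1)^n sign

def recurrence(n):
    h = [1]
    for _ in range(n):
        h = sub([0] + [2 * c for c in h], deriv(h))  # H -> 2x H - H'
    return h

for n in range(6):
    print(n, rodrigues(n))
```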

Appendix: The Commutator of A* and [x, d/dx]

    In the text I have assumed that, because applying the commutator [x, d/dx] to any function of x has the same effect as does multiplying that function by -1, we can treat it as a constant and commute it with our A-operators. Now I want to prove and verify the content of that assumption. To that end I want to determine the form of the commutator [A*, [x, d/dx]].

    We want to calculate

(Eq’n A-1)

Although I have expressed the operators as standing alone, in mathematical fact we must apply them to some function of the relevant variable, f(x), to ensure a correct result. This calculation breaks easily into two parts.

First we have

(Eq’n A-2)

And second we have

(Eq’n A-3)

Subtracting the second of those equations from the first gives us

(Eq’n A-4)

That seems fairly hopeless until we evaluate four of the terms with the aim of putting all of the differentiation operators on the right side of their respective terms. Thus, we have:

1.

2.

3. and

4.

(Eq’ns A-5)

Making the appropriate substitutions into Equation A-4 gives us

(Eq’n A-6)

which means that A* commutes with [x, d/dx] as if it represented a constant, as we hypothesized. The same procedure proves and verifies that the A-operator also commutes with [x, d/dx].
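
The appendix's conclusion, that commuting x with the derivative operator acts as multiplication by -1, is easy to confirm on polynomials: x f' - (x f)' = -f term by term. A small sketch:

```python
# Check that [x, d/dx] f = x*(df/dx) - d/dx(x*f) equals -f on
# polynomials.  Coefficient lists, lowest degree first.

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:] or [0]

def times_x(p):
    return [0] + list(p)

def sub(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def commutator_x_D(f):
    """[x, d/dx] f = x*(df/dx) - d/dx(x*f)."""
    return sub(times_x(deriv(f)), deriv(times_x(f)))

f = [5, -1, 0, 2]             # 5 - x + 2x^3
print(commutator_x_D(f))      # → [-5, 1, 0, -2], i.e. -f
```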

