Liouville's Theorem


    That name denotes a theorem in complex analysis discovered by Joseph Liouville (1809 Mar 24 - 1882 Sep 08) that has a remarkable application in physics and its description of Reality. Simply put, Liouville's theorem tells us that if a bounded complex function f(Z) is continuous and differentiable over the entire complex plane, it must be constant.

    Consider a complex function f(Z) that is analytic on a region bounded by a circle C centered on the point Z_0. The circle consists of the points z = Z_0 + Re^{iθ}, so Cauchy's integral formula tells us that

f(Z) = \frac{1}{2\pi i} \oint_C \frac{f(z)}{z - Z} dz          (Eq'n 1)

for all points Z within the circle. But we can expand the denominator in the integrand of that equation in powers of z - Z_0 in accordance with

\frac{1}{z - Z} = \frac{1}{(z - Z_0) - (Z - Z_0)}
               = \frac{1}{z - Z_0} \cdot \frac{1}{1 - \frac{Z - Z_0}{z - Z_0}} = \frac{1}{z - Z_0} \sum_{n=0}^{\infty} \left( \frac{Z - Z_0}{z - Z_0} \right)^n          (Eq'n 2)

in which we have converted the unit fraction of the second factor on the second line into its equivalent infinite series, exploiting the convenient fact that |Z - Z_0|/|z - Z_0| < 1. So now Equation 1 becomes

f(Z) = \sum_{n=0}^{\infty} (Z - Z_0)^n \cdot \frac{1}{2\pi i} \oint_C \frac{f(z)}{(z - Z_0)^{n+1}} dz          (Eq'n 3)

    That equation gives us the equivalent of a Taylor series representation of a complex function,

f(Z) = \sum_{n=0}^{\infty} k_n (Z - Z_0)^n          (Eq'n 4)

The coefficients k_n in that equation must represent constants, so comparing Equations 3 and 4 shows us that we must have as true to mathematics

k_n = \frac{1}{2\pi i} \oint_C \frac{f(z)}{(z - Z_0)^{n+1}} dz = \frac{1}{2\pi R^n} \int_0^{2\pi} f(Z_0 + Re^{i\theta}) e^{-in\theta} d\theta          (Eq'n 5)

In that equation I used our definition of z above to rewrite the integrand. Because R represents an arbitrary number and the integral over the bounded function cannot grow fast enough to offset the factor 1/R^n, the only way in which Equation 5 can yield a constant is for k_n = 0 for all n > 0. For n = 0 we have

k_0 = \frac{1}{2\pi} \int_0^{2\pi} f(Z_0 + Re^{i\theta}) d\theta          (Eq'n 6)

In that case the fact that k_0 must represent a constant and that we integrate and evaluate the function only over an angle necessitates that the function have no dependence upon the value of R. Thus we must have k_0 = f(Z_0), which leads to f(Z) = f(Z_0), which makes the function a constant everywhere in the region enclosed by the circle C.
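Equation 5's claim that the coefficients cannot depend on R is easy to probe numerically. The sketch below (the function f(z) = z^2 + 3z, the center Z_0 = 0, and the radii are illustrative choices, not from the text) approximates the angular integral by a trapezoid sum on the circle and recovers the same coefficients at every radius:

```python
# Numerical check of Equation 5: for an analytic function, the coefficient
# k_n = (1/(2 pi R^n)) * Int_0^{2pi} f(Z0 + R e^{i theta}) e^{-i n theta} d theta
# must come out the same for every radius R.
import cmath
import math

def taylor_coeff(g, Z0, n, R, samples=2048):
    """Approximate k_n by a trapezoid sum on the circle |z - Z0| = R."""
    total = 0.0 + 0.0j
    for k in range(samples):
        theta = 2 * math.pi * k / samples
        total += g(Z0 + R * cmath.exp(1j * theta)) * cmath.exp(-1j * n * theta)
    return total / (samples * R**n)

f = lambda z: z**2 + 3*z            # analytic everywhere; true k_1 = 3, k_2 = 1
for R in (0.5, 2.0, 10.0):          # the coefficients must not depend on R
    k1 = taylor_coeff(f, 0.0, 1, R)
    k2 = taylor_coeff(f, 0.0, 2, R)
    print(R, round(k1.real, 6), round(k2.real, 6))
```

For a bounded function the same formula forces k_n toward zero as R grows, because the integral stays finite while the factor 1/R^n shrinks; that is the engine of the theorem.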

    Now let C expand to enclose the entire complex plane. Because the complex plane consists of an infinite set of points, such an enclosure is an impossibility: the circle must expand endlessly because the complex plane has no end. Thus we must express our result as a limit. We also need to specify that the function is bounded (i.e. having only finite values) to exclude functions that "blow up" somewhere "way out there". So we have

\lim_{R \to \infty} f(Z) = k_0 = f(Z_0) \quad \text{for all } Z          (Eq'n 7)

And that is Liouville's theorem.

    We can also write the Taylor series of Equation 3 with the coefficients represented by derivatives;

f(Z) = \sum_{n=0}^{\infty} \frac{1}{n!} \left. \frac{d^n f}{dZ^n} \right|_{Z_0} (Z - Z_0)^n          (Eq'n 8)

We can represent f(Z) as the sum of its real and imaginary parts, each a real-valued function of the complex variable,

f(Z) = u(Z) + i\,v(Z)          (Eq'n 9)

To differentiate that function with respect to Z = x + iy we use the complex differentiation operator

\frac{d}{dZ} = \frac{1}{2} \left( \frac{\partial}{\partial x} - i \frac{\partial}{\partial y} \right)          (Eq'n 10)

For our first differentiation we get

\frac{df}{dZ} = \frac{1}{2} \left( \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right) + \frac{i}{2} \left( \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right)          (Eq'n 11)

Because the coefficients in Equation 8 for all n>0 must equal zero, the derivatives in Equation 8 must all equal zero. We can meet that criterion in Equation 11 if we require that u(Z) and v(Z) have such a form that

\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}          (Eq'n 12)


\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}          (Eq'n 13)

We call those criteria the Cauchy-Riemann equations, which are only satisfied by functions that are analytic in the region in which we apply them. And since that first derivative in Equation 11 must be an inherent zero, rather than a calculated zero, all of the other derivatives zero out as well and we have, as we must,
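A finite-difference check makes the Cauchy-Riemann criteria concrete. The functions below are an illustrative choice, not from the text: u and v are the real and imaginary parts of the analytic function f(z) = z^2, so Equations 12 and 13 must hold at every point.

```python
# Numerically verify the Cauchy-Riemann equations (Eq'ns 12 and 13) for
# u(x, y) = x^2 - y^2 and v(x, y) = 2xy, the parts of f(z) = z^2.
def u(x, y): return x*x - y*y
def v(x, y): return 2*x*y

def partial(g, x, y, wrt, h=1e-6):
    """Central-difference partial derivative of g at (x, y)."""
    if wrt == 'x':
        return (g(x + h, y) - g(x - h, y)) / (2 * h)
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

x0, y0 = 1.3, -0.7   # an arbitrary test point
print(partial(u, x0, y0, 'x'), partial(v, x0, y0, 'y'))    # du/dx = dv/dy
print(partial(u, x0, y0, 'y'), -partial(v, x0, y0, 'x'))   # du/dy = -dv/dx
```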

\frac{d^n f}{dZ^n} = 0 \quad \text{for all } n > 0, \quad \text{so} \quad f(Z) = f(Z_0)          (Eq'n 14)

And again we get Liouville's theorem.

    That theorem transforms the complex plane into an almost magical realm. Consider the fact that on the real line we have many functions that are bounded (that is, have only finite values) and analytic (that is, have derivatives of all orders). The function sin(ax) provides a good example. Non-constant complex functions, on the other hand, under the constraint of Liouville's theorem, are either not bounded (that is, they have infinite values at some points in the complex plane) or they are not analytic (differentiable) at some points in the complex plane, but they cannot be both. One of the consequences of that fact is the fundamental theorem of algebra.
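The sine function itself illustrates the contrast: bounded on the real line, it blows up as soon as we leave it, just as the theorem demands of a non-constant entire function. A quick sketch with the standard library (the sample points are arbitrary choices):

```python
# sin is bounded on the real line but unbounded in the complex plane:
# sin(iy) = i sinh(y), which grows without limit as y grows.
import cmath
import math

print(abs(math.sin(1000.0)) <= 1.0)   # bounded at any real argument: True
print(abs(cmath.sin(10j)))            # ~ sinh(10), on the order of 1e4
```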

    That theorem states that any n-th order polynomial p(x) has n roots; that is,

p(x) = a_n \prod_{i=1}^{n} (x - z_i) = a_n (x - z_1)(x - z_2) \cdots (x - z_n)          (Eq'n 15)

In that equation the roots z_i are, in general, complex numbers (for example, the roots of x^2 + 1 = 0 are +i and -i). To prove that theorem we define f(z) = 1/p(z) and then assume that there exists no point in the complex plane for which p(z) = 0. Because |p(z)| grows without limit as |z| grows, that means that f(z) has no infinities on the complex plane (it is bounded). The inverse of a polynomial with no zeroes is endlessly differentiable, so we have f(z) both bounded and analytic on the entire complex plane, which contradicts Liouville's theorem. That contradiction obliges us to dismiss at least one of our premises and the only one available is our assumption that p(z) has no zeroes in the complex plane. Thus there must exist at least one point z_1 at which p(z) goes to zero. We can then factor our polynomial, p(x) = (x - z_1)q(x). We then apply the same analysis to q(x) and so on to generate the product form of p(x) in Equation 15. Thus we verify the fundamental theorem of algebra.
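The product form of Equation 15 can be checked numerically. The sketch below (numpy is assumed available; the polynomial and test point are illustrative choices) finds the roots of x^2 + 1 and confirms that the factored product reproduces the polynomial's values:

```python
# Factor a polynomial numerically and confirm Equation 15's product form.
import numpy as np

coeffs = [1, 0, 1]            # p(x) = x^2 + 1, whose roots are +i and -i
roots = np.roots(coeffs)      # numerical roots in the complex plane

x = 2.0 + 0.5j                # any test point
product_form = coeffs[0] * np.prod([x - r for r in roots])
direct = np.polyval(coeffs, x)
print(np.isclose(product_form, direct))   # the two forms agree
```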

    We can extend the applicability of Liouville's theorem even further, into the realm of physics, by considering an entity analogous to the complex plane. We note that, even though we use x and y as variables in the complex plane, we cannot transform them as we do when we use them as coordinates in the spatial x-y plane. We cannot rotate the axes of the complex plane as we can rotate the x- and y-axes of the spatial plane, because real and imaginary numbers are not inter-convertible. If we can find another two-dimensional array whose variables share that non-inter-convertibility property, then we may find a version of Liouville's theorem applicable to functions of that array's variables. The array that comes most readily to mind is the one that physicists call phase space. In its simplest form phase space consists of two dimensions: the horizontal axis represents the position of a body on the x-axis of our spatial coordinate system and the vertical axis represents the component of the body's linear momentum in that x-direction. The complete phase space has six dimensions and we can add even more non-inter-convertible pairs, such as angular momentum and angular displacement.

    For the moment imagine only the x-component part of phase space and imagine that we have a collection of identical particles whose positions and momenta at some initial instant are represented on that plane by a uniformly filled square above the abscissa. To maintain a correct description of the particles as time elapses, that square drifts to the right on the diagram. We assume that no external forces act on the particles, so the width and location of the square in the momentum direction doesn't change. In the spatial direction the horizontal lines in the square move at different speeds, because they represent different momenta and, therefore, different speeds of the particles, but they don't change their lengths, because the particles each line represents all move at the same speed. Thus, as time elapses the square drifts to the right and deforms into a parallelogram, which covers the same amount of area that the square did. That fact, that the phase-plane area occupied by the particles doesn't change, implies that something like Liouville's theorem applies to phase space.
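That drifting-square picture reduces to a few lines of arithmetic. In this toy version (the square's corners, the drift time and the unit mass are illustrative choices) each free particle moves as x -> x + (p/m)t with p unchanged, which shears the square into a parallelogram of the same area:

```python
# Free-particle drift in the (x, p) phase plane preserves area.
def evolve(corner, t, m=1.0):
    x, p = corner
    return (x + (p / m) * t, p)   # force-free motion: p stays fixed

def polygon_area(pts):
    """Shoelace formula for a simple polygon given in vertex order."""
    s = 0.0
    for i in range(len(pts)):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % len(pts)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

square = [(0, 1), (1, 1), (1, 2), (0, 2)]    # unit square above the abscissa
sheared = [evolve(c, t=3.0) for c in square]
print(polygon_area(square), polygon_area(sheared))   # both equal 1.0
```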

    Assume that we have a collection of N particles and that at some instant we know the positions and the linear momenta of those particles so that we can represent each particle as a point in phase space. From those data we can construct a Hamiltonian function and a distribution function describing the system and determining its evolution. The distribution function, which we represent with the Greek letter rho (ρ), tells us how many particles we will find in an element of phase-space volume;

dN = \rho(x_i, p_i, t)\, d^3x\, d^3p          (Eq'n 16)

such that

N = \int \rho\, d^3x\, d^3p          (Eq'n 17)

We now want to determine how the distribution function evolves about some point in phase space that corresponds to the system's center of mass (a point that traces a straight horizontal line as time elapses).

    To that end we calculate the time derivative of the distribution function;

\frac{d\rho}{dt} = \frac{\partial \rho}{\partial t} + \frac{\partial \rho}{\partial x} \dot{x} + \frac{\partial \rho}{\partial p} \dot{p}
                = \frac{\partial \rho}{\partial t} + \left[ \frac{\partial (\rho \dot{x})}{\partial x} + \frac{\partial (\rho \dot{p})}{\partial p} \right] - \rho \left( \frac{\partial \dot{x}}{\partial x} + \frac{\partial \dot{p}}{\partial p} \right)          (Eq'n 18)

We eliminate the third term on the right side of that equation by substituting from Hamilton's equations,

\dot{x} = \frac{dx}{dt} = \frac{\partial H}{\partial p}          (Eq'n 19)


\dot{p} = \frac{dp}{dt} = -\frac{\partial H}{\partial x}          (Eq'n 20)

the phase-space analogues of the Cauchy-Riemann equations. The remaining two terms on the right side of Equation 18, the partial time derivative of the density and the phase-space divergence of the density current, comprise the continuity derivative of a fluid system. If

\frac{\partial \rho}{\partial t} + \frac{\partial (\rho \dot{x})}{\partial x} + \frac{\partial (\rho \dot{p})}{\partial p} = 0          (Eq'n 21)

then we have the equation of continuity, which expresses a conservation law pertaining to the quantity whose phase-space density ρ represents. We can take the converse of that fact: the number of particles in our system does not change (it is conserved), so Equation 21 correctly describes the system and we have Equation 18 as
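The cancellation that Hamilton's equations force can be confirmed symbolically. In this sketch (sympy is assumed available) the Hamiltonian is left entirely arbitrary, and the phase-flow divergence still vanishes identically because the mixed second partial derivatives of H are equal:

```python
# Hamilton's equations make the phase-flow divergence vanish for any
# smooth Hamiltonian, which is what collapses Equation 18.
import sympy as sp

x, p = sp.symbols('x p', real=True)
H = sp.Function('H')(x, p)           # an arbitrary smooth Hamiltonian
xdot = sp.diff(H, p)                 # Equation 19
pdot = -sp.diff(H, x)                # Equation 20
divergence = sp.diff(xdot, x) + sp.diff(pdot, p)
print(sp.simplify(divergence))       # identically 0
```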

\frac{d\rho}{dt} = 0          (Eq'n 22)

which means that the density function remains constant along the system's flow through phase space, which is equivalent to Liouville's theorem. Note that the statement that the number of particles in the system remains unchanged is equivalent to the requirement in the mathematical Liouville's theorem that the function under study be continuous, because any change in the number of particles corresponds to a temporal discontinuity in the density function.

    Given that the number of particles in the system doesn't change and that the distribution of those particles in phase space remains constant, we infer that the phase-space volume enclosing the system also does not change. We have

\frac{dV}{dt} = \frac{\partial V}{\partial t} + \frac{\partial V}{\partial x} \dot{x} + \frac{\partial V}{\partial p} \dot{p}
             = \frac{\partial V}{\partial t} + \frac{\partial (V \dot{x})}{\partial x} + \frac{\partial (V \dot{p})}{\partial p} - V \frac{\partial \dot{x}}{\partial x} - V \frac{\partial \dot{p}}{\partial p} = 0          (Eq'n 23)

On the second line of that equation the second and third terms on the right side of the equality sign represent the phase-divergence of the volume current in phase space, so those terms plus the first represent the continuity derivative of the volume, which zeroes out. In the fourth and fifth terms on the right side of the equality sign we replace the time derivatives with their equivalents from Hamilton's equations of motion (Equations 19 and 20) and find that the terms cancel each other out. Thus the phase volume occupied by the system conforms to something like a conservation law.
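Even with forces acting, the conservation of phase volume holds. A minimal sketch, assuming the unit-frequency harmonic oscillator H = (p^2 + x^2)/2 (an example chosen here, not from the text, with an arbitrary patch of phase plane): Hamilton's equations give xdot = p and pdot = -x, whose flow rotates the phase plane and so preserves area exactly.

```python
# A patch of phase plane carried by the harmonic-oscillator flow keeps
# its area, illustrating the conservation law of Equation 23.
import math

def flow(point, t):
    """Exact solution of xdot = p, pdot = -x (unit-frequency oscillator)."""
    x, p = point
    c, s = math.cos(t), math.sin(t)
    return (x * c + p * s, -x * s + p * c)

def polygon_area(pts):
    """Shoelace formula for a simple polygon given in vertex order."""
    s = 0.0
    for i in range(len(pts)):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % len(pts)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

patch = [(1, 0), (2, 0), (2, 1), (1, 1)]   # a phase-space patch of area 1
moved = [flow(pt, t=0.8) for pt in patch]
print(round(polygon_area(moved), 9))       # still 1.0
```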

    That fact necessitates that nothing can ever subdivide a phase volume into an infinite set of infinitesimal elements. Consider what we could do if that statement did not stand true to Reality. We could subdivide a finite phase volume into infinitesimal elements and label them with index numbers. We know that in the infinite set of the natural numbers we can make a one-to-one match between all of the elements of the set and the elements of the set of the even positive integers. On that basis we claim that we can remove all of the odd-indexed elements from our phase volume without changing the size of that volume: the even-indexed elements match one-to-one with all of the elements of the figure, so their cumulative volume equals the original volume. But in Reality we cannot remove half of some geometric figure's volume without changing that volume. Therefore, we must infer that Reality has such a structure that no phase-space volume can be subdivided into an infinite set of infinitesimal elements: that inference necessitates the existence of a minimum element of phase-space volume as a fundamental property of Reality, an element that is finite, however small.

    We construct phase space from pairs of coordinates, each pair consisting of a displacement and its associated momentum; more specifically, a spatial or space-like dimension and a momentum that represents the motion of mass-bearing bodies in that dimension. One such pair represents the minimum phase space that we can have, so the minimal element of phase-space volume must represent an area in that pair. If we take, for example, x and p_x as our pair, we must have, then,

\Delta x\, \Delta p_x = h          (Eq'n 24)

in which h represents a constant (Planck's constant, rather obviously). We can thus conceive the idea of dividing our x-p_x phase plane into Δx-by-Δp_x "pixels" of area h and note, since h has units of action, that we cannot, by any means, resolve actions smaller than h in that plane.
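Some illustrative arithmetic gives a feel for the scale of those pixels. The region sizes below are arbitrary choices for the sketch; only h is a physical constant:

```python
# Count how many cells of area h fit in a small patch of the x-p_x plane.
h = 6.62607015e-34     # Planck's constant in J*s (units of action)
dx = 1e-9              # a 1-nanometer position range, in meters
dp = 1e-24             # a momentum range, in kg*m/s
pixels = (dx * dp) / h
print(f"{pixels:.3f}") # about 1.5 cells: this patch sits near the resolution limit
```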

    Equation 24 looks very much like the algebraic expression of Werner Heisenberg's indeterminacy principle. To test that idea let's conduct an imaginary experiment in phase space with a two-particle system. Lying on the x-axis, the particles have different x-ward momenta and so, after a time, they collide. So long as the particles lie far enough apart that Δx Δp_x > h, they will move as Newtonian dynamics leads us to expect. But once the product of the differences between their locations and their momenta becomes less than h, the particles enter a state of indeterminacy: their momenta take on random values, subject to the restriction that those values add up to the original sum (due to the conservation law pertaining to linear momentum). When the particles come out of the indeterminacy zone they have momenta different from what they had when they went into the zone. If that statement did not stand true to Reality, then we could use encounters among pairs of particles to survey phase space with arbitrary, even infinite, precision and that would violate the theorem that we deduced above.

    A single particle moving through space does not display that kind of indeterminacy. We must have at least two particles within the indeterminacy zone (as we must have if, for example, we were to try to measure the particle's position and momentum with fine precision) so that the conservation laws will come into play. One of those laws pertains to the conservation of energy, which the above imaginary experiment does not satisfy unless the particles either suffer no change in their momenta or merely interchange their momenta. In order to give the particles' momenta full indeterminacy we need at least one other dimension of space in which changes in the particles' momenta absorb the changes in the particles' kinetic energies (which also come under Equation 24, with respect to indeterminacies in timing). But even that restricts the changes too much, making the changes in one direction fully dependent upon the changes in the other direction. For a fully proper indeterminacy we need at least three dimensions. Because the laws of physics must have such a form as to uphold Equation 24, space must have at least three dimensions.

    Let me also note that full indeterminacy also necessitates irreversibility. If we collide two particles and somehow arrange for those particles to reverse their motions and recollide, the momenta that they have coming out of the recollision will not equal the negatives of their original momenta, but will differ from the original momenta completely. If that statement did not stand true to Reality, then we would not have true indeterminacy and we could contrive violations of Equation 24 that would enable us to survey phase space more precisely than that equation allows. That irreversibility of collisions will solve a problem that we will encounter when we consider Boltzmann's H-theorem.

So now we know that a realm of Reality conforms to Liouville's theorem and that fact necessitates that the laws of physics include Heisenberg's indeterminacy principle acting on particles in a three-dimensional space.

