The Gamma Matrices


    We want to factor the equation

(Eq'n 1)

in which m0 represents the rest mass of a particle. We want to devise an equation that's first order, rather than second order, in the variables. Start with something like

(Eq'n 2)

Because they differ from each other only by an algebraic sign on the mass term, we can choose either bracket to create a new equation, confident that the solution will tell us whether we've gone wrong. Let's choose

(Eq'n 3)
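
In conventional notation the content of Equations 1 through 3 can be sketched as follows; here p·p stands for the dot product of the four-momentum with itself, and the bracket carrying the minus sign on the mass term is merely one of the two possible choices just described:

\[
p \cdot p - m_0^2 c^2 = 0,
\]
\[
\left(\sqrt{p \cdot p} + m_0 c\right)\left(\sqrt{p \cdot p} - m_0 c\right) = 0,
\]
\[
\sqrt{p \cdot p} - m_0 c = 0 .
\]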

    But that's not a clear first-order equation. The energy-momentum term does not give us a linear array of the components of the momentum-energy four-vector. We have the square root of the dot product of the four-momentum of the particle with itself. We need something that, when multiplied by itself, will yield that dot product and we need that something to be a linear array of the components of the four-vector (px, py, pz, pt), in which pt=E/c; that is, we need (using Einstein's summation rule)

(Eq'n 4)

Because the pi represent numbers (presumably obtained through measurements), we have

(Eq'n 5)

for all binary combinations of the indices x, y, z, t. Comparing the right sides of Equations 4 and 5 tells us that

(Eq'n 6)

in which the double-superscripted delta represents the Kronecker delta.

    For all cases i ≠ k we have

(Eq'n 7)
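
As a sketch of that requirement, with Einstein summation over i, k ∈ {x, y, z, t} and with the normalization of the Kronecker delta and the handling of the time component left to whatever convention is in use:

\[
p \cdot p = \gamma^{i} p_i \, \gamma^{k} p_k ,
\qquad
p_i p_k = p_k p_i ,
\]
\[
\gamma^{i}\gamma^{k} + \gamma^{k}\gamma^{i} = 2\,\delta^{ik} ,
\qquad
\gamma^{i}\gamma^{k} = -\,\gamma^{k}\gamma^{i} \quad (i \neq k).
\]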

From that fact we infer that the gammas consist of sets whose elements are positive and negative numbers. To multiply two of the sets together we multiply the ordered subsets of the sets together in the manner of a dot product (element by element, subset by subset) and thereby generate a new set. We can display that process graphically as the multiplication of the columns of a matrix by the rows of another matrix to produce yet another matrix. Thus, we infer that the gammas are manifested in matrices and we rewrite Equation 3 as

(Eq'n 8)

in which I represents the identity matrix

(Eq'n 9)

a matrix with all ones on the upper-left to lower-right diagonal and zeroes elsewhere. We include that matrix in Equation 8 so that both terms in the equation have the same mathematical shape and follow the same rules.
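
Written out, Equation 8 and the identity matrix of Equation 9 take a form something like this, with the sign on the mass term carried over from the choice made in Equation 3 and n the (as yet undetermined) dimension of the matrices:

\[
\gamma^{i} p_i - m_0 c \,\mathbf{I} = 0,
\qquad
\mathbf{I} =
\begin{pmatrix}
1 & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1
\end{pmatrix}.
\]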

    Equation 7 also tells us that the gammas are square matrices; that is, they are n×n matrices, each having the same number of rows and columns. We can multiply non-square matrices, but if we can multiply AB we have no guarantee that we can legitimately multiply BA. We can, for example, multiply [3×3][3×6] (where [3×6] has 3 rows and 6 columns), but how can we multiply [3×6][3×3]? Equation 7 necessitates such a guarantee in the case of the gamma matrices. Equation 7 also requires that we use gamma matrices that are anti-Hermitian (or skew Hermitian).

    Mathematicians define a matrix to be Hermitian if it equals its own conjugate transpose; that is, it's Hermitian if we flip it about its upper-left to lower-right diagonal and change the algebraic sign on all of its imaginary elements and get a matrix that's identical to the one we started with. The Pauli spin matrices, for example, are Hermitian. A matrix is anti-Hermitian if the transposed and conjugated matrix equals the negative of the original matrix. As one consequence of those definitions we see that any non-zero elements on the upper-left to lower-right diagonal must be entirely real in an Hermitian matrix and entirely imaginary in a skew Hermitian matrix.
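
As a small illustration, the Pauli matrix σ_y equals its own conjugate transpose, while iσ_y and iσ_z are anti-Hermitian, the latter showing the purely imaginary diagonal just mentioned:

\[
\sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} = \sigma_y^{\dagger},
\qquad
i\sigma_y = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} = -\left(i\sigma_y\right)^{\dagger},
\qquad
i\sigma_z = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix} = -\left(i\sigma_z\right)^{\dagger}.
\]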

    A comparison between the two terms in Equation 8 tells us that the gamma matrix has units of velocity. It doesn't represent an actual linear velocity because that's covered by the momentum vector. It must represent a motion that doesn't actually go anywhere; that is, it must represent some kind of reciprocating or rotary motion. Because the mass term does not (cannot) fluctuate, the elements of the matrix must be constants, which means that the motion they represent cannot be fluctuating. But the matrices also cannot represent an orbital angular momentum, because an orbital angular momentum involves a cross product with the particle's linear momentum and, thus, the dot product that we see in Equation 8 would equal zero. We must thus infer that the gamma matrices represent the particle's spin. The first term of Equation 8 thus gives us the particle's helicity, the measure of its spin-linear momentum coupling.

    We know that we must represent the spin of a particle as a linear combination of eigenstates, whose eigenvalues are separated by one full Dirac unit (aitch-bar) of angular momentum. Thus, for example, for a particle that carries 3/2 units of spin we have four eigenstates, corresponding to projected spin values of +3/2, +1/2, -1/2, and -3/2; that is, we have four different values of the spin projected onto some axis, generally taken to be the z-axis of an arbitrarily established coordinate grid.

    Equation 8 gives us only an operator, an entity that transforms other mathematical formulae. This being quantum mechanics, the other mathematical formula must consist of a state function associated with a particle. Thus, we must rewrite Equation 8 as

(Eq'n 10)

in which the Greek letter psi represents the state function, which expresses the partial probability of the particle existing in a given state. In this case the state function must involve an n×1 column vector, called a spinor. The purpose of the spinor, as the name implies, is to represent spin states.
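
With ψ written as an n×1 spinor, Equation 10 would read something like the following, again with the sign on the mass term following the earlier choice:

\[
\left(\gamma^{i} p_i - m_0 c \,\mathbf{I}\right)\psi = 0,
\qquad
\psi =
\begin{pmatrix} \psi_1 \\ \psi_2 \\ \vdots \\ \psi_n \end{pmatrix}.
\]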

    Using spinors and spin matrices, we can calculate an expectation value for a particle's spin by way of Born's theorem in the form

(Eq'n 11)

in which the last expression gives us the Dirac bra-ket formulation of the integral. The ket, |s,m>, is analogous to the state function ψ(σ), in which the sigma in parentheses represents the column-matrix spinor. In this case |s,m> represents a particle carrying spin s with the projection of that spin onto the z-axis equal to m. Of course, we have <s,m|s,m>=1.
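
For the z-component of the spin, say, Born's theorem in the form described here would look something like

\[
\langle S_z \rangle \;=\; \int \psi^{\dagger}\, S_z\, \psi \; d^3x \;=\; \langle s,m \,|\, S_z \,|\, s,m \rangle ,
\]

with ψ†, the conjugate transpose of the spinor-valued state function, standing in for the bra.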

When we apply an operator to a state function we have, in essence, committed an act of measurement. When we multiply that entity by the transpose conjugate of the state function and integrate over all space, as in Equation 11, we extend the measurement in the form of an expectation value. But we are not obliged to extract the measurement through the same state on which we made the measurement. Certainly we know that <s,n|s,m>=0 whenever n ≠ m, but <s,n|Ω|s,m> does not necessarily equal zero, especially if the act of measurement changes the state of the system. That fact enables us to work out a full description of the gamma matrices.

If we choose to make the most basic spinors consist of only ones and zeroes (essentially yes or no on each eigenstate), then the elements of the gamma matrices must consist of the expectation values of the corresponding spin states. A particle that carries one Dirac unit of spin, for example, has three spin eigenstates (+1,0,-1), so we have its gamma matrices as

(Eq'n 12)
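
For the spin-1 case those most basic spinors and the rule for the matrix elements look like this, with the eigenstates ordered from +1 down to -1 as a convention assumed here:

\[
|1,+1\rangle = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},
\qquad
|1,0\rangle = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix},
\qquad
|1,-1\rangle = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix},
\]
\[
\left(S_i\right)_{nm} = \langle 1,n \,|\, S_i \,|\, 1,m \rangle ,
\qquad i = x, y, z .
\]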

The z-gamma comes easily, because we already know the eigenvalues of the z-spin states:

(Eq'n 13)
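
In that basis the z-matrix is the familiar diagonal array of the projection eigenvalues; a factor of ħ is kept explicit here, and the z-gamma matrix of the text would differ from it at most by an overall constant factor:

\[
S_z = \hbar
\begin{pmatrix}
1 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & -1
\end{pmatrix}.
\]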

Contriving the x-gamma and y-gamma matrices won't be so easy: we don't know the x-axis and y-axis eigenstates; we're not even sure the system has any.

    We assume that we can measure the amount of a system's spin that's manifested in the x- and y-directions, so we assume the existence of spin operators Sx and Sy. Thus we have

(Eq'n 14)

by way of the Pythagorean theorem, as usual. We can factor the right side of that equation, while remembering that we are using operators that don't commute with one another, and we get

(Eq'n 15)

Rearranging that equation gives us

(Eq'n 16)

and applying it to a state function (expressed as a ket) gives us

(Eq'n 17)
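
That chain of steps can be sketched as follows, using the standard commutator [S_x, S_y] = iħS_z and the projection eigenvalue S_z|s,m> = mħ|s,m>; the ordering of the two factors is an assumption here:

\[
S^2 = S_x^2 + S_y^2 + S_z^2 ,
\]
\[
\left(S_x - iS_y\right)\left(S_x + iS_y\right) = S_x^2 + S_y^2 + i\,[S_x, S_y] = S_x^2 + S_y^2 - \hbar S_z ,
\]
\[
\left(S_x - iS_y\right)\left(S_x + iS_y\right) = S^2 - S_z^2 - \hbar S_z ,
\]
\[
\left(S_x - iS_y\right)\left(S_x + iS_y\right)|s,m\rangle = \left(S^2 - m^2\hbar^2 - m\hbar^2\right)|s,m\rangle .
\]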

We know that S² gives us a fixed number for a given particle and that the coefficient on the left side of Equation 17 cannot be less than zero; therefore, we must have

(Eq'n 18)

Thus we have Equation 17 as

(Eq'n 19)

in which we have the operators

(Eq'ns 20)

each operator being the conjugate transpose of the other. If we had factored the right side of Equation 14 in reverse order, we would have Equation 19 as

(Eq'n 21)
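
Sketched with the conventional definitions of those operators, and with S² carrying the eigenvalue s(s+1)ħ², the two orderings read

\[
S_{+} = S_x + iS_y , \qquad S_{-} = S_x - iS_y = S_{+}^{\dagger} ,
\]
\[
S_{-} S_{+}\,|s,m\rangle = \left[s(s+1) - m(m+1)\right]\hbar^2\,|s,m\rangle ,
\]
\[
S_{+} S_{-}\,|s,m\rangle = \left[s(s+1) - m(m-1)\right]\hbar^2\,|s,m\rangle .
\]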

    Suppose we calculate an expectation value from Equation 19 or Equation 21. In the first case we get

(Eq'n 22)

In that calculation I exploited the fact that a bra is effectively the conjugate transpose of the corresponding ket, so that

(Eq'n 23)

which means that S+|s,m>=|a> lets us write Equation 22 as

(Eq'n 24)

In the case of Equation 21, we get an expectation value of

(Eq'n 25)

    Next we want to make a pair of measurements on a particle in the state |s,m>. We want to apply S+ and then Sz. For the appropriate calculation we need to remind ourselves of the commutation relations,

(Eq'ns 26)
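
In standard form those commutation relations read

\[
[S_x, S_y] = i\hbar S_z , \qquad [S_y, S_z] = i\hbar S_x , \qquad [S_z, S_x] = i\hbar S_y ,
\]
\[
[S_z, S_{\pm}] = \pm\,\hbar\, S_{\pm} .
\]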

We thus have

(Eq'n 27)

But we already know that

(Eq'n 28)

so we infer the basic effect of the S+ operator as

(Eq'n 29)

The coefficient on the right side of that equation comes from Equation 22. We call S+ a raising operator because its basic action transforms the state |s,m> into the state |s,m+1>. The same argument, using Equation 25 for the coefficient, tells us that S- is a lowering operator, which alters |s,m> in accordance with

(Eq'n 30)
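
The whole argument can be sketched as follows, with the phases of the raised and lowered states chosen real and positive, as is conventional:

\[
S_z S_{+} |s,m\rangle = \left(S_{+} S_z + \hbar S_{+}\right)|s,m\rangle = (m+1)\hbar \; S_{+}|s,m\rangle ,
\]
\[
S_z\,|s,m+1\rangle = (m+1)\hbar\,|s,m+1\rangle ,
\]
\[
S_{+}|s,m\rangle = \hbar\sqrt{s(s+1) - m(m+1)}\;|s,m+1\rangle ,
\qquad
S_{-}|s,m\rangle = \hbar\sqrt{s(s+1) - m(m-1)}\;|s,m-1\rangle .
\]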

Solving Equations 20 gives us

(Eq'ns 31)
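
In the usual form that solution reads

\[
S_x = \tfrac{1}{2}\left(S_{+} + S_{-}\right),
\qquad
S_y = \tfrac{1}{2i}\left(S_{+} - S_{-}\right).
\]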

so now we know how to work out the x-gamma and y-gamma matrices of a spin-endowed particle. For a spin-1 particle, for example, we have Equation 12 derived from

(Eq'n 32)

in which a, b, c in the spinors denote the proportions in which the particle occupies the indicated states. Substituting the first of Equations 31 into Equation 32 gives us

(Eq'n 33)

Likewise, we get the y-gamma matrix as

(Eq'n 34)
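
Carrying out that calculation for the spin-1 case, with the matrix elements taken between the basis spinors as above and a factor of ħ kept explicit, yields the familiar matrices

\[
S_x = \frac{\hbar}{\sqrt{2}}
\begin{pmatrix}
0 & 1 & 0 \\
1 & 0 & 1 \\
0 & 1 & 0
\end{pmatrix},
\qquad
S_y = \frac{\hbar}{\sqrt{2}}
\begin{pmatrix}
0 & -i & 0 \\
i & 0 & -i \\
0 & i & 0
\end{pmatrix};
\]

the x-gamma and y-gamma matrices of the text would follow from these, up to whatever overall constant factor the chosen normalization attaches.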

Equation 32 thus shows us how to work out the gamma matrices for particles endowed with any amount of spin.

