The Levi-Civita Tensor

Also known as the permutation symbol, the Levi-Civita tensor was devised by Tullio Levi-Civita (TOOL-lyoh LAH-vee CHEE-veetah; 1873 Mar 29 – 1941 Dec 29) while he was elaborating tensor analysis, which evolved from the absolute differential calculus originated by his teacher Gregorio Ricci-Curbastro (1853 Jan 12 – 1925 Aug 06). Mathematicians use it primarily in representing the evaluation of a determinant. To see how that works we need to take a quick detour through linear algebra.

Linear algebra primarily concerns the solution of systems
of linear equations that have the form a_{1}x_{1}+a_{2}x_{2}+...+a_{n}x_{n}=b,
in which a_{i} represents a constant, x_{i} represents a
variable, and b represents the constant term of the equation. That seems to give
us a rather small subset out of the infinite set of all possible equations, but
it has special importance. A linear equation in n variables represents an
uncurved figure spanning an n-dimensional space: if n=2, for example, the
equation represents a straight line in a plane; if n=3, the equation represents
a flat plane in 3-dimensional space; and so on. If we have n such equations in n
variables, the solution of those equations represents the point in n-space where
the uncurved figures that the equations represent all cross each other.
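As a minimal numeric sketch of that idea (the article gives no code, so the function name and the particular system are my own illustration), here is the n=2 case: two lines, one crossing point, found by elimination:

```python
# A minimal sketch for n=2, assuming the hypothetical system
#   1x + 2y = 5
#   3x + 4y = 6
# Two linear equations in two variables describe two straight lines;
# solving the system locates the point where those lines cross.

def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve a11*x + a12*y = b1, a21*x + a22*y = b2 by elimination."""
    d = a11 * a22 - a12 * a21
    if d == 0:
        raise ValueError("lines are parallel or coincident")
    x = (b1 * a22 - b2 * a12) / d
    y = (a11 * b2 - a21 * b1) / d
    return x, y

x, y = solve_2x2(1, 2, 3, 4, 5, 6)  # the lines cross at (-4, 4.5)
```

The quantity d computed first is, of course, the determinant of the coefficient matrix, which is where the rest of the discussion is headed.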

In light of those comments we can take the system of equations

a_{11}x_{1} + a_{12}x_{2} + ... + a_{1n}x_{n} = b_{1}
a_{21}x_{1} + a_{22}x_{2} + ... + a_{2n}x_{n} = b_{2}
...
a_{n1}x_{1} + a_{n2}x_{2} + ... + a_{nn}x_{n} = b_{n}
(Eq’ns 1)

and represent it as the transformation of a vector [x_{j}] into
another vector [b_{i}] by way of the transformation matrix [a_{ij}].
Thus we have Equations 1 as

a_{ij}x_{j} = b_{i}   (Eq’n 2)

in which I have left summation over all values of the repeated index
implicit. We want to solve that equation for the vector [x_{j}], so we
multiply each of its terms on the left by the inverse of
the matrix [A]=[a_{ij}],

[A]^{-1}[a_{ij}][x_{j}] = [A]^{-1}[b_{i}]   (Eq’n 3)

Now we exploit the associative law of matrix multiplication and the fact that
[A]^{-1} [A]=[I], the identity matrix, to get

[x_{j}] = [A]^{-1}[b_{i}]   (Eq’n 4)

For the inverse of the matrix [A] we have

[A]^{-1} = [A_{ij}]^{T}/detA   (Eq’n 5)

in which detA represents the determinant of [A] and A_{ij} represents
the ij-th cofactor of [A]. For calculating the cofactor we have

A_{ij} = (-1)^{i+j}det[M_{ij}]   (Eq’n 6)

in which [M_{ij}] represents the minor matrix that we make from the
matrix [A] by removing its i-th row and j-th column. By the rules of matrix
multiplication and in terms of the cofactors of [A] we can rewrite Equation 4 as

x_{j} = A_{ij}b_{i}/detA   (Eq’n 7)

Thus, each row of the inverse transformation matrix converts the vector [b_{i}]
into one component of the vector [x_{j}]. Note that in that equation we
have transposed the indices of [A], making j represent the row number and making
i represent the column number. Our original assignment of indices to [x] and [b]
in Equation 2 necessitates that transposition.
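Equations 5 through 7 can be sketched directly in code. The following is my own illustration (0-based indices, where the article uses 1-based): a recursive determinant by the method of minors, and an inverse whose j,i-th entry is the i,j-th cofactor divided by the determinant, i.e. the transposition the text just noted:

```python
def minor(A, i, j):
    # The minor matrix [M_ij]: matrix A with row i and column j removed.
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det(A):
    # Determinant by cofactor expansion along the first column (Eq'n 8).
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** i * A[i][0] * det(minor(A, i, 0))
               for i in range(len(A)))

def inverse(A):
    # Eq'ns 5-7: (A^-1)[j][i] = cofactor A_ij / detA, indices transposed.
    d = det(A)
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, i, j)) / d for i in range(n)]
            for j in range(n)]
```

For [[1, 2], [3, 4]] this gives determinant -2 and inverse [[-2, 1], [1.5, -0.5]], which one can check by multiplying back to the identity.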

We can calculate the value of a determinant by the method of minors, which gives us

detA = a_{ij}A_{ij}   (Eq’n 8)

in which j takes a single, arbitrary value. That statement means that we take
a single column of the matrix [A], multiply each of its elements by that
element’s cofactor, and add up the results. Which column we choose makes no
difference in the result and we can also get the same result by choosing a row
instead of a column. That sum looks like the sum in Equation 7: in fact, we can
describe the sum in Equation 7 as the determinant of a matrix that we derive
from the matrix [A] by replacing the elements of its j-th column with the
elements of the vector [b_{i}]. If we symbolize that altered matrix as [A_{j}(b)],
then we can rewrite Equation 7 as

x_{j} = det[A_{j}(b)]/detA   (Eq’n 9)

Mathematicians call the process that equation encodes Cramer’s rule.
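Cramer’s rule translates into a few lines of code (again my own sketch, 0-based): build A_{j}(b) by swapping the vector b into column j, take determinants, and divide:

```python
def det(A):
    # Determinant by cofactor expansion along the first column.
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** i * A[i][0]
               * det([row[1:] for k, row in enumerate(A) if k != i])
               for i in range(len(A)))

def cramer(A, b):
    # Cramer's rule (Eq'n 9): x_j = det(A_j(b)) / det(A), where A_j(b)
    # is A with its j-th column replaced by the vector b.
    d = det(A)
    n = len(A)
    return [det([[b[r] if c == j else A[r][c] for c in range(n)]
                 for r in range(n)]) / d
            for j in range(n)]
```

Applied to the system x + 2y = 5, 3x + 4y = 6 it returns [-4.0, 4.5], the same crossing point found by elimination.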

To carry out the actual calculation of a determinant, we want to use Leibniz’s formula, which we have in the form

detA = ε_{ijk...q}a_{1i}a_{2j}a_{3k}...a_{Nq}   (Eq’n 10)

The symbol ε_{ijk...q}
represents the Levi-Civita tensor, an N-dimensional object whose sides span N
elements. Thus, if we have [A] as a 2x2 matrix, then
ε_{ij}
gives us a 2x2 matrix; if we have [A] as a 3x3 matrix, then
ε_{ijk}
gives us a 3x3x3 cubic array; if we have [A] as a 4x4 matrix, then
ε_{ijkl}
gives us a 4x4x4x4 tesseract; and so on. As the indices cycle through the
numbers 1 through N (think of a fancy odometer) the boxes that they label in the
array take the values 0, +1, or -1 in accordance with the sign function of the
permutation of 1,2,3,...N that labels each box. Because no two factors a_{ij}
in each partial product in Equation 10 can come from the same column of the
matrix, all elements of ε_{ijk}...q
for which two indices equal each other must equal zero.
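Leibniz’s formula can be sketched as a direct sum over permutations (my own illustration; the sign rule it relies on is the one the next paragraph spells out):

```python
from itertools import permutations

def perm_sign(p):
    # Sign of a permutation of 0..n-1: +1 if it arises from the natural
    # order by an even number of interchanges, -1 if by an odd number.
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def det_leibniz(A):
    # Eq'n 10: detA = sum over permutations p of
    # sign(p) * a[0][p[0]] * a[1][p[1]] * ... * a[N-1][p[N-1]].
    # Only permutations appear, so no two factors share a column,
    # mirroring the zero entries of the Levi-Civita tensor.
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        term = perm_sign(p)
        for row in range(n):
            term *= A[row][p[row]]
        total += term
    return total
```

The sum runs over N! terms, so this is a definition made executable rather than a practical algorithm for large N.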

The nonzero values of the Levi-Civita tensor go into the boxes whose indices display permutations of the natural order of the numbers 1 through N. If the index of a box comes from the natural order of the digits by an even number of interchanges of neighboring digits, the box takes a +1. But if the index of a box comes from the natural order of the digits by an odd number of interchanges of neighboring digits, the box takes a -1. Consider, for example, the case of N=3: we have the permutations as 123(+1), 213(-1), 231(+1), 321(-1), 312(+1), 132(-1). Displaying those results graphically, we have

The 3x3x3 Levi-Civita Tensor

Because i gives the row number (counted from top to bottom), j gives the
column number (counted from left to right), and k gives the slab number (counted
from front to back), the first two entries in our list of permutations (k=3)
give the values of ε_{ijk}
on the rear slab of the cubic array: the box in the first row, second column
takes +1 and the box in the second row, first column takes -1. In this way we
can build up the elements of the Levi-Civita tensor. The process of permutation
looks very much like the process of solving a puzzle called a word ladder, in
which we transform one word into another by changing one letter at a time
subject to the restriction that each and every change must yield a legitimate
word.
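The box-filling process just described can be sketched in code (my own illustration; the array is stored as a dictionary keyed by 0-based index tuples, so the tuple (0, 1, 2) corresponds to the box labeled 123):

```python
from itertools import product

def perm_sign(p):
    # +1 for an even permutation, -1 for an odd one,
    # 0 if any index repeats (the box gets a zero).
    if len(set(p)) < len(p):
        return 0
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def levi_civita(n):
    # Build the n-index array: every box indexed by an n-tuple drawn
    # from 0..n-1 (the "fancy odometer"), filled with 0, +1, or -1.
    return {idx: perm_sign(idx) for idx in product(range(n), repeat=n)}

eps3 = levi_civita(3)  # the 3x3x3 cubic array
```

Of the 27 boxes in eps3, exactly six are nonzero, matching the six permutations listed above.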

Because we have related the Levi-Civita tensor to transformations of vectors (by way of the determinants in Cramer’s rule), we should also find it involved in other vector relations. In three dimensions we get two useful results.

If we have three vectors, **A**, **B**, and **C**,
and we use their components as the rows in a matrix [V], then applying Sarrus’
rule to that matrix gives us

det[V] = ε_{ijk}A_{i}B_{j}C_{k}   (Eq’n 11)

If we take **A** to comprise the unit vectors of our grid, then

ε_{ijk}**e**_{i}B_{j}C_{k} = **B**×**C**   (Eq’n 12)

the cross (or outer) product of **B** and **C**. If we take **A** to
represent a full vector, then we have

**A**·(**B**×**C**) = ε_{ijk}A_{i}B_{j}C_{k}   (Eq’n 13)
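As a check on that identity (my own sketch, 0-based indices), summing ε_{ijk}A_{i}B_{j}C_{k} over all 27 index triples does reproduce the scalar triple product computed the familiar way:

```python
def triple_product(A, B, C):
    # Sum epsilon_{ijk} * A[i] * B[j] * C[k] over all index triples.
    def sign(p):
        if len(set(p)) < 3:
            return 0
        s = 1
        for i in range(3):
            for j in range(i + 1, 3):
                if p[i] > p[j]:
                    s = -s
        return s
    return sum(sign((i, j, k)) * A[i] * B[j] * C[k]
               for i in range(3) for j in range(3) for k in range(3))

def dot_cross(A, B, C):
    # Direct computation of A . (B x C) for comparison.
    bc = (B[1] * C[2] - B[2] * C[1],
          B[2] * C[0] - B[0] * C[2],
          B[0] * C[1] - B[1] * C[0])
    return A[0] * bc[0] + A[1] * bc[1] + A[2] * bc[2]
```

For example, both functions give -3 for A=(1, 2, 3), B=(4, 5, 6), C=(7, 8, 10).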

Thus we have the Levi-Civita tensor.
