Residues of the Dirac Delta

Back to Contents

    In complex analysis the calculus of residues enables us to evaluate integrals of functions that we have no hope of evaluating by direct calculation. Such functions have isolated singularities, points on the number line or on the complex plane where we have no possibility of differentiating the function. Usually those singularities arise at points where the function instructs us to divide some number by zero; such a division gives the function an indefinable value, thereby making derivatives of the function at that point incalculable and, therefore, undefined. When we take the mathematical steps to remove those singularities from our consideration, they leave residues that we can use to evaluate the function.

    If we have in the complex plane a positively oriented (traversed counter-clockwise) simple closed contour C on and within which a function f(z) is analytic (differentiable) except at a point we call z0, which designates an isolated singular point, then the function has at that point a residue

\operatorname{Res}_{z=z_0} f(z) = \frac{1}{2\pi i}\oint_C f(z)\,dz    (Eq’n 1)

In that situation there exists a radius R such that within the annular ring R>|z-z0|>0 the function f(z) has a Laurent expansion,

f(z) = \sum_{n=-\infty}^{\infty} A_n (z-z_0)^n    (Eq’n 2)

in which

A_n = \frac{1}{2\pi i}\oint_C \frac{f(z)}{(z-z_0)^{n+1}}\,dz    (Eq’n 3)

As we have seen (in the essay "Cauchy’s Integral Theorem"), the only term in Equation 2 that survives the integration of Equation 3 is A_{-1}/(z-z_0), so we have the complex number

A_{-1} = \frac{1}{2\pi i}\oint_C f(z)\,dz    (Eq’n 4)

as the residue of f(z) at the isolated singular point z0. In essence we use a complex-number version of Stokes’ theorem to use the integration of a function on the boundary of a small patch of the complex plane to stand in for the integration of the function’s derivatives on the patch itself. Then Cauchy’s Integral Theorem,

\oint_C f(z)\,dz = -\oint_\Omega f(z)\,dz    (Eq’n 5)

tells us that the residue thus calculated stands in direct proportion to the negative of the integral of the function taken over the putative and non-existent boundary Ω of the complex plane, which boundary we feign locating through the limiting process of "approaching" infinity.
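As a numerical sanity check on Equation 4, we can approximate the contour integral around a small circle with a simple Riemann sum. This is a sketch of my own, not part of the essay; the example function f(z)=e^z/z, whose residue at z=0 equals 1, and the grid size are arbitrary choices.

```python
import cmath, math

def residue_at_origin(f, radius=1.0, n=2000):
    """Approximate (1/2πi) ∮ f(z) dz around the circle |z| = radius,
    sampling the contour uniformly in the angle θ (z = radius·e^{iθ})."""
    total = 0j
    for k in range(n):
        theta = 2 * math.pi * k / n
        z = radius * cmath.exp(1j * theta)
        dz = 1j * z * (2 * math.pi / n)  # dz = i z dθ on the circle
        total += f(z) * dz
    return total / (2j * math.pi)

# f(z) = e^z / z has an isolated singularity at z = 0 with residue A_{-1} = 1
res = residue_at_origin(lambda z: cmath.exp(z) / z)
print(abs(res - 1.0))  # essentially zero
```

Because the integrand is periodic in θ, this uniform sum converges extremely fast, so even a modest number of sample points recovers the residue to near machine precision.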

    On the complex plane the Dirac Delta δ(z) has the character of an isolated singularity. It should thus have a residue. To discover that residue we must overcome the fact that the Dirac Delta, as an improper function, has no Laurent series expansion around its point of singularity. And to make the discovery even more difficult, I want to determine the residue, not of the simple δ(z), but of the more complicated δ(f(z)).

    Let’s first ask what leads us to believe that the Dirac Delta has a residue on the complex plane. In that regard I refer to the clear analogy between the sifting property of the Dirac Delta,

\int_{-\infty}^{\infty} f(x)\,\delta(x-a)\,dx = f(a)    (Eq’n 6)

and Cauchy’s Integral Formula in the form

f(z_0) = \frac{1}{2\pi i}\oint_C \frac{f(z)}{z-z_0}\,dz    (Eq’n 7)

That analogy breaks down over the fact that in Equation 6 we integrate across the singularity while in Equation 7 we integrate around it. We gain some hope of repairing that breakdown when we notice that in Equation 6 we have used only one dimension while in Equation 7 we have used two dimensions, those of the complex plane.
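To make the sifting property of Equation 6 concrete, here is a small numerical illustration of my own (the Gaussian width ε, the integration window, and the test function cos x are arbitrary choices): a narrow normalized Gaussian stands in for the Dirac Delta, and integrating it against a function returns that function's value at the spike.

```python
import math

def nascent_delta(u, eps):
    """Normalized Gaussian of width eps; approaches δ(u) as eps → 0."""
    return math.exp(-(u / eps) ** 2) / (eps * math.sqrt(math.pi))

def sift(f, a, eps=1e-3, half_width=0.05, n=20001):
    """Approximate ∫ f(x) δ(x - a) dx over a window centered on x = a."""
    h = 2 * half_width / (n - 1)
    total = 0.0
    for k in range(n):
        x = a - half_width + k * h
        total += f(x) * nascent_delta(x - a, eps) * h
    return total

# Sifting should return f(a): here f(x) = cos x at a = 0.7
val = sift(math.cos, 0.7)
print(val, math.cos(0.7))
```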

    We begin, therefore, by extending the representation of the Dirac Delta off the real axis and out over the entire complex plane. To that end we want to replace the real-valued coordinate x with the complex-valued coordinate z=x+iy. Paul Dirac defined his delta function through the properties δ(x=0)=∞, δ(x≠0)=0, and

\int_{-\infty}^{\infty} \delta(x)\,dx = 1    (Eq’n 8)

We must now resist the temptation to simply replace x with z in that equation. We achieve that resistance by observing that δ(z) can only take the infinite value when both x and y go to zero together and that the differential dz=dx+idy gives us two separate integrals. Certainly, then, δ(z=0)=∞ and δ(z≠0)=0, but how can we rewrite Equation 8? We shall have to carry out a transformation of coordinates.

    First, we note that in more than one dimension in Cartesian coordinates we have

\delta^3(\mathbf{r}) = \delta(x)\,\delta(y)\,\delta(z)    (Eq’n 9)

so that

\int \delta^3(\mathbf{r})\,d^3r = \iiint \delta(x)\,\delta(y)\,\delta(z)\,dx\,dy\,dz = 1    (Eq’n 10)

We know that equation stands true to mathematics because in three dimensions the Dirac Delta can only achieve its nonzero value when x, y, and z all go to zero together. In a Cartesian space of m dimensions, then, with xi representing the coordinate in the i-th dimension, the i-th component of the vector x, we have

\delta^m(\mathbf{x}) = \delta(x_1)\,\delta(x_2)\cdots\delta(x_m)    (Eq’n 11)

    But we don’t always use Cartesian coordinates. Polar coordinates often come into play in physics and engineering. So we want to produce something like Equation 10 in generalized coordinates with a coordinate transformation applied to the differential volume element.

Assume that we have two overlapping coordinate grids xi and qj on an m-dimensional space, in which xi represents the coordinates of a Cartesian grid and qj represents the coordinates of a non-Cartesian grid (such as, for example, the grid of polar coordinates). Assume also that we have a coordinate transformation Ti such that

x_i = T_i(q_1, q_2, \ldots, q_m)    (Eq’n 12)

Let P represent a point that has coordinates a=(a_1,a_2,...,a_m) in the x-grid and α=(α_1,α_2,...,α_m) in the q-grid and let J represent the Jacobian determinant that has elements ∂x_i/∂q_j. For some function F(x) the definition of the Dirac Delta gives us

\int F(\mathbf{x})\,\delta^m(\mathbf{x}-\mathbf{a})\,d^m x = F(\mathbf{a})    (Eq’n 13)

We want to express that equation in terms of the q-frame, so we must make the substitutions d^m x = J d^m q and a_i = T_i(α) for the basic conversion of the coordinates to start. Then we define H(q)=F(x)=F(T_1(q),...,T_m(q)), so we have Equation 13 in the form

\int H(\mathbf{q})\,\delta^m\!\left(\mathbf{T}(\mathbf{q})-\mathbf{T}(\boldsymbol{\alpha})\right) J\,d^m q = H(\boldsymbol{\alpha})    (Eq’n 14)

From that equation we infer that

\delta^m(\mathbf{x}-\mathbf{a}) = \frac{\delta^m(\mathbf{q}-\boldsymbol{\alpha})}{|J|}    (Eq’n 15)

    Any point at which J=0 exists as a singular point of the coordinate transformation. Because J represents a determinant, its vanishing at a point implies a lack of invertibility of the coordinates at that point, an inability to reverse the transformation T_i(q_j) above. That fact means that at the singular point some of the coordinates become indeterminate (e.g. θ and φ in spherical polar coordinates at the origin of the grid): we call those ignorable coordinates.

    Among the coordinates q at the singular point let the qj for the indices with the values k+1≤j≤m represent ignorable coordinates with respect to the Cartesian coordinates a. Any function of q will not depend upon those ignorable coordinates, so, in light of the above equations, especially Equation 15, we must have as true to mathematics

\delta^m(\mathbf{x}-\mathbf{a}) = \frac{\delta(q_1-\alpha_1)\,\delta(q_2-\alpha_2)\cdots\delta(q_k-\alpha_k)}{|\bar{J}|}    (Eq’n 16)

in which

\bar{J} = \int\!\cdots\!\int J\,dq_{k+1}\cdots dq_m    (Eq’n 17)

    For example, in two dimensions we have the transformation between Cartesian coordinates and polar coordinates as

x = r\cos\theta, \qquad y = r\sin\theta    (Eq’n 18)

and

r = \sqrt{x^2+y^2}, \qquad \theta = \arctan\!\left(\frac{y}{x}\right)    (Eq’n 19)

The Jacobian determinant of that transformation comes out as

J = \begin{vmatrix} \partial x/\partial r & \partial x/\partial\theta \\ \partial y/\partial r & \partial y/\partial\theta \end{vmatrix} = \begin{vmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{vmatrix} = r    (Eq’n 20)

which vanishes at the origin of the grid. Thus θ represents the ignorable coordinate at the origin, so k=1 and

\bar{J} = \int_0^{2\pi} r\,d\theta = 2\pi r    (Eq’n 21)

which leads us to

\delta(x)\,\delta(y) = \frac{\delta(r)}{2\pi r}    (Eq’n 22)
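We can check the standard fact behind Equation 20, that the Jacobian determinant of the polar transformation equals r and so vanishes at the origin, with a quick finite-difference computation. This sketch is my own; the sample points and step size are arbitrary choices.

```python
import math

def jacobian_det(r, theta, h=1e-6):
    """Finite-difference Jacobian determinant of the polar map
    (r, θ) → (x, y) = (r cos θ, r sin θ), via central differences."""
    def x(r_, t_): return r_ * math.cos(t_)
    def y(r_, t_): return r_ * math.sin(t_)
    dxdr = (x(r + h, theta) - x(r - h, theta)) / (2 * h)
    dxdt = (x(r, theta + h) - x(r, theta - h)) / (2 * h)
    dydr = (y(r + h, theta) - y(r - h, theta)) / (2 * h)
    dydt = (y(r, theta + h) - y(r, theta - h)) / (2 * h)
    return dxdr * dydt - dxdt * dydr

# J should equal r at any point, shrinking to zero as we approach the origin
for r, t in [(0.5, 0.3), (2.0, 1.1), (1e-9, 2.0)]:
    print(r, jacobian_det(r, t))
```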

    Now we want to shift from Dirac Deltas that have coordinates as arguments to Dirac Deltas that have functions of coordinates for arguments. If we have a function y=f(x), then we have the associated Dirac Delta δ(y) in accordance with

\int \delta(y)\,dy = \int \delta(f(x))\left|\frac{df}{dx}\right| dx = 1    (Eq’n 23)

But that result also equals the integral of δ(x)dx, so we have

\delta(f(x))\left|\frac{df}{dx}\right| = \delta(x-x_0)    (Eq’n 24)

While δ(y) displays its nonzero value wherever y=0, δ(f(x)) displays its nonzero values only at the points that represent the roots of f(x), only those values of x for which f(x)=0. We can use any function as an argument for the Dirac Delta, even a periodic function; for example, δ(sin πx) has its nonzero values only at the integer values of x.

    Next I want to look at the functional-argument analogue of the defining equation of the Dirac Delta,

\int_{-\infty}^{\infty} g(x)\,\delta(f(x))\,dx = \sum_i h(x_i)    (Eq’n 25)

What form must we assign to the function h(x)? To answer that question, start by substituting from Equation 23 to get

\int_{-\infty}^{\infty} g(x)\,\delta(f(x))\,dx = \sum_i \frac{g(x_i)}{\left|f'(x_i)\right|}    (Eq’n 26)

in which the numbers x_i represent the roots of f(x). That statement stands true to mathematics because over each small domain where δ(f(x))≠0 the function g(x) has an essentially constant value, regardless of the shape of f(x), so we can treat it as a constant in the integration.
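Equation 26 is easy to verify numerically. This sketch is my own, with arbitrary choices of f(x)=x²−1, g(x)=x²+2, Gaussian width, and grid: the roots of f sit at x=±1 with |f′(±1)|=2, so the sum over roots gives g(1)/2 + g(−1)/2 = 3.

```python
import math

def delta_eps(u, eps):
    """Normalized Gaussian standing in for the Dirac Delta."""
    return math.exp(-(u / eps) ** 2) / (eps * math.sqrt(math.pi))

def integral_g_delta_f(g, f, lo, hi, eps=1e-3, n=600001):
    """Approximate ∫ g(x) δ(f(x)) dx by a fine Riemann sum."""
    h = (hi - lo) / (n - 1)
    return sum(g(lo + k * h) * delta_eps(f(lo + k * h), eps) * h
               for k in range(n))

f = lambda x: x * x - 1.0        # roots at x = ±1, |f'(±1)| = 2
g = lambda x: x * x + 2.0

numeric = integral_g_delta_f(g, f, -3.0, 3.0)
exact = g(1.0) / 2.0 + g(-1.0) / 2.0   # Σ g(x_i)/|f'(x_i)| = 3
print(numeric, exact)
```

The grid must be fine enough to resolve the nascent delta, whose width in x near each root is roughly ε/|f′|.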

    But if f(x) has a large number of roots, how can we evaluate that sum with a reasonable amount of effort? Because the Dirac Delta represents a singularity, perhaps we can exploit Cauchy’s Integral Theorem. To make the procedure clearer, I want to work out a specific example and then abstract out the general rules afterward. To that end I want to calculate the value of the Riemann zeta function, ζ(j) for j=4 (which finds use in our description of pressure in a photon gas).

    We have

\zeta(4) = \sum_{n=1}^{\infty} \frac{1}{n^4}    (Eq’n 27)

We can’t hope to add up an infinite set of terms directly, so we need to be sneaky. We want to convert that sum into an integral that corresponds to the sum; more specifically, we want the integral to extend over the domain 1≤x<∞ and yet contribute nonzero increments to its Riemannian sum only at the integer values of x. To achieve that end we apply a periodic Dirac Delta to our function and rewrite our sum as

\zeta(4) = \int_1^{\infty} \frac{1}{x^4}\,\delta(\sin\pi x)\,\pi\left|\cos\pi x\right| dx    (Eq’n 28)

That equation comes from adapting Equation 23 to convert the Riemann zeta function from a sum of discrete elements into an integral. Note that the cosine comes from the derivative in the functional differential.

    To make that integral susceptible to Cauchy’s Integral Theorem we must extend its domain out onto the complex plane. We start by replacing x with z=x+iy. That replacement turns the argument of the Dirac Delta in Equation 28 into

\sin\pi z = \sin\pi x\,\cosh\pi y + i\,\cos\pi x\,\sinh\pi y    (Eq’n 29)

Because the hyperbolic cosine never takes a value less than one and the hyperbolic sine only goes to zero when y=0, that function can only go to zero when y=0 and sin πx=0. Thus, that transformation does not add any new terms to the sum in our zeta function.

    But in order to invoke Cauchy’s Integral Theorem, we must extend the domain of the integration over the entire complex plane. That fact necessitates that we extend the domain of the integration from between one and infinity to between minus infinity and plus infinity. That extension will add extra terms to our sum, those that come from the negative integers plus a singularity at x=0. Because we have used an even function, the sum of the negative-integer terms equals the sum of the positive-integer terms, so if we add up all of the terms in the new domain, we need only divide that sum by two to get the correct value of the zeta function. The singularity at the origin of the number line will give us the means to achieve that calculation.

    Cauchy’s Integral Theorem (also known as the Cauchy-Goursat theorem) tells us that if we have a function that remains analytic on a closed contour and on all points enclosed within that contour, then the integral of that function on the contour equals zero. In this case we want the contour to coincide with the boundary enclosing the entire complex plane. Of course, the complex plane has no proper boundary: with infinite radius, the putative boundary cannot represent an actual, specific place where the complex plane ends. For the purpose of using Cauchy’s Integral Theorem we assume a circle of arbitrarily large radius and take the limit of the integral as the radius of that circle increases endlessly.

    Cauchy’s Integral Theorem comes from applying Green’s theorem,

\oint_C \left(P\,dx + Q\,dy\right) = \iint_R \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dx\,dy    (Eq’n 30)

to analytic functions on the complex plane. Cauchy equates the line integral of an analytic complex-valued function on the boundary of a region to the area integral of that function’s "curl" on the region itself, uses the Cauchy-Riemann conditions to zero out the curl, and thereby zeroes out the line integral. In essence Cauchy made a feint at creating the complex analysis analogue of Ampère’s law, but produced instead a simple version in which we preserve the zeroness of the line integral by cutting out any singularities in the region within the line C bounding the region R. But if the function that we want to integrate satisfies a simple criterion, we can apply Jordan’s lemma and develop a true pure-mathematics version of Ampère’s law.

    If we have a function, f(z), whose value diminishes faster than 1/|z| as the magnitude of z increases and if we have a semicircle CR of radius R whose every point conforms to z=R exp[iθ], then we have as true to mathematics the fact that

\lim_{R\to\infty} \int_{C_R} f(z)\,dz = 0    (Eq’n 31)

That equation expresses Jordan’s lemma and it looks very much like Cauchy’s Integral Theorem. It differs only in specifying that CR represent a semicircle centered on the origin of the complex plane. We can close that contour by adding to it the real-number axis, thereby creating a contour that completely encloses the upper half of the complex plane or the lower half of the complex plane. If we have an integral that we need to evaluate over the entire real-number axis, we can exploit the residue theorem, using Jordan’s lemma to eliminate the part of the contour off the real axis; that is, for some f(z) we have

\int_{-\infty}^{\infty} f(x)\,dx = \oint f(z)\,dz = 2\pi i \sum_j \operatorname{Res}_{z=z_j} f(z)    (Eq’n 32)

    The example that we are pursuing has a pole at the point z=0. All of the other singularities on the real axis, also based on the Dirac Delta, when integrated add up to twice the value of the zeta function. So we have, by Equation 32,

(Eq’n 33)

Thus we need to calculate

(Eq’n 34)

In that equation the minus sign comes in because the rest of the integration on the real-number axis yields a positive number, which this integral, in its raw form, must compensate in order to yield the necessary zero. And in jumping from the first to the second line of that equation I have played a number of mathematical tricks. First I extended the domain of the integration over the complex plane (something we had already done implicitly by invoking Jordan’s lemma) and replaced the variable x with z=x+iy (treating y as an implicit, hidden variable that had been present all along). Thus, in the first integral we integrate across the singularity and in the second integral we integrate around it, as Cauchy taught us to do. Next I replaced the Cartesian representation of the complex plane with the corresponding polar representation (z=r exp(iθ) and dz=iz dθ), which necessitated dividing the Dirac Delta by 2πi times its argument (noting that the square root of minus one comes from using the complex plane rather than the standard Cartesian plane of geometry) in accordance with Equation 22. The differential element of area, (πdx)(πdy)=π²r dθdr, reflects my choice to apply a Cauchy integration (assuming that r is held constant during the integration around the contour) and then an integration over radial distance along the real-number axis.

    To convert the integral of Equation 34 into Cauchy integrable form we exploit the fact that

\frac{\pi\cos\pi z}{\sin\pi z} = \pi\cot\pi z = \frac{1}{z} - \frac{\pi^2}{3}z - \frac{\pi^4}{45}z^3 - \frac{2\pi^6}{945}z^5 - \cdots    (Eq’n 35)

Multiplying that equation by the reciprocal fourth power of z and by the Dirac Delta gives us our integrand as an infinite series, of which only the 1/z term survives the Cauchy integration. Thus we rewrite Equation 34 as

(Eq’n 36)

Carrying out the longitudinal integration around a minuscule circle gives us

(Eq’n 37)

Although the argument of the Dirac Delta is a function of θ, the Dirac Delta has a constant value (zero) everywhere except at z=0, where θ becomes an ignorable coordinate, so the Dirac Delta comes through the integration with respect to θ unchanged. We can invoke Equation 24 and then make the substitution (for θ=0) of z=r. That move leads us to

\zeta(4) = \frac{\pi^4}{90}    (Eq’n 38)

because at r=0, cos(πr)=1.

Note that with that technique we can calculate the values of the Riemann zeta function for all of its positive even indices, the first five coming out as

\zeta(2) = \frac{\pi^2}{6}, \quad \zeta(4) = \frac{\pi^4}{90}, \quad \zeta(6) = \frac{\pi^6}{945}, \quad \zeta(8) = \frac{\pi^8}{9450}, \quad \zeta(10) = \frac{\pi^{10}}{93555}    (Eq’ns 39)

Thus we see how we can devise the effective residues of the Dirac Delta and apply complex analysis to integrating functions involving the Dirac Delta. We also see how a problem that we cannot solve in real numbers dissolves readily when we soak it in complex numbers.
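The closed forms for the even-index values of the zeta function are easy to verify against direct partial sums. This check is a sketch of my own; the number of terms is an arbitrary choice, and the neglected tail of the sum for ζ(j) falls below roughly 1/N^(j−1).

```python
import math

# Standard closed forms for the first five even arguments
closed = {
    2: math.pi ** 2 / 6,
    4: math.pi ** 4 / 90,
    6: math.pi ** 6 / 945,
    8: math.pi ** 8 / 9450,
    10: math.pi ** 10 / 93555,
}

def zeta_partial(j, n_terms=200000):
    """Direct partial sum of ζ(j) = Σ 1/n^j over n = 1..n_terms."""
    return sum(1.0 / float(n) ** j for n in range(1, n_terms + 1))

for j, exact in closed.items():
    print(j, zeta_partial(j), exact)
```

The slowest convergence occurs at j=2, where the truncated tail still contributes about 5×10⁻⁶ with the chosen cutoff; the higher indices agree to near machine precision.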

    So now we have a general technique for integrating a function based on a Dirac Delta lying on the real-number line. We start by extending the domain of the function out onto the complex plane, doing so in a way that does not create any further singularities on the domain. Then we can carry out a Cauchy integration around a minuscule contour enclosing the singularity, thereby setting up the simple integration of the Dirac Delta in accordance with its defining equation. That simple integration gives us the residue of the singularity represented by the Dirac Delta.

Appendix: Derivatives of the Dirac Delta

    On first impression we would think that a function that we can describe as a spike of zero width and infinite height could not possibly have a derivative. How could we hope to differentiate such a thing? We certainly can’t apply the usual definition of a derivative. But, contrary to our expectation, the derivatives of the Dirac Delta do exist. We can prove and verify that proposition and determine the form and properties of those derivatives by assuming that they exist and using them in certain integrations by parts.

    Take an arbitrary function of x, multiply it by x and the presumed derivative of the Dirac Delta, and integrate the result with respect to x;

\int f(x)\,x\,\frac{d\delta(x)}{dx}\,dx = \Big[x\,f(x)\,\delta(x)\Big]_{-\infty}^{\infty} - \int \delta(x)\,\frac{d}{dx}\big[x\,f(x)\big]\,dx = -\int f(x)\,\delta(x)\,dx    (Eq’n A-1)

In that calculation the first term of the result equals zero because xδ(x)=0 (which stands true to mathematics because the sifting integral of xδ(x) against any function picks out the value of x at x=0, namely zero). The second term in the second integral drops out of the calculation because

\int x\,\frac{df}{dx}\,\delta(x)\,dx = \left[x\,\frac{df}{dx}\right]_{x=0} = 0    (Eq’n A-2)

which stands true to mathematics because, by the definition of the Dirac Delta, that integral gives us the value of x(df/dx) at x=0. Thus, in Equation A-1 we have two integrals with respect to the same variable equal to each other. That fact necessitates that the integrands equal each other. And we can solve that equality readily to obtain

x\,\frac{d\delta(x)}{dx} = -\delta(x)    (Eq’n A-3)

    We can extend that procedure to higher powers of the derivative through the integration

\int f(x)\,x^n\,\frac{d^n\delta(x)}{dx^n}\,dx = (-1)^n \int \delta(x)\,\frac{d^n}{dx^n}\!\big[x^n f(x)\big]\,dx = (-1)^n\,n!\int f(x)\,\delta(x)\,dx    (Eq’n A-4)

which gives us

x^n\,\frac{d^n\delta(x)}{dx^n} = (-1)^n\,n!\,\delta(x)    (Eq’n A-5)

Again, because any positive power of x multiplied by any function and by δ(x) integrates to zero, all of the terms involving derivatives of f(x) zero out of the integral.

    We can also employ the derivative of the Dirac Delta in an equation analogous to the equation that defines the Dirac Delta;

\int f(x)\,\frac{d}{dx}\delta(x-a)\,dx = \Big[f(x)\,\delta(x-a)\Big]_{-\infty}^{\infty} - \int \frac{df}{dx}\,\delta(x-a)\,dx = -\left.\frac{df}{dx}\right|_{x=a}    (Eq’n A-6)

Again the first result of the integration zeroes out because, if we integrate it with respect to x, it must yield a constant, f(a): we must then treat that result as the derivative of a constant. That then leaves the second term, which, via the sifting action of the Dirac Delta, gives us the negative of the derivative of f(x) evaluated at x=a.
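We can verify numerically that sifting with the derivative of the Dirac Delta returns the negative of the derivative of the test function. This sketch is my own; the nascent delta, its width, and the test function sin x are arbitrary choices.

```python
import math

def delta_prime_eps(u, eps):
    """Derivative of a normalized Gaussian standing in for δ'(u)."""
    return -2.0 * u / (eps ** 3 * math.sqrt(math.pi)) * math.exp(-(u / eps) ** 2)

def sift_with_delta_prime(f, a, eps=1e-2, half_width=0.2, n=40001):
    """Approximate ∫ f(x) δ'(x - a) dx, which should tend to -f'(a)."""
    h = 2 * half_width / (n - 1)
    total = 0.0
    for k in range(n):
        x = a - half_width + k * h
        total += f(x) * delta_prime_eps(x - a, eps) * h
    return total

# With f(x) = sin x at a = 0.5 we expect -f'(a) = -cos(0.5)
val = sift_with_delta_prime(math.sin, 0.5)
print(val, -math.cos(0.5))
```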

    So now we know that the derivatives of the Dirac Delta exist and that we can use them in calculations.

gabh
