The Fundamental Theorem of The Calculus

    We have defined the process of integration as the limit of a Riemannian sum. On the real-number line, on which we represent an arbitrary number by x, we have a domain X on which we define a continuous function f(x). We divide the domain X into sub-domains of width ∆x and calculate on each of those sub-domains the product of that width and the value of f(x) at some point in that sub-domain. We then imagine recalculating those products as we diminish ∆x toward zero (that is, as we transform it into the differential dx) and summing them. We thus obtain

F(x) = ∫f(x)dx = lim(∆x→0) Σ f(x)∆x        (Eq'n 1)

in which F(x) represents the function that corresponds to the indefinite integral of f(x) with respect to x. In essence we have summed the areas of a series of rectangles that, taken together, approximate the area between a curve and a straight line. We don't expect the area that we calculate to equal the actual area under the curve, but the limiting process, by narrowing the rectangles, makes the difference between the two areas diminish until we can dismiss it as negligible, thereby validating the integral.

    If we integrate f(x) between two definite values of x, we get

∫_a^b f(x)dx = F(b) - F(a)        (Eq'n 2)

That statement tells us that for any continuous function f(x) we have a function F(x) such that the difference between the values of F(x) at the points x=b and x=a gives us a number that matches the number that we derive from the Riemannian sum of the minuscule increments f(x)dx over the interval between a and b.
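
    As a purely numerical illustration of that limiting process, consider the following sketch (added here as an aside; the integrand f(x)=x^2, the antiderivative F(x)=x^3/3, and the interval [1,3] are arbitrary example choices), which watches the Riemannian sum creep toward F(b)-F(a) as the width of the sub-domains shrinks:

# Sketch: the Riemannian sum of f(x) = x^2 on [a, b] approaches
# F(b) - F(a), with F(x) = x^3/3, as the sub-domain width dx shrinks.

def riemann_sum(f, a, b, n):
    """Left-endpoint Riemannian sum over n sub-domains of width dx."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

def f(x): return x ** 2         # the integrand f(x)
def F(x): return x ** 3 / 3.0   # an antiderivative F(x)

a, b = 1.0, 3.0
exact = F(b) - F(a)             # 26/3
for n in (10, 100, 1000, 10000):
    approx = riemann_sum(f, a, b, n)
    print(n, approx, abs(approx - exact))

Running the sketch shows the gap between the sum and F(b)-F(a) shrinking roughly in proportion to the sub-domain width, which is just the convergence that the limiting process asserts.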

    The standard proof of the theorem goes like this: For some function f(x) continuous on the domain between and including the points x=A and x=B we have

F(x) = ∫_c^x f(t)dt,   A ≤ c ≤ B        (Eq'n 3)

as a well-defined integral (the limit of the appropriate Riemannian sum) subject to the proviso that F(c)=0. We consider an increment of F,

F(x+h) - F(x) = ∫_c^(x+h) f(t)dt - ∫_c^x f(t)dt = ∫_x^(x+h) f(t)dt        (Eq'n 4)

in which I have exploited the linearity of the integral with respect to its integrand f(t) and its additivity over its domain. Let M represent the maximum value that f(t) achieves on the domain between and including the points t=x and t=x+h and let m represent its minimum value on the same domain. We know, then, that

mh ≤ F(x+h) - F(x) ≤ Mh

because we know that the integral is monotonic with respect to its integrand; that is, if one integrand nowhere exceeds another over the span of the domain, then its integral cannot exceed the other's, so integrating the constant values m and M over an interval of width h brackets the integral of f(t). So now we know that

[F(x+h) - F(x)]/h = (1/h)∫_x^(x+h) f(t)dt        (Eq'n 5)

lies in the range between and including the points m and M. But f(t) is a continuous function of t, so by the mean-value theorem for integrals we know that there must exist a point ξ on the interval between and including the points t=x and t=x+h such that

f(ξ) = (1/h)∫_x^(x+h) f(t)dt = [F(x+h) - F(x)]/h        (Eq'n 6)

so that we have as true to mathematics the statement that

dF/dx = lim(h→0) [F(x+h) - F(x)]/h = lim(h→0) f(ξ) = f(x)        (Eq'n 7)

because ξ→x as h→0. Thus we prove and verify the proposition that f(x) is the derivative of F(x), thereby verifying that we have a proper integral.
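
    The same conclusion lends itself to a quick numerical check (a sketch added here as an aside; the integrand f(t)=cos t, the lower limit c=0, the point x=1.2, and the step sizes h are arbitrary example choices): compute F(x) by a fine Riemannian sum and confirm that its difference quotient reproduces f(x).

import math

# Sketch: F(x) = the integral of f(t) = cos(t) from c = 0 to x, computed by a
# fine midpoint Riemannian sum; its difference quotient at x should approach
# f(x) as h shrinks, as Equation 7 asserts.

def F(x, c=0.0, n=100_000):
    f = math.cos
    dt = (x - c) / n
    return sum(f(c + (i + 0.5) * dt) * dt for i in range(n))

x = 1.2
for h in (1e-2, 1e-3, 1e-4):
    quotient = (F(x + h) - F(x)) / h
    print(h, quotient, math.cos(x))   # the last two columns nearly agree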

    Next let G represent any antiderivative of f (that is, dG/dx=f) and let F represent the integral function

F(x) = ∫_A^x f(t)dt        (Eq'n 8)

Because dF/dx=f, we have dF(x)/dx=dG(x)/dx for all values of x in the interval [A,B], so we also have d(G-F)/dx=0 or G-F=k, in which statement k represents some constant. That fact gives us G(x)=F(x)+k. Since F(A)=0, we have G(A)=k, so F(x)=G(x)-G(A) for all values of x in the interval [A,B]. Setting x=B then gives us at last

∫_A^B f(x)dx = F(B) - F(A) = G(B) - G(A)        (Eq'n 9)

Thus we have as true to mathematics the statement that for a function that has values on an interval we can devise a second function, the integral function, whose values at the endpoints of that interval, taken as a difference, sum up the original function over that interval.

On a Two-Dimensional Domain

    I want to integrate a function of two independent variables, f(x,y), around a closed curve C, consisting of all of the points (x,y) that satisfy the equation defining the curve and enclosing an area A (for example, we might have C specified by the equation y = ±√(a² - x²), a circle of radius a). Let's begin by dividing the area A inside the curve into minuscule squares, each having an area dxdy. Thus we model C in approximation by a jagged line comprising segments of length dx and dy.

    Attend to one of those squares, one whose lower left corner lies on the point (x,y). We take as our convention that we traverse the square's boundary in the clockwise sense; that is, beginning at (x,y), proceeding to (x,y+dy), thence to (x+dx,y+dy), and so on until we return to (x,y). We also specify that we evaluate f(x,y) at the point halfway along each boundary segment whose length multiplies the function. For our integrand, then, we have

dF = f(x, y+dy/2)dy + f(x+dx/2, y+dy)dx - f(x+dx, y+dy/2)dy - f(x+dx/2, y)dx        (Eq'n 10)

If we gather terms with common factors, we get

dF = [f(x+dx/2, y+dy) - f(x+dx/2, y)]dx + [f(x, y+dy/2) - f(x+dx, y+dy/2)]dy        (Eq'n 11)

The third and fourth terms on the right side comprise the partial differential of the function with respect to x. The first and second terms comprise the partial differential of the function with respect to y. Thus we have

dF = (∂f/∂y)dydx - (∂f/∂x)dxdy = (∂f/∂y - ∂f/∂x)dxdy        (Eq'n 12)

    If we carry out that analysis on the square whose lower left corner lies on the point (x+dx,y), we will get the same result. So let's carry out that analysis on both squares together. In that case the product of the function and the increment of distance along the border on the side extending from (x+dx,y) to (x+dx,y+dy) makes a negative contribution from the first square and a positive contribution from the second square. Those contributions cancel each other out, so only the edges where the squares don't meet contribute to dF. We have, then,

dF = f(x, y+dy/2)dy + f(x+dx/2, y+dy)dx + f(x+3dx/2, y+dy)dx - f(x+2dx, y+dy/2)dy - f(x+3dx/2, y)dx - f(x+dx/2, y)dx        (Eq'n 13)

Again, gathering terms with common factors gives us

dF = [f(x+dx/2, y+dy) - f(x+dx/2, y)]dx + [f(x+3dx/2, y+dy) - f(x+3dx/2, y)]dx + [f(x, y+dy/2) - f(x+2dx, y+dy/2)]dy        (Eq'n 14)

    If we continue this process, we will find that we have constructed a Riemannian sum in two dimensions, for which sum we find the following true to mathematics: all of the squares' boundaries inside the area A cancel each other's contributions to the line integral. Thus, only the boundary lines that coincide with the loop C that bounds A (in the Riemannian limit) contribute to the line integral and we have as true to mathematics

∮_C f(dx + dy) = ∫_A (∂f/∂y - ∂f/∂x)dxdy        (Eq'n 15)

Mathematicians know this result as Green's Theorem, a form that the fundamental theorem of the calculus takes when we apply it to functions on planar domains.
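
    A small numerical check makes the theorem concrete (a sketch added here as an aside; it uses the standard counter-clockwise statement of Green's theorem, the loop integral of (P dx + Q dy) equaling the area integral of (∂Q/∂x - ∂P/∂y), rather than the single-function, clockwise bookkeeping used above, and the choices P = -y, Q = xy, and the unit square as the domain are arbitrary examples):

# Sketch of Green's theorem in its standard counter-clockwise form,
#   loop integral of (P dx + Q dy)  =  area integral of (dQ/dx - dP/dy),
# on the unit square with the arbitrary example P = -y, Q = x*y.

def P(x, y): return -y
def Q(x, y): return x * y
def dQdx(x, y): return y       # partial derivative of Q with respect to x
def dPdy(x, y): return -1.0    # partial derivative of P with respect to y

n = 400
h = 1.0 / n

# Loop integral, counter-clockwise around the unit square, midpoint rule.
loop = 0.0
for i in range(n):
    t = (i + 0.5) * h
    loop += P(t, 0.0) * h           # bottom edge: dx = +h, dy = 0
    loop += Q(1.0, t) * h           # right edge:  dx = 0,  dy = +h
    loop += P(1.0 - t, 1.0) * (-h)  # top edge:    dx = -h, dy = 0
    loop += Q(0.0, 1.0 - t) * (-h)  # left edge:   dx = 0,  dy = -h

# Area integral of (dQ/dx - dP/dy) over the unit square, midpoint rule.
area = 0.0
for i in range(n):
    for j in range(n):
        x, y = (i + 0.5) * h, (j + 0.5) * h
        area += (dQdx(x, y) - dPdy(x, y)) * h * h

print(loop, area)   # both come out at 3/2

Both totals come out at 3/2, which the exact calculation confirms.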

    Now let's assume that we have a second coordinate system x', y' related to the first through the transformation

x' = x cos θ + y sin θ,   y' = -x sin θ + y cos θ        (Eq'ns 16)

in which equations θ represents the angle between our x-axis and the x'-axis. Assume further that we have two functions, f(x,y) and g(x,y), such that

f'(x',y') = f(x,y) cos θ + g(x,y) sin θ,   g'(x',y') = -f(x,y) sin θ + g(x,y) cos θ        (Eq'ns 17)

In that case we have as true to mathematics the statement

f dx + g dy = f' dx' + g' dy'        (Eq'n 18)

which means that f(x,y) and g(x,y) comprise respectively the x- and y-components of a vector field. Equation 18 represents the inner (or dot) product of the two vectors f=(f,g) and dx=(dx,dy); that is, f⋅dx. Thus the loop integral of Equation 15 becomes

∮_C f⋅dx = ∫_A (∂g/∂x - ∂f/∂y)ẑ⋅da        (Eq'n 19)

in which ẑ represents the unit vector perpendicular to the x- and y-directions and we take the minuscule-element-of-area vector da to point in that same perpendicular direction. To maintain the proper form of the equation, specifically to avoid inserting a cosine into the expression on the right side of the equality sign, we must assert that the integrand's vector also points entirely in the z-direction.

    Again, Equation 19 expresses what mathematicians usually call Green's theorem, the form that the fundamental theorem of the calculus takes when we integrate a function over a bounded region of a plane. As in the case of integration on a line, we can divide our plane and its boundary into a finite, if large, number of minuscule elements, thereby making the additions on both sides of the equality sign finite; thus, we know that we have a valid result.

    If we had carried out that analysis with the area A lying in the y-z plane or the z-x plane, then we would have obtained the same equation but with, respectively, (∂f_z/∂y - ∂f_y/∂z) and (∂f_x/∂z - ∂f_z/∂x) in the integrand of the expression on the right side of the equality sign, writing f_x, f_y, and f_z for the x-, y-, and z-components of the field. Again respectively, the accompanying unit vectors and the element-of-area vectors must point entirely in the x-direction and the y-direction.

On a Three-Dimensional Domain

    If we orient the area A in an arbitrary direction in a three-dimensional domain, then we must combine all three of the differential vectors above into one vector, the curl of the vector field f. In that case Equation 19 becomes

∫_A (∇×f)⋅da = ∮_C f⋅dx        (Eq'n 20)

in which the nabla (or del) represents the vector operator

∇ = (∂/∂x, ∂/∂y, ∂/∂z)        (Eq'n 21)

    Equation 20 gives us a version of Stokes' theorem, which I will discuss in greater detail below. Again we have obtained a result that displays finite additivity: we can identify two disjoint regions and their boundaries on our two-dimensional domain, evaluate the integrals in Equation 20 over their union, and obtain the sum of the integrals' values on the two regions taken separately. The contribution that each area makes to the integral on the left side of the equality sign does not change, even when we so define our regions that they have a common boundary; the contributions to the line integral from the common boundary will cancel (because we integrate them in opposite directions), thereby negating the common boundary's existence as a boundary.
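
    For reference (a standard expansion added here, writing f_x, f_y, and f_z for the x-, y-, and z-components of f), the curl appearing in Equation 20 expands as

∇×f = (∂f_z/∂y - ∂f_y/∂z, ∂f_x/∂z - ∂f_z/∂x, ∂f_y/∂x - ∂f_x/∂y)

and when the area A lies in the x-y plane, so that da points along the z-direction, the product (∇×f)⋅da reduces to (∂f_y/∂x - ∂f_x/∂y)dxdy, which reproduces the integrand of Equation 19 with f_x=f and f_y=g.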

    Now let's try something else.

    A vector field f exists at every point in a region of space. We identify a volume V within that region of space and a surface S bounding that volume. As we did when we subdivided an area A into an array of minuscule squares of dimensions dx and dy, so we can subdivide the volume V into an array of minuscule cubes of dimensions dx, dy, and dz in such a way that the exposed faces of the outermost cubes comprise a surface that differs from S by an arbitrarily small amount.

    Imagine the cube whose front lower left corner lies on the point (x,y,z) and calculate the sum of f⋅da over all six of its faces. In calculating the dot product we exploit our knowledge that on each face of the cube only the component of the field f perpendicular to the face survives the process, so we have for the sum

Σ f⋅da = [f_x(x+dx,y,z) - f_x(x,y,z)]dydz + [f_y(x,y+dy,z) - f_y(x,y,z)]dzdx + [f_z(x,y,z+dz) - f_z(x,y,z)]dxdy = (∂f_x/∂x + ∂f_y/∂y + ∂f_z/∂z)dxdydz        (Eq'n 22)

That equation gives us the three-dimensional analogue of Equations 11 & 12, so we know that when we add up the contributions from adjacent cubes the contributions from faces that coincide with each other cancel. We thus obtain

∮_S f⋅da = ∫_V (∇⋅f)dV        (Eq'n 23)

    That equation expresses the Ostrogradsky-Gauss theorem. Note that we have represented the divergence of the vector field f as the dot product of nabla on the vector field, just as we represented the curl of the vector field as the vector cross product of nabla on the vector field.

    Mathematicians also call Equation 23 the divergence theorem. We verbalize the theorem by saying that the flux integral of a vector field f over the boundary S of a volume V equals the integral of the divergence of f over the volume V.
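
    As with Green's theorem, a small numerical check makes the statement vivid (a sketch added here as an aside; the field f = (x^2, y^2, z^2) and the unit cube are arbitrary example choices): the outward flux of f through the six faces of the cube matches the volume integral of ∇⋅f = 2x + 2y + 2z.

# Sketch of the divergence theorem on the unit cube [0,1]^3 with the
# arbitrary example field f = (x^2, y^2, z^2), whose divergence is
# 2x + 2y + 2z.

def fx(x, y, z): return x * x
def fy(x, y, z): return y * y
def fz(x, y, z): return z * z
def div_f(x, y, z): return 2 * x + 2 * y + 2 * z

n = 60
h = 1.0 / n

# Outward flux of f through the six faces of the cube, midpoint rule.
flux = 0.0
for i in range(n):
    for j in range(n):
        u, v = (i + 0.5) * h, (j + 0.5) * h
        flux += (fx(1.0, u, v) - fx(0.0, u, v)) * h * h   # faces x = 1 and x = 0
        flux += (fy(u, 1.0, v) - fy(u, 0.0, v)) * h * h   # faces y = 1 and y = 0
        flux += (fz(u, v, 1.0) - fz(u, v, 0.0)) * h * h   # faces z = 1 and z = 0

# Volume integral of the divergence, midpoint rule.
volume = 0.0
for i in range(n):
    for j in range(n):
        for k in range(n):
            x, y, z = (i + 0.5) * h, (j + 0.5) * h, (k + 0.5) * h
            volume += div_f(x, y, z) * h ** 3

print(flux, volume)   # both come out at 3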

Stokes' Theorem

    On a domain m with boundary ∂m on a manifold M we have a smooth, continuous function F that we can differentiate with respect to any generalized coordinate q on the manifold. In that circumstance we have as true to mathematics the statement

∫_m dF = ∫_∂m F        (Eq'n 24)

which expresses Stokes' theorem in general. We can also see that equation as the most general extension of the fundamental theorem of the calculus.

In the Physical World

    In the realm that we call Reality, the realm that we posit outside ourselves, those results have a special significance. Consider, for example, what we get if we replace the generic vector field above with the electric and magnetic fields of physics.

    If we seek to integrate the divergence of the electric field in a volume of space bounded by a closed surface and correlate that integral with the surface integral of the electric field itself on that bounding surface, we obtain Gauss's law of the electric field, the first of Maxwell's Equations of Electromagnetism. If we carry out the same procedure on a magnetic field, we obtain Gauss's law of the magnetic field, the second of Maxwell's Equations.
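
    Spelled out in SI units (a standard statement added here for concreteness; ρ represents the electric charge density and ε₀ the permittivity of vacuum), those two correlations read

∮_S E⋅da = ∫_V (∇⋅E)dV = (1/ε₀)∫_V ρdV   and   ∮_S B⋅da = ∫_V (∇⋅B)dV = 0,

the first because ∇⋅E = ρ/ε₀ and the second because ∇⋅B = 0, which encodes the absence of magnetic monopoles.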

    If, on the other hand, we seek to integrate an electric field around a closed loop bounding a certain surface, we will find that our result correlates with the integration of the curl of the electric field over the surface. But we know that we can describe any electric field as the negative gradient of an electrostatic potential minus the partial time derivative of a magnetic vector potential. Applying the curl operator to that description makes the gradient vanish automatically and leaves the negative curl of the partial time derivative of a magnetic vector potential. Because the differential operators commute with each other, we can restate that description as the negative partial time derivative of the curl of a magnetic vector potential, which we must recognize as the negative time derivative of a magnetic induction field. Thus we correlate the integral of an electric field around a loop with the negative integral of the partial time derivative of a magnetic induction field over the surface enclosed within the loop. In that way we get Faraday's law of electromagnetic induction, the third of Maxwell's Equations.
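
    Written out explicitly (a standard calculation added here, with φ representing the scalar potential, A the magnetic vector potential, and B = ∇×A the magnetic induction field), that chain of reasoning reads

∇×E = ∇×(-∇φ - ∂A/∂t) = -∇×(∇φ) - ∂(∇×A)/∂t = -∂B/∂t,

because the curl of a gradient vanishes identically; applying Equation 20 over the surface bounded by the loop, held fixed in time, then equates the loop integral of E⋅dx to the negative time derivative of the surface integral of B⋅da, which is Faraday's law.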

    Can we obtain Ampere's law in the same way? We must integrate a magnetic field around a loop bounding a surface over which we integrate the curl of that field. To obtain Ampere's law we must then recognize that the curl of the magnetic field equals the permeability of vacuum multiplied by the density of electric current (current per unit area) passing through the surface of integration. Completing Ampere's law then requires a step that goes beyond the scope of this essay. The relevant derivation appears in the Map of Physics.

