Infinitesimal


    Having looked at the infinite, the endlessly large, what can we say about the infinitesimal, the endlessly small? As with infinity, we must take care in talking about the infinitesimal; in particular, we must not use the word as though it referred to a number.

    Suppose we ignore that advice. What kind of trouble would we get into? Let's define the infinitesimal as a number smaller than any number greater than zero, or as a quantity less than any finite quantity yet not equal to zero. If the infinitesimal denotes a number, let's represent it with H. If X represents any natural number, can we calculate H/X? If we can, then for any X greater than one the quotient H/X is a number greater than zero yet smaller than H, so H doesn't represent a number smaller than any number greater than zero. If we can't, then H doesn't represent a number. So we can say, in general, that if H is infinitesimal with respect to the elements of some set, then H cannot be an element of that set.
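
    A small computational sketch of my own (not part of the original argument) makes the point concrete, with Python's floating-point numbers standing in for the set in question:

        # The smallest positive double-precision float (a subnormal,
        # roughly 5e-324) is the closest thing floats have to a
        # "smallest number greater than zero".
        h = 5e-324

        print(h > 0)       # True: h is greater than zero
        print(h / 2)       # 0.0: halving it collapses to zero
        print(h / 2 == 0)  # True: no float sits between h and zero

    Either the division stays inside the set and yields something smaller, or it falls out of the set entirely; a genuine infinitesimal member of the floats is impossible, just as H cannot belong to any set with respect to which it is infinitesimal.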

    To explore the concept of the infinitesimal we try to describe the smallest possible fraction, which we assume (reasonably) will take the form of a unit fraction. As a ratio, a unit fraction consists of a one divided by some natural number greater than one, so our smallest possible fraction consists of a one divided by the largest possible number. But we have already seen that a largest possible number does not exist. The natural numbers grow larger endlessly, striving toward infinity with no possibility of reaching it, so the denominator in our unit fraction would grow endlessly in pursuit of the elusive infinity. Thus our fraction shrinks endlessly, coming ever closer to zero but never reaching it. Like infinity, the infinitesimal lies forever beyond our grasp and comes to no definite conclusion.
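
    We can watch that chase numerically. Here is a brief sketch of my own using Python's exact rational arithmetic: however large we make the denominator, the unit fraction stays strictly greater than zero:

        from fractions import Fraction

        # Unit fractions 1/n shrink endlessly as n grows...
        for power in (1, 10, 100, 1000):
            f = Fraction(1, 10 ** power)
            # ...but no denominator, however large, drives them to zero.
            assert f > 0
        print("every 1/n tested remains strictly greater than zero")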

    If we try to create a smallest possible fraction as a decimal extension, the infinitesimal gets even stranger. The infinitesimal must lie between zero and the smallest possible fraction. To make a fraction infinitely small we must describe it as a string of zeros extending endlessly to the right of the decimal point with a one at the right end of that string. But an infinite string of zeros doesn't have a right end, so we can't actually place the one into the fraction. Yet neither can we dismiss the one: if we did, our putative smallest fraction would equal zero, and that would leave no room for the existence of the infinitesimal. By making our fraction endlessly smaller we shove the infinitesimal right up against zero, but it won't merge with that expression of emptiness.

    Another way of looking at infinitesimals displays them as spacers between certain elements of the set of real numbers. To see how that works calculate the decimal equivalent of 1/9. We know that we get an endless string of ones to the right of the decimal point, but look more closely at how we get them. In this infinitely long division we divide nine into one and get one tenth plus a remainder of one tenth. We divide nine into that remainder and get one hundredth plus a remainder of one hundredth. Continuing that process, we generate an infinitely long string of ones and an infinitely small remainder.
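
    The division can be mechanized. The sketch below (my own illustration, not the author's) runs the schoolbook long division of one by nine and, at every step, checks that the digits produced so far plus the still-undivided remainder account exactly for 1/9:

        from fractions import Fraction

        # Schoolbook long division of one by nine, tracking the
        # remainder that still awaits division.
        carry = 1                # integer remainder in the division
        digits = Fraction(0)     # decimal digits produced so far
        for place in range(1, 6):
            carry *= 10          # bring down a zero
            digit, carry = divmod(carry, 9)
            digits += Fraction(digit, 10 ** place)
            remainder = Fraction(carry, 10 ** place)  # the undivided part
            # Invariant: digits plus remainder/9 equals 1/9 exactly.
            assert digits + remainder / 9 == Fraction(1, 9)
            print(f"place {place}: digit {digit}, undivided remainder {remainder}")

    Each pass yields another one and shrinks the undivided remainder by a factor of ten, yet the remainder never vanishes.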

    Multiply that fraction by nine. The product of multiplying 1/9 by nine equals one. Multiplying an endless string of ones to the right of the decimal point plus a remainder by nine yields an endless string of nines to the right of the decimal point plus the same remainder. (The remainder represents a part of the original number that we have not yet divided by nine, so we also don't multiply it by nine when we seek to undo the division.) So we say that 0.99999... plus a remainder equals one.
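
    Exact arithmetic confirms that bookkeeping. In this sketch of mine, after N digits the division leaves 1/9 equal to 0.11...1 plus a remainder of one part in 10^N awaiting division by nine; multiplying the digits by nine and carrying the remainder across unchanged reconstitutes exactly one:

        from fractions import Fraction

        N = 5
        ones = sum(Fraction(1, 10 ** k) for k in range(1, N + 1))  # 0.11111
        remainder = Fraction(1, 10 ** N)   # the part not yet divided by nine

        # 1/9 equals the digits plus the remainder awaiting division:
        assert ones + remainder / 9 == Fraction(1, 9)

        # Multiply the digits by nine; carry the remainder across unchanged:
        nines = 9 * ones                   # 0.99999
        assert nines + remainder == 1      # 0.99999 plus the remainder is one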

    An alternate way of looking at that analysis comes from the October 2006 issue of Discover Magazine in a small item titled "Fuzzy Math...Almost Is Enough" (and no proper mathematician would ever agree with that statement). Define X=0.99999..., the ellipsis indicating that the string of nines extends to the right without end. Multiply that by ten to get 10X=9.99999... and subtract X to get 9X=9.00000.... If we divide that equation by nine, we get X=1.00000....

    So on one hand we get 1.00000...=0.99999... plus a remainder and on the other hand we get simply 1.00000...=0.99999.... Which of those analyses must we accept as the correct one?

    I say that we must accept the first analysis as the correct one. The second analysis contains a very subtle error that, when corrected, leaves us with the same result we get in the first analysis. Correcting the error also illustrates the application of the infinitesimal in areas other than "close to zero".

    The error occurs in the multiplication and subtraction in the second analysis. Suppose we define X to consist of a string of N nines to the right of the decimal point. When we multiply that X by ten we get a nine on the left side of the decimal point and N-1 nines on the right side of the decimal point: the multiplication has simply shifted each nine one decimal place to the left of its original position. After we subtract X from 10X we have a nine minus a fraction that consists of N-1 zeros on the right side of the decimal point and a nine in the N-th place. We divide that number by nine and get a one minus a fraction that consists of N-1 zeros on the right side of the decimal point and a one in the N-th place. That fact remains true to Mathematics regardless of how big we allow N to become. If we let N run on toward infinity, the fraction approaches the infinitesimal, but, of course, never reaches it. Thus we must say that 0.99999...=1.00000... minus a minuscule fraction, which corresponds to what we obtained in the first analysis. In this case the infinitesimal acts as a spacer between 1.00000... and 0.99999..., one that requires an actual number in the form of a very small fraction to bridge it.
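
    The finite-N bookkeeping checks out exactly. In the following sketch (mine, not the author's), X carries N nines to the right of the decimal point; the multiply-and-subtract procedure returns that same X, and X falls short of one by exactly one part in 10^N, however large we make N:

        from fractions import Fraction

        for N in (1, 5, 20):
            X = Fraction(10 ** N - 1, 10 ** N)  # 0.99...9 with N nines
            nine_X = 10 * X - X                 # the multiply-and-subtract step
            assert nine_X / 9 == X              # dividing by nine recovers X
            # X falls short of one by exactly one part in 10^N:
            assert 1 - X == Fraction(1, 10 ** N)
            print(f"N = {N}: 1 - X = {1 - X}")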

    But we don't pursue knowledge of the infinitesimal merely in order to spoil parlor tricks. We want to have mathematics as an expression of pure logic: ultimately we want to achieve the perfect certainty that we commonly associate with arithmetic calculations. To that end we cannot allow any slippage to creep into our logic, not even an infinitesimal. In some respects mathematicians are very much like lawyers: when they speak or write formally, they do so not only to be understood, but also to ensure that they are not misunderstood. So often the formal content of mathematics differs from the content of the same subject in the popular imagination. The calculus gives us a good example of that fact.

    In the popular imagination the calculus involves the manipulation of infinitesimals. At first, even the mathematicians accepted that idea. But that acceptance went away as the formal content of Mathematics evolved.

    Consider differentiation. We have some function of x (say f(x)=x^2) and we want to know the rate at which that function changes as we change the value of x. We gain that information by calculating the derivative of the function with respect to x; that is, we calculate the ratio of the amount the function changes with a given change in x to the given change in x. To make that ratio as accurate as possible we want to make the change in x as small as possible, so we use the differential dx to represent that change. We begin by calculating df=f(x+dx)-f(x)=2x(dx)+(dx)^2 and divide that by dx to get 2x+dx. We define the derivative as the limit that expression approaches as we let dx shrink without limit; in this case we get df/dx=2x.
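
    A numerical sketch of my own (assuming f(x)=x^2 evaluated at x=3) shows the difference quotient settling toward 2x as dx shrinks, while every actual dx leaves the quotient above the derivative by the leftover dx term:

        def f(x):
            return x * x

        x = 3.0
        for dx in (1.0, 0.1, 0.001, 1e-6):
            quotient = (f(x + dx) - f(x)) / dx  # algebraically 2x + dx
            print(f"dx = {dx}: difference quotient = {quotient}")

        # The quotients approach the derivative 2x = 6 as dx shrinks,
        # but each one still carries the leftover dx term
        # (plus a little floating-point rounding).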

    Of course dx cannot actually represent the infinitesimal. And we don't need it to do so. If our function represents some astrophysical quantity that we want to study and we make x equal some number of light-years, then making dx equal a few millimeters certainly makes it small enough that dropping it from the derivative will make no discernible difference in our calculations. If that turns out not to stand true to our study, then we can always make dx smaller. The fact that we can shrink dx as much as we like makes differentiation of functions possible. But however much we shrink dx, it can never become an actual infinitesimal.

    As with infinity, we must take care never to use the infinitesimal as an actual number. It merely denotes the fact that some arithmetic processes can never end and never come to a definite conclusion.

