The Rules of Arithmetic
I claim at the beginning that I intend to create the Map of Physics as a formal system in the sense in which David Hilbert applied the term. As a formal system the Map of Physics comprises a set of symbols, a small set of initiating propositions (axioms) that relate those symbols to each other, and an ever-widening array of propositions (theorems) that we deduce from those elements through the rules of logic. In this particular system we use as symbols the names that we give to fundamental concepts that we obtain by abstraction from our percepts of what we call Reality. Thus I intend to make the Map of Physics not only true to logic but also true to Reality, its propositions giving us descriptions that mimic accurately the structure and behavior of Reality.
This seems an obvious point, but it bears overt stating nonetheless: the Map of Physics contains as a subset the rules and objects of arithmetic. That proposition will become especially relevant when someone extends the Map widely enough to run it up against Gödel's Theorem. But it also gives us a problem at the beginning and I want to take a look at that problem before going any further.
That problem became clear to me when I read an article on the Op-Ed page of the Los Angeles Times of 2001 Aug 21. In that article Crispin Sartwell, who teaches philosophy at the Maryland Institute College of Art, described mathematics as "a sort of necromancy or pagan religion" and claimed that mathematicians deal entirely with entities of which they have no clear conception. He came to that conclusion when he found that he could not obtain a satisfactory definition of the number seven.
Mr. Sartwell's dictionary defines seven as "the cardinal number between 6 and 8", a definition that obliges us to know the definitions of six and eight in order to understand seven. That fact, which Mr. Sartwell called "this apparent emptiness of the simplest notions of mathematics", moved him to issue a challenge on his website (www.crispinsartwell.com), in which challenge he seeks a defensible definition of seven. But the dictionary already provides such a definition: mathematicians see the purest essence of seven-ness in the phrases "successor to six and precursor of eight". We need nothing more.
Let me give a more explicit statement to show what I mean. Mathematicians see numbers simply as names in a fixed sequence of names and see the purest essence of number in the phrase "fixed sequence". That statement provides what the mathematicians would call a necessary and sufficient definition of number, necessary because we must say at least that much and sufficient because we need say no more to define number perfectly.
You might say, as Mr. Sartwell does, that number "does not actually refer to anything in the world" and I must say that you are correct. Numbers do indeed refer to nothing in the world until we use them to count something. Then we depend upon that fixed sequence nature of numbers to ensure that we can rely upon our counts. Absent that fact we could not use numbers and they would, as Mr. Sartwell claims, have no meaning for us. Consider the title of Mr. Sartwell's article, "Let's Pay This Guy Between $6 and $8" (which indicates, as Mr. Sartwell acknowledges on his website, that he wrote the article as something of a joke), and you know with certainty that Mr. Sartwell understands the meaning of seven dollars even if he has in mind no clear picture that corresponds to the abstraction "seven".
As for the charge of necromancy, I can say that mathematics offers some very strange results. All too often mathematicians seem to pull rather bizarre rabbits from exquisitely empty hats. But mathematics has nothing to do with magic or mysticism, however much Pythagoras and his successors would like to disagree. In pursuing what they call pure mathematics, mathematicians play a game, one conceptually no different from chess. They take a collection of markers (numbers) and a few simple rules and parlay them into an ever-widening array of moves and plays. Those of us who seek to understand Reality have found that we can use the formal system that game produces to create a description of Reality that aids our understanding. That fact gives us meaning enough, however abstract we may find the mathematics behind it.
So now let's review the basics of the game:
We first want to enlarge our collection of markers, to ensure that we always have enough for whatever task we choose that requires counting of objects. To that end we use a ploy that involves conceptually gathering objects into groups of ten, then tens of tens, then tens of tens of tens, and so on. By that means we devise new number names, a different name for each group, from the first ten names in our sequence. We also draw pictures of numbers from ten pictures, the Ghobar numerals, that we got from India through Muslim mathematicians in the Twelfth Century. To draw such a picture we again group objects into groups and draw a Ghobar picture for each of the leftovers from each stage of the grouping until we can make no more complete groups. So now we have the means to name and to draw any number that we could possibly need.
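That grouping ploy can be made concrete in a few lines of Python. The sketch below (the function name is my own invention) recovers the Ghobar digits of a count by repeatedly forming groups of ten and keeping the leftover from each stage:

```python
def ghobar_digits(count):
    """Recover the Ghobar (decimal) digits of a count by repeatedly
    grouping into tens and keeping the leftover from each stage."""
    if count == 0:
        return [0]
    leftovers = []
    while count > 0:
        leftovers.append(count % 10)  # the leftover after grouping into tens
        count //= 10                  # the number of complete groups
    return leftovers[::-1]            # most significant digit first

print(ghobar_digits(407))  # [4, 0, 7]
```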
If I have a collection of objects, I can number them through the process of counting. I move the objects from one place to another, one at a time, and I call out the names in the number sequence as I do so, one name for every object I move. I make the name that I call out when I move the last object the number of the objects in the collection. We have all known that fact since before we started school, but I repeat it here because it constitutes the foundation of all that follows.
If I have two collections of objects and I know the number of the objects in each of those collections, I can combine the collections and determine the number of the objects in that combined collection without recounting the objects. I simply apply the process of addition and we all know how that works. In essence I have drawn the names of the number sequence along a straight line, put my finger on the number of the objects in one collection, and then put my finger on each successive name as I call out the numbers from one to the number of the objects in the second collection as I do so. I then look at the number name on which my finger ends up and call that the number of the objects in the combined collection.
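The finger-on-the-number-line procedure can be sketched directly in Python (the function name is mine, for illustration only):

```python
def add_by_counting(a, b):
    """Addition as counting: put a finger on the name a, then call off
    one more name along the fixed sequence for each of the b objects
    in the second collection."""
    position = a
    for _ in range(b):
        position += 1  # step to the successor name
    return position

print(add_by_counting(4, 3))  # 7
```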
Lest it seem that I belabor the obvious (after all, we have carried out that kind of calculation since our first years in grammar school), let me ask a question. Does the order in which I add the numbers make a difference in the result? If I have two arbitrarily chosen numbers (let's represent them with the letters A and B), then can we say that the statement A+B = B+A is always true to mathematics? I want to pursue that question because it offers some insight into the nature of mathematical proof, the kind of proof that I want to use in creating the Map of Physics.
Because we have more numbers than we could ever hope to examine, much less count out, we must find a way of proving the proposition and either verifying it or falsifying it without examining all possible additions. Logical proof of a proposition consists of statements sufficient to convince a reasonable and suitably knowledgeable person that the proposition is necessarily true to mathematics. We usually prove propositions through reduction to an absurdity, that method comprising statements that show clearly that all conceivable alternatives to the given proposition lead necessarily to absurdities, things that cannot be true to mathematics. In the present case, though, the proposition lies on the border between self-evident axioms and propositions that require proof by elimination of alternative statements.
I begin by choosing three numbers (represented by the letters A, B, and C) such that I have A+B = C. From an arbitrarily chosen point I draw a straight line due north, then on the east side of that line, proceeding from south to north, I write all of the number names from one to C in their proper order. On the west side of the line I write "one" adjacent to the eastern C and then write all of the number names in their proper order from north to south, ending with C adjacent to the eastern "one". Both sequences contain the same names, so I have written each number name on the west side of the line adjacent to a number name on the east side of the line and each number name on the east side of the line adjacent to a number name on the west side of the line. Further, I have made the array symmetrical: for each pair of number names east-N adjacent to west-M I have also drawn a pair of number names east-M adjacent to west-N, the sole exception being the number name that I have drawn adjacent to itself when C is an odd number.
We usually carry out operations in mathematics from right to left, so the statement A+B instructs us to count from one to B along the number sequence and then to count from one to A farther along the sequence. I want to carry out that specific procedure on the east side of my number line, so I begin counting on eastern "one" and end up on eastern C, thereby satisfying the statement A+B = C. Shifting my attention to the west side of my line, I count from western "one" to western A, which lies adjacent to the successor to eastern B (the successor to B being the number that comes immediately after B). I then count from one to B farther along the western sequence. Because I began that second part of the western count on the number adjacent to eastern B, I must end my count on western C. But that count down the western side of my number line is the same as the statement B+A = C.
What I have done in the preceding paragraph can only be false to mathematics if one of my sequences of number names has more names than the other does. But between one and any given number C we always find the same names in the same order. Drawing two sequences ending in the same number C does not change that fact. Therefore, we must infer that the fixed nature of the number sequence makes the statements pertaining to the two additions in the preceding paragraph true to mathematics. In that paragraph, then, we see the element of self-evidence: the very nature of the number sequence clearly makes the propositions true to mathematics. And yet in this paragraph I have mentioned and then dismissed a conceivable alternative, necessarily so just because we can conceive the alternative, thereby bringing into my argument something of the proof by process of elimination.
Now I can invoke a truly self-evident statement, Euclid's first Common Notion: things equal to the same thing are equal to each other. We have A+B = C and B+A = C, so Euclid's common notion makes the statement A+B = B+A also true to mathematics. We say that addition obeys the commutative rule (or that natural numbers commute under addition), which is to say that in addition the order in which we take the addenda does not affect the sum.
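The two-sided number line argument can be checked mechanically for any particular C. The following Python sketch (all names are mine) builds the mirrored sequences and confirms that counting A then B up one side and B then A down the other ends on the same name:

```python
def count_sum(a, b):
    """Count b more names along the fixed sequence, starting from a."""
    pos = a
    for _ in range(b):
        pos += 1
    return pos

# The mirrored number line for C = 7: the east side runs 1..C from
# south to north, the west side runs 1..C from north to south, so the
# west name m always sits adjacent to the east name C + 1 - m.
C = 7
east = list(range(1, C + 1))
west = list(reversed(east))
assert all(e + w == C + 1 for e, w in zip(east, west))

# Counting A then B up the east side and B then A down the west side
# both end on the name C, so A + B = B + A for every split of C.
for a in range(1, C):
    b = C - a
    assert count_sum(a, b) == count_sum(b, a) == C
```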
Having defined the process of addition, I can also define the inverse process - subtraction. Because we make addition a process in which we count up or forward along the number sequence to find the number of objects in a collection made by combining collections of objects, we must make subtraction a process in which we count down or backward along the sequence of number names to find the number of the objects in a collection that we have left over when we take a certain number of objects away from it.
But subtraction gets us into trouble. First, if I subtract a number from itself I have found the number of the objects in a collection from which I have taken all of the objects. That number must represent nothing. We call it zero and we cannot use our Ghobar numerals without it. OK, that doesn't give us so much trouble, but the real trouble comes when I try to subtract a large number from a smaller number. That subtraction represents an attempt to take more objects out of a collection than we have in the collection at the beginning. Assume that I have written the number sequence along a vertical line, writing the number names in both directions away from zero. If I am subtracting A from smaller B, then I will count down the array of names from upper B to the A-B below zero. That number looks like a debt, representing something that I owe, so I interpret it as such, as a kind of pre-emptive subtraction, a subtrahend waiting for a minuend.
Now we have two kinds of numbers, one representing things that we actually have in hand and one representing something that we owe. We call the former kind positive numbers and the other kind, which we use to represent any kind of deficiency, negative numbers. One of the tacit properties of the theory of numbers has obliged us to conceive a new kind of number: we want to make the process of subtraction complete; that is, we want the process of subtraction to yield an actual difference for any subtrahend and any minuend available from the known numbers.
Times will come when I will have a number of collections of objects with the same number of objects in each collection and I will want to combine all of those collections into one collection and determine the number of the objects in that combined collection. I make that determination through the process of multiple addition, which we commonly call multiplication. Again, we all know this very basic arithmetic: we all know how to multiply two numbers by the method of partial products, which exploits the place-value nature of the Ghobar numerals.
We have no trouble with what multiplication represents when both numbers are positive. We also have little trouble when one of the multipliers is a negative number: we interpret the multiplication as the accumulation of many debts into one large debt and note that the product must be a negative number. But what do we get when both multipliers are negative numbers? We know that multiplying any number by zero yields zero as its product, so let's multiply -N by zero in the form -N x 0 = -N x (C - C) = (-N x C) + (-N x -C) = 0. But multiplying the negative N by the positive C yields a negative number that must negate the product of the negative N and the negative C, so the product of the two negative numbers must yield a positive number.
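We can confirm that distributive step numerically; the sketch below uses particular values of N and C, chosen only for illustration:

```python
# The distributive step: -N x 0 = -N x (C - C) = (-N x C) + (-N x -C) = 0.
# Since (-N x C) is negative, (-N x -C) must be the positive number
# that cancels it.
N, C = 5, 3
assert (-N) * 0 == 0
assert (-N) * (C - C) == (-N) * C + (-N) * (-C)
assert (-N) * (-C) == -((-N) * C)  # the product of two negatives is positive
print((-N) * (-C))  # 15
```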
Having defined multiplication, we can now define its inverse - division. We use the process of multiplication to carry out multiple additions, so the process of division must give us the means to carry out multiple subtractions. Multiplication tells us the number of the objects in a collection that we make by combining collections that contain the same number of objects, so division must tell us the number of objects that we will have in a given number of collections that we make from one collection.
In essence, we divide one number by another by subtracting the divisor repeatedly from the dividend and counting the number of subtractions that we need to carry out to reduce the dividend to zero. But what can we do if that process does not take the dividend to zero? What can we do if we end up with a remainder, a number smaller than the divisor? We have obliged ourselves to put the same number of objects into each of the collections that we are making in the division, so we must break the leftover objects into enough pieces that the division can be completed. And how will we represent those pieces numerically?
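The repeated-subtraction picture of division translates into a short Python sketch (the function name is mine), which also exposes the remainder when the process does not take the dividend to zero:

```python
def divide_by_subtraction(dividend, divisor):
    """Divide by subtracting the divisor repeatedly, counting the
    subtractions; stop when a whole subtraction is no longer possible."""
    count = 0
    while dividend >= divisor:
        dividend -= divisor
        count += 1
    return count, dividend  # the quotient and the leftover (remainder)

print(divide_by_subtraction(17, 5))  # (3, 2): three subtractions, two left over
```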
To represent broken objects we need broken numbers - fractions (from the Latin fractus, which means broken). I can draw a fraction with Ghobar numerals by drawing it as a ratio, drawing the number of pieces into which I have broken each leftover object under a horizontal line and drawing the number of those pieces that goes into each collection above the line. Or I can take advantage of our Ghobar place-value system to extend the dividend to the right of the units place (marked by a decimal point) and continue the process of division to generate a decimal fraction.
If we replace the number names on our number line by their Ghobar representations, then we can draw the decimal fractions and mixed numbers among them, all in proper order. Because we can extend the value places indefinitely - indeed infinitely - to the right, we have the possibility of filling the number line solid with numbers. We seem to have all of the numbers that we could ever possibly need.
Now I write some number X down N times and multiply those X's together. I have thus calculated what we call the N-th power of X. Because the name of a power, which name we call an exponent, is itself a number, we may ask whether we may manipulate those names by the processes of arithmetic and ask what those manipulations may mean.
Clearly from the definition of a power, if I have the (N+M)-th power of X, then I have the result of multiplying together the N-th power of X and the M-th power of X. We can add exponents and get a meaningful result, so exponents are subject to addition. But addition is the fundamental process of arithmetic, from which we derive the other processes, so we may also apply those other processes to exponents.
Consider subtraction. What do I mean when I say that I have the (N-M)-th power of X? I set up the calculation by writing N X's with multiplication signs among them and then erase M of the X's. In an actual calculation I would obtain that result by multiplying the N X's and then dividing the result by M X's. And if N = M I get the zeroth power of X, which clearly equals the number one. And what does calculating the minus N-th power of X give us? If I write it as the (0-N)-th power of X, then I see that I get one divided by the N-th power of X.
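Each of those claims about exponents can be checked with particular numbers; a brief Python sketch, with values chosen only for illustration:

```python
# Arithmetic on exponents, checked with particular values.
X, N, M = 3, 5, 2
assert X**N * X**M == X**(N + M)        # adding exponents multiplies powers
assert X**N / X**M == X**(N - M)        # subtracting them divides
assert X**0 == 1                        # N = M leaves the zeroth power
assert abs(X**(-N) - 1 / X**N) < 1e-15  # a negative power is a reciprocal
```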
Having thus applied the process of subtraction to exponents and thereby brought negative numbers into play, I can now assert that I can apply the processes of multiplication and division to exponents and hope to get meaningful results. That hope includes exponents that are broken numbers, representing fractional powers of numbers. Before confronting that prospect let's look at the analytic correlative of the synthetic process of raising numbers to powers, the extraction of roots.
If I have two numbers A and B such that B equals the N-th power of A, then convention tells us that A is the N-th root of B. If I write B^1 = A^N and then represent both exponents as sums of N equal parts, then I get B^(1/N) = A^1; that is, A is the 1/N-th power of B. Now we know that a fractional power of a number corresponds to a root of that number.
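A quick numerical check of that correspondence (the particular values are mine, chosen for illustration):

```python
# A fractional exponent names a root: B**(1/N) is the N-th root of B.
B, N = 32, 5
A = B ** (1 / N)  # the fifth root of 32, approximately 2
assert abs(A - 2.0) < 1e-9
assert abs(A**N - B) < 1e-9
```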
Now we also know that fractional powers represent legitimate numbers. So now I can take some number A and calculate all of its powers based on the numbers we have in hand. Those calculated numbers, if drawn along the number line, will fill the line completely from zero to infinity.
That one-to-one mapping, of all of the numbers onto all of the positive numbers, gives us an example of a function, a procedure that tells us to replace one number or set of numbers with another number or set of numbers. We make sufficient use of this particular function that we have devised two special versions of it and given the exponents of A in those versions special names. If we use the version in which A equals ten, then we call the exponents common logarithms. If we use the version in which A equals the irrational number 2.71828183..., then we call the exponents natural (or Napierian) logarithms.
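Python's standard library provides both special versions of the function; a short sketch:

```python
import math

# Exponents of a fixed base, read as a function of the number: the
# logarithm returns the exponent L for which base**L equals the number.
x = 1000.0
assert abs(math.log10(x) - 3.0) < 1e-12  # common logarithm: 10**3 = 1000
L = math.log(x)                          # natural logarithm, base e
assert abs(math.e ** L - x) < 1e-9       # the exponential undoes the logarithm
```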
But if all of the numbers that we know so far, both positive and negative, both whole and broken, comprise the logarithms of only the positive numbers, then we may well ask whether the negative numbers have logarithms. If we demand that our numbers be complete under all arithmetic operations, then such logarithms must certainly exist. What are they?
To answer that question consider the rational roots of negative numbers. Recalling the multiplication rules for negative numbers, we see that the odd-numbered roots of negative numbers are also negative numbers. So we know, for example, that the third (or cube) root of minus eight equals minus two, which also equals the fifth root of minus thirty-two. But what do we get as the even-numbered roots of negative numbers? We certainly know that we cannot claim that minus two equals the square root of minus four.
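A brief check of those claims in Python:

```python
# Odd-numbered roots of negative numbers are themselves negative.
assert (-2) ** 3 == -8   # minus two is the cube root of minus eight
assert (-2) ** 5 == -32  # and also the fifth root of minus thirty-two
# But no real number can serve as the square root of minus four:
# every real number has a non-negative square.
assert all(x * x >= 0 for x in range(-10, 11))
```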
As we did when we considered subtraction and division, we must "discover" a new kind of number. In this case we call the new kind of number an imaginary number and we mark it with the lower-case letter i, so we say that the square root of -4 equals 2i. But what concept should an imaginary number represent in our minds? Negative numbers represent a kind of debt and fractions represent broken things, so what should imaginary numbers bring to mind?
In fact, imaginary numbers have no representation in our minds, no image that we can associate with them; they are pure abstraction, hints, if anything at all, of the Reality standing behind the world of our perceptions. But if we have no concrete image to anchor the concept of an imaginary number in our minds, can we say that we truly understand imaginary numbers? Without a clear picture in mind as a reference, can we rightly claim that we grasp the idea of an imaginary number firmly enough to use it correctly and to have full confidence in our results? Reason tells us to answer both of those questions in the affirmative: while we find mental pictures useful as aids to inference, only cold, dry, hard logic, with all the stark elegance of a Greek temple, provides the proper measure of our understanding in mathematics and in the sciences based on the use of mathematics.
With complex numbers, combinations of real numbers and imaginary numbers, we have all the kinds of numbers that exist and all that we will ever need. Complex numbers give us the logarithms of the negative numbers, which we wanted, and they give us the logarithms of other complex numbers as well. In that fact we can see that the numbers now comprise a complete set: any operation involving one or more of the six fundamental processes of arithmetic will yield a number from that set.
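Python's cmath module works over the complex numbers and can exhibit the logarithm of a negative number directly; a short sketch:

```python
import cmath

# The logarithm of a negative number lives among the complex numbers:
# ln(-1) = i*pi, and the exponential recovers the original number.
assert abs(cmath.log(-1) - complex(0, cmath.pi)) < 1e-12
assert abs(cmath.exp(cmath.log(-4)) - (-4)) < 1e-12
```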
And what do we want to do with those numbers? Why, we want to count the uncountable, of course. We want to use this product of our imaginations to create a representation of Reality that accurately and precisely mimics the real thing.
What do I mean by counting the uncountable? I want to consider how we may contrive to count entities that do not comprise collections of discrete objects. I want to consider the process of measurement. In particular, I want to consider measurement of two things that appear to us as unbroken continua - distance and duration.
I haven't made that choice gratuitously. Fundamentally the study of physics begins with the motions of bodies. And we properly conceive motion as the crossing of space and the elapse of time considered conjointly. If we want to ascribe numbers to motion, then we must devise some way in which we can ascribe numbers to intervals of space and time. We especially want to ascribe numbers to those intervals so that our descriptions of space and time will have the kind of consistency and reliability that we have built into our fixed sequence of number names.
As evidence that we can make distance countable I offer the obvious observation that I can count the steps that I take as I walk from one place to another. Of course, we won't use my strides (or anyone else's) to count distances: even I can't use my strides, simply because I don't repeat any of them exactly. But if we have some length that we can definitely make the same for everybody, then we can cut rods to that length and then count distance by laying those rods end to end between the two places whose distance apart we want to know. One ten-millionth of the distance from the North Pole to the Equator along the meridian that passes through the Pantheon in Paris, France seems to work rather well. So well, in fact, that the United States Geological Survey, in defining the State Plane Coordinate Systems on the North American Datum of 1983, defined the foot as a fraction of one meter.
As evidence that we can make time countable I offer the observation that after I observe the occurrence of some event I can count the number of times that the sun comes up over the eastern horizon before I observe the occurrence of some other event. In pre-industrial societies people found days and crude fractions thereof sufficient for the counting of time. But in modern physics we need to measure much shorter intervals of time. We need a standard interval, one very much shorter than a day, and a means of reliably reproducing it so that we can count its repetitions as they occur between two events.
We can use any perfectly repeating action, such as the swing of a pendulum or the vibration of a tuning fork, as a standard interval for the measurement of time. We can relate that standard to the sidereal second (of which we count 86,400 in one sidereal day) and make it readily available to everyone, as we require. Now we can make our measurement of time intervals as precise as we want by making the standard interval suitably short. But as we make the standard interval shorter, we make its repetitions more difficult and then impossible for anyone to count. We will need some device that will automatically count the repetitions of the action for us. When we combine such a counter with the mechanism that produces the standard repeating action we have before us the essence of a clock. With such a device we can measure the time elapsed between two events by noting the number displayed on the clock when the first event occurs, noting the number displayed on the clock when the second event occurs, and subtracting the former number from the latter.
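The essence-of-a-clock description reduces to simple arithmetic; a minimal sketch in Python (the interval and the counts are invented for illustration):

```python
# A clock in essence: a standard repeating interval plus a counter.
# The interval and the counts here are hypothetical, for illustration.
tick = 0.5                 # standard interval, in seconds
count_at_first_event = 10  # number displayed when the first event occurs
count_at_second_event = 16 # number displayed when the second event occurs
elapsed = (count_at_second_event - count_at_first_event) * tick
print(elapsed)  # 3.0 seconds
```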
But we can make more phenomena than space and time countable by the process of measurement. If we can make a mechanical analogy between some phenomenon and distance, then we can assign numbers to that phenomenon. If I can build a device that transforms the chosen phenomenon (say, electric force between two points in a circuit or the electric current flowing through a wire) into motion of a pointer along a marked line, then I can make the marks on the line represent standard units of the phenomenon (volts or amperes in the present example) and thereby count the amount of the phenomenon presented to my measuring device. Devising uniform standards of measurement for those phenomena, standards that allow all physicists to communicate their discoveries unambiguously, offers more of a challenge than did devising uniform standards of distance and duration, but people have met that challenge and that fact gives us our consistent physics. If we can make a phenomenon countable in this way, we may include it in the realm of physics. Indeed, only phenomena that we can number in that way may become part of physics. We take that fundamental level of mathematization as a sine qua non of physics.
Again I must emphasize that I can offer no magic here. We have become too well accustomed to thinking of the mathematical nature of physics as almost supernatural, what Eugene Wigner famously called "the unreasonable effectiveness of mathematics" in describing Reality. We might as well express astonishment at the observation that human language can offer such clear descriptions of the world we see around us. But a consideration of language might provide some clarification of meaning in mathematics.
Our ancestors created words as they created numbers and those creations have only one use: they evoke in our minds ideas drawn from memory or imagination so that we may think about and communicate those things. They bridge the gap between perception and conception, the gap that makes direct communication impossible. Because of that gap, because we humans do not possess real telepathic powers, I can no more communicate what seven evokes in my mind than I can communicate directly the meaning of red. I can only use other words or point to examples of what the word denotes. But I have, nonetheless, a clear idea of what seven means, what it represents.
Consider the definition of house as a building in which people live: can you know the meaning of house without also knowing the meaning of people and live? And can you know the meaning of those things without knowing the meaning of yet other things? Yet we feel no deficiency in our descriptions of Reality with words: we feel that we know the proper meanings of statements that we make about the world of our perceptions. Surely we are given no special burden when we say that knowing the meaning of seven obliges us to know the meaning of other numbers as well. What we find in the dictionary gives us sufficient knowledge of the meaning of seven that we can use the word in a correct description of Reality.
In demanding of the definition of seven something that goes beyond sufficiency, demanding that mathematicians multiply entities gratuitously, Mr. Sartwell violates Occam's Razor. Mathematicians know better: William of Occam devised his famous logical razor to avoid introducing factors that would more likely obscure the correct meaning of a term or statement than clarify it. No, in defining seven as the number that comes between six and eight, the dictionary tells us all that we need to know about seven, fulfilling the condition that mathematicians call necessary and sufficient. But look at what mathematicians have done with that knowledge. They have given us the means to take the full measure of Reality and taking that measure is what I intend to do with the Map of Physics.
Now let's get into the Map of Physics itself and see how far we can extend it before it fails to mimic Reality.