We use words to denote features of some reality, a realm that remains consistent with itself over the elapse of time, and we combine them into statements that evoke in our minds a mimicry of some features of that reality. Two realities, in particular, concern these essays. We have presently the Realm of Mathematics, the realm in which we explore the relationships among quantities (algebra and analysis), shapes (geometry and topology), and arrangements (set theory and formal logic). In a related series of essays (The Map of Physics) I also look at the Reality in which we exist, in which the laws of physics govern the actions of things participating in events in our observable Universe.
In sufficiently self-consistent realities we can combine statements into new statements that also match features of those realities. But how do we know when that happens? How can we know when a statement that we have synthesized from other statements does, in fact, conform to the appropriate reality?
Certainly we use proper logic in synthesizing the new statements. In that endeavor we string statements together in a manner reminiscent of a puzzle called a word ladder. The puzzle gives us two words and asks us to transform one into the other by changing only one letter at a time, subject to the restriction that all of the changes must yield actual words. So, for example, if we are told to "put the BIRD into the NEST", we devise the ladder BIRD, bard, band, wand, want, went, west, NEST. We could also devise BIRD, bard, band, bend, bent, best, NEST. Or we could devise BIRD, bind, find, fine, mine, mint, tint, tent, test, NEST. But unlike the word ladder, which has alternate routes between words, we want our logical ladder to have a unique route from step to step. We can only certify the conclusion if we know that the premises can lead to nothing else.
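The ladder rule lends itself to a mechanical check. A minimal Python sketch (the function name is mine, not the essay's):

```python
def is_word_ladder(words):
    """True when each word differs from its predecessor in exactly one letter."""
    return all(
        len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1
        for a, b in zip(words, words[1:])
    )

# The three BIRD-to-NEST ladders given above all play by the rule:
assert is_word_ladder(["bird", "bard", "band", "wand", "want",
                       "went", "west", "nest"])
assert is_word_ladder(["bird", "bard", "band", "bend", "bent",
                       "best", "nest"])
assert is_word_ladder(["bird", "bind", "find", "fine", "mine",
                       "mint", "tint", "tent", "test", "nest"])
```

That three distinct ladders pass the check illustrates the point of the paragraph: the puzzle admits alternate routes, which is exactly what a logical ladder must not do.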
We have three basic methods of logical reasoning. From weakest to strongest, they bear the names Analogy, Induction, and Deduction. All three provide us with methods for transforming a small set of initial statements or propositions (the premises) into a final proposition (the conclusion). But those methods demand something from us: we can only properly obtain sound conclusions from sound premises. If all of the premises do not stand true to the reality under consideration, the conclusion can only stand true to that reality by luck, and logic does not rely on luck.
Luck certainly seems to play a role in reasoning by analogy. We use analogy as a form of inference based on the idea that, if two things agree in some respects, they will likely agree in other respects. The use of "likely" in that description, rather than "definitely", tells us that we cannot use analogy as a method of proof. It gives us nothing more certain than an educated guess.
As a famous example, consider the analogy that the Reverend William Paley used in his 1802 book "Natural Theology". Reverend Paley invited his readers to imagine walking with him across a heath when he finds a pocket watch that someone has dropped beside the path. He assured his readers that from an examination of the watch they would infer the existence of a watchmaker, a sentient being who, with careful deliberation, designed and then made the watch. He then made an analogy between the watch and its inferred maker on one hand and the natural world and its presumed Creator on the other. Reverend Paley made the analogy work for him by conceiving mechanism and contrivance in all of his examples (such as comparing the eye to a telescope). The weakness of analogy as a proof came clear in 1859, when Alfred Russel Wallace and Charles Darwin published their theories of the evolution of life by means of natural selection. By showing an alternative explanation for the phenomena that Reverend Paley explained as designed mechanisms, they invalidated the proof that Reverend Paley had devised (though, to be fair, they did not falsify the proposition that Reverend Paley sought to verify, the existence of God; they merely rendered the argument from design useless).
But analogy needn't always fail. Between geometry and arithmetic, for example, in the realm of mathematics, we make an analogy between squares constructed on a plane and products that we obtain by multiplying numbers by themselves (the square numbers). We find that if we construct two sets of equally spaced parallel lines on a plane such that the lines of one set cross the lines of the other set at a right angle, we get an array of mini-squares that tile the plane perfectly. If we construct a larger square on that plane, we will find that a number N of the mini-squares lie along each of its sides and that N² of the mini-squares tile its interior (that gives us, after all, the definition of a square number). We then extend the analogy by constructing a rectangle on the gridded plane. We find that a number A of mini-squares lies along one side of the rectangle and that a number B of mini-squares lies along the adjacent side. So B lines, each containing A mini-squares, cover the interior of the rectangle, which means that the product AB enumerates the mini-squares that tile the rectangle.
But now draw a diagonal line on that rectangle, a straight line extending from one corner across the rectangle to the opposite corner, and turn the rectangle so that the diagonal coincides with one of the grid lines on the plane. We expect to count a number C of mini-squares along that line. Now the lines A, B, and C comprise the sides of a right triangle and we know, thanks to Pythagoras, who proved and verified it, that if we construct a square on the side C, it covers an area equal to the area covered by both squares constructed on sides A and B. In accordance with our analogy, then, we expect that A²+B²=C². If A=3 and B=4, then C=5 and we have confirmation of the analogy. But if A=4 and B=5, we have a breakdown of the analogy, because the square root of 41 is an irrational number (see proof below) and Greek arithmetic did not acknowledge the existence of any numbers but integers and rational fractions. Indeed, the desire to save the analogy likely provided most of the force driving ancient mathematicians into accepting the validity of irrational numbers. The use of Arabic numerals, with their place-value notation, replacing the Greek and Roman numerals, also helped by permitting the representation of endless decimal fractions.
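We can check both cases of the analogy numerically. A sketch in Python, using nothing beyond integer arithmetic and the standard library:

```python
import math

def hypotenuse_squared(a, b):
    """Squared length of the diagonal of an a-by-b rectangle (Pythagoras)."""
    return a * a + b * b

# The analogy holds for A=3, B=4: a whole number of mini-squares
# lies along the diagonal, because 25 is a perfect square (C=5).
assert hypotenuse_squared(3, 4) == 25
assert math.isqrt(25) ** 2 == 25

# For A=4, B=5 the analogy breaks: 41 is not a perfect square,
# so no whole number of mini-squares fits along the diagonal.
c2 = hypotenuse_squared(4, 5)
assert c2 == 41
assert math.isqrt(c2) ** 2 != c2
```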
After Analogy we come to Induction and enter the realm of logical proof in addition to logical inference. Logical Proof denotes the means by which we test propositions and declare them true or false to the appropriate reality. We need proof to ensure that our inferences, however reasonable they may seem to us, don't mislead us. In going from Analogy to Induction we go from luck to likelihood in the truth value of our inferences (and Deduction will take us to certainty).
Induction consists basically of drawing an inference from a pattern that we have discerned in a small set of objects and assuming that the pattern persists over the entire set of objects. We can also say, more formally, that induction consists in examining a subset and inferring rules that apply to the whole set to which it belongs. For example, we have the set of the odd integers and the set of the square integers and we believe that we can induce a relation between the two sets. Perhaps I have noticed the following relations:

1 = 1 = 1²
1 + 3 = 4 = 2²
1 + 3 + 5 = 9 = 3²
1 + 3 + 5 + 7 = 16 = 4²
We expect that pattern to continue and thus infer that the statement that the sum of the first N odd integers in their proper sequence equals the square of N stands true to mathematics. This gives us an example of incomplete induction, which draws a conclusion from a finite number of cases.
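A machine can confirm the pattern for many cases in an instant; checking instances, of course, remains incomplete induction. A sketch in Python:

```python
def sum_first_n_odds(n):
    """Sum of the first n odd integers: 1 + 3 + 5 + ... + (2n - 1)."""
    return sum(range(1, 2 * n, 2))

# Each check confirms one instance of the pattern; no number of
# confirmations amounts to a proof of the general statement.
for n in range(1, 1000):
    assert sum_first_n_odds(n) == n * n
```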
We also have complete induction, in which we prove and verify a theorem (in physics, a law) by showing that it holds in the first of an ordered set of cases and then showing that, whenever it holds for all cases preceding the n-th case, it also holds for the n-th case. The cases must form an ordered set for the induction to gain its validity; specifically, the different cases must depend upon a parameter that takes on the successive values of the natural numbers. So we verify the proposition for the first case, verify that if the proposition stands true to mathematics for the (n-1)-th case and all preceding cases then it also stands true to mathematics for the n-th case, and conclude that it stands true to mathematics for all cases, basing that conclusion on the principle of sufficient reason in the form "it takes a change to make a change". Having seen no change in the proposition in the first n cases, we expect to see no change in subsequent cases.
As an example of a complete induction we prove and verify the above proposition relating odd integers and square integers:
1. On a flat surface lay out a square array of identical markers with N markers on a side. The square contains NxN markers, the square number associated with N.
2. Lay one marker on the surface. Call it a square array so that we have 1=1x1.
3. Create the next larger square by putting one marker along the upper side, one marker along the right side, and one marker in the upper right corner. We thus add 3 markers to get a square covered by 4 markers, the square of 2.
4. For a square with X markers on a side put X markers along the upper side, X markers along the right side, and one marker in the upper right corner. We thus add 2X+1 markers to get a square covered by (X+1)² markers, the square of X+1.
5. But that construction gives us a square as the running sum of the odd integers, thereby proving and verifying the proposition.
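The construction in steps 1 through 5 can be simulated directly. In the sketch below (the function name is mine), each pass of the loop lays the 2X+1 markers of step 4 and records the running total:

```python
def build_squares(steps):
    """Grow a square of markers one layer at a time, as in steps 2-4:
    a square with x markers on a side gains 2x + 1 markers to become
    the next larger square.  Returns the running totals of markers."""
    totals = []
    markers = 0
    for x in range(steps):
        markers += 2 * x + 1   # the (x+1)-th odd integer
        totals.append(markers)
    return totals

# The running sums of the odd integers are exactly the square numbers:
assert build_squares(6) == [1, 4, 9, 16, 25, 36]
```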
We also have transfinite induction, which starts getting us close to deduction. It applies to a well-ordered set (a set in which every non-empty subset has a first element). Take the well-ordered set S. If, for every element of S, the truth of the proposition P for all elements preceding that element entails the truth of P for the element itself, then P is true of all elements of S. We now prove and verify that statement. Let F represent the subset of elements of S for which P is false. As a subset of a well-ordered set, F, if not empty, must have a first element U. P is true of all elements preceding U (none of them belongs to F), so by hypothesis P must also be true of U, which contradicts U's membership in F. Thus F=Ø, the empty set, and P is true of all elements of S.
At the interface between mathematics and Reality we have physics, in which we express our knowledge mathematically. In physics we also use induction; indeed, it is generally regarded as the only proper method of reasoning in physics. Francis Bacon and Isaac Newton both described its use.
Newton's "Rules for the Study of Natural Philosophy" (Book III, Philosophiae Naturalis Principia Mathematica) gives a good account of incomplete induction as it applies to the laws of physics:
1. "No more causes of natural things should be admitted than are both true and sufficient to explain their phenomena." Basically this corresponds to Occam's Razor, which tells us that we require necessary and sufficient reasons to accept a proposition as true to Reality. Think of a proof as a set of statements: not too few (sufficiency) and not too many (necessity). As William of Occam put it, we are not to multiply entities beyond necessity.
2. "Therefore, the causes assigned to natural effects of the same kind must be, so far as possible, the same." Essentially this corresponds to Leibniz's Principle of Sufficient Reason: "If something is so which could have been otherwise, then there must be some reason why it is so, and not otherwise." It takes a difference to make a difference. If we don't see a difference in the effects ("natural effects of the same kind"), then we should not suppose a difference in their causes.
3. "Those qualities of bodies that cannot be intended and remitted and that belong to all bodies on which experiments can be made should be taken as qualities of all bodies universally." Newton here refers to properties that cannot be increased or decreased. Though a body's mass may increase or decrease, the fact that the body possesses mass cannot be changed, so we infer that all bodies possess mass. This gives us the basic principle of induction.
4. "In experimental philosophy, propositions gathered from phenomena by induction should be considered either exactly or very nearly true notwithstanding any contrary hypotheses, until yet other phenomena make such propositions either more exact or liable to exceptions." Here Newton acknowledges that induction can only provide us with a kind of contingency. As long as we rely on induction we cannot have perfect knowledge.
We call the process of gathering propositions from phenomena by induction the Scientific Method. Also known as the Empirical-Inductive method, it was first presented to the world in Francis Bacon's unfinished science-fiction novel "The New Atlantis", published posthumously in 1626. That novel inspired the formation of The Royal Society of London for Improving Natural Knowledge (founded 1660, incorporated 1662), commonly called simply The Royal Society. The method, now taught in grammar-school science classes, appears as a series of steps:
We notice some phenomenon in the world and gather facts pertaining to it. In some cases we may take advantage of serendipity, the phenomenon of making a discovery through insight into an accidental finding of what one was not seeking. That description offers a fair idea of what Louis Pasteur had in mind when he claimed that chance favors the prepared mind.
I observe that when I rub my hands vigorously together they feel hot. When I put cold steel on an anvil and pound it, it becomes warm. Boy Scouts are taught to make fire by rubbing sticks together and many primitive people make fire by twirling a wooden rod in a pit in a block of wood to generate enough heat to make sawdust smolder. Sir Benjamin Thompson, Count Rumford (1753 Mar 26 - 1814 Aug 21) in late 1798 saw that boring cannon at the arsenal in Munich generated enough heat to make water boil.
We ask a question about the phenomenon, taking care to put it into such a form that we have the possibility of answering it by way of further observation.
Does motion that is opposed by friction or collision create heat? Does heat give us an example of a kind of energy?
We contrive a tentative answer to the question, one that seems to provide an explanation of the phenomenon. We need multiple hypotheses where possible and they must be subject to possible falsification by experimentation or deductive reasoning.
By Newton's first law we expect that a moving body carries some quantity that must be removed from it when it slows down. We do work upon the body to get it moving, so we expect that it will do work when it rubs against some other body to slow down. We expect that work to become heat.
From the hypothesis we deduce the result that we expect to obtain from further observations. Note that inductive reasoning goes from specific to general statements and deductive reasoning goes from general to specific statements.
If I were to make something turn against some resistance, trying to speed the thing up while the resistance acts to slow it down, the part of that something where the resistance occurs will heat up.
We test the prediction through further observations of the phenomenon, even if we must contrive the observations through an experiment. In designing the experiment we must include controls to isolate the desired data from influences that would obscure their relation to our hypothesis.
In 1845 James Prescott Joule (1818 Dec 24 - 1889 Oct 11) performed an experiment with falling weights driving a paddlewheel in an insulated barrel of water while he measured the temperature of the water. He found that the water did become warmer and that he needed to exert 819 foot-pounds for every BTU (4.41 Joules per calorie) that the water gained in heat. In 1850, with a much refined version of his experiment, he got 772.692 foot-pounds per BTU (4.159 Joules per calorie). Today we describe the mechanical equivalent of heat as 4.184 Joules per thermochemical calorie.
We devise a well-tested and verified hypothesis or system of hypotheses. Mathematically expressed laws sum up scientific facts, usually in the form of equations; theories explain those facts. Ideally a theory explains the phenomenon by relating it to other phenomena. Our theory of heat, for example, explains heat by relating it to the laws of motion and showing that heat is a kind of energy.
The consistent production of one calorie of heat for every 4.184 newton-meters of work put into the system confirms that heat comes from a mechanical effect. We conclude that heat can indeed come from motion doing mechanical work and, therefore, that it is a kind of energy.
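A back-of-the-envelope version of Joule's arrangement shows why he needed sensitive thermometry. All the experimental numbers below (weight, drop height, water mass) are invented for illustration; only the 4.184 J/cal figure comes from the text:

```python
# Illustrative sketch: the masses and heights are invented, not Joule's.
G = 9.81                 # gravitational acceleration, m/s^2
J_PER_CAL = 4.184        # mechanical equivalent of heat, joules per calorie

def temperature_rise(weight_kg, drop_m, drops, water_g):
    """Temperature rise (deg C) of the water, assuming all of the falling
    weights' mechanical work becomes heat in the water."""
    work_joules = weight_kg * G * drop_m * drops
    calories = work_joules / J_PER_CAL
    return calories / water_g   # one calorie warms one gram by one degree

# A 10 kg weight dropped 2 m, twenty times, into 1 kg of water
# warms it by less than one degree Celsius:
assert 0.9 < temperature_rise(10, 2, 20, 1000) < 1.0
```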
Now we have to reconsider the Problem of Induction to remind ourselves that, however convincing procedures like the Scientific Method may seem to us, we have no actual guarantee that the conclusions stand true to the reality under study. In using induction we lack a solid justification that would necessitate the conclusion that we draw from observing a number of objects or events. We must remain haunted by the fact that we have no guarantee that we will not find a contrary instance of the objects or events at some later time.
Consider, as an example, that up to 1911 physicists believed, quite reasonably, that all metals conduct electricity (essentially the definition of a metal), albeit with some resistance that requires an electromotive force to push the conduction electrons through the metal. As some physicists began to conduct experiments in which they drove the temperature of a sample ever closer to Absolute Zero, they found that the resistance of metals diminished with temperature, but did so in a way that implied that some resistance would still exist in the metal at absolute zero. A reasonable induction would have said that all metals resist the flow of electricity at all temperatures: no physicist at the time would have disagreed. But then in 1911 the Dutch physicist Heike Kamerlingh Onnes discovered superconductivity when he found that the electrical resistance in a sample of mercury went to zero at a temperature about four Celsius degrees above Absolute Zero. So much for induction!
To reiterate: a logical proof tests the validity of a logical statement and either verifies it or falsifies it. The word proof itself denotes the logical argument which establishes, by evidence or by demonstration, the truth of a statement to a given realm of discourse. In axiomatic-deductive reasoning we combine premises in a way that yields a new proposition, the conclusion, and then prove and verify the conclusion by showing that one and only one way of combining the premises leads to the conclusion.
Deduction gives us the strongest kind of proof. But a deductive proof obliges us to gain a deeper understanding of why the proposition stands true to the realm of discourse. The inductive proof requires of us only that we recognize a pattern; it does not oblige us to discern why that pattern and not some other describes the phenomenon under discussion.
More formally we describe deduction as follows: deduction denotes a formal structure that we construct from a set of unproved axioms and a set of undefined objects. We carry out that construction by defining new terms in terms of the given undefined objects and we devise new statements, or theorems, from the axioms by proof. We say that we have a model for a deductive theory when we have a set of elements that have the properties stated in the axioms. We can thus use a deductive theory to prove and verify theorems that stand true in all of its models.
In other words, we start with statements that we know or assume to be true to our realm of discourse and then combine those statements with each other and with other statements that we know to be true in our realm of discourse to advance our reasoning toward the statement which we want to prove and verify. In that way we proceed by synthesis from a known truth to that which is to be proved. We can also reverse that procedure and reason by analysis from the thing we want to prove back to a known truth.
Using the language of set theory, we can also say that we have a deduction when from axioms we infer the rules of the set, especially the rules that generate all of the members of the set and only the members of the set. In this kind of reasoning mathematicians use a form of Occam's Razor in their demand that the evidence for some proposition be both necessary and sufficient (not too little and not too much); we might call it the Goldilocks condition. We say that a proof, to be complete, must cover both necessity and sufficiency.
As an example of what that means consider the statement "This substance is a superconductor". A necessary condition gives us a logical consequence of the statement: conducting electricity without resistance is a necessary condition for the substance to be a superconductor (though not sufficient, because a thin plasma can conduct electricity with almost zero resistance, but in that case magnetic fields get "frozen in" to the substance). A sufficient condition is a statement from which the given statement logically follows: displaying the Meissner effect (that is, expelling all magnetic fields from its interior) is a sufficient condition for a substance to be a superconductor because no other kind of material displays that effect.
We can think of deduction as something like a game of dominos. Just as we have tiles with two arrays of dots on them, we have in deduction premises that connect two concepts (for example, A implies B or J is K). And just as in dominos we can only put two tiles together where they bear identical numbers of dots, so we combine premises where they bear identical terms (e.g. "A implies B" plus "B implies C" yields "A implies C"). We use all-inclusive or all-exclusive language (such as "all", "none", "every", etc.) to ensure that no alternatives can come up to invalidate our logic, that we can lay out one and only one line of dominos or of premises. Consider the following classical methods of logic.
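The domino picture can be sketched as code: each premise is a tile linking two terms, and we may only chain tiles where the terms match. A minimal Python sketch (assuming, for simplicity, that each statement implies at most one other and that the chain contains no cycles):

```python
def chain(implications, start, goal):
    """Follow 'domino' premises (pairs P -> Q) from start toward goal,
    returning the chain of statements if one exists, else None."""
    path = [start]
    current = start
    while current != goal:
        if current not in implications:
            return None          # the line of dominos breaks
        current = implications[current]
        path.append(current)
    return path

# "A implies B" plus "B implies C" yields "A implies C":
assert chain({"A": "B", "B": "C"}, "A", "C") == ["A", "B", "C"]
```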
Modus Ponendo Ponens
(the mode that affirms by affirming)
This is more commonly called Modus Ponens. In every direct proof we assume as true a premise P and then establish P implies Q along a path that admits no alternatives that would negate our deduction so that, by Modus Ponens, Q must be true. This may require more than one step: P implies Q₁ implies Q₂ ... implies Q. We have as our prime example of this kind of proof the famous syllogism:
Sokrates is a man;
all men are mortal;
therefore, Sokrates is mortal.
Note that we cannot reason by playing that syllogism in reverse; we cannot, for example, say Fido is mortal and conclude that Fido is a man (Fido is a dog).
The form more appropriate to manipulation of P's and Q's makes use of the conditional. In the present example the statement "all men are mortal" gives us the conditional "If Sokrates is a man, then Sokrates is mortal" and we have our syllogism as:
If Sokrates is a man, then Sokrates is mortal.
Sokrates is a man;
therefore, Sokrates is mortal.
In set theory we identify "all men" as a subset of the set of "mortal beings"; Sokrates is an element of the subset; therefore, he is an element of the set. Imagine drawing a circle and claiming that its interior represents the set of all mortal beings. Then draw a smaller circle inside it and claim that its interior represents the set of all men. Sokrates sits within the smaller circle, so he necessarily sits within the larger circle too. A non sequitur, affirming the consequent, would have us notice that Fido sits within the larger circle and then claim that he sits within the smaller circle too. But Fido actually sits within a different smaller circle, the one that represents the set of all dogs.
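The circle picture translates directly into sets. A small sketch, with membership lists invented for illustration:

```python
mortals = {"Sokrates", "Plato", "Fido", "Flicka"}   # the larger circle
men = {"Sokrates", "Plato"}                          # a smaller circle inside it
dogs = {"Fido"}                                      # a different smaller circle

assert men <= mortals            # "all men are mortal" (subset relation)
assert "Sokrates" in men         # the premise
assert "Sokrates" in mortals     # the conclusion, by Modus Ponens

# Affirming the consequent fails: Fido sits in the larger circle
# but in a different smaller circle, not among the men.
assert "Fido" in mortals and "Fido" not in men and "Fido" in dogs
```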
Modus Tollendo Tollens
(the mode that denies by denying)
This is more commonly called Modus Tollens: given P implies Q and not-Q, we conclude not-P. It works because the contrapositive of P implies Q is not-Q implies not-P (A is a man implies A is mortal, so B is not mortal implies B is not a man); the two conditionals stand or fall together, so establishing the truth of not-Q implies not-P establishes the truth of P implies Q, and conversely.
Indirect Proof by Contraposition: to establish P implies Q, we assume not-Q, deduce not-P, and invoke the statement "if not-Q implies not-P, then P implies Q". We have as our prime example of this kind of proof the syllogism:
Zeus is not mortal;
all men are mortal;
therefore, Zeus is not a man (he's a god).
And from that we reason that "If all immortals are not men, then all men are mortal". Again we cannot reason in reverse; we cannot say Flicka is not a man; therefore, Flicka is not mortal (Flicka is a horse and horses are mortal).
By set theory: "all men" is a subset of "mortal beings"; that is, in graphic terms, the circle enclosing all men lies entirely inside the circle enclosing mortal beings. Zeus does not lie inside the larger circle; therefore, he does not lie inside the smaller circle. R is a proper subset of S; X is not in S; therefore, X is not in R. The non sequitur is denying the antecedent: X is not in R does not lead to X is not in S. In this sense Q is the set and P the subset, so not-in-Q implies not-in-P, which necessarily entails in-P implies in-Q. That is, if not being in Q necessitates not being in P, then P must lie in Q.
Modus Tollendo Ponens
(the mode that affirms by denying)
This is also known as the disjunctive syllogism, based on the law of the excluded middle. We have P or Q; not-P; therefore, Q.
In set theory: X is an element of P or Q; X is not an element of P; therefore, it is an element of Q. Sokrates is loyal to Sparta or to Athens; Sokrates is not loyal to Sparta; therefore, he is loyal to Athens. Here we have taken the set of all Greeks and divided them into two subsets; those loyal to Sparta and those loyal to Athens. Membership in one subset prevents membership in the other. This gives us the one mode that we can actually work in reverse: Sokrates is loyal to Athens; therefore, he is not loyal to Sparta.
Modus Ponendo Tollens
(the mode that denies by affirming)
We have not-(A and B); A; therefore, not-B, another application of the law of the excluded middle. We also call this reductio ad absurdum. If the square root of two were rational, then the two elements of the ratio, expressed in lowest terms, could not have the factor two in common. But the assumption of rationality forces the two elements to have the factor two in common; therefore, the square root of two is not rational. We can also state this as: if P is true, then not-Q is true; Q is true; therefore, not-P is true.
In set theory: X cannot be an element of both P and Q; X is an element of P; therefore, it is not an element of Q. One cannot have smooth cheeks and be a hoplite (an Ancient Greek foot-soldier); a boy/girl/woman has smooth cheeks; therefore, a boy/girl/woman cannot be a hoplite.
Again, of course, we cannot reason the reverse. We cannot say that someone who cannot be a hoplite has smooth cheeks. A man with some infirmity that would prevent him from participating in battle cannot be a hoplite, but he does not have smooth cheeks.
But not all mathematical proof employs those methods. Consider this simple example: given two different numbers A and B, prove and verify the statement that the number (A+B)/2 lies between A and B on the number line. We prove as follows:
a. Assume that A<B. This is our axiom.
b. Divide Statement a by 2; A/2 < B/2. We cannot reasonably doubt this because it uses standard division.
c. Add A/2 to Statement b; A < A/2 + B/2 = (A+B)/2. We cannot reasonably doubt this because it uses standard algebraic addition.
d. Add B/2 to Statement b; A/2 + B/2=(A+B)/2 < B. Ditto.
e. Applying Euclid's First Common Notion (Things equal to the same thing are equal to each other), combine Statements c and d by way of their common term to get; A<(A+B)/2 <B. Q. E. D.
This is not so much a syllogism as it is a recipe for generating the statement to be verified. What kind of proof is that? In this we seem to have actually created the P implies Q, the conditional part of a syllogism. But note also that we have no doubt that one and only one logical path leads from the premises to the conclusion.
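Steps a through e lend themselves to a mechanical spot-check with exact rational arithmetic; the check only confirms instances, while the recipe above proves the general statement. A sketch in Python:

```python
import itertools
from fractions import Fraction

def midpoint(a, b):
    """The exact arithmetic mean (A + B)/2, kept as a rational number."""
    return Fraction(a + b, 2)

# combinations() of an increasing range yields each pair with a < b,
# which is Statement a; the assertion is Statement e.
for a, b in itertools.combinations(range(-10, 11), 2):
    assert a < midpoint(a, b) < b
```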
And what can we say of this proof? Where are Modus Ponens or Modus Tollens? We didn't need them and in most mathematical proofs we don't. In many cases, as in the one above and in the one below, we can prove our reasoning as we go. In what follows the italicized statements constitute the proof:
To prove: the sum of the odd integers, in their proper sequence, yields the set of the square numbers. We use a further reach of abstraction (further from the concrete) in using a purely algebraic proof.
A. Given the square number of X is X² (X multiplied by itself). We cannot doubt this because it defines the mathematical meaning of a square number.
A2. Definition of successor: the next element in an ordered set. The successor of Eₙ is Eₙ₊₁. We cannot doubt this because it defines the mathematical meaning of the word "successor".
B. In the set of the square integers the successor of X² is (X+1)² = X² + 2X + 1. We cannot rationally doubt this because it applies established rules of algebraic multiplication and the definitions of square number and of successor.
C. The successor of X² differs from X² by ΔX = 2X + 1. We cannot rationally doubt this because it applies established rules of algebraic subtraction.
D. That statement applies to all possible values of X, so if we take all of the positive integer values of X in sequence, beginning with one, and apply them to ΔX = 2X + 1, we generate the sequence of the odd integers. We cannot rationally doubt that statement because doubling the sequence of the natural numbers generates the sequence of the even positive integers and adding one to each element of that set generates the sequence of the odd positive integers beginning with three. Thus, each odd integer represents the difference between two adjoining square numbers.
E. One is the first square number and is also the first member of the set of odd integers. We accept this as an axiom; that is, a statement whose truth to mathematical reality stands self-evident.
F. One equals ΔX for the case X=0. If we add the elements of the set ΔX to one, one at a time and in order, we thereby generate the set of the square numbers with its elements in order. Thus, adding the first N elements of the set ΔX (counting one as the first element) generates the N-th element of the set of the square numbers. Q. E. D.
Existence proofs, proofs that objects with certain properties exist, tend to dominate in mathematics. We can prove and verify the existence of a certain mathematical object, for example, by constructing an example of it (a constructive proof) or by showing from its properties that it can't not exist (a non-constructive proof). The proof of the existence of irrational numbers given below provides an example of a non-constructive proof because it proceeds to its conclusion without actually constructing an irrational number.
We present for proof the proposition that the square root of any integer is an integer or an irrational number. We prove and verify that proposition as follows:
1. Let X represent the square root of some integer K and assume that X is rational but not an integer.
2. There exists some smallest positive integer N such that the product NX is an integer. (Because X is rational, some positive integer multiplied by X, for example the denominator of a fraction representing X, yields an integer; by the well-ordering of the positive integers a smallest such multiplier exists.)
3. Consider M=N(X-[X]), in which [X] represents the floor function of X (the whole number part of X). In that equation we have simply multiplied N by the fractional part of X.
4. Because the fractional part of X lies between zero and one, the value of M must lie between zero and N. Thus we have M<N.
5. NX and N[X] are integers, so their difference, M, is also an integer.
6. MX = NX² - (NX)[X] = NK - (NX)[X], which is also an integer.
7. But we have shown that M is less than N and now we have MX yielding an integer, even though we defined N to be the smallest integer capable of yielding an integer when multiplied by X. Thus we have a contradiction that obliges us to dismiss one of our premises.
8. The only premise available for dismissal is our assumption that X represents a mixed number whose multiplication by some integer yields an integer.
9. The only kind of number that cannot yield an integer when multiplied by some integer, however large, is an irrational number, so X must represent an irrational number. Q. E. D.
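We cannot exhibit an irrational root digit by digit, but we can at least separate the integer-root cases from the rest computationally; for every other integer the proof above tells us the root must be irrational, something no finite calculation can display. A sketch using the standard library's integer square root:

```python
import math

def root_is_integer(k):
    """True when the square root of the positive integer k is itself an integer."""
    return math.isqrt(k) ** 2 == k

# Among 1..25 only the perfect squares have integer roots; by the
# proposition, the roots of all the others are irrational.
perfect_squares = [k for k in range(1, 26) if root_is_integer(k)]
assert perfect_squares == [1, 4, 9, 16, 25]
```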
Alternatively we have the proof and verification as follows:
Let D represent a positive integer that is not the square of an integer. Then we have a positive integer L such that the statement L² < D < (L+1)² stands true to mathematics. Assume that there exists a rational number whose square equals D; then there must exist two integers T and U such that D = T²/U². We can reduce that fraction, so we can assume into our premises the statement that U represents the smallest integer that possesses the property that the product of its square with D equals the square of T. Taking our original statement in the form

L < T/U < L+1

and multiplying it by U gives us LU < T < (L+1)U. Subtracting LU from that statement gives 0 < V < U, in which we have a new integer V = T - LU. We further define the positive integer S = DU - LT. Now we have as true to mathematics the statement that

SU = DU² - LTU = T² - LTU = T(T - LU) = TV,

which necessitates that D = S²/V². But we have already shown that U is the smallest integer whose square serves as the denominator in the rational fraction that represents D and that V is smaller than U, so we have a contradiction that obliges us to dismiss at least one of the premises that led to it. The only premise available for dismissal is our assumption that D equals a rational number. Since we know that we can make D equal an integer whose square root is not an integer, we must conclude that the square root of D in this case must be an irrational number. Q. E. D.
Now we raise a topic not often found in discussions of logical proof: the invalidation of a standing proof. At the end of the Nineteenth Century Georg Cantor proved and verified the proposition that irrational numbers exist by proving and verifying two other propositions. First he proved and verified the proposition that the rational numbers comprise a denumerably infinite set. Then he used his famous diagonalization proof to verify that the real numbers comprise a non-denumerably infinite set, one vastly bigger than the set of the rational numbers. The real numbers left over when we remove the rational numbers would, perforce, be irrational numbers.
In the essay Advanced Infinity I show that Cantor made an error in his diagonalization proof (and it must be a very subtle error if it took 130 years for someone to notice it) and I then prove that no infinities larger than the infinity of the counting numbers exist in the realm of mathematics. Having thus invalidated one of Cantor's premises, I have also invalidated his existence proof of the irrational numbers. Please note that I have not thereby verified or falsified any statements about the existence of irrational numbers: I have merely shown that Cantor's proof is invalid and that we thus need a new proof to replace it. However, the proof above that the square roots of numbers whose square roots are not integers must be irrational suffices to prove and verify the existence of irrational numbers.
Like mathematics itself, logical proof is a work in progress. We constantly refine it and add to it.
For now, though, I want to return to where we started. This intellectual alchemy that we have been studying lends itself to one last word ladder. We now want to turn LEAD into GOLD: LEAD, leap, reap, real, meal, mean, moan, moon, goon, goof, golf, GOLD.