The Fundamental Laws of Thermodynamics

Back to Contents

Take a book and give it a light toss onto a smooth table. Coming down on the table, the book emits a loud Whap!, slides, and then comes to rest. You know that you gave the book a small amount of kinetic energy and it gained even more from conversion of its gravitational potential energy as it descended from your hand to the table, yet now it seems to have none. Whither did the energy go?

Seeking the answer to that question will bring us into the realm of thermodynamics, commonly thought of as the study of heat (the word coming from the Greek words for heat [thermos] and power or strength [dynamis]). In this realm we study the various forms that energy can take and the processes that effect the various transformations and transfers of energy.

In the present case we begin with the tacit assumption that we made in asking our question. In asking whither the energy that we gave the book went we have implied our belief that the energy did not simply cease to exist. We have already justified that belief by deducing a conservation law pertaining to energy (albeit with a proviso).

First Law

Our first fundamental law of thermodynamics, then, must express the conservation of energy:

No given phenomenon can either create or destroy energy;

it can only move energy and/or change its expression.

So we know that the energy that we originally put into the book now resides in the book, in the table, and in the surrounding air. But we cannot see the book or the table moving, so how can they still contain the energy of the book's original motion?

To answer that question we need to look at two facts about solid bodies:

1) Following Leukippos and Demokritos, we imagine pulverizing a body, such as a stone. The fact that we can grind a body into powder tells us that we can represent a body as comprising very many small particles held together by forces that we overcome in taking the particles off the body in the pulverization process. Please note that in stating this fact we do not need to make any hypotheses about the existence of atoms.

2) From the observation that bodies have a size that they maintain against forces that act to change them we infer that the forces that hold the particles together have such a nature that we can represent the body as a collection of particles held together by springs. Each particle in the array can oscillate and thus store energy, shifting that energy back and forth between its kinetic form and its potential form.
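The two facts above can be sketched numerically. Below is a minimal illustration of the particle-on-a-spring picture (my construction; the mass, stiffness, and initial stretch are assumed values, not from the text): a single particle oscillates while its energy shuttles between kinetic and potential form, and the total never changes.

```python
# One particle on a spring, integrated with the leapfrog (kick-drift-kick)
# method. Assumed illustrative values throughout.
m, k = 1.0, 4.0          # particle mass and spring stiffness
x, v = 1.0, 0.0          # start displaced and at rest: all energy potential
dt = 1e-4
total0 = 0.5 * k * x**2  # initial (purely potential) energy
max_ke = 0.0

for _ in range(50_000):
    v += -(k / m) * x * (dt / 2)   # half kick
    x += v * dt                    # drift
    v += -(k / m) * x * (dt / 2)   # half kick
    max_ke = max(max_ke, 0.5 * m * v**2)

total = 0.5 * m * v**2 + 0.5 * k * x**2
assert abs(total - total0) / total0 < 1e-6   # total energy conserved
assert max_ke > 0.999 * total0               # energy fully visits kinetic form
```

The leapfrog integrator is chosen because it conserves energy well over long runs, which is the whole point of the demonstration.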

Let's take a closer look at how that model reacts to something striking it. We start with the simplest possible version of that model. A body of mass M, moving at speed V, strikes a massless spring of stiffness k attached to an immovable wall and rebounds. M rebounds at the speed V in the opposite direction because it must retain its original energy. Now attach a body of mass m to the free end of the spring. M strikes the mass-and-spring combination at speed V and rebounds at speed v<V. We have

½MV² = ½Mv² + ΔE,   ΔE > 0   (Eq'n 1)

Thus, because it moves with the body M, the mass and spring combination absorbs some of M's original energy. And as the body M moves away from it, the body m will oscillate back and forth as the spring alternately stretches and compresses.
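The energy bookkeeping described here can be checked with a short numeric sketch (mine; the masses and speed are assumed values). If the contact is brief compared with the spring's period, the strike acts approximately as an instantaneous one-dimensional elastic collision between M and the end mass m; the wall spring then stores m's share of the energy as m oscillates.

```python
# Standard 1-D elastic-collision formulas; all numbers are illustrative.
M, m, V = 1.0, 2.0, 3.0

v_M = (M - m) / (M + m) * V   # M's velocity afterward (negative: it rebounds)
v_m = 2 * M / (M + m) * V     # velocity handed to the end mass

E_in = 0.5 * M * V**2
E_out = 0.5 * M * v_M**2 + 0.5 * m * v_m**2
assert abs(E_in - E_out) < 1e-12   # no energy created or destroyed
assert abs(v_M) < V                # M rebounds at v < V: energy was absorbed
```

With these numbers M comes in at 3 units of speed and rebounds at only 1, the missing kinetic energy now residing in the oscillating mass-and-spring combination.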

If a particle is connected to springs parallel to all three axes of our coordinate frame and a collision with some body drives it in the x-direction, it gains x-ward energy, which it partially passes on. But, also, the stretching of the lateral springs moves the particles to which they are attached and gives them some energy. In this way a body disperses and scrambles the energy it absorbs from a collision. In our example the book hit the table and, in accordance with Newton's third law of motion, the table hit the book; those collisions scrambled the energy originally contained in the falling book.

Now we have several new theorems about physical bodies:

A) We can represent any body as a set of particles, each connected to its neighbors by springs of appropriate stiffness. Bodies (particles) in contact, either through collisions or through forcefields, can exchange scrambled energy, because scrambled energy is manifested in matter as vibrations of the particles that comprise that matter and vibrations propagate through matter. Please note that the above analysis works, regardless of whether the body comprises a set of discrete particles interconnected via forcefields or the body consists of an elastic continuum.

However, if we had a continuum, we would have an infinite number of modes of oscillation among which we would have to divide the energy that we put into the body. But the finite-value theorem won't allow us to divide energy (or any other conserved quantity) into an infinite number of pieces: therefore, it so constrains the structure of Reality that matter cannot allow such an infinite division to happen. Thus we infer that matter must, at the very least, act as though it had only a finite number of parts.

B) The presumed springs must be oriented along more than one direction in order to hold solid matter together. Thus those springs cause any kinetic energy applied to one part of a body to spread out away from that part in all directions. They thus produce an internal friction that diverts energy from straight propagation, thereby ultimately scrambling the energy.

C) A body reaches thermal equilibrium, the state in which the distribution of energy does not change, when energy moves at equal rates in opposite directions anywhere within the body or system. This defines thermal equilibrium and leads to the next topic.

Zeroth Law

Place your hands together, palm to palm, and rub them vigorously back and forth. In only a few seconds you will feel your hands grow hotter. You know that you have done work to overcome the friction between your hands and, so, have put scrambled energy into your skin. So now you know that increased scrambled energy in a body correlates in some way with the body's hotness; that is, if we put more heat (scrambled energy) into a body, that body will grow hotter and if we take heat out of that body, the body will grow colder.

Now imagine that we have two bodies: Body-A contains heat in the amount E_A distributed over N_A modes of vibration and Body-B contains heat in the amount E_B distributed over N_B modes of vibration. Let the two bodies come into contact with each other. We know that if E_B = 0, then heat will migrate from Body-A into Body-B as the active vibrations in Body-A excite the quiescent vibrational modes in Body-B.

Inside each body the forces that act to move scrambled energy grow stronger as the amplitudes of the vibrations increase and become weaker as the amplitudes of the vibrations decrease. Thus we infer that heat will move more rapidly from a hot body than from a cold body, so if we put a hot body into contact with a cold body, heat will move faster from the hot body to the cold body than it does from the cold body to the hot body. As a consequence the cold body will become hotter and the hot body will become colder until heat flows as rapidly from the originally cold body to the originally hot body as it does in the reverse direction; that is, until the two bodies come into thermal equilibrium with each other.
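The relaxation toward equilibrium described above can be caricatured in a few lines. This is my toy model, with assumed heat capacities and an assumed exchange coefficient: heat flows between two bodies at a rate proportional to their temperature difference, so the net flow vanishes exactly when the temperatures match.

```python
# Toy heat-exchange relaxation; all parameter values are illustrative.
C_A, C_B = 2.0, 1.0        # heat capacities of the two bodies
T_A, T_B = 300.0, 100.0    # hot body A, cold body B
kappa, dt = 0.5, 0.01      # exchange coefficient and time step
E0 = C_A * T_A + C_B * T_B # total scrambled energy at the start

for _ in range(10_000):
    q = kappa * (T_A - T_B) * dt   # net heat moving from A to B this step
    T_A -= q / C_A
    T_B += q / C_B

assert abs(T_A - T_B) < 1e-6                    # thermal equilibrium reached
assert abs(C_A * T_A + C_B * T_B - E0) < 1e-6   # first law: energy conserved
```

Note that the hot body cools and the cold body warms until the flows balance, while the total energy never changes, in keeping with the first law.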

We also know something else about bodies based on our model of them as collections of particles interconnected by springs. As we add heat to a body we amplify the vibrations that store the energy. That means that the body should become slightly larger, by an amount depending upon the body's composition and the strengths of the forces that we represent with springs. That fact gives us something useful in our exploration of the physics of heat.

We can't measure the internal energy of a body directly, so we must use indirect means to do so. If we have two bodies, identical except that they expand by different amounts when we put them into contact with some large body, then we can use them and a little leverage to measure their relative expansion. In actuality we put mercury into a glass bulb that feeds the liquid metal into a capillary channel: as heat makes the mercury in the bulb expand, the extra volume gets pushed up the capillary. In this way we can establish a scale to measure the degree of hotness of a body. The number that we get in this way we call the body's temperature.

One of the tasks of thermodynamics is that of correlating temperature to the energy content of a body. We know, though we will have to prove it later as if we were still ignorant of it, that temperature correlates with the average energy per storage mode in a body: that is, we have ε=kT. And the device that we use to measure temperature we call a thermometer.

In order to ensure that our measurements come out as accurately as possible we want to use a thermometer that is very much smaller than the body whose temperature we want to measure. In a reflection of the limiting process used in differentiation, we want to determine the temperature of a body as the size of the thermometer we use tends toward zero. In this way we ensure that the thermometer does not interfere with the result by putting too much heat into the body or taking too much heat out of it.

Let's put a thermometer into contact with a body we call Able and wait until the pointer that indicates the measured temperature stops moving. When the pointer remains steady we know that the components of the thermometer have stopped expanding or contracting; which means that the thermometer is no longer drawing heat from Able or giving heat to Able; which means that the rate at which heat is going from Able to the thermometer equals the rate at which heat is going from the thermometer to Able; which means that the thermometer and Able are in thermal equilibrium with each other. We note the temperature that the thermometer shows and then put the thermometer into contact with a body that we call Baker. When the thermometer and Baker come to thermal equilibrium with each other we note that the temperature equals the temperature we measured with Able.

Now we know that if we were to put Able and Baker into contact they would already be in thermal equilibrium with each other. We know that statement must be true to Reality because we know that Able and Baker separately have excited the vibrations in the thermometer to a certain amplitude, which we see reflected in the temperature, and no greater or lesser. We know that in that state the thermometer also excites the vibrations in Able and Baker to certain amplitudes and none different. Thus we must infer that the vibrations in Able and Baker cannot change each other's amplitudes, but that energy must flow from Able to Baker at the same rate at which energy flows from Baker to Able. Thus we know that two bodies will already be in thermal equilibrium with each other and will stay in thermal equilibrium with each other when brought into contact if they have the same temperature (as measured by the same thermometer).

Here we have the physical analogue of Euclid's first common notion (things which are equal to the same thing are equal to each other): If two systems (Able and Baker) are in thermal equilibrium with a third system (the thermometer), then they are in thermal equilibrium with each other. That statement comprises the Zeroth Law of Thermodynamics.

Second Law

Again we have a body that we can represent as comprising a collection of particles interconnected by the equivalent of N springs. Each spring constitutes a mode of energy storage, so we take N to represent the number of storage modes in the given body. We also assert that the body contains scrambled energy in the amount we represent as E. We thus define average energy contained in each mode in the body as ε=E/N.

In how many ways can the body distribute that energy within itself? Each mode of energy storage can conceivably contain an amount of energy between E and zero, the body distributing the remaining energy over the other modes. As with mass, so with energy; the finite-value theorem requires that we cannot divide energy infinitely fine. The amounts of energy that a given mode can hold must differ, at the very least, by minuscule, not infinitesimal, increments. Each different way of distributing energy within the body constitutes a microstate of the system. The total number of states in which the body can distribute its energy comes, then, from a function of the total energy and the number of modes, so we represent that number as

Ω = Ω(E, N)   (Eq'n 2)

Imagine now that we have divided the body into two parts by slicing it with a purely imaginary plane. Each of the parts has its own omega. For each microstate in which one of the parts can exist, the other part can exist in any one of its own microstates, so the overall omega must equal the product of the omegas of the parts; that is,

Ω = Ω₁Ω₂   (Eq'n 3)

If we were to change the total number of states in our body, such as by adding more scrambled energy, then that equation tells us that

dΩ = Ω₂dΩ₁ + Ω₁dΩ₂   (Eq'n 4)

But, though true to mathematics, that statement does not give us a proper extensive parameter of the system. For an extensive parameter to be proper it must have the same mathematical form at all levels of division of the system. For example, in our body we have for the energy E = E₁ + E₂ and for the volume V = V₁ + V₂. To turn our omega into a proper extensive parameter, we need only divide Equation 4 by Equation 3 to obtain

dΩ/Ω = dΩ₁/Ω₁ + dΩ₂/Ω₂   (Eq'n 5)

That little manipulation is not as useless as it might seem at first. Consider that of all the microstates we can ascribe to our system, the body manifests none of them for more than an instant before the interactions among the spring equivalents shift it into another manifestation. And while we expect that the next microstate that the system manifests will be similar to the one it currently manifests, we really have no way of determining whether the system prefers one microstate over another; thus, we must assume that all of the microstates have an equal probability of being manifested in a given time interval. The inverse of omega thus becomes equal to the probability that the system will manifest any given microstate.
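The microstate counting described above can be made concrete under an assumed discretization (my illustration, not the text's): if a body's energy comes in q indivisible units spread over N storage modes, the number of distributions is the "stars and bars" count C(q + N − 1, N − 1).

```python
from math import comb

def omega(q, N):
    """Number of ways to distribute q indivisible energy units over N modes."""
    return comb(q + N - 1, N - 1)

# Slice a body with N = 5 modes holding q = 4 units into parts of 2 and 3 modes.
# For each way of splitting the energy, the joint count is the product of the
# parts' counts (Eq'n 3); summing over all splits recovers the whole-body count.
total = sum(omega(q1, 2) * omega(4 - q1, 3) for q1 in range(5))
assert omega(4, 5) == 70
assert total == omega(4, 5)

# With all microstates equally probable, each occurs with probability 1/Omega.
p_microstate = 1 / omega(4, 5)
```

The product rule of Eq'n 3 shows up here as a plain combinatorial identity, and the inverse of omega is exactly the per-microstate probability the paragraph above describes.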

That inverse also serves as a normalizing factor. In Equation 5 it transforms the absolute change in the number of states in the system into the relative or fractional change in the number of states in the system. To go further with that idea let's revisit Equation 3 in the form

lnΩ = lnΩ₁ + lnΩ₂   (Eq'n 6)

from which we infer

Ω = Ae^(E/E')   (Eq'n 7)

in which A represents a constant of proportionality and E' represents some characteristic energy of the system that we use here to reduce the total energy to a pure number that the exponential function can work on. If we now differentiate that equation, we obtain

dΩ = (A/E')e^(E/E')dE   (Eq'n 8)

which gives us, when we divide it by Equation 7,

dΩ/Ω = dE/E'   (Eq'n 9)

For simplicity we define

S = k lnΩ   (Eq'n 10)

which just recapitulates Ludwig Boltzmann's statistical definition of the entropy of a system. Then we can rewrite Equation 9 as

dS = k dE/E'   (Eq'n 11)

That equation gives us the usual mathematical description of entropy from classical thermodynamics, although we have yet to relate E' to the temperature of the system; Equation 5, properly understood, gives us the statistical description of entropy, which I redefined immediately to provide Ludwig Boltzmann's famous description of it. So now for any composite system we have an entropy function S: an additive, continuous, and differentiable function of the extensive parameters of that system, monotonically increasing in the energy, which encodes the fact that the values assumed by the extensive parameters in the absence of internal constraints are those that maximize the entropy over the manifold of constrained equilibrium states.

We might also call this the Eagerness of the system to push energy into any other system, though that word seems a little too anthropomorphic. Whatever we call it, it gives us a measure of how easy or difficult any given transfer of energy between two bodies will be. Ultimately we know that in any system isolated from all others only those processes will occur for which the entropy of the whole system increases or remains unchanged.

Thus we state the second law of thermodynamics. We have the greatest expectation of finding any given system in a state with the greatest value of omega, the state of greatest probability. Thus, in accordance with Boltzmann's Equation, we know that if we remove internal constraints from a system, that system will evolve toward the equilibrium state that has the maximum possible entropy.
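The statistical content of that statement can be sketched with the same assumed unit-counting model used above (my construction; the sizes are assumed): two parts with 50 storage modes each share 100 indivisible energy units, entropy is taken as ln(Ω₁Ω₂) with Boltzmann's k set to 1, and the split that maximizes the entropy is the one the unconstrained system evolves toward.

```python
from math import comb, log

def omega(q, N):
    """Number of ways to distribute q indivisible energy units over N modes."""
    return comb(q + N - 1, N - 1)

N1 = N2 = 50       # two equal parts, 50 modes each (assumed sizes)
q_total = 100      # total energy units shared between them

# Entropy of each possible split, with Boltzmann's constant set to 1.
entropy = {q1: log(omega(q1, N1)) + log(omega(q_total - q1, N2))
           for q1 in range(q_total + 1)}

best_split = max(entropy, key=entropy.get)
assert best_split == 50   # equal parts in equilibrium take equal energy
```

The maximum-entropy split puts equal energy into equal parts, and for large systems that maximum becomes so overwhelmingly probable that we never observe anything else.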

Third Law

One more consequence of the particle/spring model of solid matter that we have been using is our understanding that there must exist a state in which the particles don't move and the springs contain no energy. A body in such a state would be as cold as a body can get; nothing, we would think, could make it colder. To make it colder would require the use of negative energy and we know that we cannot have such a thing. Thus the energy-less body described above displays a temperature of absolute zero, the zero below which we cannot go.

We know that the first law necessarily entails a statement that we cannot have even the possibility of obtaining an infinite amount of energy from any source. That statement in turn entails a statement that Existence must so structure Reality that the Universe has a state of absolute zero energy. The existence of such a state upholds the law of conservation of energy. If we did not have an absolute zero of energy, then we could have a state of infinite negative energy that we could use to violate the conservation law. Such a state of infinite negative energy would give an endless downhill run that we could use to give a body infinite positive kinetic energy, something that the finite-value theorem forbids. We already have the possibility of infinite positive energy, but only because it will never be fulfilled to actuality. So we cannot have even the possibility of infinite negative energy.

We also know that any given system has only one way to contain zero energy. Thus we know that if a system has zero energy, it must also have zero entropy.

Now we can ask: In the race to absolute zero which property of a system diminishes faster - the energy or the entropy? Until we have a clearer picture of the relationships among entropy, energy, and temperature, we cannot answer that question. Nonetheless, for completeness I will state Nernst's Postulate, the third law of thermodynamics, here.

We state Nernst's Postulate thus: the entropy of any system vanishes in the state for which ∂E/∂S = 0 (that is, at the absolute zero of temperature). Nernst put it this way: "the entropy change in any isothermal process approaches zero as the temperature at which the process occurs approaches zero." We can also say that the amount of heat that can be transferred in some process diminishes even faster than does the temperature. And that just gives us another way of stating Nernst's Postulate.

The Cynic's Laws of Thermodynamics

When I was learning physics at UCLA in the late 1960s my classmates and I discovered in our course in thermodynamics a mnemonic device for remembering the classical three laws of thermodynamics. We called it the Las Vegas version and it goes like this:

First Law: You can never win, but can only break even.

Second Law: You can only break even at absolute zero.

Third Law: You can never reach absolute zero.

In this grand casino that we call the Universe the house will take you every time. And lest you think this is nothing but a meaningless joke, I offer the following commentary:

The first law consists of the law mandating the conservation of energy. Because we can neither create nor destroy energy, any energy we put into a system must come out of the system in the same amount. Thus we cannot win anything in a thermodynamic system, but only get back what we put into it.

The second law tells us that if we put a certain amount of heat into the hot side of a heat engine, then we must exhaust a certain fraction of that heat out the cold side of the engine, leaving the rest available for the engine to convert into useful work. The amount of heat that the engine must exhaust only equals zero, leaving 100% of the input heat for conversion to useful work, if and only if the cold side has a temperature of absolute zero. So we can only break even and get back work equal to the heat that we put into the engine if we can lower the temperature of one part of the engine to absolute zero.
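This "break even only at absolute zero" reading can be put in numbers (my sketch; the temperatures are assumed values). A reversible engine running between a hot and a cold reservoir converts at most eta = 1 − T_cold/T_hot of its input heat into work, and eta reaches 1 only when the cold side sits at absolute zero.

```python
def carnot_efficiency(T_hot, T_cold):
    """Maximum fraction of input heat convertible to work; temperatures in kelvins."""
    return 1.0 - T_cold / T_hot

assert abs(carnot_efficiency(500.0, 300.0) - 0.4) < 1e-12  # 60% must be exhausted
assert carnot_efficiency(500.0, 0.0) == 1.0                # break even only at T = 0
```

An engine between 500 K and 300 K can thus convert at best 40% of its input heat into work; the remaining 60% must go out the cold side.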

The third law, the punch line, then tells us that no process will enable us to lower the temperature of any object to absolute zero in a finite number of steps.

Thermodynamics

We solve the fundamental problem of thermodynamics when we devise the proper description of the equilibrium state that results from our removing internal constraints in a closed composite system or from our bringing formerly isolated systems into contact with each other. That is, thermodynamics, properly understood, tells us how matter distributes energy within itself.

Classical thermodynamics, derived from empirical studies of heat in matter, makes no assumptions about the underlying structure of matter. Statistical thermodynamics, on the other hand, must take an atomistic view of matter as one of its postulates. But if we want a truly axiomatic-deductive thermodynamics, can we deduce atomism or get around it?

We try to get around that question with a pseudo-Demokritean hypothesis about the structure of bodies. We invoke the finite-value theorem to assert that nothing can divide energy (or any other conserved quantity, for that matter) into an infinite number of pieces; such a division would be the same as having an infinite amount of the quantity for the purposes of transfinite arithmetic. Thus we infer that matter must, at the very least, act as though it had only a finite number of parts. And we must be unable, by any means, to make the number of those parts infinite.

Later we will deduce a true atomism when we deduce the quantum theory. But first we must attend to the science of thermodynamics. We note that the laws described above give us the barest foundation for an understanding of heat and its motions. Now we must elaborate them.


Back to Contents