The Law of Entropy

Back to Contents

    Heat will not, of itself, flow from a cold body to a hot body. That statement of the second law of thermodynamics comes directly from the principle of least action density, the statement that any thermodynamic system, within the constraints imposed upon it, will so rearrange itself as to minimize the energy density at any given point within the system, consistent with the same thing happening at every other point in the system. In the absence of constraining forces, the material manifestations of energy tend to move in a way that reduces the density of the energy. That fact means that the system evolves to redistribute its internal energy in a way that equalizes the energy density throughout the system.

    Thus we gain a glimpse of how the Universe came into existence and continues to come into existence. At the Creation Point space emerges and expands. The process of coming into existence creates energy, but the created energy would not move off the point unless some phenomenon forced it to do so. That phenomenon, the second law of thermodynamics, manifests itself in the observation that any concentration of energy tends to spread itself into regions of lower density. Thus heat flows from hot to cold, gas flows from high density to low density, and so on.

    We thus infer that something must do work upon a system in order to reverse that redistribution of heat energy. If a system evolves naturally in one direction, then certainly we must do something artificial to make it evolve in the opposite direction. Energy moves spontaneously from high density to low density, so in order to make energy move from low density to high density we must do work on the system that moves it. But how much work must we do? To answer that question we must understand the relation between work and heat.

    By the end of the Eighteenth Century scientists knew that chemical reactions produce or absorb heat. In 1780 Antoine-Laurent de Lavoisier (1743 Aug 26 – 1794 May 08) and Pierre-Simon, marquis de Laplace (1749 Mar 23 – 1827 Mar 05) showed that the heat released in a reaction equals the heat that they had to put into reversing that reaction. In 1840 Germain Henri Hess (1802 Aug 07 – 1850 Nov 30) demonstrated that Lavoisier and Laplace’s discovery holds true throughout chemistry, regardless of how many steps a process takes, thus establishing a precursor to the first law of thermodynamics (which physicists call the law of conservation of energy).

    Benjamin Thompson, Count Rumford (1753 Mar 26 – 1814 Aug 21), brought heat into the realm of physics through a famous experiment that he conducted at the end of the 1700s. In 1798 he published "An Experimental Enquiry Concerning the Source of the Heat which is Excited by Friction", in which he described the experiment he performed to establish the fact that heat does not consist of the fluid called caloric, but, rather, exists as a form of motion. He had workers at the arsenal in Munich immerse a cannon barrel in a tub of water and bore it out with a blunt tool turned by a mechanism driven by a horse. Two and a half hours after the experiment began, the initially cold water in the tub had begun to boil. Rumford reasoned that the cannon barrel would contain only a limited amount of caloric (if such existed) but that frictional heat would have no limit: the experiment confirmed that heat does not consist of caloric. In 1847 James Prescott Joule (1818 Dec 24 – 1889 Oct 11) conducted a refined version of Rumford’s experiment and worked out the mechanical equivalent of heat, the amount of work a system must do to generate one calorie of heat, thereby fully establishing the relationship between heat and work.
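Joule’s number, roughly 4.186 joules of work per calorie of heat, lends itself to a quick calculation. The Python sketch below (the weight, drop height, and water mass in it are illustrative assumptions, not figures from Joule’s apparatus) converts the work done by a falling weight into a temperature rise in a tub of water:

```python
# Joule's result: about 4.186 joules of work generate one calorie of heat.
J_PER_CAL = 4.186          # mechanical equivalent of heat (J/cal)
g = 9.81                   # gravitational acceleration (m/s^2)

mass_weight = 10.0         # kg of falling weight, assumed
drop_height = 2.0          # metres of fall, assumed
work = mass_weight * g * drop_height     # joules of work done on the water

heat_cal = work / J_PER_CAL              # calories of heat generated
water_mass_g = 500.0                     # grams of water stirred, assumed
delta_T = heat_cal / water_mass_g        # 1 cal warms 1 g of water by 1 C

print(f"work done: {work:.1f} J")
print(f"heat generated: {heat_cal:.1f} cal")
print(f"temperature rise: {delta_T:.4f} C")
```

The tiny temperature rise from even a large falling weight shows why the mechanical equivalent of heat took careful experimentation to measure.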

    By 1865 Rudolf Clausius had amended the principle of equivalence between heat and work. He reasoned that in a steam engine the parts of the working fluid do work upon each other. The heat used to transform the working fluid from one state to another (internal work) becomes unavailable to do external work, such as pushing on a piston. The first law of thermodynamics allows us to express that fact by representing the change in the total energy in a system as a simple sum of the changes in the heat (internal energy) and the work (external energy),

dE=đW+đQ,

(Eq’n 1)

in which the barred dee represents the inexact differential, a differential whose value depends upon factors not included in the differential itself.

    If we have a heat engine that uses an expanding and contracting gas to do work, then we have a straightforward way to calculate the amount of work that the engine does. We assume that we control the volume of the gas directly, which we can do by enclosing the gas inside a cylinder fitted with a sliding piston whose seal against the cylinder’s inner wall prevents gas from escaping. We do work upon the gas by reducing its volume, pushing the piston against the resistance of the pressure in the gas, and gain work from the gas by expanding its volume, so we have

đW=-pdV,

(Eq’n 2)

in which a positive number, reflecting a decrease in volume due to compression of the gas, represents work done upon the system, thereby increasing its energy.
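Equation 2 lends itself to a simple numerical check. The Python sketch below (the particle number, temperature, and volumes are illustrative assumptions) accumulates the increments -p dV step by step for an ideal gas compressed isothermally and compares the total with the closed-form result W = NkT ln(V1/V2):

```python
import math

# Accumulate dW = -p dV for an ideal gas (pV = NkT) compressed at constant
# temperature, then compare with the exact integral W = NkT ln(V1/V2).
k = 1.380649e-23      # Boltzmann constant (J/K)
N = 1.0e22            # number of gas particles, assumed
T = 300.0             # kelvin, held constant (isothermal), assumed

V1, V2 = 2.0e-3, 1.0e-3   # compress from 2 litres to 1 litre (m^3)
steps = 100000
dV = (V2 - V1) / steps    # negative: the volume shrinks

W = 0.0
V = V1
for _ in range(steps):
    p = N * k * T / (V + dV / 2)   # ideal-gas pressure at the midpoint
    W += -p * dV                   # dW = -p dV: positive when dV < 0
    V += dV

W_exact = N * k * T * math.log(V1 / V2)
print(W, W_exact)   # both positive: we do work on the gas
```

The sum and the closed form agree closely, and both come out positive, matching the sign convention stated above.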

    Of course, we see that inexact differential as an incomplete differential. For the product of two variables we have the complete differential as

d(pV)=pdV+Vdp.

(Eq’n 3)

Note that in that equation the first term on the right side of the equality sign represents the change in the gas’s energy that comes from adding a new volume dV at a constant pressure: it thus represents an increase in the energy available to do work, which we denote with a positive đW. Comparing that equation with Equations 1 and 2 implies that the total energy in the gas equals the product of the gas’s volume and its pressure. We also get the enthalpy of the system as

d(E+pV)=Vdp=đQ.

(Eq’n 4)

    That equation tells us that whatever changes the gas’s pressure without changing its volume corresponds to a change in the gas’s heat content. Equation 4 thus expresses Amontons’s law.

    Because we cannot change the values of intensive parameters directly, we can only change the pressure, without changing the system’s volume, by adding heat to the system or removing it. We accomplish that transfer of heat by putting our system into thermal contact with a heat reservoir, a second system so large that its exchange of heat with our first system changes its temperature by a minuscule, negligible amount.

    We know that temperature provides us with a number that measures the average kinetic energy contained in the particles that constitute a thermal system. And we know that changing the amount of heat in a system changes the average kinetic energy of the particles. However, if we conceive temperature as a kind of potential to make heat move, just as we conceive pressure as a potential to make work move from one system to another, then we must conceive heat as moving into or out of our system at that potential, which means that the increment of heat stands in direct proportion to the temperature of the body in which it exists. That statement means that we can write the increment of heat as

đQ=TdS,

(Eq’n 5)

in which S represents an extensive parameter that we call entropy.

    We can also write that equation as

dS=đQ/T.

(Eq’n 6)

Imagine that we have two heat reservoirs, A and B, at absolute temperatures TA and TB and imagine that we draw a small amount of heat đQ out of reservoir B, thereby reducing its entropy by đQ/TB, and put that heat into reservoir A, thereby increasing its entropy by đQ/TA. If we calculate the amount by which the entropy changes for the whole system, we get

dS=đQ/TA-đQ/TB.

(Eq’n 7)

If we have B hotter than A (so that the heat transfer can proceed spontaneously, according to the second law), then the change in entropy equals a positive number and the total entropy of the system increases; if we have B colder than A (so that we must do work on the system to make the heat transfer proceed, because the second law forbids it to proceed spontaneously), then the change in entropy equals a negative number and the total entropy of the system decreases. Now we have a mathematical criterion that encodes the second law: if some process would increase the entropy of a thermodynamic system, then that process can proceed spontaneously; if it would decrease the entropy of the system, it cannot proceed spontaneously and can only occur if some external operator does work on the system.
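We can put that criterion into a few lines of Python as a sanity check (the temperatures and the amount of heat are illustrative assumptions):

```python
# Entropy bookkeeping from Equation 7: heat dQ leaves reservoir B and
# enters reservoir A, so dS = dQ/T_A - dQ/T_B for the whole system.
def entropy_change(dQ, T_A, T_B):
    """Total entropy change when heat dQ moves from B (at T_B) to A (at T_A)."""
    return dQ / T_A - dQ / T_B

dQ = 100.0   # joules transferred, assumed

# B hotter than A: the transfer proceeds spontaneously, entropy increases.
print(entropy_change(dQ, T_A=300.0, T_B=400.0))   # positive

# B colder than A: forbidden without external work, entropy would decrease.
print(entropy_change(dQ, T_A=400.0, T_B=300.0))   # negative
```

The sign of the result, not its magnitude, decides whether the process can proceed spontaneously.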

    So far, so clever. But does the word entropy actually denote some real physical entity, just as the word volume denotes the three-dimensional analogue of extent in space? To answer that question we turn our attention to an ideal gas so that we can exploit the equation of state – pV=NkT.

    The two sides of that equation describe the kinetic energy content of the gas, the left side describing it as work and the right side describing it as heat. We can thus describe the total heat contained in the gas as Q=NkT. Using that equation to make the appropriate substitution for the temperature in Equation 6 gives us

dS=Nk(đQ/Q).

(Eq’n 8)

That equation integrates readily to

S=Nk ln Q+C,

(Eq’n 9)

for the indefinite integral. We then get

ΔS=Nk ln(Q2/Q1),

(Eq’n 10)

for the definite integral, with which we carry out actual calculations.
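As a check on that integration, the following Python sketch (the particle number and heat contents are illustrative assumptions) sums the increments Nk dQ/Q in small steps from Q1 to Q2 and compares the total with Nk ln(Q2/Q1):

```python
import math

# Numerically integrate dS = Nk dQ/Q (Equation 8) from Q1 to Q2 and
# compare with the definite integral in Equation 10.
k = 1.380649e-23   # Boltzmann constant (J/K)
N = 1.0e20         # number of particles, assumed
Q1, Q2 = 1.0, 3.0  # initial and final heat content in joules, assumed

steps = 200000
dQ = (Q2 - Q1) / steps
S = 0.0
Q = Q1
for _ in range(steps):
    S += N * k * dQ / (Q + dQ / 2)   # midpoint rule on dS = Nk dQ/Q
    Q += dQ

S_exact = N * k * math.log(Q2 / Q1)
print(S, S_exact)
```

The running sum converges on the logarithmic form, which is why Equation 10 is the one we use in actual calculations.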

    The conservation of energy law and the finite-value theorem tell us that Nature cannot divide any quantity of energy into infinitesimal bits. Nature can only subdivide energy into minuscule but finite units. If we designate one of those units with the Greek letter epsilon, we can rewrite Equation 10 as

ΔS=Nk ln(Q/ε)=k ln(Q/ε)^N,

(Eq’n 11)

in which we calculate the difference in entropy between a system that contains one epsilon unit of heat and the same N-particle system containing heat in the amount equal to Q.

    Now we want to determine what the argument of the natural logarithm represents. We have taken a certain number of units of energy and raised that number to a power equal to the number of particles in the system. For convenience we define p=Q/ε, so we have the argument of the logarithm as p raised to the N-th power. That number equals the number of ways in which we can arrange N p-sided dice with each die showing only one of its faces. We thus imagine our N particles laid out with each showing a certain number of energy units. But that image gives us a new problem, one that we can address by looking at the fact that, among others, it describes a state in which each and every particle in the system contains all of the system’s energy.

    That particular state seems to describe a blatant violation of the law of conservation of energy. We can test that idea by imagining that we select a particle and measure its energy. We find that, indeed, the particle carries all of the system’s energy. We then measure the energy in another particle and find that it equals zero. We get the same result for all of the other remaining particles. Thus we infer that the system somehow behaved in a way that upheld the conservation law.

    We now can understand that the original state of the system ensured that whatever particle we chose, that particle would carry all of the system’s energy. The act of measuring the particle’s energy then altered the system, putting it into a state in which each of the remaining particles carried no energy, thereby upholding the conservation law. In other words, the system originally occupied an indeterminate state that became determinate when we made a measurement on it. Thus we have a precursor of Heisenberg’s indeterminacy principle and of the quantum theory.

    We now want to consider the situation in which the system contains no energy, when p=0. As we have seen, entropy appears to give us a kind of measure of a system’s capacity to hold heat at a given temperature. If we have p=1, then ΔS=0. If we accept p=1 as defining a base state for all systems and define Ω=pN, then we can rewrite Equation 11 as

S=k ln Ω,

(Eq’n 12)

which we recognize as Boltzmann’s equation describing the entropy of a system in terms of the number of ways of building the system. As for the possibility of having a system in which p=0, that comes under the third law of thermodynamics, which I cover in a separate essay.

    So now we need to answer our original question – how much work must we do to move a certain amount of heat from a cold body to a hot body? We know that the heat won’t get redistributed as we desire unless the redistribution increases the entropy of the entire system, changing the entropy by zero at minimum. We can only make the redistribution proceed if the work that we do upon the system turns into heat and, thereby, adds entropy to the system.

    Look again at Equation 7 and the situation it describes. It tells us how much the entropy of the system changes as heat goes from part B to part A and, thus, how much entropy we must add to the system through the work we do upon the system. The work that we do adds heat to the system at the temperature TA, so for the minimum work we must do (for dS=0) we have

(đQ+đW)/TA-đQ/TB=0.

(Eq’n 13)

That equation leads to

đW=đQ(TA-TB)/TB.

(Eq’n 14)

We can also reverse the meaning of that equation. In addition to telling us the minimum work that we must do upon the system to move đQ from cold part B to hot part A, it tells us the maximum amount of work we can make the system do as đQ moves from A to B. In that latter case we take dE=đQ+đW and calculate the maximum possible efficiency of a heat engine as

đW/đQ=(TA-TB)/TA.

(Eq’n 15)

Thus we see how the law of entropy enables us to use the second law of thermodynamics in technical calculations involving the manipulation of heat and work.
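As a closing sanity check, the Python sketch below encodes Equations 14 and 15 with illustrative reservoir temperatures (the numbers are assumptions, chosen only to show the signs and magnitudes):

```python
# Equation 14: minimum work to pump heat dQ from cold B (T_B) to hot A (T_A).
def pump_work_min(dQ, T_A, T_B):
    """Least work needed to move heat dQ from the cold to the hot reservoir."""
    return dQ * (T_A - T_B) / T_B

# Equation 15: maximum fraction of heat drawn from hot A convertible to work.
def engine_efficiency_max(T_A, T_B):
    """Ideal efficiency of a heat engine running between T_A (hot) and T_B (cold)."""
    return (T_A - T_B) / T_A

T_A, T_B = 300.0, 250.0   # kelvin, assumed (A hot, B cold)
dQ = 100.0                # joules of heat moved, assumed

print(pump_work_min(dQ, T_A, T_B))        # 100 * 50/250 = 20.0 J
print(engine_efficiency_max(T_A, T_B))    # 50/300, about 0.167
```

Note that the efficiency approaches unity only as the cold reservoir approaches absolute zero, which is where the third law of thermodynamics enters the picture.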

