Back to Contents

We want to calculate the entropy possessed by an ideal gas consisting of N particles of the same chemical species contained in a volume V. We divide that volume into two parts, V1 and V2, separated from each other by a closed valve. The gas has the same pressure and temperature in both parts of the container, thereby ensuring that when we open the valve no gross motion of the gas, and no net redistribution of its properties, will ensue. We now want to calculate the entropy of the gas before and after we open the valve.

We begin by constructing the partition function of the gas,

Z = \frac{1}{h^{3N}} \int \exp\!\left[ -\beta \sum_{i=1}^{N} \left( \frac{p_i^2}{2m} + U(x_i) \right) \right] d^3p_1 \cdots d^3p_N \, d^3x_1 \cdots d^3x_N \qquad (Eq'n 1)

in which pi represents the linear momentum carried by the i-th particle and xi represents the position occupied by the i-th particle. That integral factors readily into the product of two integrals

Z = \frac{1}{h^{3N}} \left[ \int \exp\!\left( -\beta \sum_{i=1}^{N} \frac{p_i^2}{2m} \right) d^3p_1 \cdots d^3p_N \right] \left[ \int \exp\!\left( -\beta \sum_{i=1}^{N} U(x_i) \right) d^3x_1 \cdots d^3x_N \right] \qquad (Eq'n 2)

Because we have assumed that we have an ideal gas we have also tacitly assumed that the particles interact so weakly, if at all, that throughout the gas we effectively have U(xi)=0, so the second integral in Equation 2 becomes

\int \exp\!\left( -\beta \sum_{i=1}^{N} U(x_i) \right) d^3x_1 \cdots d^3x_N = V^N \qquad (Eq'n 3)

That fact allows us to rewrite Equation 2 as

Z = z^N \qquad (Eq'n 4)

in which

z = \frac{1}{h^3} \int \exp\!\left( -\frac{\beta p^2}{2m} \right) d^3p \, d^3x = \frac{V}{h^3} \, (2\pi m k T)^{3/2} \qquad (Eq'n 5)

represents the partition function associated with a single particle in the gas. Thus we have

\ln Z = N \ln\!\left[ \frac{V}{h^3} (2\pi m k T)^{3/2} \right] \qquad (Eq'n 6)
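To get a feel for the size of the single-particle partition function in Equation 5, here is a minimal numeric sketch; the gas species (helium-4), the volume, and the temperature are assumed for illustration, not taken from the text:

```python
import math

# CODATA values of the SI constants
k = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J*s

def single_particle_z(V, T, m):
    """Eq'n 5: z = (V / h^3) * (2*pi*m*k*T)^(3/2)."""
    return (V / h**3) * (2.0 * math.pi * m * k * T) ** 1.5

# Assumed sample conditions: one liter of helium-4 at 300 K
m_he = 6.6464731e-27  # kg, helium-4 atomic mass
z = single_particle_z(V=1.0e-3, T=300.0, m=m_he)
print(f"z = {z:.3e}")  # enormously larger than any realistic particle number
```

That z vastly exceeds any realistic particle number, which is what justifies treating the gas classically.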

From Equation 6 we can derive thermodynamic averages for the system that it describes. We calculate the mean gas pressure as

p = kT \frac{\partial \ln Z}{\partial V} = \frac{NkT}{V} \qquad (Eq'n 7)

which simply recapitulates the ideal gas law, pV=NkT. We also calculate the overall mean kinetic energy of the gas as

E = -\frac{\partial \ln Z}{\partial \beta} = \frac{3}{2} NkT \qquad (Eq'n 8)

which makes the mean per-particle energy

\varepsilon = \frac{E}{N} = \frac{3}{2} kT \qquad (Eq'n 9)

which is what we inferred for the Maxwellian distribution. And for the entropy of the system we calculate

S = k (\ln Z + \beta E) = Nk \ln\!\left[ \frac{V}{h^3} (2\pi m k T)^{3/2} \right] + \frac{3}{2} Nk \qquad (Eq'n 10)

in which the last term on the right comes from multiplying Equation 8 by beta.
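Equations 7 and 8 lend themselves to a quick numerical check: writing ln Z as a function of V and β and taking finite-difference derivatives should reproduce NkT/V and (3/2)NkT. The particle number, volume, temperature, and particle mass below are assumed sample values:

```python
import math

k = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J*s

def log_Z(V, beta, N, m):
    # Eq'n 6 written in terms of beta = 1/kT:
    # ln Z = N * ln[ (V/h^3) * (2*pi*m/beta)^(3/2) ]
    return N * math.log((V / h**3) * (2.0 * math.pi * m / beta) ** 1.5)

# assumed sample gas
N, T, V, m = 1.0e22, 300.0, 1.0e-3, 6.6464731e-27
beta = 1.0 / (k * T)

# p = kT d(lnZ)/dV  (Eq'n 7), central finite difference
dV = 1.0e-9
p = k * T * (log_Z(V + dV, beta, N, m) - log_Z(V - dV, beta, N, m)) / (2 * dV)

# E = -d(lnZ)/d(beta)  (Eq'n 8), central finite difference
db = beta * 1.0e-6
E = -(log_Z(V, beta + db, N, m) - log_Z(V, beta - db, N, m)) / (2 * db)

print(p, N * k * T / V)    # agree: the ideal gas law pV = NkT
print(E, 1.5 * N * k * T)  # agree: (3/2) N k T
```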

Returning our attention to our original system, we see that before we open the valve we have two separate systems whose entropies conform to

S_1 = N_1 k \ln\!\left[ \frac{V_1}{h^3} (2\pi m k T)^{3/2} \right] + \frac{3}{2} N_1 k \qquad (Eq'n 11)

and

S_2 = N_2 k \ln\!\left[ \frac{V_2}{h^3} (2\pi m k T)^{3/2} \right] + \frac{3}{2} N_2 k \qquad (Eq'n 12)

When we open the valve we effectively combine the two volumes into one volume, V=V1+V2, and the gases within them into one body of gas with particle population N=N1+N2. We calculate the entropy of that system as

S = (N_1 + N_2) k \ln\!\left[ \frac{V_1 + V_2}{h^3} (2\pi m k T)^{3/2} \right] + \frac{3}{2} (N_1 + N_2) k \qquad (Eq'n 13)

If we subtract the sum of Equations 11 and 12 from that equation, we will obtain a description of how much we changed the entropy of the system when we opened the valve:

\Delta S = S - S_1 - S_2 = k \left( N_1 \ln\frac{V}{V_1} + N_2 \ln\frac{V}{V_2} \right) \qquad (Eq'n 14)

The formula on the right side of that equation does not equal zero, but it should. In opening the valve we have not brought about a redistribution of the system's properties, and if we were to close the valve again, the system's state would be indistinguishable from its initial state. In connecting the system's two parts by opening the valve, then, we have carried out a passively reversible process, and such processes involve no change in the system's entropy. The disagreement between that fact and Equation 14 bears the name of Gibbs' paradox, after Josiah Willard Gibbs (1839 Feb 11 – 1903 Apr 28), who discovered it in 1876 while studying the entropy of mixing for his paper "On the Equilibrium of Heterogeneous Substances", in which he laid the mathematical foundation of modern chemical thermodynamics and physical chemistry. Now we must find a way to resolve that paradox.
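A short sketch makes the paradox concrete by evaluating the formula of Equation 14 for identical conditions on both sides of the valve; the particle numbers and volumes are assumed sample values:

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K

def delta_S_eq14(N1, V1, N2, V2):
    """Eq'n 14: the apparent entropy change on opening the valve."""
    V = V1 + V2
    return k * (N1 * math.log(V / V1) + N2 * math.log(V / V2))

# The same gas at the same temperature and pressure on both sides
# (sample numbers assumed for illustration)
dS = delta_S_eq14(N1=1.0e22, V1=1.0e-3, N2=1.0e22, V2=1.0e-3)
print(dS)  # about 0.19 J/K: positive, although nothing physical changed
```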

We must have made a subtle error in constructing the partition function for our gas, so we need to examine the premises upon which we founded our construction. The indices that appear in Equations 1 and 2 offer a clue to a tacit assumption that has affected the calculation. In assigning index numbers to the particles we have tacitly assumed into our premises the statement that we can distinguish the particles one from another. In accordance with that statement, we must infer that interchanging two of the particles puts the gas into a state physically distinct from its state prior to the interchange. That is not a passively reversible process, and thus it requires a change in the system's entropy, a fact that we see reflected in Equation 14. We can see that fact more clearly if we imagine that we have somehow painted the particles in space V1 white and the particles in space V2 black: when we open the valve the gas becomes gray as particles diffuse in both directions, and simply closing the valve again won't ungray the gas.

But we have also assumed into our original premises the statement that opening the valve does not impose upon the gas any physically distinct change of state. Thus, we have also tacitly assumed into our premises the statement that the particles are indistinguishable from each other. Ignore the contradiction for a moment and consider the following to see what that last statement means for our calculation:

We have N different boxes, which might, for example, represent cells in phase space, laid out in a row and we have N identical particles that we have labeled with index numbers (think of billiard balls with their painted-on numbers). We have N different ways in which we can put particle #1 into one of the boxes. Under the restriction that we can put only one particle in a box, we have N-1 ways in which we can put particle #2 into one of the boxes for every way in which we can put particle #1 into a box; thus, we have N(N-1) ways in which we can put two of our particles into two of the N boxes. Next we have N-2 ways to put particle #3 into a box, N-3 ways to put particle #4 into a box, and so on. We thus discover that we have N! different ways in which we can put N distinguishable particles into N boxes, one particle to a box.
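That counting argument is small enough to verify by brute force:

```python
import math
from itertools import permutations

# Brute-force check of the counting argument: the number of ways to place
# N labeled particles into N boxes, one particle per box, equals N!
def count_placements(N):
    return sum(1 for _ in permutations(range(N)))

for N in range(1, 7):
    assert count_placements(N) == math.factorial(N)
print("count_placements(N) == N! for N = 1..6")
```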

Now erase the index numbers from the particles. We now have no way to distinguish from one another the N! states of particles occupying boxes that we had before, so we actually have only one such state. That fact means that if we have a physical system whose description encodes the number of different ways in which the particles form that system, then claiming that indistinguishable particles are distinguishable leads us to overestimate the number of states by

N! \cong N^N e^{-N} \sqrt{2\pi N} \qquad (Eq'n 15)

the factorial, which we can approximate with Stirling's formula.

The partition function that we obtained for Equation 4 consists of N identical integrals multiplied together. Each of those integrals, shown in Equation 5, represents a Maxwell distribution applied to a single particle, which we use to calculate the probability of the particle possessing a given value of any of the system's properties. If we remove from our premises the tacit assumption that our particles are distinguishable from each other, then we must correct our partition function by dividing it by N!; thus, we have Equation 4 as

Z = \frac{z^N}{N!} \qquad (Eq'n 16)

Equation 6 then becomes

\ln Z = N \ln\!\left[ \frac{V}{h^3} (2\pi m k T)^{3/2} \right] - (N \ln N - N) \qquad (Eq'n 17)

In adding the term NlnN-N to that equation I left out the term involving the natural logarithm of the square root of 2πN, which is negligible compared to N.
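A quick check shows both how accurate Stirling's formula is and how small the dropped logarithmic term is compared to N:

```python
import math

N = 100
ln_factorial = math.lgamma(N + 1)               # exact ln N!
kept = N * math.log(N) - N                      # the N ln N - N term kept
dropped = 0.5 * math.log(2.0 * math.pi * N)     # the ln sqrt(2*pi*N) term dropped

# Stirling's formula with the correction is already excellent at N = 100...
print(ln_factorial, kept + dropped)
# ...and the dropped term is tiny compared to N; for a gas with N ~ 10^22
# particles it is utterly negligible.
print(dropped / N)
```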

The extra term in Equation 17 (NlnN-N) does not change the calculations in Equations 7 and 8, as, indeed, it should not. But it does change our calculation of the entropy. With the extra term Equations 11 and 12 become

S_1 = N_1 k \ln\!\left[ \frac{V_1}{N_1 h^3} (2\pi m k T)^{3/2} \right] + \frac{5}{2} N_1 k \qquad (Eq'n 18)

and

S_2 = N_2 k \ln\!\left[ \frac{V_2}{N_2 h^3} (2\pi m k T)^{3/2} \right] + \frac{5}{2} N_2 k \qquad (Eq'n 19)

and Equation 13 becomes

S = (N_1 + N_2) k \ln\!\left[ \frac{V_1 + V_2}{(N_1 + N_2) h^3} (2\pi m k T)^{3/2} \right] + \frac{5}{2} (N_1 + N_2) k \qquad (Eq'n 20)

Subtracting the sum of Equations 18 and 19 from Equation 20 thus gives us

\Delta S = k \left( N_1 \ln\frac{V/N}{V_1/N_1} + N_2 \ln\frac{V/N}{V_2/N_2} \right) = 0 \qquad (Eq'n 21)

because we must have

\frac{V_1}{N_1} = \frac{V_2}{N_2} = \frac{V}{N} \qquad (Eq'n 22)

in accordance with Equation 7 in light of our assumption that the gas has the same temperature and pressure in the volumes V1 and V2. That gives us exactly what we should have.
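We can verify numerically that the corrected entropy formula yields no entropy change; the particle numbers, volumes, and the lumped temperature-dependent constant below are assumed for illustration:

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K

def S_corrected(N, V, c):
    # Eq'n 20 in the form S = N k [ ln(V/N) + c + 5/2 ], where c lumps
    # together the temperature-dependent factor ln[(2*pi*m*k*T)^(3/2)/h^3]
    return N * k * (math.log(V / N) + c + 2.5)

# Equal temperature and pressure mean V1/N1 = V2/N2 (Eq'n 22); the sample
# numbers and the constant c are assumed for illustration
N1, V1 = 1.0e22, 1.0e-3
N2, V2 = 3.0e22, 3.0e-3
c = 50.0  # hypothetical value; it cancels in the difference

dS = S_corrected(N1 + N2, V1 + V2, c) - S_corrected(N1, V1, c) - S_corrected(N2, V2, c)
print(dS)  # zero to machine precision: opening the valve changes no entropy
```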

The Law of Mass Action

This analysis will bring us into contact with the realm of chemistry, which means that we want to take a step beyond particles merely bouncing off each other and start talking about particles sticking to each other to form different kinds of particles. In this analysis I want to use the Helmholtz free energy, F(V,T), as the organizing principle. We define the Helmholtz free energy as the thermodynamic potential whose variables consist of temperature, volume, and particle numbers, which gives us the complete differential

dF = -S \, dT - p \, dV + \sum_i \mu_i \, dN_i \qquad (Eq'n 23)

in which μi represents the chemical potential of the i-th species of particle. That differential comes with a minimum principle: if we put a system into thermal contact with a heat reservoir that keeps the system's temperature constant, then any unconstrained internal parameter of the system will evolve to an equilibrium state that minimizes the Helmholtz free energy of the system. We call it free energy because the work done in the system's evolution to equilibrium comes from the heat reservoir.

We define energy thermodynamically as the consequence to a system of changing that system's extensive parameters (e.g. entropy, volume, number of particles, etc.), so we can write the differential as

dE = T \, dS - p \, dV + \sum_i \mu_i \, dN_i \qquad (Eq'n 24)

Typically the system under consideration consists of a gas-filled cylinder that encloses a piston and feeds into one or more valves through which gas may be let into or out of the cylinder: we take that as our typical example because it was the modeling of such systems that led Nineteenth-Century physicists, starting with Sadi Carnot, to develop thermodynamics as a fully mathematized branch of physics through their studies of various kinds of thermodynamic energy. Once we have a complete description of the energy we can then devise various thermodynamic potentials by trading one or more extensive parameters for their conjugate intensive parameters. For the Helmholtz potential we trade the entropy for the temperature, making the temperature the variable under our control, so we have

F = E - TS \qquad (Eq'n 25)

That gives us

F = -kT \ln Z \qquad (Eq'n 26)

by way of Equation 10.

Assume that our gas consists of several different species of particles whose chemical names/symbols we represent generically as Ci. We can then describe a chemical reaction within the gas by

\sum_i b_i C_i = 0 \qquad (Eq'n 27)

so, for example, in a gas consisting initially of a mixture of hydrogen and nitrogen we have

2 \, \mathrm{NH_3} - \mathrm{N_2} - 3 \, \mathrm{H_2} = 0 \qquad (Eq'n 28)

which acknowledges the fact that some of the hydrogen and nitrogen will combine to form ammonia. As before, we assume that the gas is hot enough and thin enough that we can describe the motions of its particles with classical dynamics. In such a reaction the change in the number of particles of a given species, dNi, must be proportional to bi for that particular species (with bi positive for products and negative for reactants) in order to keep the reaction properly balanced.
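A quick bookkeeping check confirms that b-factors chosen as positive for products and negative for reactants balance the ammonia reaction atom by atom:

```python
# Atom-by-atom balance check for the ammonia reaction, written in the
# form of Eq'n 27 with b positive for products, negative for reactants:
# 2 NH3 - N2 - 3 H2 = 0
atoms = {
    "N2":  {"N": 2},
    "H2":  {"H": 2},
    "NH3": {"N": 1, "H": 3},
}
b = {"N2": -1, "H2": -3, "NH3": 2}

def element_balance(element):
    # sum over species of b_i times the number of atoms of that element
    return sum(b[s] * atoms[s].get(element, 0) for s in atoms)

for element in ("N", "H"):
    assert element_balance(element) == 0
print("2 NH3 - N2 - 3 H2 balances for both N and H")
```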

Once we establish our initial mixture of hydrogen and nitrogen in our cylinder we don't allow its energy or its volume to change. When the system comes to equilibrium its entropy has reached its maximum possible value, so we know that dS=0. Equation 24 then gives us

T \, dS = -\sum_i \mu_i \, dN_i \qquad (Eq'n 29)

because dE=0 and dV=0. We thus obtain the criterion for chemical equilibrium in the gas,

\sum_i \mu_i \, dN_i = 0 \qquad (Eq'n 30)

As noted above, we have dNi=αbi for some α representing the number of reactions occurring in a given interval, so that criterion becomes

\sum_i b_i \mu_i = 0 \qquad (Eq'n 31)

after we divide out the proportionality factor.

Now we want to calculate the chemical potentials of the gas's components. From Equation 25 we have

\mu_i = \left( \frac{\partial F}{\partial N_i} \right)_{T,V} \qquad (Eq'n 32)

because dT=0 (our system touches a heat reservoir) and dV=0. We want to use Equation 26 to carry that calculation forward, so now we need to set up the partition function of our gas. Because the particles in the gas interact only weakly with each other, we get the partition function of the gas by multiplying together the partition functions of the components of the gas; that is,

Z = \prod_i \frac{z_i^{N_i}}{N_i!} \qquad (Eq'n 33)

which gives us

\ln Z = \sum_i N_i \ln z_i - \sum_i (N_i \ln N_i - N_i) \qquad (Eq'n 34)

We have, of course,

z_i = \frac{V}{h^3} \, (2\pi m_i k T)^{3/2} \qquad (Eq'n 35)

Finally we integrate Equation 32 to get

F = -kT \sum_i N_i \left( \ln\frac{z_i}{N_i} + 1 \right) \qquad (Eq'n 36)

because the integration only yields a constant and subtracting that constant from itself when we evaluate the integral yields zero. So we also have from Equation 32

\mu_i = -kT \ln\frac{z_i}{N_i} \qquad (Eq'n 37)
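We can check Equation 37 against Equation 32 numerically by differentiating the Helmholtz free energy of a single species with a finite difference; the values of z, T, and N are assumed for illustration:

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K

def F(N, z, T):
    # Helmholtz free energy of one species, from Eq'ns 26 and 34:
    # F = -kT (N ln z - N ln N + N)
    return -k * T * (N * math.log(z) - N * math.log(N) + N)

# assumed sample values: a single-particle partition function z and N particles
z, T, N = 1.0e28, 300.0, 1.0e22

# mu = dF/dN (Eq'n 32), central finite difference
dN = 1.0e14
mu_numeric = (F(N + dN, z, T) - F(N - dN, z, T)) / (2.0 * dN)
mu_formula = -k * T * math.log(z / N)  # Eq'n 37

print(mu_numeric, mu_formula)  # they agree
```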

If we look at only one reaction (α=1), we have

\sum_i b_i \mu_i = \Delta F = 0 \qquad (Eq'n 38)

in which

\Delta F = -kT \sum_i b_i \ln\frac{z_i}{N_i} \qquad (Eq'n 39)

the standard free-energy change of the reaction. We then have

\sum_i b_i \ln N_i = \sum_i b_i \ln z_i \qquad (Eq'n 40)

By calculating the antilogarithm of that equation we get the equation that expresses the law of mass action:

\prod_i N_i^{b_i} = \prod_i z_i^{b_i} = K_N(V,T) \qquad (Eq'n 41)

In that equation the equilibrium constant, KN(V,T), is not truly constant: it is constant only relative to the numbers of particles in the gas, but it is a function of the volume and the temperature of the system by way of its dependence upon the particles' partition functions.

We can also derive that equation more directly by estimating the probabilities of the reaction occurring in both directions. In order for the reaction described by Equation 28 to occur, the reactants must come into contact with each other. The probability that they will do so stands in direct proportion to their density, which, in turn, stands in direct proportion to the number of particles of each species in the volume containing the gas. In this case we combine the probabilities by multiplying them together (if we need the same kind of molecule to participate more than once, as hydrogen does in forming ammonia, then we raise its numbers to the power of the b-factor), so for the reactants to come together to create the products we have the probability as

P_+ = K_+(V,T) \prod_j N_j^{b_j} \qquad (Eq'n 42)

in which bj and Nj refer only to the reactants. The proportionality constant, which we represent with K+(V,T), depends only on the volume of the container and the temperature of the gas. We can calculate a similar probability for the products coming together and reproducing the reactants (i.e. two ammonia molecules producing a nitrogen molecule and three hydrogen molecules) and it conforms to

P_- = K_-(V,T) \prod_i N_i^{b_i} \qquad (Eq'n 43)

in which bi and Ni refer only to the products of the reaction.

When the system reaches equilibrium those probabilities come to equality, P+=P-. If we divide that equality by Equation 43, we get

\frac{\prod_i N_i^{b_i}}{\prod_j N_j^{b_j}} = \frac{K_+(V,T)}{K_-(V,T)} = K_N(V,T) \qquad (Eq'n 44)

which corresponds to Equation 41.
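To see the mass-action condition pick out a definite equilibrium, this sketch solves it for the ammonia reaction by bisection on the number of completed reactions; the equilibrium constant and initial particle numbers are invented for illustration:

```python
# Solving the mass-action condition N_NH3^2 / (N_N2 * N_H2^3) = K_N for the
# ammonia reaction by bisection on x, the number of completed reactions
# (each consumes 1 N2 and 3 H2 and produces 2 NH3).
# K_N and the initial particle numbers are invented for illustration.
K_N = 1.0e-40
N_N2_0, N_H2_0, N_NH3_0 = 1.0e22, 3.0e22, 0.0

def residual(x):
    n_n2 = N_N2_0 - x
    n_h2 = N_H2_0 - 3.0 * x
    n_nh3 = N_NH3_0 + 2.0 * x
    # positive when the NH3 side has overshot equilibrium
    return n_nh3**2 - K_N * n_n2 * n_h2**3

# The residual rises monotonically with x, so bisection converges;
# the upper bound keeps every particle number positive.
lo, hi = 0.0, N_N2_0 * (1.0 - 1.0e-9)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if residual(mid) < 0.0:
        lo = mid
    else:
        hi = mid
x_eq = 0.5 * (lo + hi)
ratio = (2.0 * x_eq) ** 2 / ((N_N2_0 - x_eq) * (N_H2_0 - 3.0 * x_eq) ** 3)
print(x_eq, ratio)  # the ratio reproduces K_N at equilibrium
```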

In that latter derivation we made no explicit reference to the indistinguishability of the particles of a given species, but in the derivation that led to Equation 41 we had to include an explicit reference to the indistinguishability of the particles. That explicit reference gave Equation 34 its second term on the right side of the equality sign. If not for that second term, Equation 34 would describe a system that could only reach equilibrium at absolute zero. The difference between the two deductive paths shows up in the fact that the first gives us an explicit description of the equilibrium constant in terms of the partition function of the gas and the second merely establishes the existence of an equilibrium constant without yielding an explicit formula for calculating its value.

