The Law of Entropy

I: Abiotic Organic Synthesis

    If we conceive life as a set of chemical processes (and at the most fundamental level we must so conceive it), then we must also conceive life as consisting of a set of thermodynamic processes. That fact brings the three fundamental laws of thermodynamics into play in biology, especially in molecular biology.

    The first law of thermodynamics, the law of conservation of energy, tells us that no known or anticipated process can create or destroy energy. As energy gets transferred from body to body, it may change form (from radiation to chemical bonds to heat, for example), but the total amount of it neither increases nor decreases. In a closed system the total amount of energy remains the same.

    The second law of thermodynamics, the law of entropy, tells us that energy in a closed system undergoes transformations and transfers that only increase the net entropy of the system.

    And the third law of thermodynamics, the Nernst theorem, tells us that as the temperature of a system approaches absolute zero, the system’s entropy approaches a constant minimum value and its heat capacity approaches zero. In essence that law means that no thermodynamic system can ever actually reach a temperature of absolute zero. Scientists have brought some small systems extraordinarily close to absolute zero, but no finite sequence of operations will ever take a system all the way there.

    Of those three, the second law vexes us the most. One common statement of the law of entropy has it that no system will evolve from a simple form to a more complex form. The complexity of life, even at the most primitive level, seems to mock that statement, defying us to explain how life could have emerged spontaneously from a chemical system, however vast, filled with only simple molecules. To work out such an explanation we must first look deeper into the concept of entropy and the law governing it.

    We actually have two versions of the second law, because we have two versions of thermodynamics. The more modern version tends to get more play in discussions of evolutionary chemistry, as it should, but we need to see both versions in order to gain the insights we need to understand how life could have come to exist and to have done so spontaneously.

    The older version, associated with classical thermodynamics (the thermodynamics of engineers), came from the work of Rudolf Clausius (1822 Jan 02 – 1888 Aug 24). Its usual statement proclaims that heat cannot of itself pass from a cold body into a hot body. Clausius derived that law from his analysis of steam engines, but the law goes well beyond that simple application. Indeed, in 1876 the Italian physicist Adolfo Bartoli deduced the fact that light exerts pressure, using an imaginary experiment in which he invoked Clausius’ law. The fact that Bartoli got it right tells us that the law of entropy stands as one of the most fundamental laws of Reality, one that every phenomenon in Nature must obey.

    But classical thermodynamics includes no assumptions about the fundamental nature of matter. It doesn’t acknowledge the fact that matter consists of atoms in the dozens of species that we find on Mendeleev’s Table of the Chemical Elements. Thus Clausius’ version of the law of entropy has only limited application in modern chemistry.

    The newer version of the law of entropy comes from statistical thermodynamics (the thermodynamics of physicists and chemists), the version of thermodynamics based on the idea that matter consists of atoms. James Clerk Maxwell (1831 Jun 13 – 1879 Nov 05) and Ludwig Boltzmann (1844 Feb 20 – 1906 Sep 05) began the development of this version of thermodynamics by applying Newtonian dynamics to very large numbers of particles and then using the methods of statistical analysis to carry out the relevant calculations. In the statistical version of thermodynamics the law of entropy states that a system always manifests the form of greatest probability available to it. Boltzmann put that statement into mathematical form in an equation which states that the entropy of a system stands in direct proportion to the natural logarithm of the number of ways in which the system’s components can be assembled into one particular manifestation.
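
    In symbols, with S standing for the entropy, k for Boltzmann’s constant, and W for the number of ways in which the system’s components can be assembled into the given manifestation, Boltzmann’s formula reads

S = k ln(W)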

    We see Boltzmann’s formula reflected in the fact that, if we have a system with a large number of components, we expect to see the system manifested in the form that has the largest number of ways of being assembled. Consider a system that consists of sixty-four identical checkers laid out on the standard eight-by-eight checkerboard. If we assemble the system by tossing the checkers onto the board, we don’t expect to see a stack of sixty-four checkers sitting on one square, even if we discount the effect of gravity. Instead, we expect to see an arrangement of checkers close to one in which the board has one checker on each square. We have only one way in which we can pile the checkers into a single stack on a given square, so we have just sixty-four ways in which we can create a manifestation of the system displaying a 64-checker stack somewhere on the board. At the opposite extreme we have vastly many more ways to create the manifestation of the system that has one checker on each and every square: we have 64 different ways to lay down the first checker, 63 ways to lay down the second checker, 62 ways to lay down the third checker, and so on, so the total number of ways to create the array equals sixty-four factorial (written 64!, the product obtained by multiplying together the first sixty-four positive integers, which approximately equals 1.27x10^89). If we calculate the corresponding entropies, we find that the uniform distribution of checkers has roughly fifty times the entropy of the single-stack arrangement (the natural logarithm of 64! comes to about 205, against about 4.2 for the natural logarithm of 64).
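
    We can check that arithmetic with a short computation. The sketch below, written in Python, counts the orderings in which the checkers can be laid down (treating the order of placement as what distinguishes one way from another) and compares the two entropies through Boltzmann’s formula, with the constant k set to one for simplicity.

```python
import math

# Count the ways of assembling the two extreme arrangements of 64 checkers
# on a 64-square board, treating the order of placement as what
# distinguishes one way from another.
ways_single_stack = 64                      # one stack, on any of the 64 squares
ways_one_per_square = math.factorial(64)    # 64 choices, then 63, then 62, ...

print(f"64! = {float(ways_one_per_square):.3e}")   # about 1.27e89

# Boltzmann: S = k * ln(W); setting k = 1 gives entropy in natural units.
s_stack = math.log(ways_single_stack)       # about 4.16
s_uniform = math.log(ways_one_per_square)   # about 205
print(f"entropy ratio: {s_uniform / s_stack:.1f}")   # roughly 50
```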

    Maxwell conducted a similar analysis when he worked out the mathematical description of an ideal gas, which consists of perfectly spherical, perfectly elastic particles that approximate the atoms that we find in most common gases. He didn’t focus his attention on the distribution of the particles in space: it doesn’t take much inference to deduce the uniformity of the particles’ distribution. Instead, he worked out the distribution of kinetic energy, manifested as heat, among the particles. He discovered that the average energy per particle stands in direct proportion to the absolute temperature of the gas (temperature measured from absolute zero) and he derived an equation describing the proportion of particles within the gas that carry a given amount of energy. He found that even if someone started with a collection of particles that all carry the same energy, that gas would evolve, through collisions among the particles, into a mixture of hot (high energy) and cold (low energy) particles. If we think of the per-particle energy as analogous to the squares on a checkerboard, a different energy for each square, then the maximum entropy state, what physicists call the Maxwellian distribution, corresponds, more or less, to the uniform distribution of the checkers on that board.

    When we look at the algebraic formula describing the Maxwellian distribution we see that in a real gas, even at moderate temperatures, a small number of molecules carry enough energy that collisions will put them into an excited state that makes them susceptible to chemical reaction. That fact gives the gas an appearance of complexity that seems to belie the statement that the Maxwellian distribution represents a state of maximum entropy. The popular understanding of the law of entropy, after all, holds that increasing entropy leads to the reduction, even the elimination, of complexity.
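
    A small computation shows how thinly that chemically active minority gets spread. The Python sketch below evaluates the fraction of molecules in a Maxwellian gas whose kinetic energy exceeds a threshold; the figure of 50 kilojoules per mole serves only as an illustrative stand-in for a typical activation energy, not a value drawn from any particular reaction.

```python
import math

def fraction_above(e_threshold_kj_per_mol, temperature_k):
    """Fraction of molecules in a Maxwellian (3-D ideal) gas whose kinetic
    energy exceeds a given threshold."""
    R = 8.314e-3                                   # gas constant, kJ/(mol*K)
    x = e_threshold_kj_per_mol / (R * temperature_k)
    # P(E > Ea) = erfc(sqrt(x)) + (2/sqrt(pi)) * sqrt(x) * exp(-x)
    return math.erfc(math.sqrt(x)) + (2.0 / math.sqrt(math.pi)) * math.sqrt(x) * math.exp(-x)

# 50 kJ/mol stands in for a typical activation energy (an illustrative value).
for T in (298.0, 600.0, 1200.0):
    print(f"T = {T:6.0f} K   fraction above 50 kJ/mol = {fraction_above(50.0, T):.3e}")
```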

    So how can we best describe entropy and the law that governs it? The second law of thermodynamics, which says that any chemical system will evolve in a way that maximizes the entropy, gives us a form of the principle of least action; in this case chemical and thermodynamic systems obey the principle of least action density. Because we can deduce the principle of least action from a simple application of the theory of Special Relativity to the transformation of linear momentum and energy, we know that no phenomenon can violate the second law of thermodynamics. In no isolated system will we ever see the entropy of the system decrease spontaneously. Rather, spontaneous increases in entropy will accompany the system’s evolution toward the maximum uniformity in the distribution of its components.

    That idea of uniform distribution underlies the idea that the law of entropy forbids the spontaneous rise of complexity. We see examples of the leveling effect of the law of entropy all around us. If you spill a small amount of cream into the center of a cup of coffee, you will soon have a cup full of a beige liquid. But if you were to start with a cup of beige liquid, you could wait for the lifetime of a galaxy and never see the liquid separate into a dollop of white floating in a puddle of brown. As noted in the checkerboard example, the most uniform distribution has the highest probability of occurrence. The probability that the random jostling of particles in the cup will lead to the complete separation of the cream from the coffee comes so close to zero that we expect not to see such a separation in trillions of years. We understand that the white-in-brown situation has more complexity than does the beige situation, so we understand that complexity will not arise spontaneously out of uniformity. We then feel compelled to ask: if complexity spontaneously decays, then how can life exist?

    We must offer a caveat by noting that Earth, as an abode of life, does not represent a closed system. It draws energy from a hot reservoir (the sun) at about 6000 Kelvin (Celsius-sized degrees measured up from absolute zero) and it discharges energy into a cold reservoir (interstellar space) at 2.725 Kelvin. In classical thermodynamics we calculate the entropy in a parcel of gas by dividing the amount of heat in the parcel by the gas’s absolute temperature. Thus every joule of heat in a high-temperature gas represents less entropy than does a joule of heat in a low-temperature gas. Because the change in entropy equals the amount of heat moved into or out of a body divided by the absolute temperature of the body, energy flowing from the hot reservoir to the cold one produces a huge net increase in entropy, so small decreases in entropy on Earth can occur, swamped by that increase. We see a common example of this possibility in our kitchens.
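
    A rough budget makes the point concrete. The sketch below uses the temperatures quoted above to compute, from the relation dS = Q/T, the entropy generated when one joule of energy leaves the sun and eventually radiates into interstellar space; the one-joule figure is purely illustrative.

```python
# Entropy bookkeeping for one joule of energy that leaves the sun and
# eventually radiates away into the cold sky, using dS = Q/T.
Q = 1.0            # joules processed (illustrative)
T_hot = 6000.0     # kelvins, roughly the solar surface
T_cold = 2.725     # kelvins, the temperature of interstellar space

entropy_lost_by_sun = Q / T_hot        # about 0.00017 J/K
entropy_gained_by_sky = Q / T_cold     # about 0.367 J/K

print(f"net entropy produced: {entropy_gained_by_sky - entropy_lost_by_sun:.4f} J/K per joule")
# Any local decrease in entropy on Earth smaller than that budget can occur
# without violating the second law.
```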

    In the common culinary refrigerator we have a cold body (the interior and its contents) kept at a temperature lower than its surroundings against the tendency of heat to flow into the device. Here we see an example of how Clausius’ version of the law of entropy explains what happens. Heat will not spontaneously flow from a cold body into a hot body, just as water will not spontaneously flow from a lower altitude to a higher altitude: but we can make it flow uphill by using a pump. Likewise we can force heat to flow from cold to hot through the use of a heat pump. So the refrigerator, that little island of low temperature, can only exist as a cold body in a warm environment because we feed electric power to the heat pump that keeps it cold. Of course, the processes that generate the electricity that drives the heat pump also generate large amounts of entropy, more than enough to compensate the small decrease associated with the operation of the refrigerator.

    Thus we have the entropy that physicists deal with. To that we must now add the entropy involved in chemical reactions.

    As a simple example of a chemical system consider the reaction of carbon dioxide with hydrogen to produce methane and water vapor. Imagine filling a container, say a steel tank, with only carbon dioxide and hydrogen. If someone returns some time later and conducts a chemical analysis of the tank’s contents, they will find carbon dioxide and hydrogen certainly, but they will also find small amounts of methane and water vapor. Conducting that experiment with the carbon dioxide and hydrogen at high temperature and pressure in the presence of a catalyst (such as nickel) increases the amount of methane and water vapor produced in accordance with the Sabatier reaction, which was discovered by French chemist Paul Sabatier (1854 Nov 05 - 1941 Aug 14).

    In the tank the reaction

CO2 + 4H2 → CH4 + 2H2O

occurs. That formula tells us that one mole of carbon dioxide and four moles of hydrogen combine to yield one mole of methane and two moles of water (one mole of any substance consists of a mass, in grams, numerically equal to the molecular weight of the substance; for example, one mole of carbon dioxide weighs 12+16+16=44 grams). Chemists have determined the molar entropies of those four substances as

CO2:  S1 = 51.1 cal/mole-degree
H2:   S2 = 31.21 cal/mole-degree
CH4:  S3 = 44.5 cal/mole-degree
H2O:  S4 = 45.1 cal/mole-degree

at 298 Kelvin (25 Celsius, which corresponds to 77 Fahrenheit). In making one mole of methane at 298K we thus have

CO2 + 4H2:    S1 + 4S2 = 51.1 + 124.84 = 175.94 cal/degree
CH4 + 2H2O:   S3 + 2S4 = 44.5 + 90.2 = 134.7 cal/degree

Putting those sums together (134.7 - 175.94 = -41.24 calories per degree) tells us that the production of methane reduces the entropy of the gas mixture, in apparent violation of the second law of thermodynamics. We might thus infer that the reaction cannot occur, but before we make that inference we need to consider several other factors.
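
    The same deficit can be restated as a short computation. The sketch below takes the molar entropies tabulated above and computes the entropy change of the gas when one mole of methane forms.

```python
# Molar entropies at 298 K from the table above, in cal/(mole*degree)
S_CO2, S_H2, S_CH4, S_H2O = 51.1, 31.21, 44.5, 45.1

S_reactants = S_CO2 + 4 * S_H2     # 175.94 cal/degree
S_products = S_CH4 + 2 * S_H2O     # 134.7 cal/degree

delta_S = S_products - S_reactants
print(f"entropy change of the gas: {delta_S:.2f} cal/degree per mole of CH4")
# about -41.2 cal/degree: the gas itself loses entropy when methane forms
```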

    Three of the substances in the above chemical reaction consist of molecules made of atoms of two different chemical elements stuck together. Taking one of those molecules apart into its constituent elements takes energy, so in creating one mole of such a substance from its constituent elements we usually get back a quantity of energy, in the form of heat; chemists record that heat, with a negative sign when the reaction releases it, as the enthalpy of formation. For the three compound substances in our reaction chemists have determined the enthalpies of formation of the vapor forms as

CO2:  ΔH1 = -94,050 cal/mole
CH4:  ΔH2 = -17,890 cal/mole
H2O:  ΔH3 = -57,790 cal/mole

We don’t put hydrogen on that list because chemists define the enthalpy of formation in part by setting it equal to zero for each of the chemical elements. That fact of definition doesn’t affect the theory behind chemical reactions because, in working out those theories, chemists use differences between enthalpies and don’t use the enthalpies themselves.

    In the wild melee of particles flying about in a gas some molecules will carry enough kinetic energy that a collision with other molecules will break the molecules apart. When a molecule of carbon dioxide gets its oxygen atoms knocked off it can then absorb two hydrogen molecules to become a molecule of methane while the oxygen atoms each absorb a hydrogen molecule to become a molecule of water. Because the reactions occur through molecular collisions the energy that we have to take into account occurs as heat, the randomized kinetic energy of the particles in the gas. To convert one mole of carbon dioxide (plus four moles of hydrogen) into one mole of methane (plus two moles of water vapor) the system spends 94,050 calories of heat taking the carbon dioxide apart, but gains back 17,890+(2x57,790)=133,470 calories, for a net gain of 39,420 calories of heat added to the gas. If that reaction occurs at 298K, it adds 39,420/298=132.28 calories per degree of entropy to the system, which provides more than enough to compensate the deficiency that we calculated above.
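
    The full bookkeeping fits in a few lines. The sketch below combines the enthalpies of formation tabulated above with the entropy deficit calculated earlier and shows that the heat released by the reaction more than pays for the entropy lost in assembling the methane and water molecules.

```python
# Enthalpies of formation at 298 K, in cal/mole (from the table above);
# the elements, such as H2, contribute zero by definition.
dH_CO2, dH_CH4, dH_H2O = -94050.0, -17890.0, -57790.0

# Heat released by CO2 + 4H2 -> CH4 + 2H2O
dH_reaction = (dH_CH4 + 2 * dH_H2O) - dH_CO2    # -39,420 cal per mole of CH4
heat_released = -dH_reaction                     # +39,420 cal added to the gas

T = 298.0
entropy_from_heat = heat_released / T            # about +132.3 cal/degree
entropy_of_assembly = -41.24                     # from the previous calculation

print(f"entropy gained from released heat: {entropy_from_heat:+.2f} cal/degree")
print(f"net entropy change: {entropy_from_heat + entropy_of_assembly:+.2f} cal/degree")
# about +91 cal/degree per mole of methane: the second law stands satisfied
```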

    The entropy calculation does more than tell us whether a reaction can proceed: it enables chemists to calculate the degree to which reactants become products in a chemical reaction. When the concentrations of reactants and products reach the corresponding ratio, the system achieves chemical equilibrium, the state in which the net change of the system’s entropy equals zero. But we don’t get a static equilibrium, in which the reaction simply slows and then halts. Instead, we get a dynamic equilibrium, in which the rate of the forward reaction slows as the reaction uses up reactants and the rate of the reverse reaction (converting methane and water vapor into carbon dioxide and hydrogen) increases until the two rates equal each other. To a chemist measuring the concentrations of the substances in the tank the reaction appears to achieve a static equilibrium, but to someone tracking the molecules (if such a thing were possible) the equilibrium clearly involves a reaction and its reverse canceling each other out.
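
    Those same numbers let us estimate where the equilibrium lies. The sketch below computes the Gibbs free energy of the reaction (ΔG = ΔH - TΔS) and the corresponding equilibrium constant K = exp(-ΔG/RT) at 298 Kelvin; the large value of K tells us that thermodynamics strongly favors the products, even though, without a catalyst, the reaction proceeds too slowly for the products to accumulate quickly.

```python
import math

# Standard values at 298 K assembled from the tables above
dH = -39420.0     # reaction enthalpy, cal per mole of CH4
dS = -41.24       # reaction entropy, cal/(mole*degree)
T = 298.0
R = 1.987         # gas constant, cal/(mole*degree)

dG = dH - T * dS                   # Gibbs free energy of the reaction
K = math.exp(-dG / (R * T))        # equilibrium constant

print(f"dG = {dG / 1000:.1f} kcal/mole")   # about -27 kcal/mole
print(f"K  = {K:.2e}")                     # about 1e20: products strongly favored
```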

    That latter fact provides the basis for Le Chatelier’s principle. In 1885 Henri Louis Le Chatelier (1850 Oct 08 – 1936 Sep 17) published a statement to the effect that if we impose upon a chemical system in equilibrium a change in the concentration of the chemicals, the temperature of the system, the volume of the system, or the partial pressures of the gaseous components of the system, then the equilibrium of the system will change in a way that opposes the imposed change.

    To gain some understanding of how Le Chatelier’s principle works consider what happens if we remove water from the mixture of gases in our experimental tank. To that end imagine putting inside the tank a tube with a small fan to push gas through it. Inside the tube we have two metal plates, one made cold and the other made warm. As gas moves through the tube water vapor condenses onto the cold plate and drips into a container that sequesters the liquid so that it cannot return to the gas as vapor. The gas then moves over the warm plate, which reheats it so that the temperature inside the tank won’t change. As gas continues to flow through the tube, then, the concentration of water vapor in the tank diminishes.

    Reducing the concentration of water vapor in the tank reduces the probability that an excited methane molecule will meet the water molecules it needs to turn back into carbon dioxide and thereby reduces the rate at which the reverse reaction occurs. Thus the system slips out of equilibrium, the forward reaction proceeding faster than does the reverse reaction. The forward reaction then slows as the reactants get used up, while the reverse reaction speeds up again: even though we continue to remove water vapor, the concentration of methane increases, thereby increasing the probability that an excited methane molecule will encounter, in the residual water vapor, the oxygen atoms it needs to become a carbon dioxide molecule. When the two rates equalize, the system comes into a new state of equilibrium, one in which the concentrations of the components differ from the concentrations reached at the original equilibrium.
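
    We can mimic that shift with a toy model. The sketch below integrates simple mass-action rate laws for the forward and reverse reactions while continuously sequestering a fraction of the water vapor; the rate constants, starting concentrations, and removal rate are arbitrary illustrative numbers, not measured values, but the qualitative behavior follows Le Chatelier’s principle: the mixture drifts toward methane for as long as we keep removing the water.

```python
# Toy mass-action model of CO2 + 4H2 <=> CH4 + 2H2O with continuous removal
# of water vapor.  Rate constants, starting concentrations, and the removal
# rate are arbitrary illustrative numbers, not measured values.
kf, kr = 1.0, 1.0                            # forward / reverse rate constants
removal = 0.05                               # fraction of water removed per unit time
co2, h2, ch4, h2o = 1.0, 2.0, 0.5, 1.0       # starting concentrations
dt, steps = 0.0005, 200_000

for _ in range(steps):
    forward = kf * co2 * h2**4
    reverse = kr * ch4 * h2o**2
    net = (forward - reverse) * dt
    co2 -= net
    h2 -= 4 * net
    ch4 += net
    h2o += 2 * net
    h2o -= removal * h2o * dt                # the cold plate sequesters water

print(f"CO2={co2:.3f}  H2={h2:.3f}  CH4={ch4:.3f}  H2O={h2o:.3f}")
# Removing water keeps the forward reaction ahead of the reverse one, so
# methane accumulates until a reactant runs low: the equilibrium shifts.
```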

    Le Chatelier’s principle thus tells us how to alter the circumstances of a reaction to shift its equilibrium in a way that we want. Because that fact enables us to manipulate different contributions to the entropy, we can actually contrive a method that takes a relatively high-entropy substance (the three-atom molecule of carbon dioxide) and produces a relatively low-entropy substance (the five-atom molecule of methane). As our next example shows, nature can also contrive reactions that take high-entropy reactants and yield low-entropy products.

    When scientists got around to hypothesizing about how life originated they had to determine the conditions under which the origin of life occurred; in particular, they had to determine what kind of atmosphere Earth had at the time. Earth, prior to the origin of life, certainly could not have had the oxygen-rich atmosphere that it has today: oxygen reacts so readily with other substances that any free oxygen that might have existed in the primordial atmosphere would have quickly disappeared into various inert compounds. Free oxygen exists in today’s atmosphere only because plants produce it; but plants did not exist on the newly-formed Earth, so neither did free oxygen.

    In the 1920's Alexander Ivanovich Oparin (1894 Mar 02 [Old Style, Feb 18] – 1980 Apr 21) and J.B.S. Haldane (1892 Nov 05 – 1964 Dec 01) hypothesized that, instead of an oxidizing atmosphere, primordial Earth had a reducing atmosphere, similar to the atmosphere of Jupiter as revealed by spectroscopic analysis. That atmosphere consisted primarily of hydrogen, ammonia, methane, carbon dioxide, and water vapor. Oparin and Haldane hypothesized that in that atmosphere conditions favored chemical reactions that produced complex organic compounds from simple inorganic reactants.

    In 1952 Stanley Lloyd Miller (1930 Mar 07 – 2007 May 20), under the guidance of Harold Urey (1893 Apr 29 – 1981 Jan 25), conducted the first experimental test of that hypothesis. Simply put, the apparatus consisted of two glass flasks connected to each other through two lines of glass tubing, all of it completely and thoroughly sterilized to ensure that already-existing life would not taint the experiment. In the lower flask, resting on an electric heater, Miller put a small pool of water. The upper flask held two electrodes, between which high-voltage electricity would create sparks. The tube leading down from the sparking flask ran through a water jacket that cooled the gases as they returned to the pool flask, thereby preventing the average temperature in the apparatus from rising. Miller began the experiment by removing all of the air from the apparatus, replacing it with a mixture of hydrogen, ammonia (NH3), and methane, and then turning on the electricity.

    Driven by the temperature difference between the pool and the water jacket, the gases circulated through the apparatus and, more importantly, through the spark flashing between the electrodes in the sparking flask. Miller’s experiment thus mimicked, in miniature, processes operating in Earth’s primordial atmosphere, with the sparks playing the role of lightning. After the first day the water in the pool flask had turned noticeably pink. After the experiment had run for a week, the water had turned deep red and turbid and Miller found a strange reddish-brown sludge coating and oozing down the inside of the sparking flask.

    Analysis of the sludge revealed that it consisted of a mixture of many species of complex organic molecules. Miller found aminonitriles prominent among those species. When dissolved in water an aminonitrile undergoes double hydrolysis, loses an ammonia molecule, and becomes an amino acid, one of the building blocks of proteins. As Miller put it, "Just turning on the spark in a basic pre-biotic experiment will yield eleven out of twenty amino acids."

    How does that happen?

    First some intermediate reactions must occur. We already know that the presence of methane and water vapor in the gas mixture will lead to the production of carbon dioxide. Energized by the sparks, some carbon dioxide molecules lost an oxygen atom, which remained in the mixture as atomic oxygen (as distinct from molecular oxygen, the familiar O2), and became molecules of carbon monoxide. Ammonia then combined with carbon monoxide to produce hydrogen cyanide (HCN) and water. The combination of ammonia and methane also yielded hydrogen cyanide along with hydrogen. Other intermediate compounds, such as acetylene (C2H2), cyanogen (C2N2), and ethane (CH3CH3), also came into being, absorbing energy from the sparks as needed.

    To us the gas looks much more complex than does the four-compound gas that Miller put into his apparatus at the beginning of the experiment. But the statistical version of thermodynamics tells us that the most probable state, the state of maximum entropy, consists of the maximum number of chemical species that the system can feasibly produce. For a given set of atoms put into the apparatus in the original four kinds of molecules, the mixture would evolve into the form that offers the most ways of assembling those atoms into molecules: that mixture would contain more than four kinds of molecules, up to a limit, comprising all the species that simple chemical reactions can produce from the original four. The high-entropy mixture would consist largely of small molecules (two, three, or four atoms apiece) with progressively fewer of the larger molecules (five, six, or more atoms apiece).

    In Miller’s experiment the evolution of the gas mixture did not occur spontaneously. Something had to shake the dice to make them display a new pattern, and Miller hypothesized that the energy in the spark acted as that something. To test that hypothesis Miller set up and ran a control experiment, an experiment identical to his main experiment in all respects except one – it lacked the spark. After the experiment had run for a week the gas in the control apparatus remained mostly unchanged, while in the other apparatus ten to fifteen percent of the carbon introduced in the methane had reappeared in more complex molecules.

    Among the processes that occur in the gases we find methane reacting with atomic oxygen to yield formaldehyde (HCHO). Interaction with an additional methane molecule also produces acetaldehyde (CH3CHO). Along with those aldehydes the system also produced the related acids, formic acid (HCOOH, which gives ant stings their burn) and acetic acid (CH3COOH, which gives vinegar its bite), and the related alcohols, methanol (CH3OH, methyl alcohol) and ethanol (C2H5OH, ethyl alcohol).

    As the aldehydes floated in the mix they might have a chemical encounter with ammonia and release a water molecule, then react with hydrogen cyanide to yield an aminonitrile. For the two simplest aldehydes, described above, we have

HCHO + NH3 + HCN → (NH2)HCH(CN) + H2O

CH3CHO + NH3 + HCN → (NH2)CH3CH(CN) + H2O,

which reactions produce glycinenitrile from formaldehyde and alaninenitrile from acetaldehyde. The aminonitriles condensed into the sludge on the flask wall, thereby removing themselves from the gas mixture and bringing Le Chatelier’s principle into play to create more aminonitrile. Further, the aldehydes can react with one another in water to produce simple sugars, such as ribose (CH2OH(HCOH)3CHO), one of the building blocks of nucleic acids.

    When dissolved in water an aminonitrile molecule undergoes a double hydrolysis and loses an ammonia molecule to become an amino acid. Thus the aminonitriles shown in the above reactions become glycine and alanine respectively;

(NH2)HCH(CN) + 2H2O → NH3 + (NH2)HCH(COOH)

(NH2)CH3CH(CN) + 2H2O → NH3 + (NH2)CH3CH(COOH).

Miller found substantial amounts of those two, the simplest of the amino acids, in his experiment.

    Once Miller had verified the Oparin-Haldane hypothesis, showing that lightning blasting Earth’s primordial atmosphere would have created the complex organic molecules necessary for life, other scientists repeated his experiment. Some of those experimenters altered the experiment, trying other energy sources to see whether they could activate the chemical reactions. Ultraviolet light worked well in the experiment; and on the primitive Earth, with no free oxygen in the atmosphere to form an ozone layer, ultraviolet radiation from the sun reached the ground at full strength, so we can infer that ultraviolet radiation instigated the creation of organic compounds and did so faster than lightning did. Over millions of years the oceans thus became a thin organic broth containing all of the compounds necessary for the creation of life.

    Of course, creating an ocean of broth containing compounds necessary for life and creating a living cell, however primitive, gives us two separate problems. Solving the second of those problems necessitates the solving of the first, certainly, but we must do more to solve the second problem. Specifically, we must lay out a set of natural processes that will assemble the organic molecules into structures that then become a living cell. Those processes must, of course, conform to the law of entropy. Future essays will describe those processes as biochemists currently understand them.

    We have actually gone down this road before. When Isaac Newton derived his law of gravity from analyzing the motions of the planets, he understood that the planets would also affect each other over time through their mutual gravitational interactions. That understanding raised questions about the stability of the solar system, but the mathematics needed to address those questions goes beyond what Newton had developed for the Principia. Newton figured that if he couldn’t solve the problem, then nobody could, with one exception. He assumed that when the solar system starts to destabilize, God dithers the planets back into their proper orbits. Like the Intelligent Design enthusiasts, Newton used presumed irreducible complexity as a proof of the existence of God.

    But in the succeeding centuries mathematicians and astronomers have developed the mathematical means to do what Newton couldn’t. The calculations of celestial mechanics have become so accurate and so precise that astronomers have used the difference between the calculated and observed positions of Uranus to discover Neptune and have used the difference between the calculated and observed positions of Mercury to verify the theory of General Relativity. They still haven’t answered the question of the stability of the solar system, but the work continues to progress, with no appeals to the supernatural.

