From Eternity to Here: The Quest for the Ultimate Theory of Time


by Sean M. Carroll


  I’m sure Baron Snow was quite the hit at Cambridge cocktail parties. (To be fair, he did later admit that even physicists didn’t really understand the Second Law.)

  Our modern definition of entropy was proposed by Austrian physicist Ludwig Boltzmann in 1877. But the concept of entropy, and its use in the Second Law of Thermodynamics, dates back to German physicist Rudolf Clausius in 1865. And the Second Law itself goes back even earlier—to French military engineer Nicolas Léonard Sadi Carnot in 1824. How in the world did Clausius use entropy in the Second Law without knowing its definition, and how did Carnot manage to formulate the Second Law without even using the concept of entropy at all?

  The nineteenth century was the heroic age of thermodynamics—the study of heat and its properties. The pioneers of thermodynamics studied the interplay between temperature, pressure, volume, and energy. Their interest was by no means abstract—this was the dawn of the industrial age, and much of their work was motivated by the desire to build better steam engines.

  Today physicists understand that heat is a form of energy and that the temperature of an object is simply a measure of the average kinetic energy (energy of motion) of the atoms in the object. But in 1800, scientists didn’t believe in atoms, and they didn’t understand energy very well. Carnot, whose pride was wounded by the fact that the English were ahead of the French in steam engine technology, set himself the task of understanding how efficient such an engine could possibly be—how much useful work could you do by burning a certain amount of fuel? He showed that there is a fundamental limit to such extraction. By taking an intellectual leap from real machines to idealized “heat engines,” Carnot demonstrated there was a best possible engine, which got the most work out of a given amount of fuel operating at a given temperature. The trick, unsurprisingly, was to minimize the production of waste heat. We might think of heat as useful in warming our houses during the winter, but it doesn’t help in doing what physicists think of as “work”—getting something like a piston or a flywheel to move from place to place. What Carnot realized was that even the most efficient engine possible is not perfect; some energy is lost along the way. In other words, the operation of a steam engine is an irreversible process.

  So Carnot appreciated that engines did something that could not be undone. It was Clausius, in 1850, who understood that this reflected a law of nature. He formulated his law as “heat does not spontaneously flow from cold bodies to warm ones.” Fill a balloon with hot water and immerse it in cold water. Everyone knows that the temperatures will tend to average out: The water in the balloon will cool down as the surrounding liquid warms up. The opposite never happens. Physical systems evolve toward a state of equilibrium—a quiescent configuration that is as uniform as possible, with equal temperatures in all components. From this insight, Clausius was able to re-derive Carnot’s results concerning steam engines.

  So what does Clausius’ law (heat never flows spontaneously from colder bodies to hotter ones) have to do with the Second Law (entropy never spontaneously decreases)? The answer is, they are the same law. In 1865 Clausius managed to reformulate his original maxim in terms of a new quantity, which he called the “entropy.” Take an object that is gradually cooling down—emitting heat into its surroundings. As this process happens, consider at every moment the amount of heat being lost, and divide it by the temperature of the object. The entropy is then the accumulated amount of this quantity (the heat lost divided by the temperature) over the course of the entire process. Clausius showed that the tendency of heat to flow from hot objects to cold ones was precisely equivalent to the claim that the entropy of a closed system would only ever go up, never go down. An equilibrium configuration is simply one in which the entropy has reached its maximum value, and has nowhere else to go; all the objects in contact are at the same temperature.
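Clausius's bookkeeping can be sketched in a few lines of arithmetic: heat Q leaving a body at temperature T lowers its entropy by Q/T, and the same heat arriving at a colder body raises that body's entropy by Q/T_cold. Because the cold temperature sits in the denominator, the gain always outweighs the loss. The numbers below are illustrative assumptions, not values from the text, and the temperatures are treated as constant for simplicity.

```python
# Illustrative sketch of Clausius's entropy bookkeeping, dS = dQ / T.
# Assumed values: Q joules of heat flow from a hot body to a cold one.

Q = 1000.0      # heat transferred, in joules (assumed)
T_hot = 350.0   # temperature of the hot body, in kelvin (assumed)
T_cold = 280.0  # temperature of the cold body, in kelvin (assumed)

dS_hot = -Q / T_hot    # the hot body loses heat, so its entropy falls
dS_cold = Q / T_cold   # the cold body gains heat, so its entropy rises
dS_total = dS_hot + dS_cold

print(f"hot body:  {dS_hot:+.3f} J/K")
print(f"cold body: {dS_cold:+.3f} J/K")
print(f"total:     {dS_total:+.3f} J/K")  # positive, since T_cold < T_hot
```

The total is positive precisely because the heat flowed "downhill"; reversing the flow would make it negative, which is what the Second Law forbids.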

  If that seems a bit abstract, there is a simple way of summing up this view of entropy: It measures the uselessness of a certain amount of energy.27 There is energy in a gallon of gasoline, and it’s useful—we can put it to work. The process of burning that gasoline to run an engine doesn’t change the total amount of energy; as long as we keep careful track of what happens, energy is always conserved.28 But along the way, that energy becomes increasingly useless. It turns into heat and noise, as well as the motion of the vehicle powered by that engine, but even that motion eventually slows down due to friction. And as energy transforms from useful to useless, its entropy increases all the while.

  The Second Law doesn’t imply that the entropy of a system can never decrease. We could invent a machine that separated out the milk from a cup of coffee, for example. The trick, though, is that we can only decrease the entropy of one thing by creating more entropy elsewhere. We human beings, and the machines that we might use to rearrange the milk and coffee, and the food and fuel each consume—all of these also have entropy, which will inevitably increase along the way. Physicists draw a distinction between open systems—objects that interact significantly with the outside world, exchanging entropy and energy—and closed systems—objects that are essentially isolated from external influences. In an open system, like the coffee and milk we put into our machine, entropy can certainly decrease. But in a closed system—say, the total system of coffee plus milk plus machine plus human operators plus fuel and so on—the entropy will always increase, or at best stay constant.

  THE RISE OF ATOMS

  The great insights into thermodynamics of Carnot, Clausius, and their colleagues all took place within a “phenomenological” framework. They knew the big picture but not the underlying mechanisms. In particular, they didn’t know about atoms, so they didn’t think of temperature and energy and entropy as properties of some microscopic substrate; they thought of each of them as real things, in and of themselves. It was common in those days to think of energy in particular as a form of fluid, which could flow from one body to another. The energy-fluid even had a name: “caloric.” And this level of understanding was perfectly adequate to formulating the laws of thermodynamics.

  But over the course of the nineteenth century, physicists gradually became convinced that the many substances we find in the world can all be understood as different arrangements of a fixed number of elementary constituents, known as “atoms.” (The physicists actually lagged behind the chemists in their acceptance of atomic theory.) It’s an old idea, dating back to Democritus and other ancient Greeks, but it began to catch on in the nineteenth century for a simple reason: The existence of atoms could explain many observed properties of chemical reactions, which otherwise were simply asserted. Scientists like it when a single simple idea can explain a wide variety of observed phenomena.

  These days it is elementary particles such as quarks and leptons that play the role of Democritus’s atoms, but the idea is the same. What a modern scientist calls an “atom” is the smallest possible unit of matter that still counts as a distinct chemical element, such as carbon or nitrogen. But we now understand that such atoms are not indivisible; they consist of electrons orbiting the atomic nucleus, and the nucleus is made of protons and neutrons, which in turn are made of different combinations of quarks. The search for rules obeyed by these elementary building blocks of matter is often called “fundamental” physics, although “elementary” physics would be more accurate (and arguably less self-aggrandizing). Henceforth, I’ll use atoms in the established nineteenth-century sense of chemical elements, not the ancient Greek sense of elementary particles.

  The fundamental laws of physics have a fascinating feature: Despite the fact that they govern the behavior of all the matter in the universe, you don’t need to know them to get through your everyday life. Indeed, you would be hard-pressed to discover them, merely on the basis of your immediate experiences. That’s because very large collections of particles obey distinct, autonomous rules of behavior, which don’t really depend on the smaller structures underneath. The underlying rules are referred to as “microscopic” or simply “fundamental,” while the separate rules that apply only to large systems are referred to as “macroscopic” or “emergent.” The behavior of temperature and heat and so forth can certainly be understood in terms of atoms: That’s the subject known as “statistical mechanics.” But it can equally well be understood without knowing anything whatsoever about atoms: That’s the phenomenological approach we’ve been discussing, known as “thermodynamics.” It is a common occurrence in physics that in complex, macroscopic systems, regular patterns emerge dynamically from underlying microscopic rules. Despite the way it is sometimes portrayed, there is no competition between fundamental physics and the study of emergent phenomena; both are fascinating and crucially important to our understanding of nature.

  One of the first physicists to advocate atomic theory was a Scotsman, James Clerk Maxwell, who was also responsible for the final formulation of the modern theory of electricity and magnetism. Maxwell, along with Boltzmann in Austria (and following in the footsteps of numerous others), used the idea of atoms to explain the behavior of gases, according to what was known as “kinetic theory.” Maxwell and Boltzmann were able to figure out that the atoms in a gas in a container, fixed at some temperature, should have a certain distribution of velocities—this many would be moving fast, that many would be moving slowly, and so on. These atoms would naturally keep banging against the walls of the container, exerting a tiny force each time they did so. And the accumulated impact of those tiny forces has a name: It is simply the pressure of the gas. In this way, kinetic theory explained features of gases in terms of simpler rules.
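The kinetic-theory picture of pressure can be sketched numerically: give each atom a velocity drawn from the Maxwell-Boltzmann distribution (each component is Gaussian with variance kT/m), and the accumulated wall impacts work out to P = N m ⟨v_x²⟩ / V, which reproduces the ideal gas law. The gas parameters below are rough assumed values (an argon-like atom in a one-liter box), and the atom count is tiny compared to a real gas; this is a sanity check of the relation, not a simulation from the text.

```python
# Sketch of kinetic theory: pressure as the accumulated impact of atoms
# on the container walls, compared against the ideal gas law P V = N k T.
import math
import random

random.seed(42)      # reproducible sample

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # temperature, K (assumed)
m = 6.63e-26         # mass of one atom, kg (roughly argon; assumed)
N = 100_000          # number of simulated atoms (far fewer than a real gas)
V = 1.0e-3           # container volume, m^3 (one liter; assumed)

# Maxwell-Boltzmann: each velocity component is Gaussian, variance kT/m.
sigma = math.sqrt(k_B * T / m)
vx = [random.gauss(0.0, sigma) for _ in range(N)]

mean_vx2 = sum(v * v for v in vx) / N
pressure = N * m * mean_vx2 / V      # kinetic-theory estimate
pressure_ideal = N * k_B * T / V     # ideal gas law, for comparison

print(f"kinetic-theory pressure: {pressure:.4e} Pa")
print(f"ideal-gas-law pressure:  {pressure_ideal:.4e} Pa")
```

The two numbers agree to within sampling noise, which is the point: the macroscopic quantity "pressure" is nothing but averaged microscopic collisions.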

  ENTROPY AND DISORDER

  But the great triumph of kinetic theory was its use by Boltzmann in formulating a microscopic understanding of entropy. Boltzmann realized that when we look at some macroscopic system, we certainly don’t keep track of the exact properties of every single atom. If we have a glass of water in front of us, and someone sneaks in and (say) switches some of the water molecules around without changing the overall temperature and density and so on, we would never notice. There are many different arrangements of particular atoms that are indistinguishable from our macroscopic perspective. And then he noticed that low-entropy objects are more delicate with respect to such rearrangements. If you have an egg, and start exchanging bits of the yolk with bits of the egg white, pretty soon you will notice. The situations that we characterize as “low-entropy” seem to be easily disturbed by rearranging the atoms within them, while “high-entropy” ones are more robust.

  Figure 6: Ludwig Boltzmann’s grave in the Zentralfriedhof, Vienna. The inscribed equation, S = k log W, is his formula for entropy in terms of the number of ways you can rearrange microscopic components of a system without changing its macroscopic appearance. (See Chapter Eight for details.)

  So Boltzmann took the concept of entropy, which had been defined by Clausius and others as a measure of the uselessness of energy, and redefined it in terms of atoms:

  Entropy is a measure of the number of particular microscopic arrangements of atoms that appear indistinguishable from a macroscopic perspective.29

  It would be difficult to overemphasize the importance of this insight. Before Boltzmann, entropy was a phenomenological thermodynamic concept, which followed its own rules (such as the Second Law). After Boltzmann, the behavior of entropy could be derived from deeper underlying principles. In particular, it suddenly makes perfect sense why entropy tends to increase:

  In an isolated system entropy tends to increase, because there are more ways to be high entropy than to be low entropy.

  At least, that formulation sounds like it makes perfect sense. In fact, it sneaks in a crucial assumption: that we start with a system that has a low entropy. If we start with a system that has a high entropy, we’ll be in equilibrium—nothing will happen at all. That word start sneaks in an asymmetry in time, by privileging earlier times over later ones. And this line of reasoning takes us all the way back to the low entropy of the Big Bang. For whatever reason, of the many ways we could arrange the constituents of the universe, at early times they were in a very special, low-entropy configuration.
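Boltzmann's counting argument can be made concrete with a standard textbook setup (not an example from the text): distribute N gas atoms between the two halves of a box, count the arrangements W that share the same left/right split, and take S = k log W. The even split has overwhelmingly more arrangements, which is exactly why "there are more ways to be high entropy than to be low entropy."

```python
# Sketch of Boltzmann's S = k log W for atoms in the two halves of a box.
# The macrostate is the left/right count; W is the number of microscopic
# arrangements (which atoms are where) that look the same macroscopically.
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def entropy(n_left, N):
    """Entropy of the macrostate with n_left of N atoms in the left half."""
    W = math.comb(N, n_left)   # number of indistinguishable arrangements
    return k_B * math.log(W)

N = 100
print(entropy(0, N))    # all atoms on one side: W = 1, so S = 0
print(entropy(25, N))   # lopsided split: more arrangements, higher S
print(entropy(50, N))   # even split: vastly more arrangements, maximal S
```

With only 100 atoms the even split already has about 10^29 arrangements; with a realistic 10^23 atoms the disparity becomes incomprehensibly larger, which is why entropy "tends" to increase so reliably.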

  This caveat aside, there is no question that Boltzmann’s formulation of the concept of entropy represented a great leap forward in our understanding of the arrow of time. This increase in understanding, however, came at a cost. Before Boltzmann, the Second Law was absolute—an ironclad law of nature. But the definition of entropy in terms of atoms comes with a stark implication: entropy doesn’t necessarily increase, even in a closed system; it is simply likely to increase. (Overwhelmingly likely, as we shall see, but still.) Given a box of gas evenly distributed in a high-entropy state, if we wait long enough, the random motion of the atoms will eventually lead them all to be on one side of the box, just for a moment—a “statistical fluctuation.” When you run the numbers, it turns out that the time you would have to wait before expecting to see such a fluctuation is much larger than the age of the universe. It’s not something we have to worry about, as a practical matter. But it’s there.
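The "overwhelmingly likely" can be put in rough numbers, again as an illustrative sketch rather than a calculation from the text: if each of N atoms independently lands in either half of the box, the chance of catching all of them on one side at a given instant is 2 × (1/2)^N, which collapses to absurdity long before N reaches the ~10^23 of a real gas.

```python
# Sketch of why spontaneous Second Law violations are negligible:
# probability that all N independent atoms occupy one half of the box.
import math

def log10_prob_all_one_side(N):
    """Base-10 log of the probability 2 * (1/2)**N, to avoid underflow."""
    return math.log10(2) - N * math.log10(2)

for N in (10, 100, 1000):
    print(f"N = {N:4d}: probability ~ 10^{log10_prob_all_one_side(N):.0f}")
```

Already at a thousand atoms the probability is around one in 10^300; that is the sense in which the statistical Second Law is, for all practical purposes, as good as absolute.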

  Some people didn’t like that. They wanted the Second Law of Thermodynamics, of all things, to be utterly inviolate, not just something that holds true most of the time. Boltzmann’s suggestion met with a great deal of controversy, but these days it is universally accepted.

  ENTROPY AND LIFE

  This is all fascinating stuff, at least to physicists. But the ramifications of these ideas go far beyond steam engines and cups of coffee. The arrow of time manifests itself in many different ways—our bodies change as we get older, we remember the past but not the future, effects always follow causes. It turns out that all of these phenomena can be traced back to the Second Law. Entropy, quite literally, makes life possible.

  The major source of energy for life on Earth is light from the Sun. As Clausius taught us, heat naturally flows from a hot object (the Sun) to a cooler object (the Earth). But if that were the end of the story, before too long the two objects would come into equilibrium with each other—they would attain the same temperature. In fact, that is just what would happen if the Sun filled our entire sky, rather than describing a disk about half a degree across. The result would be an unhappy world indeed. It would be completely inhospitable to the existence of life—not simply because the temperature was high, but because it would be static. Nothing would ever change in such an equilibrium world.

  In the real universe, the reason why our planet doesn’t heat up until it reaches the temperature of the Sun is because the Earth loses heat by radiating it out into space. And the only reason it can do that, Clausius would proudly note, is because space is much colder than Earth.30 It is because the Sun is a hot spot in a mostly cold sky that the Earth doesn’t just heat up, but rather can absorb the Sun’s energy, process it, and radiate it into space. Along the way, of course, entropy increases; a fixed amount of energy in the form of solar radiation has a much lower entropy than the same amount of energy in the form of the Earth’s radiation into space.
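The entropy disparity between incoming and outgoing radiation can be estimated with a back-of-the-envelope sketch: thermal radiation carrying energy E has entropy of order E/T (ignoring a factor of 4/3 for simplicity), so the same energy at Earth's low radiating temperature carries far more entropy than it did as sunlight. The temperatures below are rough assumed values, not figures from the text.

```python
# Rough estimate: entropy of a fixed amount of energy as incoming sunlight
# versus the same energy re-radiated by the Earth, using S ~ E / T.

E = 1.0            # one unit of energy (arbitrary)
T_sun = 5800.0     # effective temperature of sunlight, K (approximate)
T_earth = 255.0    # effective radiating temperature of Earth, K (approximate)

S_in = E / T_sun       # entropy of the incoming solar energy
S_out = E / T_earth    # entropy of the same energy leaving the Earth

print(f"entropy out / entropy in = {S_out / S_in:.1f}")  # roughly 20-fold
```

Processing each unit of solar energy thus increases the universe's entropy roughly twentyfold, and it is exactly that entropy budget that the biosphere spends to stay so far from equilibrium.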

  This process, in turn, explains why the biosphere of the Earth is not a static place.31 We receive energy from the Sun, but it doesn’t just heat us up until we reach equilibrium; it’s very low-entropy radiation, so we can make use of it and then release it as high-entropy radiation. All of which is possible only because the universe as a whole, and the Solar System in particular, have a relatively low entropy at the present time (and an even lower entropy in the past). If the universe were anywhere near thermal equilibrium, nothing would ever happen.

  Nothing good lasts forever. Our universe is a lively place because there is plenty of room for entropy to increase before we hit equilibrium and everything grinds to a halt. It’s not a foregone conclusion—entropy might be able to simply grow forever. Alternatively, entropy may reach a maximum value and stop. This scenario is known as the “heat death” of the universe and was contemplated as long ago as the 1850s, amidst all the exciting theoretical developments in thermodynamics. William Thomson, Lord Kelvin, was a British physicist and engineer who played an important role in laying the first transatlantic telegraph cable. But in his more reflective moments, he mused on the future of the universe:

  The result would inevitably be a state of universal rest and death, if the universe were finite and left to obey existing laws. But it is impossible to conceive a limit to the extent of matter in the universe; and therefore science points rather to an endless progress, through an endless space, of action involving the transformation of potential energy into palpable motion and hence into heat, than to a single finite mechanism, running down like a clock, and stopping for ever.32

  Here, Lord Kelvin has put his finger quite presciently on the major issue in these kinds of discussions, which we will revisit at length in this book: Is the capacity of the universe to increase in entropy finite or infinite? If it is finite, then the universe will eventually wind down to a heat death, once all useful energy has been converted to high-entropy useless forms of energy. But if the entropy can increase without bound, we are at least allowed to contemplate the possibility that the universe continues to grow and evolve forever, in one way or another.

  In a famous short story entitled simply “Entropy,” Thomas Pynchon had his characters apply the lessons of thermodynamics to their social milieu.

  “Nevertheless,” continued Callisto, “he found in entropy, or the measure of disorganization of a closed system, an adequate metaphor to apply to certain phenomena in his own world. He saw, for example, the younger generation responding to Madison Avenue with the same spleen his own had once reserved for Wall Street: and in American ‘consumerism’ discovered a similar tendency from the least to the most probable, from differentiation to sameness, from ordered individuality to a kind of chaos. He found himself, in short, restating Gibbs’ prediction in social terms, and envisioned a heat-death for his culture in which ideas, like heat-energy, would no longer be transferred, since each point in it would ultimately have the same quantity of energy; and intellectual motion would, accordingly, cease.”33

 
