From Eternity to Here: The Quest for the Ultimate Theory of Time


by Sean M. Carroll


  How long does it take to generate that much entropy by converting useful solar energy into useless radiated heat? The answer, once again plugging in the temperature of the Sun and so forth, is: about 1 year. Every year, if we were really efficient, we could take an undifferentiated mass as large as the entire biosphere and arrange it in a configuration with as small an entropy as we can imagine. In reality, life has evolved over billions of years, and the total entropy of the “Sun + Earth (including life) + escaping radiation” system has increased by quite a bit. So the Second Law is perfectly consistent with life as we know it—not that you were ever in doubt.

  LIFE IN MOTION

  It’s good to know that life doesn’t violate the Second Law of Thermodynamics. But it would also be nice to have a well-grounded understanding of what “life” actually means. Scientists haven’t yet agreed on a single definition, but there are a number of features that are often associated with living organisms: complexity, organization, metabolism, information processing, reproduction, response to stimuli, aging. It’s difficult to formulate a set of criteria that clearly separates living beings—algae, earthworms, house cats—from complex nonliving objects—forest fires, galaxies, personal computers. In the meantime, we are able to analyze some of life’s salient features, without drawing a clear distinction between their appearance in living and nonliving contexts.

  One famous attempt to grapple with the concept of life from a physicist’s perspective was the short book What Is Life? written by none other than Erwin Schrödinger. Schrödinger was one of the inventors of quantum theory; it’s his equation that replaces Newton’s laws of motion as the dynamical description of the world when we move from classical mechanics to quantum mechanics. He also originated the Schrödinger’s Cat thought experiment to highlight the differences between our direct perceptions of the world and the formal structure of quantum theory.

  After the Nazis came to power, Schrödinger left Germany, but despite winning the Nobel Prize in 1933 he had difficulty in finding a permanent position elsewhere, largely because of his colorful personal life. (His wife Annemarie knew that he had mistresses, and she had lovers of her own; at the time Schrödinger was involved with Hilde March, wife of one of his assistants, who would eventually bear a child with him.) He ultimately settled in Ireland, where he helped establish an Institute for Advanced Studies in Dublin.

  In Ireland Schrödinger gave a series of public lectures, which were later published as What Is Life? He was interested in examining the phenomenon of life from the perspective of a physicist, and in particular an expert on quantum mechanics and statistical mechanics. Perhaps the most remarkable thing about the book is Schrödinger’s deduction that the stability of genetic information over time is best explained by positing the existence of some sort of “aperiodic crystal” that stored the information in its chemical structure. This insight helped inspire Francis Crick to leave physics in favor of molecular biology, eventually leading to his discovery with James Watson of the double-helix structure of DNA.157

  But Schrödinger also mused on how to define “life.” He made a specific proposal in that direction, which comes across as somewhat casual and offhand, and perhaps hasn’t been taken as seriously as it might have been:

  What is the characteristic feature of life? When is a piece of matter said to be alive? When it goes on ‘doing something’, exchanging material with its environment, and so forth, and that for a much longer period than we would expect an inanimate piece of matter to ‘keep going’ under similar circumstances.158

  Admittedly, this is a bit vague; what exactly does it mean to “keep going,” how long should we “expect” it to happen, and what counts as “similar circumstances”? Furthermore, there’s nothing in this definition about organization, complexity, information processing, or any of that.

  Nevertheless, Schrödinger’s idea captures something important about what distinguishes life from non-life. In the back of his mind, he was certainly thinking of Clausius’s version of the Second Law: objects in thermal contact evolve toward a common temperature (thermal equilibrium). If we put an ice cube in a glass of warm water, the ice cube melts fairly quickly. Even if the two objects are made of very different substances—say, if we put a plastic “ice cube” in a glass of water—they will still come to the same temperature. More generally, nonliving physical objects tend to wind down and come to rest. A rock may roll down a hill during an avalanche, but before too long it will reach the bottom, dissipate energy through the creation of noise and heat, and come to a complete halt.

  Schrödinger’s point is simply that, for living organisms, this process of coming to rest can take much longer, or even be put off indefinitely. Imagine that, instead of an ice cube, we put a goldfish into our glass of water. Unlike the ice cube (whether water or plastic), the goldfish will not simply equilibrate with the water—at least, not within a few minutes or even hours. It will stay alive, doing something, swimming, exchanging material with its environment. If it’s put into a lake or a fish tank where food is available, it will keep going for much longer.

  And that, suggests Schrödinger, is the essence of life: staving off the natural tendency toward equilibration with one’s surroundings. At first glance, most of the features we commonly associate with life are nowhere to be found in this definition. But if we start thinking about why organisms are able to keep doing something long after nonliving things would wind down—why the goldfish is still swimming long after the ice cube would have melted—we are immediately drawn to the complexity of the organism and its capacity for processing information. The outward sign of life is the ability of an organism to keep going for a long time, but the mechanism behind that ability is a subtle interplay between numerous levels of hierarchical structure.

  We would like to be a little more specific than that. It’s nice to say, “living beings are things that keep going for longer than we would otherwise expect, and the reason they can keep going is because they’re complex,” but surely there is more to the story. Unfortunately, it’s not a simple story, nor one that scientists understand very well. Entropy certainly plays a big role in the nature of life, but there are important aspects that it doesn’t capture. Entropy characterizes individual states at a single moment in time, but the salient features of life involve processes that evolve through time. By itself, the concept of entropy has only very crude implications for evolution through time: It tends to go up or stay the same, not go down. The Second Law says nothing about how fast entropy will increase, or the particular methods by which entropy will grow—it’s all about Being, not Becoming.159

  Nevertheless, even without aspiring to answer all possible questions about the meaning of “life,” there is one concept that undoubtedly plays an important role: free energy. Schrödinger glossed over this idea in the first edition of What Is Life?, but in subsequent printings he added a note expressing his regret for not giving it greater prominence. The idea of free energy helps to tie together entropy, the Second Law, Maxwell’s Demon, and the ability of living organisms to keep going longer than nonliving objects.

  FREE ENERGY, NOT FREE BEER

  The field of biological physics has witnessed a dramatic rise in popularity in recent years. That’s undoubtedly a good thing—biology is important, and physics is important, and there are a great number of interesting problems at the interface of the two fields. But it’s also no surprise that the field lay relatively fallow for as long as it did. If you pick up an introductory physics textbook and compare it with a biological physics text, you’ll notice a pronounced shift in vocabulary.160 Conventional introductory physics books are filled with words like force and momentum and conservation, while biophysics books feature words like entropy and information and dissipation.

  This difference in terminology reflects an underlying difference in philosophy. Ever since Galileo first encouraged us to ignore air resistance when thinking about how objects fall in a gravitational field, physics has traditionally gone to great lengths to minimize friction, dissipation, noise, and anything else that would detract from the unimpeded manifestation of simple microscopic dynamical laws. In biological physics, we can’t do that; once you start ignoring friction, you ignore life itself. Indeed, that’s an alternative definition worth contemplating: Life is organized friction.

  But, you are thinking, that doesn’t sound right at all. Life is all about maintaining structure and organization, whereas friction creates entropy and disorder. In fact, both perspectives capture some of the underlying truth. What life does is to create entropy somewhere, in order to maintain structure and organization somewhere else. That’s the lesson of Maxwell’s Demon.

  Let’s examine what that might mean. Back when we first talked about the Second Law in Chapter Two, we introduced the distinction between “useful” and “useless” energy: Useful energy can be converted into some kind of work, while useless energy is useless. One of the contributions of Josiah Willard Gibbs was to formalize these concepts, by introducing the concept of “free energy.” Schrödinger didn’t use that term in his lectures because he worried that the connotations were confusing: The energy isn’t really “free” in the sense that you can get it for nothing; it’s “free” in the sense that it’s available to be used for some purpose.161 (Think “free speech,” not “free beer,” as free-software guru Richard Stallman likes to say.) Gibbs realized that he could use the concept of entropy to cleanly divide the total amount of energy into the useful part, which he called “free,” and the useless part:162

  total energy = free energy + useless (high-entropy) energy.

  When a physical process creates entropy in a system with a fixed total amount of energy, it uses up free energy; once all the free energy is gone, we’ve reached equilibrium.
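In symbols, Gibbs’ decomposition can be sketched as follows. This is a standard textbook form using the Helmholtz free energy as an example; the text itself doesn’t commit to a particular version of free energy, so take this as an illustration rather than the book’s own equation:

```latex
% Helmholtz free energy: the portion of the total energy E
% available to do useful work at temperature T, given entropy S.
F = E - TS
% Rearranged to match the decomposition in the text:
% total energy = free energy + useless (high-entropy) energy
E = \underbrace{F}_{\text{free (useful)}} + \underbrace{TS}_{\text{useless (high-entropy)}}
```

The product of temperature and entropy, TS, is the “useless” share: as a process generates entropy at fixed total energy E, the TS term grows and the free energy F shrinks, until F is exhausted at equilibrium.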

  That’s one way of thinking about what living organisms do: They maintain order in their local environment (including their own bodies) by taking advantage of free energy, degrading it into useless energy. If we put a goldfish in an otherwise empty container of water, it can maintain its structure (far from equilibrium with its surroundings) for a lot longer than an ice cube can; but eventually it will die from starvation. But if we feed the goldfish, it can last for a much longer time even than that. From a physics point of view, food is simply a supply of free energy, which a living organism can take advantage of to power its metabolism.

  From this perspective, Maxwell’s Demon (along with his box of gas) serves as an illuminating paradigm for how life works. Consider a slightly more elaborate version of the Demon story. Let’s take the divided box of gas and embed it in an “environment,” which we model by an arbitrarily large collection of stuff at a constant temperature—what physicists call a “heat bath.” (The point is that the environment is so large that its own temperature won’t be affected by interactions with the smaller system in which we are interested, in this case the box of gas.) Even though the molecules of gas stay inside the walls of the box, thermal energy can pass in and out; therefore, even if the Demon were to segregate the gas effectively into one cool half and one hot half, the temperature would immediately begin to even out through interactions with the surrounding environment.

  We imagine that the Demon would really like to keep its particular box far from equilibrium—it wants to do its best to keep the left side of the box at a high temperature and the right side at a low temperature. (Note that we have turned the Demon into a protagonist, rather than a villain.) So it has to do its traditional sorting of molecules according to their velocities, but now it has to keep doing that in perpetuity, or otherwise each side will equilibrate with its environment. By our previous discussion, the Demon can’t do its sorting without affecting the outside world; the process of erasing records will inevitably generate entropy. What the Demon requires, therefore, is a continual supply of free energy. It takes in the free energy (“food”), then takes advantage of that free energy to erase its records, generating entropy in the process and degrading the energy into uselessness; the useless energy is then discarded as heat (or whatever). With its newly erased notepad, the Demon is ready to keep its box of gas happily displaced from equilibrium, at least until it fills the notepad once more, and the cycle repeats itself.

  Figure 51: Maxwell’s Demon as a paradigm for life. The Demon maintains order—a separation of temperatures—in the box, against the influence of the environment, by processing information through the transformation of free energy into high-entropy heat.

  This charming vignette obviously fails to encapsulate everything we mean by the idea of “life,” but it succeeds in capturing an essential part of the bigger picture. Life strives to maintain order in the face of the demands of the Second Law, whether it’s the actual body of the organism, or its mental state, or the works of Ozymandias. And it does so in a specific way: by degrading free energy in the outside world in the cause of keeping itself far from thermal equilibrium. And that’s an operation, as we have seen, that is tightly connected to the idea of information processing. The Demon carries out its duty by converting free energy into information about the molecules in its box, which it then uses to keep the temperature in the box from evening out. At some very basic level, the purpose of life boils down to survival—the organism wants to preserve the smooth operation of its own complex structure.163 Free energy and information are the keys to making it happen.

  From the point of view of natural selection, there are many reasons why a complex, persistent structure might be adaptively favored: An eye, for example, is a complex structure that clearly contributes to the fitness of an organism. But increasingly complex structures require that we turn increasing amounts of free energy into heat, just to keep them intact and functioning. This picture of the interplay of energy and information therefore makes a prediction: The more complex an organism becomes, the more inefficient it will be at using energy for “work” purposes—simple mechanical operations like running and jumping, as opposed to the “upkeep” purposes of keeping the machinery in good working condition. And indeed, that’s true; in real biological organisms, the more complex ones are correspondingly less efficient in their use of energy.164

  COMPLEXITY AND TIME

  There are any number of fascinating topics at the interface of entropy, information, life, and the arrow of time that we don’t have a chance to discuss here: aging, evolution, mortality, thinking, consciousness, social structures, and countless more. Confronting all of them would make this a very different book, and our primary goals are elsewhere. But before returning to the relatively solid ground of conventional statistical mechanics, we can close this chapter with one more speculative thought, the kind that may hopefully be illuminated by new research in the near future.

  As the universe evolves, entropy increases. That is a very simple relationship: At early times, near the Big Bang, the entropy was very low, and it has grown ever since and will continue to grow into the future. But apart from entropy, we can also characterize (at least roughly) the state of the universe at any one moment in time in terms of its complexity, or by the converse of complexity, its simplicity. And the evolution of complexity with time isn’t nearly that straightforward.

  There are a number of different ways we could imagine quantifying the complexity of a physical situation, but there is one measure that has become widely used, known as the Kolmogorov complexity or algorithmic complexity.165 This idea formalizes our intuition that a simple situation is easy to describe, while a complex situation is hard to describe. The difficulty we have in describing a situation can be quantified by specifying the shortest possible computer program (in some given programming language) that would generate a description of that situation. The Kolmogorov complexity is just the length of that shortest possible computer program.

  Consider two strings of numbers, each a million characters long. One string consists of nothing but 8’s in every digit, while the other is some particular sequence of digits with no discernible pattern within them.

  The first of these is simple—it has a low Kolmogorov complexity. That’s because it can be generated by a program that just says, “Print the number 8 a million times.” The second string, however, is complex. Any program that prints it out has to be at least one million characters long, because the only way to describe this string is to literally specify every single digit. This definition becomes helpful when we consider numbers like pi or the square root of two—they look superficially complex, but there is actually a short program in either case that can calculate them to any desired accuracy, so their Kolmogorov complexity is quite low.
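The contrast between the two strings can be made concrete with a crude, computable proxy: the compressed length of each string. (Kolmogorov complexity itself is uncomputable in general, and compression is only a rough stand-in; the particular strings below are illustrative choices, not the book’s own examples.)

```python
import random
import zlib

N = 1_000_000

# String 1: a million 8's -- describable by a tiny "program".
simple = "8" * N

# String 2: a million digits with no discernible pattern
# (pseudorandom, seeded so the run is reproducible).
random.seed(0)
patternless = "".join(random.choice("0123456789") for _ in range(N))

# Compressed length as a rough stand-in for "length of the
# shortest description" of each string.
len_simple = len(zlib.compress(simple.encode()))
len_patternless = len(zlib.compress(patternless.encode()))

print(len_simple)       # tiny: a few kilobytes at most
print(len_patternless)  # hundreds of kilobytes: nearly incompressible
```

The repetitive string compresses almost to nothing, while the patternless one barely compresses at all — the same asymmetry that the Kolmogorov definition captures with “shortest possible program.”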

  The complexity of the early universe is low, because it’s very easy to describe. It was a hot, dense state of particles, very smooth over large scales, expanding at a certain rate, with some (fairly simple to specify) set of tiny perturbations in density from place to place. From a coarse-grained perspective, that’s the entire description of the early universe; there’s nothing else to say. Far in the future, the complexity of the universe will also be very low: It will just be empty space, with an increasingly dilute gruel of individual particles. But in between—like right now—things look extremely complicated. Even after coarse-graining, there is no simple way of expressing the hierarchical structures described by gas, dust, stars, galaxies, and clusters, much less all of the interesting things going on at much smaller scales, such as our ecosystem here on Earth.

  So while the entropy of the universe increases straightforwardly from low to high as time goes by, the complexity is more interesting: It goes from low, to relatively high, and then back down to low again. And the question is: Why? Or perhaps: What are the ramifications of this form of evolution? There are a whole host of questions we can think to ask. Under what general circumstances does complexity tend to rise and then fall again? Does such behavior inevitably accompany the evolution of entropy from low to high, or are other features of the underlying dynamics necessary? Is the emergence of complexity (or “life”) a generic feature of evolution in the presence of entropy gradients? What is the significance of the fact that our early universe was simple as well as low-entropy? How long can life survive as the universe relaxes into a simple, high-entropy future?166

 
