From Eternity to Here: The Quest for the Ultimate Theory of Time


by Sean M. Carroll


  152 This scenario can be elaborated on further. Imagine that the box was embedded in a bath of thermal gas at some temperature T, and that the walls of the box conducted heat, so that the molecule inside was kept in thermal equilibrium with the gas outside. If we could continually renew our information about which side of the box the molecule was on, we could keep extracting energy from it, by cleverly inserting the piston on the appropriate side; after the molecule lost energy to the piston, it would gain the energy back from the thermal bath. What we’ve done is to construct a perpetual motion machine, powered only by our hypothetical limitless supply of information. (Which drives home the fact that information never just comes for free.) Szilárd could even quantify precisely how much energy could be extracted from a single bit of information: kT log 2, where k is Boltzmann’s constant.
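As a rough numerical check of Szilárd's bound, here is a minimal sketch in Python; the room-temperature value of T is an illustrative assumption, not anything fixed by the argument:

```python
import math

# Szilard's bound: the energy extractable from one bit of information
# held about a system at temperature T is E = k * T * ln(2).
k = 1.380649e-23  # Boltzmann's constant, in joules per kelvin
T = 300.0         # temperature in kelvin (room temperature, chosen for illustration)

energy_per_bit = k * T * math.log(2)
print(f"{energy_per_bit:.3e} J")  # roughly 2.87e-21 joules per bit
```

At everyday temperatures this is a minuscule amount of energy, which is why the information cost of computation went unnoticed for so long.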

  153 It’s interesting how, just as much of the pioneering work on thermodynamics in the early nineteenth century was carried out by practical-minded folks who were interested in building better steam engines, much of the pioneering work on information theory in the twentieth century has been carried out by practical-minded folks who were interested in building better communications systems and computers.

  154 We can go further than this. Just as Gibbs came up with a definition of entropy that referred to the probability that a system was in various different states, we can define the “information entropy” of a space of possible messages in terms of the probability that the message takes various forms. The formulas for the Gibbs entropy and the information entropy turn out to be identical, although the symbols in them have slightly different meanings.

  155 For recent overviews, see Morange (2008) or Regis (2009).

  156 The argument that follows comes from Bunn (2009), which was inspired by Styer (2008). See also Lineweaver and Egan (2008) for details and additional arguments.

  157 Crick (1990).

  158 Schrödinger (1944), 69.

  159 From Being to Becoming is the title of a popular book (1980) by Belgian Nobel Laureate Ilya Prigogine, who helped pioneer the study of “dissipative structures” and self-organizing systems in statistical mechanics. See also Prigogine (1955), Kauffman (1993), and Avery (2003).

  160 A good recent book is Nelson (2007).

  161 He would have been even more wary in modern times; a Google search on “free energy” returns a lot of links to perpetual-motion schemes, along with some resources on clean energy.

  162 Informally speaking, the concepts of “useful” and “useless” energy certainly predate Gibbs; his contribution was to attach specific formulas to the ideas, which were later elaborated on by German physicist Hermann von Helmholtz. In particular, what we are calling the “useless” energy is (in Helmholtz’s formulation) simply the temperature of the body times its entropy. The free energy is then the total internal energy of the body minus that quantity.

  163 In the 1950s, Claude Shannon built “The Ultimate Machine,” based on an idea by Marvin Minsky. In its resting state, the machine looked like a box with a single switch on one face. If you were to flip the switch, the box would buzz loudly. Then the lid would open and a hand would reach out, flipping the switch back to its original position, and retreating into the box, which became quiet once more. One possible moral: persistence can be a good in its own right.

  164 Specifically, more massive organisms—which typically have more moving parts and are correspondingly more complex—consume free energy at a higher rate per unit mass than less massive organisms. See, for example, Chaisson (2001).

  165 This and other quantitative measures of complexity are associated with the work of Andrey Kolmogorov, Ray Solomonoff, and Gregory Chaitin. For a discussion, see, for example, Gell-Mann (1994).

  166 For some thoughts on this particular question, see Dyson (1979) or Adams and Laughlin (1999).

  10. RECURRENT NIGHTMARES

  167 Nietzsche (2001), 194. What is it with all the demons, anyway? Between Pascal’s Demon, Maxwell’s Demon, and Nietzsche’s Demon, it’s beginning to look more like Dante’s Inferno than a science book around here. Earlier in The Gay Science (189), Nietzsche touches on physics explicitly, although in a somewhat different context: “We, however, want to become who we are—human beings who are new, unique, incomparable, who give themselves laws, who create themselves! To that end we must become the best students and discoverers of everything lawful and necessary in the world: we must become physicists in order to be creators in this sense—while hitherto all valuations and ideals have been built on ignorance of physics or in contradiction to it. So, long live physics! And even more, long live what compels us to it—our honesty!”

  168 Note that, if each cycle were truly a perfect copy of the previous cycles, you would have no memory of having experienced any of the earlier versions (since you didn’t have such a memory before, and it’s a perfect copy). It’s not clear how such a scenario would differ from one in which the cycle occurred only once.

  169 For more of the story, see Galison (2003). Poincaré’s paper is (1890).

  170 Another subtlety is that, while the system is guaranteed to return to its starting configuration, it is not guaranteed to attain every possible configuration. The idea that a sufficiently complicated system does visit every possible state is equivalent to the idea that the system is ergodic, which we discussed in Chapter Eight in the context of justifying Boltzmann’s approach to statistical mechanics. It’s true for some systems, but not for all systems, and not even for all interesting ones.

  171 It’s my book, so Pluto still counts.

  172 Roughly speaking, the recurrence time is given by the exponential of the maximum entropy of the system, in units of the typical time it takes for the system to evolve from one state to the next. (We are assuming some fixed definition of when two states are sufficiently different as to count as distinguishable.) Remember that the entropy is the logarithm of the number of states, and an exponential undoes a logarithm; in other words, the recurrence time is simply proportional to the total number of possible states the system can be in, which makes perfect sense if the system spends roughly equal amounts of time in each allowed state.
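The estimate in this note can be sketched in a few lines of Python. The entropy units (where S is the logarithm of the number of states) and the one-second step time are illustrative assumptions:

```python
import math

# Toy Poincare recurrence estimate, following the note: the number of
# distinguishable states is N = exp(S_max), and the recurrence time is
# roughly N times the time tau to evolve from one state to the next.
def recurrence_time(max_entropy, tau):
    """max_entropy is in units where S = ln(number of states)."""
    n_states = math.exp(max_entropy)
    return n_states * tau

# Even a modest S_max = 100 with tau = 1 second gives a recurrence time
# of about e^100, or roughly 2.7e43 seconds -- vastly longer than the
# age of the observable universe (about 4e17 seconds).
print(f"{recurrence_time(100, 1.0):.2e} s")
```

The point of the toy calculation is only the scaling: recurrence times grow exponentially with entropy, so for any macroscopic system they are absurdly long.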

  173 Poincaré (1893).

  174 Zermelo (1896a).

  175 Boltzmann (1896).

  176 Zermelo (1896b); Boltzmann (1897).

  177 Boltzmann (1897).

  178 “At least” three ways, because the human imagination is pretty clever. But there aren’t that many choices. Another one would be that the underlying laws of physics are intrinsically irreversible.

  179 Boltzmann (1896).

  180 We’re imagining that the spirit of the recurrence theorem is valid, not the letter of it. The proof of the recurrence theorem requires that the motions of particles be bounded—perhaps because they are planets moving in closed orbits around the Sun, or because they are molecules confined to a box of gas. Neither case really applies to the universe, nor is anyone suggesting that it might. If the universe consisted of a finite number of particles moving in an infinite space, we would expect some of them to simply move away forever, and recurrences would not happen. However, if there are an infinite number of particles in an infinite space, we can have a fixed finite average density—the number of particles per (for example) cubic light-year. In that case, fluctuations of the form illustrated here are sure to occur, which look for all the world like Poincaré’s recurrences.

  181 Boltzmann (1897). He made a very similar suggestion in a slightly earlier paper (1895), where he attributed it to his “old assistant, Dr. Schuetz.” It is unclear whether this attribution should be interpreted as a generous sharing of credit, or a precautionary laying of blame.

  182 Note that Boltzmann’s reasoning actually goes past the straightforward implications of the recurrence theorem. The crucial point now is not that any particular low-entropy starting state will be repeated infinitely often in the future—although that’s true—but that anomalously low-entropy states of all sorts will eventually appear as random fluctuations.

  183 Epicurus is associated with Epicureanism, a philosophical precursor to utilitarianism. In the popular imagination, “epicurean” conjures up visions of hedonism and sensual pleasure, especially where food and drink are concerned; while Epicurus himself took pleasure as the ultimate good, his notion of “pleasure” was closer to “curling up with a good book” than “partying late into the night” or “gorging yourself to excess.”

  Much of the original writing by the Atomists has been lost; Epicurus, in particular, wrote a thirty-seven-volume treatise on nature, but his only surviving writings are three letters reproduced in Diogenes Laertius’s Lives of the Philosophers. The atheistic implications of their materialist approach to philosophy were not always popular with later generations.

  184 Lucretius (1995), 53.

  185 A careful quantitative understanding of the likelihood of different kinds of fluctuations was achieved only relatively recently, in the form of something called the “fluctuation theorem” (Evans and Searles, 2002). But the basic idea has been understood for a long time. The probability that the entropy of a system will take a random jump downward is proportional to the exponential of minus the change in entropy. That’s a fancy way of saying that small fluctuations are common, and large fluctuations are extremely rare.
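The scaling described in this note can be sketched directly. The entropy change is measured in units of Boltzmann's constant, and the specific values are illustrative:

```python
import math

# Relative probability of a downward entropy fluctuation of size dS
# (in units of Boltzmann's constant): P is proportional to exp(-dS).
def fluctuation_weight(delta_s):
    return math.exp(-delta_s)

# Small fluctuations are common; large ones are extremely rare.
print(fluctuation_weight(1))    # about 0.37
print(fluctuation_weight(10))   # about 4.5e-05
print(fluctuation_weight(100))  # about 3.7e-44
```

Since the entropy of any macroscopic object is an enormous number in these units, a fluctuation that noticeably lowers it is suppressed by an exponential of an enormous number.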

  186 It’s tempting to think, But it’s incredibly unlikely for a featureless collection of gas molecules in equilibrium to fluctuate into a pumpkin pie, while it’s not that hard to imagine a pie being created in a world with a baker and so forth. True enough. But as hard as it is to fluctuate a pie all by itself, it’s much more difficult to fluctuate a baker and a pumpkin patch. Most pies that come to being under these assumptions—an eternal universe, fluctuating around equilibrium—will be all by themselves in the universe. The fact that the world with which we are familiar doesn’t seem to work that way is evidence that something about these assumptions is not right.

  187 Eddington (1931). Note that what really matters here is not so much the likelihood of significant dips in the entropy of the entire universe, but the conditional question: “Given that one subset of the universe has experienced a dip in entropy, what should we expect of the rest of the universe?” As long as the subset in question is coupled weakly to everything else, the answer is what you would expect, and what Eddington indicated: The entropy of the rest of the universe is likely to be as high as ever. For discussions (at a highly mathematical level) in the context of classical statistical mechanics, see the books by Dembo and Zeitouni (1998) or Ellis (2005). For related issues in the context of quantum mechanics, see Linden et al. (2008).

  188 Albrecht and Sorbo (2004).

  189 Feynman, Leighton, and Sands (1970).

  190 This discussion draws from Hartle and Srednicki (2007). See also Olum (2002), Neal (2006), Page (2008), Garriga and Vilenkin (2008), and Bousso, Freivogel, and Yang (2008).

  191 There are a couple of closely related questions that arise when we start comparing different kinds of observers in a very large universe. One is the “simulation argument” (Bostrom 2003), which says that it should be very easy for an advanced civilization to make a powerful computer that simulates a huge number of intelligent beings, and therefore we are most likely to be living in a computer simulation. Another is the “doomsday argument” (Leslie, 1990; Gott, 1993), which says that the human race is unlikely to last for a very long time, because if it did, those of us (now) who live in the early days of human civilization would be very atypical observers. These are very provocative arguments; their persuasive value is left up to the judgment of the reader.

  192 See Neal (2006), who calls this approach “Full Non-indexical Conditioning.” “Conditioning” means that we make predictions by asking what the rest of the universe looks like when certain conditions hold (e.g., that we are an observer with certain properties); “full” means that we condition over every single piece of data we have, not only coarse features like “we are an observer”; and “non-indexical” means that we consider absolutely every instance in which the conditions are met, not just one particular instance that we label as “us.”

  193 Boltzmann’s travelogue is reprinted in Cercignani (1998), 231. For more details of his life and death, see that book as well as Lindley (2001).

  11. QUANTUM TIME

  194 Quoted in von Baeyer (2003), 12-13.

  195 This is not to say that the ancient Buddhists weren’t wise, but their wisdom was not based on the failure of classical determinism at atomic scales, nor did they anticipate modern physics in any meaningful way, other than the inevitable random similarities of word choice when talking about grand cosmic concepts. (I once heard a lecture claiming that the basic ideas of primordial nucleosynthesis were prefigured in the Torah; if you stretch your definitions enough, eerie similarities are everywhere.) It is disrespectful to both ancient philosophers and modern physicists to ignore the real differences in their goals and methods in an attempt to create tangible connections out of superficial resemblances.

  196 More recently, dogs have also been recruited for the cause. See Orzel (2009).

  197 We’re still glossing over one technicality—the truth is actually one step more complex (as it were) than this description would have you believe, but it’s not a complication that is necessary for our present purposes. Quantum amplitudes are really complex numbers, which means they are combinations of two numbers: a real number, plus an imaginary number. (Imaginary numbers are what you get when you take the square root of a negative real number; so “imaginary two” is the square root of minus four, and so on.) A complex number looks like a + bi, where a and b are real numbers and “i” is the square root of minus one. If the amplitude associated with a certain option is a + bi, the probability it corresponds to is simply a² + b², which is guaranteed to be greater than or equal to zero. You will have to trust me that this extra apparatus is extremely important to the workings of quantum mechanics—either that, or start learning some of the mathematical details of the theory. (I can think of less rewarding ways of spending your time, actually.)
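The rule in this note can be checked numerically in a couple of lines; the particular values of a and b are illustrative:

```python
# For a quantum amplitude a + bi, the corresponding probability is
# a^2 + b^2, the squared modulus of the complex number.
amplitude = complex(0.6, 0.8)                           # a = 0.6, b = 0.8
probability = amplitude.real ** 2 + amplitude.imag ** 2
print(probability)  # 0.6^2 + 0.8^2, which is approximately 1.0
```

Because a² + b² is a sum of squares, the probability can never come out negative, no matter what complex amplitude you start with.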

  198 The fact that any particular sequence of events assigns positive or negative amplitudes to the two final possibilities is an assumption we are making for the purposes of our thought experiment, not a deep feature of the rules of quantum mechanics. In any real-world problem, details of the system being considered will determine what precisely the amplitudes are, but we’re not getting our hands quite that dirty at the moment. Note also that the particular amplitudes in these examples take on the numerical values of plus or minus 0.7071—that’s the number which, when squared, gives you 0.5.
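The arithmetic behind the 0.7071 in this note, and the way opposite-sign amplitudes cancel, can be sketched as follows (the two-path setup is the thought experiment's, not any particular physical system):

```python
import math

# The amplitudes of plus or minus 0.7071 are 1/sqrt(2): each one,
# squared, gives probability 0.5.
a = 1 / math.sqrt(2)
print(a ** 2)          # approximately 0.5

# When two paths lead to the same final outcome, the amplitudes add
# *before* squaring. Opposite signs cancel completely:
print((a + (-a)) ** 2)  # 0.0 -- destructive interference
```

This is the crucial difference from classical probability, where the two paths would simply add their probabilities and give 1, not 0.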

  199 At a workshop attended by expert researchers in quantum mechanics in 1997, Max Tegmark took an admittedly highly unscientific poll of the participants’ favored interpretation of quantum mechanics (Tegmark, 1998). The Copenhagen interpretation came in first with thirteen votes, while the many-worlds interpretation came in second with eight. Another nine votes were scattered among other alternatives. Most interesting, eighteen votes were cast for “none of the above/undecided.” And these are the experts.

  200 So what does happen if we hook up a surveillance camera but then don’t examine the tapes? It doesn’t matter whether we look at the tapes or not; the camera still counts as an observation, so there will be a chance to observe Ms. Kitty under the table. In the Copenhagen interpretation, we would say, “The camera is a classical measuring device whose influence collapses the wave function.” In the many-worlds interpretation, as we’ll see, the explanation is “the wave function of the camera becomes entangled with the wave function of the cat, so the alternative histories decohere.”

  201 Many people have thought about changing the rules of quantum mechanics so that this is no longer the case; they have proposed what are called “hidden variable theories” that go beyond the standard quantum mechanical framework. In 1964, theoretical physicist John Bell proved a remarkable theorem: No local theory of hidden variables can possibly reproduce the predictions of quantum mechanics. This hasn’t stopped people from investigating nonlocal theories—ones where distant events can affect each other instantaneously. But they haven’t really caught on; the vast majority of modern physicists believe that quantum mechanics is simply correct, even if we don’t yet know how to interpret it.

  202 There is a slightly more powerful statement we can actually make. In classical mechanics, the state is specified by both position and velocity, so you might guess that the quantum wave function assigns probabilities to every possible combination of position and velocity. But that’s not how it works. If you specify the amplitude for every possible position, you are done—you’ve completely determined the entire quantum state. So what happened to the velocity? It turns out that you can write the same wave function in terms of an amplitude for every possible velocity, completely leaving position out of the description. These are not two different states; they are just two different ways of writing exactly the same state. Indeed, there is a cookbook recipe for translating between the two choices, known in the trade as a “Fourier transform.” Given the amplitude for every possible position, you can do a Fourier transform to determine the amplitude for any possible velocity, and vice versa. In particular, if the wave function is an eigenstate, concentrated on one precise value of position (or velocity), its Fourier transform will be completely spread out over all possible velocities (or positions).
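The last claim in this note can be demonstrated with a discrete Fourier transform on a toy grid; the grid of eight points and the unitary normalization are illustrative choices:

```python
import cmath
import math

# A wave function perfectly concentrated at one position, on a toy
# discrete grid of 8 points.
n = 8
position_amplitudes = [0.0] * n
position_amplitudes[3] = 1.0   # an "eigenstate" of position

def fourier_transform(amps):
    """Discrete Fourier transform with unitary normalization."""
    size = len(amps)
    return [
        sum(amps[x] * cmath.exp(-2j * math.pi * k * x / size) for x in range(size))
        / math.sqrt(size)
        for k in range(size)
    ]

velocity_amplitudes = fourier_transform(position_amplitudes)
probabilities = [abs(amp) ** 2 for amp in velocity_amplitudes]

# Each velocity gets probability 1/8: the transform of a concentrated
# state is completely spread out, as the note says.
print(probabilities)
```

Running the transform the other way (with the opposite sign in the exponent) recovers the original concentrated state, which is the sense in which the two descriptions carry exactly the same information.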

  203 Einstein, Podolsky, and Rosen (1935).

  204 Everett (1957). For discussion from various viewpoints, see Deutsch (1997), Albert (1992), or Ouellette (2007).

  205 Note how crucial entanglement is to this story. If there were no entanglement, the outside world would still exist, but the alternatives available to Miss Kitty would be completely independent of what was going on out there. In that case, it would be perfectly okay to attribute a wave function to Miss Kitty all by herself. And thank goodness; that’s the only reason we are able to apply the formalism of quantum mechanics to individual atoms and other simple isolated systems. Not everything is entangled with everything else, or it would be impossible to say much about any particular subsystem of the world.

 
