The Fabric of the Cosmos: Space, Time, and the Texture of Reality

by Brian Greene


  More generally, a set of physical laws provides us with an algorithm for evolving an initial state of a physical system at time t₀ to some other time t + t₀. Concretely, this algorithm can be viewed as a map U(t) which takes as input S(t₀) and produces S(t + t₀), that is: S(t + t₀) = U(t)S(t₀). We say that the laws giving rise to U(t) are time-reversal symmetric if there is a map T satisfying U(−t) = T⁻¹U(t)T. In English, this equation says that by a suitable manipulation of the state of the physical system at one moment (accomplished by T), evolution by an amount t forward in time according to the laws of the theory (accomplished by U(t)) is equivalent to having evolved the system t units of time backward in time (denoted by U(−t)). For instance, if we specify the state of a system of particles at one moment by their positions and velocities, then T would keep all particle positions fixed and reverse all velocities. Evolving such a configuration of particles forward in time by an amount t is equivalent to having evolved the original configuration of particles backward in time by an amount t. (The factor of T⁻¹ undoes the velocity reversal so that, at the end, not only are the particle positions what they would have been t units of time previously, but so are their velocities.)
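  As a concrete check of U(−t) = T⁻¹U(t)T, here is a minimal numerical sketch in Python. Everything in it (the harmonic force, the step size, the function names) is an illustrative choice of mine rather than anything prescribed by the discussion above; velocity Verlet is used because that integrator is itself time-reversal symmetric up to floating-point roundoff.

    import numpy as np

    def accel(x):
        # Illustrative conservative force: a unit-mass harmonic
        # oscillator, a = -x. Any position-dependent force would do.
        return -x

    def step(x, v, dt):
        # One velocity-Verlet step; the scheme is reversible, so the
        # symmetry survives discretization (up to roundoff).
        a = accel(x)
        x_new = x + v * dt + 0.5 * a * dt**2
        v_new = v + 0.5 * (a + accel(x_new)) * dt
        return x_new, v_new

    def U(x, v, t, dt):
        # Evolve the state (x, v) by time t; a negative dt evolves
        # the system backward in time, i.e., implements U(-t).
        for _ in range(int(round(abs(t / dt)))):
            x, v = step(x, v, dt)
        return x, v

    x0, v0 = np.array([1.0]), np.array([0.5])   # the state S(t0)

    # Left side: U(-t) S(t0), evolving backward by t = 3.
    xb, vb = U(x0, v0, 3.0, dt=-1e-3)

    # Right side: T^{-1} U(t) T S(t0), where T reverses velocities.
    x1, v1 = U(x0, -v0, 3.0, dt=1e-3)   # apply T, then evolve forward
    x1, v1 = x1, -v1                    # undo the reversal with T^{-1}

    print(np.allclose(xb, x1), np.allclose(vb, v1))   # True True

Running it prints True True: evolving backward by t lands on exactly the state obtained by reversing velocities, evolving forward by t, and reversing velocities again.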

  For certain sets of laws, the T operation is more complicated than it is for Newtonian mechanics. For example, if we study the motion of charged particles in the presence of an electromagnetic field, the reversal of particle velocities would be inadequate for the equations to yield an evolution in which the particles retrace their steps. Instead, the direction of the magnetic field must also be reversed. (This is required so that the v × B term in the Lorentz force law equation remains unchanged.) Thus, in this case, the T operation encompasses both of these transformations. The fact that we have to do more than just reverse all particle velocities has no impact on any of the discussion that follows in the text. All that matters is that particle motion in one direction is just as consistent with the physical laws as particle motion in the reverse direction. That we have to reverse any magnetic fields that happen to be present to accomplish this is of no particular relevance.
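  A one-line check, in my notation rather than the author's: under t → −t the velocity flips sign while the acceleration, a second derivative in time, does not. The Lorentz force law and the effect of the combined reversal read

    \[
      m\,\mathbf{a} \;=\; q\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right),
      \qquad
      q\,(-\mathbf{v})\times(-\mathbf{B}) \;=\; q\,\mathbf{v}\times\mathbf{B},
    \]

so reversing both v and B leaves the equation of motion, and hence the retraced trajectory, intact, whereas reversing v alone would flip the sign of the magnetic force.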

  Where things get more subtle is the weak nuclear interactions. The weak interactions are described by a particular quantum field theory (discussed briefly in Chapter 9), and a general theorem shows that quantum field theories (so long as they are local, unitary, and Lorentz invariant—which are the ones of interest) are always symmetric under the combined operations of charge conjugation C (which replaces particles by their antiparticles), parity P (which inverts positions through the origin), and a bare-bones time-reversal operation T (which replaces t by −t). So, we could define a T operation to be the product CPT, but if T invariance absolutely requires the CP operation to be included, T would no longer be simply interpreted as particles retracing their steps (since, for example, particle identities would be changed by such a T—particles would be replaced by their antiparticles—and hence it would not be the original particles retracing their steps). As it turns out, there are some exotic experimental situations in which we are forced into this corner. There are certain particle species (K-mesons, B-mesons) whose repertoire of behaviors is CPT invariant but is not invariant under T alone. This was established indirectly in 1964 by James Cronin, Val Fitch, and their collaborators (for which Cronin and Fitch received the 1980 Nobel Prize) by showing that the K-mesons violated CP symmetry (ensuring that they must violate T symmetry in order not to violate CPT). More recently, T symmetry violation has been directly established by the CPLEAR experiment at CERN and the KTeV experiment at Fermilab. Roughly speaking, these experiments show that if you were presented with a film of the recorded processes involving these meson particles, you'd be able to determine whether the film was being projected in the correct forward time direction, or in reverse. In other words, these particular particles can distinguish between past and future. What remains unclear, though, is whether this has any relevance for the arrow of time we experience in everyday contexts. After all, these are exotic particles that can be produced for fleeting moments in high-energy collisions, but they are not a constituent of familiar material objects. To many physicists, including me, it seems unlikely that the failure of time-reversal invariance evidenced by these particles plays a role in answering the puzzle of time's arrow, so we shall not discuss this exceptional example further. But the truth is that no one knows for sure.

  3. I sometimes find that there is reluctance to accept the theoretical assertion that the eggshell pieces would really fuse back together into a pristine, uncracked shell. But the time-reversal symmetry of nature's laws, as elaborated with greater precision in the previous endnote, ensures that this is what would happen. Microscopically, the cracking of an egg is a physical process involving the various molecules that make up the shell. Cracks appear and the shell breaks apart because groups of molecules are forced to separate by the impact the egg experiences. If those molecular motions were to take place in reverse, the molecules would join back together, re-fusing the shell into its previous form.

  4. To keep the focus on modern ways of thinking about these ideas, I am skipping over some very interesting history. Boltzmann's own thinking on the subject of entropy went through significant refinements during the 1870s and 1880s, during which time interactions and communications with physicists such as James Clerk Maxwell, Lord Kelvin, Josef Loschmidt, Josiah Willard Gibbs, Henri Poincaré, S. H. Burbury, and Ernest Zermelo were instrumental. In fact, Boltzmann initially thought he could prove that entropy would always and absolutely be nondecreasing for an isolated physical system, and not that it was merely highly unlikely for such entropy reduction to take place. But objections raised by these and other physicists subsequently led Boltzmann to emphasize the statistical/probabilistic approach to the subject, the one that is still in use today.

  5. I am imagining that we are using the Modern Library Classics edition of War and Peace, translated by Constance Garnett, with 1,386 text pages.

  6. The mathematically inclined reader should note that because the numbers can get so large, entropy is actually defined as the logarithm of the number of possible arrangements, a detail that won't concern us here. However, as a point of principle, this is important because it is very convenient for entropy to be a so-called extensive quantity, which means that if you bring two systems together, the entropy of their union is the sum of their individual entropies. This holds true only for the logarithmic form of entropy, because the number of arrangements in such a situation is given by the product of the individual arrangements, so the logarithm of the number of arrangements is additive.
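  In symbols (a standard formulation; the proportionality constant, Boltzmann's constant k_B, is a convention the note leaves implicit): if system 1 admits Ω₁ arrangements and system 2 admits Ω₂, then

    \[
      S = k_B \ln \Omega,
      \qquad
      S_{1+2} = k_B \ln(\Omega_1 \Omega_2)
              = k_B \ln \Omega_1 + k_B \ln \Omega_2
              = S_1 + S_2,
    \]

which is precisely the additivity (extensivity) described above.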

  7. While we can, in principle, predict where each page will land, you might be concerned that there is an additional element that determines the page ordering: how you gather the pages together in a neat stack. This is not relevant to the physics being discussed, but in case it bothers you, imagine that we agree that you'll pick up the pages, one by one, starting with the one that's closest to you, and then picking up the page closest to that one, and so on. (And, for example, we can agree to measure distances from the nearest corner of the page in question.)
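  For concreteness, here is one way the agreed-upon pickup rule could be rendered in code. It is a sketch resting on assumptions of my own: pages are represented by the coordinates of their nearest corners, you stand at the origin, and ties are broken arbitrarily.

    import numpy as np

    def stacking_order(corners):
        # corners: an (N, 2) array giving each landed page's
        # nearest-corner coordinates, with the reader at the origin.
        # Pick up the page closest to you, then repeatedly pick up
        # the page closest to the one just collected.
        remaining = list(range(len(corners)))
        current = min(remaining, key=lambda i: np.hypot(*corners[i]))
        order = [current]
        remaining.remove(current)
        while remaining:
            nxt = min(remaining,
                      key=lambda i: np.hypot(*(corners[i] - corners[current])))
            order.append(nxt)
            remaining.remove(nxt)
            current = nxt
        return order

    # Example: five pages scattered at made-up positions.
    pages = np.array([[3.0, 1.0], [0.5, 0.2], [2.0, 2.5],
                      [0.9, 0.8], [4.0, 0.1]])
    print(stacking_order(pages))   # [1, 3, 2, 0, 4]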

  8. Expecting to calculate the motion of even a few pages with the accuracy required to predict their page ordering (after employing some algorithm for stacking them in a pile, such as in the previous note) is actually extremely optimistic. Depending on the flexibility and weight of the paper, such a comparatively "simple" calculation could still be beyond today's computational power.

  9. You might worry that there is a fundamental difference between defining a notion of entropy for page orderings and defining one for a collection of molecules. After all, page orderings are discrete—you can count them, one by one, and so although the total number of possibilities might be large, it's finite. By contrast, the motion and position of even a single molecule are continuous—you can't count them one by one, and so there is (at least according to classical physics) an infinite number of possibilities. So how can a precise counting of molecular rearrangements be carried out? Well, the short response is that this is a good question, but one that has been answered fully—so if that's enough to ease your worry, feel free to skip what follows. The longer response requires a bit of mathematics, so readers without the background may find it tough to follow completely. Physicists describe a classical, many-particle system by invoking phase space, a 6N-dimensional space (where N is the number of particles) in which each point denotes all particle positions and velocities (each such position requires three numbers, as does each velocity, accounting for the 6N dimensionality of phase space). The essential point is that phase space can be carved up into regions such that all points in a given region correspond to arrangements of the positions and velocities of the molecules that have the same overall gross features and appearance. If the molecules' configuration were changed from one point in a given region of phase space to another point in the same region, a macroscopic assessment would find the two configurations indistinguishable. Now, rather than counting the number of points in a given region—the most direct analog of counting the number of different page rearrangements, but something that would surely result in an infinite answer—physicists define entropy in terms of the volume of each region in phase space. A larger volume means more points and hence higher entropy. And a region's volume, even a region in a higher-dimensional space, is something that can be given a rigorous mathematical definition. (Mathematically, it requires choosing something called a measure, and for the mathematically inclined reader, I'll note that we usually choose the measure which is uniform over all microstates compatible with a given macrostate—that is, each microscopic configuration associated with a given set of macroscopic properties is assumed to be equally probable.)
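  Schematically, and in my notation (the note itself stays in prose): if Γ_M denotes the region of phase space whose points all share the macroscopic features M, the entropy assigned to M is

    \[
      S(M) \;\propto\; \ln \mathrm{Vol}(\Gamma_M),
      \qquad
      \mathrm{Vol}(\Gamma_M) \;=\; \int_{\Gamma_M} d^{3N}x \, d^{3N}v,
    \]

with the volume computed in the uniform (Liouville) measure, so that every microstate compatible with M carries equal weight. (In quantum treatments the volume is additionally divided by a factor such as h^{3N} to make it dimensionless, a refinement the classical discussion here does not need.)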

  10. Specifically, we know one way in which this could happen: if a few days earlier the CO₂ was initially in the bottle, then we know from our discussion above that if, right now, you were to simultaneously reverse the velocity of each and every CO₂ molecule, and that of every molecule and atom that has in any way interacted with the CO₂ molecules, and wait the same few days, the molecules would all group back together in the bottle. But this velocity reversal isn't something that can be accomplished in practice, let alone something that is likely to happen of its own accord. I might note, though, that one can prove mathematically that if you wait long enough, the CO₂ molecules will, of their own accord, all find their way back into the bottle. A result proven in the 1800s by the French mathematician Joseph Liouville can be used to establish what is known as the Poincaré recurrence theorem. This theorem shows that, if you wait long enough, a system with a finite energy and confined to a finite spatial volume (like CO₂ molecules in a closed room) will return to a state arbitrarily close to its initial state (in this case, CO₂ molecules all situated in the Coke bottle). The catch is how long you'd have to wait for this to happen. For systems with all but a small number of constituents, the theorem shows you'd typically have to wait far in excess of the age of the universe for the constituents to, of their own accord, regroup in their initial configuration. Nevertheless, as a point of principle, it is provocative to note that with endless patience and longevity, every spatially contained physical system will return to how it was initially configured.
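  To get a feel for the catch, here is a back-of-envelope estimate in Python. It is emphatically not the theorem itself, and every number in it (the bottle's share of the room's volume, the rerandomization time) is an assumption of mine: if the bottle occupies a fraction f of the room and the molecular configuration effectively rerandomizes every τ seconds, the chance of finding all N molecules in the bottle at a given look is roughly f^N, giving an expected wait of about τ/f^N.

    import math

    f = 1e-3                  # bottle volume / room volume (assumed)
    tau = 1e-10               # rerandomization time in seconds (assumed)
    age_universe_s = 4.35e17  # age of the universe in seconds (approximate)

    for N in (1, 10, 100, 1000):
        # Work with log10 of the wait tau / f**N, since f**-N overflows
        # a float long before N reaches molecular numbers.
        log10_wait = math.log10(tau) - N * math.log10(f)
        ages = log10_wait - math.log10(age_universe_s)
        print(f"N = {N:>4}: wait ~ 10^{log10_wait:.0f} s "
              f"(~10^{ages:.0f} ages of the universe)")

Already at a thousand molecules the wait exceeds the universe's age by thousands of orders of magnitude, and a bottle's worth of CO₂ contains on the order of 10²³ molecules.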

  11. You might wonder, then, why water ever turns into ice, since that results in the H₂O molecules becoming more ordered, that is, attaining lower, not higher, entropy. Well, the rough answer is that when liquid water turns into solid ice, it gives off energy to the environment (the opposite of what happens when ice melts, when it takes in energy from the environment), and that raises the environmental entropy. At low enough ambient temperatures, that is, below 0 degrees Celsius, the increase in environmental entropy exceeds the decrease in the water's entropy, so freezing becomes entropically favored. That's why ice forms in the cold of winter. Similarly, when ice cubes form in your refrigerator's freezer, their entropy goes down but the refrigerator itself pumps heat into the environment, and if that is taken account of, there is a total net increase of entropy. The more precise answer, for the mathematically inclined reader, is that spontaneous phenomena of the sort we're discussing are governed by what is known as free energy. Intuitively, free energy is that part of a system's energy that can be harnessed to do work. Mathematically, free energy, F, is defined by F = U − TS, where U stands for total energy, T stands for temperature, and S stands for entropy. A system will undergo a spontaneous change if that results in a decrease of its free energy. At low temperatures (below 0 degrees Celsius), the drop in U associated with liquid water turning into solid ice outweighs the decrease in S (outweighs the increase in −TS), and so freezing occurs. At high temperatures (above 0 degrees Celsius), though, the change of ice to liquid water or gaseous steam is entropically favored (the increase in S outweighs changes to U) and so melting occurs.
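  As a worked version of that trade-off (using the note's F = U − TS and considering a change at fixed temperature T):

    \[
      \Delta F \;=\; \Delta U \;-\; T\,\Delta S .
    \]

For water freezing, both ΔU < 0 (energy is released to the environment) and ΔS < 0 (the molecules become more ordered), so the two terms compete: at low T the negative ΔU dominates, ΔF < 0, and freezing proceeds; at high T the positive −TΔS term dominates, ΔF > 0, and it does not. The crossover, ΔF = 0 at T = ΔU/ΔS, picks out the melting point. (Strictly, for a process at constant pressure the relevant quantity is the Gibbs free energy G = U + PV − TS, but the distinction does not affect this qualitative picture.)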

  12. For an early discussion of how a straightforward application of entropic reasoning would lead us to conclude that memories and historical records are not trustworthy accounts of the past, see C. F. von Weizsäcker, The Unity of Nature (New York: Farrar, Straus and Giroux, 1980), 138–46 (originally published in Annalen der Physik 36 [1939]). For an excellent recent discussion, see David Albert, Time and Chance (Cambridge, Mass.: Harvard University Press, 2000).

  13. In fact, since the laws of physics don't distinguish between forward and backward in time, the explanation of having fully formed ice cubes a half hour earlier, at 10 p.m., would be precisely as absurd—entropically speaking—as predicting that a half hour later, by 11 p.m., the little chunks of ice would have grown into fully formed ice cubes. By contrast, the explanation of having liquid water at 10 p.m. that slowly forms small chunks of ice by 10:30 p.m. is precisely as sensible as predicting that by 11 p.m. the little chunks of ice will melt into liquid water, something that is familiar and totally expected. This latter explanation, from the perspective of the observation at 10:30 p.m., is perfectly temporally symmetric and, moreover, agrees with our subsequent observations.

  14. The particularly careful reader might think that I've prejudiced the discussion with the phrase "early on," since that injects a temporal asymmetry. What I mean, in more precise language, is that we will need special conditions to prevail on (at least) one end of the temporal dimension. As will become clear, the special conditions amount to a low-entropy boundary condition, and I will call "the past" the direction in which this condition is satisfied.

  15. The idea that time's arrow requires a low-entropy past has a long history, going back to Boltzmann and others; it was discussed in some detail in Hans Reichenbach, The Direction of Time (Mineola, N.Y.: Dover Publications, 1984), and was championed in a particularly interesting quantitative way in Roger Penrose, The Emperor's New Mind (New York: Oxford University Press, 1989), pp. 317ff.

  16. Recall that our discussion in this chapter does not take account of quantum mechanics. As Stephen Hawking showed in the 1970s, when quantum effects are considered, black holes do allow a certain amount of radiation to seep out, but this does not affect their being the highest-entropy objects in the cosmos.

  17. A natural question is how we know that there isn't some future constraint that also has an impact on entropy. The bottom line is that we don't, and some physicists have even suggested experiments to detect the possible influence that such a future constraint might have on things that we can observe today. For an interesting article discussing the possibility of future and past constraints on entropy, see Murray Gell-Mann and James Hartle, "Time Symmetry and Asymmetry in Quantum Mechanics and Quantum Cosmology," in Physical Origins of Time Asymmetry, J. J. Halliwell, J. Pérez-Mercader, W. H. Zurek, eds. (Cambridge, Eng.: Cambridge University Press, 1996), as well as other papers in Parts 4 and 5 of that collection.

  18. Throughout this chapter, we've spoken of the arrow of time, referring to the apparent fact that there is an asymmetry along the time axis (any observer's time axis) of spacetime: a huge variety of sequences of events is arrayed in one order along the time axis, but the reverse ordering of such events seldom, if ever, occurs. Over the years, physicists and philosophers have divided these sequences of events into subcategories whose temporal asymmetries might, in principle, be subject to logically independent explanations. For example, heat flows from hot objects to cooler ones, but not from cool objects to hot ones; electromagnetic waves emanate outward from sources like stars and lightbulbs, but seem never to converge inward on such sources; the universe appears to be uniformly expanding, and not contracting; and we remember the past and not the future (these are called the thermodynamic, electromagnetic, cosmological, and psychological arrows of time, respectively). All of these are time-asymmetric phenomena, but they might, in principle, acquire their time asymmetry from completely different physical principles. My view, one that many share (but others don't), is that except possibly for the cosmological arrow, these temporally asymmetric phenomena are not fundamentally different, and ultimately are subject to the same explanation—the one we've described in this chapter. For example, why does electromagnetic radiation travel in expanding outward waves but not contracting inward waves, even though both are perfectly good solutions to Maxwell's equations of electromagnetism? Well, because our universe has low-entropy, coherent, ordered sources for such outward waves—stars and lightbulbs, to name two—and the existence of these ordered sources derives from the even more ordered environment at the universe's inception, as discussed in the main text. The psychological arrow of time is harder to address since there is so much about the microphysical basis of human thought that we've yet to understand. But much progress has been made in understanding the arrow of time when it comes to computers—undertaking, completing, and then producing a record of a computation is a basic computational sequence whose entropic properties are well understood (as developed by Charles Bennett, Rolf Landauer, and others) and fit squarely within the second law of thermodynamics. Thus, if human thought can be likened to computational processes, a similar thermodynamic explanation may apply. Notice, too, that the asymmetry associated with the fact that the universe is expanding and not contracting is related to, but logically distinct from, the arrow of time we've been exploring. If the universe's expansion were to slow down, stop, and then turn into a contraction, the arrow of time would still point in the same direction. Physical processes (eggs breaking, people aging, and so on) would still happen in the usual direction, even though the universe's expansion had reversed.
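  One quantitative anchor for the computational arrow just mentioned, stated in my notation rather than drawn from the text, is Landauer's bound: erasing a single bit of information, as when the record of a computation is cleared, must dissipate at least

    \[
      E_{\min} \;=\; k_B\,T \ln 2
    \]

of energy as heat into an environment at temperature T, raising its entropy by at least k_B ln 2. This is the sense in which making and erasing computational records fits squarely within the second law.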

 
