From Eternity to Here: The Quest for the Ultimate Theory of Time

by Sean M. Carroll


  As they designed the experiment, Wu became convinced of the project’s fundamental importance. In a later recollection, she explained vividly what it is like to be caught up in the excitement of a crucial moment in science:

  Following Professor Lee’s visit, I began to think things through. This was a golden opportunity for a beta-decay physicist to perform a crucial test, and how could I let it pass?—That Spring, my husband, Chia-Liu Yuan, and I had planned to attend a conference in Geneva and then proceed to the Far East. Both of us had left China in 1936, exactly twenty years earlier. Our passages were booked on the Queen Elizabeth before I suddenly realized that I had to do the experiment immediately, before the rest of the Physics Community recognized the importance of this experiment and did it first. So I asked Chia-Liu to let me stay and go without me.

  As soon as the Spring semester ended in the last part of May, I started work in earnest in preparing for the experiment. In the middle of September, I finally went to Washington, D.C., for my first meeting with Dr. Ambler. . . . Between experimental runs in Washington, I had to dash back to Columbia for teaching and other research activities. On Christmas eve, I returned to New York on the last train; the airport was closed because of heavy snow. There I told Professor Lee that the observed asymmetry was reproducible and huge. The asymmetry parameter was nearly -1. Professor Lee said that this was very good. This result is just what one should expect for a two-component theory of the neutrino. 120

  Your spouse and a return to your childhood home will have to wait—Science is calling! Lee and Yang were awarded the Nobel Prize in Physics in 1957; Wu should have been included among the winners, but she wasn’t.

  Once it was established that the weak interactions violated parity, people soon noticed that the experiments seemed to be invariant if you combined a parity transformation with charge conjugation C, exchanging particles with antiparticles. Moreover, this seemed to be a prediction of the theoretical models that were popular at the time. Therefore, people who were surprised that P is violated in nature took some solace in the idea that combining C and P appeared to yield a good symmetry.

  It doesn’t. In 1964, James Cronin and Val Fitch led a collaboration that studied our friend the neutral kaon. They found that the kaon decayed in a way that violated parity, and that the antikaon decayed in a way that violated parity slightly differently. In other words, the combined transformation of reversing parity and trading particles for antiparticles is not a symmetry of nature.121 Cronin and Fitch were awarded the Nobel Prize in 1980.

  At the end of the day, all of the would-be symmetries C, P, and T are violated in Nature, as well as any combination of two of them together. The obvious next step is to inquire about the combination of all three: CPT. In other words, if we take some process observed in nature, switch all the particles with their antiparticles, flip right with left, and run it backward in time, do we get a process that obeys the laws of physics? At this point, with everything else being violated, we might conclude that a stance of suspicion toward symmetries of this form is a healthy attitude, and guess that even CPT is violated.

  Wrong again! (It’s good to be the one both asking and answering the questions.) As far as any experiment yet performed can tell, CPT is a perfectly good symmetry of Nature. And it’s more than that; under certain fairly reasonable assumptions about the laws of physics, you can prove that CPT must be a good symmetry—this result is known imaginatively as the “CPT Theorem.” Of course, even reasonable assumptions might be wrong, and neither experimentalists nor theorists have shied away from exploring the possibility of CPT violation. But as far as we can tell, this particular symmetry is holding up.

  I argued previously that it was often necessary to fix up the operation of time reversal to obtain a transformation that was respected by nature. In the case of the Standard Model of particle physics, the requisite fixing-up involves adding charge conjugation and parity inversion to our time reversal. Most physicists find it more convenient to distinguish between the hypothetical world in which C, P, and T were all individually invariant, and the real world, in which only the combination CPT is invariant, and therefore proclaim that the real world is not invariant under time reversal. But it’s important to appreciate that there is a way to fix up time reversal so that it does appear to be a symmetry of Nature.

  CONSERVATION OF INFORMATION

  We’ve seen that “time reversal” involves not just reversing the evolution of a system, playing each state in the opposite order in time, but also doing some sort of transformation on the states at each time—maybe just reversing the momentum or flipping a row on our checkerboards, or maybe something more sophisticated like exchanging particles with antiparticles.

  In that case, is every sensible set of laws of physics invariant under some form of “sophisticated time reversal”? Is it always possible to find some transformation on the states so that the time-reversed evolution obeys the laws of physics?

  No. Our ability to successfully define “time reversal” so that some laws of physics are invariant under it depends on one other crucial assumption: conservation of information. This is simply the idea that two different states in the past always evolve into two distinct states in the future—they never evolve into the same state. If that’s true, we say that “information is conserved,” because knowledge of the future state is sufficient to figure out what the appropriate state in the past must have been. If that feature is respected by some laws of physics, the laws are reversible, and there will exist some (possibly complicated) transformations we can do to the states so that time-reversal invariance is respected.122
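
  The distinction can be made concrete with a toy sketch (my own illustration, not from the text): a rule conserves information exactly when distinct past states always map to distinct future states, so the map can be inverted and the evolution run backward.

```python
# Toy states are just the integers 0-3; each "law of physics" is a map on them.

def reversible_step(state):
    """One-to-one rule: shift by 1 (mod 4). Distinct pasts give distinct
    futures, so knowing the future pins down the past."""
    return (state + 1) % 4

def irreversible_step(state):
    """Many-to-one rule: states 0, 2, and 3 all end up at 0. Knowing the
    future state 0 cannot tell us which past it came from."""
    return 0 if state in (2, 3) else state

futures = sorted(reversible_step(s) for s in range(4))
print(futures)  # [0, 1, 2, 3] -- four distinct futures: invertible, reversible

futures = [irreversible_step(s) for s in range(4)]
print(futures)  # [0, 1, 0, 0] -- three pasts share one future: irreversible
```

  The second rule is a miniature version of checkerboard D below: evolving forward is no problem, but the inverse map simply does not exist.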

  To see this idea in action, let’s return to checkerboard world. Checkerboard D, portrayed in Figure 39, looks fairly simple. There are some diagonal lines, and one vertical column of gray squares. But something interesting happens here that didn’t happen in any of our previous examples: The different lines of gray squares are “interacting” with one another. In particular, it would appear that diagonal lines can approach the vertical column from either the right or the left, but when they get there they simply come to an end.

  Figure 39: A checkerboard with irreversible dynamics. Information about the past is not preserved into the future.

  That is a fairly simple rule and makes for a perfectly respectable set of “laws of physics.” But there is a radical difference between checkerboard D and our previous ones: This one is not reversible. The space of states is, as usual, just a list of white and gray squares along any one row, with the additional information that the square is part of a right-moving diagonal, a left-moving diagonal, or a vertical column. And given that information, we have no problem at all in evolving the state forward in time—we know exactly what the next row up will look like, and the row after that, and so on.

  But if we are told the state along one row, we cannot evolve it backward in time. The diagonal lines would keep going, but from the time-reversed point of view, the vertical column could spit out diagonal lines at completely random intervals (corresponding, from the point of view portrayed in the figure, to a diagonal hitting the vertical column of grays and being absorbed). When we say that a physical process is irreversible, we mean that we cannot construct the past from knowledge of the current state, and this checkerboard is a perfect example of that.

  In a situation like this, information is lost. Knowing the state at one time, we can’t be completely sure what the earlier states were. We have a space of states—a specification of a row of white and gray squares, with labels on the gray squares indicating whether they move up and to the right, up and to the left, or vertically. That space of states doesn’t change with time; every row is a member of the same space of states, and any possible state is allowed on any particular row. But the unusual feature of checkerboard D is that two different rows can evolve into the same row in the future. Once we get to that future state, the information of which past configurations got us there is irrevocably lost; the evolution is irreversible.
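
  The checkerboard rule can be written out in a few lines of code (a sketch using my own encoding; the grid details are chosen for illustration, not taken from the figure). Each cell in a row is empty ('.'), part of a right-moving diagonal ('R'), a left-moving diagonal ('L'), or the vertical column ('V'). The forward step is perfectly deterministic, yet two different rows can step to the same successor.

```python
def step(row):
    """One forward time step of checkerboard D: the column stays put,
    diagonals advance one square, and a diagonal that runs into the
    column is absorbed and disappears."""
    n = len(row)
    nxt = ['V' if c == 'V' else '.' for c in row]
    for i, c in enumerate(row):
        if c == 'R' and i + 1 < n and row[i + 1] != 'V':
            nxt[i + 1] = 'R'
        elif c == 'L' and i - 1 >= 0 and row[i - 1] != 'V':
            nxt[i - 1] = 'L'
        # an 'R' just left of 'V' (or an 'L' just right of it) is absorbed
    return ''.join(nxt)

print(step("..RV.."))  # ...V..  (the diagonal hits the column and vanishes)
print(step("...V.."))  # ...V..  (a different past, the very same future)
```

  Because `step("..RV..")` and `step("...V..")` agree, no function can recover the previous row from the current one; that is the irreversibility in the text, stated as code.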
  In the real world, apparent loss of information happens all the time. Consider two different states of a glass of water. In one state, the water is uniform and at the

  Figure 40: Apparent loss of information in a glass of water. A future state of a glass of cool water could have come either from the same state of cool water, or from warm water with an ice cube.

  same cool temperature; in the other, we have warm water but also an ice cube. These two states can evolve into the future into what appears to be the same state: a glass of cool water.

  We’ve encountered this phenomenon before: It’s the arrow of time. Entropy increases as the ice melts into the warm water; that’s a process that can happen but will never un-happen. The puzzle is that the motion of the individual molecules making up the water is perfectly invariant under time reversal, while the macroscopic description in terms of ice and liquid is not. To understand how reversible underlying laws give rise to macroscopic irreversibility, we must return to Boltzmann and his ideas about entropy.

  8

  ENTROPY AND DISORDER

  Nobody can imagine in physical terms the act of reversing the order of time. Time is not reversible.

  —Vladimir Nabokov, Look at the Harlequins!

  Why is it that discussions of entropy and the Second Law of Thermodynamics so often end up being about food? Here are some popular (and tasty) examples of the increase of entropy in irreversible processes:

  • Breaking eggs and scrambling them.

  • Stirring milk into coffee.

  • Spilling wine on a new carpet.

  • The diffusion of the aroma of a freshly baked pie into a room.

  • Ice cubes melting in a glass of water.

  To be fair, not all of these are equally appetizing; the ice-cube example is kind of bland, unless you replace the water with gin. Furthermore, I should come clean about the scrambled-eggs story. The truth is that the act of cooking the eggs in your skillet isn’t a straightforward demonstration of the Second Law; the cooking is a chemical reaction that is caused by the introduction of heat, which wouldn’t happen if the eggs weren’t an open system. Entropy comes into play when we break the eggs and whisk the yolks together with the whites; the point of cooking the resulting mixture is to avoid salmonella poisoning, not to illustrate thermodynamics.

  The relationship between entropy and food arises largely from the ubiquity of mixing. In the kitchen, we are often interested in combining together two things that had been kept separate—either two different forms of the same substance (ice and liquid water) or two altogether different ingredients (milk and coffee, egg whites and yolks). The original nineteenth-century thermodynamicists were extremely interested in the dynamics of heat, and the melting ice cube would have been of foremost concern to them; they would have been less fascinated by processes where all the ingredients were at the same temperature, such as spilling wine onto a carpet. But clearly there is some underlying similarity in what is going on; an initial state in which substances are kept separate evolves into a final state in which they are mixed together. It’s easy to mix things and hard to unmix them—the arrow of time looms over everything we do in the kitchen.

  Why is mixing easy and unmixing hard? When we mix two liquids, we see them swirl together and gradually blend into a uniform texture. By itself, that process doesn’t offer much clue into what is really going on. So instead let’s visualize what happens when we mix together two different kinds of colored sand. The important thing about sand is that it’s clearly made of discrete units, the individual grains. When we mix together, for example, blue sand and red sand, the mixture as a whole begins to look purple. But it’s not that the individual grains turn purple; they maintain their identities, while the blue grains and the red grains become jumbled together. It’s only when we look from afar (“macroscopically”) that it makes sense to think of the mixture as being purple; when we peer closely at the sand (“microscopically”) we see individual blue and red grains.

  The great insight of the pioneers of kinetic theory—Daniel Bernoulli in Switzerland, Rudolf Clausius in Germany, James Clerk Maxwell and William Thomson in Great Britain, Ludwig Boltzmann in Austria, and Josiah Willard Gibbs in the United States—was to understand all liquids and gases in the same way we think of sand: as collections of very tiny pieces with persistent identities. Instead of grains, of course, we think of liquids and gases as composed of atoms and molecules. But the principle is the same. When milk and coffee mix, the individual milk molecules don’t combine with the individual coffee molecules to make some new kind of molecule; the two sets of molecules simply intermingle. Even heat is a property of atoms and molecules, rather than constituting some kind of fluid in its own right—the heat contained in an object is a measure of the energy of the rapidly moving molecules within it. When an ice cube melts into a glass of water, the molecules remain the same, but they gradually bump into one another and distribute their energy evenly throughout the molecules in the glass.

  Without (yet) being precise about the mathematical definition of “entropy,” the example of blending two kinds of colored sand illustrates why it is easier to mix things than to unmix them. Imagine a bowl of sand, with all of the blue grains on one side of the bowl and the red grains on the other. It’s pretty clear that this arrangement is somewhat delicate—if we disturb the bowl by shaking it or stirring with a spoon, the two colors will begin to mix together. If, on the other hand, we start with the two colors completely mixed, such an arrangement is robust—if we disturb the mixture, it will stay mixed. The reason is simple: To separate out two kinds of sand that are mixed together requires a much more precise operation than simply shaking or stirring. We would have to reach in carefully with tweezers and a magnifying glass to move all of the red grains to one side of the bowl and all of the blue grains to the other. It takes much more care to create the delicate unmixed state of sand than to create the robust mixed state.

  That’s a point of view that can be made fearsomely quantitative and scientific, which is exactly what Boltzmann and others managed to do in the 1870s. We’re going to dig into the guts of what they did, and explore what it explains and what it doesn’t, and how it can be reconciled with underlying laws of physics that are perfectly reversible. But it should already be clear that a crucial role is played by the large numbers of atoms that we find in macroscopic objects in the real world. If we had only one grain of red sand and one grain of blue sand, there would be no distinction between “mixed” and “unmixed.” In the last chapter we discussed how the underlying laws of physics work equally well forward or backward in time (suitably defined). That’s a microscopic description, in which we keep careful track of each and every constituent of a system. But very often in the real world, where large numbers of atoms are involved, we don’t keep track of nearly that much information. Instead, we make simplifications—thinking about the average color or temperature or pressure, rather than the specific position and momentum of each atom. When we think macroscopically, we forget (or ignore) detailed information about every particle—and that’s where entropy and irreversibility begin to come into play.

  SMOOTHING OUT

  The basic idea we want to understand is “how do macroscopic features of a system made of many atoms evolve as a consequence of the motion of the individual atoms?” (I’ll use “atoms” and “molecules” and “particles” more or less interchangeably, since all we care about is that they are tiny things that obey reversible laws of physics, and that you need a lot of them to make something macroscopic.) In that spirit, consider a sealed box divided in two by a wall with a hole in it. Gas molecules can bounce around on one side of the box and will usually bounce right off the central wall, but every once in a while they will sneak through to the other side. We might imagine, for example, that the molecules bounce off the central wall 995 times out of 1,000, but one-half of 1 percent of the time (each second, let’s say) they find the hole and move to the other side.

  Figure 41: A box of gas molecules, featuring a central partition with a hole. Every second, each molecule has a tiny chance to go through the hole to the other side.

  This example is pleasingly specific; we can examine a particular instance in detail and see what happens.123 Every second, each molecule on the left side of the box has a 99.5 percent chance of staying on that side, and a 0.5 percent chance of moving to the other side; likewise for the right side of the box. This rule is perfectly time-reversal invariant; if you made a movie of the motion of just one particle obeying this rule, you couldn’t tell whether it was being run forward or backward in time. At the level of individual particles, we can’t distinguish the past from the future.

  In Figure 42 we have portrayed one possible evolution of such a box; time moves upward, as always. The box has 2,000 “air molecules” in it, and starts at time t = 1 with 1,600 molecules on the left-hand side and only 400 on the right. (You’re not supposed to ask why it starts that way—although later, when we replace “the box” with “the universe,” we will start asking such questions.) It’s not very surprising what happens as we sit there and let the molecules bounce around inside the box. Every second, there is a small chance that any particular molecule will switch sides; but, because we started with a much larger number of molecules on the one side, there is a general tendency for the numbers to even out. (Exactly like temperature, in Clausius’s formulation of the Second Law.) When there are more molecules on the left, the total number of molecules that shift from left to right will usually be larger than the number that shift from right to left. So after 50 seconds we see that the numbers are beginning to equal out, and after 200 seconds the distribution is essentially equal.
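
  This evolution is easy to reproduce in a few lines of code (a sketch under the rule stated above; the random seed and bookkeeping are my own choices, not the book’s). Each second, every molecule independently has a 0.5 percent chance of slipping through the hole.

```python
import random

def simulate(n_left=1600, n_right=400, p=0.005, seconds=200, seed=0):
    """Evolve the box: each second, each molecule crosses with probability p.
    Returns the (left, right) counts at every second, starting at t = 0."""
    rng = random.Random(seed)
    history = [(n_left, n_right)]
    for _ in range(seconds):
        to_right = sum(rng.random() < p for _ in range(n_left))
        to_left = sum(rng.random() < p for _ in range(n_right))
        n_left += to_left - to_right
        n_right += to_right - to_left
        history.append((n_left, n_right))
    return history

hist = simulate()
# With more molecules on the left, more cross left-to-right than right-to-left,
# so the imbalance shrinks toward roughly 1,000 on each side.
```

  Note that the per-molecule rule has no arrow of time built in; the drift toward equal numbers comes entirely from starting with an imbalance, which is exactly the point the text goes on to make about the universe.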

 
