The God Particle: If the Universe Is the Answer, What Is the Question?
Concepts of probability are well known to actuarial experts today. But they were upsetting to physicists trained in classical physics in the early part of the century (and remain upsetting to many people today). Newton described a deterministic world. If you threw a rock, launched a rocket, or introduced a new planet to a solar system, you could predict where it would go with total certainty, at least in principle, as long as you knew the forces and the initial conditions. Quantum theory said no: initial conditions are inherently uncertain. You get only probabilities for predictions of whatever you want to measure: a particle's location, its energy, its velocity. The Born interpretation of Schrödinger's equation was unsettling to physicists, who in the three centuries since Galileo and Newton had come to accept determinism as a way of life. Quantum theory threatened to transform them into high-level actuaries.
A SURPRISE ON A MOUNTAINTOP
In 1927 the English physicist Paul Dirac was trying to extend quantum theory, which at the time appeared to be at odds with Einstein's special theory of relativity. The two theories had already been introduced to each other by Sommerfeld. Dirac, intent on making the two theories happily compatible, supervised the marriage and its consummation. In doing so, he found an elegant new equation for the electron (curiously, we call it the Dirac equation). Out of this powerful equation comes the postdiction that electrons must have spin and must produce magnetism. Recall the g-factor from the beginning of the chapter. Dirac's calculations showed that the strength of the electron's magnetism as measured by g was 2.0. (It was much later that refinements led to the precise value given earlier.) More! Dirac (age twenty-four or so) found that in obtaining the electron-wave solution to his equation, there was another solution with bizarre implications. There had to be another particle with properties identical to those of the electron but with opposite electric charge. Mathematically, this is a simple concept. As every little kid knows, the square root of four is plus two, but it is also minus two because minus two times minus two is also four: 2 × 2 = 4, and −2 × −2 = 4. So there are two solutions. The square root of four is plus or minus two.
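(A hedged aside, not Dirac's full derivation: the square root enters through Einstein's energy-momentum relation, which fixes only the square of the energy. Solving for the energy itself yields two roots, and the negative one is where the second particle hides.)

```latex
% Standard relativistic relation among energy, momentum, and mass;
% a sketch of where the second solution lives, not the Dirac equation itself.
E^2 = p^2c^2 + m^2c^4
\quad\Longrightarrow\quad
E = \pm\sqrt{\,p^2c^2 + m^2c^4\,}
% The +E root is the ordinary electron; Dirac took the -E root seriously.
```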
The problem was that the symmetry implied by Dirac's equation meant that for every particle there must exist another particle with the same mass but opposite charge. So Dirac, a conservative gentleman who was so uncharismatic as to have generated legends, struggled with his negative solution and eventually predicted that nature must contain positive electrons as well as negative electrons. Someone coined the word antimatter. This antimatter should be all over the place, yet no one had ever spotted any.
In 1932, a young Cal Tech physicist named Carl Anderson built a cloud chamber designed to register and photograph subatomic particles. A powerful magnet surrounded his apparatus to bend the path of the particles, giving a measure of their energy. Anderson bagged a bizarre new particle—or, rather, the track of one—in the cloud chamber. He called this strange new object a positron, because it was identical to an electron except that it had a positive charge instead of a negative charge. Anderson's publication made no reference to Dirac's theory, but the connection was soon made. He had found a new form of matter: the antiparticle that had popped out of the Dirac equation a few years earlier. The tracks were made by cosmic rays, radiation that strikes our atmosphere from the far reaches of our galaxy. Anderson, to get even better data, transported his apparatus from Pasadena to the top of a mountain in Colorado, where the air is thin and the cosmic rays are more intense.
A front-page photograph of Anderson in the New York Times, announcing the discovery, was an inspiration to the young Lederman, his first exposure to the romantic adventure of schlepping equipment to the top of a high mountain to make important scientific measurements. Antimatter turned out to be a very big deal, inextricably involved in the lives of particle physicists, and I promise to say more about it in later chapters. Another quantum-theory success.
UNCERTAINTY AND ALL THAT
In 1927 Heisenberg invented his uncertainty relations, which put the cap on the great scientific revolution we call quantum theory. In truth, quantum theory wasn't wrapped up until the 1940s. Indeed, in its quantum field theory version, its evolution continues today, and the theory will not be complete until it is fully combined with gravitation. But for our purposes the uncertainty principle is a good place to end. Heisenberg's uncertainty relations are a mathematical consequence of the Schrödinger equation. They could also have been the logical postulates, or assumptions, of the new quantum mechanics. Since Heisenberg's ideas are crucial to understanding just how new the quantum world is, we need to dwell a bit here.
Quantum designers insist that only measurements, dear to the hearts of experimenters, count. All we can ask of a theory is to predict the results of events that can be measured. This sounds like an obvious point, but forgetting it leads to the so-called paradoxes that popular writers without culture are fond of exploiting. And, I should add, it is in the theory of measurement that the quantum theory meets its past, present, and no doubt future critics.
Heisenberg announced that our simultaneous knowledge of a particle's location and its motion is limited and that the combined uncertainty of these two properties must exceed ... nothing other than Planck's constant, h, which we first met in the formula E = hf. Our measurements of the particle's location and its motion (actually, its momentum) are reciprocally related to each other. The more we know about one, the less we know about the other. The Schrödinger equation gives us probabilities for these factors. If we devise an experiment that pinpoints the location of the electron—say it's at some coordinate with an extremely small uncertainty of position—the spread in the possible values of the momentum is correspondingly large according to Heisenberg's relation. The product of the two uncertainties (we can assign them numbers) is always greater than Planck's ubiquitous h. Heisenberg's relations dispose, once and for all, of the classical picture of orbits. The very concept of location or place is now less definite. Let's go back to Newton and to something we can visualize.
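First, though, the relation itself in compact form. As a sketch: writing Δx for the uncertainty in position and Δp for the uncertainty in momentum, the statement above reads as follows (modern textbooks sharpen the bound to ħ/2, where ħ = h/2π; the rough "greater than h" used here is fine for our purposes):

```latex
% Heisenberg's uncertainty relation, as stated loosely in the text,
% with the sharper textbook bound noted for comparison.
\Delta x \,\Delta p \gtrsim h
\qquad\left(\text{precisely: } \Delta x\,\Delta p \ge \frac{\hbar}{2},
\quad \hbar = \frac{h}{2\pi}\right)
```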
Suppose we have a straight road on which a Hyundai is tooling along at some respectable speed. We decide that we are going to measure its location at some instant of time as it whizzes past us. We also want to know how fast it is going. In Newtonian physics, pinpointing the position and velocity of an object at a specific time allows one to predict precisely where it will be at any future time. However, when we assemble our rulers and clocks, our flashbulbs and cameras, we find that the more carefully we measure the position, the poorer our ability to measure the speed and vice versa. (Recall that the speed is the change of position divided by the time.) However, in classical physics we can continually improve on our accuracy in both quantities to arbitrary precision. We simply ask some government agency for more funds to build better equipment.
In the atomic domain, by contrast, Heisenberg proposed a basic unknowability that cannot be reduced by any amount of equipment, ingenuity, or federal funding. He proposed that it is a fundamental property of nature that the product of the two uncertainties always exceeds Planck's constant. Strange as this may sound, there is a firm physical basis for this uncertainty in measurability of the microworld. For example, let's try to nail down the position of an electron. To do so, you must "see" it. That is, you have to bounce light, a beam of photons, off the electron. Okay, there! Now you see the electron. You know its location at a moment in time. But a photon glancing off the electron changes the electron's state of motion. One measurement undermines the other. In quantum mechanics, measurement inevitably produces change because you are dealing with atomic systems, and your measuring tools cannot be any smaller, gentler, or kinder. Atoms are about one hundred-millionth of a centimeter in radius and weigh a millionth of a billion-billionth of a gram, so it doesn't take much to influence them profoundly. By contrast, in a classical system, one can make sure that the act of measuring barely influences the system being measured. Suppose we want to measure water temperature. We don't change the temperature of a lake, say, by
dipping a small thermometer into it. But dipping a fat thermometer into a thimble of water would be stupid since the thermometer would change the temperature of the water. In atomic systems, quantum theory says, we must include the measurement as part of the system.
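Here is a rough numerical sketch of the photon-kick argument; the probing wavelength below is an assumed illustrative value (about the size of an atom), not a figure from the text.

```python
# A rough sketch of the "photon kick": to locate an electron to atomic
# precision you need light of atomic wavelength, and such a photon
# carries a momentum comparable to the electron's own.
h    = 6.626e-34   # Planck's constant, J*s
hbar = 1.055e-34   # Planck's constant over 2*pi, J*s
a0   = 5.29e-11    # Bohr radius, m: roughly the size of a hydrogen atom

wavelength  = 1e-10            # m: assumed probe, so position known to ~1e-10 m
photon_kick = h / wavelength   # momentum the photon can deliver, h/wavelength
electron_p  = hbar / a0        # typical momentum of an electron in hydrogen

print(f"photon momentum kick:    {photon_kick:.2e} kg m/s")
print(f"electron's own momentum: {electron_p:.2e} kg m/s")
# The kick is a few times the electron's own momentum: "seeing" the
# electron unavoidably scrambles its state of motion, so the product
# of the two uncertainties stays above Planck's constant.
```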
THE AGONY OF THE DOUBLE SLIT
The most famous and most instructive example of the counterintuitive nature of quantum theory is the double-slit experiment. This experiment was first carried out by Thomas Young, a physician, in 1804 and was heralded as experimental proof of the wave nature of light. The experimenter aimed a beam of, say, yellow light at a wall in which he had cut two very fine parallel slits a very short distance apart. A distant screen caught the light that squirted through the slits. When Young covered one of the slits, a simple, bright, slightly broadened image of the other slit was projected on the screen. But when both slits were uncovered, the result was surprising. A careful examination of the light area on the screen revealed a series of equally spaced bright and dark fringes. Dark fringes are places where no light arrives.
The fringes are proof, said Young, that light is a wave. Why? They are part of an interference pattern, which occurs when waves of any kind bump into each other. When two water waves, for example, collide crest to crest, they reinforce each other, creating a bigger wave. When they collide trough to crest, they cancel each other out. The wave flattens.
Young's interpretation of the double-slit experiment was that at certain locations the wavelike disturbances from the two slits arrive on the screen in just the right phases to cancel each other out: a peak of the light wave from slit one arrives exactly at a trough of light from slit two. A dark fringe results. Such cancellations are quintessential indicators of wave interference. When two peaks or two troughs coincide at the screen, we get a bright fringe. The fringe pattern was accepted as proof that light was a wave phenomenon.
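The peak-and-trough bookkeeping can be compressed into one standard condition (implicit in Young's argument, though not spelled out here): for slits a distance d apart and light of wavelength λ arriving at angle θ on the screen,

```latex
% Bright fringes: the two paths differ by a whole number of wavelengths.
d\sin\theta = m\lambda, \qquad m = 0, 1, 2, \dots
% Dark fringes: the paths differ by an odd half-wavelength; crest meets trough.
d\sin\theta = \left(m + \tfrac{1}{2}\right)\lambda
```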
Now in principle the same experiment can be carried out with electrons. In a way this is what Davisson did at Bell Labs. Using electrons, the experiment also results in an interference pattern. The screen is covered with tiny Geiger counters, which click when an electron hits. The Geiger counter detects particles. To check that the counters are working, we put a thick piece of lead over slit two: no electrons can penetrate. Now all Geiger counters click if we wait long enough for some thousands of electrons to pass through the remaining open slit. But when two slits are open, some columns of Geiger counters never click!
Wait a minute. Hold it. When one slit is closed, the electrons, squirting through the other slit, spread out, some going to the left, some straight, some to the right, causing a roughly uniform pattern of clicks across the screen, just as Young's yellow light resulted in a broad bright line in his one-slit experiment. In other words, the electrons behave, logically enough, like particles. But if we remove the lead and let some of the electrons go through slit two, the pattern changes and no electrons reach those columns of Geiger counters corresponding to the dark fringe locations. Now the electrons are acting like waves. Yet we know they are particles because the counters are clicking.
Maybe, you might argue, two or more electrons are passing simultaneously through the slits and simulating a wave interference pattern. To verify that no two electrons are passing simultaneously through the slits, we reduce the rate of electrons to one per minute. Same patterns. Conclusion: electrons going through slit one "know" that slit two is open or closed because they change their patterns in each case.
How do we come up with this idea of "smart" electrons? Put yourself in the place of the experimenter. You have an electron gun, so you know you're shooting particles at the slits. You also know that you end up with particles at the destination, the screen, because the Geiger counters click. A click means particle. So, whether we have one slit or two slits open, we begin and end with particles. However, where the particles land depends on whether one or two slits are open. So a particle going through slit one seems to know whether slit two is open or closed, because it appears to change its path depending on that information. If slit two is closed, it says to itself, "Okay, I can land anywhere on the screen." If slit two is open, it says, "Uh-oh, I have to avoid certain bands on the screen in order to create a fringe pattern." Since particles can't "know," our wave-particle ambiguity has created a logical crisis.
Quantum mechanics says we can predict the probability of the electrons' passage through slits and subsequent arrival at the screen. The probability is a wave, and waves exhibit two-slit interference patterns. When both slits are open, the ψ probability waves can interfere to result in zero probability (ψ = 0) at certain places on the screen. The anthropomorphic complaint of the previous paragraph is a classical hangover; in the quantum world, "How does the electron know which slit to go through?" is not a question that can be answered by measurement. The detailed point-by-point trajectory of the electron is not being observed, and therefore the question "Which slit did the electron go through?" is not an operational question. Heisenberg's uncertainty relations also solve our hangup by pointing out that if you try to measure the electron's trajectory between the electron gun and the wall, you totally change the motion of the electron and destroy the experiment. We can know the initial conditions (electron fired from gun); we can know the results (electron hits some position on screen); we cannot know the path from A to B unless we are prepared to screw up the experiment. This is the spooky nature of the new world in the atom.
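A minimal numerical sketch of that bookkeeping, with assumed dimensions (yellow light, slits 0.1 millimeter apart, a screen 1 meter away): add the two complex amplitudes, then square. Where the amplitudes cancel, the predicted probability, and with it the Geiger-counter rate, is zero.

```python
# A minimal two-slit sketch; the geometry below is assumed for illustration.
import cmath
import math

wavelength = 5.8e-7   # m, yellow light (assumed)
d          = 1e-4     # m, slit separation (assumed)
L          = 1.0      # m, slit-to-screen distance (assumed)

def probability(y):
    """Relative probability of a click at screen position y (meters)."""
    delta = d * y / L                        # path-length difference
    phase = 2 * math.pi * delta / wavelength
    psi = 1 + cmath.exp(1j * phase)          # amplitude from slit 1 plus slit 2
    return abs(psi) ** 2                     # probability = |amplitude|^2

for y in (0.0, 2.9e-3, 5.8e-3, 8.7e-3):     # bright, dark, bright, dark
    print(f"y = {y * 1000:4.1f} mm -> relative probability {probability(y):.2f}")
```

The counters sitting at the ψ = 0 positions never click, even though each electron arrives, and is detected, as a particle.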
The quantum mechanics solution, "Don't worry! We can't measure it," is logical enough, but not satisfying to most human minds, which strive to understand the details of the world around us. For some tortured souls, this quantum unknowability is still too high a price to pay. Our defense: this is the only theory we know now that works.
NEWTON VS. SCHRÖDINGER
A new intuition must be cultivated. We spend years teaching physics students classical physics, then turn around and teach them quantum theory. It takes graduate students two or more years to develop quantum intuition. (You, lucky reader, are expected to perform this pirouette in the space of just one chapter.)
The obvious question is, which is correct? Newton's theory or Schrödinger's? The envelope, please. And the winner is ... Schrödinger! Newton's physics was developed for big things; it doesn't work inside the atom. Schrödinger's theory was designed for micro-phenomena. Yet when the Schrödinger equation is applied to macroscopic situations it gives results identical to Newton's.
Let's look at a classic example. The earth orbits the sun. An electron orbits—to use the old Bohr language—a nucleus. The electron, however, is constrained to specific orbits. Are there only certain allowable quantum orbits for the planet earth around the sun? Newton would say no, the planet can orbit wherever it wants. But the correct answer is yes. We can apply the Schrödinger equation to the earth-sun system. Schrödinger's equation would give the usual discrete set of orbits, but there would be a huge number of them. In using the equation, you'd plug the mass of the earth (instead of the mass of the electron) into the denominator, so the orbital spacings out where the earth is, say, 93 million miles from the sun, would end up so small—say, one every billionth of a billionth of an inch—as to be in effect continuous. For all practical purposes, you end up with the Newtonian result that all orbits are allowed. When you take the Schrödinger equation and apply it to macro objects, it changes in front of your very eyes to... F = ma! Or thereabouts. It was Roger Boscovich, by the way, in the eighteenth century who surmised that Newton's formulas were simply approximations that were good over large distances but wouldn't survive in the microworld. So our graduate students do not have to discard their mechanics books. They may get a job with NASA or the Chicago Cubs, plotting rocket reentry trajectories or pop-ups with good old Newtonian equations.
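A hedged back-of-the-envelope version of that claim, using Bohr-style quantization bolted onto gravity rather than a genuine solution of the Schrödinger equation (the spacing comes out even tinier than the offhand figure above, which only strengthens the point):

```python
# Bohr-style quantization (angular momentum = n*hbar) applied to the
# earth-sun system; a sketch, not a real Schrodinger-equation solution.
import math

G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M    = 1.989e30    # mass of the sun, kg
m    = 5.972e24    # mass of the earth, kg
r    = 1.496e11    # earth-sun distance, m
hbar = 1.055e-34   # Planck's constant over 2*pi, J*s

# Circular orbit: v = sqrt(G*M/r); quantization: n*hbar = m*v*r.
n = m * math.sqrt(G * M * r) / hbar
# Allowed radii grow as n^2, so neighboring orbits sit about dr = 2*r/n apart.
dr = 2 * r / n

print(f"quantum number of earth's orbit:   n ~ {n:.1e}")
print(f"spacing to the next allowed orbit: ~ {dr:.1e} m")
# n ~ 2.5e74 and dr ~ 1e-63 m: a continuum for all practical purposes,
# which is exactly the Newtonian answer.
```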
In quantum theory, the concept of orbits, or of what the electron is doing in the atom or in a beam, is not useful. What matters is the result of a measurement, and here quantum methods can only predict the probability of any possible result. If you measure where the electron is, say in the hydrogen atom, your result could be a number, the distance of the electron from the nucleus. You do this, not by measuring a single electron but by repeating the measurement many times. You get a different result each time, and finally you draw a curve graphing all the results. It is this graph that can be compared to the theory. The theory cannot predict the result of any given measurement. It is a statistical thing. Going back to my cloth-cutter analogy, if we know that the average height of freshmen at the University of Chicago is 5 foot 7, the next new freshman might still be 5 foot 3 or 6 foot 1. We cannot predict the height of the next freshman; we can only draw a kind of actuarial curve.
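A sketch of that repeated-measurement procedure in code, taking the standard hydrogen ground-state radial distribution as the assumed theory curve: each "measurement" is one random draw, unpredictable by itself, and only the accumulated pile of draws matches the theory.

```python
# Repeated "position measurements" on ground-state hydrogen, drawn from
# the standard 1s radial density P(r) ~ r^2 * exp(-2r/a0) (assumed here).
import math
import random

A0 = 1.0  # Bohr radius; work in units of a0

def density(r):
    return r * r * math.exp(-2.0 * r / A0)

PEAK = density(A0)  # the density is largest at r = a0

def measure_once():
    """One simulated measurement, by rejection sampling."""
    while True:
        r = random.uniform(0.0, 10.0 * A0)
        if random.uniform(0.0, PEAK) < density(r):
            return r

samples = [measure_once() for _ in range(100_000)]
print("first three 'measurements':",
      " ".join(f"{s:.2f}" for s in samples[:3]))   # scattered, unpredictable
print(f"mean distance: {sum(samples) / len(samples):.3f} a0 (theory: 1.500 a0)")
```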
Where it gets spooky is in predictions of a particle's passage through a barrier or the decay time of a radioactive atom. We prepare an identical setup many times. We shoot a 5.00 MeV electron at a 5.50 MeV potential barrier. We predict that 45 times out of 100 it will penetrate. But we can't ever be sure what a given electron will do. One gets through; the next one, identical in every way, does not. Identical experiments have different results. That's the quantum world. In classical science we stress the importance of replicating experiments. In the quantum world, we can replicate everything except the result.
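The actuarial flavor of that last point can be played out directly; the 45-percent transmission figure is the one quoted above, and the code simply rolls the dice for a run of identical electrons.

```python
# Identical setups, different outcomes: each 5.00 MeV electron faces the
# same 5.50 MeV barrier with the same transmission probability.
import random

P_TUNNEL = 0.45  # transmission probability quoted in the text

# Ten identical experiments in a row rarely give ten identical results.
run = ["through" if random.random() < P_TUNNEL else "reflected"
       for _ in range(10)]
print(run)

# Over many trials the fraction transmitted settles near 0.45,
# but no single outcome was ever predictable.
trials = 100_000
passed = sum(random.random() < P_TUNNEL for _ in range(trials))
print(f"fraction transmitted over {trials} trials: {passed / trials:.3f}")
```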