Dancing With Myself
Young took light from a point source and allowed it to pass through two parallel slits in a screen. When this light was allowed to fall on a second screen held behind the first, a pattern of dark and light bands, known as an interference pattern, was seen. Young’s result is explained easily enough if light is a form of wave, but is incomprehensible if light is assumed to be made up of particles. And in the 1860’s, Maxwell had gone further, showing that light as a form of wave motion appeared as a natural solution of his general theory of electromagnetism. Thus in 1905, when Einstein published his paper on the photoelectric effect, no one but Einstein was willing to concede that light could be anything but waves. And no one was willing to throw overboard the wave theory of light on the word of a twenty-six-year-old unknown, still working for the Swiss Patent Office.
While Einstein was analyzing the photoelectric effect and re-introducing the corpuscular theory of light to physics, other scientists had begun to put together a picture of an atom that was more than the old Greek idea of a simple indivisible particle of matter. Over in Canada, the New Zealand physicist Ernest Rutherford had been studying the new phenomenon of radioactivity, discovered in 1896 by Henri Becquerel. Rutherford found that radioactive material emits charged particles, and when he moved to England in 1907 he began to use those particles to explore the structure of the atom itself. Rutherford found that instead of behaving like a fairly homogeneous sphere of electrical charges, a few billionths of an inch across, the atom had to be made up of a very dense central region, the nucleus, surrounded by an orbiting cloud of electrons. In 1911, Rutherford proposed this new structure for the atom, and pointed out that while the atom was small—a few billionths of an inch—the nucleus was tiny, only about a hundred thousandth as big in radius as the whole atom. In other words, matter, everything from humans to stars, is mostly empty space and moving electric charges.
Our own Solar System is much the same sort of structure, of small isolated planets, far from each other, orbiting the central massive body of the Sun. And to many people this analogy, though no more than an analogy, proved irresistible—so irresistible that it was taken to extreme and implausible lengths. The nuclei were imagined to be really like suns, and the electrons like planets (despite the fact that all electrons appear to be identical, and all planets to be different). Each electron might have its own tiny life forms living on it, and its own infinitesimal people. And the same argument could be taken the other way. Perhaps, it was suggested, our own Solar System was no more than an atom in the hind leg of a super-dog, barking in a super-universe.
That was perhaps a tongue-in-cheek comment that no scientist took too seriously, but there was a problem with the Rutherford atom, much more fundamental than the breakdown of a rather far-fetched analogy. The inside of the atom, in fact, must be far stranger and more alien than any miniature Solar System. For if electrons were orbiting a nucleus, they were accelerating electric charges, and Maxwell’s general theory of electromagnetism insisted that accelerating charges ought to radiate energy. But if they did lose energy, then according to all classical rules they would quickly have to fall into the nucleus and the atom would collapse.
This didn’t happen.
Why didn’t it? Why was the atom a stable structure?
That question was addressed by the Danish physicist Niels Bohr, who at the time was working with Rutherford’s group in Manchester, England. He applied the “quantization” notion—that things occur in discrete pieces, rather than in continuous forms—to the structure of atoms.
In the Bohr atom, which he introduced in 1913, electrons do move in orbits around the nucleus, just like a miniature solar system. But the reason they don’t spiral in is that they can only lose energy in chunks—quanta—rather than continuously. Electrons are permitted orbits of certain energies, and can move from one to another when they emit or absorb light and other radiation; but they can’t occupy intermediate positions, because to get there they would need to emit or absorb some fraction of a quantum of energy, and by definition, fractions of quanta don’t exist. The permitted energy losses in Bohr’s theory were again governed by the wavelengths of the emitted radiation and by Planck’s constant.
It sounded crazy, but it worked. With his simple model, applied to the hydrogen atom, Bohr was able to calculate the right emitted wavelengths (known as the emission spectrum) for hydrogen. No earlier theory had been able to do that. Thus, although the idea that electrons literally orbit the nucleus, rather than simply residing outside it with certain definite energies, was found misleading and subsequently dropped, the quantum nature of the electron energy levels within the atom remained and proved of central importance. Electrons jump from one level to another, and as they do so they give off or absorb quanta of radiation.
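To see how simple the arithmetic is, here is a minimal sketch of Bohr’s hydrogen calculation (the code and its variable names are my own illustration, using standard values for the constants): the energy of the nth level goes as minus 13.6 electron volts divided by n squared, and the wavelength of an emitted quantum follows from the energy difference between two levels.

```python
# A minimal sketch of the Bohr hydrogen calculation (illustrative only;
# constants are standard values, names are mine, not the original text's).
RYDBERG_EV = 13.6057   # hydrogen ground-state binding energy, in electron volts
HC_EV_NM = 1239.84     # Planck's constant times the speed of light, in eV*nm

def energy_level(n):
    """Energy of the nth Bohr level of hydrogen, in eV (negative = bound)."""
    return -RYDBERG_EV / n ** 2

def emitted_wavelength_nm(n_from, n_to):
    """Wavelength of the quantum given off in a jump from level n_from down to n_to."""
    quantum_energy = energy_level(n_from) - energy_level(n_to)
    return HC_EV_NM / quantum_energy

# The Balmer series (jumps that end on level 2) gives hydrogen's visible lines:
for n in range(3, 7):
    print(f"{n} -> 2: {emitted_wavelength_nm(n, 2):6.1f} nm")
# prints 656.1, 486.0, 433.9, 410.1 nm -- close to the measured spectrum.
```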
How long does it take an electron to move from one quantum level to another?
Here we seem to catch our first glimpse of speeds faster than light. According to Bohr’s theory (and subsequent ones), we can’t speak of an electron “moving” from one state to another. If it moved, in any conventional sense of the word, then there would have to be intermediate positions that corresponded to some intermediate energy. There is no evidence of any such intermediate, fractional, energies. The electron “jumps,” disappearing from one state and appearing in another without any discernible transition. It’s meaningless to ask how fast it went.
It may seem natural to ask if there might really be a whole sequence of transition states, ones that we are simply unable to observe. Quantum theory is quite insistent upon this point: if we can’t observe it, it is not a part of physics; it belongs to the realm of metaphysics. “Only questions about the results of experiments have a real significance and it is only such questions that theoretical physics has to consider”—Dirac. This is in contrast to the nineteenth century view of physics, in which Nature was thought to be evolving like some great machine, with all questions about that machine permitted and potentially answerable, even if we could not give answers with present theories.
The next step came in 1923, and it was made by Louis de Broglie. He knew that Einstein had associated particles (photons) with light waves. He asked if wave properties ought to be assigned to particles, such as electrons and protons, and if so, how. He found that the Bohr “orbits” of electrons in atoms were just right for a whole number of waves to fit into the available space. He also suggested that there should be direct evidence that particles like electrons can be diffracted, just as a light wave is diffracted by an aperture or an object in its path.
His suggestion proved to be entirely correct. If waves behaved like particles, particles also behaved like waves.
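De Broglie’s whole-number condition is easy to check numerically. In this sketch (my own illustration, with standard values for the constants), the circumference of the nth Bohr orbit of hydrogen, divided by the electron’s de Broglie wavelength in that orbit, comes out to exactly n:

```python
import math

# Sketch (illustrative): verify that a whole number of de Broglie waves
# fits around each Bohr orbit of hydrogen.  Constants are standard values.
H = 6.62607e-34        # Planck's constant, J*s
M_E = 9.10938e-31      # electron mass, kg
A0 = 5.29177e-11       # radius of the first Bohr orbit, m
V1 = 2.18769e6         # electron speed in the first Bohr orbit, m/s

for n in range(1, 5):
    radius = n ** 2 * A0            # Bohr orbit radius grows as n squared
    speed = V1 / n                  # orbital speed falls as 1/n
    wavelength = H / (M_E * speed)  # de Broglie: wavelength = h / momentum
    waves_per_orbit = 2 * math.pi * radius / wavelength
    print(f"orbit {n}: {waves_per_orbit:.3f} waves fit around it")
# prints 1.000, 2.000, 3.000, 4.000 -- exactly n waves per orbit.
```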
The situation in 1924 can now be summarized as follows:
1. Radiation came in quanta—discrete bundles of energy.
2. Those quanta, the photons, interacted with matter as though they were particles, but everything else suggested that radiation was a form of wave.
3. The structure of the atom could be explained by assuming that the permissible positions for electrons corresponded to well-defined, discrete energy values.
4. Particles show wavelike properties, just as waves seem to be composed of particles.
5. One fundamental constant, Planck’s constant, is central to all these different phenomena.
The stage was set for the development of a complete form of quantum mechanics, one that would allow all the phenomena of the subatomic world to be tackled with a single theory.
It was also set for an unprecedented confusion about the nature of physical reality, and a debate that still goes on today.
Before we go any farther, let us note a few things about the five points just made. First, the papers which presented these ideas were not difficult technically. They did not demand a knowledge of advanced mathematics to be comprehensible. Some of the mathematics used in these ground-breaking papers has a simple, almost homemade look to it. (This was equally true of Einstein’s papers on special relativity.) However, the new ideas were very difficult conceptually, since they required the reader to throw away many cherished and long-held “facts” about the nature of the universe. In their place, scientists were asked to entertain notions that were not just unfamiliar—they seemed positively perverse. Energy was previously supposed to be a continuously variable quantity, with no such thing as a “smallest” piece of emitted energy—but now people were asked to think that energy came in separate units of precise denomination, like coins or postage stamps. And light was waves, electrons were particles; there should be no way they could both be both, depending on how you looked at them or (as William Bragg jokingly suggested) on which day of the week it happened to be.
The other thing to note is the power of hindsight.
It is easy for us today to sit back and pick out the half-dozen fundamental papers that paved the way for the development of quantum theory, just as it is easy to point to seminal papers by Heisenberg, Schrödinger, Born, and Dirac that created the theory. But at the time, the right path for progress was not clear at all. The crucial papers and the ideas that worked were not the only ones being produced at the time. Everyone had more or less equal access to the same experimental results, and hundreds of attempts were made to reconcile them with existing nineteenth century physics. Many of those attempts employed the full arsenal of nineteenth century theory, and they were impressive in their complexity and in the mathematical skills that they displayed.
It called for superhuman brains to sit in the middle of all that action and distinguish the real advances from the scores of other well-intentioned but unsuccessful attempts, or from totally alien and harebrained crank theories. A handful of physicists were able to see what was significant, and build upon it. If we think that the development of quantum theory is hard to follow, we should reflect on how much harder it was to create.
The same strangeness of thought patterns was there in all the creators of quantum theory—the whiz kids, Heisenberg and Wolfgang Pauli and Dirac and Pascual Jordan, all in their middle twenties; the wise advisers Bohr and Einstein and Born, men in their mid-forties; the young mathematicians Hermann Weyl and John von Neumann; and the odd man out—Schrödinger, thirty-nine when he published his famous equation in 1926, and an old man for such a fundamental new contribution to physics.
(Of that original legendary group from the mid-20’s, I met only one. Paul Dirac, the most powerful theorist of the lot, taught the first course that I ever took in quantum theory. In retrospect, I think of it as a Chinese meal course. Dirac would derive his results, and they were all so clear and logical that they seemed self-evident. Then I would go away, and a couple of hours later I would try to reconstruct his logic. It was gone. What was self-evident to him was not so to me.
I also met him on several occasions socially, though that may be the wrong word. Dirac was a famously shy man. At cocktail parties in the senior common room at St. John’s College, Cambridge, the professors and the graduate students got together a couple of times a year, to drink sherry and make polite conversation. Dirac was always very affable, but he had no particular store of social chit-chat. And we were too in awe of him, and too afraid of looking like idiots, to ask anything about quantum theory, or about the people he had worked with in developing it. Talk about wasted opportunity!)
The logical next step in this article would be to go on with the historical development of the theory. We could show how in 1926 Schrödinger employed the apparent wave-particle duality to come up with a basic equation that can be applied to almost all quantum mechanics problems; how Heisenberg, using the fact that atoms emit and absorb energy only in finite and well-determined pieces, was able to produce another set of procedures that could also be applied to almost every problem; and how Dirac, Jordan, and others were able to show that the two approaches were just different ways of representing a single construct, that construct being quantum theory in its most general form.
We will not proceed that way. Interesting as it is, it would take too long. Instead we will take advantage of hindsight. We will move at once to what writers from Dirac to Feynman have agreed is the single most important experiment—the one which is totally inexplicable without quantum theory, and the one which is the source of endless argument and discussion within quantum theory.
It is an experiment that Thomas Young would recognize at once.
3. THE KEY EXPERIMENT
We start, as did Thomas Young, with a pair of slits in a screen. Instead of light, this time we have a source of electrons (or some other atomic particle). Electrons that go through a slit hit a sensitive film, or some other medium that records their arrival. As Louis de Broglie predicted, when we do the parallel slit experiment and look at the pattern, we see a wavelike interference effect, showing that the electron has wave properties.
The problem is, to get that pattern it is necessary to assume that each single electron goes partly through both slits.
This sounds like gibberish. The obvious thing to say is, it can’t go through both, and it should be easy enough to see which one it did go through: you simply watch the slit, to see if the electron passes by. If you do that, you will always observe either one electron, or no electron—and when you make such an observation, the interference pattern goes away.
Quantum theory says that before you made the observation, the electron was partly heading through one slit, partly through the other. In quantum theory language, it had components of both states, meaning that there was a chance it would go through one, and a chance it would go through the other. Your act of observation forced it to pick one of those states. And the one it will pick cannot be known in advance—it is decided completely randomly. The probability of the electron ending in one particular state may be larger than for another state, but there is absolutely no way to know, in advance, which state will be found when we make the observation.
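The difference between the two situations can be sketched in a few lines of arithmetic. In this illustration (the wavelength, the slit separation, and everything else here are assumed numbers of my own, chosen only for demonstration), the unwatched electron’s amplitudes from the two slits add before being squared, giving fringes; once we watch the slits, the probabilities add instead, and the fringes vanish:

```python
import numpy as np

# Sketch of the two-slit arithmetic (all numbers assumed for illustration).
# Unwatched: amplitudes from the two slits add, THEN we square -> fringes.
# Watched: each electron is forced through one slit -> probabilities add, no fringes.
wavelength = 5e-11                       # assumed electron de Broglie wavelength, m
slit_gap = 1e-6                          # assumed distance between the slits, m
angles = np.linspace(-1e-4, 1e-4, 9)     # angles to points on the recording film, rad

phase = 2 * np.pi * slit_gap * np.sin(angles) / wavelength  # path difference as phase
unwatched = np.cos(phase / 2) ** 2       # |amp1 + amp2|^2, normalized so peaks are 1
watched = np.full_like(angles, 0.5)      # |amp1|^2 + |amp2|^2: flat, no pattern

for a, u, w in zip(angles, unwatched, watched):
    print(f"angle {a:+.2e} rad   unwatched {u:.2f}   watched {w:.2f}")
```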
This “probabilistic interpretation” of what goes on in the parallel slit experiment and elsewhere in quantum theory was introduced by Born in 1926. It is usually referred to as the “Copenhagen interpretation,” even though Born worked at Göttingen. (That was because Niels Bohr, of Copenhagen, included Born’s idea as part of a general package of methods of quantum theory, allowing anyone to solve problems of atoms and molecules.) The Copenhagen interpretation said, in effect, that the theory can never tell you exactly where something is before you make a measurement to determine where it is. Prior to the observation, the object existed only as a sort of cloud of probability, maximum in one place but extending over the whole of space. And it is the observation itself that forces the particle to make a choice (I think I am speaking anthropomorphically) as to where it is.
Many people found this idea of “quantum indeterminacy” incomprehensible, and many who understood it hated it—including Einstein and Schrödinger. Schrödinger said: “I don’t like it, and I’m sorry I ever had anything to do with it,” and “Had I known that we were not going to get rid of this damned quantum jumping, I never would have involved myself in this business.”
But one man’s poison may be another man’s bread and butter. Science fiction writers have regarded the probabilistic element of quantum theory as providing not a problem, but a great deal of license. The logic runs as follows: If there is a finite probability of an electron or other subatomic particle being anywhere in the universe, then since humans are made up of subatomic particles, there must also be a finite probability that we are not here, but somewhere else. Now, if we could just make a quantum jump to one of those other places, we would have achieved travel—instantaneously.
This is a case where the term “finite probability” is true, but totally misleading. If you work out the probability of a simultaneous quantum jump (presumably you want to all go simultaneously, and to the same place—no fun arriving in a distributed condition, or half of you a day late) you get a number with so many zeros after the decimal that the universe will end long before you could write them all down. As in so many things in science, what seems like a good idea until you look at the numbers is killed off by calculation.
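For the curious, here is the back-of-the-envelope version. Every number in it is an assumption of my own, chosen to be absurdly generous to the would-be quantum traveler, and the conclusion is unchanged:

```python
import math

# Back-of-envelope sketch; every number here is an assumption, deliberately
# over-generous to the would-be quantum traveler.
particles = 1e28   # rough count of electrons, protons, and neutrons in a person
p_single = 1e-6    # per-particle chance of jumping to the right place (far too high)

# The joint probability is p_single raised to the power of particles, a number
# far too small to compute directly; count its zeros with logarithms instead.
zeros = particles * -math.log10(p_single)
print(f"roughly {zeros:.0e} zeros after the decimal point")
# ~6e28 zeros: at one zero per second, that is ~2e21 years of writing,
# vastly longer than the ~1.4e10-year age of the universe so far.
```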
It’s perhaps a good thing that this probabilistic aspect of quantum mechanics was emphasized only after people already knew that the methods gave results corresponding to experiment. Otherwise, the lack of determinacy that the theory implies might have been enough to stop any further work along those lines. As it was, Einstein always rejected the indeterminacy, arguing that somewhere behind it all there had to be a theory without random elements. (Perhaps Einstein’s most quoted line is: “God does not play dice.” He also said, in the same letter to Max Born, “Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing.”)
Can we dispose of the idea that the electron is in a mixed-state condition before we observe it, perhaps on some logical grounds?
Suppose that atoms, protons, or electrons were as big as tennis balls, or at least weighed as much (as the average citizen was apparently quite ready to believe forty years ago). We could still perform the double slit experiment, and we could watch which slit any particular electron went through. Moreover, a single photon, reflecting off a passing electron, would be enough to give us that information. It would then sound totally ridiculous that the infinitesimal disturbance produced by the impact of one photon could cause a profound change in the results of the experiment.
What does quantum theory tell us in such a case?
It provides an answer that many people find intellectually very disturbing. When we observe large objects, says the theory, the disturbance caused by the measuring process is small compared with the object being measured, and we will then obtain the result predicted by classical physics. However, when the disturbing influence (e.g., a photon) is comparable in size with the object being observed (e.g., an electron), then the classical rules go out of the window, and we must accept the world view offered by quantum theory. Quantum theory thus provides an absolute scale to the universe—a thing is small if quantum theory must be used to calculate its behavior.