by Lee Smolin
Rule 1 is continuous and deterministic; Rule 2, by contrast, is abrupt and probabilistic. The state jumps abruptly just after the measurement, but quantum mechanics predicts only probabilities for the different outcomes, and hence for which state the system jumps to.
Most people are perplexed when they learn about these two rules. As we discussed before, the situation is genuinely puzzling. The first thing that puzzles them is the measurement problem: What’s so special about a measurement? Aren’t measuring devices and the people who use them made of atoms, to which Rule 1 applies?
Rule 1, by dictating how a quantum system changes in time, plays the same essential role in the theory that Newton’s laws of motion played in pre-quantum physics. Like Newton’s laws, Rule 1 is deterministic. It takes an input state and evolves it to a definite output state at a later time. This means it evolves input states that are superpositions into output states that are likewise superpositions. Probability plays no role.
But measurements, as described by Rule 2, do not evolve superpositions to other superpositions. When you measure some quantity, like pet preference or position, you get a definite value. And afterward the state is the one corresponding to that definite value. So even if the input state is a superposition of states with definite values of some observable quantity, the output state is not, as it corresponds to just one value.
Rule 2 does not tell you what the definite value is; it only predicts probabilities for the different possible outcomes to occur. But these probabilities are not spurious; they are part of what quantum mechanics predicts. Rule 2 is essential, because that is how probabilities enter quantum mechanics. And probabilities are essential in many cases; they are what experimentalists measure.
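To see how little Rule 2 actually tells you, it can help to play it out. The following sketch is my own illustration, not anything from the text: it applies Rule 2 to the simplest possible system, one with two outcomes, using the standard quantum prescription that each outcome's probability is the squared magnitude of its amplitude (the function and variable names are mine).

```python
import random

def measure(state):
    """Rule 2 for a two-outcome system. `state` is a pair of complex
    amplitudes (a, b) for outcomes 0 and 1, with |a|^2 + |b|^2 = 1.
    The outcome is random; afterward the state has jumped to the one
    corresponding to the observed value."""
    a, b = state
    probs = [abs(a) ** 2, abs(b) ** 2]  # Rule 2 predicts only these
    outcome = random.choices([0, 1], weights=probs)[0]
    collapsed = (1 + 0j, 0j) if outcome == 0 else (0j, 1 + 0j)
    return outcome, collapsed

# An equal superposition: Rule 2 says each outcome occurs half the time,
# and says nothing about which will occur on any particular run.
superposition = (2 ** -0.5, 2 ** -0.5)
outcome, after = measure(superposition)
# `after` is a definite state, no longer a superposition.
```

Note the contrast with Rule 1: nothing here is deterministic except the collapse that follows the outcome.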
However, quantum mechanics requires that Rule 1 and Rule 2 never be applied to the same process, because the two rules contradict each other. This means we must always distinguish measurements from other processes in nature.
Yet if we are realists, then measurements are just physical processes, and there is nothing special that should distinguish them fundamentally from anything else that happens in nature. Thus, it is very hard to justify giving a special role to measurements within realism. Hence, it is hard to square quantum mechanics with realism.
* * *
AT THE END OF THE DAY, the question will be this: Can we live with these contradictions and puzzles, or do we want and expect more from science?
SIX
The Triumph of Anti-Realism
Quantum theory does not describe physical reality. What it does is provide an algorithm for computing probabilities for the macroscopic events (“detector clicks”) that are the consequences of our experimental interventions. This strict definition of the scope of quantum theory is the only interpretation ever needed, whether by experimenters or theorists.
—CHRIS FUCHS AND ASHER PERES
The person who first understood that quantum physics would require a radically new theory based on a duality of waves and particles was Albert Einstein. Einstein was a realist to the core. Yet the quantum revolution he sparked culminated twenty years later in a theory that requires that measurements be singled out and treated differently than all other processes—a distinction that, as I discussed in the last chapter, is foreign to realism. The resolution, according to most of the pioneers of the quantum world, was to give up realism. How did this abandonment of realism come to happen?
The idea of a duality of wave and particle first appeared in Einstein’s studies of the nature of light in the early years of the twentieth century. By that time physicists had considered theories in which light is a particle and theories in which light is a wave, but always one or the other. Newton considered the wave theory and rejected it in favor of a theory in which light is conveyed by a stream of particles traveling from objects to the eye. (Some ancient thinkers had them going the other way, which led to trouble explaining why we don’t see in the dark.) Newton’s reason for this choice was interesting: he thought that particles did a better job of explaining why light travels in straight lines. Waves, he knew, could bend as they diffract around obstacles, and he didn’t think light could do that. Newton’s particle theory of light reigned until an English scientist named Thomas Young showed in the early years of the nineteenth century that light did indeed bend and diffract at the edges of obstacles and as it passed through slits. Young was a medical doctor who contributed to several areas of science and medicine as well as Egyptology. He was an expert in a broad range of fields, something that the rapid expansion of the sciences was shortly to make impossible. He was sometimes called “the last person to know everything,” but his greatest accomplishment was his wave theory of light, which, together with the experimental evidence he provided for diffraction, led to the overthrow of Newton’s particle theory.
One of the examples Young considered was the double slit experiment, which is illustrated in figure 5. Water waves originating from the left pass a breakwall broken by two slits, on the way to a beach on the right. The waves from the two slits interfere with each other: the height of the water at each point to the right of the wall is a combination of waves propagating from the two slits. When the peaks of the two waves coincide, you see reinforcement—the combined wave is at its highest; but when the peak of one wave arrives in coincidence with the trough of the other, they cancel each other out. The result is the pattern graphed at the right, which is called an interference pattern. The key thing to understand and remember is that the interference pattern is the result of waves arriving from the two slits.
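The arithmetic behind the interference pattern is simple enough to sketch. The following toy calculation is my own illustration (the geometry and the numbers are arbitrary choices): it adds the waves arriving at each point of the screen from the two slits, and squares the combined amplitude to get the brightness.

```python
import cmath
import math

def intensity(x, slit_sep=1.0, wavelength=0.1, screen_dist=10.0):
    """Brightness at position x on a screen behind two slits.
    Each slit sends out a wave; their phases at x differ because the
    path lengths from the two slits to x differ."""
    amp = 0 + 0j
    for slit_y in (+slit_sep / 2, -slit_sep / 2):
        path = math.hypot(screen_dist, x - slit_y)          # slit -> x distance
        amp += cmath.exp(2j * math.pi * path / wavelength)  # add this slit's wave
    return abs(amp) ** 2  # brightness is the squared combined amplitude

# Opposite the midpoint the two paths are equal, so the waves reinforce
# (a bright fringe); where the paths differ by half a wavelength, the
# peak of one wave meets the trough of the other and they cancel.
```

Sweeping `x` across the screen traces out exactly the alternating bright and dark bands of figure 5, and the pattern disappears if only one slit's wave is included, which is the key point: the pattern requires waves from both slits.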
Thomas Young was able to construct the analogue of a double slit apparatus for light, and he saw an interference pattern. This made a strong case for light being a wave.
FIGURE 5. The double slit experiment, which shows that light behaves as a wave.
Further support for the idea that light is a wave came from the Scottish physicist James Clerk Maxwell, who showed around 1860 that light is a wave shimmying through the electric and magnetic fields that fill space as they convey forces between charges and magnets.
Einstein accepted Maxwell’s hypothesis but added one of his own, which was that the energy carried by light waves comes in discrete packets, which he called photons. Thus was born the idea that light has a dual nature—it travels like a wave but conveys energy in discrete units like a particle. Einstein tied together the waves and particles by a simple hypothesis, according to which the energy a photon carries is proportional to the frequency of the light wave.
Visible light spans a range of frequencies, within which red light has the lowest frequency. Blue light is almost the highest frequency we can see, vibrating roughly twice as fast as red. Thus, a blue photon carries roughly twice the energy of a red photon.
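Einstein's hypothesis amounts to a single multiplication. As a sketch (the two frequencies below are approximate illustrative values; the constant of proportionality is Planck's h, in joule-seconds):

```python
PLANCK_H = 6.626e-34  # Planck's constant, joule-seconds

def photon_energy(frequency_hz):
    """Einstein's hypothesis: a photon's energy is proportional to
    the frequency of the light wave."""
    return PLANCK_H * frequency_hz

# Illustrative frequencies: deep red and violet-blue visible light.
red_energy = photon_energy(430e12)    # ~430 trillion cycles per second
blue_energy = photon_energy(750e12)   # ~750 trillion cycles per second
# The ratio is about 1.7: a blue photon carries nearly twice
# the energy of a red one.
```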
What led Einstein to make such a radical proposal? He knew of experiments which could distinguish the effect of increasing the intensity of a beam of light from the effects of changing its color or frequency. This was done by shining light on metal, which caused some of the electrons in the metal to jump out, making an electric current that could be detected by a simple instrument an electrician might use.
The experiments measured how much energy the jumping electrons acquired from the light shining on the metal. The results showed that if you want to increase the energy each electron gets, you have to turn up the light’s frequency. Dialing up the intensity has little or no effect; this merely raises the number of photons falling on the metal, without changing the energy the electron acquires from individual photons. This accords with Einstein’s hypothesis that the electrons take energy from light by absorbing photons, each of whose energy is proportional to the light’s frequency.
Electrons are normally imprisoned in a metal. The energy a photon gives to an electron is like atomic bail: it liberates the electron, allowing it to travel free of the metal. But that bail is set at a certain amount. Photons which carry too little energy have no effect. If the electron is to escape, it has to get its energy from a single photon; it cannot collect up a lot of small increments. Hence, red light doesn’t suffice to get a current started, but even a few photons of blue light will liberate some electrons, because each photon carries enough to bail out an electron.
The fact that no amount of red light, no matter how intense, will suffice to liberate an electron, while even a tiny amount of blue light succeeds, was to Einstein a big hint that the energy of light is carried in discrete packets, each unit proportional to the frequency. An even more direct hint came from measurements carried out in 1902 that showed that, once the threshold for bail was met, the liberated electron flew away with an energy proportional to how far the frequency was over the threshold. This was called the photoelectric effect, and Einstein was the only one who correctly interpreted it as signaling a revolution in science. This was one of four papers he wrote in his miracle year of 1905, when he was twenty-six and working in a patent office.
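The threshold logic of the photoelectric effect fits in a few lines. This is my own sketch of Einstein's relation, with an illustrative "bail" of 3.6e-19 joules (a value I've chosen, roughly the work function of sodium) and the same approximate red and blue frequencies as above:

```python
PLANCK_H = 6.626e-34  # Planck's constant, joule-seconds

def ejected_energy(frequency_hz, bail_joules):
    """Einstein's photoelectric relation: the freed electron's kinetic
    energy is the photon's energy minus the 'bail' (work function).
    A photon below the threshold frees no electron at all."""
    surplus = PLANCK_H * frequency_hz - bail_joules
    return surplus if surplus > 0 else None

BAIL = 3.6e-19  # illustrative work function, joules (~2.3 eV)
red = ejected_energy(430e12, BAIL)    # None: red photons fall short
blue = ejected_energy(750e12, BAIL)   # positive: the electron flies off
```

Note that intensity appears nowhere in the relation: a brighter red beam only delivers more under-threshold photons, each still insufficient, while the freed electron's energy grows linearly with how far the frequency exceeds the threshold, just as the 1902 measurements showed.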
At that time the reigning theory of light was Maxwell’s, namely that light is a wave moving through the electric and magnetic fields. Einstein knew Maxwell’s theory intimately, having carried Maxwell’s book in his pack for a year he spent hiking the mountains as a teenage dropout. No one understood better than Einstein that, great as it was, Maxwell’s wave theory of light could not explain the photoelectric effect. For if Maxwell were right, the energy a wave conveys to an electron would increase with intensity, which is exactly what the experiments were not seeing.
The photoelectric effect was not the only clue. The generation of Einstein’s teachers had developed the study of light given off by hot bodies, such as the glow of red-hot charcoal. There were beautiful experimental results, which the theorists hoped to explain, which showed that the colors of the emitted light change as the charcoal is heated up. In 1900, theoretical physicist Max Planck explained the result through a derivation that featured one of the most creative misunderstandings in the history of science. To get a glimpse into this comedy, you need to know that even at the turn of the twentieth century, the scientific consensus among physicists, which Planck shared, was that there are no atoms—rather, matter is completely continuous. There were a few prominent theorists who believed in atoms, among them Ludwig Boltzmann of Vienna. Boltzmann developed a method for deriving the properties of gases by treating them as collections of atoms.
Planck, even though he was a skeptic of the atomic hypothesis, borrowed the methods Boltzmann used to study gases and applied them to the properties of light.* Without meaning to do so, he effectively described light as a gas made up of photons, rather than atoms. Navigating in deep waters unfamiliar to him, he found he could get an answer that agreed with experiments if he took the energy of each photon to be proportional to the frequency of the light.
Planck didn’t believe in atoms of light any more than he believed in atoms of matter. So he didn’t understand that he had made the revolutionary discovery that light is made of particles. But Einstein believed in both, and, almost single-handedly, he understood that the success of Planck’s theory rested on treating light as a gas of photons. When he learned about the photoelectric effect, he immediately thought of applying to it the proportionality between the energy of a photon and the frequency of light that had appeared in Planck’s work. So it was he, and not Planck, who was given the good fortune of making one of the great discoveries in the history of science: that light has a dual nature, part particle and part wave.
At first Einstein’s proposal was greeted with a high degree of skepticism. After all, there was still the double slit experiment to contend with, which clearly showed light traveled through both slits, like a wave. Somehow, light is both wavelike and particle-like. Einstein was to wrestle with this apparent contradiction for the rest of his life. But by 1921 some detailed predictions he’d made in his 1905 paper had been confirmed, and Einstein was awarded the Nobel Prize for the photoelectric effect.
As a footnote to this story, we can mention that another of the four papers Einstein wrote that year gave the final, convincing proof that matter is made of atoms. Atoms were too small to see even with the best microscopes at that time. So Einstein focused his attention on objects just big enough to see through a microscope: pollen grains. These were known to dance unceasingly when suspended in water, which was at the time a great mystery. Einstein explained that the dance was due to the grains colliding with the water molecules, which are themselves constantly moving.*
The other two papers Einstein wrote in that momentous year presented his theory of relativity and the iconic relation between mass and energy: E = mc².
If we want to find an analogue of what Einstein achieved in that single year, we can only look at Newton. Einstein launched two revolutions—relativity and the quantum. Of the latter he had wrested from nature two precious insights: the dual nature of light, and the relation between the energy of the particle and the frequency of the wave, which ties together the two sides of the duality.
FIGURE 6. Brownian motion, the random jiggling of small particles suspended in a fluid. Einstein explained that the motion results from frequent collisions with the molecules making up the surrounding air or water, which are themselves constantly moving, and he was able to predict how the magnitude of the effect depends on the density of those molecules.
Einstein’s fourth paper, which proved the existence of atoms, said nothing about the quantum nature of light. But it contained two mysteries, which it would take the quantum theory to resolve. How could atoms be stable? And why do atoms of the same chemical element behave identically?
While the theorists had been squabbling over whether atoms existed, experimentalists had been busy separating their constituents. First to be identified was the electron, which was revealed to carry a negative charge and to have a tiny mass, about one two-thousandth of that of a hydrogen atom. The chemical elements were understood to be classified by how many electrons they contained. Carbon has 6 electrons, uranium 92, for example. Atoms are electrically neutral, so if an atom contains, say, 6 electrons, that means if you remove those electrons you get a structure with 6 positive charges. Since electrons are so light, this structure, which we can call the nucleus, has most of the mass.
In 1911 Ernest Rutherford determined that the nucleus of an atom is tiny, compared to the whole atom. If the atom is a small city, the nucleus is a marble. Shrunk into that tiny volume are all the positive charges and almost all the mass of an atom. The electrons orbit the nucleus in the vast empty space that is most of the atom.
The analogy to the solar system is inevitable. The electrons and the nucleus are oppositely charged, and opposite charges attract through the electrical force. This holds the electrons in orbit around the nucleus. This much is similar to planets being held in orbit around a star due to their mutual gravitational attraction. But the analogy is misleading because it hides the two puzzles I mentioned. Each provides a reason why Newtonian physics, which explains the solar system, cannot explain atoms.
Electrons are charged particles, and Maxwell’s great theory of electromagnetism tells us that a charged particle moving in a circle should give off light continuously. According to Maxwell’s theory, which is to say prior to quantum physics, the light given off should have had the frequency of the orbit. But light carries energy away, so the electron should drop closer to the nucleus as its energy decreases. The result should be a quick spiral into the nucleus, accompanied by a flash of light. If Maxwell’s theory is right, there can be no picture of electrons circling in gentle, stable orbits around the nucleus. This can be called the crisis of the stability of electron orbits.
You might ask why the same problem doesn’t afflict planetary orbits. Planets are electrically neutral, so they don’t give off light in the same way. But, according to general relativity, planets in orbit do radiate energy in gravitational waves and spiral into the sun. It is just that gravity is extremely weak, so this process is extraordinarily slow. The effect has been observed in systems consisting of pairs of neutron stars in close orbits. And, very dramatically, gravitational wave antennas have detected the radiation given off by pairs of massive black holes spiraling into each other and merging.
The second problem is why all atoms with a certain number of electrons appear to have identical properties. Two solar systems with six planets each are, beyond that, not generally very similar. The planets will have different orbits and masses and so on. But chemistry works because any two carbon atoms interact with other atoms in exactly the same way. This differs from how oxygen atoms interact, any two of which are also identical to each other. This is the puzzle of the stability of chemical properties. The analogy to the solar system fails because Newtonian physics, which works just fine to explain the solar system, cannot explain why all atoms with six electrons have the same chemical properties.
The answer to both these questions about atoms required applying to atoms the radical new ideas Einstein was developing about the nature of light. This was a bold step of the kind that Einstein was capable of, but even he missed it. The physicist who had the insight was the young Dane Niels Bohr. This insight meant it was Bohr, not Einstein, who would assume the leadership of the revolutionaries who invented quantum mechanics. Throughout his life, Bohr was a radical anti-realist, and it was he, more than anyone else, who was responsible for making the quantum revolution a triumph of anti-realism. Over his career, Bohr fashioned a series of arguments that the behavior of atoms and light could not be understood from a realist perspective.