by Neil Turok
· · ·
THE STRANGE STORY OF the quantum begins with the humble electric light bulb. In the early 1890s, Max Planck, then a professor in Berlin, was advising the German Bureau of Standards on how to make light bulbs more efficient so that they would give out the maximum light for the least electrical power. Max Born later wrote about Planck: “He was by nature and by the tradition of his family conservative, averse to revolutionary novelties and skeptical towards speculations. But his belief in the imperative power of logical thinking based on facts was so strong that he did not hesitate to express a claim contradicting to all tradition, because he had convinced himself that no other resort was possible.” 38
Planck’s task was to predict how much light a hot filament gives out. He knew from Maxwell’s theory that light consists of electromagnetic waves, with each wavelength describing a different colour of light. He had to figure out how much light of each colour a hot object emits. Between 1895 and 1900, Planck made a series of unsuccessful attempts. Eventually, in what he later called an “act of despair,” 39 he more or less worked backward from the data, inferring a new rule of physics: that light waves could accept energy only in packets, or “quanta.” The energy of a packet was given by a new constant of nature, Planck’s constant, times the oscillation frequency of the light wave: the number of times per second the electric and magnetic fields vibrate back and forth as an electromagnetic wave travels past any point in space. The oscillation frequency is given by the speed of light divided by the wavelength of the light. Planck found that with this rule he could perfectly match the experimental measurements of the spectrum of light emitted from hot objects. Much later, Planck’s energy packets became known as photons.
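To put the rule in modern symbols (my notation, not quoted from the text):

$$ E = h\,\nu, \qquad \nu = \frac{c}{\lambda}, $$

where $h \approx 6.63\times10^{-34}$ joule-seconds is Planck’s constant, $\nu$ is the oscillation frequency, $c$ is the speed of light, and $\lambda$ is the wavelength. As a worked example of my own: green light with $\lambda \approx 500$ nanometres has $\nu \approx 6\times10^{14}$ oscillations per second, so each of its quanta carries only about $4\times10^{-19}$ joules.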
Planck’s rule was less ad hoc than it might at first seem. He was a sophisticated theorist, and well appreciated a powerful formalism that had been developed by the Irish mathematical physicist William Rowan Hamilton in the 1830s, building on earlier ideas of Fermat, Leibniz, and Maupertuis. Whereas Newton had formulated his laws of motion as rules for following a system forward from one moment in time to the next, Hamilton considered all the possible histories of a system, from some initial time to some final time. He was able to show that the actual history of the system, the one that obeyed Newton’s laws, was the one that minimized a certain quantity called the “action.”
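In symbols (a standard summary, not spelled out in the text), the action of a candidate history is

$$ S = \int_{t_1}^{t_2} L\,dt, \qquad L = \text{kinetic energy} - \text{potential energy}, $$

the quantity accumulated between the initial time $t_1$ and the final time $t_2$. Demanding that $S$ be stationary, typically a minimum, under small changes of the history reproduces Newton’s equations of motion.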
Let me try to illustrate Hamilton’s idea with the example of what happens when you’re leaving a supermarket. When you’re done with your grocery shopping, you’re faced with a row of checkouts. The nearest will take you less time to walk to, but there may be more people lined up. The farthest checkout will take longer to walk to but may be empty. You can look to see how many people have baskets or trolleys, how much stuff is in them, and how much is on the belt. And then you choose what you think will be the fastest route.
This is, roughly speaking, the way Hamilton’s principle works. Just as you minimize the time it takes to leave the supermarket, physical systems evolve in time in such a way as to minimize the action. Whereas Newton’s laws describe how a system edges forward in time, Hamilton’s method surveys all the available paths into the future and chooses the best among them.
Hamilton’s new formulation allowed him to solve many problems that could not be solved before. But it was much more than a technical tool: it provided a more integrated picture of reality. It helped James Clerk Maxwell develop his theory of electromagnetism, and it guided Planck to an inspired guess that launched quantum theory. In fact, Hamilton’s version of mechanics had anticipated the future development of quantum theory. Just as you find when leaving the supermarket that there may be several equally good options, Hamilton’s action principle suggests that in some circumstances the world might follow more than one history. Planck was not ready to contemplate such a radical departure from physics’ prior perspectives but, decades later, others would. As we will see in Chapter Four, by the end of the twentieth century all the known laws of physics were expressed in terms of the quantum version of Hamilton’s action principle.
The point of all this for our story is that Planck knew that Hamilton’s action principle was a fundamental formulation of physics. It was therefore natural for him to try to relate his quantization rule to Hamilton’s action. The units in which Hamilton’s action is measured are energy times time. The only time involved in a light wave is the oscillation period, equal to the inverse of the oscillation frequency. So Planck guessed that the energy of an electromagnetic wave times its oscillation period is equal to a whole-number multiple of a new constant of nature, which he called the “action quantum” and which we now call “Planck’s constant.” Because Planck believed that all the laws of physics could be included in the action, he hoped that one day his hypothesis of quantization might become a new universal law. In this guess, he would eventually be proven right.
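Written as a formula (again a modern paraphrase), Planck’s guess was

$$ E \times T = n\,h, \qquad n = 1, 2, 3, \ldots $$

Since the oscillation period $T$ equals $1/\nu$, this is the same statement as $E = n\,h\,\nu$: the energy carried by a wave of frequency $\nu$ comes only in whole-number multiples of the quantum $h\nu$.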
Two of Planck’s colleagues at Berlin, Ferdinand Kurlbaum and Heinrich Rubens, were leading experimentalists of the time. By 1900, their measurements of the spectrum of light emitted from hot objects had become very accurate. Planck’s new guess for the spectrum, based on his quantization rule, fitted their data beautifully and explained the changes in colour as an object heats up. For this work, Planck came to be regarded as the founder of quantum theory. He tried but failed to go further. He later said: “My unavailing attempts to somehow reintegrate the action quantum into classical theory extended over several years and caused me much trouble.” 40 Physics had to wait for someone young, bold, and brilliant enough to make the next leap.
PLANCK WAS GUIDED TO his result in part by theory and in part by experiment. In 1905, Albert Einstein published a clearer and more incisive theoretical analysis of why the classical description of electromagnetic radiation failed to describe the radiation from hot objects.
The most basic notion in the theory of heat is that of thermal equilibrium. It describes how energy is shared among all the components of a physical system when the system is allowed to settle down. Think of an object that, when cool, is perfectly black in colour, so it absorbs any light that falls on it. Now heat up this object and place it inside a closed, perfectly insulating cavity. The hot object emits radiation, which bounces around inside the cavity until it is reabsorbed. Eventually, an equilibrium will be established in which the rate at which the object emits energy — the quantity Planck wanted to predict — equals the rate at which it absorbs energy. In equilibrium, there must be a perfect balance between emission and absorption, at every wavelength of light. So it turns out that in order to work out the rate of emission of light from a perfectly black object when it is hot, all you need to know is how much radiation of each wavelength there is inside a hot cavity, which has reached equilibrium.
The Austrian physicist Ludwig Boltzmann had developed a statistical method for describing thermal equilibrium. He had shown that in many physical systems, on average, the energy would be shared equally among every component. He called this the “principle of equipartition.” Einstein realized that electromagnetic waves in a cavity should follow this rule, and that this created a problem for the classical theory. The trouble was that Maxwell’s theory allows electromagnetic waves of all wavelengths, down to zero wavelength. There are only so many ways to fit a wave of a given wavelength inside a cavity of a certain size. But for shorter and shorter waves, there are more and more ways to fit the waves in. When we include waves of arbitrarily short wavelength, there are infinitely many different ways to arrange them inside the cavity. According to Boltzmann’s principle, every one of these arrangements will carry the same average amount of energy. Together, they have an infinite capacity to absorb heat, and they will, if you let them, soak up all the energy.
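A rough counting argument (my paraphrase of the standard one) makes the difficulty quantitative: in a cubical cavity of side $L$, the number of distinct standing-wave patterns with wavelength longer than $\lambda$ grows roughly like $(L/\lambda)^3$. Let $\lambda$ shrink toward zero and the number of modes grows without bound, while equipartition assigns every one of them the same average energy.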
Again, let me try to draw an analogy. Think of a country whose economy has a fixed amount of money in circulation (not realistic, I know!). Imagine there are a fixed number of people, all buying and selling things to and from each other. If the people are all identical (not realistic, either!), we would expect a law of equipartition of money. On average, everyone would have the same amount of money: the total amount of money divided by the total number of people.
Now imagine introducing more, smaller people into the country. For example, introduce twice as many people of half the size, four times as many of a quarter the size, eight times as many of one-eighth the size, and so on. You just keep adding people, down to zero size, with all of them allowed to buy and sell in exactly the same way. Now, I hope you can see the problem: if you let the tiny people trade freely, then because there are so many of them they will absorb all the money and leave nothing for anyone else.
Planck’s rule is like imposing an extra “market regulation” stating that people can trade money only in a certain minimum quantum, which depends inversely on their size. Larger people can trade in cents. People half as big can trade only in amounts of two cents, people half as big again in four cents, and so on. Very small people can trade only in very large amounts — they can buy and sell only very expensive, luxury items. And the smallest people cannot trade at all, because their money quantum would be larger than the entire amount of money in circulation.
With this market regulation rule, an equilibrium would be established. Smaller people are more numerous and have a larger money quantum. So there is a certain size of people that can share all the money between them, and still each have enough for a few quanta so they aren’t affected by the market regulation. In equilibrium, people of this size will hold most of the money. Increase the total money in circulation, and you will decrease the size of the people holding most of the money.
In the same way, Einstein showed, if you imposed Planck’s quantization rule, most of the energy inside a hot cavity would be held by waves just short enough to each hold a few quanta while sharing all the energy between them. Heat up the cavity, and shorter and shorter waves will share the energy in this way. Therefore if a hot body is placed inside the cavity and allowed to reach equilibrium, the wavelength of radiation it emits and absorbs decreases as the cavity heats up.
And this is exactly how hot bodies behave. If you heat up an object like a metal poker, as it gets hotter it glows red, then yellow, then white, and finally blue and violet when it becomes extremely hot. These changes are due to the decrease in wavelength of the light emitted, predicted by Planck’s quantization rule. Human beings have been heating objects in fires for millennia. The colour of the light shining at them was telling them about quantum physics all along.
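The trend can be stated quantitatively using a rule that follows from Planck’s formula, Wien’s displacement law (added here for concreteness, not part of the original passage): the wavelength at which emission peaks is

$$ \lambda_{\text{peak}} \approx \frac{2.9\times10^{-3}\ \text{metre kelvin}}{T}. $$

A poker at roughly 1,500 kelvin peaks near 1,900 nanometres, in the infrared, so only the red tail of its glow is visible; the surface of the sun, at about 5,800 kelvin, peaks near 500 nanometres, in the middle of the visible range.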
In fact, as we understand physics today, it is only Planck’s quantization rule that prevents the short wavelength electromagnetic waves from dominating the emission of energy from any hot object, be it a lighted match or the sun. Without Planck’s “market regulation,” the tiniest wavelengths of light would be like the “Dementors” in the Harry Potter books, sucking all the energy out of everything else. The disaster that Planck avoided is referred to as the “ultraviolet catastrophe” of classical physics, because the shortest wavelengths of visible light are violet. (In this context, “ultraviolet” just means “very short wavelength.”)
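To see the catastrophe in numbers, here is a short computational sketch. It is my own illustration, using the standard classical (Rayleigh–Jeans) and Planck formulas for the energy per unit wavelength inside a cavity; nothing in it is quoted from the text.

```python
# A rough numerical sketch: compare the classical (Rayleigh-Jeans) prediction
# with Planck's formula for the energy density of radiation in a cavity at
# T = 5800 K, roughly the temperature of the sun's surface.
import math

h = 6.626e-34   # Planck's constant, joule-seconds
c = 2.998e8     # speed of light, metres per second
k = 1.381e-23   # Boltzmann's constant, joules per kelvin
T = 5800.0      # temperature, kelvin

def rayleigh_jeans(lam):
    """Classical prediction: every mode holds the same average energy kT."""
    return 8 * math.pi * k * T / lam**4

def planck(lam):
    """Planck's formula: a mode of wavelength lam trades energy in quanta h*c/lam."""
    x = h * c / (lam * k * T)
    return (8 * math.pi * h * c / lam**5) / math.expm1(x)

for lam_nm in [10000, 2000, 1000, 500, 250, 100, 50]:
    lam = lam_nm * 1e-9
    print(f"{lam_nm:6d} nm   classical {rayleigh_jeans(lam):10.3e}"
          f"   Planck {planck(lam):10.3e}")

# The classical column keeps growing as the wavelength shrinks (the
# "ultraviolet catastrophe"), while Planck's column peaks near 500 nm
# and then falls to almost nothing at short wavelengths.
```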
It is tempting to draw a parallel between this ultraviolet catastrophe in classical physics and what is now happening in our modern digital age. As computers and the internet become increasingly powerful and cheap, the ability to generate, copy, and distribute writing, pictures, movies, and music at almost no cost is creating another ultraviolet catastrophe, an explosion of low-grade, useless information that is in danger of overwhelming any valuable content. Where will it all end? Digital processors are now becoming so small that over the next decade they will approach the limits imposed by the size of atoms. Operating on these scales, they will no longer behave classically and they will have to be accessed and operated in quantum ways. Our whole notion of information will have to change, and our means of creating and sharing it will become much closer to nature. And in nature, the ultraviolet catastrophe is avoided through quantum physics. As I will discuss in the last chapter, quantum computers may open entirely new avenues for us to experience and understand the universe.
EINSTEIN’S 1905 PAPER CLEARLY described the ultraviolet catastrophe in classical physics and how Planck’s quantum rule resolved it. But it went much farther, showing that the quantum nature of light could be independently seen through a phenomenon known as the “photoelectric effect.” When ultraviolet light is shone on the surface of a metal, electrons are found to be emitted. In 1902, the German physicist Philipp Lenard had studied this phenomenon and showed that the energy of the individual emitted electrons increased with the frequency of the light. Einstein showed that the data could be explained if the electrons were absorbing light in quanta, whose energy was given by Planck’s rule. In this way, Einstein found direct confirmation of the quantum hypothesis. Yet, like Planck, Einstein also saw the worrying implications of quantization for any classical view of reality. He was later quoted as saying: “It was as if the ground was pulled out from under one.” 41
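Einstein’s relation can be written compactly (in its modern textbook form, which the passage above describes in words): an electron that absorbs a single quantum of light of frequency $\nu$ emerges with kinetic energy

$$ E_{\text{electron}} = h\,\nu - W, $$

where $W$, the “work function,” is the minimum energy needed to free an electron from that particular metal. Raising the frequency of the light therefore raises the energy of each emitted electron, just as Lenard had observed.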
In 1913, the upheaval continued when Niels Bohr, working at Manchester under Ernest Rutherford, published a paper titled “On the Constitution of Atoms and Molecules.” Much as Planck had done for light, Bohr invoked quantization to explain the orbits of electrons in atoms. Just before Bohr’s work, Rutherford’s experiments had revealed the atom’s inner structure, showing it to be like a miniature solar system, with a tiny, dense nucleus at its centre and electrons whizzing around it.
Rutherford used the mysterious alpha particles, which Marie and Pierre Curie had observed to be emitted from radioactive material, as a tool to probe the structure of the atom. He employed a radioactive source to bombard a thin sheet of gold foil with alpha particles, and he detected how they scattered. He was amazed to find that most particles went straight through the metal but a few bounced back. He concluded that the inside of an atom is mostly empty space, with a tiny object — the atomic nucleus — at its centre. Legend has it that the morning after Rutherford made the discovery, he was scared to get out of bed for fear he would fall through the floor.42
Rutherford’s model of the atom consisted of a tiny, positively charged nucleus orbited by negatively charged electrons. Since unlike charges attract, the electrons are drawn into orbit around the nucleus. However, according to Maxwell’s theory of electromagnetism, as the charged electrons travelled around the nucleus they would cause changing electric and magnetic fields and they would continually emit electromagnetic waves. This loss of energy would cause the electrons to slow down and spiral inward to the nucleus, causing the atom to collapse. This would be a disaster every bit as profound as the ultraviolet catastrophe: it would mean that every atom in the universe would collapse in a very short time. The whole situation was very puzzling.
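How short a time? A standard estimate from classical electrodynamics (a textbook result, not worked out in the passage above) is

$$ t_{\text{collapse}} \sim \frac{r^{3}}{4\,r_e^{\,2}\,c} \approx 10^{-11}\ \text{seconds}, $$

taking $r \approx 5\times10^{-11}$ metres for the size of the atom and $r_e \approx 2.8\times10^{-15}$ metres for the so-called classical electron radius.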
Niels Bohr, working with Rutherford, was well aware of the puzzle. Just as Planck had quantized electromagnetic waves, Bohr tried to quantize the orbits of the electron in Rutherford’s model. Again, he required that a quantity with the same units as Hamilton’s action — in Bohr’s case, the momentum of the electron times the circumference of its orbit — come in whole-number multiples of Planck’s constant. A hydrogen atom is the simplest atom, consisting of just one electron in orbit around a proton, the simplest nuclear particle. One quantum gave the innermost orbit, two quanta the next, and so on: each additional quantum produced a new allowed orbit, farther from the nucleus. In each orbit, the electron has a certain amount of energy. It could “jump” from one orbit to another by absorbing or emitting electromagnetic waves with just the right amount of energy.
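In formulas (a modern textbook rendering rather than Bohr’s own notation), the condition is

$$ p \times 2\pi r = n\,h, \qquad n = 1, 2, 3, \ldots $$

Combined with the classical balance between the electrical attraction of the proton and the circular motion of the electron, it gives orbits of radius $r_n = n^2 a_0$, with $a_0 \approx 0.05$ nanometres, and orbital energies

$$ E_n = -\frac{13.6\ \text{electron volts}}{n^2}. $$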
Experiments had shown that atoms emitted and absorbed light only at certain fixed wavelengths, corresponding through Planck’s rule to fixed packets of energy. Bohr found that with his simple quantization rule, he could accurately match the wavelengths of the light emitted and absorbed by the hydrogen atom.
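In a jump from orbit $m$ down to orbit $n$, the emitted quantum carries the energy difference, $h\nu = E_m - E_n$. With Bohr’s energies this gives, for the wavelength of the emitted light,

$$ \frac{1}{\lambda} = R\left(\frac{1}{n^2} - \frac{1}{m^2}\right), \qquad R \approx 1.1\times10^{7}\ \text{per metre}. $$

As a worked example of my own: the jump from the third orbit to the second gives $\lambda \approx 656$ nanometres, the red line that dominates the glow of hydrogen gas.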
· · ·
PLANCK, EINSTEIN, AND BOHR’S breakthroughs had revealed the quantum nature of light and the structure of atoms. But the quantization rules they imposed were ad hoc and lacked any principled basis. In 1925, all that changed when Heisenberg launched a radically new view of physics with quantization built in from the start. His approach was utterly ingenious. He stepped back from the classical picture, which had so totally failed to make sense of the atom. Instead, he argued, we must build the theory around the only directly observable quantities — the energies of the light waves emitted or absorbed by the orbiting electrons. So he represented the position and momentum of the electron in terms of these emitted and absorbed energies, using a technique known as “Fourier analysis in time.”
At the heart of Fourier’s method is a strange number called i, the imaginary number, the square root of minus one. By definition, i times i is minus one. Calling i “imaginary” makes it sound made up. But within mathematics i is every bit as definite as any other number, and the introduction of i, as I shall explain, makes the numbers more complete than they would otherwise be. Before Heisenberg, physicists thought of i as merely a convenient mathematical trick. But in Heisenberg’s work, i was far more central. This was the first indication of reality’s imaginary aspect.
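The link between $i$ and Fourier’s method can be made explicit (a standard summary, not spelled out in the text). By Euler’s formula,

$$ e^{i\theta} = \cos\theta + i\sin\theta, $$

a single complex exponential $e^{i\omega t}$ packages a complete oscillation of frequency $\omega$ into one expression. Fourier analysis in time writes a quantity, in Heisenberg’s case the position or momentum of the electron, as a combination of such oscillations, one for each frequency of light the atom can emit or absorb.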
The imaginary number i entered mathematics in the sixteenth century, during the Italian Renaissance. The mathematicians of the time were obsessed with solving algebraic equations. Drawing on the results of Indian, Persian, and Chinese mathematicians before them, they started to find very powerful formulae. In 1545, Gerolamo Cardano summarized the state of the art in algebra, in his book Ars Magna (The Great Art). He was the first mathematician to make systematic use of negative numbers. Before then, people believed that only positive numbers made sense, since one cannot imagine a negative number of objects or a negative distance or negative time. But as we all now learn in school, it is often useful to think of numbers as lying on a number line, running from minus infinity to plus infinity from left to right, with zero in the middle. Negative numbers can be added, subtracted, multiplied, or divided just as well as positive numbers can.