by Neil Turok
Cardano and others had found general solutions to algebraic equations, but sometimes these solutions involved the square root of a negative number. At first, they discarded such solutions as meaningless. Then Scipione del Ferro invented a secret method of pretending these square roots made sense. He found that by manipulating the formulae he could sometimes get these square roots to cancel out of the final answer, allowing him to find many more solutions of equations.
There was a great deal of intrigue over this trick, because the mathematicians of the time held public contests, sponsored by wealthy patrons, in which any advantage could prove lucrative. But eventually the trick was published, first by Cardano and then more completely by Rafael Bombelli. In his 1572 book, simply titled Algebra,
Bombelli systematically explained how to extend the rules of arithmetic to include i. 43 You can add, subtract, multiply, or divide it with any ordinary number. When you do, you will obtain numbers like x + iy, where x and y are ordinary numbers. Numbers like this, which involve i, are called “complex numbers.” Just as we can think of the ordinary numbers as lying on a number line running from negative to positive values, we can think of the complex numbers as lying in a plane, where x and y are the horizontal and vertical coordinates. Mathematicians call this the “complex plane.” The number zero is at the origin, and any complex number has a squared length, given by Pythagoras’s rule as x² + y².
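For readers who like to experiment, Bombelli’s rules are exactly the ones built into modern programming languages. Here is a minimal sketch in Python, whose complex type writes x + iy as `x + yj`; the particular numbers are arbitrary illustrations:

```python
# Python's built-in complex type implements Bombelli's arithmetic.
# The imaginary unit i is written 1j.
i = 1j
assert i * i == -1          # the defining property: i squared is minus one

z = 3 + 4j                  # the complex number with x = 3, y = 4
w = 1 - 2j

# Addition, subtraction, multiplication, and division all stay
# within the complex numbers, just as Bombelli found.
print(z + w)                # (4+2j)
print(z * w)                # (11-2j)

# The squared length of z in the complex plane, by Pythagoras's rule:
# x^2 + y^2 = 3^2 + 4^2 = 25.
squared_length = z.real**2 + z.imag**2
print(squared_length)       # 25.0
```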
Then it turns out, rather beautifully, that any complex number raised to the power of any other complex number is also a complex number. There are no more problems with square roots or cube roots or any other roots. In this sense, the complex numbers are complete: once you have added i, and any multiple of i, to the ordinary numbers, you do not need to add anything else. And later on, mathematicians proved that when you use complex numbers, every algebraic equation has a solution. This result is called the “fundamental theorem of algebra.” To put it simply, the inclusion of i makes algebra a far more beautiful subject than it would otherwise be.
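You can see the fundamental theorem at work on the simplest troublesome case. The quadratic x² + x + 1 = 0 has no solutions among the ordinary numbers (its discriminant is negative), but the familiar quadratic formula delivers two complex solutions without complaint. A quick check in Python:

```python
import cmath  # the standard-library math module for complex numbers

# x^2 + x + 1 = 0: discriminant b^2 - 4ac = -3, so no real roots.
a, b, c = 1, 1, 1
disc = b * b - 4 * a * c
root1 = (-b + cmath.sqrt(disc)) / (2 * a)
root2 = (-b - cmath.sqrt(disc)) / (2 * a)

# Both complex roots satisfy the original equation (up to rounding error).
for x in (root1, root2):
    assert abs(a * x * x + b * x + c) < 1e-12
```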
And from this idea came an equation that Richard Feynman called “the most remarkable formula in mathematics.” 44 It was discovered by Leonhard Euler, one of the most prolific mathematicians of all time. Euler was the main originator and organizer of the field of analysis — the collection of mathematical techniques for dealing with infinities. One of his many innovations was his use of the number e, which takes the numerical value 2.71828 . . . and which arises in many areas of mathematics. It describes exponential growth and is used in finance for calculating compound interest or the cumulative effects of economic inflation, in biology for the multiplication of natural populations, in information science, and in every area of physics. What Euler found is that e raised to i times an angle gives the two basic trigonometric functions, the sine and cosine. His formula connects algebra and analysis to geometry. It is used in electrical engineering for the flow of alternating currents and in mechanical engineering to study vibrations; it is also used in music, computer science, and even in cosmology. In Chapter Four, we shall find Euler’s formula at the heart of our unified description of all known physics.
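Euler’s formula states that e^(iθ) = cos θ + i sin θ for any angle θ. A couple of lines of Python confirm it numerically, including the celebrated special case θ = π, which gives e^(iπ) = −1:

```python
import cmath
import math

# Euler's formula: e^(i*theta) = cos(theta) + i*sin(theta).
theta = 0.7  # any angle will do, in radians
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
assert abs(lhs - rhs) < 1e-12

# The special case theta = pi links e, i, pi, and -1 in one stroke:
print(cmath.exp(1j * math.pi))  # approximately (-1+0j)
```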
Heisenberg used Euler’s formula (in the form of a Fourier series in time) to represent the position of an electron as a sum of terms involving the energy states of the atom. The electron’s position became an infinite array of complex numbers, with every number representing a connection coefficient between two different energy states of the atom.
The appearance of Heisenberg’s paper had a dramatic effect on the physicists of the time. Suddenly there was a mathematical formalism that explained Bohr’s rule for quantization. However, within this new picture of physics, the position or velocity of the electron was a complex matrix, without any familiar or intuitive interpretation. The classical world was fading away.
Not long after Heisenberg’s discovery, Schrödinger published his famous wave equation. Instead of trying to describe the electron as a point-like particle, Schrödinger described it as a wave smoothly spread out over space. He was familiar with the way in which a plucked guitar string or the head of a drum vibrates in certain specific wave-like patterns. Developing this analogy, Schrödinger found a wave equation whose solutions gave the quantized energies of the orbiting electron in the hydrogen atom, just as Heisenberg’s matrices had done. Heisenberg’s and Schrödinger’s pictures turned out to be mathematically equivalent, though most physicists found Schrödinger’s waves more intuitive. But, like Heisenberg’s matrices, Schrödinger’s wave was a complex number. What on earth could it represent?
Shortly before the Fifth Solvay Conference, Max Born proposed the answer: Schrödinger’s wavefunction was a “probability wave.” The probability to find the particle at any point in space is the squared length of the wavefunction in the complex plane, given by the Pythagorean theorem. In this way, geometry appeared at the heart of quantum theory, and the weird complex numbers that Heisenberg and then Schrödinger had introduced became merely mathematical tools for obtaining probabilities.
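Born’s rule is simple enough to state in code. Here is a toy sketch, with a made-up wavefunction defined on just three positions (the complex values are purely illustrative, chosen so the probabilities come out tidy): the probability at each point is the squared length x² + y² of the complex value there, and the probabilities over all positions must add up to one.

```python
# Born's rule: the probability of finding the particle at a point is
# the squared length of the complex wavefunction there.
# A toy wavefunction on three positions (illustrative values only):
psi = [0.6 + 0.0j, 0.0 + 0.8j, 0.0 + 0.0j]

# Squared length in the complex plane, by the Pythagorean theorem:
probabilities = [z.real**2 + z.imag**2 for z in psi]
print(probabilities)   # roughly [0.36, 0.64, 0.0]

# The particle must be found somewhere: the probabilities sum to 1.
assert abs(sum(probabilities) - 1.0) < 1e-12
```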
This new view of physics was profoundly dissatisfying to physicists like Einstein, who wanted to visualize concretely how the world works. In the run-up to the Solvay meeting, all hope of that was dashed. Heisenberg published his famous uncertainty principle, showing that, within quantum theory, you could not specify the position and velocity of a particle at the same time. As he put it, “The more precisely the position [of an electron] is determined, the less precisely the momentum is known in this instant, and vice versa.” 45 If you know exactly where a particle is now, then you cannot say anything about where it will be a moment later. The very best you can hope for is a fuzzy view of the world, one where you know the position and velocity approximately.
Heisenberg’s arguments were based on general principles, and they applied to any object, even large ones like a ball or a planet. For these large objects, the quantum uncertainty represents only a tiny ambiguity in their position or velocity. However, as a matter of principle, the uncertainty is always there. What Heisenberg’s uncertainty principle showed is that, in quantum theory, nothing is as definite as Newton, or Maxwell, or any of the pre-quantum physicists had supposed it to be. Reality is far more slippery than our classical grasp of it would suggest.
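The claim that the uncertainty is negligible for large objects is easy to quantify. Heisenberg’s bound can be written Δx · Δp ≥ ħ/2, so an object of mass m localized to within Δx has a minimum velocity uncertainty of ħ/(2mΔx). The sketch below compares an electron with a baseball, each pinned down to a millimetre (the masses are standard round figures, not from the text):

```python
# Heisenberg's bound: delta_x * delta_p >= hbar / 2, so the minimum
# velocity uncertainty is hbar / (2 * m * delta_x).
hbar = 1.054571817e-34   # reduced Planck constant, in joule-seconds
delta_x = 1e-3           # position pinned down to 1 mm, in metres

for name, mass_kg in [("electron", 9.109e-31), ("baseball", 0.145)]:
    delta_v = hbar / (2 * mass_kg * delta_x)
    print(f"{name}: velocity uncertain by at least {delta_v:.2e} m/s")
```

For the electron the minimum spread is a few centimetres per second, quite noticeable on atomic scales; for the baseball it is around 10⁻³¹ metres per second, utterly imperceptible, which is why Newton never noticed.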
ONE OF THE MOST beautiful illustrations of the quantum nature of reality is the famous “double-slit experiment.” Imagine placing a partition with two narrow, parallel slits in it between a source of light of one colour — like a green laser — and a screen. Only the light that falls on a slit will pass through the partition and travel on to the screen. The light from each slit spreads out through a process called “diffraction,” so that each slit casts a broad beam of light onto the screen. The two beams of light overlap on the screen.
However, the distance the light has to travel from either slit to each point on the screen is in general different, so that when the light waves from both slits arrive at the screen, they may add or they may cancel. The pattern of light formed on the screen is called an “interference pattern”: it consists of alternate bright and dark stripes at the locations where the light waves from the two slits add or cancel.46 Diffraction and interference are classic examples of wave-like behaviour, seen not only in light but in water waves, sound waves, radio waves, and so on.
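The bright and dark stripes can be predicted with a few lines of arithmetic. In the small-angle approximation, the path difference from the two slits to a point at height y on the screen is roughly d·y/L, where d is the slit separation and L the distance to the screen; the waves add when that difference is a whole number of wavelengths and cancel at half-wavelengths. The numbers below are illustrative choices, not values from the text:

```python
import math

# Two-slit interference, small-angle approximation (the single-slit
# diffraction envelope is ignored for simplicity).
wavelength = 532e-9   # green laser light, in metres
d = 0.2e-3            # slit separation
L = 1.0               # distance from slits to screen

def intensity(y):
    """Relative brightness at height y: two unit waves added together."""
    path_difference = d * y / L
    phase = 2 * math.pi * path_difference / wavelength
    return math.cos(phase / 2) ** 2   # 1 = bright stripe, 0 = dark

fringe_spacing = wavelength * L / d   # distance between bright stripes
print(intensity(0.0))                 # 1.0: central bright stripe
print(intensity(fringe_spacing / 2))  # ~0.0: first dark stripe
```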
Now comes the quantum part. If you dim the light source and replace the screen with a detector, like a digital camera sensitive enough to detect individual photons — Planck’s quanta of light — then you can watch the individual photons arrive. The light does not arrive as a continuous beam with a fixed intensity. Instead, the photons arrive as a random string of energy packets, each one announcing its arrival at the camera with a flash. The pattern of flashes still forms interference stripes, indicating that even though each photon of light arrived in only one place as an energy packet, the photons travelled through both slits and interfered as waves.
Now comes the strangest part. You can make the light source so dim that the long interval between the flashes on the screen means there is never more than one photon in the apparatus at any one time. But then, set the camera to record each separate flash and add them all up into a picture. What you find is that the picture still consists of interference stripes. Each individual photon interfered with itself, and therefore must somehow have travelled through both slits on the way to the screen.
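This photon-by-photon build-up is easy to mimic on a computer. In the toy simulation below, each photon’s landing spot is drawn at random from the wave’s interference pattern (taken here as cos², with the screen measured in units of the fringe spacing); counting hits near a bright stripe and near a dark one shows the stripes emerging from individually random flashes. Everything here is an illustrative model, not an actual experiment:

```python
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

def prob(y):
    """Interference pattern: bright at whole-number y, dark halfway between."""
    return math.cos(math.pi * y) ** 2

# Each photon lands at a single random point, distributed according to
# the wave pattern (rejection sampling over a strip of the screen).
hits = []
while len(hits) < 5000:
    y = random.uniform(-3, 3)        # candidate landing position
    if random.random() < prob(y):    # keep it with the wave-given probability
        hits.append(y)

# Count flashes near a bright stripe (y ~ 0) and a dark one (y ~ 0.5):
near_bright = sum(1 for y in hits if abs(y) < 0.1)
near_dark = sum(1 for y in hits if abs(y - 0.5) < 0.1)
print(near_bright, near_dark)  # far more photons land on the bright stripes
```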
So we conclude that photons sometimes behave like particles and sometimes behave like waves. When you detect them, they are always in a definite position, like a particle. When they travel, they do so as waves, exploring all the available routes; they spread out through space, diffract, and interfere, and so on.
In time, it was realized that quantum theory predicts that electrons, atoms, and every other kind of particle also behave in this way. When we detect an electron, it is always in a definite position, and we find all its electric charge there. But when it is in orbit around an atom, or travelling freely through space, it behaves like a wave, exhibiting the same properties of diffraction and interference.
In this way, quantum theory unified forces and particles by showing that each possessed aspects of the other. It replaced the world that Newton and Maxwell had developed, in which particles interacted through forces due to fields, with a world in which both the particles and the forces were represented by one kind of entity: quantized fields possessing both wave-like and particle-like characters.
NIELS BOHR DESCRIBED THE coexistence of the wave and particle descriptions with what he called the “principle of complementarity.” He posited that some situations were best described by one classical picture — like a particle — while other situations were better described by another — like a wave. The key point was that there was no logical contradiction between the two. The words of the celebrated American author of the time, F. Scott Fitzgerald, come to mind: “The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.”47
Bohr had a background in philosophy as well as mathematics, and an exceptionally agile and open mind. His writings are a bit mystical and also somewhat impenetrable. His main role at the Solvay Conference seems to have been to calm everyone down and reassure them that despite all the craziness everything was going to work out fine. Somehow, Bohr had a very deep insight that quantum theory was consistent, though he clearly could not prove it. Nor could he convince Einstein.
Einstein was very quiet at the Fifth Solvay meeting, and there are few comments from him in the recorded proceedings. He was deeply bothered by the random, probabilistic nature of quantum theory, as well as the abstract nature of the mathematical formalism. He famously remarked (on a number of occasions), “God does not play dice!” To which at some point Bohr apparently replied, “Einstein, stop telling God how to run the world.”48 At this and subsequent Solvay meetings, Einstein tried again and again to come up with a paradox that would expose quantum theory as inconsistent or incomplete. Each time, after a day or two’s thought, Bohr was able to resolve the paradox.
Einstein continued to be troubled by quantum theory, and in particular by the idea that a particle could be in one place when it was measured and yet spread out everywhere when it was not. In 1935, with Boris Podolsky and Nathan Rosen, he wrote a paper that was largely ignored by physicists at the time because it was considered too philosophical. Nearly three decades later, it would seed the next revolutionary insight into the nature of quantum reality.
Einstein, Podolsky, and Rosen’s argument was ingenious. They considered a situation in which an unstable particle, like a radioactive nucleus, emits two smaller, identical particles, which fly apart at exactly the same speed but in opposite directions. At any time they should both be equidistant from the point where they were both emitted. Imagine you let the two particles get very far apart before you make any measurement — for the sake of argument, make it light years. Then, at the very last minute, as it were, you decide to measure either the position or the velocity of one of the particles. If you measure its position, you can infer the position of the other without measuring it at all. If instead you measure the velocity, you will know the velocity of the other particle, again without measuring it. The point was that you could decide whether to measure the position or the velocity of one particle when the other particle was so far away that it could not possibly be influenced by your decision. Then, when you made your measurement, you could infer the second particle’s position or velocity. So, Einstein and his colleagues argued, the unmeasured particle must really have both a position and a velocity, even if quantum theory was unable to describe them both at the same time. Therefore, they concluded, quantum theory must be incomplete.
Other physicists balked at this argument. Wolfgang Pauli said, “One should no more rack one’s brain about the problem of whether something one cannot know anything about exists all the same, than one should about the ancient question of how many angels are able to sit on the point of a needle.”49 But the Einstein–Podolsky–Rosen argument would not go away, and in the end someone saw how to make use of it.
· · ·
HAVE YOU EVER WONDERED whether there is a giant conspiracy in the world and whether things really are as they appear? I’m thinking of something like The Truman Show, starring Jim Carrey as Truman, whose life appears normal and happy but is in fact a grand charade conducted for the benefit of millions of TV viewers. Eventually, Truman sees through the sham and escapes to the outside world through an exit door in the painted sky at the edge of his arcological dome.
In a sense, we all live in a giant Truman show: we conceptualize the world as if everything within it has definite properties at each point in space and at every moment of time. In 1964, the Irish physicist John Bell discovered a way to show conclusively that any such classical picture could, with some caveats, be experimentally disproved.
Quantum theory had forced physicists to abandon the idea of a deterministic universe and to accept that the best they could do, even in principle, was to predict probabilities. It remained conceivable that nature could be pictured as a machine containing some hidden mechanisms that, as Einstein put it, threw dice from time to time. One example of such a theory was invented by the physicist David Bohm. He viewed Schrödinger’s wavefunction as a “pilot wave” that guided particles forward in space and time. But the actual locations of particles in his theory are determined statistically, through a physical mechanism to which we have no direct access. Theories that employ this kind of mechanism are called “hidden variable” theories. Unfortunately, in Bohm’s theory, the particles are influenced by phenomena arbitrarily far away from them. Faraday and Maxwell had argued strongly against such theories in the nineteenth century, and since that time, physicists had adopted locality — meaning that an object is influenced directly only by its immediate physical surroundings — as a basic principle of physics. For this reason, many physicists find Bohm’s approach unappealing.
In 1964, inspired by Einstein, Podolsky, and Rosen’s argument, John Bell, working at the European Organization for Nuclear Research (CERN), proposed an experiment to rule out any local, classical picture of the world in which influences travel no faster than the speed of light. Bell’s proposal was, if you like, a way of “catching reality in the act” of behaving in a manner that would simply be impossible in any local, classical description.
The experiment Bell envisaged involved two elementary particles flying apart just as Einstein, Podolsky, and Rosen had imagined. Like them, Bell considered the two particles to be in a perfectly correlated state. However, instead of thinking of measuring their positions or velocities, Bell imagined measuring something even simpler: their spins.
Most of the elementary particles we know of have a spin — something Pauli and then Dirac had explained. You can think of particles, roughly speaking, as tiny little tops spinning at some fixed rate. The spin is quantized in units given by Planck’s constant, but the details of that will not matter here. All that concerns us in this case is that the outcome is binary. Whenever you measure a particle’s spin, there are only two possible outcomes: you will find the particle spinning about the measurement axis either anticlockwise or clockwise at a fixed rate. If the particle spins anticlockwise, we say its spin is “up,” and if it is clockwise, we say its spin is “down.”
Bell hypothesized a situation in which the two Einstein–Podolsky–Rosen particles are produced in what is known as a “spin zero state.” In such a state, if you measure both particles with respect to the same axis, then if you find one of them to be “up,” the other will be “down,” and vice versa. We say that the particles are “entangled,” meaning that measuring the state of one fixes the state of the other. According to quantum theory, the two particles can retain this connection no matter how far apart they fly. The strange aspect of it is that by measuring one, you instantly determine the state of the other, no matter how distant it is. This is an example of what Einstein called “spooky action at a distance” in quantum physics.
Bell imagined an experiment in which the particles were allowed to fly far apart before their spins were measured. He discovered a subtle but crucial effect, which meant that no local, classical picture of the world could possibly explain the pattern of probabilities that quantum theory predicts.
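The clash between the classical and quantum predictions can be stated numerically. For the spin-zero state, quantum theory predicts that spin measurements along axes at angles a and b are correlated as E(a, b) = −cos(a − b). Bell-type reasoning (in the CHSH form later derived by Clauser, Horne, Shimony, and Holt) shows that for a particular combination S of four such correlations, any local, classical picture must satisfy |S| ≤ 2, while quantum theory predicts values up to 2√2 ≈ 2.83. A quick check with a standard choice of angles:

```python
import math

def E(a, b):
    """Quantum correlation of spins measured along axes a and b (radians),
    for two particles in the spin-zero state."""
    return -math.cos(a - b)

# A standard choice of measurement angles for the CHSH combination:
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2), about 2.83, beyond the classical limit of 2
```

Real experiments measuring these correlations have repeatedly found the quantum value, ruling out any local, classical account.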
To make things more familiar, let us pretend that instead of two particles, we have two boxes, each with a coin inside it. Instead of saying the particle’s spin is “up,” we’ll say the coin shows heads; and instead of saying the particle’s spin is “down,” we’ll say the coin shows tails.