Computing with Quantum Cats

by John Gribbin


  Incidentally, the same discovery was made at about the same time by Oliver Penrose, a British theorist associated with the Open University; but Penrose's main research interests did not lie in computation and he did not follow up the idea. Bennett, but not Penrose, also made a connection which surely would have delighted Alan Turing, by discussing the biosynthesis of messenger RNA in living cells as an example of reversible computation.

  Bennett's discovery was purely theoretical. He had not worked out the practicalities of the kind of gates needed to make such a computer, but he had proven that there was nothing in the laws of physics to forbid it. The next practical step was taken by Ed Fredkin, a scientist with an unusual background, who didn't even know about Bennett's work at the time he made his contribution.

  Fredkin was an American college dropout who trained as a fighter pilot in the mid-1950s but had to quit flying because of asthma, and so worked for the air force on a project which introduced him to computer programming. After he returned to civilian life, he became a computer consultant, then founded his own company and became rich, all the while developing the idea, which most people dismissed as crazy at the time, that the Universe might be a gigantic digital computer. One of the reasons why people dismissed the idea as crazy was, of course, that the laws of physics are reversible, and in the 1950s and 1960s it was thought that computers had to be irreversible. But Fredkin was stubborn. He decided that if that was the flaw in the argument, he had to find a way to show that it was possible to make reversible computers. Fredkin had contacts at MIT through his computer company, and managed to wangle an appointment there, in 1966, as a visiting professor. He was so successful that the next year he became a full professor, at the age of thirty-four (still without having finished college!), and then Director of the Laboratory for Computer Science. It was in this capacity that he met Richard Feynman, and arranged to spend 1974 at Caltech so that Feynman could teach him quantum physics and he could teach Feynman about computers.15 It was while he was at Caltech that Fredkin found the way to make a reversible computer.

  Remember the NOT gate? Whichever input it is given, the output is the opposite. Put in 1, you get 0 out. Put in 0, you get 1 out. This is clearly reversible, but doesn't give you much scope to do interesting things. Fredkin's brilliant idea was to invent what is generally referred to as the Fredkin gate, which is complicated enough to do interesting things, but is still reversible.

  A Fredkin gate has three channels—three inputs and three outputs. One of the channels, denoted by the letter c, is the control. This always passes the input through the gate unchanged. The other two channels, which we might as well call a and b, also pass the input unchanged if the c input is 0, but swap it over if the c input is 1. So if the input is c = 1, a = 1 and b = 0, the output is c = 1, a = 0 and b = 1. It is pretty obvious that this is reversible; even better, if the output of one Fredkin gate is fed straight into another Fredkin gate (c to c, a to a, b to b), the output is the same as the input to the first gate. Better still, it is possible, though I won't go into details here, to build any logic circuit using only Fredkin gates, in combinations where, for example, output c from one gate becomes input a for the next gate and output a goes to input c, while output b from the first gate becomes input b for the next gate. The final proof of this was found by one of Fredkin's students, Guy Steele, back at MIT.
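  To make the swapping rule concrete, here is a minimal Python sketch of a Fredkin gate; the function name and the round-trip check are illustrative additions, not anything taken from the book.

    def fredkin(c, a, b):
        # The control bit c always passes through unchanged;
        # a and b are swapped only when c is 1.
        if c == 1:
            return c, b, a
        return c, a, b

    # The example from the text: input (c=1, a=1, b=0) gives output (1, 0, 1).
    print(fredkin(1, 1, 0))  # (1, 0, 1)

    # Reversibility: feeding the output of one gate straight into a second gate
    # (c to c, a to a, b to b) recovers the original input, for every possible input.
    for c in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                assert fredkin(*fredkin(c, a, b)) == (c, a, b)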

  All this means that it is in principle possible to build a reversible classical computer, which could simulate precisely the classical laws of physics, and that Fredkin's idea of the Universe being a gigantic computer is not so crazy after all. But at a fundamental level, the Universe operates on the basis of quantum physics, not classical physics. So the next question was, could a computer precisely simulate quantum physics? This is where Feynman comes back into the story.

  At the end of the 1970s, Paul Benioff, of the Argonne National Laboratory in Illinois, developed the idea of a Turing machine, operating on a “tape” in the way Turing described in his classic paper of 1936, which ran on quantum mechanical principles. Although his argument was rather complicated and difficult to follow for non-specialists, he was able to show that it is in principle possible to build a classical computer that operates on quantum principles. Since the real world seems to run on quantum principles, this was an important step forward; but it left open the question of whether a classical (or any) computer could be built that would simulate perfectly the quantum world.

  One of the places where Benioff presented his ideas was MIT, where he put them to a meeting in 1981 at which Feynman was the keynote speaker. Feynman opened the proceedings with a talk titled “Simulating Physics with Computers,” in which he credited Ed Fredkin with stimulating his interest in the subject, and tackled two questions: Is it possible to simulate physics (meaning quantum physics) with a quantum computer? And, is it possible to simulate (quantum) physics with a classical computer (by simulating “probability” in some way)?

  Curiously, Feynman referred to the discussion of quantum simulators as “a side-remark,” which he dealt with briefly before moving on to the second question, which he regarded as more important. He gave an example of how a “universal quantum simulator” might work, and said, “I therefore believe it's true that with a suitable class of quantum machines you could imitate any quantum system, including the physical world,” but was unable to offer a definitive proof that this is the case. He was, though, able to offer a definitive proof that quantum systems cannot be “probabilistically simulated by a classical computer.”16

  The proof depends on the properties of pairs of particles that interact with one another and then fly off in different directions. Like the two-cat version of Schrödinger's puzzle, the behavior of one particle depends on what happens to the other particle even when they are far apart; this is known as “entanglement,” a term coined by Schrödinger himself. What the argument boils down to is the comparison of a simple pair of numbers, which can be (and have been) measured in experiments. If one number is bigger than the other, there is no way in which quantum mechanics can be simulated perfectly by a classical computer, and therefore no way in which the world can be simulated perfectly by a classical computer. Feynman was delighted by this argument. “I've entertained myself always,” he said, “by squeezing the difficulty of quantum mechanics into a smaller and smaller place, so as to get more and more worried about this particular item. It seems to be almost ridiculous that you can squeeze it to a numerical question that one thing is bigger than another. But there you are.”

  What Feynman, who was not always scrupulous about giving credit to others, neglected to tell his audience was that the entire argument had been taken, lock, stock and barrel, from the work of CERN physicist John Bell, and is usually known as the Bell Inequality. Although Bell himself had not applied his ideas to quantum computers—or quantum simulators, in Feynman's terminology—the idea is so important, both in terms of our understanding of reality and in terms of quantum computation, that it will be the subject of my next chapter. Feynman's conclusion from 1981 is, though, still apposite: “Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical.”
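  For reference, the best-known modern form of that numerical comparison is the CHSH version of the Bell Inequality; the explicit formula below is the standard textbook statement, not something quoted in this passage. For two measurement settings on each side of an entangled pair, any local hidden-variables description obeys

  \[
  S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2,
  \]

  whereas quantum mechanics predicts values of |S| as large as \(2\sqrt{2} \approx 2.83\) for suitably chosen settings: one number bigger than another, just as Feynman said.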

  Feynman himself did think further about computation in general and about quantum computers. He gave a course on computation at Caltech in the mid-1980s, and a talk in Anaheim in 1984 in which he described the basis of a quantum mechanical computer,17 along the lines of Benioff's quantum Turing machine, using reversible gates.18 In that talk he came up with another of his memorable comments: “It seems that the laws of physics present no barrier to reducing the size of computers until bits are the size of atoms, and quantum behavior holds dominant sway.” But he never seems to have put two (from his 1981 lecture) and two (from his 1984 lecture) together and realized that such a computer would be fundamentally different from a classical computer not just in terms of its physics but in terms of the kinds of problems it could solve. That leap of inspiration came from Oxford theorist David Deutsch, also in the mid-1980s, and will form the heart of Part Three of this book.

  The tangled story of entanglement1 begins, inasmuch as it has a beginning, with the work of a French nobleman, Louis de Broglie, in the 1920s. De Broglie, who held the honorary title of “Duke,” was a latecomer to research in physics. Born in 1892, as the younger son of an aristocratic family he was expected to make a career in the diplomatic service; but under the influence of his elder brother Maurice,2 who became a physicist in spite of the strenuous objections of their father, he too began to study physics, alongside his “proper” course in history, at the Sorbonne in 1909. He hoped to move on to research, but his career was interrupted by the First World War, during which he served in the radio communications branch of the army, including a spell based at the Eiffel Tower, which was used as a radio mast. So it was not until 1924, when he was already in his early thirties, that de Broglie was able to submit a thesis for his PhD; but what a thesis!

  De Broglie was not a great mathematician but he did have very good physical insight. He was one of the first people to fully accept the idea of light quanta (what we now call photons), and picked up on a curious feature of the equations that Einstein had used to describe these entities. The equations showed a relationship between the wave properties of a photon (its wavelength or frequency) and its particle properties (such as momentum and energy). They showed that light “waves” could also be treated as particles, and that if you knew the wavelength of a photon you could calculate its momentum. De Broglie pointed out that the same equations worked in reverse—that “particles” (specifically, electrons) should also behave as waves, if the equations were correct. If you knew the momentum of an electron you could calculate its wavelength. De Broglie's thesis supervisor, Paul Langevin, didn't know what to make of this and showed the work to Einstein, who said, “I believe that it involves more than a mere analogy.” De Broglie got his PhD, and within three years experiments had been carried out which showed conclusively that electrons do indeed behave as waves exactly in accordance with his description. De Broglie's thesis work, for which he received the Nobel Prize in 1929, was also the inspiration for Erwin Schrödinger's development of the wave version of quantum mechanics.
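  Written out, the relation de Broglie turned around is the standard pairing of wave and particle quantities through Planck's constant h; the formula itself is not quoted in this passage, but it is the textbook statement of his result:

  \[
  E = h\nu, \qquad \lambda = \frac{h}{p},
  \]

  so the momentum p of a photon follows from its wavelength, and, read the other way, a “particle” such as an electron with momentum p is associated with a wave of wavelength h/p.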

  So by 1927, soon after Schrödinger's wave mechanics and the particle version of quantum mechanics developed by Heisenberg and others had become established, de Broglie should have been a man whose ideas were taken seriously. Instead, his next big idea was first derided and then largely ignored.

  DROPPING THE PILOT

  The idea was as brilliantly simple as his earlier insight into the nature of wave-particle duality. While other researchers were struggling to choose between the wave version of quantum mechanics and the particle version of quantum mechanics, de Broglie said, in effect, why not have both? He suggested that the wave and the particle were equally real, and that what goes on in experiments such as the experiment with two holes is that a real wave (which became known, for obvious reasons, as a “pilot” wave) spreads through the apparatus and is responsible both for the interference and for guiding an equally real particle along trajectories determined by the interference, like a surfer riding the waves in the sea. The key point on this picture is that the particle always exists at a definite place even when it is not being measured or observed. Equally, the apparatus (in principle, the whole world) is occupied by a physically real field, like the electromagnetic field, which we are only aware of because of its influence on the statistical behavior of particles. Electrons (or photons) fired one after another through the experiment with two holes do not all follow exactly the same trajectory because of tiny differences in the speed or direction with which they start out. We can never measure the field, but we can measure the particles. This is known as a “hidden variables” theory, although as a result of what John Bell called “historical silliness”3 it is the things we can measure, such as the positions of particles, that are regarded as the “hidden” variables, while the thing we can't measure, the field, is not. As Bell also said, “this idea seems so natural and simple, to resolve the wave-particle dilemma in such a clear and ordinary way, that it is a great mystery to me that it was so generally ignored.”4

  But ignored it was. De Broglie presented his ideas (in a much more fully worked out form than the sketch I have provided here) to the same Solvay Congress in 1927 that heard Schrödinger point out that there is nothing in the equations to suggest that the wave function “collapses,” as proposed by Niels Bohr and followers of his “Copenhagen Interpretation.” De Broglie's pilot wave idea was savagely attacked on mathematical grounds by Wolfgang Pauli, a mathematical physicist with a (largely justified) high opinion of his own ability and a (not always justified) tendency to dismiss the efforts of those he regarded as lesser minds. Although de Broglie did respond to these criticisms, and a modern reading of the proceedings of the meeting suggests that he successfully answered them, his diffident style left the impression that he had lost the argument. In addition, Pauli pointed out that the hidden variables idea led to some peculiar implications when applied to more than one particle. Crushed, de Broglie gave up promoting his idea. Those peculiar implications concern entanglement and what Einstein would later call “spooky action at a distance,” and are at the heart of the modern understanding of quantum physics and quantum computation. But what seemed at the time the final nail in the coffin of de Broglie's idea came in 1932, when Johnny von Neumann published his great book on quantum mechanics, in which, among other things, he “proved” that no hidden variables theory could be an accurate description of reality. He was wrong.

  VON NEUMANN GETS IT WRONG

  In order to appreciate just how wrong von Neumann was, we need to be clear about what hidden variables theories are—and a good way to see that is by comparison with the all-conquering (for half a century) Copenhagen Interpretation. There are three significant points of conflict between these two views of the world:

  1. The Copenhagen Interpretation says that the wave function is a complete description of a system and contains all the information about it. Hidden variables theory says that the wave function is only part of the story, and that there are also particles with real positions and momenta.

  2. The Copenhagen Interpretation says that the wave mostly spreads out in line with Schrödinger's equation, but sometimes “collapses” to (more or less) a point. The mechanism for this collapse has never been satisfactorily explained. Hidden variables theory says that the wave always develops in line with Schrödinger's equation; there is no collapse.

  3. The Copenhagen Interpretation says that even though the evolution of the wave is deterministic, the process of collapse introduces an element of probability into the outcomes of experiments—that is, that quantum physics is stochastic. Hidden variables theory says that everything is deterministic, and that the only reason we cannot predict everything perfectly is that we can never know perfectly all of the starting conditions.

  Stated like that, it is hard to see how any sensible person could choose the Copenhagen Interpretation over hidden variables theory unless there were very powerful evidence against the latter. But the power of von Neumann's “proof” lay as much in his name as in the mathematics; what is more, it was refuted almost as soon as it was published, yet the refutation was ignored. It is probably significant that the flaw in von Neumann's argument was found by a young researcher who came from outside the quantum mechanical fold, and was unlikely to be overawed by von Neumann's (or Bohr's) reputation; but the fact that she was a young outsider is also one factor in explaining why her refutation was ignored.

  Grete Hermann was born in Bremen in 1901. She qualified as a secondary school teacher in 1921, then studied mathematics and philosophy, working under Emmy Noether in Göttingen and receiving a PhD in 1926 for research into what would later become known as computer algebra. For some years she stayed on in Göttingen, developing her interest in philosophy and the foundations of science, but she was also a committed socialist, editing a newspaper called The Spark and actively opposing the increasingly powerful Nazis. In 1936 she had to flee Germany to escape the regime, traveling via Denmark and France to England. In 1946 she returned to Germany and worked in education, helping to rebuild post-war society; she died in 1984, having lived long enough to see her early insight come in from the cold.

  In 1934, Hermann had visited Werner Heisenberg's group in Leipzig to discuss the foundations of quantum physics in relation to her favored Kantian philosophy. It was there that she confronted von Neumann's “proof” of the impossibility of hidden variables theories, and pointed the flaws out to Heisenberg and his colleagues. The problem was not with the mathematics—von Neumann was not prone to making mathematical mistakes—but with the assumptions that went into the mathematics. Fortunately, this means that we do not need to go through the math to understand what went wrong; but, surprisingly, it means that the mistake ought to have been obvious to anyone who looked carefully, as Hermann did, at what von Neumann was saying. His mistake involved an assumption about the way averages are taken in quantum mechanics. There is no perfect way of explaining the error in everyday language, but one that may help is to look at the way we take averages in everyday life. Imagine that we have two groups of people. In one group there are ten people, and their average height is 1.8 meters. In the other group there are twenty people, and their average height is 1.5 meters. Von Neumann's mistake was equivalent to saying that therefore the average height of the thirty people is 1.65 meters, the average of 1.8 and 1.5. But because there are twice as many people in the second group, the correct average is, of course, 1.6 meters.
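  To spell out the arithmetic with the numbers just given (this is simply the standard weighted average, not anything taken from von Neumann's book):

  \[
  \bar{h} = \frac{10 \times 1.8\,\text{m} + 20 \times 1.5\,\text{m}}{10 + 20} = \frac{18\,\text{m} + 30\,\text{m}}{30} = 1.6\,\text{m},
  \]

  whereas treating the two group averages as equally weighted gives the misleading \((1.8 + 1.5)/2 = 1.65\) m.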

 
