Programming the Universe

by Seth Lloyd


  Feynman conjectured, and I proved, that quantum computers could function as universal quantum simulators whose dynamics could be the analog of any desired physical dynamics. The quantum simulation takes place in a simple and direct way. First, map the pieces of the quantum system to be simulated onto sets of quantum bits; each part of the system to be simulated gets just enough qubits to capture its dynamics. Second, map the interactions between the parts of the system onto quantum logic operations that act on the qubits corresponding to the parts. The universal nature of quantum logic operations ensures that any desired dynamics can be captured by this mapping.
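
  To make the two-step mapping concrete, here is a minimal, purely illustrative sketch in Python (NumPy and SciPy): a hypothetical two-spin system with an Ising coupling J and a transverse field h is mapped onto two qubits, and its interactions become operators acting on those qubits. The particular system, the parameter values, and the evolution time are my own assumptions for illustration, and the sketch runs on a classical computer, so it shows the mapping itself rather than any quantum speedup.

```python
# Illustrative sketch (assumed toy system, not from the text):
# Step 1 - each spin-1/2 particle gets one qubit.
# Step 2 - the interactions become operators acting on those qubits.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli-X
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli-Z

def kron(*ops):
    """Tensor product of single-qubit operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

J, h = 1.0, 0.5                                          # assumed coupling and field strengths
H = -J * kron(Z, Z) - h * (kron(X, I2) + kron(I2, X))    # two-qubit Hamiltonian

t = 0.3                                  # assumed evolution time (hbar = 1)
U = expm(-1j * H * t)                    # the dynamics a quantum simulator would enact
psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0                            # both spins "up"
psi_t = U @ psi0
print(np.round(np.abs(psi_t) ** 2, 3))   # probabilities of the four basis states
```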

  Quantum simulation is not just a theoretical notion; it has been performed experimentally, as has Peter Shor’s factoring algorithm. Unlike Shor’s algorithm, however, which has so far only been shown to factor 15, quantum simulation has been performed on a scale that could never be replicated by a classical computer. Over the past several years, David Cory’s group at MIT has performed quantum simulations involving billions and billions of qubits. These quantum simulators are crystals of calcium fluoride (I like to call them “weapons-grade toothpaste”) about a centimeter across, with an eerie light-purple color stemming from trace amounts of other types of atoms. Each crystal contains more than a billion billion atoms. Using the techniques of NMR quantum computation to manipulate the nuclear spins in the crystals, Cory has made these nuclear spins adopt a wide variety of interactions. Most represent interactions not found in nature. To simulate such artificial quantum dynamics on a conventional classical computer would require the computer to perform 2 raised to the billion billion power subcomputations—rather a tall order. Cory’s quantum simulators are far more powerful than any classical computer is or ever could be.

  Cory’s quantum simulations are far and away the most impressive quantum computations to date. But when I first presented his results in my lectures, I was surprised to find that many in the audience objected to describing these massive quantum simulations as computations. The typical response was “That’s not a computation; it’s an experiment!” I had a hard time understanding this response. Of course Cory was doing an experiment, I agreed—an experiment in quantum information processing. This seemed to sway many in the audience. But even when they had agreed that Cory was performing a computation, they would accept only that it was an analog quantum computation. It was hard for them to regard these analog quantum computations as “digital” quantum computation, like the factoring and search algorithms.

  How do analog and digital computers differ? A classical analog computer manipulates continuous variables, such as voltage. This is because classical variables, such as position, velocity, pressure, and volume, are continuous, so to simulate classical dynamics an analog computer has to be continuous, too. A classical digital computer deals with discrete quantities, because bits are discrete; a classical digital computer can deal with continuous quantities but only by discretizing them.
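
  As a small illustration of that last point, here is a hypothetical sketch in Python of how a digital machine handles a continuous quantity such as a voltage: it can only round the value to one of a finite number of discrete levels. The function name and parameters are my own, purely for illustration.

```python
# Illustrative sketch: discretizing a continuous voltage into an n-bit code.
def quantize(voltage, n_bits=8, v_max=1.0):
    levels = 2 ** n_bits                     # an n-bit register has 2^n distinct values
    code = int(voltage / v_max * levels)     # round down to the nearest level
    return min(max(code, 0), levels - 1)     # clamp to the representable range

print(quantize(0.637))          # 163: the voltage is stored as level 163 of 256
print(quantize(0.637) / 256)    # 0.63671875: recovered only to ~1/256 volt accuracy
```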

  In a quantum computer, however, there is no distinction between analog and digital computation. Quanta are by definition discrete, and their states can be mapped directly onto the states of qubits without approximation. But qubits are also continuous, because of their wave nature; their states can be continuous superpositions. Analog quantum computers and digital quantum computers are both made up of qubits, and analog quantum computations and digital quantum computations both progress by arranging logic operations between those qubits. Our classical intuition tells us that analog computation is intrinsically continuous and digital computation is intrinsically discrete. As with many other classical intuitions, this one is incorrect when applied to quantum computation. Analog quantum computers and digital quantum computers are one and the same device.

  Simulation vs. Reality

  The question of the difference between simulation and reality is an ancient one. In the sixth century B.C., in the first lines of the Tao Te Ching, Lao-tzu wrote of the problem inherent in describing reality: “The way that can be followed is not the true way. The name that can be named is not the true name.” The original Chinese text of the Tao Te Ching is highly compact and susceptible to 10,000 interpretations, but Lao-tzu seems to be suggesting that by naming things—by assigning meaning to words—we introduce artificial distinctions that fail to capture the underlying wholeness of the universe. (To reduce the thought to bumper-sticker level: “Don’t say it. Be it.”) Here is a less literal translation of the same passage, by the philosopher Archie Bahm: “Nature can never be completely described, for such a description of nature would have to duplicate nature.” That is, a perfect description of the universe is indistinguishable from the universe itself.

  Let’s look at what happens when we apply Lao-tzu’s dictum to a quantum computer that is simulating the universe. As we will see, the universe—or at least the accessible part of the universe—is finite in space and in time. All the pieces of the accessible part of the universe can in principle be mapped onto a finite number of qubits. Similarly, the physical dynamics of the universe, consisting of the interactions between those pieces, can be mapped onto logic operations that act on those qubits.

  This is not to say that we know exactly how to do this mapping. We know how to map the behavior of elementary particles onto qubits and logic operations. That is, we know how the Standard Model of particle physics—a model describing our world to superb precision—can be mapped into a quantum computer. But we don’t yet know how the behavior of gravity can be mapped into a quantum computer, for the simple reason that physicists have not yet arrived at a complete theory of quantum gravity. We do not know how to simulate the universe yet, but we may know soon.

  Now apply the Tao Te Ching. A quantum computer that simulated the universe would have exactly as many qubits as there are in the universe, and the logic operations on those qubits would exactly simulate the dynamics of the universe. Such a quantum computer would be a physical embodiment of the Marquis Pierre-Simon de Laplace’s demon: it would simulate the behavior of the universe as a whole. Such a quantum computation would constitute a complete description of nature, and so would be indistinguishable from nature. Thus, at bottom, the universe can be thought of as performing a quantum computation. Likewise, because the behavior of elementary particles can be mapped directly onto the behavior of qubits interacting via logic operations, a simulation of the universe on a quantum computer is indistinguishable from the universe itself.

  The conventional view is that the universe is nothing but elementary particles. That is true, but it is equally true that the universe is nothing but bits—or rather, nothing but qubits. Mindful that if it walks like a duck and quacks like a duck then it’s a duck, from this point on we’ll adopt the position that since the universe registers and processes information like a quantum computer and is observationally indistinguishable from a quantum computer, then it is a quantum computer.

  The History of the Computational Universe

  I have not been able to find any descriptions of the universe as a computer before the twentieth century. The ancient Greek atomists certainly thought of the universe as being constructed out of the interaction of tiny pieces, but they did not explicitly think of their atoms as processing information. Laplace conceived of his demon, able to calculate the entire future of the universe, as an abstract being, not as the universe itself. (Laplace did not call his being a demon; I rather think he thought his being was divine.) Charles Babbage seems to have been unconcerned about using his calculating machine as a model for physical dynamics, as was Alan Turing, although Turing was interested in the origins of patterns and complexity and did significant research on the subject.

  The first explicit description I found of the universe as a computer is in Isaac Asimov’s great 1956 science fiction story, “The Last Question.” In this story, human beings create a sequence of ever larger analog computers to help them explore first their galaxy and then others. (In one Web parody of the story, the computer, which Asimov called Multivac, is renamed Google.)
  Early in the story, in the year 2061, Lupov and Adell, two of Multivac’s human co-workers, have an argument about the future of the universe and decide to ask the computer whether humanity will survive tens of billions of years hence, when the stars have all burned out. Lupov says:

  “It all had a beginning in the original cosmic explosion, whatever that was, and it’ll all have an end when all the stars run down. . . . [J]ust give us a trillion years and everything will be dark. Entropy has to increase to maximum, that’s all.” . . .

  It was Adell’s turn to be contrary. “Maybe we can build things up again someday,” he said.

  “Never.”

  “Why not? Someday.”

  “Never.”

  “Ask Multivac.”

  “You ask Multivac. I dare you. Five dollars says it can’t be done.”

  Adell was just drunk enough to try, just sober enough to be able to phrase the necessary symbols and operations into a question which, in words, might have corresponded to this: Will mankind one day without the net expenditure of energy be able to restore the sun to its full youthfulness even after it had died of old age? Or maybe it could be put more simply like this: How can the net amount of entropy of the universe be massively decreased?

  Multivac fell dead and silent. The slow flashing of lights ceased, the distant sounds of clicking relays ended. Then, just as the frightened technicians felt they could hold their breath no longer, there was a sudden springing to life of the teletype attached to that portion of Multivac. Five words were printed: INSUFFICIENT DATA FOR MEANINGFUL ANSWER.

  In the story, time moves on. As humans explore the galaxy, then other galaxies, then become immortal (it’s science fiction, after all), successive versions of Multivac become ever more powerful, eventually permeating the entire fabric of the universe. Humans continue to ask the computer variants of the same question about how to reverse the second law of thermodynamics, and all are answered the same way. Finally, when all of human intelligence, together with everything else, has been subsumed into Multivac’s final incarnation, the universal AC, the computer figures it out and says . . . “LET THERE BE LIGHT!”

  Note that in Asimov’s story, the universe gradually becomes a computer, rather than starting out as one. We are interested in how the universe began computing from the beginning. The connections between computation and physics began to be worked out in the early 1960s by Rolf Landauer at IBM. The idea that computation could take place in a way that respects the underlying information-preserving character of physical law was developed in the 1970s by Charles Bennett at IBM and Edward Fredkin, Tommaso Toffoli, and Norman Margolus at MIT. The idea that the universe might be a kind of computer was proposed in the 1960s by Fredkin and independently by Konrad Zuse, builder of some of the first programmable computers. Fredkin and Zuse suggested that the universe might be a type of classical computer called a cellular automaton, consisting of a regular array of bits interacting with their neighbors. More recently, Stephen Wolfram has extended and elaborated Fredkin’s and Zuse’s ideas.
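
  For readers who have not met one, here is a minimal sketch in Python of the sort of classical computer Fredkin and Zuse had in mind: an elementary cellular automaton, a row of bits in which each bit is updated from its own value and its two neighbors’ values. The choice of rule 110 and the initial pattern are my own illustrative assumptions.

```python
# Illustrative sketch: an elementary cellular automaton (a regular array of bits
# interacting with their neighbors), here using rule 110 on a ring of cells.
def step(cells, rule=110):
    n = len(cells)
    new_cells = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right   # 3-bit neighborhood, 0..7
        new_cells.append((rule >> index) & 1)         # look up the new bit in the rule number
    return new_cells

cells = [0] * 31 + [1] + [0] * 31                     # a single "on" bit in the middle
for _ in range(16):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```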

  The idea of using cellular automata as a basis for a theory of the universe is an appealing one. The problem with this idea is that classical computers are bad at reproducing quantum features, such as entanglement. Moreover, as has been noted, it would take a classical computer the size of the whole universe just to simulate a very tiny quantum-mechanical piece of it. It is thus hard to see how the universe could be a classical computer such as a cellular automaton. If it is, then the vast majority of its computational apparatus is inaccessible to observation.

  Physical Limits to Computation

  Once you know about quantum mechanics and quantum computation, it is surprisingly simple to determine just how much computation any physical system can perform. Start from the fact that all physical systems register information. Consider an electron that can be detected in one of two places, “here” and “there.” An electron that can be either here or there registers one bit of information. (As Rolf Landauer said, “Information is physical.”) When the electron moves from here to there, its bit flips. In other words, whenever a physical system changes its state—whenever anything at all happens—the information that the system registered is transformed and processed. (Information processing is also physical.)

  Where electrons can be, and how they move from here to there, is governed by the laws of physics. The laws of physics determine how much information a particular physical system can register and how fast that information can be processed. Physics sets the ultimate limits to how powerful computers can be.

  In a paper entitled “Ultimate Physical Limits to Computation,” I showed how the computational capacity of any physical system can be calculated as a function of the amount of energy available to the system, together with the system’s size.11 As an example, I applied these limits to calculate the maximum computational capacity of a kilogram of matter confined to a liter of volume. A conventional laptop computer weighs about a kilogram and takes up about a liter of space. I call such a one-kilogram, one-liter computer “the ultimate laptop.” Next time you think about buying a new laptop, you should first compare it to the ultimate one.

  Figure 13. The Ultimate Laptop

  “The ultimate laptop” is a computer with a mass of one kilogram and a volume of one liter (the size of a conventional laptop computer) in which every elementary particle is put to use for the purposes of computation. The ultimate laptop can perform ten million billion billion billion billion billion logical operations per second on ten thousand billion billion billion bits.

  Just how powerful is the ultimate laptop? The first fundamental limitation to computational performance comes from energy. Energy limits speed. For example, consider our one-bit electron moving from here to there. The more energy the electron has, the faster it can move from here to there and the faster it can flip its bit.

  The maximum rate at which a bit can flip is governed by a useful theorem called the Margolus-Levitin theorem. Norm Margolus, as noted, is one of the pioneers in the physics of computation; working with his MIT advisor, Tommaso Toffoli, he showed that simple physical systems such as colliding atoms can perform universal digital computation. Lev Levitin of Boston University was one of the first scientists to use the laws of physics to calculate the capacity of communication channels such as fiber-optic cables for transferring information. They combined forces to publish their theorem in 1998.12

  The Margolus-Levitin theorem says that the maximum rate at which a physical system (an electron, for example) can move from one state to another is proportional to the system’s energy; the more energy available, the smaller the amount of time required for the electron to go from here to there. The theorem is very general. It doesn’t care what system is registering and processing the information; it cares only how much energy the system has available to process that information. For example, consider the atoms and electrons in my computer. They are all at a temperature slightly higher than room temperature. Each atom and electron is jiggling around, and the amount of energy associated with a typical jiggle is the same for an atom as for an electron. The energy per jiggle is simply proportional to temperature, independent of whether it’s an atom or an electron doing the jiggling. Thus the rate at which an electron inside a computer can move from one state to another—from here to there, or from 0 to 1—is the same as the rate at which an atom can move from one state to another. Electrons and atoms flip their bits at the same rate.

  The Margolus-Levitin theorem supplies a method for calculating the maximum rate at which a bit can flip. Take the amount of energy available to flip the bit, multiply by 4, and divide by Planck’s constant. The result is the number of times per second that the bit can flip. Applying this method to the atoms and electrons in my computer, we find that each jiggling atom and electron flips its bit about 30 trillion (30 × 10¹²) times per second.
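
  A quick back-of-the-envelope check in Python reproduces that figure, under the assumption stated above that the energy per thermal jiggle is roughly Boltzmann’s constant times room temperature.

```python
# Margolus-Levitin bound applied to a thermal "jiggle" at about room temperature.
k_B = 1.381e-23    # Boltzmann's constant, joules per kelvin
h = 6.626e-34      # Planck's constant, joule-seconds
T = 300            # roughly room temperature, kelvin

E_jiggle = k_B * T                          # ~4e-21 joules per jiggling atom or electron
max_flips_per_second = 4 * E_jiggle / h     # take the energy, multiply by 4, divide by h
print(f"{max_flips_per_second:.1e}")        # ~2.5e13, i.e. tens of trillions of flips per second
```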

  The rate at which atoms and electrons flip their bits is typically much faster than the rate at which a conventional computer flips its bits. The computer upon which I type devotes a billion times more energy to charging and discharging the capacitors that register its bits than do atoms and electrons to jiggling and flipping theirs. But my computer operates 10,000 times more slowly than atoms do. The slowness of my computer doesn’t contradict the Margolus-Levitin theorem. The theorem gives only an upper limit to how fast a bit can flip. A bit can flip more slowly than the maximum rate allowed by the theorem. A quantum computer, however, always flips its bits at the maximum rate.

  The Margolus-Levitin theorem sets limits on the number of elementary operations (ops) that a bit can perform per second. Suppose we keep the amount of energy available for flipping bits the same, but now we divide the energy between two bits. Each of these two bits has half the energy of our original bit and can flip at half the rate. The total number of flips per second, however, remains the same.

  If we divide the amount of available energy among ten bits, each will take ten times as long to flip, but the total number of flips per second will remain constant. Just as it is indifferent to the size of the system involved, the theorem doesn’t care how the available energy is allocated. The maximum number of ops per second is given by the available energy E multiplied by 4 and divided by Planck’s constant: 4E/h.
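
  The same arithmetic gives the ultimate laptop its speed. Assuming, as in the “Ultimate Physical Limits to Computation” calculation, that the kilogram’s full rest-mass energy E = mc² is counted as available for flipping bits, the sketch below lands at the enormous scale quoted in Figure 13; this is only an order-of-magnitude check, not an exact reproduction of the figure’s wording.

```python
# Order-of-magnitude check: total ops per second for one kilogram of matter,
# assuming all of its rest-mass energy E = m c^2 is available for computation.
m = 1.0            # kilograms
c = 2.998e8        # speed of light, meters per second
h = 6.626e-34      # Planck's constant, joule-seconds

E = m * c ** 2                     # ~9e16 joules
ops_per_second = 4 * E / h         # Margolus-Levitin: 4E/h total ops per second
print(f"{ops_per_second:.1e}")     # ~5.4e50 logical operations per second
```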

 
