Programming the Universe


by Seth Lloyd


  Take one qubit in a superposition of 0 and 1. This qubit registers 0 and 1 at the same time, by the ordinary laws of quantum mechanics. Now let this qubit interact with another qubit in the state 0—for example, by undergoing a controlled-NOT operation on the second qubit with the first qubit as control. The two qubits taken together are now in a superposition of 00 and 11: the quantum information in the first qubit has infected the second qubit. As a result of the interaction, however, the first qubit taken on its own behaves as if it registered either 0 or 1 but not both; that is, the interaction has decohered the first qubit.
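
  For readers who want to see the bookkeeping, here is a minimal sketch of this two-qubit example in Python with NumPy (an added illustration, not from the text): the first qubit is put into an equal superposition, a controlled-NOT copies its value onto a second qubit initialized to 0, and the first qubit's reduced density matrix comes out with no off-diagonal terms, which is the signature of decoherence.

```python
# Sketch of the two-qubit decoherence example (illustrative only).
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Qubit A in the superposition (|0> + |1>)/sqrt(2), qubit B in |0>.
a = (ket0 + ket1) / np.sqrt(2)
b = ket0
state = np.kron(a, b)            # joint state of the pair

# Controlled-NOT: flip B when A is 1.  Basis order: |00>, |01>, |10>, |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
state = CNOT @ state             # now (|00> + |11>)/sqrt(2)

# Density matrix of the pair, then trace out qubit B to look at A alone.
rho = np.outer(state, state)
rho = rho.reshape(2, 2, 2, 2)    # indices: A, B, A', B'
rho_A = np.trace(rho, axis1=1, axis2=3)

print(np.round(rho_A, 3))        # [[0.5, 0.0], [0.0, 0.5]]: A now behaves
                                 # like a fair coin, 0 or 1 but no longer both.
```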

  As more and more interactions between qubits take place, quantum information that is initially localized in individual qubits spreads out among many qubits. As this epidemic of shared quantum information grows, the qubits decohere. As they decohere (one history no longer having an effect on the other), we can say that a particular region has either a higher energy density or a lower energy density. In the language of decoherent histories, we can start to talk about the energy density of the universe at the dinner table.

  The next step in the computational universe is a crucial one. Recall that gravity responds to the presence of energy. Where the energy density is higher, the fabric of spacetime begins to curve a little more. As the fluctuations in energy density decohere, gravity responds to the fluctuations in the energy of the quantum bits by clumping matter together in the “1” component of the superposition.

  In the computational-universe model of quantum gravity, the clumping occurs in a natural fashion: the contents of the underlying quantum computation determine the structure of spacetime, including its curvature. So a component of the superposition with a 1 automatically induces higher curvature than a component with a 0. When the qubit decoheres, so that it now reads either 0 or 1 but not both, the curvature of spacetime is either higher (in the 1 component) or lower (in the 0 component), but not both. In the computational universe, when qubits decohere and begin to behave more classically, gravity also begins to behave classically.

  This mechanism for decoherence stands in contrast to other theories of quantum gravity, in which the gravitational interaction itself decoheres the qubits. No matter which theory of quantum gravity you adopt, however, the picture of the universe at this early stage is basically the same. Bits are being created and beginning to flip. Gravity responds to those bit flips by clumping matter about the 1’s. Quantum bits are decohering, and random sequences of 0’s and 1’s are being injected into the universe. The computation is off and running.

  In addition to making the earth on which we walk, gravitational clumping supplies the raw material necessary for generating complexity. As matter clumps together, the energy that matter contains becomes available for use; the calories we consume to stay alive owe their origin to the gravitational clumping that formed the sun and made it shine. Gravitational clumping in the very early universe is responsible for the large-scale structure of galaxies and clusters of galaxies.

  This initial revolution in information processing was followed by a sequence of further revolutions: life, sexual reproduction, brains, language, numbers, writing, printing, computing, and whatever comes next. Each successive information-processing revolution arises from the computational machinery of the previous revolution. In terms of complexity, each successive revolution inherits virtually all of the logical and thermodynamic depth of the previous revolution. For example, since sexual reproduction is based on life, it is at least as deep as life. Depth accumulates.

  Effective complexity, by contrast, need not accumulate: the offspring need not be more effectively complex than the parent. In the design process, repeated redesign to hone away unnecessary features can lead to designs that are less effectively complex but more efficient than their predecessors. In addition to being refined away, effective complexity can also just disappear. The effective complexity of an organism is at least as great as the information content of its genes. When species go extinct, their effective complexity is lost.

  Still, life on Earth seems to have started from a low level of effective complexity, then exploded to produce the hugely diverse and effectively complex world we see around us. The computational capacity of the universe means that logically and thermodynamically deep things necessarily evolve spontaneously. Does the computational universe spontaneously give rise to ever increasing effective complexity? Looking around, we see vast quantities of effective complexity. But does the total effective complexity necessarily increase? Or might it at some point collapse? The effective complexity of human society seems quite capable of collapsing, for example, in the event of all-out nuclear war. When the sun burns out billions of years hence, life on Earth will be over.

  How, why, and when effective complexity increases are open questions in the science of complexity. We can get a sense of the answers to these questions by looking at the mechanisms that generate effective complexity. We have defined purposeful behavior as that which allows systems to (a) get energy and (b) reproduce. The effective complexity of a living system can be defined as the number of bits of information that affect the system’s ability to consume energy and reproduce. If we add to these two behaviors a third, to reproduce with variation, then we can look at the way in which effective complexity changes over time.

  Any system, such as sexual reproduction, that consumes energy and reproduces with variation can both generate additional effective complexity and lose existing effective complexity. Of the varying copies constructed during reproduction, some will be better at consuming and reproducing than others, and those variants will come to dominate the population. Some variants will have greater effective complexity than the original system and some will have less. To the extent that greater effective complexity enhances the ability to reproduce, effective complexity will tend to grow; by contrast, if some variant can reproduce better with less effective complexity, then effective complexity can also decrease. In a diverse environment with many reproducing variants, we expect effective complexity to grow in some populations and decrease in others.
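
  A toy simulation makes the point concrete (a sketch added here, with made-up numbers): give each variant a single complexity score, let offspring drift a little from their parents, and let selection keep whichever variants score highest on a chosen fitness rule. Whether average complexity rises or falls depends entirely on whether that rule rewards or penalizes complexity.

```python
# Toy selection model: reproduction with variation, then selection.
# The fitness rule is an arbitrary assumption for this demo.
import random

population = [10.0] * 100        # each individual is just a complexity score

def fitness(complexity, reward_complexity=True):
    # Assumption: extra complexity either helps or hurts reproduction.
    return complexity if reward_complexity else 1.0 / complexity

for generation in range(200):
    # Reproduce with variation: offspring complexity drifts a little.
    offspring = [max(1.0, c + random.gauss(0, 0.5)) for c in population]
    # Selection: keep the 100 fittest individuals.
    population = sorted(offspring + population,
                        key=lambda c: fitness(c, reward_complexity=True),
                        reverse=True)[:100]

print(sum(population) / len(population))   # drifts upward; flip the flag and it falls
```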

  All living systems consume energy and reproduce with variation, but such reproducing systems need not be alive. At the very beginning of the universe, the cosmological process called inflation produced new space and new free energy at a great rate. Each volume of space spawned new volumes by doubling in size every tiny fraction of a second. Space itself reproduced. Variation was supplied by quantum fluctuations (those monkeys): As space reproduced, each offspring volume was slightly different from its parent volume. As matter started to clump together around regions of greater density, those regions accumulated more free energy at the expense of other, less dense regions. Billions of years later, the Earth formed in one of those regions of higher density. And billions of years after that, some piece of Earth evolved into us.

  Life Begins

  Biologists know a huge amount about how living systems work; ironically, they know less about how life began than cosmologists know about the beginning of the universe. The date of the Big Bang and its location (everywhere) are known to a higher degree of precision than the date and location, let alone the procedural details, of the origin of life. What is known is that life first appeared on Earth almost 4 billion years ago. It might have originated here or it might have originated elsewhere and been transported here.

  Wherever life began, how did it begin? The answer to this question is the subject of hot debate. Here is one scenario.

  We have seen that the laws of physics allow computation at the scale of atoms, electrons, photons, and other elementary particles. Because of this computational universality, systems at larger scales are also computationally universal. You, I, and our computers are all capable of the same basic computation. Computation can also take place at the scale just above the atomic. Atoms combine to form molecules, and chemistry is the science that describes how atoms combine, recombine, and dissociate. Simple chemical systems are also capable of computation.

  How does chemistry compute? Imagine a container, such as a small pore in a rock, filled with various chemicals. At the beginning of our chemical computation, some of the chemicals have high concentrations. You can think of these chemicals as bits that read 1. Others have low concentrations: these read 0. Just where the boundary between high and low concentration lies is not particularly important for our purposes.

  These chemicals react with one another. Some that started out in a high concentration are depleted; the bits corresponding to these chemicals go from 1 to 0. Some that started out in a low concentration go to a high concentration; these bits go from 0 to 1. As the chemical reactions proceed, some bits flip while others remain the same.

  This sounds promising. After all, a computation is just bits flipping in a systematic fashion. In order to show that a chemical reaction can perform a universal computation, all we have to do is show that it can perform AND, NOT, and COPY operations.

  Let’s start with COPY. Suppose that chemical A enhances the production of chemical B, so that without lots of A around, the level of B remains low. If there is a low concentration of A and a low concentration of B, then the concentrations of A and B remain low. If the bit corresponding to A is 0 initially, as is the bit corresponding to B, then these bits remain 0. That is, 00 → 00. Similarly, if there is a high concentration of A and a low concentration of B to begin with, then the chemical reaction gives rise to a high concentration of A together with a high concentration of B. That is, if the bit corresponding to A is 1 initially, and the bit corresponding to B is 0, then these bits both end up 1. 10 → 11. The reaction has performed a COPY operation. The bit corresponding to A is what it was before the reaction and the bit corresponding to B is now a copy of the bit corresponding to A. Note that in this process, A has an effect on whether or not B is produced, but A itself is not consumed in the reaction; in chemical terms, A is called a catalyst for the production of B.

  NOT is produced in a similar fashion. Suppose that instead of enhancing the production of B, the presence of A inhibits the production of B. In this case, the reaction leads to B’s bit being the opposite of A’s bit; that is, B’s bit is the logical NOT of A’s bit.

  How about AND? Suppose chemical C goes from a low concentration to a high concentration if and only if there are high concentrations of A and B around. Then a reaction that starts out with C in low concentration (its bit is initially 0) leads to a high concentration of C if and only if both A and B are in a high concentration (i.e., if and only if A’s bit and B’s bit are both 1). After the reaction, C’s bit is the logical AND of A’s and B’s bits.

  Chemical reactions can readily produce AND, NOT, and COPY operations. By adding more chemicals to the set, such logic operations combine to produce a set of reactions corresponding to any desired logic circuit. Thus, chemical reactions are computationally universal.
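
  To make the gate constructions above concrete, here is a toy sketch in Python (an added illustration; the threshold and concentration values are arbitrary stand-ins): a concentration above a threshold reads as 1, below it as 0, and simple rules standing in for catalysis, inhibition, and a two-ingredient reaction reproduce COPY, NOT, and AND, which can then be wired into a small circuit.

```python
# Concentrations as bits, with rules standing in for chemical reactions.
THRESHOLD = 0.5

def bit(concentration):
    """Read a concentration as a logical bit."""
    return 1 if concentration > THRESHOLD else 0

def copy_gate(a):
    """A catalyzes production of B, so B ends up tracking A."""
    return 0.9 if bit(a) else 0.1

def not_gate(a):
    """A inhibits production of B, so B ends up as the opposite of A."""
    return 0.1 if bit(a) else 0.9

def and_gate(a, b):
    """C is produced only when both A and B are abundant."""
    return 0.9 if bit(a) and bit(b) else 0.1

# Compose the gates into a tiny "circuit": C = A AND (NOT B).
A, B = 0.8, 0.2                      # high A, low B
C = and_gate(A, not_gate(B))
print(bit(A), bit(B), bit(C))        # prints: 1 0 1
```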

  In general, as the chemicals in the pore in the rock react, some are catalysts for the initial set of reactions and some of the products of these initial reactions are catalysts for yet further reactions. Such a process is called an “autocatalytic set of reactions”: each reaction produces catalysts for other reactions within the set. Autocatalytic sets of reactions are powerful systems. In addition to computing, they can produce a wide variety of chemical outputs. In effect, an autocatalytic set of reactions is like a tiny, computer-controlled factory for producing chemicals. Some of these chemicals are the constituents of life.

  Did life begin as an autocatalytic set? Maybe so. We won’t know for sure until we identify the circuit diagram and the program for the autocatalytic set that first started producing cells and genes. The computational universality of autocatalytic sets tells us that some such program exists, but it doesn’t tell us that such a program is simple or easy to find.

  Many Worlds, Again

  In his 1997 book The Fabric of Reality, the physicist David Deutsch eloquently defends the Many Worlds theory of quantum mechanics in terms of quantum computation. Before wrapping up, let’s briefly look at the senses in which other worlds, à la Deutsch and Borges, can exist.

  The universe we see around us corresponds to just one of a set of decoherent histories; that is, what we actually see when we look out the window is just one component of the superposition of states that make up the overall quantum state of the universe. The other components of this state correspond to “other worlds,” worlds in which the tosses of the quantum dice turned out differently. The set of all possible worlds together constitutes the multiverse. I’ll leave it up to the reader to decide whether or not these other worlds exist in the same way as ours does. Whether they exist or not, as long as they are decoherent, these worlds can have no effect on our own.

  Note that our history is effectively complex. Like other histories in the decoherent set, ours is the result of many, many throws of the quantum dice. (About 10⁹², to be exact.) Nonetheless, the overall quantum state of the universe remains simple: the universe begins in a simple state and evolves according to simple laws.

  How can our history, which is just part of the entire state of the universe, be more effectively complex than the whole? There is nothing particularly paradoxical about it: the set of all billion-bit numbers is simple to describe, but almost any given number in the set requires a billion bits to describe. The same principle holds for the state of the multiverse. A given component of the superposition can require about 10⁹² bits to describe, while the state as a whole requires only a few. In the case of the computational universe, the overall state is simple: the multiverse is performing all possible computations in quantum parallel. But to specify any single one of those computations requires summoning the bits that correspond to the program for that computation. A given computation may take many bits to specify.
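
  A small illustration of that principle (added here, with 64 bits standing in for a billion): the set of all n-bit numbers can be written down in one short line, but a typical member of the set, drawn at random, has no description much shorter than its own n bits.

```python
# The set is simple to describe; a typical member is not.
import random

n = 64   # stands in for "a billion" so the demo runs instantly

def all_n_bit_numbers():
    return range(2 ** n)          # the whole set, described in a few characters

# One typical member: to pin it down you generally must spell out all its bits.
x = random.getrandbits(n)
print(f"one member of the set: {x:0{n}b}")
print(f"its shortest obvious description is its own {n} bits")
```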

  As the multiverse computes, every possible computation is represented in quantum parallel in its overall state. The probability of any given computation is equal to the probability that monkeys type out its program. According to the Church-Turing hypothesis, every possible mathematical structure is represented in some component of the superposition. One such mathematical structure is the structure we see around us, every detail that we observe, including the laws of physics, chemistry, and biology. In other components of the superposition, the details are different. In some component, everything else is the same, but I have brown eyes instead of blue. In some component, it may even be that some features of the Standard Model for elementary particles, such as the masses of quarks, are different from those in other components of the superposition.

  There is a second way that all possible mathematical structures can be generated. Current observational evidence suggests that the universe is spatially infinite: it extends forever outside of the horizon. If this is the case, then somewhere, sometime, it will generate every possible mathematical structure. These structures can come to exist within our branch of the superposition; at some point in the future, they will come within our horizon and can affect us. Somewhere out there, there are exact copies of you and me. Somewhere else, the copies exist but are imperfect: I have brown eyes instead of blue. At some point in the future, information about these distant copies will enter our horizon. But the stars will have gone out long before. As Boltzmann might have said, if you’re interested in communicating with such other worlds, don’t hold your breath.

  By contrast, if you’re interested in communicating with life on other planets, then you might be in luck. For the same reason that we know that the laws of physics support computation (we have computers), we know that they support life (we are alive). But we don’t know the probability of life arising spontaneously on some other planet, nor do we know the probability that life, once established on one planet, could be transported to another. The chances of communicating with living creatures from another planet depend crucially on these odds. Someday we may know enough about how life arose to calculate them; until then, you have to ask yourself, “Do I feel lucky?”

  The Future

  How long can computation continue in the universe? Current observational evidence suggests that the universe will expand forever. As it expands, the number of ops performed and the number of bits available within the horizon will continue to grow. Entropy will also increase, but—because as the universe gets bigger it takes longer and longer to reach thermal equilibrium—actual entropy will increase at a slower rate than maximum possible entropy. As a result, the number of calories of free energy available for consumption within the horizon will increase.

  So far, the news seems good. The problem is that while the total amount of free energy within the horizon continues to grow, the density of free energy—the amount of free energy per cubic meter—is decreasing. That is, there are more calories out there, but they’re getting harder and harder to collect. Trillions of years from now, the stars will have burned through their store of nuclear fuel. At that point, our descendants, should they still be around, could harvest energy by collecting matter and converting it into usable energy, a strategy analyzed in detail by Steven Frautschi of Caltech.15 The maximum amount of free energy that could be extracted is E = mc², where m is the mass of the collected matter. (Of course, some fraction of the energy will be lost due to inefficiency of extraction.)
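
  As a rough check on the scale involved (added arithmetic, not from the text): one kilogram of collected matter, converted perfectly, yields about 9 × 10¹⁶ joules, roughly three years of output from a gigawatt-scale power plant.

```python
# Back-of-the-envelope E = mc^2 for one kilogram of scavenged matter
# (illustrative numbers only; real extraction would be far less efficient).
c = 2.998e8        # speed of light in m/s
m = 1.0            # mass of collected matter in kilograms

E = m * c ** 2                 # maximum extractable energy in joules
years = E / (1e9 * 3.15e7)     # divided by a 1-gigawatt plant's yearly output

print(f"{E:.2e} J, about {years:.1f} years of a 1 GW plant")
```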

  By scavenging farther and farther afield, our descendants will collect more and more matter and extract its energy. Some fraction of this energy will inevitably be wasted or lost in transmission. Some cosmological models allow the continued collection of energy ad infinitum, but others do not.16

 
