
The Singularity Is Near: When Humans Transcend Biology


by Ray Kurzweil


  The Limits of Computation

  If a most efficient supercomputer works all day to compute a weather simulation problem, what is the minimum amount of energy that must be dissipated according to the laws of physics? The answer is actually very simple to calculate, since it is unrelated to the amount of computation. The answer is always equal to zero.

  —EDWARD FREDKIN, PHYSICIST45

  We’ve already had five paradigms (electromechanical calculators, relay-based computing, vacuum tubes, discrete transistors, and integrated circuits) that have provided exponential growth in the price-performance and capabilities of computation. Each time a paradigm reached its limits, another paradigm took its place. We can already see the outlines of the sixth paradigm, which will bring computing into the molecular third dimension. Because computation underlies the foundations of everything we care about, from the economy to human intellect and creativity, we might well wonder: are there ultimate limits to the capacity of matter and energy to perform computation? If so, what are these limits, and how long will it take to reach them?

  Our human intelligence is based on computational processes that we are learning to understand. We will ultimately multiply our intellectual powers by applying and extending the methods of human intelligence using the vastly greater capacity of nonbiological computation. So to consider the ultimate limits of computation is really to ask: what is the destiny of our civilization?

  A common challenge to the ideas presented in this book is that these exponential trends must reach a limit, as exponential trends commonly do. When a species happens upon a new habitat, as in the famous example of rabbits in Australia, its numbers grow exponentially for a while. But it eventually reaches the limits of that environment’s ability to support it. Surely the processing of information must have similar constraints. It turns out that, yes, there are limits to computation based on the laws of physics. But these still allow for a continuation of exponential growth until nonbiological intelligence is trillions of trillions of times more powerful than all of human civilization today, contemporary computers included.

  A major factor in considering computational limits is the energy requirement. The energy required per MIPS for computing devices has been falling exponentially, as shown in the following figure.46

  However, we also know that the number of MIPS in computing devices has been growing exponentially. The extent to which improvements in power usage have kept pace with processor speed depends on the extent to which we use parallel processing. A larger number of less-powerful computers can inherently run cooler because the computation is spread out over a larger area. Processor speed is related to voltage, and the power required is proportional to the square of the voltage. So running a processor at a slower speed significantly reduces power consumption. If we invest in more parallel processing rather than faster single processors, it is feasible for energy consumption and heat dissipation to keep pace with the growing MIPS per dollar, as the figure “Reduction in Watts per MIPS” shows.
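
  To make this trade-off concrete, here is a minimal sketch, assuming the standard first-order dynamic-power model (power proportional to capacitance times the square of the voltage times the clock frequency) and a roughly linear relationship between a core’s voltage and its attainable frequency; the capacitance, voltage, and clock figures are illustrative assumptions, not values from the text.

```python
def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    """First-order dynamic power of a CMOS processor: P = C * V^2 * f."""
    return capacitance_f * voltage_v ** 2 * frequency_hz

# One fast core: 3 GHz at 1.2 V (hypothetical figures).
single = dynamic_power(capacitance_f=1e-9, voltage_v=1.2, frequency_hz=3e9)

# Four slower cores delivering the same aggregate 3 GHz of cycles,
# each at 750 MHz and a proportionally lower 0.3 V supply.
parallel = 4 * dynamic_power(capacitance_f=1e-9, voltage_v=0.3, frequency_hz=0.75e9)

print(f"one fast core  : {single:.2f} W")    # 4.32 W
print(f"four slow cores: {parallel:.2f} W")  # 0.27 W
# Because power falls with the square of the voltage, the parallel design
# performs the same number of operations per second for a fraction of the power.
```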

  This is essentially the same solution that biological evolution developed in the design of animal brains. Human brains use about one hundred trillion computers (the interneuronal connections, where most of the processing takes place). But these processors are very low in computational power and therefore run relatively cool.

  Until just recently Intel emphasized the development of faster and faster single-chip processors, which have been running at increasingly high temperatures. Intel is gradually changing its strategy toward parallelization by putting multiple processors on a single chip. We will see chip technology move in this direction as a way of keeping power requirements and heat dissipation in check.47

  Reversible Computing. Ultimately, organizing computation with massive parallel processing, as is done in the human brain, will not by itself be sufficient to keep energy levels and resulting thermal dissipation at reasonable levels. The current computer paradigm relies on what is known as irreversible computing, meaning that we are unable in principle to run software programs backward. At each step in the progression of a program, the input data is discarded—erased—and the results of the computation pass to the next step. Programs generally do not retain all intermediate results, as that would use up large amounts of memory unnecessarily. This selective erasure of input information is particularly true for pattern-recognition systems. Vision systems, for example, whether human or machine, receive very high rates of input (from the eyes or visual sensors) yet produce relatively compact outputs (such as identification of recognized patterns). This act of erasing data generates heat and therefore requires energy. When a bit of information is erased, that information has to go somewhere. According to the laws of thermodynamics, the erased bit is essentially released into the surrounding environment, thereby increasing its entropy, which can be viewed as a measure of information (including apparently disordered information) in an environment. This results in a higher temperature for the environment (because temperature is a measure of entropy).
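
  As a rough illustration of the heat cost of discarding information, the sketch below applies the thermodynamic floor on erasure (at least kT ln 2 joules of heat per erased bit, the bound discussed in the next paragraphs) to a hypothetical vision system; the input bit rate is an assumption made for illustration, not a figure from the text.

```python
import math

BOLTZMANN = 1.380649e-23  # Boltzmann's constant, joules per kelvin

def landauer_heat(bits_erased, temperature_k=300.0):
    """Minimum heat (joules) released when `bits_erased` bits are erased at temperature_k."""
    return bits_erased * BOLTZMANN * temperature_k * math.log(2)

# Hypothetical vision system discarding a billion bits of raw input per second.
print(f"{landauer_heat(1e9):.2e} J/s")  # ~2.9e-12 watts at room temperature
```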

  If, on the other hand, we don’t erase each bit of information contained in the input to each step of an algorithm but instead just move it to another location, that bit stays in the computer, is not released into the environment, and therefore generates no heat and requires no energy from outside the computer.

  Rolf Landauer showed in 1961 that reversible logical operations such as NOT (turning a bit into its opposite) could be performed without putting energy in or taking heat out, but that irreversible logical operations such as AND (generating bit C, which is a 1 if and only if both inputs A and B are 1) do require energy.48 In 1973 Charles Bennett showed that any computation could be performed using only reversible logical operations.49 A decade later, Ed Fredkin and Tommaso Toffoli presented a comprehensive review of the idea of reversible computing.50 The fundamental concept is that if you keep all the intermediate results and then run the algorithm backward when you’ve finished your calculation, you end up where you started, have used no energy, and generated no heat. Along the way, however, you’ve calculated the result of the algorithm.
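
  The distinction Landauer drew can be seen directly in the truth tables: NOT is a bijection on {0, 1}, so its input is always recoverable from its output, while AND collapses several input pairs onto one output and so destroys information. A minimal sketch:

```python
def not_gate(a: int) -> int:
    # Reversible: the output determines the input uniquely.
    return 1 - a

def and_gate(a: int, b: int) -> int:
    # Irreversible: several input pairs share the same output.
    return a & b

# NOT undoes itself, so no information is lost.
assert all(not_gate(not_gate(a)) == a for a in (0, 1))

# Three different input pairs all map to an AND output of 0,
# so the inputs cannot be reconstructed from the output alone.
print([(a, b) for a in (0, 1) for b in (0, 1) if and_gate(a, b) == 0])
# [(0, 0), (0, 1), (1, 0)]
```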

  How Smart Is a Rock? To appreciate the feasibility of computing with no energy and no heat, consider the computation that takes place in an ordinary rock. Although it may appear that nothing much is going on inside a rock, the approximately 10²⁵ (ten trillion trillion) atoms in a kilogram of matter are actually extremely active. Despite the apparent solidity of the object, the atoms are all in motion, sharing electrons back and forth, changing particle spins, and generating rapidly moving electromagnetic fields. All of this activity represents computation, even if not very meaningfully organized.

  We’ve already shown that atoms can store information at a density of greater than one bit per atom, such as in computing systems built from nuclear magnetic-resonance devices. University of Oklahoma researchers stored 1,024 bits in the magnetic interactions of the protons of a single molecule containing nineteen hydrogen atoms.51 Thus, the state of the rock at any one moment represents at least 10²⁷ bits of memory.

  In terms of computation, and just considering the electromagnetic interactions, there are at least 10¹⁵ changes in state per bit per second going on inside a 2.2-pound rock, which effectively represents about 10⁴² (a million trillion trillion trillion) calculations per second. Yet the rock requires no energy input and generates no appreciable heat.

  Of course, despite all this activity at the atomic level, the rock is not performing any useful work aside from perhaps acting as a paperweight or a decoration. The reason for this is that the structure of the atoms in the rock is for the most part effectively random. If, on the other hand, we organize the particles in a more purposeful manner, we could have a cool, zero-energy-consuming computer with a memory of about a thousand trillion trillion bits and a processing capacity of 10⁴² operations per second, which is about ten trillion times more powerful than all human brains on Earth, even if we use the most conservative (highest) estimate of 10¹⁹ cps.52
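
  These rock estimates reduce to simple arithmetic; the sketch below merely reproduces the figures quoted in the text rather than adding any new physics.

```python
MEMORY_BITS = 1e27             # the rock's memory, as estimated above
CHANGES_PER_BIT_PER_S = 1e15   # state changes per bit per second
rock_cps = MEMORY_BITS * CHANGES_PER_BIT_PER_S   # ~1e42 calculations per second

BRAIN_CPS = 1e19               # the conservative (highest) per-brain estimate
HUMANS = 1e10
print(f"rock: {rock_cps:.0e} cps")
print(f"rock / all human brains: {rock_cps / (BRAIN_CPS * HUMANS):.0e}")  # ~1e13, about ten trillion
```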

  Ed Fredkin demonstrated that we don’t even have to bother running algorithms in reverse after obtaining a result.53 Fredkin presented several designs for reversible logic gates that perform the reversals as they compute and that are universal, meaning that general-purpose computation can be built from them.54 Fredkin goes on to show that the efficiency of a computer built from reversible logic gates can be designed to be very close (at least 99 percent) to the efficiency of ones built from irreversible gates. He writes:

  it is possible to . . . implement . . . conventional computer models that have the distinction that the basic components are microscopically reversible. This means that the macroscopic operation of the computer is also reversible. This fact allows us to address the . . . question . . . “what is required for a computer to be maximally efficient?” The answer is that if the computer is built out of microscopically reversible components then it can be perfectly efficient. How much energy does a perfectly efficient computer have to dissipate in order to compute something? The answer is that the computer does not need to dissipate any energy.55

  Reversible logic has already been demonstrated and shows the expected reductions in energy input and heat dissipation.56 Fredkin’s reversible logic gates answer a key challenge to the idea of reversible computing: that it would require a different style of programming. He argues that we can, in fact, construct normal logic and memory entirely from reversible logic gates, which will allow the use of existing conventional software-development methods.
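
  As an illustration of how conventional logic can be recovered from reversible gates, here is a minimal model of the Fredkin (controlled-swap) gate using the standard textbook encodings for AND and NOT; this is a generic construction offered as a sketch, not a reproduction of Fredkin’s specific designs cited above.

```python
def fredkin(c: int, x: int, y: int):
    """Controlled swap: if the control bit c is 1, exchange x and y."""
    return (c, y, x) if c == 1 else (c, x, y)

# Reversibility: the gate is its own inverse, so applying it twice
# restores every possible input triple.
for c in (0, 1):
    for x in (0, 1):
        for y in (0, 1):
            assert fredkin(*fredkin(c, x, y)) == (c, x, y)

def and_gate(a: int, b: int) -> int:
    # Feed (a, b, 0) into the gate; the third output equals a AND b.
    return fredkin(a, b, 0)[2]

def not_gate(a: int) -> int:
    # Feed (a, 0, 1); the third output equals NOT a (and the second copies a).
    return fredkin(a, 0, 1)[2]

assert [and_gate(a, b) for a in (0, 1) for b in (0, 1)] == [0, 0, 0, 1]
assert [not_gate(a) for a in (0, 1)] == [1, 0]
```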

  It is hard to overstate the significance of this insight. A key observation regarding the Singularity is that information processes—computation—will ultimately drive everything that is important. This primary foundation for future technology thus appears to require no energy.

  The practical reality is slightly more complicated. If we actually want to find out the results of a computation—that is, to receive output from a computer—the process of copying the answer and transmitting it outside of the computer is an irreversible process, one that generates heat for each bit transmitted. However, for most applications of interest, the amount of computation that goes into executing an algorithm vastly exceeds the computation required to communicate the final answers, so the latter does not appreciably change the energy equation.

  However, because of essentially random thermal and quantum effects, logic operations have an inherent error rate. We can overcome errors using error-detection and -correction codes, but each time we correct a bit, the operation is not reversible, which means it requires energy and generates heat. Generally, error rates are low. But even if errors occur at the rate of, say, one per 10¹⁰ operations, we have only succeeded in reducing energy requirements by a factor of 10¹⁰, not in eliminating energy dissipation altogether.
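
  The point can be made concrete with a rough estimate: if each error correction erases at least one bit, the average energy per operation is approximately the error rate times the minimum heat of one erasure (about kT ln 2 at temperature T). The sketch below assumes room temperature and uses the one-per-10¹⁰ rate from the text.

```python
import math

BOLTZMANN = 1.380649e-23                           # joules per kelvin
landauer_per_bit = BOLTZMANN * 300 * math.log(2)   # ~2.9e-21 J per erased bit at ~300 K

error_rate = 1e-10                                 # one error per 10^10 operations
print(f"{error_rate * landauer_per_bit:.1e} J per operation on average")  # ~2.9e-31 J: tiny, but not zero
```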

  As we consider the limits of computation, the issue of error rate becomes a significant design issue. Certain methods of increasing computational rate, such as increasing the frequency of the oscillation of particles, also increase error rates, so this puts natural limits on the ability to perform computation using matter and energy.

  Another important trend with relevance here will be the move away from conventional batteries toward tiny fuel cells (devices storing energy in chemicals, such as forms of hydrogen, which is combined with available oxygen). Fuel cells are already being constructed using MEMS (microelectromechanical systems) technology.57 As we move toward three-dimensional, molecular computing with nanoscale features, energy resources in the form of nano–fuel cells will be as widely distributed throughout the computing medium as the massively parallel processors themselves. We will discuss future nanotechnology-based energy technologies in chapter 5.

  The Limits of Nanocomputing. Even with the restrictions we have discussed, the ultimate limits of computers are profoundly high. Building on work by University of California at Berkeley professor Hans Bremermann and nanotechnology theorist Robert Freitas, MIT professor Seth Lloyd has estimated the maximum computational capacity, according to the known laws of physics, of a computer weighing one kilogram and occupying one liter of volume—about the size and weight of a small laptop computer—what he calls the “ultimate laptop.”58 The potential amount of computation rises with the available energy. We can understand the link between energy and computational capacity as follows. The energy in a quantity of matter is the energy associated with each atom (and subatomic particle). So the more atoms, the more energy. As discussed above, each atom can potentially be used for computation. So the more atoms, the more computation. The energy of each atom or particle grows with the frequency of its movement: the more movement, the more energy. The same relationship exists for potential computation: the higher the frequency of movement, the more computation each component (which can be an atom) can perform. (We see this in contemporary chips: the higher the frequency of the chip, the greater its computational speed.)

  So there is a direct proportional relationship between the energy of an object and its potential to perform computation. The potential energy in a kilogram of matter is very large, as we know from Einstein’s equation E = mc². The speed of light squared is a very large number: approximately 10¹⁷ meter²/second². The potential of matter to compute is also governed by a very small number, Planck’s constant: 6.6 × 10⁻³⁴ joule-seconds (a joule is a measure of energy). This is the smallest scale at which we can apply energy for computation. We obtain the theoretical limit of an object to perform computation by dividing the total energy (the average energy of each atom or particle times the number of such particles) by Planck’s constant.

  Lloyd shows how the potential computing capacity of a kilogram of matter equals pi times energy divided by Planck’s constant. Since the energy is such a large number and Planck’s constant is so small, this equation generates an extremely large number: about 5 × 10⁵⁰ operations per second.59
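
  The figure can be checked with a few lines of arithmetic. The sketch below evaluates both the formula as stated in the text (pi times energy divided by Planck’s constant) and the bound as published in Lloyd’s paper, 2E/(πℏ); both land in the same 10⁵⁰ range.

```python
import math

energy = 1.0 * (3.0e8) ** 2   # E = mc^2 for one kilogram of matter, ~9e16 joules
h = 6.626e-34                 # Planck's constant, joule-seconds
hbar = h / (2 * math.pi)

print(f"pi * E / h      : {math.pi * energy / h:.1e} ops/s")           # ~4e50, the text's phrasing
print(f"2E / (pi * hbar): {2 * energy / (math.pi * hbar):.1e} ops/s")  # ~5e50, Lloyd's published bound
```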

  If we relate that figure to the most conservative estimate of human brain capacity (10¹⁹ cps and 10¹⁰ humans), it represents the equivalent of about five billion trillion human civilizations.60 If we use the figure of 10¹⁶ cps that I believe will be sufficient for functional emulation of human intelligence, the ultimate laptop would function at the equivalent brain power of five trillion trillion human civilizations.61 Such a laptop could perform the equivalent of all human thought over the last ten thousand years (that is, ten billion human brains operating for ten thousand years) in one ten-thousandth of a nanosecond.62
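
  These comparisons follow directly from the figures already given. The sketch below reproduces them, assuming the 10¹⁶ cps per-brain figure for the ten-thousand-year calculation (the text does not state which estimate its endnote uses).

```python
ULTIMATE_LAPTOP_CPS = 5e50

conservative_civilization = 1e19 * 1e10   # 10^19 cps per brain x 10^10 humans
functional_civilization = 1e16 * 1e10     # 10^16 cps per brain x 10^10 humans

print(f"{ULTIMATE_LAPTOP_CPS / conservative_civilization:.0e} civilizations")  # ~5e21: five billion trillion
print(f"{ULTIMATE_LAPTOP_CPS / functional_civilization:.0e} civilizations")    # ~5e24: five trillion trillion

# Ten thousand years of thought by ten billion brains, assumed at 10^16 cps each:
total_ops = functional_civilization * (1e4 * 365.25 * 24 * 3600)   # ~3e37 operations
print(f"{total_ops / ULTIMATE_LAPTOP_CPS:.0e} seconds")            # ~6e-14 s, on the order of 1e-4 ns
```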

  Again, a few caveats are in order. Converting all of the mass of our 2.2-pound laptop into energy is essentially what happens in a thermonuclear explosion. Of course, we don’t want the laptop to explode but to stay within its one-liter dimension. So this will require some careful packaging, to say the least. By analyzing the maximum entropy (degrees of freedom represented by the state of all the particles) in such a device, Lloyd shows that such a computer would have a theoretical memory capacity of 10³¹ bits. It’s difficult to imagine technologies that would go all the way in achieving these limits. But we can readily envision technologies that come reasonably close to doing so. As the University of Oklahoma project shows, we have already demonstrated the ability to store at least fifty bits of information per atom (although only on a small number of atoms, so far). Storing 10²⁷ bits of memory in the 10²⁵ atoms in a kilogram of matter should therefore be eventually achievable.

  But because many properties of each atom could be exploited to store information—such as the precise position, spin, and quantum state of all of its particles—we can probably do somewhat better than 10²⁷ bits. Neuroscientist Anders Sandberg estimates the potential storage capacity of a hydrogen atom at about four million bits. These densities have not yet been demonstrated, however, so we’ll use the more conservative estimate.63 As discussed above, 10⁴² calculations per second could be achieved without producing significant heat. By fully deploying reversible computing techniques, using designs that generate low levels of errors, and allowing for reasonable amounts of energy dissipation, we should end up somewhere between 10⁴² and 10⁵⁰ calculations per second.

  The design terrain between these two limits is complex. Examining the technical issues that arise as we advance from 10⁴² to 10⁵⁰ is beyond the scope of this chapter. We should keep in mind, however, that the way this will play out is not by starting with the ultimate limit of 10⁵⁰ and working backward based on various practical considerations. Rather, technology will continue to ramp up, always using its latest prowess to progress to the next level. So once we get to a civilization with 10⁴² cps (for every 2.2 pounds), the scientists and engineers of that day will use their essentially vast nonbiological intelligence to figure out how to get 10⁴³, then 10⁴⁴, and so on. My expectation is that we will get very close to the ultimate limits.

  Even at 10⁴² cps, a 2.2-pound “ultimate portable computer” would be able to perform the equivalent of all human thought over the last ten thousand years (assumed at ten billion human brains for ten thousand years) in ten microseconds.64 If we examine the “Exponential Growth of Computing” chart (p. 70), we see that this amount of computing is estimated to be available for one thousand dollars by 2080.

  A more conservative but compelling design for a massively parallel, reversible computer is Eric Drexler’s patented nanocomputer design, which is entirely mechanical.65 Computations are performed by manipulating nanoscale rods, which are effectively spring-loaded. After each calculation, the rods containing intermediate values return to their original positions, thereby implementing the reverse computation. The device has a trillion (10¹²) processors and provides an overall rate of 10²¹ cps, enough to simulate one hundred thousand human brains in a cubic centimeter.
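
  As a quick consistency check on the numbers quoted for this design, using only figures that appear in the text (the 10¹⁶ cps per-brain estimate is the one used earlier in the chapter):

```python
PROCESSORS = 1e12   # a trillion rod-logic processors
TOTAL_CPS = 1e21    # overall rate quoted for the design

print(f"{TOTAL_CPS / PROCESSORS:.0e} cps per processor")  # ~1e9
print(f"{TOTAL_CPS / 1e16:.0e} human brains")             # ~1e5, i.e., one hundred thousand, at 10^16 cps per brain
```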

 
