The Pleasure of Finding Things Out

by Richard P. Feynman


  FIGURE 4

  For a transistor, the energy loss multiplied by the time it takes to operate is a product of several factors (Fig. 4):

  1. the thermal energy, proportional to temperature, kT;

  2. the length of the transistor between source and drain, divided by the velocity of the electrons inside (the thermal velocity);

  3. the length of the transistor in units of the mean free path for collisions of electrons in the transistor;

  4. the total number of electrons that are inside the transistor when it operates.

  Putting in appropriate values for all of these numbers tells us that the energy used in transistors today is somewhere between a billion and ten billion or more times the thermal energy kT. When the transistor switches, we use that much energy. This is a very large amount of energy. It is obviously a good idea to decrease the size of the transistor. If we decrease the length between source and drain and decrease the number of electrons, we use much less energy. It also turns out that a smaller transistor is much faster, because the electrons can cross it faster and make their decisions to switch faster. For every reason, it is a good idea to make the transistor smaller, and everybody is always trying to do that.
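  To make the order of magnitude concrete, here is a rough back-of-the-envelope sketch in Python. Only the product formula comes from the list above; every device number below is my own illustrative assumption for a transistor of roughly that era, not a value given in the lecture.

```python
# Rough back-of-the-envelope for the energy-time product described above.
# All device numbers are illustrative assumptions, not values from the text.

k_B = 1.38e-23            # Boltzmann constant, J/K
T = 300.0                 # room temperature, K
kT = k_B * T              # thermal energy, about 4e-21 J

length = 1e-6             # assumed source-drain length, ~1 micron
mean_free_path = 1e-8     # assumed electron mean free path, ~10 nm
v_thermal = 1e5           # assumed thermal velocity of electrons, m/s
n_electrons = 1e7         # assumed number of electrons in the device

switch_time = length / v_thermal                                  # factor 2
energy_per_switch = kT * (length / mean_free_path) * n_electrons  # factors 1, 3, 4

print(f"switch time        ~ {switch_time:.1e} s")
print(f"energy per switch  ~ {energy_per_switch:.1e} J")
print(f"in units of kT     ~ {energy_per_switch / kT:.1e}")  # ~1e9 here
```

  With these assumed numbers the energy per switch comes out near a billion kT; nudging the electron count or the mean free path easily covers the billion-to-ten-billion range quoted above.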

  But suppose we come to a circumstance in which the mean free path is longer than the size of the transistor; then we discover that the transistor doesn’t work properly anymore. It does not behave the way we expected. This reminds me of something from years ago called the sound barrier. Airplanes were supposed to be unable to go faster than the speed of sound because, if you designed them normally and then put the speed of sound into the equations, the propeller wouldn’t work, the wings wouldn’t lift, and nothing would work correctly. Nevertheless, airplanes can go faster than the speed of sound. You just have to know what the right laws are under the right circumstances, and design the device with the correct laws. You cannot expect old designs to work in new circumstances. But new designs can work in new circumstances, and I assert that it is perfectly possible to make transistor systems, or, more correctly, switching systems and computing devices, in which the dimensions are smaller than the mean free path. I speak, of course, “in principle,” and I am not speaking about the actual manufacture of such devices. Let us therefore discuss what happens if we try to make the devices as small as possible.

  Reducing the Size

  FIGURE 5

  So my third topic is the size of computing elements, and now I speak entirely theoretically. The first thing that you would worry about when things get very small is Brownian motion*—everything is shaking about and nothing stays in place. How can you control the circuits then? Furthermore, if a circuit does work, doesn’t it now have a chance of accidentally jumping back? If we use two volts for the energy of this electric system, which is what we ordinarily use (Fig. 5), that is eighty times the thermal energy at room temperature (kT corresponds to about 1/40 volt), and the chance that something jumps backward against 80 times the thermal energy is e, the base of the natural logarithm, to the power minus eighty, or about 10^-35. What does that mean? If we had a billion transistors in a computer (which we don’t yet have), all of them switching 10^10 times a second (a switching time of a tenth of a nanosecond), switching perpetually, operating for 10^9 seconds, which is 30 years, the total number of switching operations in such a machine is 10^28. The chance of one of the transistors going backward is only about 10^-35, so the expected number of errors produced by thermal oscillations in 30 years is still far below one. If you don’t like that, use 2.5 volts and the probability gets even smaller. Long before that, real failures will come when a cosmic ray accidentally goes through the transistor, and we don’t have to be more perfect than that.
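  Restating that estimate as explicit arithmetic, in a minimal sketch whose only inputs are the figures quoted above:

```python
import math

# The thermal-error estimate above, restated as explicit arithmetic.
barrier_in_kT = 80                      # 2 volts against kT ~ 1/40 volt
p_backward = math.exp(-barrier_in_kT)   # chance of one backward jump

transistors = 1e9     # a billion transistors
rate = 1e10           # switchings per second (0.1 ns per switch)
seconds = 1e9         # about 30 years
operations = transistors * rate * seconds   # 1e28 switching operations

print(f"P(backward jump)        ~ {p_backward:.1e}")              # ~1.8e-35
print(f"expected thermal errors ~ {operations * p_backward:.1e}") # ~1e-7
```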

  However, much more is in fact possible, and I would like to refer you to an article in a recent issue of Scientific American by C. H. Bennett and R. Landauer, “The Fundamental Physical Limits of Computation.”* It is possible to make a computer in which each element, each transistor, can go forward and accidentally reverse and still the computer will operate. All the operations in the computer can go forward or backward. The computation proceeds for a while one way, then undoes itself, “uncalculates,” and then goes forward again, and so on. If we just pull it along a little, we can make this computer go through and finish the calculation by making it just a little bit more likely that it goes forward than backward.

  It is known that all possible computations can be made by putting together some simple elements like transistors; or, if we want to be more logically abstract, something called a NAND gate, for example (NAND means NOT-AND). A NAND gate has two “wires” in and one out (Fig. 6). Forget the NOT for the moment. What is an AND gate? An AND gate is a device whose output is 1 only if both input wires are 1; otherwise its output is 0. NOT-AND means the opposite: the output wire reads 1 (i.e., has the voltage level corresponding to 1) unless both input wires read 1; if both input wires read 1, then the output wire reads 0 (i.e., has the voltage level corresponding to 0). Figure 6 shows a little table of inputs and outputs for such a NAND gate. A and B are inputs and C is the output. If A and B are both 1, the output is 0; otherwise it is 1. But such a device is irreversible: information is lost. If I only know the output, I cannot recover the input. The device can’t be expected to flip forward and then come back and compute correctly anymore. For instance, if we know that the output is now 1, we don’t know whether it came from A=0, B=1 or A=1, B=0 or A=0, B=0, and it cannot go back. Such a device is an irreversible gate.

  The great discovery of Bennett and, independently, of Fredkin is that it is possible to do computation with a different kind of fundamental gate unit, namely, a reversible gate unit. I have illustrated their idea with a unit which I could call a reversible NAND gate. It has three inputs and three outputs (Fig. 7). Two of the outputs, A′ and B′, are the same as the two inputs A and B, but the third output works this way: C′ is the same as C unless A and B are both 1, in which case it changes whatever C is. For instance, if C is 1 it is changed to 0; if C is 0 it is changed to 1; but these changes happen only if both A and B are 1. If you put two of these gates in succession, you see that A and B will go through, and if C is not changed by either gate it stays the same; if C is changed, it is changed twice, so that it stays the same. So this gate can reverse itself and no information has been lost. It is possible to discover what went in if you know what came out.

  FIGURE 6

  FIGURE 7
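  As a concrete illustration of the two gates just described, here is a minimal sketch in Python. The rendering is mine; the reversible NAND of Fig. 7 is what is known today as the Toffoli gate.

```python
# A minimal sketch of the two gates just described. The reversible NAND
# of Fig. 7 is what is now called the Toffoli gate.

def nand(a: int, b: int) -> int:
    """Ordinary NAND: output is 0 only when both inputs are 1."""
    return 0 if (a, b) == (1, 1) else 1

def reversible_nand(a: int, b: int, c: int):
    """Three wires in, three wires out: A and B pass through unchanged,
    and C is flipped exactly when A and B are both 1."""
    return a, b, c ^ (a & b)

# NAND is irreversible: three different inputs give the same output 1.
assert nand(0, 1) == nand(1, 0) == nand(0, 0) == 1

# The reversible gate undoes itself: applying it twice restores the input,
# which is the "no information is lost" property described in the text.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert reversible_nand(*reversible_nand(a, b, c)) == (a, b, c)
```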

  A device made entirely with such gates will make calculations if everything moves forward. But if things go back and forth for a while and then eventually go forward enough, it still operates correctly. If the things flip back and then go forward later, it is still all right. It’s very much the same as a particle in a gas which is bombarded by the atoms around it. Such a particle usually goes nowhere, but with just a little pull, a little prejudice that makes the chance of moving one way a little higher than the other, the thing will slowly drift forward and travel from one end to the other, in spite of the Brownian motion that it has made. So our computer will compute provided we apply a drift force to pull the thing across the calculation. Although it is not doing the calculation in a smooth way, nevertheless, calculating like this, forward and backward, it eventually finishes the job. As with the particle in the gas, if we pull it very slightly, we lose very little energy, but it takes a long time to get from one side to the other. If we are in a hurry, and we pull hard, then we lose a lot of energy. It is the same with this computer. If we are patient and go slowly, we can make the computer operate with practically no energy loss, even less than kT per step, any amount as small as you like if you have enough time. But if you are in a hurry, you must dissipate energy, and again it’s true that the energy lost to pull the calculation forward to complete it, multiplied by the time you are allowed to make the calculation, is a constant.
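  The drifting-particle picture lends itself to a toy simulation. A sketch; the bias values and goal distance are illustrative choices of mine, and nothing here comes from the lecture except the idea that a small forward bias wins out over the shaking:

```python
import random

# Toy version of the drifting particle: a walk with even a slight
# forward bias eventually crosses a long distance.

def steps_to_reach(p_forward: float, goal: int = 1000) -> int:
    """Walk from 0 in unit steps; return how many steps it takes to
    first reach `goal` (forward with probability p_forward)."""
    position = steps = 0
    while position < goal:
        position += 1 if random.random() < p_forward else -1
        steps += 1
    return steps

random.seed(0)
for bias in (0.55, 0.52, 0.51):
    print(f"p_forward = {bias}: reached goal in ~{steps_to_reach(bias)} steps")
# The gentler the pull, the longer the trip, but each step dissipates
# less energy: the trade-off described in the text.
```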

  With these possibilities in mind, let’s see how small we can make a computer. How big must a number be? We all know we can write numbers in base 2 as strings of “bits,” each a one or a zero. And each atom could be a one or a zero, so a little string of atoms is enough to hold a number, one atom for each bit. (Actually, since an atom can have more than just two states, we could use even fewer atoms, but one per bit is little enough!) So, for intellectual entertainment, we consider whether we could make a computer in which the writing of bits is of atomic size, in which a bit is, for example, whether the spin in the atom is up for 1 or down for 0. And then our “transistor,” which changes the bits in different places, would correspond to some interaction between atoms which will change their states. The simplest example would be if a kind of 3-atom interaction were to be the fundamental element or gate in such a computer. But again, the device won’t work right if we design it with the laws appropriate for large objects. We must use the new laws of physics, quantum mechanical laws, the laws that are appropriate to atomic motion (Fig. 8).

  FIGURE 8
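  To see how compact one-bit-per-atom storage is, a two-line check (a number N takes about log2(N) bits; Python’s bit_length counts the base-2 digits):

```python
# How many atoms does a number need at one bit per atom? About log2(N)
# of them; int.bit_length() counts the base-2 digits.
for n in (6, 1_000_000, 10**100):
    print(f"{n} needs {n.bit_length()} bits, i.e. {n.bit_length()} atoms")
```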

  We therefore have to ask whether the principles of quantum mechanics permit an arrangement of atoms, as small in number as a few times the number of gates, that could operate as a computer. This has been studied in principle, and such an arrangement has been found. Since the laws of quantum mechanics are reversible, we must use the invention by Bennett and Fredkin of reversible logic gates. When this quantum mechanical situation is studied, it is found that quantum mechanics adds no further limitations to anything that Mr. Bennett has said from thermodynamic considerations. Of course there is a limitation, the practical limitation anyway, that the bits must be of the size of an atom and a transistor 3 or 4 atoms. The quantum mechanical gate I used has 3 atoms. (I would not try to write my bits onto nuclei; I’ll wait till the technological development reaches atoms before I need to go any further!) That leaves us just with: (a) the limitation of size to the size of atoms; (b) the energy requirements depending on the time, as worked out by Bennett; and (c) the feature that I did not mention concerning the speed of light: we can’t send the signals any faster than the speed of light. These are the only physical limitations on computers that I know of.

  FIGURE 9

  If we somehow manage to make an atomic-size computer, it would mean (Fig. 9) that the dimension, the linear dimension, is a thousand to ten thousand times smaller than those very tiny chips that we have now. It means that the volume of the computer is a hundred-billionth, or 10^-11, of the present volume, because the volume of the “transistor” is smaller by a factor of 10^-11 than the transistors we make today. The energy requirement for a single switch is also about eleven orders of magnitude smaller than the energy required to switch the transistor today, and the time to make the transitions will be at least ten thousand times faster per step of calculation. So there is plenty of room for improvement in the computer, and I leave this to you, practical people who work on computers, as an aim to get to. I underestimated how long it would take for Mr. Ezawa to translate what I said, and I have no more to say that I have prepared for today. Thank you! I will answer questions if you’d like.
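  The volume figure follows directly from cubing the linear shrink factor; a one-liner to check the bracket, using only the factors quoted above:

```python
# The volume figure above is just the cube of the linear shrink factor.
for linear in (1e3, 1e4):
    print(f"linear factor {linear:.0e} -> volume factor {1 / linear**3:.0e}")
# 1e-9 to 1e-12, which brackets the quoted 10^-11.
```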

  Questions and Answers

  Q: You mentioned that one bit of information can be stored in one atom, and I wonder if you can store the same amount of information in one quark.

  A: Yes. But we don’t have control of the quarks and that becomes a really impractical way to deal with things. You might think that what I am talking about is impractical, but I don’t believe so. When I am talking about atoms, I believe that someday we will be able to handle and control them individually. There would be so much energy involved in the quark interactions that they would be very dangerous to handle because of the radioactivity and so on. But the atomic energies that I am talking about are very familiar to us in chemical energies, electrical energies, and those are numbers that are within the realm of reality, I believe, however absurd it may seem at the moment.

  Q: You said that the smaller the computing element is the better. But, I think equipment has to be larger, because. . .

  A: You mean that your finger is too big to push the buttons? Is that what you mean?

  Q: Yes, it is.

  A: Of course, you are right. I am talking about internal computers, perhaps for robots or other devices. The input and output are something that I didn’t discuss, whether the input comes from looking at pictures, hearing voices, or buttons being pushed. I am discussing how the computation is done in principle, and not what form the output should take. It is certainly true that the input and the output cannot, in most cases, be effectively reduced below human dimensions. It is already too difficult to push the buttons on some of the computers with our big fingers. But elaborate computing problems that take hours and hours could be done very rapidly on the very small machines with low energy consumption. That’s the kind of machine I was thinking of. Not the simple applications of adding two numbers but elaborate calculations.

  Q: I would like to know your method to transform the information from one atomic scale element to another atomic scale element. If you will use a quantum mechanical or natural interaction between the two elements, then such a device will become very close to Nature itself. For example, if we make a computer simulation, a Monte Carlo simulation of a magnet to study critical phenomena, then your atomic scale computer will be very close to the magnet itself. What are your thoughts about that?

  A: Yes. All things that we make are Nature. We arrange it in a way to suit our purpose, to make a calculation for a purpose. In a magnet there is some kind of relation, if you wish; there are some kinds of computations going on, just like there are in the solar system, in a way of thinking. But that might not be the calculation we want to make at the moment. What we need to make is a device for which we can change the programs and let it compute the problem that we want to solve, not just its own magnet problem that it likes to solve for itself. I can’t use the solar system for a computer unless it just happens that the problem that someone gave me was to find the motion of the planets, in which case all I have to do is to watch. There was an amusing article written as a joke. Far in the future, the “article” appears discussing a new method of making aerodynamical calculations: Instead of using the elaborate computers of the day, the author invents a simple device to blow air past the wing. (He reinvents the wind tunnel!)

  Q: I have recently read in a newspaper article that operations of the nerve system in a brain are much slower than present-day computers and the unit in the nerve system is much smaller. Do you think that the computers you have talked about today have something in common with the nerve system in the brain?

  A: There is an analogy between the brain and the computer in that there are apparently elements that can switch under the control of others. Nerve impulses control or excite other nerves, often in a way that depends upon whether more than one impulse comes in, something like an AND gate or its generalization. What is the amount of energy used in the brain cell for one of these transitions? I don’t know the number. The time it takes to make a switch in the brain is very much longer than it is in our computers even today, never mind the fancy business of some future atomic computer, but the brain’s interconnection system is much more elaborate. Each nerve is connected to thousands of other nerves, whereas we connect each transistor only to two or three others.

  Some people look at the activity of the brain in action and see that in many respects it surpasses the computer of today, and in many other respects the computer surpasses us. This inspires people to design machines that can do more. What often happens is that an engineer has an idea of how the brain works (in his opinion) and then designs a machine that behaves that way. This new machine may in fact work very well. But, I must warn you, that does not tell us anything about how the brain actually works, nor is it necessary to ever really know that in order to make a computer very capable. It is not necessary to understand the way birds flap their wings and how the feathers are designed in order to make a flying machine. It is not necessary to understand the lever system in the legs of a cheetah, an animal that runs fast, in order to make an automobile with wheels that goes very fast. It is therefore not necessary to imitate the behavior of Nature in detail in order to engineer a device which can in many respects surpass Nature’s abilities. It is an interesting subject and I like to talk about it.

  Your brain is very weak compared to a computer. I will give you a series of numbers, one, three, seven. . . Or rather, ichi, san, shichi, san, ni, go, ni, go, ichi, hachi, ichi, ni, ku, san, go. Now I want you to repeat them back to me. A computer can take tens of thousands of numbers and give them back to me in reverse, or sum them, or do lots of things that we cannot do. On the other hand, if I look at a face, in a glance I can tell you who it is if I know that person, or that I don’t know that person. We do not yet know how to make a computer system so that if we give it a pattern of a face it can tell us such information, even if it has seen many faces and you have tried to teach it.

  Another interesting example is chess-playing machines. It is quite a surprise that we can make machines that play chess better than almost everybody in the room. But they do it by trying many, many possibilities: if he moves here, then I could move here, and he could move there, and so forth. They look at each alternative and choose the best. Computers look at millions of alternatives, but a master chess player, a human, does it differently. He recognizes patterns. He looks at only thirty or forty positions before deciding what move to make. Therefore, although the rules are simpler in Go, machines that play Go are not very good, because in each position there are too many possible moves and too many things to check, and the machines cannot look deeply. So the problem of recognizing patterns, and of deciding what to do under these circumstances, is the thing that computer engineers (they like to call themselves computer scientists) still find very difficult. It is certainly one of the important things for future computers, perhaps more important than the things I spoke about. Make a machine to play Go effectively!
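  The brute-force search described here is, in modern terms, minimax over a game tree. A minimal sketch on a toy game of my own choosing (Nim: take 1 to 3 stones, taking the last stone wins), since a full chess searcher would run long; real chess programs search the same way but add pruning and heuristic evaluation of positions they cannot search to the end:

```python
from functools import lru_cache

# "Try every possibility, look at each alternative, choose the best,"
# on a toy game: Nim with 1-3 stones per move, last stone wins.

@lru_cache(maxsize=None)
def value(stones: int) -> int:
    """+1 if the player to move wins with best play, -1 otherwise."""
    if stones == 0:
        return -1   # no move available: the opponent took the last stone
    return max(-value(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones: int) -> int:
    """Examine each alternative and choose the best."""
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: -value(stones - t))

print(value(12))      # -1: twelve stones is a lost position to move from
print(best_move(13))  # 1: take one stone, leaving the opponent twelve
```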

 
