The Pleasure of Finding Things Out


by Richard P. Feynman


  I may be quite wrong, maybe they do know all these things, but I don’t think I’m wrong. You see, I have the advantage of having found out how hard it is to get to really know something, how careful you have to be about checking the experiments, how easy it is to make mistakes and fool yourself. I know what it means to know something, and therefore I see how they get their information and I can’t believe that they know it, they haven’t done the work necessary, haven’t done the checks necessary, haven’t done the care necessary. I have a great suspicion that they don’t know, that this stuff is [wrong] and they’re intimidating people. I think so. I don’t know the world very well but that’s what I think.

  Doubt and Uncertainty

  If you expected science to give all the answers to the wonderful questions about what we are, where we’re going, what the meaning of the universe is and so on, then I think you could easily become disillusioned and then look for some mystic answer to these problems. How a scientist can take a mystic answer I don’t know because the whole spirit is to understand–well, never mind that. Anyhow, I don’t understand that, but anyhow if you think of it, the way I think of what we’re doing is we’re exploring, we’re trying to find out as much as we can about the world. People say to me, “Are you looking for the ultimate laws of physics?” No, I’m not, I’m just looking to find out more about the world and if it turns out there is a simple ultimate law which explains everything, so be it, that would be very nice to discover.

  If it turns out it’s like an onion with millions of layers and we’re just sick and tired of looking at the layers, then that’s the way it is, but whatever way it comes out its nature is there and she’s going to come out the way she is, and therefore when we go to investigate it we shouldn’t predecide what it is we’re trying to do except to try to find out more about it. If you say your problem is, why do you find out more about it, if you thought you were trying to find out more about it because you’re going to get an answer to some deep philosophical question, you may be wrong. It may be that you can’t get an answer to that particular question by finding out more about the character of nature, but I don’t look at it [like that]. My interest in science is to simply find out about the world, and the more I find out the better it is, like, to find out.

  There are very remarkable mysteries about the fact that we’re able to do so many more things than apparently animals can do, and other questions like that, but those are mysteries I want to investigate without knowing the answer to them, and so altogether I can’t believe these special stories that have been made up about our relationship to the universe at large because they seem to be too simple, too connected, too local, too provincial. The earth, He came to the earth, one of the aspects of God came to the earth, mind you, and look at what’s out there. It isn’t in proportion. Anyway, it’s no use arguing, I can’t argue it, I’m just trying to tell you why the scientific views that I have do have some effect on my belief. And also another thing has to do with the question of how you find out if something’s true, and if all the different religions have all different theories about the thing, then you begin to wonder. Once you start doubting, just like you’re supposed to doubt, you ask me if the science is true. You say no, we don’t know what’s true, we’re trying to find out and everything is possibly wrong.

  Start out understanding religion by saying everything is possibly wrong. Let us see. As soon as you do that, you start sliding down an edge which is hard to recover from and so on. With the scientific view, or my father’s view, that we should look to see what’s true and what may be or may not be true, once you start doubting, which I think to me is a very fundamental part of my soul, to doubt and to ask, and when you doubt and ask it gets a little harder to believe.

  You see, one thing is, I can live with doubt and uncertainty and not knowing. I think it’s much more interesting to live not knowing than to have answers which might be wrong. I have approximate answers and possible beliefs and different degrees of certainty about different things, but I’m not absolutely sure of anything and there are many things I don’t know anything about, such as whether it means anything to ask why we’re here, and what the question might mean. I might think about it a little bit and if I can’t figure it out, then I go on to something else, but I don’t have to know an answer, I don’t feel frightened by not knowing things, by being lost in a mysterious universe without having any purpose, which is the way it really is so far as I can tell. It doesn’t frighten me.

  _______

  *(1906– ) Winner of the 1967 Nobel Prize in Physics for contributions to the theory of nuclear reactions, especially for his discoveries concerning the energy production in stars. Ed.

  †In 1965, the Nobel Prize for Physics was shared by Richard Feynman, Julian Schwinger, and Sin–Itiro Tomonaga for their fundamental work in quantum electrodynamics, and its deep consequences for the physics of elementary particles. Ed.

  2

  COMPUTING MACHINES IN THE FUTURE

  Forty years to the day after the atomic bombing of Nagasaki, Manhattan Project veteran Feynman delivers a talk in Japan, but the topic is a peaceful one, one that still occupies our sharpest minds: the future of the computing machine, including the topic that made Feynman seem a Nostradamus of computer science–the ultimate lower limit to the size of a computer. This chapter may be challenging for some readers; however, it is such an important part of Feynman’s contribution to science that I hope they will take the time to read it, even if they have to skip over some of the more technical spots. It ends with a brief discussion of one of Feynman’s favorite pet ideas, which launched the current revolution in nanotechnology.

  Introduction

  It’s a great pleasure and an honor to be here as a speaker in memorial for a scientist that I have respected and admired as much as Professor Nishina. To come to Japan and talk about computers is like giving a sermon to Buddha. But I have been thinking about computers and this is the only subject I could think of when invited to talk.

  The first thing I would like to say is what I am not going to talk about. I want to talk about the future of computing machines. But the most important possible developments in the future are things that I will not speak about. For example, there is a great deal of work to try to develop smarter machines, machines which have a better relationship with humans so that input and output can be made with less effort than the complex programming that’s necessary today. This often goes under the name of artificial intelligence, but I don’t like that name. Perhaps the unintelligent machines can do even better than the intelligent ones.

  Another problem is the standardization of programming languages. There are too many languages today, and it would be a good idea to choose just one. (I hesitate to mention that in Japan, for what will happen will be that there will simply be more standard languages–you already have four ways of writing now, and attempts to standardize anything here result apparently in more standards and not fewer!)

  Another interesting future problem that is worth working on but I will not talk about is automatic debugging programs. Debugging means fixing errors in a program or in a machine, and it is surprisingly difficult to debug programs as they get more complicated.

  Another direction of improvement is to make physical machines three dimensional instead of all on a surface of a chip. That can be done in stages instead of all at once–you can have several layers and then add many more layers as time goes on. Another important device would be one that could automatically detect defective elements on a chip; then the chip would automatically rewire itself so as to avoid the defective elements. At the present time, when we try to make big chips there are often flaws or bad spots in the chips, and we throw the whole chip away. If we could make it so that we could use the part of the chip that was effective, it would be much more efficient. I mention these things to try to tell you that I am aware of what the real problems are for future machines. But what I want to talk about is simple, just some small technical, physically good things that can be done in principle according to the physical laws. In other words, I would like to discuss the machinery and not the way we use the machines.

  I will talk about some technical possibilities for making machines. There will be three topics. One is parallel processing machines, which is something of the very near future, almost present, that is being developed now. Further in the future is the question of the energy consumption of machines, which seems at the moment to be a limitation, but really isn’t. Finally I will talk about the size. It is always better to make the machines smaller, and the question is, how much smaller is it still possible, in principle, to make machines according to the laws of Nature? I will not discuss which and what of these things will actually appear in the future. That depends on economic problems and social problems and I am not going to try to guess at those.

  Parallel Computers

  The first topic concerns parallel computers. Almost all the present computers, conventional computers, work on a layout or an architecture invented by von Neumann,* in which there is a very large memory that stores all the information, and one central location that does simple calculations. We take a number from this place in the memory and a number from that place in the memory, send the two to the central arithmetical unit to add them, and then send the answer to some other place in the memory. There is, therefore, effectively one central processor which is working very, very fast and very hard, while the whole memory sits out there like a fast filing cabinet of cards which are very rarely used. It is obvious that if there were more processors working at the same time we ought to be able to do calculations faster. But the problem is that someone who might be using one processor may be using some information from the memory that another one needs, and it gets very confusing. For such reasons it has been said that it is very difficult to get many processors to work in parallel.
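
  To make the picture concrete, here is a minimal sketch in Python (the names and numbers are mine, chosen only for illustration, not anything from the talk) of the von Neumann arrangement just described: one large, passive memory and a single central unit through which every operation must pass.

    # One big "filing cabinet" of memory cells, almost all of them idle at any moment.
    memory = [0.0] * 1024

    def central_add(addr_a, addr_b, addr_out):
        """One cycle: fetch two numbers, add them in the single central unit, store the result."""
        a = memory[addr_a]        # take a number from this place in the memory
        b = memory[addr_b]        # and a number from that place
        memory[addr_out] = a + b  # the one central processor does the arithmetic, serially

    # Every calculation, however large, funnels through central_add one step at a
    # time -- the serial bottleneck that parallel machines try to remove.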

  Some steps in that direction have been taken in the larger conventional machines called “vector processors.” When sometimes you want to do exactly the same step on many different items, you can perhaps do that at the same time. The hope is that regular programs can be written in the ordinary way, and then an interpreter program will discover automatically when it is useful to use this vector possibility. That idea is used in the Cray and in “supercomputers” in Japan. Another plan is to take what is effectively a large number of relatively simple (but not very simple) computers, and connect them all together in some pattern. Then they can all work on a part of the problem. Each one is really an independent computer, and they will transfer information to each other as one or another needs it. This kind of a scheme is realized in the Caltech Cosmic Cube, for example, and represents only one of many possibilities. Many people are now making such machines. Another plan is to distribute very large numbers of very simple central processors all over the memory. Each one deals with just a small part of the memory and there is an elaborate system of interconnections between them. An example of such a machine is the Connection Machine made at MIT. It has 64,000 processors and a system of routing in which every 16 can talk to any other 16 and thus has 4,000 routing connection possibilities.

  It would appear that scientific problems such as the propagation of waves in some material might be very easily handled by parallel processing. This is because what happens in any given part of space at any moment can be worked out locally and only the pressures and the stresses from the neighboring volumes need to be known. These can be worked out at the same time for each volume and these boundary conditions communicated across the different volumes. That’s why this type of design works for such problems. It has turned out that a very large number of problems of all kinds can be dealt with in parallel. As long as the problem is big enough so that a lot of calculating has to be done, it turns out that a parallel computation can speed up time to solution enormously, and this principle applies not just to scientific problems.
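
  A hedged sketch of why such problems parallelize so naturally (the example and the names are mine, not Feynman's): each point's next value depends only on itself and its immediate neighbors, so all of the interior points can be updated simultaneously, and only the values on the edges of each block of the grid need to be passed between processors.

    def wave_step(u, u_prev, c2=0.25):
        """One time step of a 1-D wave equation on a grid; every update is purely local."""
        u_next = list(u)                    # end points held fixed, for simplicity
        for i in range(1, len(u) - 1):      # each iteration is independent of the others
            u_next[i] = (2.0 * u[i] - u_prev[i]
                         + c2 * (u[i - 1] - 2.0 * u[i] + u[i + 1]))
        return u_next

    # Split the grid among many processors and each can run this loop on its own
    # piece at the same time, exchanging only the boundary values afterwards.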

  What happened to the prejudice of two years ago, which was that the parallel programming is difficult? It turns out that what was difficult, and almost impossible, is to take an ordinary program and automatically figure out how to use the parallel computation effectively on that program. Instead, one must start all over again with the problem, appreciating that we have the possibility of parallel calculation, and rewrite the program completely with a new [understanding of] what is inside the machine. It is not possible to effectively use the old programs. They must be rewritten. That is a great disadvantage to most industrial applications and has met with considerable resistance. But the big programs usually belong to scientists or other, unofficial, intelligent programmers who love computer science and are willing to start all over again and rewrite the program if they can make it more efficient. So what’s going to happen is that the hard programs, vast big ones, will be the first to be re-programmed by experts in the new way, and then gradually everybody will have to come around, and more and more programs will be programmed that way, and programmers will just have to learn how to do it.

  Reducing the Energy Loss

  The second topic I want to talk about is energy loss in computers. The fact that they must be cooled is an apparent limitation for the largest computers–a good deal of effort is spent in cooling the machine. I would like to explain that this is simply a result of very poor engineering and is nothing fundamental at all. Inside the computer a bit of information is controlled by a wire which has a voltage of either one value or another value. It is called “one bit,” and we have to change the voltage of the wire from one value to the other and put charge on or take charge off. I make an analogy with water: We have to fill a vessel with water to get one level or empty it to get to the other level. This is just an analogy–if you like electricity better you can think more accurately electrically. What we do now is analogous, in the water case, to filling the vessel by pouring water in from a top level (Fig. 1), and lowering the level by opening the valve at the bottom and letting it all run out. In both cases there is a loss of energy because of the sudden drop in level of the water, through a height from the top level where it comes in, to the low bottom level, and also when you start pouring water in to fill it up again. In the cases of voltage and charge, the same thing occurs.

  FIGURE 1
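
  In electrical terms (the formula is a standard one; the talk gives only the water picture), the vessel is the capacitance C of the wire. Charging it to the voltage V straight from a fixed supply dissipates roughly as much energy in the wiring as ends up stored, and dumping the charge to ground throws the stored energy away as heat, so each switching of the bit costs on the order of

    E_{\text{lost per switch}} \approx \tfrac{1}{2} C V^{2},

  the electrical counterpart of the water falling suddenly from the top level to the bottom.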

  It’s like, as Mr. Bennett has explained, operating an automobile which has to start by turning on the engine and stop by putting on the brakes. By turning on the engine and then putting on the brakes, each time you lose power. Another way to arrange things for a car would be to connect the wheels to flywheels. Now when the car stops, the flywheel speeds up, thus saving the energy–it can then be reconnected to start the car again. The water analog of this would be to have a U-shaped tube with a valve in the center at the bottom, connecting the two arms of the U (Fig. 2). We start with it full on the right but empty on the left with the valve closed. If we now open the valve, the water will slip over to the other side, and we can close the valve again, just in time to catch the water in the left arm. Now when we want to go the other way, we open the valve again and the water slips back to the other side and we catch it again. There is some loss and the water doesn’t climb as high as it did before, but all we have to do is to put a little water in to correct the loss–a much smaller energy loss than the direct fill method. This trick uses the inertia of the water and the analog for electricity is inductance. However, it is very difficult with the silicon transistors that we use today to make up inductance on the chips. So this technique is not particularly practical with present technology.

  FIGURE 2

  Another way would be to fill the tank by a supply which stays only a little bit above the level of the water, lifting the water supply in time as we fill up the tank (Fig. 3), so that the dropping of water is always small during the entire effort. In the same way, we could use an outlet to lower the level in the tank, but just take water off near the top and lower the tube so that the heat loss would not appear at the position of the transistor, or would be small. The actual amount of loss will depend on how high the distance is between the supply and the surface as we fill it up. This method corresponds to changing the voltage supply with time. So if we could use a time varying voltage supply, we could use this method. Of course, there is energy loss in the voltage supply, but that is all located in one place and there it is simple to make one big inductance. This scheme is called “hot clocking,” because the voltage supply operates at the same time as the clock which times everything. In addition, we don’t need an extra clock signal to time the circuits as we do in conventional designs.

  FIGURE 3
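
  Roughly quantified (my notation and formula, not stated in the talk): if the bit has capacitance C and is charged through a resistance R, then ramping the supply gradually over a time T much longer than RC dissipates only about

    E_{\text{ramp}} \approx \frac{RC}{T}\, C V^{2} \ll \tfrac{1}{2} C V^{2} \qquad (T \gg RC),

  so the loss can be made as small as we please by switching more slowly, which is exactly the trade-off taken up next.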

  Both of these last two devices use less energy if they go slower. If I try to move the water supply level too fast, the water in the tube doesn’t keep up with it and there ends up being a big drop in water level. So to make the device work I must go slowly. Similarly, the U-tube scheme will not work unless that central valve can open and close faster than the time it takes for the water in the U-tube to slip back and forth. So my devices must be slower–I’ve saved an energy loss but I’ve made the devices slower. In fact the energy loss multiplied by the time it takes for the circuit to operate is constant. But nevertheless, this turns out to be very practical because the clock time is usually much larger than the circuit time for the transistors, and we can use that to decrease the energy. Also if we went, let us say, three times slower with our calculations, we could use one-third the energy over three times the time, which is nine times less power that has to be dissipated. Maybe this is worth it. Maybe by redesigning using parallel computations or other devices, we can take a little longer than we would at maximum circuit speed, in order to make a larger machine that is practical and in which we could still reduce the energy loss.
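
  As a check on that arithmetic (my notation): if the energy E dissipated per operation and the operation time t trade off so that E t stays constant, then running three times slower gives

    E' t' = E t, \quad t' = 3t \;\Rightarrow\; E' = \frac{E}{3}, \qquad P' = \frac{E'}{t'} = \frac{E/3}{3t} = \frac{P}{9},

  one-third the energy spread over three times the time, and hence one-ninth the power to be dissipated.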

 
