
'What Do You Care What Other People Think?'


by Richard P. Feynman


  The only way to have real success in science, the field I’m familiar with, is to describe the evidence very carefully without regard to the way you feel it should be. If you have a theory, you must try to explain what’s good and what’s bad about it equally. In science, you learn a kind of standard integrity and honesty.

  In other fields, such as business, it’s different. For example, almost every advertisement you see is obviously designed, in some way or another, to fool the customer: the print that they don’t want you to read is small; the statements are written in an obscure way. It is obvious to anybody that the product is not being presented in a scientific and balanced way. Therefore, in the selling business, there’s a lack of integrity.

  My father had the spirit and integrity of a scientist, but he was a salesman. I remember asking him the question “How can a man of integrity be a salesman?”

  He said to me, “Frankly, many salesmen in the business are not straightforward—they think it’s a better way to sell. But I’ve tried being straightforward, and I find it has its advantages. In fact, I wouldn’t do it any other way. If the customer thinks at all, he’ll realize he has had some bad experience with another salesman, but hasn’t had that kind of experience with you. So in the end, several customers will stay with you for a long time and appreciate it.”

  My father was not a big, successful, famous salesman; he was the sales manager for a medium-sized uniform company. He was successful, but not enormously so.

  When I see a congressman giving his opinion on something, I always wonder if it represents his real opinion or if it represents an opinion that he’s designed in order to be elected. It seems to be a central problem for politicians. So I often wonder: what is the relation of integrity to working in the government?

  Now, Dr. Keel started out by telling me that he had a degree in physics. I always assume that everybody in physics has integrity—perhaps I’m naive about that—so I must have asked him a question I often think about: “How can a man of integrity get along in Washington?”

  It’s very easy to read that question another way: “Since you’re getting along in Washington, you can’t be a man of integrity!”

  Another thing I understand better now has to do with where the idea came from that cold affects the O-rings. It was General Kutyna who called me up and said, “I was working on my carburetor, and I was thinking: what is the effect of cold on the O-rings?”

  Well, it turns out that one of NASA’s own astronauts told him there was information, somewhere in the works of NASA, that the O-rings had no resilience whatever at low temperatures—and NASA wasn’t saying anything about it.

  But General Kutyna had the career of that astronaut to worry about, so the real question the General was thinking about while he was working on his carburetor was, “How can I get this information out without jeopardizing my astronaut friend?” His solution was to get the professor excited about it, and his plan worked perfectly.

  *Richard’s younger sister, Joan, has a Ph.D. in physics, in spite of this preconception that only boys are destined to be scientists.

  *Note for foreign readers: the quota system was a discriminatory practice of limiting the number of places in a university available to students of Jewish background.

  *Feynman was suffering from abdominal cancer. He had surgery in 1978 and 1981. After he returned from Japan, he had more surgery, in October 1986 and October 1987.

  *Hideki Yukawa. Eminent Japanese physicist; Nobel Prize, 1949.

  *Four years later Richard and Gweneth met the king of Sweden—at the Nobel Prize ceremony.

  †The Feynmans’ dog.

  *Gweneth was expecting Carl at the time.

  †Kiwi.

  ‡Carl. This letter was written in 1963.

  *About 200 square feet.

  * Daughter Michelle was about eleven when this letter was written, in 1980 or 1981.

  *The “New Zealand lectures,” delivered in 1979, are written up in QED: The Strange Theory of Light and Matter (Princeton University Press, 1985).

  †These letters were contributed by Freeman Dyson. They are the first and last letters he wrote that mention Richard Feynman. Other letters are referred to in Dyson’s book Disturbing the Universe.

  *A family friend.

  *As it turned out, Feynman was not to be disappointed: Carl works at the Thinking Machines Company, and daughter Michelle is studying to become a commercial photographer.

  †This letter was contributed by Henry Bethe.

  *The National Aeronautics and Space Administration.

  *NASA’s Jet Propulsion Laboratory, located in Pasadena; it is administered by Caltech.

  *Note for foreign readers: a flight that leaves the West Coast around 11 P.M. and arrives on the East Coast around 7 A.M., five hours and three time zones later.

  *Note for foreign readers: Sally Ride was the first American woman in space.

  *Later in our investigation we discovered that it was this leak check which was a likely cause of the dangerous bubbles in the zinc chromate putty that I had heard about at JPL.

  *The tang is the male part of the joint; the clevis is the female part (see Figure 13).

  *The thing Feynman was going to break up was the baloney (the “bull——”) about how good everything was at NASA.

  *The Office of Management and Budget.

  *The reference is to Feynman’s method of slicing string beans, recounted in Surely You’re Joking, Mr. Feynman!

  *Note for foreign readers: the Warren Report was issued in 1964 by the Warren Commission, headed by retired Supreme Court Chief Justice Earl Warren, which investigated the assassination of President John F. Kennedy.

  *Feynman’s way of saying, “whatever it was.”

  *Later, Mr. Lovingood sent me that report. It said things like “The probability of mission success is necessarily very close to 1.0”—does that mean it is close to 1.0, or it ought to be close to 1.0?—and “Historically, this high degree of mission success has given rise to a difference in philosophy between unmanned and manned space flight programs; i.e., numerical probability versus engineering judgment.” As far as I can tell, “engineering judgment” means they’re just going to make up numbers! The probability of an engine-blade failure was given as a universal constant, as if all the blades were exactly the same, under the same conditions. The whole paper was quantifying everything. Just about every nut and bolt was in there: “The chance that a HPHTP pipe will burst is 10⁻⁷.” You can’t estimate things like that; a probability of 1 in 10,000,000 is almost impossible to estimate. It was clear that the numbers for each part of the engine were chosen so that when you add everything together you get 1 in 100,000.

  *I had heard about this from Bill Graham. He said that when he was first on the job as head of NASA, he was looking through some reports and noticed a little bullet: “*4,000 cycle vibration is within our data base.” He thought that was a funny-looking phrase, so he began asking questions. When he got all the way through, he discovered it was a rather serious matter: some of the engines would vibrate so much that they couldn’t be used. He used it as an example of how difficult it is to get information unless you go down and check on it yourself.

  *Note for foreign readers: Federal Aviation Administration.

  *This refers to “Safecracker Meets Safecracker,” another story told in Surely You’re Joking, Mr. Feynman!

  Appendix F: Personal Observations on the Reliability of the Shuttle

  Introduction

  It appears that there are enormous differences of opinion as to the probability of a failure with loss of vehicle and of human life.* The estimates range from roughly 1 in 100 to 1 in 100,000. The higher figures come from working engineers, and the very low figures come from management. What are the causes and consequences of this lack of agreement? Since 1 part in 100,000 would imply that one could launch a shuttle each day for 300 years expecting to lose only one, we could properly ask, “What is the cause of management’s fantastic faith in the machinery?”
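The arithmetic behind the “300 years” remark can be checked directly. A minimal sketch (Python), using only the figures quoted in the text:

```python
# At the claimed failure rate of 1 in 100,000, launching one shuttle
# per day for 300 years should lose about one vehicle.
launches = 365 * 300              # one launch per day for 300 years
p_fail = 1 / 100_000              # claimed per-flight failure probability
expected_losses = launches * p_fail
print(launches, expected_losses)  # 109500 flights, about 1.1 expected losses
```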

  We have also found that certification criteria used in flight readiness reviews often develop a gradually decreasing strictness. The argument that the same risk was flown before without failure is often accepted as an argument for the safety of accepting it again. Because of this, obvious weaknesses are accepted again and again—sometimes without a sufficiently serious attempt to remedy them, sometimes without a flight delay because of their continued presence.

  There are several sources of information: there are published criteria for certification, including a history of modifications in the form of waivers and deviations; in addition, the records of the flight readiness reviews for each flight document the arguments used to accept the risks of the flight. Information was obtained from direct testimony and reports of the range safety officer, Louis J. Ullian, with respect to the history of success of solid fuel rockets. There was a further study by him (as chairman of the Launch Abort Safety Panel, LASP) in an attempt to determine the risks involved in possible accidents leading to radioactive contamination from attempting to fly a plutonium power supply (called a radioactive thermal generator, or RTG) on future planetary missions. The NASA study of the same question is also available. For the history of the space shuttle main engines, interviews with management and engineers at Marshall, and informal interviews with engineers at Rocketdyne, were made. An independent (Caltech) mechanical engineer who consulted for NASA about engines was also interviewed informally. A visit to Johnson was made to gather information on the reliability of the avionics (computers, sensors, and effectors). Finally, there is the report “A Review of Certification Practices Potentially Applicable to Man-rated Reusable Rocket Engines,” prepared at the Jet Propulsion Laboratory by N. Moore et al. in February 1986 for NASA Headquarters, Office of Space Flight. It deals with the methods used by the FAA and the military to certify their gas turbine and rocket engines. These authors were also interviewed informally.

  Solid Rocket Boosters (SRB)

  An estimate of the reliability of solid-fuel rocket boosters (SRBs) was made by the range safety officer by studying the experience of all previous rocket flights. Out of a total of nearly 2900 flights, 121 failed (1 in 25). This includes, however, what may be called “early errors”—rockets flown for the first few times in which design errors are discovered and fixed. A more reasonable figure for the mature rockets might be 1 in 50. With special care in selecting parts and in inspection, a figure below 1 in 100 might be achieved, but 1 in 1000 is probably not attainable with today’s technology. (Since there are two rockets on the shuttle, these rocket failure rates must be doubled to get shuttle failure rates due to SRB failure.)
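The range safety officer’s figures can be reproduced in a few lines. A sketch in Python, using only the numbers given in the text:

```python
# 121 failures in nearly 2900 solid-fuel rocket flights, including "early errors".
failures, flights = 121, 2900
per_rocket = failures / flights   # about 1 in 24; the text rounds to 1 in 25
mature = 1 / 50                   # the text's estimate once early errors are excluded
per_shuttle = 2 * mature          # two SRBs fly on every shuttle mission
print(round(1 / per_rocket), 1 / per_shuttle)  # 24 and 25.0
```

The doubling in the last step is the point made parenthetically above: either booster failing loses the shuttle, so the per-shuttle rate from SRBs alone is twice the per-rocket rate.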

  NASA officials argue that the figure is much lower. They point out that “since the shuttle is a manned vehicle, the probability of mission success is necessarily very close to 1.0.” It is not very clear what this phrase means. Does it mean it is close to 1.0 or that it ought to be close to 1.0? They go on to explain, “Historically, this extremely high degree of mission success has given rise to a difference in philosophy between manned space flight programs and unmanned programs; i.e., numerical probability usage versus engineering judgment.” (These quotations are from “Space Shuttle Data for Planetary Mission RTG Safety Analysis,” pages 3-1 and 3-2, February 15, 1985, NASA, JSC.) It is true that if the probability of failure was as low as 1 in 100,000 it would take an inordinate number of tests to determine it: you would get nothing but a string of perfect flights with no precise figure—other than that the probability is likely less than the reciprocal of the number of such flights in the string so far. But if the real probability is not so small, flights would show troubles, near failures, and possibly actual failures with a reasonable number of trials, and standard statistical methods could give a reasonable estimate. In fact, previous NASA experience had shown, on occasion, just such difficulties, near accidents, and even accidents, all giving warning that the probability of flight failure was not so very small.
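The point that a string of perfect flights carries little information can be illustrated numerically. A sketch (Python); the 24 prior flights used below is roughly the shuttle’s record before Challenger, and the two rates are the ones discussed in the text:

```python
def p_all_succeed(p_fail, n_flights):
    # Probability of n_flights consecutive successes, failures independent.
    return (1 - p_fail) ** n_flights

# A clean run of 24 flights is quite likely even at a 1-in-25 failure
# rate, so observing one cannot distinguish 1 in 25 from 1 in 100,000.
print(p_all_succeed(1 / 25, 24))       # about 0.38
print(p_all_succeed(1 / 100_000, 24))  # about 0.9998
```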

  Another inconsistency in the argument not to determine reliability through historical experience (as the range safety officer did) is NASA’s appeal to history: “Historically, this high degree of mission success…” Finally, if we are to replace standard numerical probability usage with engineering judgment, why do we find such an enormous disparity between the management estimate and the judgment of the engineers? It would appear that, for whatever purpose—be it for internal or external consumption—the management of NASA exaggerates the reliability of its product to the point of fantasy.

  The history of the certification and flight readiness reviews will not be repeated here (see other parts of the commission report), but the phenomenon of accepting seals that had shown erosion and blowby in previous flights is very clear. The Challenger flight is an excellent example: there are several references to previous flights; the acceptance and success of these flights are taken as evidence of safety. But erosion and blowby are not what the design expected. They are warnings that something is wrong. The equipment is not operating as expected, and therefore there is a danger that it can operate with even wider deviations in this unexpected and not thoroughly understood way. The fact that this danger did not lead to a catastrophe before is no guarantee that it will not the next time, unless it is completely understood. When playing Russian roulette, the fact that the first shot got off safely is of little comfort for the next. The origin and consequences of the erosion and blowby were not understood. Erosion and blowby did not occur equally on all flights or in all joints: sometimes there was more, sometimes less. Why not sometime, when whatever conditions determined it were right, wouldn’t there be still more, leading to catastrophe?

  In spite of these variations from case to case, officials behaved as if they understood them, giving apparently logical arguments to each other—often citing the “success” of previous flights. For example, in determining if flight 51-L was safe to fly in the face of ring erosion in flight 51-C, it was noted that the erosion depth was only one-third of the radius. It had been noted in an experiment cutting the ring that cutting it as deep as one radius was necessary before the ring failed. Instead of being very concerned that variations of poorly understood conditions might reasonably create a deeper erosion this time, it was asserted there was “a safety factor of three.”

  This is a strange use of the engineer’s term “safety factor.” If a bridge is built to withstand a certain load without the beams permanently deforming, cracking, or breaking, it may be designed for the materials used to actually stand up under three times the load. This “safety factor” is to allow for uncertain excesses of load, or unknown extra loads, or weaknesses in the material that might have unexpected flaws, et cetera. But if the expected load comes on to the new bridge and a crack appears in a beam, this is a failure of the design. There was no safety factor at all, even though the bridge did not actually collapse because the crack only went one-third of the way through the beam. The O-rings of the solid rocket boosters were not designed to erode. Erosion was a clue that something was wrong. Erosion was not something from which safety could be inferred.

  There was no way, without full understanding, that one could have confidence that conditions the next time might not produce erosion three times more severe than the time before. Nevertheless, officials fooled themselves into thinking they had such understanding and confidence, in spite of the peculiar variations from case to case. A mathematical model was made to calculate erosion. This was a model based not on physical understanding but on empirical curve fitting. Specifically, it was supposed that a stream of hot gas impinged on the O-ring material, and the heat was determined at the point of stagnation (so far, with reasonable physical, thermodynamical laws). But to determine how much rubber eroded, it was assumed that the erosion varied as the .58 power of heat, the .58 being determined by a nearest fit. At any rate, adjusting some other numbers, it was determined that the model agreed with the erosion (to a depth of one-third the radius of the ring). There is nothing so wrong with this analysis as believing the answer! Uncertainties appear everywhere in the model. How strong the gas stream might be was unpredictable; it depended on holes formed in the putty. Blowby showed that the ring might fail, even though it was only partially eroded. The empirical formula was known to be uncertain, for the curve did not go directly through the very data points by which it was determined. There was a cloud of points, some twice above and some twice below the fitted curve, so erosions twice those predicted were reasonable from that cause alone. Similar uncertainties surrounded the other constants in the formula, et cetera, et cetera. When using a mathematical model, careful attention must be given to the uncertainties in the model.
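The kind of empirical fit described here—an exponent chosen by nearest fit, with data scattering a factor of two either side of the curve—can be sketched as follows. The data points are invented for illustration; the report’s actual measurements are not reproduced in the text:

```python
import math

# Hypothetical (heat, erosion) data: erosion = heat**0.58 times a
# factor-of-2 scatter, mimicking the cloud of points the text describes.
heat = [1.0, 2.0, 4.0, 8.0, 16.0]
scatter = [0.5, 2.0, 1.0, 2.0, 0.5]
erosion = [h ** 0.58 * s for h, s in zip(heat, scatter)]

# Least-squares slope of log(erosion) against log(heat) is the fitted exponent.
xs = [math.log(h) for h in heat]
ys = [math.log(e) for e in erosion]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(round(slope, 2))  # recovers the 0.58 exponent from the scattered points
```

The fit finds the exponent, but individual points still miss the fitted curve by a factor of two either way—exactly the residual uncertainty the paragraph above warns about.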

 
