The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life


by Robert Trivers


  Regarding the specific event of 9/11 itself, although the United States already had a general history of inattention to safety, the George W. Bush administration even more dramatically dropped the ball in the months leading up to 9/11—first downgrading Richard Clarke, the internal authority on possible terrorist attacks, including specifically those from Osama bin Laden. The administration said it was interested in a more aggressive approach than merely “swatting at flies” (bin Laden here being, I think, the fly). Bush himself joked about the August 2001 memo warning that bin Laden was planning an attack within the United States. Indeed, he denigrated the CIA officer who had relentlessly pressed (amid code-red terrorist chatter) to give the president the briefing at his Texas home. “All right,” Bush said when the man finished. “You’ve covered your ass now,” as indeed he had, but Bush left his own exposed. So his administration had a particular interest in focusing only on the enemy, not on any kind of missed signals or failure to exercise due caution. Absence of self-criticism converts attention from defense to offense.

  THE CHALLENGER DISASTER

  On January 28, 1986, the Challenger space vehicle took off from Florida’s Kennedy Space Center and seventy-three seconds later exploded over the Atlantic Ocean, killing all seven astronauts aboard. The disaster was the subject of a brilliant analysis by the famous physicist Richard Feynman, who had been placed on the board that investigated and reported on the crash. He was known for his propensity to think everything through for himself and hence was relatively immune to conventional wisdom. It took him little more than a week (with the help of an Air Force general) to locate the defective part (the O-ring, a simple rubber seal in a joint of the booster rocket), and he spent the rest of his time trying to figure out how an organization as large, well funded, and (apparently) sophisticated as NASA could produce such a shoddy product.

  Feynman concluded that the key was NASA’s deceptive posture toward the United States as a whole. This had bred self-deception within the organization. When NASA was given the assignment and the funds to travel to the moon in the 1960s, the society, for better or worse, gave full support to the objective: beat the Russians to the moon. As a result, NASA could design the space vehicle in a rational way, from the bottom up—with multiple alternatives tried at each step—giving maximum flexibility, should problems arise, as the spacecraft was developed. Once the United States reached the moon, NASA was a $5 billion bureaucracy in need of employment. Its subsequent history, Feynman argued, was dictated by the need to create employment, and this generated an artificial system for justifying space travel—a system that inevitably compromised safety. Put more generally, when an organization practices deception toward the larger society, this may induce self-deception within the organization, just as deception between individuals induces individual self-deception.

  The space program, Feynman argued, was dominated by a need to generate funds, and critical design features, such as manned flight versus unmanned flight, were chosen precisely because they were costly. The very concept of a reusable vehicle—the so-called shuttle—was designed to appear inexpensive but was in fact just the opposite (more expensive, it turned out, than using brand-new capsules each time). In addition, manned flight had glamour appeal, which might generate enthusiasm for the expense. But since there was very little scientific work to do in space (that wasn’t better done by machines or on Earth), most was make-work, showing how plants grow absent gravity (gravity-free zones can be produced on Earth at a fraction of the cost) and so on. This was a little self-propelled balloon with unfortunate downstream effects. Since it was necessary to sell this project to Congress and the American people, the requisite dishonesty led inevitably to internal self-deception. Means and concepts were chosen for their ability to generate cash flow, and the apparatus was then designed top-down. This had the unfortunate effect that when a problem surfaced, such as the fragile O-rings, there was little parallel exploration and knowledge to solve the problem. Thus NASA chose to minimize the problem, and the unit assigned to deal with safety became an agent of rationalization and denial rather than of careful study of safety factors. Presumably it functioned to supply higher-ups with talking points in their sales pitches to others and to themselves.

  Some of the most extraordinary mental gyrations in service of institutional self-deception took place within the safety unit. Seven of twenty-three Challenger flights had shown O-ring damage. If you merely plot chance of damage as a function of temperature at time of takeoff, you get a significant negative relationship: lower temperature meant higher chance of O-ring damage. For example, all four flights below 65 degrees F showed some O-ring damage. To prevent themselves—or others—from seeing this, the safety unit performed the following mental operation. They said that sixteen flights showed no damage and were thus irrelevant and could be excluded from further analysis. This is extraordinary in itself—one never wishes to throw away data, especially when it is so hard to come by. Then, since some of the remaining damage had occurred during high-temperature takeoffs, they concluded that temperature at takeoff could be ruled out as a cause. This example is now taught in elementary statistics texts as an example of how not to do statistics. It is also taught in courses on optimal (or suboptimal) data presentation since, even while arguing against the flight, the engineers at Thiokol, the company that built the booster rockets and their O-rings, presented their evidence in such a way as to invite rebuttal. The relevance of the mistake itself could hardly be clearer, since the temperature during the Challenger takeoff (below freezing) was more than 20 degrees below the previous lowest takeoff temperature.
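
  The statistical blunder is easy to reproduce. The sketch below, in Python, uses hypothetical temperatures chosen only to match the counts cited above (seven of twenty-three flights damaged, including all four flights below 65 degrees); it is not the actual flight record. The point survives the substitution: keep all the flights and the cold-weather pattern is obvious; discard the undamaged flights, as the safety unit did, and it vanishes.

```python
# A minimal sketch (illustrative data only, NOT the actual flight record):
# 23 hypothetical launch temperatures (deg F) paired with a damage flag,
# constructed to match the counts in the text: 7 of 23 flights with
# O-ring damage, including every flight that launched below 65 F.
flights = [
    (53, 1), (57, 1), (58, 1), (63, 1),            # the four coldest flights, all damaged
    (66, 0), (67, 0), (67, 0), (68, 0), (69, 0),
    (70, 1), (70, 0), (70, 0), (72, 0), (73, 0),
    (75, 1), (75, 0), (76, 0), (76, 0), (78, 0),
    (79, 0), (80, 0), (81, 1), (82, 0),
]

def damage_rate(data, lo, hi):
    """Fraction of flights launched in [lo, hi) degrees F that showed damage."""
    flags = [d for t, d in data if lo <= t < hi]
    return sum(flags) / len(flags) if flags else float("nan")

# Correct analysis: keep ALL flights, damaged and undamaged alike.
print("below 65 F:", damage_rate(flights, 0, 65))     # 1.0   (4 of 4 damaged)
print("65 F and up:", damage_rate(flights, 65, 200))  # ~0.16 (3 of 19 damaged)

# The safety unit's move: discard the 16 undamaged flights first.
damaged_temps = sorted(t for t, d in flights if d == 1)
print("damaged flights ranged from", damaged_temps[0], "to", damaged_temps[-1], "F")
# Seen this way, damage occurs at both low and high temperatures, so
# temperature looks irrelevant -- the signal was thrown out with the data.
```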

  On the previous coldest flight (at a balmy 54 degrees), an O-ring had been eaten one-third of the way through. Had it been eaten all the way through, the flight would have blown up, as did the Challenger. But NASA cited this case of one-third damage as a virtue, claiming to have built in a “threefold safety factor.” This is a most unusual use of language. By law, you must build an elevator strong enough that the cable can support a full load and run up and down a number of times without any damage. Then you must make it eleven times stronger. This is called an elevenfold safety factor. NASA has the elevator hanging by a thread and calls it a virtue. They even used circular arguments with a remarkably small radius: since manned flight had to be much safer than unmanned flight, it perforce was. In short, in service of the larger institutional deceit and self-deception, the safety unit was thoroughly corrupted to serve propaganda ends, that is, to create the appearance of safety where none existed. This must have aided top management in their self-deception: less conscious of safety problems, less internal conflict while selling the story.
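
  For readers who want the arithmetic spelled out, here is a brief sketch contrasting a genuine safety factor with the one NASA claimed. The numbers are purely illustrative, not drawn from any building code or NASA document.

```python
# Illustrative sketch only; the figures are for exposition.

def safety_factor(capacity, expected_load):
    """A genuine safety factor: how many times the expected load the part can
    bear while still performing with no damage at all."""
    return capacity / expected_load

# Elevator-style design: cable rated for eleven times the heaviest load it will carry.
print(safety_factor(capacity=11.0, expected_load=1.0))   # 11.0

# The O-ring: the seal was designed to suffer no erosion at all, yet on the
# coldest prior flight one-third of it had already burned away. Calling the
# remaining two-thirds a "threefold safety factor" relabels partial failure as margin.
erosion = 1 / 3
print('claimed "safety factor":', 1 / erosion)           # 3.0
```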

  There is thus a close analogy between self-deception within an individual and self-deception within an organization—both serving to deceive others. In neither case is information completely destroyed (all twelve Thiokol engineers had voted against flight that morning, and one was vomiting in his bathroom in fear shortly before takeoff). The truth is merely relegated to portions of the person or the organization that are inaccessible to consciousness (we can think of the people running NASA as the conscious part of the organization). In both cases, the entity’s relationship to others determines its internal information structure. In a non-deceitful relationship, information can be stored logically and coherently. In a deceitful relationship, information will be stored in a biased manner the better to fool others—but with serious potential costs. However, note here that it is the astronauts who suffer the ultimate cost, while the upper echelons of NASA—indeed, the entire organization minus the dead—may enjoy a net benefit (in employment, for example) from this casual and self-deceived approach to safety. Feynman imagined the kinds of within-organization conversations that would bias information flow in the appropriate direction. You, as a working engineer, might take your safety concern to your boss and get one of two responses. He or she might say, “Tell me more” or “Have you tried such-and-such?” But if he or she replied, “Well, see what you can do about it” once or twice, you might very well decide, “To hell with it.” These are the kinds of interactions—individual on individual (or cell on cell)—that can produce within-unit self-deception. And have no fear, the pressures from overhead are backed up with power, deviation is punished, and employment is put at risk. When the head of the engineers told upper management that he and other engineers were voting against the flight, he was told to “take off your engineering hat and put on your management hat.” Without even producing a hat, this did the trick and he switched his vote.

  There was one striking success of the safety unit. When asked to guess the chance of a disaster occurring, they estimated one in seventy. They were then asked to provide a new estimate and they answered one in ninety. Upper management then reclassified this arbitrarily as one in two hundred, and after a couple of additional flights, as one in ten thousand, using each new flight to lower the overall chance of disaster into an acceptable range. As Feynman noted, this is like playing Russian roulette and feeling safer after each pull of the trigger fails to kill you. In any case, the number produced by this logic was utterly fanciful: you could fly one of these contraptions every day for thirty years and expect only one failure? The original estimate turned out to be almost exactly on target. By the time of the Columbia disaster, there had been 126 flights with two disasters for a rate of one in sixty-three. Note that if we tolerated this level of error in our commercial flights, three hundred planes would fall out of the sky every day across the United States alone. One wonders whether astronauts would have been so eager for the ride if they had actually understood their real odds. It is interesting that the safety unit’s reasoning should often have been so deficient, yet the overall estimate exactly on the mark. This suggests that much of the ad hoc “reasoning” was produced under pressure from the upper ranks after the unit had surmised correctly. There is an analogy here to individual self-deception, in which the initial, spontaneous evaluation (for example, of fairness) is unbiased, after which higher-level mental processes introduce the bias.
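
  The arithmetic behind these comparisons is worth making explicit. The back-of-envelope sketch below reproduces it; the figure of 20,000 daily US airline departures is an assumption introduced for illustration, not a number from the text.

```python
# Back-of-envelope check of the figures in the passage (a sketch, not NASA's analysis).

safety_unit_estimate = 1 / 70       # the unit's original guess
management_estimate  = 1 / 10_000   # the figure management eventually settled on

print("management's final figure is", round(safety_unit_estimate / management_estimate),
      "times more optimistic than the unit's original estimate")   # ~143x

# At one-in-ten-thousand, flying once a day, the expected wait for a failure is:
print(10_000 / 365, "years of daily flights")               # ~27 years ("thirty" in round numbers)

# The record cited in the text: two losses in 126 flights.
observed = 2 / 126
print("observed rate: about 1 in", round(1 / observed))     # 1 in 63

# Commercial-aviation comparison, ASSUMING roughly 20,000 US airline
# departures per day (an illustrative figure, not from the text):
departures_per_day = 20_000
print(round(departures_per_day * observed), "crashes per day")  # ~317
```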

  There is an additional irony to the Challenger disaster. This was an all-American crew: an African American, a Japanese American, and two women—one a schoolteacher who was to teach a class to fifth graders across the nation from space, a stunt of marginal educational value. Yet the stunt helped entrain the flight, since if the flight was postponed, the next possible date was in the summer, when children would no longer be in school to receive their lesson. Thus was NASA hoisted on its own petard. Or as has been noted, the space program shares with gothic cathedrals the fact that each is designed to defy gravity for no useful purpose except to aggrandize humans. Although many would say that the primary purpose of cathedrals was to glorify God, those who built and commissioned them were often self-aggrandizing. One wonders how many more people died building cathedrals than flying space machines.

  THE COLUMBIA DISASTER

  It is extraordinary that seventeen years later, the Challenger disaster would be repeated, with many elements unchanged, in the Columbia disaster. Substitute “foam” for “O-ring” and the story is largely the same. In both cases, NASA denied it had a problem, and in both cases the denial proved fatal. In both cases, the flight itself had little useful purpose and was flown chiefly for publicity: to generate funding and/or to meet congressionally mandated flight targets. As before, the crew was a multicultural dream: another African American, two more women (one of whom was Indian), and an Israeli who busied himself on the flight collecting dust over (where else?) the Middle East. Experiments designed by children in six countries on spiders, silkworms, and weightlessness were duly performed. In short, as before, there was no serious purpose to the flight; it was a publicity show.

  The Columbia spacecraft took off on January 16, 2003 (another relatively cold date), for a sixteen-day mission in space. Eighty-two seconds after launch, a 1.7-pound chunk of insulating foam broke off from the rocket, striking the leading edge of the left wing of the space capsule, and (as was later determined) apparently punching a hole in it about a foot in diameter. The insulating foam was meant to protect the rocket from cold during takeoff, and there was a long history of foam breaking off during flight and striking the capsule. Indeed, on average thirty small pieces struck on every flight. Only this time the piece of foam was one hundred times larger than any previously seen. On the Atlantis flight in December 1988, 707 small particles of foam hit the capsule, which was then inspected in orbit with a camera attached to a robotic arm. The capsule looked as though it had been blasted with a shotgun. It had lost a heat-protective tile but was saved by an aluminum plate underneath. As before, rather than seeing this degree of damage as alarming, the fact that the capsule survived reentry was taken as evidence that foam was not a safety problem. But NASA went further. Two flights before the Columbia disaster, a piece of foam had broken off from the bipod ramp and dented one of the rockets, but shuttle managers formally decided not to classify it as an “in-flight anomaly,” though all similar events from the bipod ramp had been so classified. The reason for this change was to avoid a delay in the next flight, and NASA was under special pressure from its new head to make sure flights were frequent. This is similar to the artificial pressure for the Challenger to fly to meet an external schedule.

  The day after takeoff, low-level engineers assigned to review film of the launch were alarmed at the size and speed of the foam that had struck the shuttle. They compiled the relevant footage and e-mailed it to various superiors, engineers, and managers in charge of the shuttle program itself. Anticipating that their grainy photos would need to be replaced by much more accurate and up-to-date footage, they took it upon themselves to contact the Department of Defense and ask that satellite or high-resolution ground cameras be used to photograph the shuttle in orbit. Within days the Air Force said it would be happy to oblige and made the first moves to satisfy this request. Then an extraordinary thing happened. Word reached a higher-level manager who normally would have cleared such a request with the Air Force. At once, she asked her superiors whether they even wanted the information. They said no. Armed with this, she told the Air Force that the imagery was no longer needed and that the only problem was underlings who had failed to go through proper channels! On such nonsense, life-and-death decisions may turn.

  This is vintage self-deception: having failed to deal with the problem over the long term, having failed to prepare for a contingency in which astronauts are alive in a disabled capsule unable to return to Earth, the NASA higher-ups then decide to do nothing at all except avert their eyes and hope for the best. With fast, well-thought-out action, there was just barely time to launch a flight that might reach the astronauts before their oxygen expired. It would have required a lot of luck, with few or no hitches during countdown, so it was unlikely. An alternative was for the astronauts to attempt crude patches on the damaged wing itself. But why face reality at this point? They had made no preparation for this contingency, and they would be making life-and-death decisions with all the world watching. Why not make them with no one watching, including themselves? Why not cross their fingers and go with the program? Denial got them where they were, so why not ride it all the way home?

  The pattern of instrument failure before disintegration and the wreckage itself made it abundantly clear that the foam strike filmed during takeoff must have brought down the Columbia, but people at NASA still resisted, denying that it was even possible for a foam strike to have done such damage and deriding those who thought otherwise as “foam-ologists.” For this reason, the investigating commission decided to put the matter to a direct test. They fired foam pieces of the correct weight at different angles to the left sides of mock-ups of the spacecraft. Even this NASA resisted, insisting that the test use only the small pieces of foam that NASA had modeled! The key shot was the one that mimicked most closely the actual strike, and it blew a hole in the capsule big enough to put your head through. That was the end of that: even NASA folded its tent. But note that denial (of the problem ahead of time) entrained denial (of the ongoing problem), which entrained denial (after the fact). As we have noted in other contexts, this is a characteristic feature of denial: it is self-reinforcing.

  The new safety office created in response to the Challenger explosion was also a fraud, as described by the head of the commission that later investigated the Columbia disaster, with no “people, money, engineering experience, [or] analysis.” Two years after the Columbia crash, the so-called broken safety culture (twenty years and counting) at NASA still had not been changed, at least according to a safety expert and former astronaut (James Wetherbee). Under pressure to stick to budget and flight schedules, managers continue to suppress safety concerns from engineers and others close to reality. Administrators ask what degree of risk is acceptable, when the question should be what degree of risk is necessary and how to eliminate whatever risk is not. A recent poll showed the usual split: 40 percent of managers in the safety office thought the safety culture was improving while only 8 percent of workers saw it that way. NASA’s latest contributions to safety are a round table in the conference room instead of a rectangular one, meetings allowed to last more than half an hour, and an anonymous suggestion box. These hardly seem to go to the heart of the problem.

  That the safety unit should have been such a weak force within the organization is part of a larger problem of organizational self-criticism. It has been argued that organizations often evaluate their behavior and beliefs poorly because the organizations turn against their evaluation units, attacking, destroying, or co-opting them. Promoting change can threaten jobs and status, and those who are threatened are often more powerful than the evaluators, leading to timid and ineffective self-criticism and inertia within the organization. As we have seen, such pressures have kept the safety evaluation units in NASA crippled for twenty years, despite disaster after disaster. This is also a reason that corporations often hire outsiders, at considerable expense, to come in and make the evaluation for them, analogous perhaps to individuals who at considerable expense consult psychotherapists and the like. Even grosser and more costly failures of self-criticism occur at the national level, and we will refer to some of these when we discuss war (see Chapter 11).

 
