The Undoing Project

by Michael Lewis


  Exactly how some decision analyst would persuade any business, military, or political leader to allow him to edit his thinking was unclear. How would you even persuade some important decision maker to assign numbers to his “utilities”? Important people didn’t want their gut feelings pinned down, even by themselves. And that was the rub.

  Later, Danny recalled the moment he and Amos lost faith in decision analysis. The failure of Israeli intelligence to anticipate the Yom Kippur attack led to an upheaval in the Israeli government and a subsequent brief period of introspection. They’d won the war, but the outcome felt like a loss. The Egyptians, who had suffered even greater losses, were celebrating in the streets as if they had won, while everyone in Israel was trying to figure out what had gone wrong. Before the war, the Israeli intelligence unit had insisted, despite a lot of evidence to the contrary, that Egypt would never attack Israel so long as Israel maintained air superiority. Israel had maintained air superiority, and yet Egypt had attacked. After the war, with the view that perhaps it could do better, Israel’s Ministry of Foreign Affairs set up its own intelligence unit. The man in charge of it, Zvi Lanir, sought Danny’s help. In the end, Danny and Lanir conducted an elaborate exercise in decision analysis. Its basic idea was to introduce a new rigor in dealing with questions of national security. “We started with the idea that we should get rid of the usual intelligence report,” said Danny. “Intelligence reports are in the form of essays. And essays have the characteristic that they can be understood any way you damn well please.” In place of the essay, Danny wanted to give Israel’s leaders probabilities, in numerical form.

  In 1974, U.S. Secretary of State Henry Kissinger had served as the middleman in peace negotiations between Israel and Egypt and between Israel and Syria. As a prod to action, Kissinger had sent the Israeli government the CIA’s assessment that, if the attempt to make peace failed, very bad events were likely to follow. Danny and Lanir set out to give Israeli foreign minister Yigal Allon and the director-general of the ministry precise numerical estimates of the likelihood of some very specific bad things happening. They assembled a list of possible “critical events or concerns”: regime change in Jordan, U.S. recognition of the Palestine Liberation Organization, another full-scale war with Syria, and so on. They then surveyed experts and well-informed observers to establish the likelihood of each event. Among these people, they found a remarkable consensus: There wasn’t a lot of disagreement about the odds. When Danny asked the experts what the effect might be of the failure of Kissinger’s negotiations on the probability of war with Syria, for instance, their answers clustered around “raises the chance of war by 10 percent.”

  Danny and Lanir then presented their probabilities to Israel’s Foreign Ministry. (“The National Gamble,” they called their report.) The director-general looked at the numbers and said, “10 percent increase?—that is a small difference.”

  Danny was stunned: If a 10 percent increase in the chances of full-scale war with Syria wasn’t enough to interest the director-general in Kissinger’s peace process, how much would it take to convince him? That number represented the best estimate of the odds. Apparently the director-general didn’t want to rely on the best estimates. He preferred his own internal probability calculator: his gut. “That was the moment I gave up on decision analysis,” said Danny. “No one ever made a decision because of a number. They need a story.” As Danny and Lanir wrote, decades later, after the U.S. Central Intelligence Agency asked them to describe their experience in decision analysis, the Israeli Foreign Ministry was “indifferent to the specific probabilities.” What was the point of laying out the odds of a gamble, if the person taking it either didn’t believe the numbers or didn’t want to know them? The trouble, Danny suspected, was that “the understanding of numbers is so weak that they don’t communicate anything. Everyone feels that those probabilities are not real—that they are just something on somebody’s mind.”

  * * *

  In the history of Danny and Amos, there are periods when it is difficult to disentangle their enthusiasm for their ideas from their enthusiasm for each other. The moments before and after the Yom Kippur war appear, in hindsight, less like a natural progression from one idea to the next than two men in love scrambling to find an excuse to be together. They felt they were finished exploring the errors that arose from the rules of thumb people use to evaluate probabilities in any uncertain situation. They’d found decision analysis promising but ultimately futile. They went back and forth on writing a general interest book about the various ways the human mind deals with uncertainty; for some reason, they could never get beyond a sketchy outline and false starts of a few chapters. After the Yom Kippur war—and the ensuing collapse of the public’s faith in the judgment of Israeli government officials—they thought that what they really should do was reform the educational system so that future leaders were taught how to think. “We have attempted to teach people to be aware of the pitfalls and fallacies of their own reasoning,” they wrote, in a passage for the popular book that never came to be. “We have attempted to teach people at various levels in government, army etc. but achieved only limited success.”

  Adult minds were too self-deceptive. Children’s minds were a different matter. Danny created a course in judgment for elementary school children, Amos briefly taught a similar class to high school students, and they put together a book proposal. “We found these experiences highly encouraging,” they wrote. If they could teach Israeli kids how to think—how to detect their own seductive but misleading intuition and to correct for it—who knew where it might lead? Perhaps one day those children would grow up to see the wisdom of encouraging Henry Kissinger’s next efforts to make peace between Israel and Syria. But this, too, they never followed through on. They never went broad. It was as if the temptation to address the public interfered with their interest in each other’s minds.

  Instead, Amos invited Danny to explore the question that had kept Amos interested in psychology: How did people make decisions? “One day, Amos just said, ‘We’re finished with judgment. Let’s do decision making,’” recalled Danny.

  The distinction between judgment and decision making appeared as fuzzy as the distinction between judgment and prediction. But to Amos, as to other mathematical psychologists, they were distinct fields of inquiry. A person making a judgment was assigning odds. How likely is it that that guy will be a good NBA player? How risky is that triple-A-rated subprime mortgage–backed CDO? Is the shadow on the X-ray cancer? Not every judgment is followed by a decision, but every decision implies some judgment. The field of decision making explored what people did after they had formed some judgment—after they knew the odds, or thought they knew the odds, or perhaps had judged the odds unknowable. Do I pick that player? Do I buy that CDO? Surgery or chemotherapy? It sought to understand how people acted when faced with risky options.

  Students of decision making had more or less given up on real-world investigations and reduced the field to the study of hypothetical gambles, made by subjects in a lab, in which the odds were explicitly stated. Hypothetical gambles played the same role in the study of decision making that the fruit fly played in the study of genetics. They served as proxies for phenomena impossible to isolate in the real world. To introduce Danny to his field—Danny knew nothing about it—Amos gave him an undergraduate textbook on mathematical psychology that he had written with his teacher Clyde Coombs and another Coombs student, Robyn Dawes, the researcher who had confidently and incorrectly guessed “Computer scientist!” when Danny handed him the Tom W. sketch in Oregon. Then he directed Danny to a very long chapter called “Individual Decision Making.”

  The history of decision theory—the textbook explained to Danny—began in the early eighteenth century, with dice-rolling French noblemen asking court mathematicians to help them figure out how to gamble. The expected value of a gamble was the sum of its outcomes, each weighted by the probability of its occurring. If someone offers you a coin flip, and you win $100 if the coin lands on heads but lose $50 if it lands on tails, the expected value is $100 × 0.5 + (-$50) × 0.5, or $25. If you follow the rule of taking any bet with a positive expected value, you take this one. But anyone with eyes could see that people, when they made bets, didn’t always act as if they were seeking to maximize their expected value. Gamblers accepted bets with negative expected values; if they didn’t, casinos wouldn’t exist. And people bought insurance, paying premiums that exceeded their expected losses; if they didn’t, insurance companies would have no viable business. Any theory pretending to explain how a rational person should take risks must at least take into account the common human desire to buy insurance, and other cases in which people systematically fail to maximize expected value.
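  To make the arithmetic concrete, here is a minimal sketch in Python of the expected-value rule described above; the payoffs and probabilities are the coin flip from the example, and the function name is purely illustrative.

```python
def expected_value(gamble):
    """Sum of each payoff weighted by the probability of its occurring."""
    return sum(prob * payoff for payoff, prob in gamble)

# The coin flip from the text: win $100 on heads, lose $50 on tails.
coin_flip = [(100, 0.5), (-50, 0.5)]

print(expected_value(coin_flip))  # 25.0 -- positive, so the simple rule says take the bet
```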

  The major theory of decision making, Amos’s textbook explained, had been published in the 1730s by a Swiss mathematician named Daniel Bernoulli. Bernoulli sought to account, a bit better than simple calculations of expected value did, for how people actually behaved. “Let us suppose a pauper happens to acquire a lottery ticket by which he may with equal probability win either nothing or 20,000 ducats,” he wrote, back when a ducat was a ducat. “Will he have to evaluate the worth of the ticket as 10,000 ducats, or would he be acting foolishly if he sold it for 9,000 ducats?” To explain why a pauper would prefer 9,000 ducats to a 50-50 chance to win 20,000, Bernoulli resorted to sleight of hand. People didn’t maximize value, he said; they maximized “utility.”

  What was a person’s “utility”? (That odd, off-putting word here meant something like “the value a person assigns to money.”) Well, that depended on how much money the person had to begin with. But a pauper holding a lottery ticket with an expected value of 10,000 ducats would certainly experience greater utility from a sure 9,000 ducats in cash than from a 50-50 chance at 20,000.

  “People will choose whatever they most want” is not all that helpful as a theory to predict human behavior. What saved “expected utility theory,” as it came to be called, from being so general as to be meaningless were its assumptions about human nature. To his assumption that people making decisions sought to maximize utility, Bernoulli added an assumption that people were “risk averse.” Amos’s textbook defined risk aversion this way: “The more money one has, the less he values each additional increment, or, equivalently, that the utility of any additional dollar diminishes with an increase in capital.” You value the second thousand dollars you get your hands on a bit less than you do the first thousand, just as you value the third thousand a bit less than the second thousand. The marginal value of the dollars you give up to buy fire insurance on your house is less than the marginal value of the dollars you lose if your house burns down—which is why even though the insurance is, strictly speaking, a stupid bet, you buy it. You place less value on the $1,000 you stand to win flipping a coin than you do on the $1,000 already in your bank account that you stand to lose—and so you reject the bet. A pauper places so much value on the first 9,000 ducats he gets his hands on that the risk of not having them overwhelms the temptation to gamble, at favorable odds, for more.
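  A short sketch of how Bernoulli’s idea plays out numerically, using the logarithmic utility curve he himself proposed. The starting-wealth figures (100 ducats for the pauper, $2,000 for the coin-flipper) are assumptions added purely for illustration, since a logarithm needs a positive starting point; any other concave curve tells the same story.

```python
import math

def log_utility(wealth):
    """Bernoulli's diminishing-returns curve: each extra ducat adds a little less."""
    return math.log(wealth)

def expected_utility(start, gamble):
    """Probability-weighted utility of final wealth, not of the raw payoffs."""
    return sum(prob * log_utility(start + payoff) for payoff, prob in gamble)

START = 100  # assumed starting wealth of the pauper, for illustration only

# The lottery ticket: a 50-50 chance of 20,000 ducats or nothing.
ticket = [(20_000, 0.5), (0, 0.5)]

print(expected_utility(START, ticket))  # ~7.26
print(log_utility(START + 9_000))       # ~9.12 -- the sure 9,000 ducats wins

# The same curve explains rejecting a fair coin flip for $1,000: for someone with,
# say, $2,000 in the bank (another illustrative assumption), the gamble's expected
# utility falls below the utility of standing pat.
flip = [(1_000, 0.5), (-1_000, 0.5)]
print(expected_utility(2_000, flip) < log_utility(2_000))  # True for any concave curve
```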

  This was not to say that real people in the real world behaved as they did because they had the traits Bernoulli ascribed to them. Only that the theory seemed to describe some of what people did in the real world, with real money. It explained the desire to buy insurance. It distinctly did not explain the human desire to buy a lottery ticket, however. It effectively turned a blind eye to gambling. Odd this, as the search for a theory about how people made risky decisions had started as an attempt to make Frenchmen shrewder gamblers.

  Amos’s text skipped over the long, tortured history of utility theory after Bernoulli all the way to 1944. A Hungarian Jew named John von Neumann and an Austrian anti-Semite named Oskar Morgenstern, both of whom fled Europe for America, somehow came together that year to publish what might be called the rules of rationality. A rational person making a decision between risky propositions, for instance, shouldn’t violate the von Neumann and Morgenstern transitivity axiom: If he preferred A to B and B to C, then he should prefer A to C. Anyone who preferred A to B and B to C but then turned around and preferred C to A violated expected utility theory. Among the remaining rules, maybe the most critical—given what would come—was what von Neumann and Morgenstern called the “independence axiom.” This rule said that a choice between two gambles shouldn’t be changed by the introduction of some irrelevant alternative. For example: You walk into a deli to get a sandwich and the man behind the counter says he has only roast beef and turkey. You choose turkey. As he makes your sandwich he looks up and says, “Oh, yeah, I forgot I have ham.” And you say, “Oh, then I’ll take the roast beef.” Von Neumann and Morgenstern’s axiom said, in effect, that you can’t be considered rational if you switch from turkey to roast beef just because they found some ham in the back.

  And, really, who would switch? Like the other rules of rationality, the independence axiom seemed reasonable, and not obviously contradicted by the way human beings generally behaved.
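  As a toy illustration of the transitivity rule, the sketch below records a set of pairwise choices and flags the kind of cycle—A over B, B over C, and then C over A—that the axiom forbids; the gambles A, B, and C and the helper functions are hypothetical, not anything drawn from the textbook.

```python
from itertools import combinations, permutations

def prefers(choices, a, b):
    """True if the recorded pairwise choices say a was picked over b."""
    return (a, b) in choices

def transitivity_violations(options, choices):
    """Find trios where someone prefers X to Y and Y to Z, yet Z to X."""
    violations = []
    for trio in combinations(options, 3):
        for x, y, z in permutations(trio):
            if prefers(choices, x, y) and prefers(choices, y, z) and prefers(choices, z, x):
                violations.append((x, y, z))
                break  # one cycle is enough to flag this trio
    return violations

# A over B, B over C -- and then, turning around, C over A.
recorded = {("A", "B"), ("B", "C"), ("C", "A")}
print(transitivity_violations(["A", "B", "C"], recorded))  # [('A', 'B', 'C')] -- a cycle
```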

  Expected utility theory was just a theory. It didn’t pretend to be able to explain or predict everything people did when they faced some risky decision. Danny gleaned its importance not from reading Amos’s description of it in the undergraduate textbook but only from the way Amos spoke of it. “This was a sacred thing for Amos,” said Danny. Although the theory made no great claim to psychological truth, the textbook Amos had coauthored made it clear that it had been accepted as psychologically true. Pretty much everyone interested in such things, a group that included the entire economics profession, seemed to take it as a fair description of how ordinary people faced with risky alternatives actually went about making choices. That leap of faith had at least one obvious implication for the sort of advice economists gave to political leaders: It tilted everything in the direction of giving people the freedom to choose and leaving markets alone. After all, if people could be counted on to be basically rational, markets could, too.

  Amos had clearly wondered about that, even as a Michigan graduate student. Amos had always had an almost jungle instinct for the vulnerability of other people’s ideas. He of course knew that people made decisions that the theory would not have predicted. Amos himself had explored how people could be—as the theory assumed they were not—reliably “intransitive.” As a graduate student in Michigan, he had induced both Harvard undergraduates and convicted murderers in Michigan prisons, over and over again, to choose gamble A over gamble B, then choose gamble B over gamble C—and then turn around and choose C instead of A. That violated a rule of expected utility theory. And yet Amos had never followed his doubts very far. He saw that people sometimes made mistakes; he did not see anything systematically irrational in the way they made decisions. He hadn’t figured out how to bring deep insights about human nature into the mathematical study of human decision making.

  By the summer of 1973, Amos was searching for ways to undo the reigning theory of decision making, just as he and Danny had undone the idea that human judgment followed the precepts of statistical theory. On a trip to Europe with his friend Paul Slovic, he shared his latest thoughts about how to make room, in the world of decision theory, for a messier view of human nature. “Amos warns against pitting utility theory vs. an alternative model in a direct, head to head, empirical test,” Slovic relayed, in a letter to a colleague, in September 1973. “The problem is that utility theory is so general that it is hard to refute. Our strategy should be to take the offensive in building a case, not against utility theory, but for an alternative conception that brings man’s limitations in as a constraint.”

  Amos had at his disposal a connoisseur of man’s limitations. He now described Danny as “the world’s greatest living psychologist.” Not that he ever said anything so flattering to Danny directly. (“Manly reticence was the rule,” said Danny.) He never fully explained to Danny why he thought to invite him into decision theory—a technical and antiseptic field Danny cared little about and knew less of. But it is hard to believe that Amos was simply looking around for something else they might do together. It’s easier to believe that Amos suspected what might happen after he gave Danny his textbook on the subject. That moment has the feel of an old episode of The Three Stooges, when Larry plays “Pop Goes the Weasel” and triggers Curly into a frenzy of destruction.

  Danny read Amos’s textbook the way he might have read a recipe written in Martian. He decoded it. He had long ago realized that he wasn’t a natural applied mathematician, but he could follow the logic of the equations. He knew that he was meant to respect, even revere, them. Amos was a member of high standing in the society of mathematical psychologists. That society in turn looked down upon much of the rest of psychology. “It is a given that people who use mathematics have some glamour,” said Danny. “It was prestigious because it borrowed the aura of mathematics and because nobody else could understand what was going on there.” Danny couldn’t escape the growing prestige of math in the social sciences: His remove counted against him. But he didn’t really admire decision theory, or care about it. He cared why people behaved as they did. And to Danny’s way of thinking, the major theory of decision making did not begin to describe how people made decisions.

 
