The Undoing Project


by Michael Lewis


  10

  THE ISOLATION EFFECT

  It was seldom possible for Amos and Danny to recall where their ideas had come from. They both found it pointless to allocate credit, as their thoughts felt like some alchemical by-product of their interaction. Yet, on occasion, their origins were preserved. The notion that people making risky decisions were especially sensitive to change pretty clearly had at least started with Danny. But it became seriously valuable only because of what Amos said next. One day, toward the end of 1974, as they looked over the gambles they had put to their subjects, Amos asked, “What if we flipped the signs?” Till that point, the gambles had all involved choices between gains. Would you rather have $500 for sure or a 50-50 shot at $1,000? Now Amos asked, “What about losses?” As in:

  Which of the following do you prefer?

  Gift A: A lottery ticket that offers a 50 percent chance of losing $1,000

  Gift B: A certain loss of $500

  It was instantly obvious to them that if you stuck minus signs in front of all these hypothetical gambles and asked people to reconsider them, they behaved very differently than they had when faced with nothing but possible gains. “It was a eureka moment,” said Danny. “We immediately felt like fools for not thinking of that question earlier.” When you gave a person a choice between a gift of $500 and a 50-50 shot at winning $1,000, he picked the sure thing. Give that same person a choice between losing $500 for sure and a 50-50 risk of losing $1,000, and he took the bet. He became a risk seeker. The odds that people demanded to accept a certain loss over the chance of some greater loss crudely mirrored the odds they demanded to forgo a certain gain for the chance of a greater gain. For example, to get people to prefer a 50-50 chance of $1,000 over some certain gain, you had to lower the certain gain to around $370. To get them to prefer a certain loss to a 50-50 chance of losing $1,000, you had to lower the loss to around $370.

  Actually, they soon discovered, you had to reduce the amount of the certain loss even further if you wanted to get people to accept it. When choosing between sure things and gambles, people’s desire to avoid loss exceeded their desire to secure gain.

  The desire to avoid loss ran deep, and expressed itself most clearly when the gamble came with the possibility of both loss and gain. That is, when it was like most gambles in life. To get most people to flip a coin for a hundred bucks, you had to offer them far better than even odds. If they were going to lose $100 if the coin landed on heads, they would need to win $200 if it landed on tails. To get them to flip a coin for ten thousand bucks, you had to offer them even better odds than you offered them for flipping it for a hundred. “The greater sensitivity to negative rather than positive changes is not specific to monetary outcomes,” wrote Amos and Danny. “It reflects a general property of the human organism as a pleasure machine. For most people, the happiness involved in receiving a desirable object is smaller than the unhappiness involved in losing the same object.”

  It wasn’t hard to imagine why this might be—a heightened sensitivity to pain was helpful to survival. “Happy species endowed with infinite appreciation of pleasures and low sensitivity to pain would probably not survive the evolutionary battle,” they wrote.
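
  To make the asymmetry concrete, here is a minimal sketch of the kind of loss-averse value function the preceding paragraphs describe. The piecewise form, the function names, and the loss-aversion factor of 2 are illustrative assumptions chosen to match the chapter's "win $200 to risk $100" observation; they are not figures from the book.

```python
def value(change, lam=2.0):
    """Subjective value of a gain or loss relative to the reference point."""
    if change >= 0:
        return change          # gains count at face value
    return lam * change        # losses count roughly double (loss aversion)

def coin_flip_worth(win, lose):
    """Subjective worth of a 50-50 flip between winning `win` and losing `lose`."""
    return 0.5 * value(win) + 0.5 * value(-lose)

print(coin_flip_worth(100, 100))  # -50.0: an even flip for $100 feels like a bad deal
print(coin_flip_worth(200, 100))  #   0.0: roughly $200 against $100 is the break-even point
```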

  As they sorted through the implications of their new discovery, one thing was instantly clear: Regret had to go, at least as a theory. It might explain why people made seemingly irrational decisions to accept a sure thing over a gamble with a far greater expected value. It could not explain why people facing losses became risk seeking. Anyone who wanted to argue that regret explains why people prefer a certain $500 to an equal chance to get $0 and $1,000 would never be able to explain why, if you simply subtracted $1,000 from all the numbers and turned the sure thing into a $500 loss, people would prefer the gamble. Amazingly, Danny and Amos did not so much as pause to mourn the loss of a theory they’d spent more than a year working on. The speed with which they simply walked away from their ideas about regret—many of them obviously true and valuable—was incredible. One day they are creating the rules of regret as if those rules might explain much of how people made risky decisions; the next, they have moved on to explore a more promising theory, and don’t give regret a second thought.

  Instead they set out to determine precisely where and how people responded to the odds of various bets involving both losses and gains. Amos liked to call good ideas “raisins.” There were three raisins in the new theory. The first was the realization that people responded to changes rather than absolute levels. The second was the discovery that people approached risk very differently when it involved losses than when it involved gains. Exploring people’s responses to specific gambles, they found a third raisin: People did not respond to probability in a straightforward manner. Amos and Danny already knew, from their thinking about regret, that in gambles that offered a certain outcome, people would pay dearly for that certainty. Now they saw that people reacted differently to different degrees of uncertainty. When you gave them one bet with a 90 percent chance of working out and another with a 10 percent chance of working out, they did not behave as if the first was nine times as likely to work out as the second. They made some internal adjustment, and acted as if a 90 percent chance was actually slightly less than a 90 percent chance, and a 10 percent chance was slightly more than a 10 percent chance. They responded to probabilities not just with reason but with emotion.

  Whatever that emotion was, it became stronger as the odds became more remote. If you told them that there was a one-in-a-billion chance that they’d win or lose a bunch of money, they behaved as if the odds were not one in a billion but one in ten thousand. They feared a one-in-a-billion chance of loss more than they should and attached more hope to a one-in-a-billion chance of gain than they should. People’s emotional response to extremely long odds led them to reverse their usual taste for risk, and to become risk seeking when pursuing a long-shot gain and risk avoiding when faced with the extremely remote possibility of loss. (Which is why they bought both lottery tickets and insurance.) “If you think about the possibilities at all, you think of them too much,” said Danny. “When your daughter is late and you worry, it fills your mind even when you know there is very little to fear.” You’d pay more than you should to rid yourself of that worry.

  People treated all remote probabilities as if they were possibilities. To create a theory that would predict what people actually did when faced with uncertainty, you had to “weight” the probabilities, in the way that people did, with emotion. Once you did that, you could explain not only why people bought insurance and lottery tickets. You could even explain the Allais paradox.*
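
  The "weighting" of probabilities can likewise be sketched in a few lines. The functional form and the parameter gamma below are assumptions taken from the prospect theory literature published later; the chapter itself gives no formula.

```python
def weight(p, gamma=0.61):
    """Decision weight a person acts on, given a stated probability p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

for p in (0.90, 0.10, 1e-9):
    print(f"stated probability {p:g} -> acted-on weight {weight(p):.6f}")
# 0.9 is treated as noticeably less than 0.9, 0.1 as noticeably more than 0.1,
# and a one-in-a-billion chance is inflated by several orders of magnitude.
```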

  At some point, Danny and Amos became aware that they had a problem on their hands. Their theory explained all sorts of things that expected utility failed to explain. But it implied, as utility theory never had, that it was as easy to get people to take risks as it was to get them to avoid them. All you had to do was present them with a choice that involved a loss. In the more than two hundred years since Bernoulli started the discussion, intellectuals had regarded risk-seeking behavior as a curiosity. If risk seeking was woven into human nature, as Danny and Amos’s theory implied that it was, why hadn’t people noticed it before?

  The answer, Amos and Danny now thought, was that intellectuals who studied human decision making had been looking in the wrong places. Mostly they had been economists, who directed their attention to the way people made decisions about money. “It is an ecological fact,” wrote Amos and Danny in a draft, “that most decisions in that context (except insurance) involve mainly favorable prospects.” The gambles that economists studied were, like most savings and investment decisions, choices between gains. In the domain of gains, people were indeed risk averse. They took the sure thing over the gamble. Danny and Amos thought that if the theorists had spent less time with money and more time with politics and war, or even marriage, they might have come to different conclusions about human nature. In politics and war, as in fraught human relationships, the choice faced by the decision maker was often between two unpleasant options. “A very different view of man as a decision maker might well have emerged if the outcomes of decisions in the private-personal, political or strategic domains had been as easily measurable as monetary gains and losses,” they wrote.

  * * *

  Danny and Amos spent the first half of 1975 getting their theory into shape so that a rough draft might be shown to other people. They started with the working title “Value Theory” but then changed it to “Risk-Value Theory.” For a pair of psychologists who were attacking a theory erected and defended mainly by economists, they wrote with astonishing aggression and confidence. The old theory, they wrote, didn’t really even consider how actual human beings grappled with risky decisions. All it did was “to explain risky choices solely in terms of attitudes to money or wealth.” Between the lines, the reader could detect their giddiness. “Amos and I are in the middle of our most productive period ever,” Danny wrote to Paul Slovic, in early 1975. “We’re developing what appears to us to be a rather complete and quite novel account of choice under uncertainty. The regret treatment has been superseded by a sort of reference level or adaptation level treatment.” Six months later, Danny wrote Slovic that they had a prototype of a new theory of decision making. “Amos and I barely managed to finish a paper on risky choice in time to present it to an illustrious group of economists who convene in Jerusalem this week,” he wrote. “It is still fairly rough.”

  The meeting in question, billed as a conference on public economics, convened in June 1975 at a kibbutz just outside Jerusalem. And so it was on a farm that a theory that would become among the most influential in the history of economics made its public debut. Decision theory was Amos’s field, and so Amos did all the talking. The audience contained at least three current and future Nobel Prize winners in economics: Peter Diamond, Daniel McFadden, and Kenneth Arrow. “When you listened to Amos, you knew you were talking to a first-rate mind,” said Arrow. “You raise a question. He’s thought of the question already, and he has an answer.”

  After he listened to Amos’s presentation, Arrow had one big question for Amos: What is a loss?

  The theory obviously turned on the stark difference in people’s feelings when they faced potential losses rather than potential gains. A loss, according to the theory, was when a person wound up worse off than his “reference point.” But what was this reference point? The easy answer was: wherever you started from. Your status quo. A loss was just when you ended up worse than your status quo. But how did you determine any person’s status quo? “In the experiments it’s pretty clear what a loss is,” Arrow said later. “In the real world it’s not so clear.”

  Wall Street trading desks at the end of each year offer a flavor of the problem. If a Wall Street trader expects to be paid a bonus of one million dollars and he’s given only half a million, he feels himself to be, and behaves as if he is, in the domain of losses. His reference point is an expectation of what he would receive. That expectation isn’t a stable number; it can be changed in all sorts of ways. A trader who expects to be given a million-dollar bonus, and who further expects everyone else on his trading desk to be given million-dollar bonuses, will not maintain the same reference point if he learns that everyone else just received two million dollars. If he is then paid a million dollars, he is back in the domain of losses. Danny would later use the same point to explain the behavior of apes in experiments researchers had conducted on bonobos. “If both my neighbor in the next cage and I get a cucumber for doing a great job, that’s great. But if he gets a banana and I get a cucumber, I will throw the cucumber at the experimenter’s face.” The moment one ape got a banana, it became the ape next door’s reference point.

  The reference point was a state of mind. Even in straight gambles you could shift a person’s reference point and make a loss seem like a gain, and vice versa. In so doing, you could manipulate the choices people made, simply by the way they were described. They gave the economists a demonstration of the point:

  Problem A. In addition to whatever you own, you have been given $1,000. You are now required to choose between the following options:

  Option 1. A 50 percent chance to win $1,000

  Option 2. A gift of $500

  Most everyone picked option 2, the sure thing.

  Problem B. In addition to whatever you own, you have been given $2,000. You are now required to choose between the following options:

  Option 3. A 50 percent chance to lose $1,000

  Option 4. A sure loss of $500

  Most everyone picked option 3, the gamble.

  The two questions were effectively identical. In both cases, if you picked the gamble, you wound up with a 50-50 shot at being worth $2,000. In both cases, if you picked the sure thing, you wound up being worth $1,500. But when you framed the sure thing as a loss, people chose the gamble. When you framed it as a gain, people picked the sure thing. The reference point—the point that enabled you to distinguish between a gain and a loss—wasn’t some fixed number. It was a psychological state. “What constitutes a gain or a loss depends on the representation of the problem and on the context in which it arises,” the first draft of “Value Theory” rather loosely explained. “We propose that the present theory applies to the gains and losses as perceived by the subject.”
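
  A few lines of hypothetical bookkeeping confirm the equivalence: both problems offer the same final wealth, $1,500 for sure against a 50-50 shot at $1,000 or $2,000. The function and names below are illustrative, not from the book.

```python
def outcomes(endowment, sure_change, gamble_change):
    """Final wealth under the sure option and under each branch of the 50-50 gamble."""
    sure = endowment + sure_change
    gamble = sorted((endowment + gamble_change, endowment))
    return sure, gamble

# Problem A: given $1,000, a sure +$500 versus a 50 percent chance to win $1,000.
print(outcomes(1000, +500, +1000))   # (1500, [1000, 2000])

# Problem B: given $2,000, a sure -$500 versus a 50 percent chance to lose $1,000.
print(outcomes(2000, -500, -1000))   # (1500, [1000, 2000])

# Identical end states, yet most subjects took the sure thing in A and the gamble in B.
```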

  Danny and Amos were trying to show that people faced with a risky choice failed to put it in context. They evaluated it in isolation. In exploring what they now called the isolation effect, Amos and Danny had stumbled upon another idea—and its real-world implications were difficult to ignore. This one they called “framing.” Simply by changing the description of a situation, and making a gain seem like a loss, you could cause people to completely flip their attitude toward risk, and turn them from risk avoiding to risk seeking. “We invented framing without realizing we were inventing framing,” said Danny. “You take two things that should be identical—the way they differ should be irrelevant—and by showing it isn’t irrelevant, you show that expected utility theory is wrong.” Framing, to Danny, felt like their work on judgment. Here, look, yet another strange trick the mind played on itself.

  Framing was just another phenomenon: There was never going to be a theory of framing. But Amos and Danny would eventually spend all kinds of time and energy dreaming up examples of the phenomenon, to illustrate how it might distort real-world decisions. The most famous was the Asian Disease Problem.

  The Asian Disease Problem was actually two problems, which they gave, separately, to two different groups of subjects innocent of the power of framing. The first group got this problem:

  Problem 1. Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimate of the consequences of the programs is as follows:

  If Program A is adopted, 200 people will be saved.

  If Program B is adopted, there is a 1/3 probability that 600 people will be saved, and a 2/3 probability that no people will be saved.

  Which of the two programs would you favor?

  An overwhelming majority chose Program A, and saved 200 lives with certainty.

  The second group got the same setup but with a choice between two other programs:

  If Program C is adopted, 400 people will die.

  If Program D is adopted, there is a 1/3 probability that nobody will die and a 2/3 probability that 600 people will die.

  When the choice was framed this way, an overwhelming majority chose Program D. The two problems were identical, but, in the first case, when the choice was framed as a gain, the subjects elected to save 200 people for sure (which meant that 400 people would die for sure, though the subjects weren’t thinking of it that way). In the second case, with the choice framed as a loss, they did the reverse, and ran the risk that they’d kill everyone.

  People did not choose between things. They chose between descriptions of things. Economists, and anyone else who wanted to believe that human beings were rational, could rationalize, or try to rationalize, loss aversion. But how did you rationalize this? Economists assumed that you could simply measure what people wanted from what they chose. But what if what you want changes with the context in which the options are offered to you? “It was a funny point to make because the point within psychology would have been banal,” the psychologist Richard Nisbett later said. “Of course we are affected by how the decision is presented!”

  After the meeting between the American economists and the Israeli psychologists on the Jerusalem kibbutz, the economists returned to the United States and Amos sent a letter to Paul Slovic. “Everything considered we got a very favorable response,” he wrote. “Somehow, the economists felt that we are right and at the same time they wished we weren’t because the replacement of utility theory by the model we outlined would cause them no end of problems.”

  * * *

  There was at least one economist who didn’t feel that way, but he wasn’t, at least when he came upon Danny and Amos’s theory, anyone’s idea of a future Nobel Prize winner. His name was Richard Thaler. In 1975, Thaler was a thirty-year-old assistant professor in the School of Management at the University of Rochester with vague prospects. It was a wonder he was even there. He had two deeply pronounced traits that rendered him unsuited not just to economics but to academic life. The first was that he was easily bored, and highly imaginative in his attempts to escape boredom. As a child he routinely changed the rules of the games he was expected to play. The first hour and a half of Monopoly, when players march around the board randomly landing on properties and buying them, he found tedious. After playing a few times, he announced, “This is a stupid game.” He said that he would only play if all the properties were shuffled and dealt to the players at the start of the game. Same with Scrabble. Finding it boring when he got dealt five “E”s and no high-value consonants, he changed the rules so that the letters were organized into three buckets: vowels, common consonants, and rare, high-value consonants. Each player got the same number of each; after seven rounds, each player was given a high-value consonant. All the changes Thaler made to the games he played as a kid reduced the waiting-around time, and the role of luck, and increased the challenge and, usually, the players’ competitiveness.

 
