The Confidence Game
In 1985, Hal Arkes and Catherine Blumer published a series of ten studies to illustrate sunk costs in action and thus shed light on what was driving the irrational-seeming behavior. What if they spelled it all out and made the red flags and errors clear? What did the sunk-cost fallacy mean in a practical scenario? It would be the equivalent of pointing the Bureau of Reclamation’s leader at every bit of evidence and calling out the potential for trouble, or laying out a list of reasons, complete with documentation, for Ann Freedman as to why Glafira Rosales might not be who she said she was, and then looking to see if their behavior would change.
The researchers began with a classic behavioral economics problem. People were told to imagine they had tickets for two ski trips, one to Michigan for $100 and one to Wisconsin for $50. The trip to Wisconsin is likely to be the more enjoyable one. Alas, it turns out that the tickets are for the same weekend and neither is refundable. Which do you keep? Over half the participants elected to stick with the more expensive trip—even though they knew they would enjoy the other one more. The result held even when no actual money had been spent and the tickets were a gift from a radio promotion.
Arkes and Blumer then enlisted the help of a local venue, the Ohio University Theater. Would the theater, they wondered, be willing to help them with an experiment: selling discounted tickets to some randomly selected season ticket purchasers? The theater was happy to oblige, billing it as a select promotion. That season, some season ticket holders paid the usual price of $15 a ticket; others got a $2 discount per ticket; still others, a $7 discount. The tickets were color-coded so that once they were collected at each performance, the researchers could count how many of each had been used.
Over the course of the 1982–1983 season, Arkes and Blumer tabulated the receipts. Did people who’d paid more also attend more performances? As it turns out, they did. For six months after the purchase, those who’d paid full price were reliably more likely to be seen in the theater than those who’d received a discount. On average, they attended 4.1 of the five performances, as compared to the 3.3 performances seen by the two discount groups.
What if the price were in the millions? It didn’t matter. People were told to imagine they were the president of an airline that had spent $10 million on the development of a plane undetectable by radar, only to find, with 10 percent of the project left to go, that a rival had released a superior model. They still overwhelmingly advised their company to finish the project rather than spend the remaining funds on something else. A full 85 percent recommended seeing it through to the end. What’s more, they not only insisted on continuing to invest, but thought that the likelihood of financial success despite the odds was still 41 percent—significantly higher than the odds given by a disinterested observer. The researchers tested another scenario in which participants were either advising a different company or the project hadn’t yet been started; to such third-party observers, the foolishness and the chance of failure were apparent. But to the people in the actual scenario, the chance of success seemed far more certain.
Arkes and Blumer observed this effect in scenario after scenario, going so far as to give the tests to economics students who had already studied the very effect the tests were getting at. Their results were indistinguishable from those of naïve participants who knew nothing of sunk costs. To the psychologists, the results were clear: cutting losses would mean admitting a mistake, and the psychological costs of doing that were simply too high. “The admission that one has wasted money would seem to be an aversive event,” they wrote. “The admission can be avoided by continuing to act as if the prior spending was sensible, and a good way to foster that belief would be to invest more.” Indeed, in an unrelated study, Northwestern psychologist Barry Staw found that even telling someone flat out that an investment decision was bad wasn’t enough to get them to reverse it. Business school students who felt responsible for a bad investment continued to put money into it—significantly more than they put into any other option.
The thing is, just as with most fallacies, the psychology of sunk costs—continuing to wait it out despite a stream of losses—isn’t always irrational. In the entrapment effect, people undergo steady, small losses as they await an eventual goal—like commuters who’ve already spent an hour waiting for the bus and remain reluctant to call a cab because the bus might still come. And indeed, at times, come it does. We are willing to take on increasingly greater risk for a reward that is potentially even greater. A dam in an area that has long needed it, leading to billions in economic gains. An art legacy that will cement your role as a discoverer of a key trove of important paintings, a part of the legacy of Abstract Expressionism. The only problem is, we fail to see the picture for what it is, and in so doing, both underestimate the risk and overestimate the chances of success. The longer we remain in the confidence game, and the more we have invested and even lost, the longer we will persist in insisting it will all work out: in the breakdown, we’ve lost, and it seems like we should rightly quit, yet here we are in the send, reupping our commitment so that the actual touch goes off without a hitch.
Not only do we become blinded to risks, but the past starts to look infinitely better in retrospect. Writing in the Harvard Law Review in 1897, future Supreme Court justice Oliver Wendell Holmes observed, “It is in the nature of a man’s mind. A thing which you enjoyed and used as your own for a long time, whether property or opinion, takes root in your being and cannot be torn away without your resenting the act and trying to defend yourself, however you came by it. The law can ask no better justification than the deepest instincts of man.” In psychology, that idea is called the endowment effect, first articulated by the economist Richard Thaler in 1980. By virtue of being ours, our actions, thoughts, possessions, and beliefs acquire a glow they didn’t have before we committed to them. Sunk costs make us loath to spot problems and reluctant to swerve from a committed path. And the endowment effect imbues the status quo—what we’ve done—with an overly optimistic and rosy glow. It makes us want to hold on to it all the more. Those mysterious paintings start looking all the more real once they’ve been hanging on the walls of your home—Freedman herself purchased two, hanging them prominently in her entryway. Of course you weren’t wrong about them. Just look at how beautiful they are. The proof, as they say, is in the pudding.
In 1991, Daniel Kahneman, Richard Thaler, and Jack Knetsch offered a compelling example: the case of one of their economist colleagues. Years earlier, he’d bought some cases of Bordeaux—he was a fan of French wines. At the time, they were $10 a bottle, right in his price-point sweet spot. On principle, he tried not to buy wine over $30 a bottle. In the intervening years, though, his cases had grown substantially in value. Now each bottle would bring over $200 at auction. The friend had been approached by potential buyers, willing to give him quite a bit of cash to part with the wine. He had refused—although he’d equally refused to buy any additional bottles at the new, “crazy” price. He wasn’t going to get that much additional enjoyment out of them—he simply didn’t think a wine could ever justify such a price tag. He was, the behavioral economists concluded, suffering from both an endowment effect—to him, the bottles were worth even more than $200 simply because they were his, though the exact same wine bought anew would be worth substantially less—and a status quo bias—the tendency to leave things as they are, neither buying nor selling, but simply continuing on as is.
Experimentally, the endowment effect is remarkably well documented. Repeatedly, people who don’t own something—say, a pen or a mug, two items often used in these studies—will be willing to pay less for it than they would demand to sell the exact same object. Take this example, from one of Kahneman and Thaler’s many studies. I give you a list of prices, from $0.25 to $9.25. I ask you one of three questions. In one case, I’ve already given you a mug from your college. Now I want to know whether you’d be willing to sell it at each of the range of prices (the “sellers”). In another case, I ask you whether, at each price, you’d be willing to buy a mug (the “buyers”). In a final case, I ask you whether, at each price, you’d rather have the cash or the mug (the “choosers”). Objectively, the sellers and choosers are in identical positions: each ends up with either cash or mug, at a price of their choice. But what the researchers observed was that the choosers actually behaved much more like the buyers, willing, on average, to forgo up to $3.12 in cash to take the mug (compared with the buyers’ $2.87). Above that, they wanted the cash. The sellers, in contrast, wouldn’t part with the mug for less than $7.12. Once we have something, its value increases by virtue of that ownership. We no longer look objectively; we look with the eyes of someone who has put down a stake.
Small children do the same thing instinctively: the toy they have becomes more valuable than the toy they didn’t get. It’s “the grass is always greener” turned on its head. We quite rationally decide that we might as well enjoy what we own. And so its value rises in our minds.
The status quo bias only makes things worse. We like things as they are. Children already know the toy they have is fun. Why risk exchanging it only to find the new one isn’t as good? The new path is uncertain. The one we’re on, we’ve already charted out and experienced. Just ask the executives who came up with New Coke. They’ll tell you how deeply we cling to the status quo. From toys to elections (the incumbent effect) to jobs and relationships that coast along on inertia, the status quo is supremely attractive. As Samuel Johnson once said, “To do nothing is within the power of all men.” Once we’re in the home stretch of the confidence game, our investment renders us unable to be objective about the past evidence; we ignore the breakdown and open the way for the send because we refuse to admit we could have been wrong. We persist in acting as we did before, despite the growing evidence that we should change course. And so of course the con is successful: the touch goes off without a hitch, and we’re left completely fleeced.
In one of the first demonstrations of the effect, William Samuelson and Richard Zeckhauser had people role-play a layperson, a manager, or a government policymaker. In one scenario, approximately five hundred economics students imagined themselves as financially literate and interested in the markets, but inexperienced in investing—until now, when a relative has left them a large inheritance. How would they invest it? People left to their own devices chose substantially different investments from those who’d been told that a good chunk of the money had already been invested in a certain company. On its own, the company wasn’t a very popular or particularly attractive option. But when a good amount of the investment was already in it, many elected to keep it that way.
The same pattern held even when the data were very clearly against the status quo. This time, students played the top managers of a regional airline and had to decide on the number and type of aircraft they would fly in each of two years. They could, the experimenters told them, switch leases for year two at absolutely no cost. At each decision point, for the first and second year, the students received a forecast of predicted economic conditions. The forecast was either good (stable airfares, high demand) or bad (price wars, lower demand). Some students received a good forecast followed by a bad one, while for others the forecasts were reversed.
Rationally, someone with a good forecast should lease a larger fleet. Someone with a bad forecast should commit to a smaller one. If the forecast changes, the first manager should cut back, and the second expand. That’s not, however, what the subjects did. In the first scenario, 64 percent of the students had chosen the larger fleet to begin with—and a full 50 percent stuck with it for year two. That is, 79 percent overall stuck with the status quo. In the second scenario, 57 percent started out with a smaller fleet—and despite the lost economic opportunity, 43 percent stuck with it in year two. That is, 86 percent overall stuck to their guns despite the changing landscape and the new information. The status, in other words, had changed markedly. The status quo, however, had not.
Samuelson and Zeckhauser went on to replicate the effect in actual behaviors, first by looking at the default options in Harvard University employee health plans, and then by examining retirement plans for the Teachers Insurance and Annuity Association. In both cases, a very strong status quo bias persisted. Despite new, better plans becoming available, people tended to stick with what they knew. “The individual,” they concluded, “may retain the status quo out of convenience, habit or inertia, policy (company or government) or custom, because of fear or innate conservatism, or through simple rationalization.” Whatever the reason, their strong bias was to hold on to it no matter what. And when the researchers explained the logic, to try to push people to change? “Most were readily persuaded of the aggregate pattern of behavior (and the reasons for it), but seemed unaware (and slightly skeptical) that they personally would fall prey to this bias.”
In the confidence game, the status quo favors the grifter and leaves the mark out in the cold. It’s a question of perception. I, Ann Freedman, have already put my reputation on the line for these paintings. I’ve been selling them. And buying them myself. And exhibiting them. Clearly, I believe in them—and others know I do. If I swerve from the path now, what will it look like? And anyway, there’s no reason to worry. The longer we’re committed to one path, the more right it feels. Fool me for a day, shame on you. Fool me for months, years, or decades, well, that’s a different story. I’m not that gullible. I couldn’t possibly be fooled for that long. That’s the exact kind of reasoning that makes us so susceptible to the send: we give more and more to justify our “objectivity.” And by the time we realize something is off, if we ever do, the touch has already gone off and the meat of the confidence game is done.
Once we’re in the game, it’s easiest to follow the path of least resistance. It justifies what we’ve already done and reduces the effort we need to make going forward. The deeper we get, the more difficult psychologically it becomes to extricate ourselves, or to see that we’re even in need of extrication. All of the factors are aligned against us.
Remember Demara’s escapade aboard the Cayuga? Even after news came that he was an impostor and not the doctor he’d made himself out to be, the captain didn’t believe it. He thought the other Dr. Cyr was the liar. He couldn’t possibly have been inveigled by Fred’s wiles. He’d have him back as staff surgeon any day of the week, he said as they parted ways. He trusted completely in Demara’s surgical skill.
Besides, we tell ourselves, the moment I see a red flag, I’ll get out. I can always get out. This is my choice, my situation, my life, and I am in control. The reason I haven’t gotten out up until now is that there have been no red flags and no reason to do anything other than what I’ve been doing. I can change my mind whenever I want, if I see a reason to change it. I am, after all, smart, successful, and naturally skeptical.
That certainty, alas, is an illusory one. It’s the belief that you can always control your exit, a subset of a broader category of beliefs in your ability to control events even when they’ve gone beyond your reach—the illusion of control. We recommit and are taken for all we’re worth, victims of send and touch at once, because we never get out of any situation in time. We always think we’re in control, and so we never realize when we should cut and run.
In 1975, psychologist Ellen Langer tried a simple experiment: have people flip a coin and predict how it would land. The coin, however, didn’t just land randomly. Langer had carefully engineered the sequences. Some people ended up making a lot of accurate calls right off. Some would be wrong most of the time, until near the end, when suddenly they became more accurate. And for some, the calls were basically random. In each case, the total number of correct calls was the same. What changed was the order in which they came.
Coin flips are based entirely on luck. Unless you’re playing with a loaded coin, you always have a fifty-fifty shot at guessing correctly. It’s not an outcome you can control, not a skill you can improve, not something you can be good or bad at doing. It’s simply that. A flip of a coin. That’s not, however, how people perceived it. Those who made a lot of accurate calls early on said that they were simply good at predicting coin tosses. They treated it like a skill instead of dumb luck and said that, with practice, they’d even improve over time. When Langer asked how many correct guesses they’d made, they vastly overestimated their success. She called this tendency the illusion of control: we think we are in control even when there’s no way we can be—and even if, somewhere in the back of our minds, we know we’re dealing with a game of chance. As Langer put it in the title of her paper: “Heads, I Win; Tails, It’s Chance.”
We overestimate the extent to which we, personally, are the designers of our success, as opposed to it just happening all on its own. When something goes wrong, we’re only too eager to blame ill fortune. Not so when it goes right. In several studies, teachers accepted the credit for student improvements but blamed the students if they continued doing poorly. Likewise in investment behavior: if we pick a stock and it goes up, we think we’re the cause; if it goes down, stupid markets.
And the deeper we’re enmeshed in something, be it a confidence scheme or a more benign sort of game, the stronger the illusion grows. In one of her subsequent studies, Langer found that the more information individuals had about a pure chance lottery, the more confident they were in their ability to win—to the point that they actively refused a chance to trade their original ticket for one where the objective chances of winning were better. What’s more, when participants were given time to familiarize themselves with and practice on a task of pure luck, they rated their confidence in their ability to succeed in that task significantly higher—even though the chance nature of the task had not changed in any way. They further thought they had more control if, in a dice game, they, and not someone else, were the ones doing the throwing.