Because of our self-serving bias, we tend to rationalize after the fact, focusing on reasons that justify our choice rather than those that went against it. It’s a kind of confirmation in reverse: we want to decide to do one thing over another—take the suitcase for Denise—and we want to think our decision good—this is what a good husband does—so we marshal evidence that makes our decision seem well reasoned even when we didn’t actually use any of that evidence in reaching our choice to begin with. There’s nothing wrong with this suitcase. It’s perfectly normal to carry an empty bag for a girl you’ve never met, that was given to you by a man in a dark Bolivian street. She has always joked with me, so the cocaine texts are jokes—and I want her to like me and think that I’m “cool” and “with-it” and able to play along. The woman of my dreams would never actually hurt me. Oh, and if she were out to get me: I’m a brilliant man. I would know. And I know real love when I see it. Real love would not actually give me cocaine. So I’ll text away. After all, there’s no actual danger. And she will be even more fond of me when she sees how playful I can be. I may be older than she, but I can be hip and lighthearted with the best of them. I will pass her every test.
Paul Slovic’s work centers on how we make decisions, especially under conditions of risk—that is, when we’re taking some sort of gamble, be it financial or personal. He proposes that when we want to go for something, for whatever reason, the reasons in support of the choice will loom much heavier than those against it. If, however, we want to reject it, the negatives will suddenly seem much meatier. We focus on the rationale that retroactively justifies our choice rather than actually base our choice in the moment on the most pertinent rationale. If Frampton had found Denise unattractive, he would have likely evaluated her identical dating profile much more negatively. He might have even called it a scam, one of many so-called sweetheart cons that line the pages of dating services. He would have said he was basing his conclusion on the evidence, when he was doing no such thing. He would have already decided he didn’t like her, and so would be looking for reasons to justify that conclusion.
Even after he’d been imprisoned, Frampton remained unconvinced of Denise’s duplicity. For months, the New York Times reported, he held that they had both been set up. “When he first called me, he thought he’d be home in a couple of days,” Anne-Marie Frampton told the Telegraph. It would all turn out to be a big misunderstanding, he assumed. She, however, was more alarmed. Not mincing words, she summed up the situation this way: “I realize that for anybody who doesn’t know Paul, it must seem crazy that someone could both be so intelligent and so devoid of common sense . . . Even his friends would tell you that he’s completely daft. He’s a naïve fool—he is, really, an idiot savant.” She concluded, “His stupidity could cost him his life.”
One of the reasons that the tale is so powerful is that, despite the motivated reasoning that we engage in, we never realize we’re doing it. We think we are being rational, even if we have no idea why we’re really deciding to act that way. In “Telling More Than We Can Know,” a seminal paper in the history of social and cognitive psychology, Richard Nisbett and Timothy Wilson showed that people’s decisions are often influenced by minute factors outside their awareness—but tell them as much, and they rebel. Instead, they will give you a list of well-reasoned justifications for why they acted as they did. The real reasons, however, remain outside their knowledge. In fact, even when Nisbett and Wilson pinpointed the precise inspiration for a choice—in one case, someone casually walking by a curtain to make it swing back and forth in a manner that suggested the swing of a pendulum, prompting the subject herself to swing a rope as a pendulum to solve a puzzle—the vast majority of people persisted in a faulty interpretation. It wasn’t the power of suggestion, they insisted. It was an insight they had reached after meticulous thought and evaluation of alternatives.
In the 1970s, a new kind of art started to gain traction in the New York art market: nineteenth-century American painting. The paintings had existed for over a century, to be sure, but they had never been particularly popular. Suddenly, they were all the rage. By the end of the decade, they were commanding prices in the hundreds of thousands, selling at auctions at the best houses and gracing the walls of the chicest art collectors. The art world is notoriously fickle. Trends come and go. Artists who sold for nothing become popular. Artists who were popular sell for nothing. But in this particular case, the market's rise wasn't altogether accidental; it was, rather, the doing of an incredibly successful confidence man, and one who was a master of the tale if ever there was one.
Ken Perenyi feels no guilt about his past as a fine art forger who duped galleries, collectors, and auction houses alike into buying his faux-nineteenth-century creations. He’s downright gleeful. “I loved what I did,” he tells me one winter afternoon, sitting in his Florida living room. “It was a contest of wits. I regret nothing—well, the only regret I have in my life is that the FBI walked in on me.” One of the things he’s most proud of is his ability, with a few accomplices, to convince people that a Butterworth is exactly what they’d been wanting for their collection all along. “It was the ground floor of a rapidly developing new market,” Perenyi recalls. No one knew quite what to expect, or quite what they wanted—so Perenyi was all too happy to plant the seeds of suggestion, and then create the perfect painting to make that suggestion a reality, all the while letting the gallery owner or collector think he was the one calling the shots, asking for the precise painting he wanted, playing Perenyi for the fool by extracting a good deal on the price. By 1978, the Sotheby’s catalogue contained two full-page prints of Butterworth paintings that would be up for sale. Both were hot commodities—the nineteenth-century sweet spot. And both had been painted in the last few years by Perenyi.
Perenyi was never formally charged. After the FBI caught up with him, he was eventually let off with a warning not to do it again. (He now paints “legitimate forgeries,” that is, the same fakes but without passing them off as such.) He doesn’t know why, but he suspects it would be too embarrassing for the major auction houses: “The nineteenth-century American painting department is the jewel in the crown of Sotheby’s. They started it. They developed it. It had never been tainted by any scandal,” he speculates. “If my story came out, they would have to say, ‘Oh, my god. How deep does this go? How many did we sell?’” Perenyi didn’t just enter the market. He helped create it, convincing marks of what they’d always wanted, and then nicely filling that request—because don’t they deserve the best?
Not only does our conviction of our own exceptionalism and superiority make us misinterpret events and mischaracterize decisions; it also hits us a second time long after the event in question. Because of this, we rewrite the past in a way that makes us less likely to learn from it, selectively recalling everything good and conveniently forgetting the bad. We rewrite positive events to make us more central in their unfolding. As for negative events, sometimes we don’t even remember they occurred. In other words, someone like Frampton, after being released, will likely fail to learn something about the future from his behavior in the past.
Memory is a tricky thing, and once we've been taken, it becomes all the more likely that we will fall for a con again. There is no better mark, many a con artist will tell you, than one who has already been duped. When Bluma Zeigarnik, a psychologist from the Gestalt school, discovered her eponymous effect—we remember interrupted tasks better than completed ones; our minds haven't quite given up working on them, and we feel a strong need to attain some sort of closure—she also noted a far less frequently cited exception. As it turns out, we don't remember all interrupted tasks equally. Zeigarnik found that for some people the effect was reversed. If a person felt she had performed poorly, the task was promptly dismissed from the mind. The privileged position reserved for unfinished business was no longer so privileged if the business in question wasn't a particularly good one. For a con artist, that tendency is
pure gold: you will try your best to dismiss any moments when you acted like a dupe, rationalize them away as flukes. So next time the tale rolls around, you will once more think it’s your lucky chance.
In 1943, Saul Rosenzweig, a psychologist at Clark University and Worcester State Hospital, further elaborated on Zeigarnik’s exception. What if, he wondered, he took it one step further: an interrupted task that signaled personal failure, whereas simple completion would mean success? Rosenzweig recruited a group of students to complete a series of jigsaw puzzles—pictures of everyday objects like a boat, a house, or a bunch of grapes—each the size of a square foot. Each student would be allowed to complete only half of them; the other half, as in Zeigarnik’s prior setups, would be interrupted. Not all the puzzle-completing sessions, however, transpired in quite the same way.
In one case, Rosenzweig had recruited students from the student employment office for a small hourly fee. These students were told that they’d be evaluating puzzles for a future study; what the researchers were interested in was figuring out how well the puzzles worked for the purposes of their research. “This is not in any way a test of your ability or anything else about you,” each student was explicitly told. “Don’t hurry or feel in any way constrained.” And one more thing: “Do not be surprised if I interrupt you before you finish,” he said. “I doubtless shall do so if I find out what I want to know about a particular puzzle before you finish.”
The other group, however, was set up for an altogether different experience. This time, they weren’t recruited by just anyone; they were the freshman advisees of the director of the clinic, and he personally invited each of them to take part. This time around, the puzzles were presented as a test of intelligence, “so that you may be compared with the other persons taking the test.” Each puzzle would count the same in the final score, but different puzzles had different levels of difficulty, and so the time allowed to work on each would vary. “If you do not solve any puzzle in the allotted time, I shall naturally be obliged to stop you.” And one more thing: “Your work will be interpreted as representing the full extent of your ability, so do your best.” As if they weren’t going to already.
Immediately after the last puzzle, each student in the study was asked to list as many of the puzzles as he could remember, in any order they occurred to him. When Rosenzweig compared the lists, he found exactly what he had hypothesized. The first group exhibited just the expected Zeigarnik effect: their recall of the interrupted puzzles far exceeded their recall of the completed ones. In the second group, however, the effect was completely reversed; now memory for the completed tasks surpassed recall of the interrupted ones by a long shot. It was, Rosenzweig concluded, a battle of excitement and pride: excitement at working on something in the first case, and pride at finishing in the second. (Despite the shaky ethical standards adhered to in social experimentation in 1943, the poor students in the second group were promptly debriefed to disclose the true nature of the study. They did not leave feeling that their intellect had taken a sudden plunge.)
Cons are often underreported because, to the end, the marks insist they haven't been conned at all. Our memory is selective. When we feel that something was a personal failure, we dismiss it rather than learn from it. And so, many marks decide that they were merely victims of circumstance; they had never been taken for a fool. In June 2014, a so-called suckers list of people who had fallen for multiple scams surfaced in England. It had been passed on from shady group to shady group, sold to willing bidders, until law enforcement had gotten hold of its contents. It was 160,000 names long. When authorities began contacting some of the individuals on the list, they were met with surprising resistance. I've never been scammed, the victims insisted. You must have the wrong information.
Of course, it’s not particularly pleasant to dwell on moments that put our skills or personalities in question. We’d much rather pretend they never happened. And even if we do remember, we’re much more likely to shift some of the blame in other directions. The test was rigged and unfair. It was her fault. He was being mean. She didn’t give me a chance. He asked for it. I was tired/hungry/stressed/overwhelmed/thirsty/bored/worried/preoccupied/unlucky. Unfortunately, by this kind of dismissal, we fail to learn what we could have done differently—and in the case of the con, we fail to properly assess our risk of getting, well, conned. We fall for the tale because we want to believe its promise of personal gain—and don’t much feel like recalling any reasons why that promise may be more smoke and mirrors than anything else.
In fact, Baruch Fischhoff, a social psychologist at Carnegie Mellon who studies how we make decisions, even has a name for instances of past misdirection: the knew-it-all-along effect or, as it’s more commonly known, hindsight bias. I knew it was a scam the whole time. So the fact that I don’t think that this scheme is a scam now speaks all the more highly for its integrity. The confidence man need not even convince us by this point. We’re quite good at getting over that hurdle ourselves.
We don’t see what the evidence says we should see. We see what we expect to see. As Princeton University psychologist Susan Fiske puts it, “Instead of a naïve scientist entering the environment in search of the truth, we find the rather unflattering picture of a charlatan trying to make the data come out in a manner most advantageous to his or her already-held theories.” That charlatan isn’t the con artist who’s out there. That charlatan is us conning ourselves.
* * *
Alas, our belief in our own superiority persists in the most unfortunate, and ironic, of places: in our assessment of the extent to which we believe in our own superiority. Of course, we realize that some things are simply too good to be true, there’s no such thing as a free lunch, and any number of other clichés we bandy about for just this purpose. We understand this in general. And yet. The illusion of unique invulnerability to all those biases is a tough one to break. We simply never think that, in any specific instance, it applies to us. In 1986, Linda Perloff and Barbara Fetzer, psychologists at the University of Illinois at Chicago, published the results of a series of studies aimed at testing how our beliefs in our own personal vulnerability may differ from our beliefs about vulnerability more broadly. Over and over, they found, people tended to underestimate the extent to which they were susceptible to any bad turn in life: their risks were reliably lower than the risk of the “average” person, at least according to their own estimate.
When Perloff and Fetzer tried to get their subjects to adjust their estimates by changing the comparison point from “average” to someone they knew—say, a friend or family member—in the hopes of making the risks seem more concrete, the attempt unexpectedly backfired. It didn’t make them feel more vulnerable at all; instead, it made them think that their friends and family were similarly less vulnerable. Sure, it could happen in general, but it won’t happen to me or my friends and family. In other words, instead of adjusting their own risk, their overconfidence widened to enfold the others in their lives. Whenever we have an opening to do so, the authors concluded, we will compare downward—that is, we will place ourselves and those closest to us at lower risk than the abstract mass of others, whether that risk is for a heart attack or a crime.
This is true of just about every better-than-average effect. When it comes to our friends, our relatives, our coworkers, even complete strangers, we tend to be fairly good at spotting biases even as we miss them completely in ourselves. In one set of studies, Stanford University students and random travelers at the San Francisco airport were shown to be capable of evaluating the susceptibility of the average American or their fellow students to a range of subjective evaluations—but when it came to themselves, they remained completely blind. Completely and, it seemed, almost willfully—much like Frampton or de Védrines. Even when the experimenters described the bias and pointed out that people tended to weigh themselves more positively on positive attributes and less than average on negative ones, the overwhelming majority insisted their initial rating was accurate—and 13 percent went even further, to say they'd been too modest. Others see the world through a prism of subjectivity; my view, however, is accurate. I'm fairly good at being objective, if I may say so myself.
In the summer of 2014, I had the chance to speak to a rather unusual family: one in which two grown siblings had, completely independently, gotten conned, trapped in schemes that had little to do with each other. Dave was the victim of an unfortunate ticket exchange on Craigslist. Unable to go to a show, he posted an ad looking to exchange his tickets for another evening. A few days later, he received a reply: Ashley was willing to exchange, but she unfortunately had e-tickets. (Trying to protect himself from getting scammed, he had asked for paper tickets only.) Dave was a bit worried, but no one else had offered to trade, and he really wanted to go to the show. And Ashley seemed legitimate. A quick Google search revealed a LinkedIn profile and a valid-seeming, upstanding occupation. They went ahead with the switch. Everything seemed good, up until the moment he and his girlfriend got to the show: their e-tickets had already been scanned, the guard informed them. They had been the victims of a well-known ticket fraud, where one person legitimately buys electronic tickets but then proceeds to sell them to multiple buyers.
Meanwhile, on the other coast, Dave’s sister Debbie found herself out some fifty dollars after buying bogus magazine subscriptions from a man who’d come to her door with a tale of trial and redemption: he had been in prison and was now working his way toward a better life. She hadn’t wanted to buy a subscription—she didn’t need any more magazines—but his story, and his mention of the tax deduction she’d receive, swayed her. She had pledged to be more charitable, and here was a good deed that would make her a kinder person. When she later checked up on the organization, its Web site had disappeared. The magazines, of course, never arrived.