The Confidence Game
There are universal folk beliefs, true. The only problem is, they are just as universally wrong. “The empirical literature just doesn’t bear it out,” says Leanne ten Brinke, a psychologist at the University of California at Berkeley whose work focuses on detecting deception. They persist because they fit our image of how a liar should behave. We want liars to exhibit signs of discomfort, like fidgeting, hemming and hawing, being inconsistent, flushing. We want liars to avert their gaze. They should feel shame and want to hide. Children as young as five already think that shifting your eyes away is a sign of deceit. In fact, if we are told beforehand that someone is lying, we are more likely to see them turning their eyes away from us. But that desire is not grounded in what liars actually do. Just because we want someone to feel ashamed, it doesn’t mean they do—or that they aren’t perfectly capable of hiding it in any event.
The mismatch between our conception of a liar and the reality—that there’s no “Pinocchio’s nose,” as ten Brinke put it—is surely one reason that, despite our confidence, our ability to tell a lie from the truth is hardly different from chance.
Paul Ekman doesn’t just study the prevalence of lying. His more central work focuses on our ability to discern deception. Over more than half a century of research, he has had over fifteen thousand subjects watch video clips of people either lying or telling the truth about topics ranging from emotional reactions to witnessing amputations, to theft, to political opinions and future plans. Their success rate at telling the liars from the truth-tellers has been approximately 55 percent. The nature of the lie—or truth—doesn’t even matter.
Over time, Ekman did find that one particular characteristic could prove useful: microexpressions, or incredibly fast facial movements that last, on average, between one fifteenth and one twentieth of a second and are exceedingly difficult to control consciously. The theory behind microexpressions is relatively straightforward: lying is more cognitively demanding than telling the truth. And so, under that added strain on the mind, we may show “leakage,” instantaneous behavioral tells that seep out despite our attempts to control them.
Microexpressions, though, are too fleeting and complex for the untrained eye to spot: out of Ekman’s fifteen thousand subjects, only fifty people could consistently point them out. About 95 percent of us miss them—and if we’re in the world of virtual con artists, or ones who strike over the phone, no amount of microexpression reading will do us any good. And as it turns out, even if we could read every minute sign, we would not necessarily be any better equipped to spot the liars among us—especially if they are as masterful at their craft as that prince of deception, the grifter.
Last summer, I had the chance to talk to one of Ekman’s original fifty human lie detectors, so to speak. She goes by Renée; her work, she explains, is too sensitive for any further identification. These days, she consults for law enforcement and trains others to spot lies. But, she admits, she is not infallible when it comes to the practiced deceivers she now deals with—not the liars of the videotapes in a psych study, but the people who lie as part of who they are, the real masters of the game. “Those people aren’t always an open book,” she told me. They don’t lie like the amateurs. They are craftsmen. For them, lying isn’t uncomfortable, or cognitively draining, or in any way an anomaly from their daily routine. It is what they do and has, over time, become who they are. Take psychopaths, she says. “The smart, intelligent psychopath is a super liar.” Someone like Ted Bundy, say. “He scares me and makes me uncomfortable,” she says with a shudder. “People like him seem to have the ability of the truth wizard, but they have no conscience. A superintelligent psychopath is my match.” She names a few more, among them serial killer Richard Kuklinski, better known as the Iceman. “If you watch him in his interviews, he is cold to the core.” Normally, Renée says, she trusts herself. But the best liars are a difficult match for even the best truth-seers.
Even then, it’s not a skill that can be easily learned. “I don’t think my ability is trainable,” Renée admits. “If we could, we’d be doing it already. I can give others tools, but they won’t be at the same level.”
What’s more, Ekman says, cognitive load can come from many sources, not just deception. Even with microexpressions, there is no surefire way of knowing whether someone is actually being untruthful. We can read signs of extreme pressure, but we don’t know where that pressure comes from: the person might simply be worried, nervous, or anxious about something else entirely. It’s one of the reasons lie detectors are so notoriously unreliable. Our physiology, like our face, is subject to all sorts of minute pressures, and not necessarily from the strain of deceiving. Sometimes the signal means lying. Sometimes it means stress, fatigue, or emotional distress. And in all cases it is impossible to be absolutely certain.
With con artists, lie detection becomes even trickier. “A lie,” Ekman says, “is a deliberate choice to mislead a target without any notification.” Plus, the more you lie, the less strain it causes, and the fewer identifying signs, even tiny ones, you display.
Even professionals whose careers are based on detecting falsehood are not always great at what they do. In 2006, Stefano Grazioli, Karim Jamal, and Paul Johnson constructed a computer model to detect fraudulent financial statements—usually the purview of an auditor. Their software correctly picked out the frauds 85 percent of the time. The auditors, by contrast, despite their professional confidence and solid knowledge of the typical red flags, picked out fewer than half—45 percent—of the fraudulent statements. Their emotions, it turns out, often got in the way of their accuracy. When they found a potential discrepancy, they would often recall a past case where there had been a perfectly reasonable explanation for it, and assume the same explanation applied here as well. They gave people the benefit of the doubt more generously than they should have: most people don’t commit fraud, so chances are, this one didn’t, either.
In fact, even when you know exactly what you’re looking for, you may find yourself further from accuracy than you would like. In August 2014, Cornell University researchers David Markowitz and Jeffrey Hancock analyzed the papers of social psychologist Diederik Stapel. They had chosen Stapel for a very specific reason. Three years earlier, in September 2011, it was revealed that he had perpetrated academic fraud on a massive scale. By the time the investigation concluded, in November 2012, fifty-five of his papers showed clear evidence of fraud: the data had either been massaged or, in the most egregious cases, fabricated outright. Stapel had never even run many of the studies in question; he’d simply created the results that would support the theory he was sure was correct.
When Markowitz and Hancock tested whether the false publications differed linguistically from the genuine ones, they found one consistent tell: the deceitful papers used far more words related to the nature of the work itself—how and what you measure—and to the accuracy of the results. If there’s not much substance, you “paper” more: you elaborate, you paint beautiful prose poems, you distract from what isn’t there. (Who doesn’t remember doing a bit of the same on a college essay, to hide evidence of less than careful reading?) But however useful these tools of linguistic analysis may be, they are far from perfect. Close to a third of Stapel’s work eluded proper classification based on the traits Markowitz and Hancock had identified: 28 percent of the genuine papers were incorrectly flagged as falsified, while 29 percent of the fraudulent ones escaped detection. A real grifter, even on paper, covers his tracks remarkably well, and as much as we may learn about his methods, when it comes to using them to ferret out his wiles, we will oftentimes find ourselves falling short.
But why would this be the case? Surely it would be phenomenally useful to have evolved to be better at spotting liars, at protecting ourselves from those who’d prey on our confidence for malicious ends?
* * *
The simple truth is that most people aren’t out to get you. We are so bad at spotting deception because it’s better for us to be more trusting. Trust, and not adeptness at spotting deception, is the more evolutionarily beneficial path. People are trusting by nature. We have to be. As infants, we need to trust that the big person holding us will take care of our needs and desires until we’re old enough to do it ourselves. And we never quite let go of that expectation. In one study, Stanford University psychologist Roderick Kramer asked students to play a game of trust. Some could just play as they wanted, but others were led to believe that the partner they were playing with might be untrustworthy. Our default, Kramer found, was trust. Those students who were specifically told that there might be some wrongdoing ended up paying more attention to possible signs of untrustworthiness than those who had no negative expectations. In reality, the partner behaved in the same way in either case, but his behavior was read differently in the two conditions: we read behavior as trustworthy unless we’re explicitly told otherwise.
And that may be a better thing than not. Higher so-called generalized trust, studies show, comes with better physical health and greater emotional happiness. Countries with higher levels of trust tend to grow faster economically and have sounder public institutions. People who are more trusting are more likely to start their own businesses and to volunteer. And the smarter you are, the more likely you are to trust: a 2014 survey by two Oxford psychologists found a strong positive relationship between generalized trust, intelligence, health, and happiness. People with higher verbal ability were 34 percent more likely to trust others; those with higher question comprehension were 11 percent more likely. And people with higher levels of trust were 7 percent more likely to be in better health, and 6 percent more likely to be “very” happy rather than “pretty” happy or not happy at all.
And in some sense, this excess of optimism about others’ basic decency is a good thing, at least most of the time. Remaining in a state of pleasant deception is often preferable to confronting the truth. It’s nice to think you look beautiful in everything you wear. That you’re radiant today despite a lack of sleep. That your invitation really was turned down because your guests had an inescapable conflict. That your article or project idea or pitch was rejected because, despite being wonderful, it really just wasn’t a “good fit.” Or any of the other white lies we hear dozens of times a day and never give a second thought, simply because they smooth the flow of normal social interaction.
As well as making us feel better, not spotting lies can make us perform better. In 1991, Joanna Starek and Caroline Keating followed the progress of a Division I college swim team from upstate New York. They wanted to know if swimmers who were better at self-deception—ignoring negative stimuli about themselves and interpreting ambiguous evidence as positive—performed any differently from those who were more honest and perceptive. They had each swimmer take the Self-Deception Questionnaire, a test developed in the 1970s by psychologists Ruben Gur and Harold Sackeim, followed by a test of binocular rivalry, where each eye would see a different word and the swimmer would need to quickly report what she saw. Finally, they had the coach reveal which of the swimmers had qualified for the Eastern Seaboard Swimming and Diving Championships. The more adept a swimmer was at self-deception, the researchers found, the more likely she was to have made the cut. It wasn’t the people who saw the world most clearly who did best; it was, rather, those most skilled at the art of seeing the world as they wanted it to be. And the world-as-we-want-it-to-be is precisely what the con artist sells.
The irony is inescapable. The same thing that can underlie success can also make you all the more vulnerable to the grifter’s wares. We are predisposed to trust. Those who trust more do better. And those who trust more become the ideal, albeit unwitting, player of the confidence game: the perfect mark.
* * *
They say you can’t cheat an honest man. When it comes to confidence schemes, though, that simply isn’t true. Honesty has nothing to do with it. Honest men, after all, are often the most trusting, and trust, as we know, is deadly when it comes to the con.
Apple. Mr. Bates. Chump. Egg. Savage. Winchell. They say a sucker is born every minute. There are about as many names for him. But at the end of the day, they all amount to the same thing: victim. The grifter’s mark is not greedy, at least no more so than anyone else. Nor are marks dishonest, any more than those of us who harbor fleeting suspicions of our own worth and exceptionalism. They are, simply, human.
Robin Lloyd wasn’t looking to get rich. She was just a poor college student who thought she’d finally caught a lucky break. It was 1982, and Robin was making her first ever trip to New York City. She’d grown up in the suburbs, and was in college at Smith, a small school in western Massachusetts. Spending time in urban environments was not something she’d ever given much thought to. One of her classmates, though, was a native New Yorker—she’d grown up in the Bronx—and invited her out for a weekend in the big city. Robin was excited. She had hardly any money, but the trip seemed well worth it.
On the first day of the trip, Robin and her friend made their way down from the Bronx to Broadway. It was tumultuous, exciting. It felt slightly dangerous, and that, too, was exciting in its own way. “It was the eighties, remember, and New York was not as cleaned up and cosmopolitan as it is now,” Robin tells me as we share a very New York bodega coffee—she has long since become a New Yorker herself. Everything was new and full of promise, a life she’d never even known existed in parallel to her own. And there, right on the sidewalk, was a loud-talking man seated behind a cardboard box. He was doing something at lightning speed with three playing cards, shuffling them around, flipping them, turning them this way and that. And money was being made: it looked to be some sort of game, and if you were good, it seemed, you could easily double your stake. All you had to do was follow the cards and bet on the right one—Follow the lady, as they say. “I remember being like a kid at the circus, so fascinated by him showing us how easy it was to win this game, that if you just threw down twenty bucks, the odds were so good you could double your money,” Robin says. She didn’t take the decision to play lightly. She had only two precious twenty-dollar bills in her pocket—her money for the entire two-day trip. “At this time in my life, I had no winter coat,” she remembers. “I didn’t even have three dollars to buy a Coca-Cola.” It was below freezing, and she wore a turtleneck, a sweatshirt, and a denim jacket on top. “I was getting by, but barely—I was getting through college.”
But something about this man’s patter seemed genuine; it was almost as if he saw her woes and wanted to help her with a quick influx of cash. And she’d just seen a lucky winner who’d doubled his money with ease and walked away elated. She decided to go for it. Hands slightly shaking—she was nervous—she put down a twenty. “Sure enough, it doubled.” She couldn’t believe her good fortune. But just as she was about to pick up her winnings, the man quickly interjected. Wouldn’t she like to double it again? “It was so exciting, the energy there. There was a crowd around us, and you want to win and want to believe so much.” And so, she acquiesced. She placed her last remaining bill on top.
The moment the cash left her hand, she regretted it. “I thought, this is not going to go well. That is much more money than I can afford to lose.” But for a second there, she had really believed that she would make it all back. “Right on the very next game, I lost it.” She had no more money to play with, and so, even as the sympathetic man urged her to try again and reverse her luck, she walked away, empty wallet in pocket. That evening, they were visiting a friend at Columbia. The girls ordered Chinese takeout. It would have been exciting—a real New York City thing to do—except there was only one thing Robin could think of. How was she going to come up with her three-dollar share for the food?
Three-card monte games are one of the most persistent and effective cons in history. They still line some New York City blocks over thirty years later. But we tend to dismiss the victims as rubes: who in their right mind would fall for something like that? Even Robin felt that way, calling herself a fool and admitting her embarrassment at being pulled in so easily. “I probably deserved it,” she says. But that’s in retrospect. In the moment, it’s not nearly so simple. Robin was educated and intelligent (today, she’s an editor at Scientific American). She was a good judge of people—she was studying sociology, after all. She was frugal and not easily swept up by spur-of-the-moment whims. She doesn’t fit the typical profile of a sap. But she was up against forces far greater than she realized. Monte operators, like all good con men, are exceptional judges of character—and exceptional creators of drama, the sort of narrative sweep that can make everything seem legitimate, natural, even inevitable. They know what to say to whom, how to say it, when to create a “lucky” diversion, how to make it seem like the game is all about skill—legitimate skill, not a risky gamble. To someone who has never heard of a shell game (monte’s close cousin, where instead of cards you watch shells and guess which one holds the bead) or a monte gang (a group of conspirators working together to make the game seem legitimate), it’s a dangerous proposition. When I mentioned to Robin that the winner she’d seen was a shill, a part of the monte gang planted there to lure people in, she expressed surprise. To this day she hadn’t realized that that was how the game worked. “The rational part of me knows I was conned. But there’s still a part of me that feels like I was unlucky.”
Over the years, many researchers have tried to identify what it is that separates those susceptible to cons—ideal marks—from those immune to them. It would be great, after all, to pinpoint exactly the things most likely to have you fooled, and to beat them once and for all. Wouldn’t it be wonderful if there were a shot that could inoculate you against all forms of chicanery?
We certainly have some strong ideas about marks. When representatives of all Better Business Bureau offices in the United States were asked to think about what things separated scam victims from non-victims, a few trends emerged. Some were obvious: gullibility, a trusting nature, a proneness to fantasy, and greed were perceived to be the traits that set victims apart. Victims were also seen as less intelligent and educated, poorer, more impulsive, and less knowledgeable and logical. And older: your grandmother is more likely to be fooled than you. But is the perception true?