The Elephant in the Brain: Hidden Motives in Everyday Life


by Kevin Simler and Robin Hanson


  In contrast, using self-deception to preserve self-esteem or reduce anxiety is a sloppy hack and ultimately self-defeating. It would be like trying to warm yourself during winter by aiming a blow-dryer at the thermostat. The temperature reading will rise, but it won’t reflect a properly heated house, and it won’t stop you from shivering.16

  Alternatively, imagine you’re the general in charge of a large army. You’re outnumbered and surrounded by the enemy with no clear line of escape. As you contemplate your next move on a large paper map, you realize how easy it would be to erase the mountain range that’s blocking your troops, or to draw a pass through the mountains where none actually exists. Having an escape route would certainly be a relief! But the map isn’t the territory; you can’t erase the actual mountains. Whatever you do to the map, the enemy will still have you surrounded. And by lying about reality, you’re setting yourself up to make bad decisions that will lead to even worse outcomes.

  A general who made a habit of indulging in such flights of fancy would quickly lose the war to one who didn’t. And the same is true for our minds. We therefore need a better reason for deceiving ourselves than mere psychic comfort.

  NEW SCHOOL: SELF-DECEPTION AS MANIPULATION

  In recent years, psychologists—especially those who focus on evolutionary reasoning—have developed a more satisfying explanation for why we deceive ourselves. Where the Old School saw self-deception as primarily inward-facing, defensive, and (like the general editing the map) largely self-defeating, the New School sees it as primarily outward-facing, manipulative, and ultimately self-serving.

  Two recent New School books have been Trivers’ The Folly of Fools (2011) and Robert Kurzban’s Why Everyone (Else) Is a Hypocrite (2010). But the roots of the New School go back to Thomas Schelling, a Nobel Prize–winning economist17 best known for his work on the game theory of cooperation and conflict.

  In his 1960 book The Strategy of Conflict, Schelling studied what he called mixed-motive games. These are scenarios involving two or more players whose interests overlap but also partially diverge. Thanks to the overlap, the players have an incentive to cooperate, but thanks to the divergence, they’re also somewhat at odds with each other. If this sounds familiar, it’s because humans (and our primate ancestors) have been playing mixed-motive games with each other for millions of years. It’s what we do every day, what our minds were built for. Nevertheless, as Schelling demonstrated, mixed-motive games can incentivize strange, counterintuitive behavior.

  A classic example is the game of chicken, typically played by two teenagers in their cars. The players race toward each other on a collision course, and the player who swerves first loses the game.18 Traditionally it’s a game of bravado. But if you really want to win, here’s what Schelling advises. When you’re lined up facing your opponent, revving your engine, remove the steering wheel from your car and wave it at your opponent. This way, he’ll know that you’re locked in, dead set, hell-bent—irrevocably committed to driving straight through, no matter what. And at this point, unless he wants to die, your opponent will have to swerve first, and you’ll be the winner.
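  To see the logic in miniature, here is a small Python sketch of the chicken game; the payoff numbers are illustrative assumptions rather than anything from the book, and the point is simply that the opponent’s best reply depends on which move he believes you are committed to.

```python
# A toy payoff matrix for Schelling's game of chicken. The payoff numbers are
# illustrative assumptions, not values from the book. Each entry maps
# (your move, opponent's move) to (your payoff, opponent's payoff).
payoffs = {
    ("swerve",   "swerve"):   (0,   0),    # both back down
    ("swerve",   "straight"): (-1,  1),    # you "chicken out," opponent wins
    ("straight", "swerve"):   (1,  -1),    # you win the bravado contest
    ("straight", "straight"): (-10, -10),  # head-on collision
}

def opponents_best_reply(move_they_believe_you_will_make):
    """The opponent's payoff-maximizing reply, given what they believe you will do."""
    return max(
        ("swerve", "straight"),
        key=lambda reply: payoffs[(move_they_believe_you_will_make, reply)][1],
    )

# If the opponent thinks you might chicken out, driving straight pays off for him:
print(opponents_best_reply("swerve"))    # -> "straight"

# Once the steering wheel is gone, he believes you can only drive straight,
# and his best reply flips to swerving; you win without a crash:
print(opponents_best_reply("straight"))  # -> "swerve"
```

  Nothing about the cars or the road changes in this sketch; only the opponent’s belief about your options changes, and with it his best reply.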

  The reason this is counterintuitive is because it’s not typically a good idea to limit our own options. But Schelling documented how the perverse incentives of mixed-motive games lead to option-limiting and other actions that seem irrational, but are actually strategic. These include

  •Closing or degrading a channel of communication. You might purposely turn off your phone, for example, if you’re expecting someone to call asking for a favor. Or you might have a hard conversation over email rather than in person.

  •Opening oneself up to future punishment. “Among the legal privileges of corporations,” writes Schelling, “two that are mentioned in textbooks are the right to sue and the ‘right’ to be sued. Who wants to be sued! But the right to be sued is the power to make a promise: to borrow money, to enter a contract, to do business with someone who might be damaged. If suit does arise, the ‘right’ seems a liability in retrospect; beforehand it was a prerequisite to doing business.”19

  •Ignoring information, also known as strategic ignorance. If you’re kidnapped, for example, you might prefer not to see your kidnapper’s face or learn his name. Why? Because if he knows you can identify him later (to the police), he’ll be less likely to let you go. In some cases, knowledge can be a serious liability.

  •Purposely believing something that’s false. If you’re a general who firmly believes your army can win, even though the odds are against it, you might nevertheless intimidate your opponent into backing down.

  In other words, mixed-motive games contain the kind of incentives that reward self-deception.

  There’s a tension in all of this. In simple applications of decision theory, it’s better to have more options and more knowledge. Yet Schelling has argued that, in a variety of scenarios, limiting or sabotaging yourself is the winning move. What gives?

  Resolving this tension turns out to be straightforward. Classical decision theory has it right: there’s no value in sabotaging yourself per se. The value lies in convincing other players that you’ve sabotaged yourself. In the game of chicken, you don’t win because you’re unable to steer, but because your opponent believes you’re unable to steer. Similarly, as a kidnapping victim, you don’t suffer because you’ve seen your kidnapper’s face; you suffer when the kidnapper thinks you’ve seen his face. If you could somehow see his face without giving him any idea that you’d done so, you’d probably be better off.

  By this line of reasoning, it’s never useful to have secret gaps in your knowledge, or to adopt false beliefs that you keep entirely to yourself. The entire value of strategic ignorance and related phenomena lies in the way others act when they believe that you’re ignorant. As Kurzban says, “Ignorance is at its most useful when it is most public.”20 It needs to be advertised and made conspicuous.

  Another way to look at it is that self-deception is useful only when you’re playing against an opponent who can take your mental state into account. You can’t bluff the blind forces of Nature, for example. When a hurricane is roaring toward you, it’s no use trying to ignore it; the hurricane couldn’t care less whether or not you know it’s coming. Sabotaging yourself works only when you’re playing against an opponent with a theory of mind. Typically these opponents will be other humans, but it could theoretically extend to some of the smarter animals, as well as hypothetical future robots or aliens. Corporations and nation-states also use some of these self-sabotaging tactics vis-à-vis each other and the public at large. Self-deception, then, is a tactic that’s useful only to social creatures in social situations.

  It’s hard to overstate the impact of what Schelling, Trivers, Kurzban, and others are arguing. Their conclusion is that we, humans, must self-deceive. Those who refuse to play such mind games will be at a game-theoretic disadvantage relative to others who play along. Thus we are often wise to ignore seemingly critical information and to believe easily refuted falsehoods—and then to prominently advertise our distorted thinking—because these are winning moves.

  As Trivers puts it, “We deceive ourselves the better to deceive others.”21

  WHY DO WE BELIEVE OUR OWN LIES?

  Still there’s an important lingering question. If the goal of self-deception is to create a certain impression in others, why do we distort the truth to ourselves? What’s the benefit of self-deception over a simple, deliberate lie?

  There are many ways to answer this question, but they mostly boil down to the fact that lying is hard to pull off. For one thing, it’s cognitively demanding. Huckleberry Finn, for example, struggled to keep his stories straight and was eventually caught in a number of lies. And it’s even harder when we’re being grilled and expected to produce answers quickly. As Mark Twain may have said elsewhere, “If you tell the truth, you don’t have to remember anything.”22

  Beyond the cognitive demands, lying is also difficult because we have to overcome our fear of getting caught. People get angry when they’re lied to—a reaction almost as universal as lying itself. (Even wasps who catch other wasps lying are known to retaliate in response.23) Therefore, aside from sociopaths and compulsive liars, most of us are afraid to tell bald-faced lies, and we suffer from a number of fear-based “tells” that can give us away. Our hearts race, our skin heats up, we start sweating and fidgeting. Maybe we have an eye twitch, nervous tic, awkward gulp, or cracking voice.24

  In light of this, often the best way to get others to believe something is to make it a reality. When you’re playing chicken, it won’t do much good to yell at your opponent, “Hey, I’ve torn off my steering wheel!” He won’t believe you until he sees that you’ve actually done it. Similarly, often the best way to convince others that we believe something is to actually believe it. Other people aren’t stupid. They’re aware that we often have an incentive to lie to them, so they’re watching us, eagle-eyed, for any signs of deception. They’re analyzing our words (often comparing them to things we said days, weeks, or months ago), scrutinizing our facial expressions, and observing our behaviors to make sure they conform to our stated motives.

  The point is, our minds aren’t as private as we like to imagine. Other people have partial visibility into what we’re thinking. Faced with the translucency of our own minds, then, self-deception is often the most robust way to mislead others. It’s not technically a lie (because it’s not conscious or deliberate), but it has a similar effect. “We hide reality from our conscious minds,” says Trivers, “the better to hide it from onlookers.”25

  Modeling the world accurately isn’t the be-all and end-all of the human brain. Brains evolved to help our bodies, and ultimately our genes, get along and get ahead in the world—a world that includes not just rocks and squirrels and hurricanes, but also other human beings. And if we spend a significant fraction of our lives interacting with others (which we do), trying to convince them of certain things (which we do), why shouldn’t our brains adopt socially useful beliefs as first-class citizens, alongside world-modeling beliefs?

  Wear a mask long enough and it becomes your face.26 Play a role long enough and it becomes who you are. Spend enough time pretending something is true and you might as well believe it.27

  Incidentally, this is why politicians make a great case study for self-deception. The social pressure on their beliefs is enormous. Psychologically, then, politicians don’t so much “lie” as regurgitate their own self-deceptions.28 Both are ways of misleading others, but self-deceptions are a lot harder to catch and prosecute.

  SELF-DECEPTION IN PRACTICE

  There are at least four ways that self-deception helps us come out ahead in mixed-motive scenarios. We’ll personify them in four different archetypes: the Madman, the Loyalist, the Cheerleader, and the Cheater.

  The Madman

  “I’m doing this no matter what,” says the Madman, “so stay outta my way!”

  When we commit ourselves to a particular course of action, it often changes the incentives for other players. This is how removing the steering wheel helps us win the game of chicken, but it’s also why businesspeople, gang leaders, athletes, and other competitors try to psych out their opponents.

  Rick Lahaye explains how athletes suffer when they don’t play the Madman:

  Athletes use small cues of tiredness from close competitors to give themselves a boost and keep pushing forward during a race (e.g., a marathon runner thinking, “Do you see him breathe? He’s almost done. Just keep pushing for one more bit and you will beat him.”). Because of this, athletes conceal (negative) information about [themselves] to competitors. If you show any “signs of weakness,” the opponent will see a chance for success and will be more willing to keep spending energy.29

  It was also one of Richard Nixon’s strategies for the war in Vietnam. As he explained to his chief of staff Bob Haldeman:

  I call it the Madman Theory, Bob. I want the North Vietnamese to believe I’ve reached the point where I might do anything to stop the war. We’ll just slip the word to them that, “for God’s sake, you know Nixon is obsessed about communism. We can’t restrain him when he’s angry — and he has his hand on the nuclear button” and Ho Chi Minh himself will be in Paris in two days begging for peace.30

  Of course, Nixon’s plan didn’t work out as well as he hoped, but his reasoning was valid. People often defer to the crazy ones, and our minds respond to that incentive by being a little bit crazy ourselves.

  The Loyalist

  “Sure, I’ll go along with your beliefs,” says the Loyalist, thereby demonstrating commitment and hoping to earn trust in return.

  In many ways, belief is a political act. This is why we’re typically keen to believe a friend’s version of a story—about a breakup, say, or a dispute at work—even when we know there’s another side of the story that may be equally compelling. It’s also why blind faith is an important virtue for religious groups, and to a lesser extent social, professional, and political groups. When a group’s fundamental tenets are at stake, those who demonstrate the most steadfast commitment—who continue to chant the loudest or clench their eyes the tightest in the face of conflicting evidence—earn the most trust from their fellow group members. The employee who drinks the company Kool-Aid, however epistemically noxious, will tend to win favor from colleagues, especially in management, and move faster up the chain.

  In fact, we often measure loyalty in our relationships by the degree to which a belief is irrational or unwarranted by the evidence. For example, we don’t consider it “loyal” for an employee to stay at a company when it’s paying her twice the salary she could make elsewhere; that’s just calculated self-interest. Likewise, it’s not “loyal” for a man to stay with his girlfriend if he has no other prospects. These attachments take on the color of loyalty only when someone remains committed despite a strong temptation to defect. Similarly, it doesn’t demonstrate loyalty to believe the truth, which we have every incentive to believe anyway. It only demonstrates loyalty to believe something that we wouldn’t have reason to believe unless we were loyal.

  There’s a famous Chinese parable illustrating the Loyalist function of our beliefs:

  Zhao Gao was a powerful man hungry for more power. One day he brought a deer to a meeting with the emperor and many top officials, calling the deer a “great horse.” The emperor, who regarded Zhao Gao as a teacher and therefore trusted him completely, agreed that it was a horse—and many officials agreed as well. Others, however, remained silent or objected. This was how Zhao Gao flushed out his enemies. Soon after, he murdered all the officials who refused to call the deer a horse.31

  Zhao Gao’s ploy wouldn’t have worked if he had called the deer a deer. The truth is a poor litmus test of loyalty.

  The Cheerleader

  “I know this is true,” the Cheerleader says. “Come on, believe it with me!”

  This kind of self-deception is a form of propaganda. As Kurzban writes, “Sometimes it is beneficial to be . . . wrong in such a way that, if everyone else believed the incorrect thing one believes, one would be strategically better off.”32

  The goal of cheerleading, then, is to change other people’s beliefs. And the more fervently we believe something, the easier it is to convince others that it’s true. The politician who’s confident she’s going to win no matter what will have an easier time rallying supporters than one who projects a more honest assessment of her chances. The startup founder who’s brimming with confidence, though it may be entirely unearned, will often attract more investors and recruit more employees than someone with an accurate assessment of his own abilities.

  When we deceive ourselves about personal health, whether by avoiding information entirely or by distorting information we’ve already received, it feels like we’re trying to protect ourselves from distressing information. But the reason our egos need to be shielded—the reason we evolved to feel pain when our egos are threatened—is to help us maintain a positive social impression. We don’t personally benefit from misunderstanding our current state of health, but we benefit when others mistakenly believe we’re healthy. And the first step to convincing others is often to convince ourselves. As Bill Atkinson, a colleague of Steve Jobs, once said of Jobs’s self-deception, “It allowed him to con people into believing his vision, because he has personally embraced and internalized it.”33

  The Cheater

  “I have no idea what you’re talking about,” the Cheater says in response to an accusation. “My motives were pure.”

  As we discussed in Chapter 3, many norms hinge on the actor’s intentions. Being nice, for example, is generally applauded—but being nice with the intention to curry favor is the sin of flattery. Similarly, being friendly is generally considered to be a good thing, but being friendly with romantic intentions is flirting, which is often inappropriate. Other minor sins that hinge on intent include bragging, showing off, sucking up, lying, and playing politics, as well as selfish behavior in general. When we deceive ourselves about our own motives, however, it becomes much harder for others to prosecute these minor transgressions. We’ll see much more of this in the next chapter.

  In other cases, it’s not our intentions that determine whether a norm was violated, but our knowledge. Learning about a transgression sometimes invokes a moral or legal duty to do something about it.34 If we see a friend shoplift, we become complicit in the crime. This is why we might turn a blind eye or strive to retain plausible deniability—so that, when questioned later, we’ll have nothing to hide.

  * * * * *

  Again, in all of these cases, self-deception works because other people are attempting to read our minds and react based on what they find (or what they think they find). In deceiving ourselves, then, we’re often acting to deceive and manipulate others. We might be hoping to intimidate them (like the Madman), earn their trust (like the Loyalist), change their beliefs (like the Cheerleader), or throw them off our trail (like the Cheater).

 
