
The Republican Brain


by Chris Mooney


  Indeed, according to evolutionary psychologists Leda Cosmides and John Tooby of the University of California-Santa Barbara, the emotions are best thought of as a kind of control system to coordinate brain operations—Matrix-like programs for running all the other programs. And when the control programs kick in, human reason doesn’t necessarily get the option of an override.

  How does this set the stage for motivated reasoning?

  Mirroring this evolutionary account, psychologists have been talking seriously about the “primacy of affect”—emotions preceding, and often trumping, our conscious thoughts—for three decades. Today they broadly break the brain’s actions into the operations of “System 1” and “System 2,” which are roughly analogous to the emotional and the reasoning brain.

  System 1, the older system, governs our rapid-fire emotions; System 2 refers to our slower-moving, thoughtful, and conscious processing of information. Its operations, however, aren’t necessarily free of emotion or bias. Quite the contrary: System 1 can drive System 2. Before you’re even aware you’re reasoning, your emotions may have set you on a course of thinking that’s highly skewed, especially on topics you care a great deal about.

  How do System 1’s biases infiltrate System 2? The mechanism is thought to be memory retrieval—in other words, the thoughts, images, and arguments called into one’s conscious mind following a rapid emotional reaction. Memory, as embodied in the brain, is conceived of as a network, made up of nodes and linkages between them—and what occurs after an emotional reaction is called spreading activation. As you begin to call a subject to mind (like Sarah Palin) from your long-term memory, nodes associated with that subject (“woman,” “Republican,” “Bristol,” “death panels,” “Paul Revere”) are activated in a fanlike pattern—like a fire that races across a landscape but only burns a small fraction of the trees. And subconscious and automatic emotion starts the burn. It therefore determines what the conscious mind has available to work with—to argue with.
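  To make the picture of nodes, linkages, and spreading activation a bit more concrete, here is a minimal sketch of the idea in Python. It is an illustration only, not the model the psychologists themselves use: the nodes, the link strengths, the decay factor, and the activation threshold are all invented for the example, and the cue’s initial activation simply stands in for the emotional jolt that “starts the burn.”

```python
# Hypothetical illustration of spreading activation over a toy
# associative network. All nodes, links, and numbers are invented.
from collections import deque

# Invented associative links and their strengths (0 to 1).
LINKS = {
    "Sarah Palin": {"woman": 0.9, "Republican": 0.9, "Bristol": 0.7,
                    "death panels": 0.6, "Paul Revere": 0.5},
    "Republican": {"conservative": 0.8, "death panels": 0.4},
    "death panels": {"health care": 0.6},
}

def spread_activation(cue, initial=1.0, decay=0.6, threshold=0.25):
    """Fan activation out from the cue node. Weaker links and more
    distant nodes receive less activation -- the fire that burns
    only a fraction of the trees."""
    activation = {cue: initial}
    frontier = deque([cue])
    while frontier:
        node = frontier.popleft()
        for neighbor, strength in LINKS.get(node, {}).items():
            level = activation[node] * strength * decay
            if level >= threshold and level > activation.get(neighbor, 0.0):
                activation[neighbor] = level
                frontier.append(neighbor)
    return activation

if __name__ == "__main__":
    # Nodes that end up above the threshold are what the conscious
    # mind "has available to work with" once the cue is encountered.
    for node, level in sorted(spread_activation("Sarah Palin").items(),
                              key=lambda kv: -kv[1]):
        print(f"{node:15s} {level:.2f}")
```

  Only the nodes that finish above the threshold become available to the conscious mind; the rest of the network stays dark, which is the sense in which an automatic emotional reaction determines what there is to argue with.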

  To see how it plays out in practice, consider a conservative Christian who has just heard about a new scientific discovery—a new hominid finding, say, confirming our evolutionary origins—that deeply challenges something he or she believes (“human beings were created by God”; “the book of Genesis is literally true”). What happens next, explains Stony Brook University political scientist Charles Taber, is a subconscious negative (or “affective”) response to the threatening new information—and that response, in turn, guides the type of memories and associations that are called into the conscious mind based on a network of emotionally laden associations and concepts. “They retrieve thoughts that are consistent with their previous beliefs,” says Taber, “and that will lead them to construct or build an argument and challenge to what they are hearing.”

  In other words, when we think we’re reasoning we may instead be rationalizing. Or to use another analogy offered by University of Virginia social psychologist Jonathan Haidt: We may think we’re being scientists, but we’re actually being lawyers. Our “reasoning” is a means to a predetermined end—winning our “case”—and is shot through with classic biases of the sort that render Condorcet’s vision deeply problematic. These include the notorious “confirmation bias,” in which we give greater heed to evidence and arguments that bolster our beliefs, and seek out information to reinforce our prior commitments; as well as its evil twin the “disconfirmation bias,” in which we expend disproportionate energy trying to debunk or refute views and arguments that we find uncongenial, responding very defensively to threatening information and trying to pick it apart.

  That may seem like a lot of jargon, but we all understand these mechanisms when it comes to interpersonal relationships. Charles Dickens understood them, even if not by name. If I don’t want to believe that my spouse is being unfaithful, or that my child is a bully—or, as in Great Expectations, that a convict is my benefactor—I can go to great lengths to explain away details and behaviors that seem obvious to everybody else. Everybody who isn’t too emotionally invested to accept them, anyway.

  That’s not to suggest that we aren’t also motivated to perceive the world accurately—we often are. Or that we never change our minds—we do. It’s just that we sometimes have other important goals besides accuracy—including identity affirmation and protecting our sense of self. These can make us highly resistant to changing our beliefs when, by all rights, we probably should.

  Since it is fundamentally rooted in our brains, it should come as no surprise that motivated reasoning emerges when we’re very young. Some of the seeds appear to be present at least by age four or five, when kids are able to perceive differences in the “trustworthiness” of information sources.

  “When 5-year-olds hear about a competition whose outcome was unclear,” write Yale psychologists Paul Bloom and Deena Skolnick Weisberg, “they are more likely to believe a person who claimed that he had lost the race (a statement that goes against his self-interest) than a person who claimed that he had won the race (a statement that goes with his self-interest).” For Bloom and Weisberg, this is the very capacity that, while admirable in general, can in the right context set the stage for resistance to certain types of information or points of view.

  The reason is that where there is conflicting opinion, children will decide upon the “trustworthiness” of the source—and they may well, in a contested case, decide that Mommy and Daddy are trustworthy, and the teacher talking about evolution isn’t. This will likely occur for emotional, motivated, or self-serving reasons.

  As children develop into adolescents, motivated reasoning also develops. This, too, has been studied, and one of the experiments is memorable enough to describe in some detail.

  Psychologist Paul Klaczynski of the University of Northern Colorado wanted to learn how capable adolescents are of reasoning about topics they care deeply about. So he decided to see how they evaluated arguments about whether a kind of music they liked (either heavy metal or country) led people to engage in harmful or antisocial behavior (drug abuse, suicide, etc.). You might call it the Tipper Gore versus Frank Zappa experiment, recalling the 1980s debate over whether rock lyrics were corrupting kids and whether some albums needed to have parental labels on them.

  Ninth and twelfth graders were presented with arguments about the behavioral consequences of listening to heavy metal or country music—each of which contained a classic logical fallacy, such as a hasty generalization or tu quoque (a diversion). The students were then asked how valid the arguments were, to discuss their strengths and weaknesses, and to describe how they might design experiments or tests to falsify the arguments they had heard.

  Sure enough, the students were found to reason in a more biased way when defending the kind of music they liked. Country fans rated pro-country arguments as stronger than anti-country arguments (though all the arguments contained fallacies), flagged more problems or fallacies in anti-country arguments than in pro-country ones, and proposed better evidence-based tests for anti-country arguments than for the arguments that stroked their egos. Heavy metal fans did the same.

  Consider, for example, one adolescent country fan’s response when asked how to disprove the self-serving view that listening to country music leads one to have better social skills. Instead of proposing a proper test (for example, examining antisocial behavior in country music listeners) the student instead relied on what Klaczynski called “pseudo-evidence”—making up a circuitous rationale so as to preserve a prior belief:

  As I see it, country music has, like, themes to it about how to treat your neighbor. So, if you found someone who was listening to country, but that wasn’t a very nice person, I’d think you’d want to look at something else going on in his life. Like, what’s his parents like? You know, when you’ve got parents who treat you poorly or who don’t give you any respect, this happens a lot when you’re a teenager, then you’re not going to be a model citizen yourself.

  Clearly, this is no test of the argument that country music listening improves your social skills. So the student was pressed on the matter—asked how this would constitute an adequate experiment or test. The response:

  Well . . . you don’t really have to, what you have to look for is other stuff that’s happening. Talk to the person and see what they think is going on. So you could find a case where a person listens to country music, but doesn’t have many friends or get along very well. But, then, you talk to the person and see for yourself that the person’s life is probably pretty messed up.

  Obviously this student was not ready or willing to subject his or her beliefs to a true challenge. “Adolescents protect their theories with a diverse battery of cognitive defenses designed to repel attacks on their positions,” wrote Klaczynski.

  In another study—this time, one that presented students with the idea that their religious beliefs might lead to bad outcomes—Klaczynski and a colleague found a similar result. “At least by late adolescence,” he wrote, “individuals possess many of the competencies necessary for objective information processing but use these skills selectively.”

  The theory of motivated reasoning does not, in and of itself, explain why we might be driven to interpret information in a biased way, so as to protect and defend our preexisting convictions. Obviously, there will be a great variety of motivations, ranging from passionate love to financial greed.

  What’s more, the motivations needn’t be purely selfish. Even though motivated reasoning is sometimes also referred to as “identity-protective cognition,” we don’t engage in this process to defend ourselves alone. Our identities are bound up with our social relationships and affiliations—with our families, communities, alma maters, teams, churches, political parties. Our groups. In this context, an attack on one’s group, or on some view with which the group is associated, can effectively operate like an attack on the self.

  Nor does motivated reasoning suggest that we must all be equally biased. There are still checks one can put on the process. Other people, for instance, can help keep us honest—or, conversely, they can affirm our delusions, making us more confident in them. Societal institutions and norms—the norms of science, say, or the norms of good journalism, or the legal profession—can play the same role.

  There may also be “stages” of motivated reasoning. Having a quick emotional impulse and then defending one’s beliefs in a psychology study is one thing. Doing so repeatedly, when constantly confronted with challenging information over time, is something else. At some point, people may “cry uncle” and accept inconvenient facts, even if they don’t do so when first confronted with them.

  Finally, individuals may differ in their need to defend their beliefs, their internal desire to have unwavering convictions that do not and cannot change—to be absolutely convinced and certain about something, and never let it go. They may also differ in their need to be sure that their group is right, and the other group is wrong—in short, their need for solidarity and unity, or for having a strong in-group/out-group way of looking at the world. These are the areas, I will soon show, where liberals and conservatives often differ.

  But let’s table that for now. What counts here is that our political, ideological, partisan, and religious convictions—because they are deeply held enough to constitute core parts of our personal identities, and because they link us to the groups that bulwark those identities and give us meaning—can be key drivers of motivated reasoning. They can make us virtually impervious to facts, logic, and reason. Anyone in a politically split family who has tried to argue with her mother, or father, about politics or religion—and eventually decided “that’s a subject we just don’t talk about”—knows what this is like, and how painful it can be.

  And no wonder. If we have strong emotional convictions about something, then these convictions must be thought of as an actual physical part of our brains, residing not in any individual brain cell (or neuron) but rather in the complex connections between them, and the pattern of neural activation that has occurred so many times before, and will occur again. The more we activate a particular series of connections, the more powerful it becomes. It grows more and more a part of us, like the ability to play guitar or juggle a soccer ball.

  So to attack that “belief” through logical or reasoned argument, and thereby expect it to vanish and cease to exist in a brain, is really a rather naïve idea. Certainly, it is not the wisest or most effective way of trying to “change brains,” as Berkeley cognitive linguist George Lakoff puts it.

  We’ve inherited an Enlightenment tradition of thinking of beliefs as if they’re somehow disembodied, suspended above us in the ether, and all you have to do is float up the right bit of correct information and wrong beliefs will be dispelled, like a soap bubble bursting. Nothing could be further from the truth. Beliefs are physical. To attack them is like attacking one part of a person’s anatomy, almost like pricking his or her skin (or worse). And motivated reasoning might perhaps best be thought of as a defensive mechanism that is triggered by a direct attack upon a belief system, physically embodied in a brain.

  I’ve still only begun to unpack this theory and its implications—and have barely drawn any meaningful distinctions between liberals and conservatives—but it is already apparent why Condorcet’s vision fails so badly. Condorcet believed that good arguments, widely disseminated, would win the day. The way the mind works, however, suggests that good arguments will only win the day when people don’t have strong emotional commitments that contradict them. Or to employ lingo sometimes used by the psychologists and political scientists working in this realm, it suggests that cold reasoning (rational, unemotional) is very different from hot reasoning (emotional, motivated).

  Consider an example. You can easily correct a wrong belief when the belief is that Mother’s Day is May 8, but it’s actually May 9. Nobody is going to dispute that—nobody’s invested enough to do so (we hope), and moreover, you’d expect most of us to have strong motivations (which psychologists sometimes call accuracy motivations) to get the date of Mother’s Day right, rather than defensive motivations that might lead us to get it wrong. By the same token, in a quintessential example of “cold” and “System 2” reasoning, liberals and conservatives can both solve the same math problem and agree on the answer (again, we hope).

  But when good arguments threaten our core belief systems, something very different happens. The whole process gets shunted into a different category. In that case, the arguments are likely to automatically provoke a negative, subconscious emotional reaction. Most of us will then come up with a reason to reject them—or, even in the absence of a reason, refuse to change our minds.

  Even scientists—supposedly the most rational and dispassionate among us and the purveyors of the most objective brand of knowledge—are susceptible to motivated reasoning. When they grow deeply committed to a view, they sometimes cling to it tenaciously and refuse to let go, ignoring or selectively reading the counterevidence. Every scientist can tell you about a completely intransigent colleague, who has clung to the same pet theory for decades.

  However, what’s unique about science is that it has its origins in a world-changing attempt to weed out and control our lapses of objectivity—what the great 17th-century theorist of scientific method, Francis Bacon, dubbed the “idols of the mind.” That attempt is known as the Scientific Revolution, and revolutionary it was. Gradually, it engineered a series of processes to put checks on human biases, so that even if individual researchers are prone to fall in love with their own theories, peer review and the skepticism of one’s colleagues ensure that, eventually, the best ideas emerge. In fact, it is precisely because different scientists have different motivations and commitments—including the incentive to refute and unseat the views of their rivals, and thus garner fame and renown for themselves—that the process is supposed to work, among scientists, over the long term.

  Thus when it comes to science, it’s not just the famous method that counts, but the norms shared by individuals who are part of the community. In science, it is seen as a virtue to hold your views tentatively, rather than with certainty, and to express them with the requisite caveats and without emotion. It is also seen as admirable to change your mind, based upon the weight of new evidence.

  By contrast, for people who have authoritarian personalities or dispositions—predominantly political conservatives, and especially religious ones—seeming uncertain or indecisive may be seen as a sign of weakness.

  If even scientists are susceptible to bias, you can imagine how ordinary people fare. When it comes to the dissemination of science—or contested facts in general—across a nonscientific populace, a process very different from the scientific one is often at work. A vast number of individuals, with widely varying motivations, are responding to the conclusions that science, allegedly, has reached. Or so they’ve heard.

  They’ve heard through a wide variety of information sources—news outlets with differing politics, friends and neighbors, political elites—and are processing the information through different brains, with very different commitments and beliefs, and different psychological needs and cognitive styles. And ironically, the fact that scientists and other experts usually employ so much nuance, and strive to disclose all remaining sources of uncertainty when they communicate their results, makes the evidence they present highly amenable to selective reading and misinterpretation. Giving ideologues or partisans data that’s relevant to their beliefs is a lot like unleashing them in the motivated reasoning equivalent of a candy store. In this context, rather than reaching an agreement or a consensus, you can expect different sides to polarize over the evidence and how to interpret it.

 
