The Enigma of Reason: A New Theory of Human Understanding

by Hugo Mercier and Dan Sperber


  It should therefore be no surprise that Kunda and other specialists in motivated reasoning have shown that people do not simply adopt beliefs as they see fit. They look for reasons, making sure that they can provide some justification for their opinions—and they drop even cherished beliefs when they fail to find justifications for them. For instance, people have an overall preference for believing they are better than average—smarter, better at socializing, more sensible, and so forth. However, they do not simply believe what would make them happiest: that they are the best at everything. Instead, they selectively self-enhance, only providing inflated assessments when they are somehow defensible.

  For instance, people tend to think they are more intelligent than the average—that’s an easy enough belief to defend: they can be good at math, or streetwise, or cultured, or socially skilled, and so on. By contrast, there aren’t two ways of being, say, punctual. Since people can’t think of ways to believe they are more punctual than the average, they just give up on this belief, or on other beliefs similarly hard to justify.25

  Even if reasoning is not wishful thinking, its function could still be to make us feel good. A first problem with this hypothesis is that reasoning often has the opposite effect. Bertillon might have been engulfed by the pleasure of complete self-confidence, but he may also have been distressed by every new proof of Dreyfus’s devilish ingenuity. The journalist Jonathan Kay interviewed many conspiracy theorists (people who do not accept the standard account of 9/11) while writing Among the Truthers. He found people suffering from “debilitating emotional agony” caused by “sudden exposure to the magnitude of evil threatening the world.”26 If reasoning is supposed to make people feel good, it fails abysmally. Other cases are easy to conjure—the jealous husband who persuades himself his wife is cheating on him, the pessimist who keeps finding reasons why humanity is bound to self-destruct, the hypochondriac who looks for the symptoms of yet another disease.27

  Not only does a feel-good explanation fail to fit the facts, it doesn’t make evolutionary sense, either. It confuses the proximal and the ultimate levels of explanation.28 A proximal explanation aims at pinpointing the psychological or neurological causes of a behavior. For instance, if Michael gets thirsty and drinks some water, the pleasure he derives from drinking could be a proximal explanation for his behavior: he drank the water because he anticipated that it would make him feel good.

  Ultimate explanations, by contrast, answer questions at the evolutionary level. At the ultimate level, feeling good is no more than a means to an end. For evolution, hedonic states—pleasure, pain, happiness, despair—serve the purpose of motivating animals to perform certain actions critical to their survival and reproduction. We feel pleasure while quenching our thirst because drinking is necessary for survival. We experience pain when touching a burning log so that we withdraw our hand and avoid long-term damage. We like spending time with friends because having partners and allies has been crucial to reproductive success in human evolution. For the same reason, we despair if our friends abandon us. An individual who would find drinking painful but would enjoy feeling his hand roast or who would resent the affection of friends and revel in their loathing would not be well equipped to survive and reproduce. So whether or not reasoning helps people feel good, it cannot have evolved to this end.

  Adaptive Lags in Reasoning

  The interactionist approach can account for the various epistemic distortions introduced by reason—overconfidence, polarization, belief perseverance. Chapters 11 and 12 pointed out two major features of the production of reasons: it is biased—people overwhelmingly find reasons that support their previous beliefs—and it is lazy—people do not carefully scrutinize their own reasons. Combined, these two traits spell disaster for the lone reasoner. As she reasons, she finds more and more arguments for her views, most of them judged to be good enough. These reasons increase her confidence and lead her to extreme positions.

  Many psychologists might agree with this diagnosis. However, such an explanation should only be a first step, soon followed by the question: Why on earth would reasoning behave that way? When an artifact fails to produce the desired results, this might be because it is broken, but it might also be because it is operating in abnormal conditions. If your pen doesn’t work upside down, if your car doesn’t start with an empty tank, it is not because they are out of order but because they are not designed to function in such conditions. Biological devices also have normal conditions: the conditions to which they are adapted.29 The normal conditions for human lungs are formed by the earth’s atmosphere around ground level. Our lungs work splendidly in these conditions, but less well or not at all in abnormal conditions—high altitudes, under water, in a tank full of helium, and so forth.

  In our interactionist approach, the normal conditions for the use of reasoning are social, and more specifically dialogic. Outside of this environment, there is no guarantee that reasoning acts for the benefit of the reasoner. It might lead to epistemic distortions and poor decisions. This does not mean reasoning is broken, simply that it has been taken out of its normal conditions. In the same way, when objects take on new colors under the sodium lighting of an underground parking lot, our color perception is not broken; it is simply working in an abnormal environment. The artificial lights that have replaced the lighting we encountered during our evolution—chiefly, the sun—mislead our color perception.

  This explanation—that reasoning now often works in an abnormal environment—is incomplete. If a bomb explodes inside the bomber plane rather than when it hits the intended target, the engineer in charge does not get kudos by pointing out that the explosion was exactly of the predicted force. When the bomb explodes is at least as important as how it explodes. Similarly, when a cognitive mechanism is triggered is at least as important as how it works once triggered.

  The basic trigger of reasoning is a clash of ideas with an interlocutor. This clash prompts us to try to construct arguments to convince the other or at least to defend our own position. This trigger also works in the absence of an actual interlocutor, in anticipation of a possible disagreement. Sometimes this anticipation may be quite concrete: a meeting is already scheduled to try to resolve a disagreement or to debate opposing ideas. At other times, we might just anticipate a chance encounter with, say, a political opponent, and mentally prepare and rehearse arguments we would then be eager to use. There are even times when we replay debates that have already taken place and think, alas too late, of arguments that we should have used.

  Sasha, for instance, is about to ask his mother to let him go to the all-night party at Vanessa’s. He has been rehearsing his arguments: he has been doing very well at school; his homework for the next week is done; the party will be a small affair, nothing wild, nothing his mother should worry about. The more he thinks about it, the more Sasha becomes convinced that his request is perfectly reasonable and that his mother should, of course, say yes.

  Several things can happen then. Sasha might convince his mother that there are no serious objections to his going to the party. Or his mother might convince him that it is not such a good idea after all—she has heard from other parents that the party might be crashed by older kids who would bring beer and perhaps even drugs. Also, he seems to be forgetting that there is an exam next week for which he has not yet prepared. Listening to his mother’s arguments, Sasha might want to argue back. At the end of the back-and-forth, either one will have convinced the other, or at least both will have given reasons to justify their points of view.

  By contrast, if his mother just said no without paying attention to his argument, or if he never mustered the courage to ask, Sasha would probably see the arguments he never gave as compelling; he would see himself as a victim of parental injustice and incomprehension. Reasoning in anticipation of a discussion is fine—as long as the discussion actually takes place.

  What is problematic isn’t solitary reasoning per se, but solitary reasoning that remains solitary. Reasoning, however, is bound to sometimes remain in one’s head, as people cannot fully anticipate when they will be called on to defend their opinions. Just as one can be taken aback by an unanticipated request for justification, one can prepare for a confrontation of points of view that never materializes. The latter case may well be more common because of the difference in costs between the two types of failures. Being caught unprepared to defend an opinion or an action that others might object to is likely to be worse, and hence less common, than rehearsing a defense that in the end will not serve.

  Modern environments distort our ability to anticipate disagreements. This is one of many cases in which the environment changed too quickly for natural selection to catch up. For example, our modern environments make some psychoactive substances, from coffee to cigarettes to alcohol, widely available. Some of these substances, such as cigarettes, are clearly bad for their users’ fitness (in both meanings of the word). Yet we haven’t evolved an innate disposition to avoid these substances in the same way that we have innate dispositions to avoid poisonous foods. Arguably, the explanation is that these substances would have been much rarer during our evolution and that they became common enough too recently for our brains to adapt to the change.

  Have environmental changes thrown our ability to anticipate disagreements off balance in the same way they made our reactions to psychoactive substances dangerous? Life in a modern, affluent society is different in myriad ways from life in the ancestral environment, and some of these differences are bound to affect the way we reason. For instance, before the invention of the printing press and the advent of modern media, people typically became aware that somebody in their own group had opinions different from theirs through direct interaction with that person. Finding out about differences of opinion and trying to resolve them commonly occurred through repeated exchanges of arguments that could be anticipated and mentally rehearsed. Nowadays we are inundated with the opinions of people we will never meet: editorialists, anchormen, bloggers. We are also expected to have an opinion on many different topics—from politics to music to food—and to be able to defend this opinion when challenged, giving us reasons to prepare for a variety of debates that might never occur.

  And this only scratches the surface of the problem. More dramatic changes affect the workings of reason. Big-city dwellers meet more strangers in any single day than their ancestors did in their lifetime. Many of these strangers have different cultural backgrounds. It is easy to see how this novel mix generates possibilities for disagreement, making it considerably more complex to properly anticipate the need for justifications.

  Some cognitive mechanisms have been so fully repurposed by the modern world that they bear only a small resemblance to their ancestral form—witness the transformations brought by literacy to our ability to recognize simple arbitrary shapes.30 While we do not believe that reason has undergone such dramatic transformations, environmental changes have certainly had an effect on when reason is triggered, on how it functions, and even on what goals it achieves. Reason is used now in a variety of ways that differ from its evolved function—from displaying one’s smarts in a formal debate to uncovering the laws of physics. Unfortunately, some of those new uses of reason, such as preparing for debates that never come, turn out to be potentially harmful to the reasoner. As Keynes put it, “It is astonishing what foolish things one can temporarily believe if one thinks too long alone.”31

  14

  A Reason for Everything

  In Chapter 13, when solitary uses of reason led people astray, it was because they started out with a strong intuition—that Dreyfus was guilty, that this was the right answer to the problem, and so on. The myside bias, coupled with lax evaluation criteria, makes us pile up superficial reasons for our initial intuition, whether it is right or wrong. Often enough, however, we don’t start with a strong intuition. On some topics, we have only weak intuitions or no intuitions at all—a common feeling at the supermarket, when faced with an aisle full of detergents or toilet paper. Or sometimes we have strong but conflicting intuitions—economics or biology? Allan or Peter? Staying at home with the kids or going back to work?

  These should be good conditions for the individualist theory to shine. Reason has a perfect opportunity to act as an impartial arbiter. When the reasoner has no clear preconception, the myside bias is held at bay and reason can then guide the reasoner’s choice, presumably for the better. Perhaps it is from such cases that beliefs about the efficiency of reason are born in the minds of philosophers, who make it their duty to examine precisely these cases in which intuitions are weak or conflicting. And if there is not enough conflict, philosophers excel at stirring it up: Are you sure other people exist? Think again! There is a philosophical theory, solipsism, that says other people don’t exist or, at least, that you can never be sure that they do. Situations where intuitions are absent, weak, or conflicting might provide perfect examples of reason working in line with the expectations of the classical theory: reach a stalemate between different intuitions, and only then let reason do its job. Let’s look at what reason does in such cases, starting with a clever experiment conducted by Lars Hall, Petter Johansson, and Thomas Strandberg.

  As you walk along the street, a young man approaches you with a clipboard and asks whether you would be willing to take part in a short survey. For once, you accept. He hands you the clipboard with two pages of statements on political, moral, and social issues such as “If an action might harm the innocent, it is morally reprehensible to perform it.” You must indicate how you feel about each statement on a scale going from “Completely disagree” to “Completely agree.” You fill in the survey and hand the clipboard back. You’re not quite done, though: the young man passes the clipboard back and asks you to explain some of your ratings. You happily do so—after all, you take pride in being an informed, thoughtful citizen with sound opinions.

  What you haven’t realized is that in the few seconds during which he held the clipboard, the young man—who is in fact an experimenter—has, by means of a simple trick, replaced some of the statements on the page with statements having the exactly opposite meaning. For instance, the statement about harming the innocent would now read, “If an action might harm the innocent, it is morally permissible to perform it” (with “permissible” having replaced “reprehensible”). While some statements have been flipped, the answers haven’t been, so that for these statements, the sheet now indicates that you hold the exact opposite of the opinion you asserted one minute earlier. If you had indicated that you strongly agreed with the first statement, the sheet now says that you strongly agree with the second statement, which means the opposite.

  Fewer than half of the participants noticed that something was wrong with the new answers. The majority went on justifying positions contrary to those they had professed a few minutes earlier, especially if their opinions weren’t too strong to start with.1

  Our boundless ability to produce reasons for just about anything we believe (or we think we believe, as shown by Hall and colleagues) has become a staple of social psychology since the pioneering experiments of Richard Nisbett and Tim Wilson in the 1970s that we evoked in Chapter 7. In one experiment, Nisbett and Wilson were standing outside malls pretending to sell stockings.2 Some passersby stopped at their stall, made a choice, and, when asked, happily justified their decision: “This one looks more resistant”; “I prefer the color of that one.” But the psychologists knew all these explanations to be bogus: they had mischievously displayed strictly identical pairs of stockings. That all the stockings were the same did not stop people from expressing preferences, which must have been based, then, on the position of the pairs in the display (many participants showed a right-most bias, for instance).

  In such experiments participants start out with weak intuitions. In the substitution-of-statements experiment, it is mostly those participants who had expressed only mild agreement or disagreement, and who therefore didn’t have strong intuitions on the issue, who failed to detect the manipulation. In the second experiment, the stockings were all the same, so whatever preference was created by their position would have been very weak. Still, reason doesn’t do the job classically assigned to it. It does not objectively assess the situation in order to guide the reasoner toward sounder decisions. Instead, it just finds reasons for whatever intuition happens to be a little bit stronger than the others. Humans are rationalization machines. As Benjamin Franklin put it, “So convenient it is to be a reasonable creature, since it enables one to find or make a reason for everything one has a mind to do.”3

  There are cases, however, where reason has a demonstrable impact on people’s decisions—but not one that fits with the intellectualist approach.

  When Reasoning Makes a Difference

  We find Tim Wilson again, except that this time he is dealing in posters, not stockings, and there is no trick: all the posters are different. The experiment is straightforward. Some participants are asked to rate five posters, period. Others have to rate the same five posters, but also to explain their ratings.4 Being asked to reason affected participants’ choices, such as by making them give higher ratings to humorous posters.

  Many other studies have demonstrated that reason can make a difference. Some experiments require people to justify their decisions;5 others give participants some extra time to reflect on their choices;6 still others pit decisions based on feelings against decisions based on reasoning.7 Each time, people who reason more act differently from those who reason less or not at all. A mere rationalization machine is not supposed to influence decisions. What is happening? Is reason helping people make better decisions?
