The Enigma of Reason: A New Theory of Human Understanding

by Hugo Mercier and Dan Sperber


  To evaluate people’s ability to construct arguments in a genuine argumentative setting, Lauren Resnick and her colleagues asked small groups of students to discuss a controversial topic. Participants were able to exchange several arguments every minute, bringing new ideas to the table and criticizing each other’s suggestions and arguments. Thanks to this back-and-forth, the participants performed well and the researchers were “impressed by the coherence of the reasoning displayed. Participants … appear to build complex argument and attack structure. People appear to be capable of recognizing these structures and of effectively attacking their individual components as well as the argument as a whole.”10 Deanna Kuhn and her colleagues reached a similar conclusion in a more quantitative study:11 they found that the students were able to produce much better arguments—about, say, the death penalty—after arguing about that topic with their peers. They clarified the links between premises and conclusions, added evidence to support their opinions, and relied on a wider range of argument types.

  At the end of Chapter 11, we saw how the myside bias could turn into an efficient way of dividing cognitive labor, with each individual finding arguments for her side and evaluating arguments for the other side. The process we have just described is another elegant division of cognitive labor. Instead of engaging in a costly and potentially fruitless search for a knockdown argument, reasoners rely on the interlocutors’ feedback, tailoring their arguments to the specific objections raised.

  Fallacy? What Fallacy?

  People exercise low quality control on their own reasons and are easily satisfied with relatively weak justifications and arguments. From the interactionist point of view, this is readily explained by the fact that reason evolved to work in an interactive setting. As predicted, people end up formulating better, more pointed arguments in the back-and-forth of a dialogue than when reasoning on their own.

  What about quality control on other people’s reasons, though? Here the interactionist approach makes very different predictions. If one can afford to be lax when evaluating one’s own reasons, one ought to be more demanding when evaluating others’ reasons. Otherwise we would accept the silliest excuses as good justifications and the most blatant fallacies as good arguments. We would be all too easily manipulated. This prediction goes against a common idea that people are easily gulled by sophistry. This common idea, however, is wrong, in part because it relies on misguided criteria for evaluating arguments. When more sensible criteria are used, experiments reveal that people tend to accept reasons when they should, and only when they should.

  A common criterion used to distinguish good from bad arguments is whether they can be categorized as an informal fallacy, from the ad populum to the ad hominem. Lists of such fallacies were already produced in classical antiquity and today can be easily found online. The issue is that for almost every type of fallacy in such lists, there are counterexamples in the form of arguments that meet the criteria to be declared fallacious but that in real life are quite acceptable or even good arguments, arguments that might convince a rational audience.12

  Here is a tu quoque (“you too”) fallacy:

  Yoshi: You shouldn’t drink since you’ll be driving!

  Makiko: Weren’t you yourself caught driving under the influence a month ago?

  Makiko’s objection is supposed to be a fallacious argument against the advice given by Yoshi. After all, the fact that he does not follow his own advice does not make it wrong. On the other hand, if one suspects the speaker has a good reason not to follow his own advice, then the tu quoque argument would be quite reasonable:

  Yoshi: You shouldn’t eat these chocolates that Aunt Hélène brought us; they are not very good!

  Makiko: Didn’t you almost finish the box?

  In this case, Makiko’s objection does cast a reasonable doubt on the reliability of Yoshi’s advice.

  Here is an ad ignorantiam fallacy (arguing that a claim is true because it is not known to be false):

  The policeman: I am convinced that Ishii is a spy. I could find no evidence that he is not.

  The policeman’s argument is supposed to be fallacious because not knowing that a proposition is false is generally not a good argument that it is true. On the other hand, there are cases where such an argument is indeed quite good:

  The policeman: I am convinced that Ishii is a law-abiding citizen. I could find no evidence that he is not.

  We could multiply the examples, but our point would each time be the same: tu quoque is fallacious except when it is not; ad ignorantiam is fallacious except when it is not; in fact, most if not all fallacies on the list are fallacious except when they are not. This is often implicitly acknowledged when the fallacies are given a more careful definition. The tu quoque fallacy is an inappropriate use of the fact that the interlocutor does not abide by her own judgment, the ad ignorantiam fallacy is an inappropriate use of the fact that the claim examined is not known to be false, and so on.

  This way of defining fallacies gives us license to invent indefinitely many new types: the Mount Everest fallacy (inappropriate comparison to Mount Everest in an argument), the chicken soup fallacy (inappropriate use of facts about chicken soup in an argument), and so on. After all, the political philosopher Leo Strauss invented the reductio ad Hitlerum fallacy, which consists in arguing against an opinion by inappropriately comparing it to a view that Adolf Hitler might have held. Fallacious references to Hitler are much more frequent than fallacious references to Mount Everest, so why not enjoy the fun of labeling such references with a special name?

  The standard view of informal fallacies has been attacked by psychologists Ulrike Hahn, Mike Oaksford, and their colleagues.13 They have detailed the variables that should make some arguments more convincing than others. For instance, in the appeal to ignorance, one of the main variables is: How likely are we to find positive evidence if we look for it? The spy argument is weak because it is difficult, even for a policeman, to find evidence that someone is not a spy—if spies were easy to recognize, they would not remain spies for long. By contrast, if a policeman looks for evidence that an individual is a delinquent when such is indeed the case, he is quite likely to find the evidence. If he does not, this is in fact evidence that the individual is not a delinquent. Moreover, spies are, fortunately, much rarer than good citizens, further undermining the spy argument.
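
  To make this analysis concrete, here is a minimal sketch in Python of how a Bayesian treatment of the appeal to ignorance can be run. It is only an illustration in the spirit of Hahn and Oaksford’s approach: the function and every number in it are assumptions invented for the example, not their actual model or data.

    # Minimal sketch, in the spirit of Hahn and Oaksford's Bayesian
    # analysis of the appeal to ignorance. All priors and detection
    # probabilities are invented for illustration, not taken from
    # their model or experiments.

    def p_true_given_no_evidence(prior, p_find_if_true, p_find_if_false=0.0):
        """Posterior probability that a claim is true after a search for
        supporting evidence has come up empty (plain Bayes' rule).
        Assumes by default that the search never wrongly 'finds'
        evidence when the claim is false."""
        p_none_if_true = 1.0 - p_find_if_true
        p_none_if_false = 1.0 - p_find_if_false
        numerator = p_none_if_true * prior
        return numerator / (numerator + p_none_if_false * (1.0 - prior))

    # Claim: "Ishii is a delinquent." Evidence of delinquency is likely
    # to turn up when it exists, so an empty search sharply lowers the
    # posterior: here, absence of evidence is good evidence of absence.
    print(p_true_given_no_evidence(prior=0.10, p_find_if_true=0.90))  # ~0.011

    # Claim: "Ishii is a spy." Spies hide, so evidence rarely turns up
    # even when the claim is true, and spies are rare to begin with; the
    # empty search barely moves the posterior and supports no conclusion.
    print(p_true_given_no_evidence(prior=0.01, p_find_if_true=0.05))  # ~0.0095

  The sketch simply makes the point quantitative: the very same form of argument is strong when the relevant evidence would be easy to find and weak when it would not.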

  Hahn and Oaksford’s case is not only theoretical. They have tested their hypotheses by manipulating relevant variables and asking people to evaluate the resulting arguments. What they found, with a variety of examples, is that people rate arguments in a rational way and don’t easily fall prey to genuinely fallacious arguments.

  For instance, here’s one of the arguments—an ad ignorantiam—that they asked participants to rate:

  This drug is likely to have no side effects because five meticulously controlled, large-scale clinical trials have failed to find any side effect.

  When given this argument, people rate it as being quite strong. Participants also react appropriately when the argument changes. For instance, they are less convinced if only two trials failed to reveal any side effect. Similar results were obtained for different types of arguments. On the whole, the evidence shows that when people are presented with truly fallacious arguments, they are reasonably good at rejecting them.14

  Evaluating One’s Own Reasons as if They Were Someone Else’s

  The experiments we reviewed suggest that there is an asymmetry between how people produce reasons—they are relatively lax about quality control—and how they evaluate others’ reasons—they are much more demanding. With Emmanuel Trouche, Petter Johansson, and Lars Hall, Hugo conducted a tricky experiment that aimed at making this asymmetry as plain as possible. It involved getting people to evaluate their own arguments as if they were someone else’s.15

  In the first phase of the experiment, participants tackled five simple reasoning problems regarding the products sold in a fruit and vegetable shop. For instance, they might be told that in the shop, “none of the apples are organic.” From this, they had to draw a conclusion as quickly and intuitively as possible, choosing among several options such as “Some fruits are not organic” and “We cannot tell anything for sure about whether fruits are organic in this shop.” One participant whom we will call Rawan, for instance, selected as the correct conclusion, “Some fruits are not organic.”

  In the second phase, participants were asked to give reasons for their intuitive answers to each of the five problems they had answered in the first phase of the experiment. As they did so, they could, if they wanted, modify their initial answer. Given what we know about the production of reasons, we shouldn’t expect much to happen at this juncture. Most participants should produce reasons that support their intuition without carefully checking how good their reasons are and without revising their initial selection. Indeed, only 14 percent of participants changed their minds, and the change was as likely to be for the better as for the worse. Rawan was among those who didn’t change their minds. For her answer to the organic fruit problem, she offered the following justification: “Because none of the apples are organic, and an apple is one type of fruit, we can say that some of the fruits in the store are not organic.”

  In the third phase of the experiment, participants were given the same five problems one by one and were reminded of the answers they had given. They were then told about another participant who, on an earlier day, had answered differently, and they were given this participant’s answer and argument. On the basis of this argument, they could change their mind and accept the other participant’s answer, or they could stick to their original answer.

  With one of the five problems, we played a trick on the participants and told them that their answer had been different from what it had actually been. We told Rawan, for example, that she had answered, “We cannot tell anything for sure about whether fruits are organic in this shop.” Moreover, we told her that someone else had selected the conclusion, “Some fruits are not organic” and given as justification “Because none of the apples are organic, and an apple is one type of fruit, we can say that some of the fruits in the store are not organic” (which had been Rawan’s own selection and argument).

  We hoped to have the perfect setup to test the asymmetry between the production and the evaluation of reasons: participants were led to evaluate an argument they had given a few minutes earlier as if it was someone else’s. And it worked. About half of the participants, Rawan included, did not notice that they had been tricked into thinking their own reason was somebody else’s.

  Would participants we had successfully misled at least agree with the argument they had themselves produced a moment before? Well, no. Even though they had deemed the argument good enough to be produced, they became much more critical of it when they thought it was someone else’s, and more than half of the time, they found it wanting. Reassuringly, there was a tendency for participants to be more likely to reject their own bad reasons than their own good reasons. Rawan, who had initially given a good reason, found the same reason convincing when she thought someone else had given it.

  Accepting Good Arguments

  So far, we have stressed one important feature of reason evaluation: it should be demanding enough to reject poor reasons. But it is just as important that it should accept good reasons. Reasoning, we argued in Chapter 10, serves a function both for communicators and for their audiences. In a situation where a communicator wants to make a claim that the audience is unlikely to accept just on her authority, reasoning generates arguments that the audience can evaluate and accept, and hence come to accept the claim itself. For the audience, reasoning is a tool of epistemic vigilance. It serves to evaluate arguments provided by a communicator so as to reject claims that are poorly supported and to accept claims that are well supported. Indeed, the whole point of epistemic vigilance is not just to reject dubious information but also to accept good information. For this we must be able to change our minds when presented with good enough reasons to do so.

  To test this prediction, with Emmanuel Trouche and Jing Shao we conducted a series of experiments using the following problem, which we can call the Paul and Linda problem:16

  Paul is looking at Linda and Linda is looking at John. Paul is married but John is not. Is a person who is married looking at a person who is not married?

  The three possible answers are “Yes,” “No,” and “Cannot be determined.” Think about it for a minute—it’s a fun one.

  Most people answer “Cannot be determined,” thinking that knowing whether Linda is married or not is necessary to answer the question for sure. But consider this argument:

  Linda is either married or not married. If she is married, then she is looking at John, who is not married, so the answer is “Yes.” If she is not married, then Paul, who is married, is looking at her, so the answer is “Yes” again. So the answer is always “Yes.”
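
  Readers who want to see this case analysis verified mechanically can run a brute-force check. The following minimal Python sketch is just an encoding of the puzzle as stated above; it played no role in the study itself.

    # Brute-force check of the Paul and Linda problem: consider both
    # possible marital statuses for Linda and test whether, in each
    # case, some married person is looking at some unmarried person.
    for linda_married in (True, False):
        married = {"Paul": True, "Linda": linda_married, "John": False}
        looking_at = [("Paul", "Linda"), ("Linda", "John")]
        yes = any(married[a] and not married[b] for a, b in looking_at)
        print(f"Linda married = {linda_married}: {'Yes' if yes else 'No'}")

    # Both cases print "Yes": the answer does not depend on Linda's status.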

  If you are anything like our participants (Americans and Chinese recruited online), then you are likely to accept the argument. When we gave the participants the argument, more than half changed their minds immediately. By contrast, if you had figured out the problem on your own and we had told you, “The answer is ‘Cannot be determined’ because we don’t know whether Linda is married or not,” then you would never have changed your mind. The way people evaluate these arguments is remarkably robust.

  When we gave our participants the argument for the correct answer, we didn’t tell them it was our argument (as experimenters). We told them it had been given by another participant in a previous experiment. So they had no reason to trust her more than they trusted themselves. To make things worse, we told some participants that the individual who gave them the argument was really, really bad at this kind of task. We told others that the individual who gave the argument would make some money if the participants got the problem wrong. So they expected an argument from someone who they thought was either stupid or out to trick them. And yet when they saw the argument, most accepted it. Indeed, although the participants had said that they did not trust the individual giving them the argument one bit, this lack of trust barely affected their likelihood of accepting the argument.

  We also asked some participants to think hard about the problem and to justify their answers. A few of them did get it right. But most got it wrong, and because they had thought hard about it, they were really sure that their wrong answer was right. Most of them said that they were “extremely confident” or even “as confident as the things I’m most confident about.” But that didn’t make them less likely to change their mind when confronted with the argument above than participants who had had their doubts. Even though they could have sworn that the conclusion was wrong, when they read the argument, they were swayed all the same.

  The Two Faces of Reason

  In this chapter and in Chapter 11, we have looked at how reasoning produces arguments and at how it evaluates the arguments of other people. The results can be summarized in Table 2.

  The “production of reasons” row is really bad for the intellectualist approach. When people reason on their own, they mostly produce reasons that support their decisions or their preconceived ideas, and they don’t bother to make sure that the reasons are strong. As we’ll see in Chapter 13, this is a recipe for disaster: not only is the solitary use of reason unlikely to correct mistaken intuitions, but it might even make things worse.

  The fact that people are good at evaluating others’ reasons is the nail in the coffin of the intellectualist approach. It means that people have the ability to reason objectively, rejecting weak arguments and accepting strong ones, but that they do not use these skills on the reasons they produce. The apparent weaknesses of reason production are not cognitive failures; they are cognitive features.

  This picture of reason fits with the predictions of the interactionist approach. People produce reasons as predicted by the theory: they find reasons for their side—a good thing if their goal is to change others’ minds—and they start out not with the strongest reasons but with reasons that are easier to find, thus making the best of the feedback provided by dialogic settings. People also evaluate others’ reasons as expected, rejecting weak reasons but accepting strong enough reasons, even if that means revising strong beliefs or paying attention to sources that don’t inspire trust.

  Table 2 The two faces of reason

  Production of reasons
    Bias: biased; people mostly produce reasons for their side.
    Quality control: lazy; people are not very exigent toward their own reasons.

  Evaluation of others’ reasons
    Bias: unbiased; people accept even challenging reasons, if those reasons are strong enough.
    Quality control: demanding; people are convinced only by good enough reasons.

  If we take an interactionist perspective, the traits of argument production typically seen as flaws become elegant ways to divide cognitive labor. The most difficult task, finding good reasons, is made easier by the myside bias and by sensible laziness. The myside bias makes reasoners focus on just one side of the issue rather than having to figure out on their own how to adopt everyone’s perspective. Laziness lets reason stop looking for better reasons when it has found an acceptable one. The interlocutor, if not convinced, will look for a counterargument, helping the speaker produce more pointed reasons. By using bias and laziness to its advantage, the exchange of reasons offers an elegant, cost-effective way to solve a disagreement.

  13

  The Dark Side of Reason

  Arthur Conan Doyle’s novel The Hound of the Baskervilles begins when a Dr. Mortimer tries to hire Sherlock Holmes’s services:

  “I came to you, Mr. Holmes, because … I am suddenly confronted with a most serious and extraordinary problem. Recognizing, as I do, that you are the second highest expert in Europe ———”

 
