The Enigma of Reason: A New Theory of Human Understanding


by Hugo Mercier and Dan Sperber


  Some of the Bakers Are Athletes

  The unrequited love of the psychology of reasoning for logic has had costly consequences. Many eminent psychologists chose, for instance, to investigate how people perform with Aristotelian categorical syllogisms. Why? Well, these syllogisms had been at the center of classical logic for more than two thousand years. Surely, then, they had to play a major role in psychology.

  When all splittable hairs have been split, there turn out to be 256 possible forms of categorical syllogisms that could each be experimentally tested (twice as many when notational variants are included). To this end, many researchers invested years of work. Pity, too, the thousands of participants in these experiments who were given long series of dull and repetitive problems to solve, one after the other, in the style of the following:

  Some of the bakers are athletes.

  None of the bakers is a canoeist.

  What, if anything, follows?

  Only 24 of these 256 syllogistic forms are logically valid. It is not clear which of these valid and invalid syllogisms ever occur in actual human reasoning, or how often they do.
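
  For readers who wonder where these numbers come from, here is a minimal sketch of our own (in Python, chosen purely for illustration; it is not drawn from the experimental literature). A categorical statement only cares about which regions of the three-circle Venn diagram over the terms S, M, and P are occupied, so checking every pattern of occupied regions settles validity exactly. Under the traditional assumption that every category is non-empty, the count comes out at 24; drop that assumption and it falls to 15.

```python
from itertools import product

# A "model" is just a record of which regions of the three-circle Venn
# diagram (over the terms S, M, P) contain at least one individual.
REGIONS = list(product([False, True], repeat=3))  # (in_S, in_M, in_P)
S, M, P = 0, 1, 2

def holds(mood, subj, pred, nonempty):
    """Evaluate one categorical statement over a model (a list of occupied regions)."""
    subj_regions = [r for r in nonempty if r[subj]]
    if mood == "A":   # All subj are pred
        return all(r[pred] for r in subj_regions)
    if mood == "E":   # No subj is pred
        return not any(r[pred] for r in subj_regions)
    if mood == "I":   # Some subj is pred
        return any(r[pred] for r in subj_regions)
    if mood == "O":   # Some subj is not pred
        return any(not r[pred] for r in subj_regions)

# The four classical figures fix how the middle term M sits in the premises;
# the conclusion always relates S (subject) to P (predicate).
FIGURES = {1: ((M, P), (S, M)), 2: ((P, M), (S, M)),
           3: ((M, P), (M, S)), 4: ((P, M), (M, S))}

def valid(major, minor, conclusion, figure, existential_import=True):
    """A form is valid iff no model makes both premises true and the conclusion false."""
    (maj_s, maj_p), (min_s, min_p) = FIGURES[figure]
    for occupied in product([False, True], repeat=len(REGIONS)):
        nonempty = [r for r, occ in zip(REGIONS, occupied) if occ]
        if existential_import and not all(
                any(r[t] for r in nonempty) for t in (S, M, P)):
            continue  # traditional reading: every term names a non-empty class
        if (holds(major, maj_s, maj_p, nonempty)
                and holds(minor, min_s, min_p, nonempty)
                and not holds(conclusion, S, P, nonempty)):
            return False  # counterexample found
    return True

forms = list(product("AEIO", "AEIO", "AEIO", FIGURES))
print(len(forms))                                    # 256 possible forms
print(sum(valid(*f) for f in forms))                 # 24 with existential import
print(sum(valid(*f, existential_import=False) for f in forms))  # 15 without it
```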

  Of course, in science the study of marginal or practically unimportant phenomena can be of major scientific relevance—think of the common fruit fly, also known as Drosophila melanogaster, and its place in modern biology—but this is hardly a case in point. In a review article published in 2012, after half a century of intensive studies, Sangeet Khemlani and Philip Johnson-Laird identified twelve competing theories of syllogistic reasoning, none of which, they say, “provides an adequate account.” “The existence of 12 theories of any scientific domain,” they add, “is a small disaster.”6 This indictment is made all the more powerful and even poignant by the fact that one of the twelve theories, and arguably the most influential, is Johnson-Laird’s own mental-model account of syllogisms.

  Proponents of different approaches to reasoning (mental logic, mental models, more recently Bayesian inference, and so on) have used the study of categorical syllogisms as evidence that their own approach is the best—evidence, however, that only the already-converted have found convincing.

  There is another group of scholars, apart from psychologists, committed to the idea that classical syllogisms are still highly relevant: theologians, who have been teaching and using “syllogistics” since the Middle Ages. To give just one example, here is how Father Wojciech Giertych, the household theologian of Pope Benedict XVI, explained why women are not suited for priesthood: “Men are more likely to think of God in terms of philosophical definitions and logical syllogisms.”7 Not convinced? The relevance of the whole battery of Aristotelian syllogisms to psychology is, we are tempted to quip, equally mysterious.

  “Never Do an Experiment If You Know Why You’re Doing It!”

  Few psychologists of reasoning, if any, had a greater impact on the field than Peter Wason. “Wason’s way of doing research,” Johnson-Laird told us,8 “was pretty eccentric, e.g., never do an experiment if you know why you’re doing it!”

  In 1966, Wason introduced a new experimental design, the four-card selection task, which became—and remains to this day—a main tool and focus of research in the discipline. It wouldn’t be completely wrong—just exaggerated and unfair to the few researchers who have resisted its lure—to say that the psychology of reasoning has to a large extent become the psychology of the Wason task. If Wason invented this experiment without knowing what purpose it would serve, then, it must be reckoned, this turned out to be an amazingly successful shot in the dark.

  Figure 5. The four cards of the Wason selection task.

  Here is how Wason’s experiment goes. “In front of you are four cards,” the experimenter tells you. “Each card has a letter on one side and a number on the other. Two cards (with an E and a K) have the letter side up; the two others (with a 2 and a 7) have the number side up” (see Figure 5).

  “Your task is to answer the following question: Which of these four cards must be turned over to find out whether the following rule is true or false of these four cards: ‘If there is an E on one side of a card, then there is a 2 on the other side’?”

  Which cards would you select?

  The structure of the experiment derives from a standard type of inference in classical logic, conditional syllogisms, which we encountered in Chapter 1. Figure 3 in Chapter 1 laid out the four schemas of conditional syllogism; Figure 6 in this chapter shows how the selection task is built on these four schemas.

  The “rule” of the selection task is the major premise of a conditional syllogism of the form “if P, then Q” (in our example, if there is an E on one side of a card, then there is a 2 on the other side). Each card provides one of the four possible minor premises (in our example, the E card represents the minor premise P, there is an E; the K card represents not-P, there isn’t an E; the 2 card represents Q, there is a 2; and the 7 card represents not-Q, there isn’t a 2). As you may remember, only two of these minor premises, P and not-Q, allow valid deductions (called modus ponens and modus tollens, respectively); trying to make a similar deduction from the two other possible minor premises, not-P and Q, yields the fallacies of “denial of the antecedent” and of “affirmation of the consequent.”

  The correct answer, then, is to select just the E and the 7 cards. The rule entails a prediction about what should be on the other side of these two cards, a prediction that could be tested by turning these two cards over. Should the other side of either of these two cards fail to be as predicted, the rule would be falsified. The rule, on the other hand, doesn’t entail any prediction as to what should be on the hidden side of the K and the 2 cards: they are therefore irrelevant. In particular, contrary to a common intuition, turning over the 2 card is useless. Suppose there isn’t an E on the other side. So what? All the rule says is that an E must be paired with a 2; it does not say that only an E can be paired with a 2.
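
  The same point can be made mechanically. Here is a small sketch of our own (again in Python, purely for illustration; it is no part of Wason's procedure) that asks, for each visible face, whether any possible hidden face could falsify the rule. Only cards that could reveal a counterexample are worth turning over, and those are exactly the E and the 7.

```python
# Every card has a letter on one side and a number on the other; for the
# logic of the task, any pool of possible hidden faces will do.
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
NUMBERS = "0123456789"

def rule(letter, number):
    """The rule under test: if there is an E on one side, then there is a 2 on the other."""
    return letter != "E" or number == "2"

def worth_turning(visible_face):
    """A card is worth turning over iff some possible hidden face would falsify the rule."""
    if visible_face in LETTERS:            # the hidden side must be a number
        return any(not rule(visible_face, n) for n in NUMBERS)
    return any(not rule(l, visible_face) for l in LETTERS)  # the hidden side is a letter

print([card for card in ["E", "K", "2", "7"] if worth_turning(card)])  # ['E', '7']
```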

  Figure 6. The selection task and the four schemas of conditional inference.

  You selected the E and the 7 cards? Congratulations! You made another selection? Don’t feel too bad. Only about 10 percent of participants make the right choice anyhow.

  Once psychologists start experimenting with the Wason task, it is hard for them to stop. Many have become addicts. Why? Well, the design of the selection task lends itself to endless variations. You can alter the instructions, modify the content of cards, or invent a variety of contexts. You can then observe what happens and see in particular whether more participants give the correct answer than with Wason’s original version of the task. If this happens, write an article. If it doesn’t, try again. Moreover, not just psychologists but also philosophers, students, and sometimes your roommate or your cousin easily come up with conjectures to explain why participants respond the way they do and with suggestions for new variations. The selection task has proved an everlasting topic of conversation where many people, pros and amateurs, can have a go.

  Ideally, of course, the selection task should owe its success to being, like the microscope for biology (to which we have heard it compared), a superior tool that provides crucial evidence and helps answer fundamental questions. Has any theoretical breakthrough been made thanks to the selection task? No. Whenever experimental evidence has been claimed to provide crucial support for a genuine theoretical claim, alternative interpretations have been proposed. As a result, much of the work done with the task has had as its goal to explain the task itself,9 with the psychology of human reasoning serving just as a dull backdrop to colorful debates about experiments.

  Much of the early research aimed at improving people’s poor performance with the selection task. Would training help? Not much. Feedback? Hardly. Changing the wording of the rule? Try again. Monetary rewards for good performance? Forget it. Then could variations be introduced in the nonlogical content of the selection task (replacing numbers and letters with more interesting stuff) that would cause people to perform better? Yes, sometimes, but explanations proved elusive.

  So a lot of noise has been produced, but what about light? A few findings relevant not just to understanding the task but to understanding the mind were generally stumbled upon rather than first predicted and then confirmed. What the story of the selection task mainly illustrates is how good scientists can go on and on exploring one blind alley after another.

  Ironically, the most important finding ever to come out of fifty years of work with the task is that people don’t even use reasoning to resolve a task that was meant to reveal how they reason.

  In the early 1970s, Jonathan Evans made a puzzling discovery by testing a simple variation on the standard selection task.10 Take the usual problem with the rule, “If there is an E on one side of a card, then there is a 2 on the other side” and the four cards E, K, 2, and 7. As we saw, only about 10 percent select the E and the 7 cards, even though this is the logically correct solution. Now just add a “not” in the rule, like this: “If there is an E on one side of a card, then there is not a 2 on the other side.” Show the same cards. Ask the same question. Now, a majority of participants give the right answer.

  Don’t jump to the startling conclusion that a negation in the rule turns participants into good logical reasoners. Actually, in both conditions (with and without the negation in the rule), most participants make exactly the same selection, that of the E and the 2 cards, as if the presence of the negation made no difference whatsoever. It so happens that this selection is incorrect in the standard case but correct with the negated rule. (How so? Well, the affirmative rule makes no prediction on the letter to be found on the hidden side of the 2 card, but the negative version of the rule does: an E on the hidden side of the 2 card would falsify the negated rule. So with the negated rule, the 2 card should be selected.)
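
  Rerunning our little falsification check from the previous section on Evans's negated rule (once more, our own illustration rather than anything from the original studies) makes the asymmetry plain: with the negation added, it is the E and the 2 cards that can reveal a counterexample, so the habitual E-and-2 selection happens to be correct.

```python
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
NUMBERS = "0123456789"

def negated_rule(letter, number):
    """Evans's variant: if there is an E on one side, then there is NOT a 2 on the other."""
    return letter != "E" or number != "2"

def worth_turning(visible_face):
    """A card is worth turning over iff some possible hidden face would falsify the rule."""
    if visible_face in LETTERS:            # the hidden side must be a number
        return any(not negated_rule(visible_face, n) for n in NUMBERS)
    return any(not negated_rule(l, visible_face) for l in LETTERS)

print([card for card in ["E", "K", "2", "7"] if worth_turning(card)])  # ['E', '2']
```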

  This shows, Evans argued, that people’s answers to the Wason task are based not on logical reasoning but on intuitions of relevance: they turn over the cards that seem intuitively relevant. And why do the E and the 2 seem intuitively relevant? Because, explains Evans, they are mentioned in the rule, whereas other letters and numbers are not, and that’s almost all there is to it.11

  The long and convoluted story of the selection task helps explain how and why the psychology of human reasoning ended up pivoting away from its early obsession with classical logic to new challenges.

  Dual Process?

  Look at work on the selection task and look more generally at experimental psychology of reasoning, and you will see psychologists at pains to be as thorough as possible. This makes it even more puzzling and disheartening to see how modest the progress, how uninspiring the overall state of the art—and this in a period when the study of cognition has undergone extraordinary developments. In many domains—vision, infant psychology, and social cognition, to name but three—there have been major discoveries, novel experimental methods, and clear theoretical advances at a more and more rapid pace. Every month, journals publish new and exciting results. There are intense debates, but with a clear common sense of purpose and a strong feeling of shared achievement. There is nothing of the sort in the psychology of reasoning. True, there are schools of thought that each claim major breakthroughs, but, for good or bad reasons, none of these claims has been widely accepted.

  Still, if a survey were made and psychologists of reasoning were asked to name the most important recent theoretical development in the field, a majority—with a minority strongly dissenting—would point to “dual process theory”: the idea that there are two quite distinct basic types of processes involved in inference and more generally in human psychology.

  A basic insight of dual process theory is that much of what people do in order to resolve a reasoning task isn’t reasoning at all but some other kind of process, faster than reasoning, more automatic, less conscious, and less rule-governed. In the past twenty years, different versions of the approach have been developed. Talk of “system 1” and “system 2” is becoming almost as common and, we fear, often as vacuous as talk of “right brain” and “left brain” has been for a while.

  Actually, an early sketch of dual process theory had been spelled out by Jonathan Evans and Peter Wason in a couple of articles published in 1975 and 1976 and quickly forgotten. As we saw, just by adding a “not” in the rule of the selection task, Evans had demonstrated that people make their selection without actually reasoning. They merely select the cards that they intuitively see as relevant (which happens to yield an incorrect response with the original rule and the correct response with the negated rule). Selection, then, is based on a type 1 intuitive process.

  Evans and Wason redid the experiment, this time asking participants to explain their selection. And then participants did reason, no question about it. They reasoned not to resolve the problem—that they had done intuitively—but to justify their intuitive solution. When their solution happened to be logically correct (which typically occurred with the negated rule), they provided a sensible logical justification. When their solution happened to be incorrect, people gave, with equal confidence, a justification that made no logical sense. What conscious reasoning—a type 2 process—seemed to do was just provide a “rationalization” for a choice that had been made prior to actual reasoning.

  There were three notable ideas in this first sketch of the dual process approach. The first was a revival of an old contrast, stressed by—among many others—the eighteenth-century Scottish philosopher David Hume and the nineteenth-century American philosopher William James, between two modes of inference, one occurring spontaneously and effortlessly and the other—reasoning proper—being on the contrary deliberate and effortful. A second, more novel idea was that people may and often do approach the same inferential task in the two modes. In the selection task, for instance, most participants produce both a spontaneous selection of cards and a reasoned explanation of their selection. The third idea was the most provocative: what type 2 deliberative processes typically do is just rationalize a conclusion that had been arrived at through intuitive type 1 processes. This idea so demeans the role of reasoning proper that Evans and Wason’s dual process approach was met with reticence or incredulity.12

  This early dual process approach to reasoning was not often mentioned, let alone discussed, in the next twenty years. When it did return to the fore, gone were the youthful excesses; written off was the idea that reasoning just rationalizes conclusions that had been arrived at by other means. And so, in 1996, Evans and the philosopher David Over published a book, Rationality and Reasoning,13 in which they advocated a “dual process theory of thinking” but with type 1 processes seen as rational after all and type 2 processes “upgraded from a purely rationalizing role to form the basis of the logical component of performance.” Moreover, the original assumption that the two types of processes occur in a rigid sequence—first the spontaneous decision, and then the deliberate rationalization—was definitively given up in favor of an alternative that had been suggested in passing in 1976, namely, that the two types of processes interact. Whereas the earlier Evans-and-Wason version of the dual process approach undermined humans’ claims to rationality, the later Evans-and-Over version vindicates and might even be said to expand these claims.

  Apparently, the time was ripe. That same year, 1996, the American psychologist Steven Sloman published “The Empirical Case for Two Systems of Reasoning” where, drawing on his expertise in artificial intelligence, he proposed a somewhat different dual process (or, as he called it, “dual system”) approach.14 In 1999, the Canadian psychologist Keith Stanovich, in his book Who Is Rational?, drew on his expertise on individual differences in reasoning to propose another dual process approach.15 In his Nobel Prize acceptance speech in 2002, Daniel Kahneman endorsed his own version of an approach that had been in many respects anticipated in his earlier work with Amos Tversky.16 Many others have contributed to this work, some with their own version of the approach, others with criticisms.

  A typical device found in most accounts of a dual process approach is a table layout of contrasting features. Here are examples of contrasts typically found in such tables:

  Type 1 processes vs. Type 2 processes

  Fast vs. Slow
  Effortless vs. Effortful
  Parallel vs. Serial
  Unconscious vs. Conscious
  Automatic vs. Controlled
  Associative vs. Rule-based
  Contextualized vs. Decontextualized
  Heuristic vs. Analytic
  Intuitive vs. Reflective
  Implicit vs. Explicit
  Nonverbal vs. Linked to language
  Independent of general intelligence vs. Linked to general intelligence
  Independent of working memory vs. Involving working memory
  Shared with nonhuman animals vs. Specifically human

  The gist of these contrasts is clear enough: on the one side, features that are commonly associated with instincts in animals and intuition in humans; on the other side, features that are associated with higher-order conscious mental activity, in other terms with “thinking” as the term is generally understood. At first blush, such a distinction looks highly relevant to understanding human psychology in general and inference in particular: yes, we humans are capable both of spontaneous intuition and of deliberate reasoning. So, dual process approaches seem to be a welcome, important, and, if anything, long overdue development. How could one object to such an approach?

  Well, one might object to the vagueness of the proposal. Aren’t these features, so nicely partitioned in the two-column table, somewhat intermingled in reality? For instance, we all perform simple arithmetic inferences automatically (a type 1 feature), but they are rule-based (a type 2 feature). So, are these simple arithmetic inferences a type 1 or a type 2 process? Moreover, many of the contrasts in such tables—between conscious and unconscious processing, for instance—may involve a difference of degree rather than a dichotomy of kinds. These and other similar examples undermine the idea of a clear dichotomy between two types of processes.

 
