Figure 2. How Eratosthenes computed the circumference of the earth.
As a young man, Kaczynski was unquestionably a brilliant reasoner. He had entered Harvard in 1958, at age sixteen. For his doctoral dissertation at the University of Michigan, he solved a mathematical problem that had eluded his professors for years, prompting the University of California, Berkeley, to hire him. Two years later, however, he abandoned mathematics and academe to live in a shack in Montana, where he became an avid reader of social science and political writing. Both his readings and his writings focused on what he saw as the destructive character of modern technology. Viewing technological progress as leading to disasters for the environment and for human dignity is not uncommon in Western thought, but Kaczynski went further: for him, only a violent revolution causing the collapse of modern civilization could prevent these even greater disasters.
To help trigger this revolution, Kaczynski began in 1978 to send bombs to universities, businesses, and individuals, killing three people and injuring many others. He wrote a long manifesto and managed to have it published in the New York Times and in the Washington Post in 1995 by promising that he would then “desist from terrorism.” The Unabomber, as the FBI had named him, was finally arrested in 1996 and now, as we write, serves a life sentence without the possibility of parole in a Colorado prison, where he goes on reading and writing.
What had happened to the brilliant young mathematician? Had Kaczynski’s reason failed him, turning him into the “raving lunatic” described by the press? Kaczynski’s family arranged for his defense lawyers to try to make him plead insanity. The defense of reason would no doubt concur: unreason had to be the culprit. It is unlikely, however, that Kaczynski suffered at the time of his arrest from any major mental disorder. He was still a smart, highly articulate, extremely well-read man. Defective reasoning, the prosecution of reason would insist, cannot be blamed for his actions. To see this, all you need do is read the Unabomber’s manifesto:
The Industrial Revolution and its consequences have been a disaster for the human race …. They have destabilized society, have made life unfulfilling, have subjected human beings to indignities, … and have inflicted severe damage on the natural world. The continued development of technology will worsen the situation …. The industrial-technological system may survive or it may break down …. If the system survives, the consequences will be inevitable: There is no way of reforming or modifying the system so as to prevent it from depriving people of dignity and autonomy. If the system breaks down the consequences will still be very painful. But the bigger the system grows the more disastrous the results of its breakdown will be, so if it is to break down it had best break down sooner rather than later. We therefore advocate a revolution against the industrial system.10
This, surely, is a well-constructed argument. Most of us would disagree with the premise that technological progress is a plain disaster, but actually, many well-respected philosophers and social theorists have defended similar views. What singles out Kaczynski, the prosecution of reason would claim, is that he pushed this radically pessimistic viewpoint to its logical consequences and acted accordingly. As one of his biographers put it: “Kaczynski, in short, had become a cold-blooded killer not in spite of his intellect, but because of it.”11
So, the defense of reason would counter, the prosecution wants you to believe that the problem with Ted Kaczynski, the Unabomber, is that he was reasoning too much. His manifesto is indeed more tightly reasoned than much political discourse. What made him notorious, however, were not his ideas but his crimes. Nowhere in his writings is there even the beginning of a proper argument showing that sending bombs to a few powerless academics—his former colleagues—would kick-start a “revolution against the industrial system.” When you are told that excessive reliance on reasoning led someone to absurd or abhorrent conclusions, look closely at the evidence, and you will find lapses of reason: some premises were not properly examined, and some crucial steps in the argument are simply missing. Remember: a logical demonstration can never be stronger than its weakest part.
Expert Witnesses for the Prosecution
Since historical illustrations, however arresting, are not sufficient to make their cases, the defense and the prosecution of reason would turn to expert witnesses. Neither side would have any difficulty in recruiting psychologists to support their cause. Specialists in reasoning do not agree among themselves. Actually, the polemics in which they are engaged are hot enough to have been described as “rationality wars.” This very lack of agreement among specialists who, one hopes, are all good reasoners is particularly ironic: sophisticated reasoning on reasoning does not come near providing a consensual understanding of reasoning itself.
The prosecution of reason might feel quite smug. Experimental psychology of reasoning has been developing fast since the 1960s, exploiting a variety of ingenious experiments. The most famous of these present people with problems that, in principle, could easily be solved with a modicum of simple reasoning. Yet most participants in these experiments confidently give mistaken answers, as if they were victims of some kind of “cognitive illusion.” These results have been used in the rationality wars to argue that human reason is seriously defective. Reason’s defenders protest that such experiments are artificial and misleading. It is as if the experiments were aimed at tricking sensible people and making them look foolish rather than at understanding the ordinary workings of reason. Of course, the psychologists who devised these experiments insist that, just as visual illusions reveal important features of ordinary, accurate vision, cognitive illusions reveal important features of ordinary reasoning.12 Philosophers, science writers, and journalists have, however, focused on the seemingly bleak implications of this research for the evaluation of human rationality and have, if anything, exaggerated their bleakness.
When you do arithmetic, it does not matter whether the numbers you add or subtract happen to be numbers of customers, trees, or stars, nor does it matter whether they are typical or surprising numbers for collections of such items. You just apply rules of arithmetic to numbers, and you ignore all the rest. Similarly, if you assume that reasoning should be just a matter of applying logic to a given set of premises in order to derive the conclusions that follow from these premises, then nothing else should interfere. Yet there is ample evidence that background knowledge and expectations do interfere in the process. This, many argue, is the main source of bad reasoning.
Here is a classic example.13 In July 1980, Björn Borg, who was then hailed as one of the greatest tennis players of all time, won his fifth consecutive Wimbledon championship. In October of that year, Daniel Kahneman and Amos Tversky, two Israeli psychologists working in North America who would soon become world-famous, presented a group of University of Oregon students with the following problem:
Suppose Björn Borg reaches the Wimbledon finals in 1981. Please rank order the following outcomes from most to least likely:
1. Borg will win the match.
2. Borg will lose the first set.
3. Borg will lose the first set but win the match.
4. Borg will win the first set but lose the match.
Seventy-two percent of the students assigned a higher probability to outcome 3 than to outcome 2. What is so remarkable about this? Well, if you have two propositions (for instance, “Borg will lose the first set” and “Borg will win the match”), then their conjunction (“Borg will lose the first set but win the match”) cannot be more probable than either one of the two propositions taken separately. Borg could not both lose the first set and win the match without losing the first set, but he could lose the first set and not win the match. Failing to see this is an instance of what is known as the “conjunction fallacy.” More abstractly, take two propositions that we may represent with the letters P and Q. Whenever the conjunction “P and Q” is true, so must be both P and Q, while P could be true or Q could be true and “P and Q” false. Hence, for any two propositions P and Q, claiming that their conjunction “P and Q” is more probable than either P or Q taken on its own is clearly fallacious.
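For readers who like to see the inequality at work mechanically, here is a minimal Python sketch of our own devising; the probabilities are invented for illustration and have nothing to do with Borg’s actual record. The point is structural: every simulated match in which Borg both loses the first set and wins is also counted among the matches in which he loses the first set, and among those in which he wins, so the conjunction can never come out as the more frequent outcome.

```python
import random

# Invented probabilities, for illustration only.
P_LOSE_FIRST_SET = 0.15    # chance Borg drops the first set
P_WIN_IF_SET_LOST = 0.70   # chance he still wins the match
P_WIN_IF_SET_WON = 0.95    # chance he wins after taking the first set

random.seed(0)
trials = 100_000
lose_set = win_match = both = 0

for _ in range(trials):
    lost_first = random.random() < P_LOSE_FIRST_SET
    won = random.random() < (P_WIN_IF_SET_LOST if lost_first else P_WIN_IF_SET_WON)
    lose_set += lost_first
    win_match += won
    both += lost_first and won

# Every trial counted in "both" is also counted in each conjunct, so the
# conjunction's frequency can never exceed that of either conjunct alone.
assert both <= lose_set and both <= win_match
print(f"P(lose first set) ~ {lose_set / trials:.3f}")
print(f"P(win the match)  ~ {win_match / trials:.3f}")
print(f"P(both)           ~ {both / trials:.3f}")
```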
Kahneman and Tversky devised many problems that caused people to commit the conjunction fallacy and other serious blunders. True, as they themselves showed, if you ask the same question not about Björn Borg at Wimbledon but rather about an unknown player at an ordinary game, then people do not commit the fallacy. They correctly rank a single event as more probable than the conjunction of that event and another event. But why on earth should people reason better about an anonymous tennis player than about a famous champion?
Here is another example from our own work illustrating how the way you frame a logical problem may dramatically affect people’s performance.14 We presented people with the following version of what, in logic, is known as a “pigeonhole problem”:
In the village of Denton, there are twenty-two farmers. All of the farmers have at least one cow. None of the farmers have more than seventeen cows. How likely is it that at least two farmers in Denton have the exact same number of cows?
Only 30 percent gave the correct answer, namely, that it is certain—not merely probable—that at least two farmers have the same number of cows. If you don’t see this, perhaps the second version of the problem will help you.
To another group, we presented another version of the problem that, from a logical point of view, is strictly equivalent:
In the village of Denton, there are twenty-two farmers. The farmers have all had a visit from the health inspector. The visits of the health inspector took place between the first and the seventeenth of February of this year. How likely is it that at least two farmers in Denton had the visit of the health inspector on the exact same day?
This time, 70 percent of people gave the correct answer: it is certain.
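Why is it certain? There are only seventeen possible herd sizes (one to seventeen cows) but twenty-two farmers, so at least two farmers must share a herd size; this is the pigeonhole principle, and the same arithmetic applies to seventeen inspection days. A minimal Python sketch of our own, assigning herd sizes at random, confirms that a repeat occurs on every single trial:

```python
import random
from collections import Counter

FARMERS = 22
HERD_SIZES = range(1, 18)   # each farmer owns between 1 and 17 cows

random.seed(0)
for _ in range(10_000):
    # Assign every farmer a herd size at random.
    herds = [random.choice(HERD_SIZES) for _ in range(FARMERS)]
    # With 22 farmers but only 17 possible herd sizes, the pigeonhole
    # principle forces at least one size to occur twice, every time.
    _, occurrences = Counter(herds).most_common(1)[0]
    assert occurrences >= 2

print("Some herd size repeated in all 10,000 random assignments.")
```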
As the Borg and the farmers-cows problems illustrate, depending on how you contextualize or frame a logical problem—without touching the logic of it—most people may either fail or succeed. Isn’t this, the prosecution would argue, clear evidence that human reason is seriously defective?
Expert Witnesses for the Defense
While many psychologists focused on experiments that seem to demonstrate human irrationality, other psychologists were pursuing a different agenda: to identify the mental mechanisms and procedures that allow humans to reason at all.
There is little doubt that some simple reasoning (in a wide sense of the term) occurs all the time, in particular when we talk to each other. Conjunctions such as “and,” “or,” and “if” and the adverb “not” elicit logical inferences of the most basic sort. Take a simple dialogue:
Jack (to Jill): I lent my umbrella to you or to Susan—I don’t remember whom.
Jill: Well, you didn’t lend it to me!
Jack: Oh, then I lent it to Susan.
Jill: Right!
No need for Jack or Jill to have studied logic to come to the conclusion that Jack lent his umbrella to Susan.15 But what is the psychological mechanism by means of which such inferences are being performed? According to one type of account, understanding the word “or” or the word “not” amounts to having in mind logical rules that somehow capture the meaning of such words. These rules govern deductions licensed by the presence of these “logical” words in a statement. Here is a rule for “or” (using again the letters P and Q to represent any two propositions):
“Or” rule: From two premises of the form “P or Q” and “not P,” infer Q.
Several psychologists (Jean Piaget, Martin Braine, and Lance Rips, in particular16) have argued that we perform logical deduction by means of a “mental logic” consisting in a collection of such logical rules or schemas. When Jack and Jill infer that Jack lent his umbrella to Susan, what they do is apply the “or” rule.
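As a rough illustration of what a mental-logic account might look like in mechanical form, here is a minimal Python sketch; representing premises as strings, and the function name itself, are our own invented conveniences, not anything proposed by these psychologists.

```python
def apply_or_rule(premises):
    """From "P or Q" and "not P" (or "not Q"), infer the other disjunct."""
    conclusions = set()
    for premise in premises:
        if " or " not in premise:
            continue
        left, right = premise.split(" or ", 1)
        if f"not {left}" in premises:
            conclusions.add(right)
        if f"not {right}" in premises:
            conclusions.add(left)
    return conclusions

# Jack and Jill's exchange, rendered as premises.
premises = {"lent to Jill or lent to Susan", "not lent to Jill"}
print(apply_or_rule(premises))   # {'lent to Susan'}
```

Applying the rule is a purely formal affair: the procedure looks only at the shape of the premises, never at what umbrellas or people are.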
According to an alternative explanation, “mental model theory” (developed by Philip Johnson-Laird and Ruth Byrne),17 no, we don’t have a mental logic in our head. What we have is a procedure to represent and integrate in our mind the content of premises by means of models comparable to schematic pictures of the situation. We then read conclusions off these models. In one model, for instance, Jack lent his umbrella to Jill. In an alternative model, he lent it to Susan. If Jack’s statement is true, then the two mental models cannot both be right, nor can they both be wrong. When we learn that the “lent to Jill” model is wrong, we are left with just the “lent to Susan” model, and we can conclude that Jack lent his umbrella to Susan.
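The mental-model alternative, sketched in the same invented style, keeps no rules at all; it builds a list of imagined possibilities and strikes out those that the premises rule out:

```python
# Jack's disjunction yields two models, one per imagined possibility.
models = [{"lent to Jill"}, {"lent to Susan"}]

# Jill's denial eliminates every model in which Jack lent it to her.
models = [m for m in models if "lent to Jill" not in m]

# The conclusion is read off whatever models survive.
print(models)   # [{'lent to Susan'}]
```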
Much work in the psychology of reasoning has been devoted to pitting against one another the “mental logic” and the “mental models” approaches. You might wonder: What is the difference between these two accounts? Aren’t they both stating the same thing in different terms? Well, true, the two theories have a lot in common. They both assume that humans have mechanisms capable of producing genuine logical inferences. Both assume that humans have the wherewithal to think in a rational manner, and in this respect, they contrast with approaches that cast doubt on human rationality.
Figure 3. The four schemas of conditional inference.
The picture drawn by “mental logicians” and “mental modelers” is not quite rosy, however. Both approaches recognize that all except the simplest reasoning tasks can trip people up and cause them to come to unwarranted conclusions. As they become more complex, reasoning tasks rapidly become forbiddingly difficult, and performance collapses. But what makes a reasoning task complex? This is where the two theories differ. For mental logicians, it is the number of steps that must be taken and rules that must be followed. For mental modelers, it is the number of models that must be constructed and integrated to arrive at a conclusion.
The defense of reason would want these two schools to downplay their disagreements and to focus on a shared positive message: humans are equipped with general mechanisms for logical reasoning. Alas, the prosecution would find in the very work inspired by these two approaches much evidence to cast doubt on this positive message.
If there is one elementary pattern of reasoning that stands out as the most ubiquitous, the most important both in everyday and in scholarly reasoning, it is what is known as conditional reasoning—reasoning with “if …, then …” (see Figure 3). Such reasoning involves a major premise of the form “if P, then Q.” For instance:
If you lost the key, then you owe us five dollars.
If pure silver is heated to 961°C, then it melts.
If there is a courthouse, then there is a police station.
If Mary has an essay to write, then she will study late in the library.
The first part of such statements, introduced by “if,” is the antecedent of the conditional, and the second part, introduced by “then,” is the consequent. To draw a useful inference from a conditional statement, you need a second premise, and this minor premise can consist either in the affirmation of the antecedent or in the denial of the consequent. For instance:
If there is a courthouse, then there is a police station. (major premise: the conditional statement)
There is a courthouse. (minor premise: affirmation of the antecedent)
——————
There is a police station. (conclusion)
Or:
If there is a courthouse, then there is a police station. (major premise: the conditional statement)
There is no police station. (minor premise: denial of the consequent)
——————
There is no courthouse. (conclusion)
These two inference patterns, the one based on the affirmation of the antecedent (known under its Latin name, modus ponens) and the one based on the denial of the consequent (modus tollens), are both logically valid: when the premises are true, the conclusion is necessarily true also.
But what about using as the minor premise the denial of the antecedent (rather than its affirmation) or the affirmation of the consequent (rather than its denial)? For instance:
If there is a courthouse, then there is a police station. (major premise: the conditional statement)
There is no courthouse. (minor premise: denial of the antecedent)
——————
There is no police station. (conclusion?)
Or:
If there is a courthouse, then there is a police station. (major premise: the conditional statement)
There is a police station. (minor premise: affirmation of the consequent)
——————
There is a courthouse. (conclusion?)
These two inference patterns (known by the name of their minor premise as “denial of the antecedent” and “affirmation of the consequent”) are invalid; they are fallacies. Even if both premises are true, the conclusion does not necessarily follow—you may well, for instance, have a police station but no courthouse.
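Validity here is a matter of brute fact that a machine can check. The following Python sketch, of our own making, treats “if P, then Q” as material implication (true except when P is true and Q is false; a simplifying assumption that psychologists and philosophers debate) and searches all four possible worlds for a counterexample to each schema:

```python
from itertools import product

def is_valid(argument):
    """Valid iff no assignment of truth values makes the premises true
    and the conclusion false."""
    for p, q in product([True, False], repeat=2):
        conditional = (not p) or q   # "if P, then Q" as material implication
        premises_true, conclusion = argument(p, q, conditional)
        if premises_true and not conclusion:
            return False             # found a counterexample
    return True

# Each schema: (are both premises true?, conclusion), as functions of P, Q.
schemas = {
    "modus ponens":                  lambda p, q, c: (c and p, q),
    "modus tollens":                 lambda p, q, c: (c and not q, not p),
    "denial of the antecedent":      lambda p, q, c: (c and not p, not q),
    "affirmation of the consequent": lambda p, q, c: (c and q, p),
}

for name, schema in schemas.items():
    print(f"{name}: {'valid' if is_valid(schema) else 'invalid'}")
```

Run it, and the first two schemas come out valid, the last two invalid; the world where there is a police station but no courthouse is the counterexample in both fallacious cases.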
Surely, the prosecution would exclaim, all this is simple enough. Shouldn’t people, if the defense were right, reliably perform the two valid inferences of conditional reasoning and never commit the two fallacies? Alas, the expert witnesses of the defense have demonstrated in countless experiments with very simple problems that such is not the case—far from it. True, nearly everybody draws the valid modus ponens inference from the affirmation of the antecedent. Good news for the defense? Well, the rest is good news for the prosecution: only two-thirds of the people, on average, draw the other valid inference, modus tollens, and about half of the people commit the two fallacies.18 And there is worse …
Will She Study Late in the Library?
In a famous 1989 study, Ruth Byrne demonstrated that even the valid modus ponens inference, the only apparently safe bit of logicality in conditional reasoning, could all too easily be made to crumble.19 Byrne presented participants with the following pair of premises: