The Enigma of Reason: A New Theory of Human Understanding


by Hugo Mercier and Dan Sperber


  Does this mean that, when we attribute reasons to ourselves or to others, we must be using as a premise the presumption that we (or they) are rational and tend to come to conclusions for which we (or they) have good reasons? No, from a modularist point of view, regularities—such as the regularity that humans tend to think and act rationally—can be exploited by modular procedures without being represented as premises at all. We may, in other terms, exploit properties of human rationality in order to attribute reasons to others or to ourselves without having to entertain any general thought about human rationality itself.

  There is both something appealing and something quite problematic about giving a central role to rationality in the discovery of reasons (whether in a standard or in a modularist perspective). What is appealing is the highlighting of a close link between reasons and rationality. It seems commonsensical that if we weren’t rational, we wouldn’t understand reasons, let alone care for them. If reasons weren’t rational to some sufficient degree, we would not even recognize them as reasons.

  What is much more problematic is the idea that a general human tendency to think rationally (together with some specific information about each case) should be enough to guide us in identifying reasons. For any rational belief or intention, there is an indefinite variety of quite different possible reasons, all compatible with the kind of limited evidence that might be available, which would rationally justify it.

  Perhaps Lin’s assertion that it had been raining wasn’t based at all on the observation of the puddles. For all you know, it might have been based on his superior competence at recognizing and interpreting changes in the air’s temperature and humidity; or perhaps he had noticed, while you were both in the conference room, someone entering with a wet umbrella; or he may have had quite different reasons that you have no clue about. What this means is that just looking for possible objective reasons that might have been accessible to a person doesn’t come near being an effective heuristic to discover that person’s actual reasons.

  When we talked about mindreading in Chapter 6, we encountered a similar problem. Rationality doesn’t by itself provide an adequate basis to attribute beliefs and intentions. The attribution of such mental states, we argued, takes advantage of the modular organization of the mind. What it exploits is not rationality as a general feature but the specific features of various cognitive competencies that humans share (and that jointly make them rational beings). This, we now suggest, is also true of the attribution of reasons. To attribute reasons to others or to themselves, humans rely less on their overall rationality than on the effectiveness of some specific competencies.

  Back to Molly at the party. She challenges you to explain why you said she was upset. To answer, you have to draw a backward inference from your initial intuition about her mood to reasons that would justify your intuition. You do not know for sure what might have triggered this intuition, but you are well equipped to make plausible assumptions.

  You are not just a rational being; you have the typical human ability to recognize emotions. You have well-developed expertise in using facial expressions, tone of voice, bodily movements, and so on as evidence of mood, allowing you to “read” the mood of others. While you cannot remember the cognitive process that led to your intuition—processes of intuitive inference are essentially opaque—you remember things you noticed when you met Molly this evening that are standard indicators of mood: she was not smiling, her tone of voice was strained, and so on. So, you pick from what you remember (or from what you are presently noticing) pieces of evidence that might best justify your intuition that Molly is upset; you then infer that these were the reasons for your intuition.

  Typically, your intuition about a person’s mood is triggered by a combination of factors, many of which you are not even aware of, each making some contribution to your intuition in the context of all the others. When you single out some pieces of evidence as being the reasons for your intuition, you are typically exaggerating their weight as evidence, but this may be the condition for producing a relevant narrative.

  Your memory, as we saw in Chapter 3, is not a mere recall of past registrations. It is constructive, and it often “remembers” features that help make better sense of what happened, even when, in fact, you hadn’t observed these features at the time. It may be, for instance, that you hadn’t noticed that Molly’s tone of voice was strained until she asked you why you thought she was upset. Still, because it fits so well with your intuition that Molly was upset, this feature is injected into your memory of what you think caused your intuition in the first place. The strength of the reasons you now invoke is itself inferred. It is inferred from the confidence you have in your own intuition: if your intuition feels right, then your reasons for this intuition must be strong. You look for plausible strong reasons and assume that they are the reasons that motivated you.

  We intuitively infer our reasons for some specific intuition not on the general presumption of our own rationality, but on a much narrower confidence in the specific kind of competence that produced this intuition. Our feeling of rightness when we intuit the mood of a friend is based on our sense not that we are rational beings but that we are competent at judging people’s mood and particularly the mood of people we know well.

  When you have to infer the reasons that led another person to a given conclusion, your task is, again, to draw a backward inference from this conclusion to the kind of reasons that could explain and, at least to some extent, justify it.8 When you walked out of the building with Lin and he said, “It has been raining,” you looked around and, seeing puddles, you assumed that their presence provided a good reason for Lin’s assertion. Why do you intuit that the puddles provide a reason to infer that it has been raining? Because, just like Lin, you have the competence to recognize evidence of the weather. When Lin’s statement causes you to pay attention to the telltale puddles, you yourself intuit that it must have been raining. Since you trust your own inference from the puddles to the rain, you make the higher-level inference that the presence of such puddles is a good reason, for others or for yourself, to conclude that it just rained, and you further infer that this may well have been Lin’s reason.

  What about cases where we don’t share others’ intuitions? Suppose that when Lin said it had been raining, you saw the puddles but didn’t intuit that it must have been raining, only that it might have been. In fact, paying more attention, you notice that the puddles—and there aren’t that many—are all in the same limited area in front of you. You are now more disposed to infer that the ground just in front of you happens to have been watered and that, no, it didn’t rain. Still, the puddles did evoke in you the possibility of rain: they were a prima facie reason to infer that it might have been raining, but, it turns out, not a good enough reason. Hence your intuition would still be that Lin’s reason for stating that it had been raining was probably that he had noticed the puddles, but you would now judge this to be a poor reason. If, on the other hand, no possible evidence of rain had come to your mind, then you might have no intuition about Lin’s reasons. You might just be, and remain, puzzled by his statement.

  The attribution of reasons, to others or to oneself, needn’t be more than a rather superficial affair. One searches the environment or memory for some actual or plausible piece of information (Molly’s strained voice, puddles) that could be invoked both to explain and to justify an intuition. If such a piece of information is found, it is assumed to be the actual reason for one’s own intuition, a probable reason for someone else’s.

  Contrary to the commonsense view, what happens is not that we derive intuitive conclusions from reasons that we would somehow possess. What we do, rather, is derive reasons for our intuitions from these intuitions themselves by a further process of intuitive backward inference. We infer what our reasons must have been from the conclusions we intuitively arrived at. We typically construct our reasons as an after-the-fact justification.

  We attribute reasons to others in the same way: to the extent that we trust their competence, we tend to trust their intuitions and to infer their reasons through the same process of backward inference. When we don’t trust their competence, a similar process of backward inference will settle for apparent reasons that we ourselves find too weak or flawed to justify their intuitive conclusions but that they may have found good enough. When we believe others to be mistaken, we are typically content to attribute to them blatantly poor reasons.

  We infer our reasons so that they should support our intuitive conclusions. We assess the strength of other people’s reasons on the basis of our degree of agreement with their conclusions. Does this mean that this search for reasons is a purely cosmetic affair, a way of dressing up our naked biases just to look good in our own eyes and to have others look good or bad depending on whether we agree or disagree with them? Is there no cognitive benefit to be expected from the process? No, this wouldn’t make much sense. If everybody just stood by their initial intuitions, come what may, reasons would be altogether irrelevant.

  Reasons, we have argued, are for social consumption. People think of reasons to explain and justify themselves. In so doing, they accept responsibility for their opinions and actions as justified by them; they implicitly commit themselves to norms that determine what is reasonable and that they expect others to observe. In giving reasons, people take the risk of seeing their reasons challenged. They also claim the right to challenge the reasons of others. Someone’s reputation is, to a large extent, the ongoing effect of a conversation spread out in time and social space about that person’s reasons. In giving our reasons, we try to take part in the conversation about us and to defend our reputation. We influence the reputation of others by the way we evaluate and discuss their reasons.9

  So, no, we don’t invoke reasons for some inane ego-boost, but, yes, the very way we infer our reasons is biased in our favor. We want our reasons to justify us in the eyes of others. Because they are going to be submitted to others’ judgment, reasons may be rethought and revised to be better accepted. Sometimes, moreover, this means revising the conclusions that our reasons support: changing our opinion or course of action so as to be better able to justify ourselves. Reasons and conclusions may, in the end, have to be mutually readjusted.

  There is, we are assuming, a dedicated metarepresentational module, the job of which is to infer reasons, ours and those of others. Its job is not to provide a psychologically accurate account of the reasons that motivate people. In fact, the implicit psychology—the presumption that people’s beliefs and actions are motivated by reasons—is empirically wrong. Giving reasons to justify oneself and reacting to the reasons given by others are, first and foremost, a way to establish reputations and coordinate expectations.

  Does it follow that the reasons we give and expect others to give are merely adjusted to some local consensus, some culturally constructed notion of rationality, and that one shouldn’t expect people’s reasons to be rational in an objective sense of the term? No, this doesn’t follow at all. The reasons people give play a much more important role than just signaling that they are norm-abiding members of their social group. People get the good reputation they care about when they are seen as reliable sources of information and as effective partners in cooperation. There is no way they could maintain such a reputation over time without the basic kind of objective rationality that makes them draw cognitively sound inferences and act effectively. To serve their reputational purpose, the personal reasons people invoke should be recognized by others as representing objective reasons, and the best way to secure this recognition is to invoke reasons that, objectively, are good or at least not too bad.

  A cultural community may favor certain types of reasons such as reliance on specific authorities. It may unequally recognize the competence of women and men, young and old, socially inferior and superior, in invoking reasons. It may condone some irrational reasons, such as premonitory dreams. What a community cannot do is build a battery of reasons all of its own. Everywhere, people’s intuitions about reasons are anchored in cognitive competencies that, to a large extent, they share as members of the human species, competencies that contribute to humans’ cognitive efficiency, that is, rationality in a basic sense of the term. Without such cognitive anchoring, we doubt that any norms of rationality could ever emerge and be maintained in a social group.

  Our reasons tend to be rational because, in the first place, our intuitions tend to be rational. What humans do and, presumably, other animals don’t do is add to their spontaneous inferences higher-level representations of reasons for these inferences. These reasons are not what makes human inferences rational in the biologically relevant sense of cognitively efficient. What these reasons help do, rather, is represent our inferences as rational in a different, socially relevant sense of the term where being rational means, precisely, being based on personal reasons that can be articulated and assessed.10 The public representations of beliefs and intentions as guided by personal reasons are a fundamental aspect of human social interaction. These representations, we suggest, are produced by a dedicated metarepresentational module. All our reasons are, directly or indirectly, outputs of this module.

  Could, then, human Reason (with the capital R used in classical philosophy) be a module? Have we found it? Should the next step be the localization of the Reason module in the brain? No, no, and no. Classical ideas about Reason are not about a psychological mechanism but about an essential and transcendental feature of the human mind as a whole. In any case, we have not found any module; we are merely speculating, with, we hope, sensible arguments, that the identification of reasons might well be the job of a dedicated module. If we are right and there is such a module, it would have indeed to be realized in some neural structure, but that structure needn’t occupy—and occupy alone—a single locus in the brain. In any case, such a module for inferring reasons wouldn’t correspond to Reason as classically understood.

  Descartes and many other philosophers have sung praises of Reason (while other thinkers have been less lyrical). The module we are talking about, if it exists, would not be something humans would want to brag about as they have bragged about Reason. Still, the closest thing to classical Reason to be found in the human mind/brain may well be this module. We will therefore call it the reason module with a modest, lowercase r.

  Can the Reason Module Reason?

  The reason module is, at least in part, aimed at producing justifications and is very much biased in our favor. How could the reasons it produces ever improve on our intuitive inferences if they are inferred from them through backward inference? How could it help in reasoning? How could our evaluation of other people’s reasons ever be more than a projection of our self-serving prejudices? How could we ever be convinced by the reasons of others to change our own views?

  Part of the answer is that our first-order intuitions (about Molly’s mood, the rain, and the vast variety of things about which we have such intuitions) are delivered by a great many modules, while our metarepresentational intuitions about reasons for our first-order intuitions are delivered by one metarepresentational module that just works on reasons. First-order modules draw all kinds of inferences about objects in their domain of competence by exploiting regularities in that domain. The metarepresentational module involved draws inferences in its own domain of competence; it draws, that is, inferences about the relationship between reasons and conclusions. To do so, it attends to relevant properties of this relationship. Some intuitions may come easily to us and feel quite strong, but finding reasons that feel as strong as these intuitions themselves may not come so easily. Low confidence in reasons for an intuition may undermine initially high confidence in that intuition.

  We may, for instance, have an evolved disposition to accept important risks for exceptional opportunities. Such a disposition, possibly advantageous on average in a distant ancestral past, may, in the modern world, easily be exploited by swindlers.


  Jeb, for instance, has an immediate intuition that if he responds favorably to a message he has just received from the widow of a rich banker, asking him to help her transfer millions of dollars to his country, he will become extremely rich. But he may then have trouble finding credible reasons—reasons that he could share with his family and friends—in support of this intuition. He may initially dismiss his friend Nina’s warning that this is a scam, but then he might soon enough find good reasons for her warning. His skeptical intuitions about reasons are different from, and better than, his enthusiastic intuition about his good fortune.

  Others often express intuitions that differ from ours. When trying to explain these intuitions that we do not share, we may nevertheless intuitively attribute to them reasons that we find good or even compelling, leading us to revise our initial intuition. A friend of Nikos and Sofia asks them to solve the following problem (actually, a problem much studied in recent psychology of reasoning).11 A bat and a ball together cost 1.10 euros. The bat costs one euro more than the ball. How much does the ball cost? For Nikos, the intuitive answer is that the ball costs ten cents. To his surprise, Sofia answers, “The ball costs five cents.” Seeing him puzzled, she continues, “If the ball costs five cents, then the bat must cost 1.05 euros …” Before she finishes, he sees it: the difference is one euro, as required! This, he intuits, must be Sofia’s reason for her answer, and it is a good reason. He therefore rejects his own initial intuition that the ball must cost ten cents and accepts Sofia’s answer.
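
  Spelled out as a quick check of the arithmetic (the variable b is our own shorthand, not part of the original problem): write b for the price of the ball in euros, so the bat costs b + 1, and

\[
b + (b + 1) = 1.10 \quad\Longrightarrow\quad 2b = 0.10 \quad\Longrightarrow\quad b = 0.05 .
\]

  The ball then costs five cents and the bat 1.05 euros, a difference of exactly one euro, whereas the intuitive answer of ten cents would leave a difference of only ninety cents.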

  The strength of our first-order intuitions and that of our corresponding metarepresentational intuitions need not match. Reasons may strengthen or undermine our first-order intuitions, and sometimes lead to revisions. Reasons, then, need not be mere stamps of approval on our first-order intuitions.

  Don’t, however, take Jeb’s or Nikos’s example as typical. Jeb’s initial intuition went against common wisdom. It was clear enough that he would have to justify himself if he acted on it; plausible justifications were hard to come by. And then he had Nina’s help in finding reasons to reconsider. Similarly, Nikos might not have revised his solution to the bat-and-ball problem if Sofia had not come up with a different solution and started explaining it. Few of our intuitions are as blatantly stupid as Jeb’s or as demonstrably false as Nikos’s. Most of our first-order intuitions are at least plausible, and backward inference usually yields plausible second-order reasons to justify them.

 
