The Enigma of Reason: A New Theory of Human Understanding


by Dan Sperber


  Reason, properly understood as a tool for social interaction, is certainly not perfect, but flawed it is not. Second part of the enigma of reason solved.

  While the argumentative theory of reasoning has been generally well received, it has often been misunderstood in two ways, not just by people critical of the theory but also—and this has been more worrying—by people who were attracted to it.

  A first misunderstanding that we encountered again and again consists in attributing to us the view that argumentation is just a way to manipulate and deceive others and that it has no real intellectual merit. This very cynical view of reasoning and argumentation must have some appeal—possibly that of making one feel superior to naïve ordinary folks. At the risk of disappointing some of our readers, this is a view we do not hold and a cynicism we do not share.

  Of course, people are sometimes deceived by an argument. This can happen, however, only because most arguments are not deceitful and are easier to evaluate than mere would-be authoritative pronouncements. Without the possibility of evaluating arguments objectively, why would anybody ever take an argument seriously? Reasoning is not only a tool for producing arguments to convince others; it is also, and no less importantly, a tool for evaluating the arguments others produce to convince us. The capacity to produce arguments could evolve only in tandem with the capacity to evaluate them.

  Intellectualists are committed to the view that reason should be demanding and objective both in the production and in the evaluation of arguments. They cannot but observe with resignation that human reason actually is not up to what it should be.

  The interactionist approach, on the other hand, makes two contrasting predictions. In the production of arguments, we should be biased and lazy; in the evaluation of arguments, we should be demanding and objective—demanding so as not to be deceived by poor or fallacious arguments into accepting false ideas, objective so as to be ready to revise our ideas when presented with good reasons why we should.

  The first prediction—that the production of reasons is lazy and biased—is not, strictly speaking, a prediction at all. The data we “predicted,” or rather retrodicted, were already there in full view. What the interactionist approach does (and the intellectualist approach fails to do) is make sense of this evidence.

  The second prediction—that evaluation is demanding and objective—is a genuine prediction. There is hardly any direct evidence on the issue in the literature, and the little there is is inconclusive. This second prediction is original to the interactionist approach. Ask an intellectualist psychologist of reasoning to predict whether people will be better at producing or evaluating arguments, and chances are your interlocutor won’t predict any difference or won’t even see the rationale of the question.

  Just as widespread as the view that people—at least other people—are biased and lazy is the view that they are gullible: they accept the most blatantly fallacious arguments; and that they are pigheaded: they reject perfectly valid arguments. If people were both gullible and pigheaded, it would be all too easy to spread false new ideas and all too difficult to dispel old mistaken views. The exchange of ideas would, if anything, favor bad ideas. This pessimistic view is widely shared. Group discussion, in particular, is often reviled,2 an attitude well expressed by the saying, “A camel is a horse designed by a committee.”

  Actually, camels are a marvel of nature, group discussions often work quite well, and the study of these discussions provides good indirect evidence in favor of our second prediction. In problem solving, the performance of a group tends to be much better than the average individual performance of the group’s members and, in some cases, even better than the individual performance of any of its members: people can find together solutions that none of them could find individually. We reviewed some of this evidence in Chapter 15.3 There is much further evidence in the literature supporting our prediction that people are more demanding and objective in evaluation than in production. This evidence is, alas, indirect. We have begun, however, testing our prediction directly. Stay tuned!

  In this book, we have highlighted the remarkable success of reasoning in a group. This, however, has often given rise to a second misunderstanding of our approach.

  According to the interactionist approach, reason didn’t evolve to enhance thinking on one’s own but as a tool for social interaction. We produce reasons to justify ourselves and to convince others. This exchange of reasons may benefit every interlocutor individually. It may also, on some occasions, benefit a group. Why not envisage, then, that the exchange of reasons and the mechanism of reason itself could have evolved for the benefit of the group rather than for the benefit of individuals?

  The idea that Darwinian selection works at several levels and in particular at the level of groups has been much developed and discussed lately. It has been argued in particular that group-level selection has played a major evolutionary role in making human cooperation and morality possible.4 Couldn’t the evolution of reason, then, be a case of group-level rather than individual-level selection for cognitive cooperation? No, ours is definitely not a group-level selection hypothesis. In fact, it would be inconsistent with the interactionist approach to reason to think of it as a group-level adaptation.

  Group-level selection favors the pursuit of collective benefits over that of individual benefits. Reason as we have described it is, by contrast, a mechanism for the pursuit of individual benefits. An individual stands to benefit from having her justifications accepted by others and from producing arguments that influence others. She also stands to benefit from evaluating objectively the justifications and arguments presented by others and from accepting or rejecting them on the basis of such an evaluation. These benefits are achieved in social interaction, but they are individual benefits all the same.

  In interactions where reasons play a role, the people interacting may have converging or diverging interests. The exchange of reasons may play an important role in either case. Argumentation, for instance, plays a major role in negotiations, where interests diverge, often quite strongly. To the extent that members of a group share their interests, they can trust one another, and people who trust one another have little or no use for justifications and arguments. Group selection would favor systematic trust and trustworthiness in a group. Reason as we describe it is an adaptation to social life where trust has to be earned and remains limited and fragile.

  Group discussion is not always efficient. When people's ideas are closely aligned to start with, discussion leads to polarization. When people start with conflicting ideas and no common goal, it tends to exacerbate differences. Group discussion is typically beneficial when participants have different ideas and a common goal. The collective benefits reaped in such cases should be seen, we suggest, as a side effect of a mechanism that serves individual interests.

  To say as we do that reason is an individual-level rather than a group-level adaptation doesn’t mean that it has consequences only for individuals and not for social groups and networks. In Chapters 17 and 18, we have discussed various ways in which the individual dispositions of many reasoners can be harnessed for moral, political, or scientific goals. More generally, an issue well worth investigating is that of the population-scale effects of this individual disposition. What role does reason play in the success or failure of different cultural ideas and practices? Conversely, while we have shown that reason is a human universal, much more must be done to find out to what extent and in which ways it can be harnessed, enriched, and codified differently in various cultural traditions (the cultural history of logics being only one quite interesting aspect of the question).

  And now to conclude this conclusion: reason has stood for far too long on a broken pedestal, overhanging other faculties but with an awkward tilt. What we hope to have done is put it back where it belongs, level with other cognitive mechanisms but quite upright, and, like other evolved mechanisms, powerful in complex and subtle ways—and endlessly fascinating.
  Notes

  Introduction

  1. For a more detailed answer, see Dawkins 1996.

  2. Kahneman 2011.

  3. Mercier 2016a; Mercier and Sperber 2011.

  1. Reason on Trial

  1. Descartes 2006.

  2. Ibid., p. 13.

  3. Ibid., p. 18.

  4. Ibid., p. 5.

  5. Ibid.

  6. Cited, translated, and discussed in Galler 2007.

  7. Luther 1536, LW 34:137 (nos. 4–8).

  8. The story is well told in Nicastro 2008. Dan Sperber used it in answering John Brockman’s question “What Is Your Favorite Deep, Elegant, or Beautiful Explanation?” (Brockman 2013).

  9. Actually, there was no universally recognized value of the stade at the time: several states each had their own version. What was the length of the stade Eratosthenes used? There is some doubt on the issue, and some doubt therefore on the accuracy of his measurement of the circumference of the earth. What is not in doubt are the inventiveness and rationality of his method.

  10. Kaczynski 2005.

  11. Chase 2003, p. 81.

  12. A theme well developed in Piattelli-Palmarini 1994.

  13. Tversky and Kahneman 1983.

  14. Mercier, Politzer, and Sperber in press.

  15. As shown, for instance, in Prado et al. 2015.

  16. Braine and O’Brien 1998; Piaget and Inhelder 1967; Rips 1994.

  17. E.g., Johnson-Laird and Byrne 1991.

  18. See, e.g., Evans 1989.

  19. Byrne 1989. In Byrne’s experiments, the character who may study late in the library is not named. For ease of exposition, we call her Mary.

  20. E.g., Bonnefon and Hilton 2002; Politzer 2005.

  21. The role of relevance in comprehension is emphasized in Sperber and Wilson 1995.

  2. Psychologists’ Travails

  1. Interpreting Aristotle on the issue is not easy. See, for instance, Frede 1996.

  2. Kant 1998, p. 106.

  3. For different perspectives on the experimental psychology of reasoning, see Adler and Rips 2008; Evans 2013; Holyoak and Morrison 2012; Johnson-Laird 2006; Kahneman 2011; Manktelow 2012; Manktelow and Chung 2004; Nickerson 2012.

  4. Exceptions include Evans 2002 and Oaksford and Chater 1991.

  5. These and related issues have been highlighted by the philosopher Gilbert Harman in his book Change in View (1986), but this hasn’t had the impact we believe it deserved on the psychology of reasoning.

  6. Khemlani and Johnson-Laird 2012, 2013.

  7. F. X. Rocca, “Why Not Women Priests? The Papal Theologian Explains,” Catholic News Service, January 31, 2013, available at http://www.catholicnews.com/services/englishnews/2013/why-not-women-priests-the-papal-theologian-explains.cfm.

  8. Personal communication, November 21, 2012.

  9. Our preferred explanation of the task is Sperber, Cara, and Girotto 1995.

  10. Evans 1972; Evans and Lynch 1973.

  11. For further striking evidence that, in solving the selection task, people do not reason but just follow intuitions of relevance (of a richer kind than Evans suggested), see Girotto et al. 2001.

  12. Evans and Wason 1976, p. 485; Wason and Evans 1975.

  13. Evans and Over 1996.

  14. Sloman 1996.

  15. Stanovich 1999.

  16. Kahneman 2003a.

  17. For some strong objections to dual system approaches, see Gigerenzer and Regier 1996; Keren and Schul 2009; Kruglanski et al. 2006; Osman 2004.

  3. From Unconscious Inferences to Intuitions

  1. Hume 1999, p. 166.

  2. When we say of an organism that it has “information” about some state of affairs, we mean roughly that it is in a cognitive state normally produced only if the state of affairs in question obtains. You have, for instance, information that it is raining when you come to believe that it is raining through mechanisms that evolved and developed so as to cause such a belief only if it is raining. This understanding and use of “information” is inspired by the work of Fred Dretske (1981), Ruth Millikan (1987, 1993), and other authors who have developed similar ideas. We are aware of the complex issues such an approach raises. While we don’t need to go into them for our present purpose, we do develop the idea a bit deeper in Chapter 5, when we discuss the notion of representation (see also Floridi 2011; Jacob 1997).

  3. Darwin 1938–1939, p. 101.

  4. Steck, Hansson, and Knaden 2009; Wehner 2003; Wittlinger, Wehner, and Wolf 2006.

  5. Wehner 1997, p. 2. For the figure, see Wehner 2003.

  6. On the understanding of perception as unconscious inference, see Hatfield 2002. On Helmholtz, see Meulders 2010. On Ptolemy, see Smith 1996.

  7. Shepard 1990.

  8. Bartlett 1932.

  9. Ibid., p. 204.

  10. Strickland and Keil 2011.

  11. Miller and Gazzaniga 1998.

  12. Grice 1989.

  13. Kahneman 2003a.

  14. See, for instance, Proust 2013; Schwartz 2015.

  15. Thompson 2014.

  4. Modularity

  1. Vouloumanos and Werker 2007.

  2. Pinker 1994.

  3. Marler 1991.

  4. To use Frank Keil’s apt expression (Keil 1992).

  5. Kanwisher, McDermott, and Chun 1997.

  6. Csibra and Gergely 2009; Gergely, Bekkering, and Király 2002; see also Nielsen and Tomaselli 2010. These studies were based on earlier experiments of Meltzoff 1988.

  7. Rakoczy, Warneken, and Tomasello 2008; Schmidt, Rakoczy, and Tomasello 2011; Schmidt and Tomasello 2012.

  8. Dehaene and Cohen 2011.

  9. Schlosser and Wagner 2004.

  10. Fodor 1983.

  11. As is well illustrated in the work of, for instance, Clark Barrett, Peter Carruthers, Leda Cosmides and John Tooby, Rob Kurzban, Steven Pinker, and Dan Sperber. For an in-depth discussion, see Barrett 2015.

  5. Cognitive Opportunism

  1. Smith 2001.

  2. For instance, Oaksford and Chater 2007; Tenenbaum et al. 2011.

  3. Needham and Baillargeon 1993. We thank Renée Baillargeon for kindly providing us with an improved version of the original figure.

  4. Hespos and Baillargeon 2006; see also Luo, Kaufman, and Baillargeon 2009.

  5. Fodor 1981, p. 121.

  6. Winograd 1975.

  7. We are using a broad, naturalistic notion of representation inspired in particular by the work of Fred Dretske (1981, 1997), Pierre Jacob (1997), and Ruth Millikan (1987, 2004). For a more restricted view of representation that, while addressing philosophical concerns, pays close attention to empirical evidence, see Burge 2010.

  8. On the other hand, we are not committing here to any particular view on the metaphysics of representations and on the causal role, if any, of representational content. For a review and useful discussion, see Egan 2012.

  9. Another possibility is that some transmission of information is done not through direct module-to-module links but by broadcasting a module’s output on a “global workspace” where other modules can access it, as envisaged in theories of consciousness such as Baars’s and Dehaene’s (Baars 1993; Dehaene 2014).

  10. Tenenbaum, Griffiths, and Kemp 2006.

  11. Gallistel and Gibbon 2000; Rescorla 1988.

  12. As suggested long ago by Daniel Dennett (1971).

  13. For instance, Sperber 2005.

  6. Metarepresentations

  1. See, for instance, Carey 2009; Hirschfeld and Gelman 1994; Sperber, Premack, and Premack 1995.

  2. See Sperber 2000.

  3. Premack and Woodruff 1978.

  4. Gopnik and Wellman 1992; Perner 1991.

  5. Baillargeon, Scott, and He 2010; Leslie 1987.

  6. Baron-Cohen et al. 1985; Wimmer and Perner 1983.

  7. Onishi and Baillargeon 2005.

  8. Surian, Caldi, and Sperber 2007.

  9. For various uses of the notion of mental file, see Kovács 2016; Perner, Huemer, and Leahy 2015; Recanati 2012. Our approach is closer to Kovács’s.

  10. Frith and Frith 2012; Kovács, Téglás, and Endress 2010; Samson et al. 2010.

  11. The distinction between personal and subpersonal levels was introduced by Dennett 1969. It has been interpreted in several ways, including by Dennett himself; see Hornsby 2000.

  12. There is an influential theory defended, in particular, by the philosopher Alvin Goldman and the neuroscientist Vittorio Gallese, according to which we understand what happens in other people’s minds by simulating their mental processes (e.g., Gallese and Goldman 1998). The approach that we suggest is at odds with standard versions of “simulation theory” and more compatible with others, where, however, the notion of “simulation” is understood in a technical sense, so much broader than the ordinary sense of the word that this becomes a source of misunderstandings.

  13. Some psychologists and philosophers (e.g., Apperly and Butterfill 2009; Heyes and Frith 2014) have argued that the mechanism that tracks the mental states of others is more rudimentary than we have suggested and doesn’t even detect beliefs or intentions as such. According to them, the more sophisticated understanding exhibited by four-year-old children who “pass” the standard false belief task (or by adults who enjoy reading a Jane Austen novel) is based on an altogether different “system 2” mindreading mechanism. The main advantage of this dualist hypothesis is to accommodate the new evidence regarding mindreading in infants with minimal revisions of earlier standard views of mindreading. Others—and we agree with them—argue in favor of revising these standard views and question the alleged evidence in favor of a dual-system understanding of mindreading (see Carruthers 2016; see also Brent Strickland and Pierre Jacob, “Why Reading Minds Is Not like Reading Words,” January 22, 2015, available at http://cognitionandculture.net/blog/pierre-jacobs-blog/why-reading-minds-is-not-like-reading-words).

 
