The Enigma of Reason: A New Theory of Human Understanding


by Hugo Mercier and Dan Sperber


  Some extremely specialized inferential modules are little more than cognitive reflexes. The Russian physiologist Ivan Pavlov famously conditioned dogs to salivate at the ring of a bell that, in the dogs’ experience, had been repeatedly followed by food. The study of such conditioned reflexes played a major role in the development of behaviorism, an approach to psychology that denied or at least ignored mental states. From a postbehaviorist, cognitive perspective, the conditioned reflex of Pavlov’s dogs is both cognitive and behavioral.11 It causes the dog to expect food—a cognitive response—and to salivate—a behavioral response.

  Here is what, presumably, is not happening in the dog’s mind. There is no general inferential procedure, no Aristotelian syllogism, which uses as premises two statement-like representations present in the dog’s mind and that we could paraphrase as “If the bell is ringing, food is coming” and “The bell is ringing.” There is no “if … then …” major premise in the dog’s mind to which a modus ponens rule of conditional inference could be applied. Rather, Pavlov’s conditioning has produced in the dog a wholly specialized module that exploits this bell–food regularity but doesn’t represent it.

  What gets represented in the dog’s mind each and every time the bell rings is the event of the bell ringing. This representation informs the dog’s cognitive system and, more specifically, the conditioned reflex module of the fact that the bell has been ringing, thus launching the procedure. This module has just one cognitive effect, which is that it produces an expectation of food, and one behavioral effect, which is salivating. In a cognitivist perspective, this is an inferential module all the same: its job is to derive a relevant conclusion (that food is coming) from an observation (that the bell is ringing). This reflex inference is cognitively sound as long as the bell–food regularity is maintained in the environment.

  About events in a wholly chaotic world, no relevant inference could ever be drawn. Logic would be pointless. Probabilities would be of no help. What makes relevant inferences possible—be they those of a physicist or those of a dog—is the existence in the world of dependable regularities. Some regularities, like the laws of physics, are quite general. Others, like the bell–food regularity in Pavlov’s lab, are quite transient and local. It is these regularities—the general and the local ones—that allow us, nonhuman and human animals, to make sense of our sensory stimulation and of past information we have stored. They allow us, most importantly, to form expectations about what may happen next and to act appropriately. No regularities, no inference. No inference, no action.

  Animals, including humans, have evolved to take advantage of regularities in their environment. They have not evolved to attend to all regularities or to regularities in general. Attempting to do so would be an absurd waste of time and energy. Rather, animals take into account only regularities that, sometimes directly and more often indirectly, matter to their reproductive success.

  Animals that move around exploit, to begin with, physical features of their environment that can help or hinder their locomotion. Foraging animals exploit regularities relevant to their finding food. Prey exploit regularities in the behavior of their predators; predators, in that of their prey. Sexually reproducing animals exploit regularities in the behavior of potential mates. Members of social species exploit regularities in the behavior of conspecifics. And so on. Even humans, whose curiosity may seem boundless and who hoard vast amounts of information that may never turn out to be of any use, ignore many regularities in their environment. You are likely, for instance, to be aware of more regularities in the behavior of mosquitoes than in that of dust bunnies even if there are more dust bunnies than mosquitoes near you. If you were immune to mosquito bites and allergic to dust bunnies, it might be the other way around.

  The fact that relevant inferences must exploit empirical regularities is, of course, compatible with the classical approach to inference. The classical method relies on formal procedures, or general inference rules, that apply to representations. The way to exploit empirical regularities in this framework is to represent them and to use these representations of regularities as major premises in inferences. “If … then …” statements (such as “If it is a snake, it is dangerous”) provide a simple format for representing many regularities and for combining these general representations with representations of particular facts (such as “It is a snake”). From such a combination of general and particular (or major and minor) premises, formal rules may derive relevant conclusions (for instance, a so-called modus ponens rule would derive, “it is dangerous”). Alternatively, not just some but all regularities can be represented in probabilistic terms, and rules of probabilistic inference can then be applied to these representations.
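The classical picture described above can be made concrete with a minimal sketch, not drawn from the book itself: regularities are stored as statement-like “if … then …” premises, and a single general rule (modus ponens) is applied to whatever premises happen to be in the database. The function name and the toy premises are illustrative only.

```python
# Minimal sketch of the classical picture: regularities represented as
# (antecedent, consequent) premises, exploited by one general rule.

def modus_ponens(conditionals, facts):
    """Repeatedly apply modus ponens: from 'if A then B' and 'A', derive 'B'.

    Returns only the newly derived conclusions."""
    derived = set(facts)
    changed = True
    while changed:  # keep applying the rule until nothing new follows
        changed = False
        for antecedent, consequent in conditionals:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived - set(facts)

premises = [("the bell is ringing", "food is coming"),
            ("it is a snake", "it is dangerous")]
print(modus_ponens(premises, {"the bell is ringing"}))
# → {'food is coming'}
```

The point of the sketch is that the same rule works on any premise in the database; the knowledge and the procedure are cleanly separated, which is exactly what a dedicated module, as described earlier, does not do.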

  Exploiting a large database of representations of regularities and of particular facts by means of a small set of formal inference rules makes for a formally powerful inferential system. Arguably, anything that can be inferred at all can be inferred in that way. Don’t assume, however, that such power and generality make for an optimal—or even a superior—inferential system, one that natural selection should have favored.

  The alternative to drawing inferences by means of logical or probabilistic methods working across the board in the same way is to use many specialized modules, each taking advantage of a given regularity by means of a procedure adjusted to the task.12 This is what presumably happens, for instance, in species that have an automatic fear of snakes (whether innate or acquired). A specialized inferential procedure takes as input the perception of a snake in the environment and produces as output a fear response (with its cognitive and behavioral aspects). Such a procedure relies neither on a premise describing the regularity that snakes are dangerous nor on a formal rule of conditional inference. It directly produces the right response when a snake has been detected, and otherwise it does nothing.

  Procedures that exploit a regularity don’t appear in evolution or in cognitive development by magic. They are biological or cognitive adaptations to the existence and relevance of the regularity they exploit. They contain, in that sense, information about the regularity (just as a key contains information about the lock it opens, or antibodies contain information about the antigens they neutralize).

  What, then, is the difference between the representation of a regularity and a procedure that directly exploits it if both the representation and the procedure somehow contain information about the regularity? Here is the answer. The representation of a regularity doesn’t do anything by itself, but it provides a premise that may be exploited by a variety of inferential procedures. A dedicated procedure does something: given an appropriate input, it produces an inferential output. What a dedicated procedure does not do is make the information it exploits available for other procedures. So, for instance, if you have two representations, “If it’s a snake, it is dangerous” and “If it is a scorpion, it is dangerous,” then formal rules may allow you to infer “Snakes and scorpions are dangerous” or “There are at least two species of dangerous animals.” On the other hand, you might have two danger-detecting procedures, one for snakes and the other for scorpions, and be unable to put the two together and make such simple inferences.
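The contrast drawn in this paragraph can be sketched in a few lines of code, purely as an illustration (the names are hypothetical, not the authors’): a dedicated procedure maps input to response with the regularity baked in and inaccessible, whereas a representation is data that general rules can combine with other representations.

```python
# Dedicated procedures: the snake-danger and scorpion-danger regularities
# are baked into the code and cannot be inspected or combined.
def snake_reflex(percept):
    return "fear response" if percept == "snake" else None

def scorpion_reflex(percept):
    return "fear response" if percept == "scorpion" else None

# Representations: the same information held as data, available as
# premises for any general procedure that cares to operate on it.
beliefs = [("snake", "dangerous"), ("scorpion", "dangerous")]

# A general procedure can now combine the two premises, supporting
# conclusions like "snakes and scorpions are dangerous" or "there are
# at least two species of dangerous animals".
dangerous_kinds = {kind for kind, prop in beliefs if prop == "dangerous"}
print(sorted(dangerous_kinds))
print(len(dangerous_kinds) >= 2)
```

Each reflex works on its own input, but nothing in the system can put the two together; only the representational format makes the combination inference available.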

  Note that a cognitive system can contain the same information twice: in a procedure that directly exploits the information, and in a representation that serves as a premise for other kinds of procedures—you may have both a reflex fear of snakes and the knowledge that snakes are dangerous.

  Which of the two methods, exploiting regularities through specific procedures or through representations, is better? So put, the question is meaningless. What is better depends on costs and benefits that may vary across organisms, environments, situations, and purposes. When the purpose of an organism is to avoid being harmed by snakes, then a fast, reflex-like specialized module is likely to be the best option. When its purpose is to gain general knowledge about snakes, general statement-like representations and more formal argument patterns might be the way to go.

  There is no evidence that other animals are interested in any form of general knowledge (but let’s keep an open mind about the possibility). In the case of humans, all of them are definitely interested in not being harmed by snakes (and in other types of specific knowledge with practical import), and most are also interested in some general knowledge about snakes without immediate concern for its practical import. They want not just to exploit regularities but also to represent them. Does this mean that humans are better off using just the classical method? Or both methods? Or is, as we will suggest, something merely resembling the classical method itself modularized in the human cognitive system?

  There are many relevant arguments in the controversies about modularity purporting to show that human inference is basically classical or basically modular. While we are more swayed by arguments in favor of a modular view (and have contributed arguments of our own),13 we strongly feel that the debate suffers from pitting against one another mere sketches of two alternative accounts.

  The classical approach has been around for a much longer time and, as a result, is much more developed both from a formal and from an experimental point of view. What remains quite sketchy, not to say problematic, in the classical picture, however, is the way it explains, or fails to explain, how human reasoning may have evolved in the history of species, how it develops in individuals, and how it succeeds in producing just the inferences that are relevant in a given situation rather than producing all the mostly irrelevant inferences it is capable of producing. (This is the so-called frame problem, which doesn’t arise, or at least not to the same degree, in a modular system.) What remains sketchy at best is also the way the classical picture tries to explain why people who reason from the same premises commonly arrive at divergent or even contradictory conclusions.

  The way we aim here to contribute to the debate is not by rehashing it but by fleshing out the modular picture and in particular by explaining how human reason fits into it.

  6

  Metarepresentations

  Is the mind really just an articulation of many modules? Animal minds, perhaps, but, critics argue, surely not the human mind! Animal inferences might be exclusively performed by modules that exploit regularities without ever representing them. Humans, on the other hand, are capable not just of exploiting but also of representing many empirical regularities. Regularities in the world aren’t just something humans take advantage of, as do other animals; they are also something that humans think and talk about. Humans, moreover, are capable of consciously using representations of empirical regularities to discover even more general regularities. We are not disputing this. How could we? After all, it is by exercising this capacity that we scientists make a living.

  More generally, doesn’t the very existence of reasoning demonstrate that humans are capable of going well beyond module-based intuitive inference? Doesn’t reason stand apart, above all, from these specialized inference modules? Don’t be so sure. Reasoning, we will argue, is a form of intuitive inference.

  The classical contrast between intuition and reasoning isn’t better justified than the old hackneyed contrast between animals and human beings (and its invocation of reason as something humans possess and beasts don’t). To contrast humans not with other animals but simply with animals is to deprive oneself of a fundamental resource to understand what it is to be human and how indeed humans stand out among other animals. Similarly, to contrast reason with intuitive inference in general rather than with other forms of intuitive inference is to deprive oneself of the means to understand how and why humans reason.

  Folk Ontology

  If reason is based on intuitive inference, what, you may ask, are the intuitions about? The answer we will develop in Chapters 7, 8, and 9 is that intuitions involved in the use of reason are intuitions about reasons. But first, we need to set the stage.

  Intuitions about reasons belong to a wider category: intuitions about representations. The ability to represent representations with ease and to draw a variety of intuitive inferences about them may well be one of the most original and characteristic features of the human mind. In this chapter, we look at these intuitions about representations.

  Humans have a very rich “folk ontology.” That is, they recognize and distinguish many different basic kinds of things in the world, and they do so intuitively, as a matter of common sense. Folk ontology contrasts with scientific ontology, much of which is neither intuitive nor commonsensical at all. As humans grow up, their folk ontology is enriched and modified under the influence of both direct experience and cultural inputs. It may even be influenced by scientific or philosophical theories. Still, the most basic ontological distinctions humans make are common to all cultures (and some of these distinctions are, no doubt, also made by other animals).

  Everywhere, humans recognize inanimate physical objects like rocks and animate objects like birds; substances like water and flesh; physical qualities like color and weight; events like storms and births; actions like eating and running; moral qualities like courage and patience; abstract properties like quantity or similarity. Typically, humans have distinct intuitions about the various kinds of things they distinguish in their folk ontologies. This suggests—and there is ample supporting evidence—that they have distinct inferential mechanisms that to some extent correspond to different ontological categories.1

  Modules may evolve or develop, we have argued, when there is a regularity to be exploited in inference—and, needless to say, when it is adaptive to exploit it. Many of these regularities correspond to ontological categories. For instance, animate and inanimate objects move in quite different ways, and their movements typically present humans and other animals with very different risks and opportunities. There is a corresponding evolved capacity to recognize these two types of movements and treat them differently.

  Some relevant regularities, however, have to do less with basic properties of an ontological category than with a practical interest of humans (or of other animals). Various omnivorous animals, including humans, may have special modules for making inferences about the edibility of plants, for example, although edible plant is not a proper ontological category. Actually, modules are task specific, problem specific, or opportunity specific as often as they are domain specific, if not more often. Still, ontology is a terrain that inferential modules typically exploit.

  Not only do humans represent many kinds of things in their thoughts and in their utterances, they also recognize that they are doing so. In their basic ontology—and here humans seem quite exceptional—there are not only things but also representations of things. In fact, for most things humans can represent, they can also represent representations of those things. They can represent rocks and the idea of a rock, colors and color words, numbers and numerals, states of affairs (say, that it is raining) and representations of these states of affairs (the thought or the statement that it is raining).

  Representations of things are themselves a very special kind of things in the world. Representations constitute a special ontological category (with subcategories), for which humans have specialized inferential mechanisms. Representations of representations, also known as higher-order representations or as metarepresentations, play a unique role in human cognition and social life.2 Apart from philosophers and psychologists, however, people rarely think or talk about representations as such. They talk, rather, about specific types of representations.

  People talk about beliefs, opinions, hopes, doubts, fears, desires, or intentions—all of these occur in people’s minds and brains; they are mental representations. Or they talk about the public expressions of such mental representations, spoken or written utterances as well as gestures or pictures—these are public representations.

  Mental and public representations are concrete objects that are differently located in time and space. A belief is entertained at a given time in someone’s head; a spoken statement is an acoustic event that occurs in the shared environment of interlocutors. A written statement or a picture is not an event but an object in the environment. What makes these mental and public representations representations isn’t, however, their location, duration, or other concrete features. It is a more abstract property that in commonsense psychology is recognized as “meaning” or “content.” When we say that we share someone’s belief, what we mean is that we have beliefs of closely similar content. When we say of someone that she expressed her thoughts, what we mean is that the meaning of what she said matched the content of what she thought.

  Often, when people think or talk about a representation, they consider only its content, and they abstract away from the representation’s more concrete properties. They may say of an idea that it is true, contradictory, confused, profound, or poetic without attributing it to anyone in particular either as a thought or as a statement. When they do so, what they talk about are representations considered in the abstract (or “abstract representations” for short). Cultural representations such as Little Red Riding Hood, the Golden Rule, or multiplication tables are, most of the time, considered in the abstract, even though they must be instantiated in mental and public representations in order to play a role in human affairs.

  Since representations are recognized in our commonsense ontology, the question arises: What cognitive mechanisms do we have, if any, for drawing inferences about them? What kinds of intuitions do we have about representations? As we saw, there are several kinds of representations, each with distinct properties. There is no a priori reason to assume that humans have a module for drawing inferences about representations in general. It is not clear what regularities such a module might exploit. On the other hand, various types of representations present regularities that can be exploited to produce different kinds of highly relevant inferences.

 
