by Dan Sperber
The best explanation of many illusions is that they arise from the inferences we automatically draw to make sense of our sensations. In most cases these inferences are correct: when our position relative to objects around us changes, either we or they are moving. If it seems to us that we are not moving, it is reasonable—even if not fail-safe—to infer that they are.
Figure 8. Monsters in a tunnel.
In modern psychology, illusions provide crucial evidence for studying perception. We saw the remarkable example of Adelson’s checkerboard illusion in Chapter 1. Roger Shepard created another striking visual illusion (Figure 8).7 Look at this image of one monster chasing another in a tunnel. The chaser looks bigger than the chased, right? In fact, as you can measure yourself, their two images are exactly of the same size. They therefore project same-size images on the retina. Why, then, does one look bigger than the other? Because we don’t just read the size of the object perceived off retinal stimulation; we automatically use contextual evidence relevant to assessing an object’s distance in order to infer its size. In the picture, the images of the two monsters are at the same distance from our eyes. Still, because we spontaneously interpret the picture as representing a three-dimensional scene where the chaser is behind the chased, hence farther away from us, we see him as bigger. It so happens that, when we look at this particular picture, the information we use is misleading and our assumptions are false; we are prey to a visual illusion. The very fact, however, that this illusion is surprising tells us how confidently, in normal conditions, we rely on such unconscious inferences to provide us with true perceptions. In most cases, we are right to do so.
The inferential processes involved in perception are typically so fast—their duration is measured in milliseconds—that we are wholly unaware of them. Perception seems to be some kind of immediate and direct access to reality. Inference, which involves both the use of information about things distant in space or time and the risk of error, seems out of place in this direct relationship. Well, there is a risk of error in perception; misperceptions and illusions do occur; our perceptions are informed, and sometimes misinformed, by previous experience. All this goes to show that our “intuitions” about what it is to perceive shouldn’t be given more authority than, say, our intuitions about what it is to digest or to breathe. The conscious intuitions—or “introspections”—we have about our mental and other biological processes do not provide a reliable description, let alone an explanation of these processes. Rather, these intuitions—like all intuitions—are themselves something to be explained.
Still, you might object, how useful can it be to put together under a single category of “inference” wholly unconscious, superfast processes in perception and the conscious and slow processes—sometimes painfully slow—that occur in reasoning? Isn’t this as contrived as it would be to insist that, say, jumping and flying should be studied as two examples of one and the same kind of process of “moving in the air”? Actually, this objection doesn’t work too well. There are fundamental discontinuities between jumping and flying, whereas automatic inference in perception and deliberate inference in reasoning are at the two ends of a continuum. Between them, there is a great variety of inferential processes doing all kinds of jobs. Some are faster, some slower; they involve greater or lesser degrees of awareness of the fact that some thinking is taking place.
That all inferential mechanisms stand on a continuum doesn’t mean that they are the same. What it suggests, rather, is that in spite of sharing the function of drawing inferences, they may well be quite different from one another. And reasoning? Reasoning is only one of these many mechanisms.
Inferences We Are Unaware Of
When you see the picture of the two monsters in the tunnel, you do not merely register visible features of the scene (which, as we pointed out, already involves some inference). You also interpret what you see. For instance, you assume that the two monsters are running (rather than, say, standing still on one foot). You assume that one is chasing the other (rather than trying to copy his movements). You assume that the chaser has hostile intentions and that the chased is afraid. Even though the two faces are identical, you interpret them differently.
Perception may involve some degree of freedom in interpreting what exactly it is that we perceive. While we just see one monster as bigger than the other and are not that easily persuaded that we are mistaken, we are more willing to entertain the idea that, rather than one monster chasing the other (our first interpretation), the two monsters might both be chased by a third, even bigger monster who is off the picture. We came to our first interpretation spontaneously, but that this is an interpretation and not a mere registration of fact is something of which we can easily be made aware.
Memory, too, involves inference. The expression “stored in memory,” evoking as it does a storage place where things can be safely kept to be taken out when needed, turns out to be quite misleading. The British psychologist Frederick Bartlett published in 1932 a still-influential book, Remembering, where he introduced a now classical distinction between reproductive and reconstructive memory.8 If your task is to remember a random list of numbers, you learn them by rote and you indeed try to reproduce the list when you have to. But this is not at all typical of how memory works most of the time. As Bartlett wrote, we should get rid of the notion that “memory is primarily or literally reduplicative, or reproductive. In a world of a constantly changing environment, literal recall is extraordinarily unimportant.”9 So how do we remember?
Just as the mechanisms of perception are often best revealed by means of perceptual illusions, the normal mechanisms of memory are often revealed by tricking them into producing false recollections. Brent Strickland and Frank Keil, for instance, showed people short videos of someone kicking or throwing a ball.10 In half of the videos, the moment of contact (or release) was omitted. Immediately after each video, participants were shown a series of still pictures and had to indicate whether each picture had appeared in the video. When the whole sequence of events in the video, and in particular the movement of the ball, had implied that contact must have taken place, a majority of participants “remembered” having seen the contact event that actually had not been shown. What must have happened is that people inferentially reconstructed and “remembered” the sequence of events that had to have taken place, rather than what they had actually seen.
In a similar vein, Michael Miller and Michael Gazzaniga presented participants in an experiment with detailed color pictures of characteristic scenes of American life, such as a grocery store, a barnyard, or a beach scene.11 The original pictures contained many typical items. In the beach scene, for instance, there were a beach ball, beach blankets, beach umbrellas, and the lifeguard’s life preserver. The pictures that the participants actually saw were doctored: a pair of such typical objects had been removed (different pairs for different groups of participants). What Miller and Gazzaniga surmised was that people would “remember” items that they had not actually seen.
Half an hour after having seen the pictures, participants were read a list of items and asked whether these items had been in the pictures they saw. Indeed, they misremembered having seen, say, the umbrella or the life preserver, which had been deleted, almost as often as they remembered having seen a beach ball and blankets, which had actually been there. How can this be?
In all cases, true and false memory alike, recall involves inference—inference, for instance, about the kicking of a ball that explains its subsequent trajectory, or inference about what there “must have been” in that picture of a beach. Often the inference is wholly unconscious and recall seems immediate and effortless. Sometimes, however, there is a hesitation—was there really an umbrella in the picture?—which gets rapidly resolved one way or another. How? By means of inferences that are correct most of the time but not always.
In perception and memory, inference is always at work. Most of the time, we are wholly unaware of its role. It is as if what we perceive were immediately present to us, and as if what we remember were retrieved just as it had been stored. Still, not so rarely, we become aware of having interpreted what we see, or of having reconstructed what we remember. Perception and recall lose some of their apparent immediacy and transparency. In these cases, we are aware of the fact that our perceiving or our remembering involves some intuitive insight.
That inference can be more or less conscious—or is more or less likely to become conscious at some point—is even better illustrated by what happens in verbal comprehension. Suppose that you are sitting in a café and you overhear a woman at the next table say to the man sitting with her, “It’s water.” You have no problem decoding what this ordinary English sentence means, but still, you don’t know what the woman meant. As the philosopher Paul Grice insisted, sentence meaning and speaker’s meaning are two quite different things.12
The man may have pointed to a wet spot on his shirt, and she might be reassuring him that it is only water. She may be complaining that her tea is too weak by saying hyperbolically, “It’s water.” It could also be that her meaning has nothing to do with the immediate situation; they may have been discussing what poses the greatest problem to the planet, and she might be maintaining that it is the shortage of fresh water supplies; and so on.
The woman’s interlocutor, unlike you, understands her meaning. Not, however, because of a superior command of English. What he has and you don’t is relevant contextual knowledge, knowledge about what they had said before, about each other, and about whatever experiences and ideas they happen to share. From this contextual knowledge and from the fragmentary indication given by the linguistic meaning of the words she used, he is in a position to infer what she meant. For instance, if he knows she likes strong tea and sees her frown after having taken a first sip, he will as a matter of course understand her to mean that the tea is too weak.
Most of the time, the inferences involved in comprehension are done as if effortlessly and without any awareness of the process. It is as if we just picked up our interlocutor’s meaning from her words. At times, however, we hesitate. The man in our story may not have known that his companion liked her tea quite strong, and may have gone through a moment of puzzlement before grasping her meaning. He would have become aware, then, that he had to infer what she intended to convey. Comprehension always involves inference, even if, most of the time, we are not aware of it.
Intuitions
Intuitions contrast with ordinary perceptions, which we experience as mere recordings of what is out there without any awareness of the inferential work perception involves. In the illusion of the two monsters in a tunnel, for instance, seeing one as bigger than the other feels like a mere registration of a fact. On the other hand, interpreting the scene as one monster pursuing the other may be experienced as more active understanding. Asked why you believe one monster to be bigger than the other, you might answer, “I see it.” Asked why you believe that one is chasing the other, you might answer, “It seems intuitively obvious.”
Similarly, in the verbal exchange at the next table in the café, if the man interprets the woman’s statement “It’s water!” as meaning that the spot on his shirt is caused by a drop of water, it seems to him that he is merely picking her meaning from her words. If he furthermore interprets her to imply that, since it is merely water, he shouldn’t worry, then his understanding of this implicit meaning may well feel like an intuition.
Intuitions also contrast with the conclusions of conscious reasoning where we know—or think we know—how and why we arrive at them. Suppose you are told that the pictures of the two monsters in the tunnel are actually the same size; you measure them and verify that such is the case. You then accept this unintuitive conclusion with knowledge of your reasons for doing so. This is reasoned knowledge rather than mere intuitive knowledge.
Or the man in the café could reason: “Why is she telling me ‘It’s water’ with such a patronizing tone? Because she thinks I worry too much. Well, she is right—I was worrying about a mere drop of water! A drop of water doesn’t matter. It dries without leaving any trace. I shouldn’t worry so much.” When he comes to the conclusion that he shouldn’t worry so much, he pays attention to reasons in favor of this conclusion. Some kind of reasoning is involved.
A simple first-pass way to define intuitions is to say that they are judgments (or decisions, which can also be quite intuitive) that we make and take to be justified without knowledge of the reasons that justify them. Intuition is often characterized as “knowing without knowing how one knows.” Our conscious train of thought is, to a large extent, a “train of intuitions.” Intuitions play a central role in our personal experience and also in the way we think and talk about the mind in general, our “folk psychology.”
A common idea in folk psychology is that our many and varied intuitions are delivered by a general ability itself called “intuition” (in the singular). Intuition is viewed as a talent, a gift that people have to a greater or lesser degree. Some people are seen as more gifted in this respect, as having better intuition than others. It is a stereotype, for instance, that women are more intuitive than men. But is there really a general faculty or mechanism of intuition?
Perception is the work not of a single faculty but of several different perceptual mechanisms: vision, audition, and so on. That much is obvious. Ordinary experience, on the other hand, doesn’t tell us whether, behind our sundry intuitions, there is a single general faculty. The idea of intuition as a kind of faculty, however, isn’t supported by any scientific evidence. What the evidence suggests, rather, is that our intuitions are delivered by a variety of more or less specialized inferential mechanisms.
Say, then, that there are many inferential mechanisms that deliver intuitions. Have these mechanisms some basic features in common that differentiate them from inferential mechanisms of perception on one side, and from reasoning on the other side? Actually, intuitive inferences are generally defined by features they lack more than by features they possess. This comes out with characteristic clarity in Daniel Kahneman’s figure of the “three cognitive systems,” perception, intuition, and reasoning (the latter two being the two systems of dual process and dual system theory) (Figure 9).13
Reasoning, in this picture, is positively defined by properties of the process it uses: slow, serial, controlled, and so on. Perception is positively defined by properties of the contents it produces: percepts, current stimulation, stimulus-bound.
Intuition, on the other hand, is described as using the same kind of processes as perception, and producing the same kind of content as reasoning. While it may be handy to classify under the label “intuition” or intuitive inference all inferences that count neither as perception nor as reasoning, the category so understood is a residual one, without positive features of its own. This should cast doubt on its theoretical significance. Still, this needn’t be the end of the matter.
Figure 9. Daniel Kahneman’s “Three cognitive systems.”
If intuitions stand apart at least in folk psychology, it is not because they are produced by a distinct kind of mechanism—this is something folk psychology knows little or nothing about—but because they are experienced in a distinctive way. When we have an intuition, we experience it as something our mind produced but without having any experience of the process of its production. Intuitions, in other terms, even if they are not a basic type of mechanism, may well be a distinctive “metacognitive” category.
“Metacognition,” or “cognition about cognition,” refers to the capacity humans have of evaluating their own mental states.14 When you remember, say, where you left your keys, you also have a weaker or stronger feeling of knowing where they are. When you infer from your friend Molly’s facial expression that she is upset, you are more or less confident that you are right. Your own cognitive states are the object of a “metacognitive” evaluation, which may take the form either of a mere metacognitive feeling or, in some cases, of an articulated thought about your own thinking.
As the Canadian psychologist Valerie Thompson has convincingly argued, intuitions have quite distinctive metacognitive features.15 We want to make the even stronger claim that the only distinctive features that intuitions clearly have are metacognitive.
Intuitions are experienced as a distinct type of mental state. The content of an intuition is conscious. It would be paradoxical to say, “I have an intuition, but I am unaware of what it is about.” There is no awareness, on the other hand, of the inferential processes that deliver an intuition. Actually, the fact that intuitions seem to pop up in consciousness is part of their metacognitive profile. Intuitions are not, however, experienced as mere ideas “in the air” or as pure guesses. They come with a sense of metacognitive self-confidence that can be more or less compelling: intuitions are experienced as weaker or stronger. One has little or no knowledge of reasons for one’s intuitions, but it is taken for granted that there exist such reasons and that they are good enough to justify the intuition, at least to some degree. Intuitions also come with a sense of agency or authorship. While we are not the authors of our perception, we are, or so we feel, the authors of our intuitions; we may even feel proud of them.
So, rather than think of intuitions as mental representations produced by a special type of inferential process called “intuitive inference” or “intuition” (in Kahneman’s sense of process rather than product), it makes better sense to think of intuitive inferences as inferences, the output of which happens to be experienced as intuitions. “Intuitive inference,” in this perspective, stands between “unconscious inference” and “conscious inference.” These inferences are not distinguished from one another by properties of the inferential mechanisms involved but by the way the process of inference and its conclusion are or are not metacognized.