by Morton Hunt
We seem to be at the top of the ninth, score tied, and will have to see how the game plays out.
Now let us return to the story of cognitive psychology and look more closely at several of its major themes of recent decades.
Memory
In the 1960s, the cognitive revolution rapidly won the allegiance, at least in academia, of some senior psychologists, most junior ones, and most graduate students of psychology. At first, they concentrated on perception, the first step of cognition, but fairly soon they shifted their attention to the uses the mind makes of perceptions—its higher-level mental processes. By 1980, John Anderson, a theorist of those processes, defined cognitive psychology as the attempt “to understand the nature of human intelligence and how people think.”31
In information-processing theory, the essential first step is the storing of incoming data in memory, whether for part of a second or for a lifetime. As James McGaugh said in a 1987 lecture:
Memory is essential for our behavior. There is nothing of significance that is not based fundamentally on memory. Our consciousness and our actions are shaped by our experiences. And, our experiences shape us only because of their lingering consequences.32
How crucial memory is to thought is painfully apparent to anyone who has known a person suffering from advanced Alzheimer’s disease. He may frequently forget what he wants to say partway through a sentence, get lost walking down the driveway to his mailbox, fail to recognize his children, and become upset by the unfamiliarity of his own living room.
In 1955—before the start of the cognitive revolution—George Miller had given an address at a meeting of the Eastern Psychological Association that has been called a landmark for cognitive theorists working on memory. In his typically breezy manner, Miller called the talk “The Magical Number Seven, Plus or Minus Two,” and began by saying, “My problem is that I have been persecuted by an integer.” The integer was 7, and what seemed to Miller both magical and persecutory about it was, as many experiments had shown, that it is the number of digits that one can usually hold in immediate memory.33 (It is easy to remember briefly, after a moment’s study, a number like 9237314 but not one like 5741179263.)
It is both noteworthy and mysterious that immediate memory, the limiting factor in what we can pay attention to, is so tiny. The limitation serves a vital purpose: it drastically prunes the incoming data to what the mind, at any moment, urgently needs to attend to and make decisions about, a function that undoubtedly helped our primitive ancestors survive life in the jungle or the desert.34 But it raises perplexing questions. How can so small a field of attention handle the flood of perceptions we must attend to when driving a car or skiing? Or the welter of sounds and meanings when someone is talking to us—or when we are trying to say something to them?
One answer, Miller said, making good use of an idea that had lain fallow in psychology for a century, is that immediate memory is limited not to seven digits but to seven items, more or less: seven words or names, for instance, or “chunks” such as FBI, IBM, NATO, telephone area codes, or familiar sayings, all of which contain far more information than single digits but are as easily remembered.
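Chunking can be illustrated as a recoding step. The sketch below is not from Miller; it simply shows how recognizing familiar multi-digit patterns collapses an eleven-digit string, well beyond the 7 ± 2 limit, into three remembered items. The particular patterns and labels are invented for illustration.

```python
def chunk(digits, chunks):
    """Greedily recode a digit string into known multi-digit chunks.

    `chunks` maps familiar patterns (an area code, an exchange, a
    memorable line number) to a single remembered label.
    """
    items, i = [], 0
    while i < len(digits):
        for pattern, label in chunks.items():
            if digits.startswith(pattern, i):
                items.append(label)
                i += len(pattern)
                break
        else:
            items.append(digits[i])  # an unrecognized digit stays a single item
            i += 1
    return items

# Read digit by digit, "18005551212" is 11 items -- beyond Miller's 7 +/- 2.
known = {"1800": "toll-free prefix", "555": "exchange", "1212": "line"}
print(chunk("18005551212", known))  # 3 chunks instead of 11 digits
```

The point of the sketch is that capacity is counted in chunks, not in raw symbols: the same string costs eleven slots without the recoding dictionary and three with it.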
But even with chunking, the capacity of immediate memory is insignificant compared with the enormous amount of material—everyday experiences, language, and general information of all sorts—that we learn and store away in long-lasting memory and call up again as needed.
To explain this disparity and determine how memory works, cognitive psychologists conducted a great many experiments during the 1960s, 1970s, and 1980s; the findings, pieced together, gave shape to an information-processing picture of human memory. In it, memory consists of three forms of storage, ranging from a fraction of a second to a lifetime. Experiences or items of information needed only for an instant fade away as soon as used, but those needed longer are transformed and held for longer, or even worked into the semipermanent or permanent register of long-term memory. Researchers and theorists portrayed the three types and the transfer of information among them in flow charts something like the one on p. 608.
The briefest form of memory consists of sensory “buffers” in which incoming sensations are first received and held. By means of the tachistoscope, researchers verified that buffers exist and also measured how long memories endure in them before disappearing. In a classic experiment in 1960, the psychologist George Sperling flashed patterns of letters, arranged in three short rows, on a screen before attentively watching volunteers.
The letters appeared for a twentieth of a second, too brief a time for the volunteers to have seen all of them, although immediately afterward they could write down the letters of any one line. (A tone, right after the flash, told them which line to record.) They could still “see” all three lines when they heard the tone, but by the time they had written down one line, they could no longer remember the others; the memory had vanished in less than a second. (Experiments by others yielded comparable results with sounds.) Evidently, incoming perceptions are stored in buffers, from which they vanish almost at once—fortunately, for if they lasted longer, we would see the world as a continuous blur.35
FIGURE 40
An information-processing model of human memory
Since, however, we need to retain somewhat longer the things we are currently concerned with, there must be another and longer-lasting form of temporary storage. When we pay attention to material in a sensory buffer, we process it in any of several ways. A digit becomes not just a perceived shape but a symbol—a 4 gets a name (four) and a meaning (the quantity it stands for); similarly, words we read or hear get meanings. This processing transfers whatever we are attending to from the buffers to the immediate or short-term memory that Miller was talking about.
In lay usage, short-term memory refers to the retention of events of recent hours or days, but in technical usage it denotes whatever is part of current mental activity but is not retained after use. This form of memory is brief. We have all looked up a phone number, dialed it, gotten a busy signal, and had to look up the number again to redial it. Yet we can retain it for many seconds or even minutes by continuously repeating it to ourselves—psychologists call this activity “rehearsal”—until we have used it.
To measure the normal duration of short-term memory, therefore, researchers had to prevent rehearsal. A team at Indiana University did so by telling their subjects that they were to try to remember a set of three consonants, a very easy task, but that as soon as they had seen them, they were to count backward by threes in time with a metronome; this preempted their attention and made rehearsal impossible. The researchers cut the volunteers’ backward counting short at different times to see how long they would retain the three consonants; none did so longer than eighteen seconds. Many later experiments confirmed that the decay rate of short-term memory is between fifteen and thirty seconds.36
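The three-store architecture described so far can be caricatured in a few lines of code. This is a toy sketch of the flow-chart model, not a claim about the actual mechanism; the class name and method names are invented, and the numeric parameters are simply the durations quoted in the text (sensory buffers fading in under a second, short-term decay of roughly fifteen to thirty seconds, Miller's seven-item capacity).

```python
class MemoryModel:
    """Toy three-store model: sensory buffer -> short-term -> long-term."""
    SENSORY_LIFETIME = 0.5   # seconds; Sperling-style buffer fade
    STM_LIFETIME = 20.0      # seconds; decay without rehearsal
    STM_CAPACITY = 7         # Miller's magical number

    def __init__(self):
        self.sensory = {}    # item -> arrival time
        self.stm = {}        # item -> time of last rehearsal
        self.ltm = set()     # effectively permanent

    def perceive(self, item, now):
        self.sensory[item] = now

    def attend(self, item, now):
        """Attention transfers an item from the buffer into short-term memory."""
        if now - self.sensory.get(item, float("-inf")) <= self.SENSORY_LIFETIME:
            if len(self.stm) >= self.STM_CAPACITY:
                # capacity is the bottleneck: displace the stalest item
                del self.stm[min(self.stm, key=self.stm.get)]
            self.stm[item] = now

    def rehearse(self, item, now):
        if item in self.stm:
            self.stm[item] = now          # rehearsal resets the decay clock

    def consolidate(self, item):
        """Further processing copies a short-term item into long-term memory."""
        if item in self.stm:
            self.ltm.add(item)

    def recall(self, item, now):
        in_stm = item in self.stm and now - self.stm[item] <= self.STM_LIFETIME
        return in_stm or item in self.ltm

m = MemoryModel()
m.perceive("phone number", now=0.0)
m.attend("phone number", now=0.2)            # within the buffer's half-second
print(m.recall("phone number", now=10.0))    # True: still in short-term memory
print(m.recall("phone number", now=60.0))    # False: decayed, never consolidated
```

The looked-up-and-lost phone number falls out exactly as in the anecdote above: without rehearsal or consolidation, recall fails once the short-term decay window closes.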
Later, other studies distinguished between two kinds of short-term memory (not shown in the above diagram). One is verbal: the immediate memory for numbers, words, and so on that we have been discussing. The second is conceptual: the memory of an idea or meaning conveyed in a sentence or other expression of several parts (an algebraic equation, for instance). In a 1982 experiment, subjects were shown sentences, a word at a time, at a tenth of a second per word; they could easily remember plausible (though not necessarily true) sentences like this:
Tardy students annoy inexperienced teachers.
But they fared badly with nonsensical sentences of the same length, like:
Purple concrete trained imaginative alleys.37
A number of studies showed that we easily retain the message of a sentence in short-term memory but swiftly forget its exact words. Similarly, we retain in long-term memory for months, years, or a lifetime the content or meaning of some conversations we have had and books we have read, the gist of courses we have taken, and innumerable facts we have learned, but none, or at most a few, of the exact words in which any of these were couched. The mass of material stored away in this fashion is far larger than most of us can imagine: John Griffith, a mathematician, calculated that the lifetime capacity of the average human memory is up to 10^11 (one hundred billion) bits,* or five hundred times as much information as is contained in the Encyclopaedia Britannica.38
New information in short-term memory is forgotten after we use it, unless we make it part of long-term memory by subjecting it to further processing. One form of processing is rote memorizing, as schoolchildren memorize multiplication tables. Another is the linking of new information to some easily remembered structure or mnemonic device, like a singsong jingle (the preschool alphabet song) or a rhyming rule (“When the letter C you spy, / Put the E before the I”).
But a far more important kind, as became clear in the research performed in the 1960s and 1970s, is “elaborative processing,” in which the new information is connected to parts of our existing organized mass of long-term memories. We splice it into our semantic network, so to speak. If the new item is a mango and we have never seen one before, we link the word and concept to the appropriate part of long-term memory (not a physical location—ideas and images are now thought to be scattered throughout the brain—but a conceptual one: the category “fruit”), along with the mango’s visual image, feel, taste, and smell (each of which we also link to the categories of images, tactile qualities, and so on), plus what we learn about where it grows, what it costs, how to serve it, and more. In the future, when we try to think of a mango, we retrieve the memory in any one of many ways: by recalling its name, or thinking about fruit, or about fruit with a green skin, or about yellow sweet slices, or any other category or trait with which it is linked.
Much of what was learned about how all these kinds of information are organized was the product of reaction-time experiments such as asking subjects to name, in a brief period of time, as many things as they can that are red, or that are fruit, or that start with a given letter. Using that technique, Elizabeth Loftus found that in one minute volunteers could, on average, name twelve instances of “bird” but only nine of “yellow.” Her conclusion was that we cannot readily look directly in memory for examples of a property but instead locate categories of objects (birds, fruit, vegetables), and scan each for that property.39
Similarly, as Loftus and a colleague, Allan Collins, found, it takes people longer to answer “true” or “false” to the statement “An ostrich is a bird” than to the statement “A canary is a bird.” The implication: A canary is a more typical bird than an ostrich, is closer to the center of the category, so it requires less time to identify. Collins and Loftus, on the basis of such data, symbolically portrayed long-term semantic memory as an intricate network that is hierarchical (a general category is surrounded by specific instances) and associative (each instance is linked to a number of traits). They envisioned it as shown on p. 611.40
FIGURE 41
One portrayal of the long-term semantic memory network
This is only a minuscule sample of the semantic memory network. Every node shown here is connected to many other chains of nodes not shown: “Swim” might be linked to “cetaceans,” “human swimmers,” “sports,” “healthful exercises,” and each of those to other instances, characteristics, traits, and so on, and on.
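A semantic network of this kind is, computationally, just a graph of labeled links, and the retrieval strategy Loftus inferred (locate a category, then scan its members for a trait) falls out naturally from that structure. The sketch below is illustrative: the concepts and link labels are loosely modeled on Figure 41, not copied from Collins and Loftus's actual diagram.

```python
# A toy associative network: each concept carries labeled links
# ("isa" for category membership, "is" for traits, and so on).
network = {
    "canary":      {"isa": {"bird"}, "has": {"wings", "feathers"},
                    "is": {"yellow"}, "can": {"sing", "fly"}},
    "ostrich":     {"isa": {"bird"}, "has": {"wings", "feathers"},
                    "is": {"tall"}, "can": {"run"}},
    "salmon":      {"isa": {"fish"}, "is": {"red"}, "can": {"swim"}},
    "fire engine": {"isa": {"vehicle"}, "is": {"red"}},
}

def members(category):
    """All concepts linked to `category` by an 'isa' edge."""
    return [c for c, links in network.items()
            if category in links.get("isa", set())]

def instances_with_trait(category, trait):
    """Loftus-style retrieval: find the category's members, then scan
    each one for the trait, rather than searching traits directly."""
    return [c for c in members(category)
            if trait in network[c].get("is", set())]

print(members("bird"))                         # ['canary', 'ostrich']
print(instances_with_trait("bird", "yellow"))  # ['canary']
```

Note that there is no direct index from “yellow” to its instances: to answer “name yellow things,” the program, like Loftus's subjects, must pick categories and scan them, which is why naming instances of a color is slower than naming instances of a category.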
A much later and much more detailed representation of the memory network relating to birds is bewilderingly complex; it is on page 612, as FIGURE 42, for those who care to puzzle it out.
Memory research has been so far-ranging and multifaceted over the last several decades that we must limit ourselves now to a handful of brief reports of major research findings and theories, and then move on.
Memory systems: The memory system portrayed in FIGURE 40, on p. 608, is now seen as too simple. According to the results of many studies, there are a number of interacting memory systems that encode and store different kinds of information in different ways. The memories stored about how to swim, drive a car, or sail a boat are very different from those concerning the names and identities of people you know, how to perform arithmetical procedures, or what a collie looks like. Each of these kinds of memory, and many others, requires its own form of processing and storage, and they differ in the amount and kind of effort required to enter and retain material in long-term memory.
FIGURE 42
Network and connectionist representations of concepts relating to birds
Moreover, memory researchers distinguish among types of memory in other ways. Explicit memory refers to information, knowledge, and personal experiences that we can consciously bring to mind; implicit memory refers to information that is available without conscious effort, including motor skills, automatic responses (such as avoiding bumping into others on the sidewalk), and built-in attitudes and reactions to people, objects, and situations. Each of these appears to require a different memory system.41
Other studies have investigated the differing processes of recognition and recall—a distinction familiar enough in everyday experience (we all recognize a great many words that we cannot easily summon up voluntarily, or cannot summon at all). In a socially valuable application of the difference, a series of studies tested whether witnesses to a crime (a staged one before groups of students who were not told what was going on until later) would be more likely to identify the actual culprit in a lineup or by seeing a number of suspects one at a time. The latter method proved so much the better that many police departments are now changing their standard lineup procedures.42
Cognitive neuroscientists have lately done brain scans during different kinds of memory activity and come up with an answer to an old question: Where are memories stored? The answer, in the past, has vacillated between “locally” and “widely distributed.” Brain scans now show that “widely distributed” is the answer—and that different kinds of memories are differently distributed.43
Categorization: Much research indicates that the human mind has a tendency to spontaneously group similar objects in memory and, from their similarities, develop general concepts or categories. Even infants only a few months old seem to do simple categorizing. One research team showed four-month-old babies patches of varied blues, greens, yellows, and reds. After seeing a number of patches of one color group, the babies showed a preference for a patch of any other color. The conclusion: Hue categorization is either innate or develops soon after birth.44
Many other studies have documented how, as children acquire language, they gradually develop such categories as “animal” after experiences of dogs, cats, squirrels, and others. Parents, to be sure, teach these concepts to their children, but in part the tendency seems to be built in. It is so general among all people as to be presumed an innate human trait. The anthropologist Brent Berlin found that people in a dozen different primitive societies group plants and animals in remarkably similar fashion, namely, hierarchically, starting with subgroups similar to biological species, combining these in larger headings similar to biological genera, and lumping these together in categories similar to biological plant and animal kingdoms.45
The ability to categorize was probably selected by evolution. It has survival value, since from such groupings we can make valid inferences about things that are new to us. Rochel Gelman and a colleague showed subjects pictures of a flamingo, a bat, and a blackbird. The blackbird was portrayed so that it looked much like the bat. Subjects were told about the flamingo, “This bird’s heart has a right aortic arch only,” and about the bat, “This bat’s heart has a left aortic arch only.” Then they were asked about the blackbird, “What does this bird’s heart have?” Almost 90 percent answered “right aortic arch only,” correctly basing their answer not on the visual similarity of bat and blackbird but the common membership in the bird category of flamingo and blackbird. Even four-year-old children, when given a similar but simpler test, based their answers almost 70 percent of the time on category membership.46
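Gelman's finding amounts to an inference rule: properties project along category membership, not perceptual similarity. A minimal sketch, using only the animals, categories, and facts described in the experiment above (the function and data-structure names are mine):

```python
# Category membership and the taught facts from the Gelman experiment.
category = {"flamingo": "bird", "blackbird": "bird", "bat": "mammal"}
facts = {
    "flamingo": "right aortic arch only",
    "bat": "left aortic arch only",
}

def infer(animal):
    """Project a known fact from any animal in the same category."""
    for known, fact in facts.items():
        if category[known] == category[animal]:
            return fact
    return None

# The blackbird looks like the bat, but inherits from its fellow bird.
print(infer("blackbird"))  # 'right aortic arch only'
```

The rule ignores appearance entirely; it consults only the “isa” structure, which is exactly the basis on which most of Gelman's subjects answered.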
Representation: Researchers were long at odds about the form in which material is stored in long-term memory. Some believed it is represented both in images and words and that there is communication between the two data banks. Others, drawing on information theory and the computer model, argued that information is recorded in memory only in the form of “propositions.” A proposition is a simple “idea unit” or bit of knowledge embodied in a conceptual relationship like that between bat and wings (a bat has them) or bat and mammal (a bat is one).
In the first view, a bat would be recorded in memory as an image, along with verbal statements about it; in the second view it would be recorded in the form of relationships (as in the bits of semantic networks in the figures above) which, though not verbal, are equivalent to “bat has wings,” “salmon is red,” and so forth. Another example of the propositional view is seen in these sentences:
The princess kissed the frog,
and its passive version,
The frog was kissed by the princess,
which mean the same thing; they are verbal expressions, differently focused, of the same proposition or unit of relationship knowledge.47
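The propositional claim can be made concrete by treating a proposition as a relation triple. The toy parser below handles only these two example sentences; it is a sketch of the idea that active and passive surface forms reduce to one underlying unit of relationship knowledge, not a real parser.

```python
def parse(sentence):
    """Reduce the two example sentences to a (relation, agent, patient)
    triple. Crude and example-specific by design."""
    words = sentence.lower().strip(".").split()
    if "was" in words:                          # passive: patient ... by ... agent
        patient = words[1]
        agent = words[words.index("by") + 2]
    else:                                       # active: agent verb ... patient
        agent, patient = words[1], words[4]
    return ("kiss", agent, patient)

active = parse("The princess kissed the frog.")
passive = parse("The frog was kissed by the princess.")
print(active)             # ('kiss', 'princess', 'frog')
print(active == passive)  # True: one proposition, two surface forms
```

Both sentences collapse to the same triple, which is precisely the sense in which they “mean the same thing” while differing in focus.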
The proponents of each view have good evidence to back them up. The “mental rotation” experiments of Roger Shepard that we saw earlier indicate that we see objects “in the mind’s eye” and deal with those images as if they were three-dimensional objects. Later studies by others confirmed and extended this finding. Several years ago, Stephen Kosslyn, who has long explored mental imagery, took a different tack: He had subjects memorize a map of a small roughly pear-shaped island with various things located here and there, among them a hut at one end, a lake nearby, a cliff somewhat farther off, a large rocklike object at the farthest end, and so on. Later, his subjects were asked to close their eyes, summon up the remembered image, focus on one location such as the site of the hut, and then find another named site and push a button as soon as they found it. The times of each mental search were recorded; most remarkably, the farther the second location was from the first, the longer it took them to find it. Obviously, they were scanning across the mental image.48
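Kosslyn's linear relation between distance and scanning time is easy to express numerically. The map coordinates and the seconds-per-unit rate below are invented for illustration; only the proportionality of time to distance comes from the study.

```python
import math

# Hypothetical coordinates for the island's landmarks (not Kosslyn's map).
landmarks = {"hut": (0, 0), "lake": (2, 1), "cliff": (5, 3), "rock": (9, 4)}

def scan_time(start, target, seconds_per_unit=0.05):
    """Model scanning time as proportional to straight-line distance."""
    return seconds_per_unit * math.dist(landmarks[start], landmarks[target])

# Farther targets take proportionally longer to "find" in the image.
print(scan_time("hut", "lake") < scan_time("hut", "cliff") < scan_time("hut", "rock"))  # True
```

Under this model, a site twice as far away takes twice as long to locate, which is the pattern Kosslyn's button-press timings revealed.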