The Story of Psychology


by Morton Hunt


  —On the other hand, anthropologists have found that in many other cultures people have fewer color terms than English-speaking people but experience the world no differently. The Dani of New Guinea have only two color terms: mili (dark) and mola (light), but tests of speakers of Dani and other languages that lack many explicit color names have shown that their memory for colors and their ability to judge differences between color samples are much the same as our own. At least when it comes to color, they can think without words.63

  —The studies of children’s thinking, carried out by Piaget and other developmental psychologists, show strong interactions between language and thought. Hierarchical categorization, for one thing, is a powerful cognitive mechanism that enables us to organize and make use of our knowledge; if we are told that an unfamiliar item in an ethnic grocery store is a fruit, says Philip Lieberman, we know at once that it is a plant, edible, and probably sweet.64 This inferential capacity is built into the structure of language and acquired in the normal course of development. Studies show that children begin verbal categorization at about eighteen months, and that one of the results is the “naming explosion,” a phenomenon every parent has observed. Thus, says Lieberman, “particular languages do not inherently constrain human thought, because both capacities [language and thought] appear to involve closely related brain mechanisms.”65

  The physical locations of some of those brain mechanisms were pinpointed through the study of aphasia, a speech disorder caused by an injury to or lesion in a specific part of the brain. A lesion in Wernicke’s area, as we saw earlier, results in speech that is relatively fluent and syntactical but often nonsensical; victims either mangle or cannot find the nouns, verbs, and adjectives they want. Howard Gardner, a Harvard cognitive psychologist who has explored aphasia, has given this example, taken from a conversation he had with a patient:

  “What kind of work have you done, Mr. Johnson?” I asked.

  “We, the kids, all of us, and I, we were working for a long time in the… you know…it’s the kind of space, I mean place rear to the spedwan…”

  At this point I interjected, “Excuse me, but I wanted to know what work you have been doing.”

  “If you had said that, we had said that, poomer, near the fortunate, forpunate, tamppoo, all around the fourth of martz. Oh, I get all confused,” he replied, looking somewhat puzzled that the stream of language did not appear to satisfy me.66

  In contrast, a person with damage to Broca’s area, though able to understand language, has great difficulty producing any; the speech is fragmented, lacking in grammatical structure, and deficient in modifiers of nouns and verbs.

  This much is known at the macro level. Nothing, however, is known about how the neuronal networks within Wernicke’s and Broca’s areas carry out language functions in normal persons; those areas are still “black boxes” to psychologists—mechanisms whose input and output are known but whose internal machinery is a mystery.

  But neuroscientists have found a few clues. Analyses of brain function in speech-impaired persons by means of electrode probes during surgery, PET and fMRI scanning, and other methods have shown that linguistic knowledge is located not only in Wernicke’s and Broca’s areas but in many parts of the brain and is assembled when needed. Dr. Antonio Damasio of the University of Iowa College of Medicine is one of many researchers who have concluded that information about any object is widely distributed. If the object is, say, a polystyrene cup (Damasio’s example), its shape will be stored in one place, crushability in another, texture in another, and so on. These connect, by neural networks, to a “convergence zone” and thence to a verbal area where the noun “cup” is stored.67 This is strikingly similar to the abstract portraits of the semantic memory network we saw earlier in this chapter.

  In the past several years, PET and fMRI scans of normal people have identified areas in the brain that are active when specific linguistic processes are going on. But despite a wealth of such information, the data do not tell us how the firing of myriad neurons in those locations becomes a word, a thought, a sentence, or a concept in the mind of the individual. The data provide a more detailed model than was formerly available of where language processes take place in the brain, but cognitive neuroscience has not yet yielded a theory as to how the neural events become language. As Michael Gazzaniga and his co-authors say in Cognitive Neuroscience, “The human language system is complex, and much remains to be learned about how the biology of the brain enables the rich speech and language comprehension that characterize our daily lives.”68*

  “Much remains”? A memorable understatement.

  Reasoning

  Some years ago I asked Gordon Bower, a prominent memory researcher, a question about thinking and was taken aback by his testy reply: “I don’t work on ‘thinking’ at all. I don’t know what ‘thinking’ is.” How could the head of Stanford University’s psychology department not work on thinking at all—and not even know what it is? Then, rather grudgingly, Bower added, “I presume it’s the study of reasoning.”

  Thinking was traditionally a central theme in psychology, but by the 1970s the proliferation of knowledge in cognitive psychology had made the term unhandy, since it included processes as disparate as momentary short-term memory and protracted problem solving. Psychologists preferred to speak of thought processes in more specific terms: “chunking,” “reasoning,” “retrieval,” “categorization,” “formal operations,” “problem solving,” and scores of others. “Thinking” came to have a narrower and more precise meaning than before: the manipulation of knowledge to achieve a goal. To avoid any misunderstanding, however, many psychologists preferred, like Bower, to use the term “reasoning.”

  Although human beings have always viewed reasoning ability as the essence of their humanity, research on it was long a psychological backwater.69 From the 1930s to the 1950s little work was done on reasoning except for the problem-solving experiments of Karl Duncker and other Gestaltists and the studies by Piaget and his followers of the kinds of thought processes characteristic of children at different stages of intellectual development.

  But with the advent of the cognitive revolution, research on reasoning became an active field. The IP (information processing) model enabled psychologists to formulate hypotheses that portrayed, in flow-chart fashion, what went on in various kinds of reasoning, and the computer was a piece of apparatus—the first ever—with which such hypotheses could be tested.

  IP theory and the computer were synergistic. A hypothesis about any form of reasoning could be described, in IP terms, as a sequence of specific steps of information processing; the computer could then be programmed to perform an analogous sequence of steps. If the hypothesis was correct, the machine would reach the same conclusion as the reasoning human mind. By the same token, if a reasoning program written for the computer produced the same answer as a human being to a given problem, one could suppose that the program was operating in the same way as the human mind, or at least in a similar fashion.

  How does a computer do such reasoning? Its program contains a routine, or set of instructions, plus a series of subroutines, each of which is used or not used, depending on the results of the previous operations and the information in the program’s memory. A common form of routine is a series of if-then steps: “If the input meets condition 1, then take action 1; if not, take action 2. Compare the result with condition 2 and if the result is [larger, smaller, or whatever], take action 3. Otherwise take action 4… Store resulting conditions 2, 3… and, depending on further results, use these stored items in such-and-such ways.”70
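
  In a modern programming language, the kind of routine just described might look like the following sketch in Python (the conditions, thresholds, and actions here are invented purely for illustration; they are not drawn from any actual program of the period):

    # A toy if-then routine of the kind described above: each step tests a
    # condition, takes an action, and stores results that later steps can use.
    def routine(value, memory):
        # If the input meets condition 1, take action 1; if not, take action 2.
        if value > 10:                    # condition 1 (illustrative threshold)
            result = value - 10           # action 1
        else:
            result = value * 2            # action 2
        # Compare the result with condition 2 and branch again.
        if result % 2 == 0:               # condition 2: the result is even
            memory.append(result)         # action 3: store it for later use
        else:
            memory.append(result + 1)     # action 4: store an adjusted value
        # Depending on further results, later steps would use the stored items.
        return result, memory

    result, memory = routine(17, [])
    print(result, memory)                 # prints: 7 [8]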

  But when computers carry out such programs, whether in mathematical computing or problem solving, are they actually reasoning? Are they not acting as automata that unthinkingly execute prescribed actions? The question is one for the philosopher. If a computer can, like a knowledgeable human being, prove a theorem, navigate a spacecraft, or determine whether a poem was written by Shakespeare, who is to say that it is a mindless automaton—or that a human being is not one?

  In 1950, when only a few primitive computers existed but the theory of computation was being much discussed by mathematicians, information theorists, and others, Alan Turing, a gifted English mathematician, proposed a test, more philosophic than scientific, to determine whether a computer could or could not think. In the test, a computer programmed to solve a certain kind of problem is stationed in one room, a person skilled in that kind of problem is in another room, and in a third room is a judge in telegraphic communication with each. If the judge cannot tell from the dialogue which is the computer and which the person, the computer will pass the test: it thinks.71 No computer program has yet won hands down, in publicly conducted contests, although some have fooled some of the judges. The validity of the Turing test has been debated, but at the very least it must mean that if a computer seems to think, what it does is as good as thinking.

  By the 1960s, most cognitive psychologists, whether or not they agreed that computers really think, regarded computation theory as a conceptual breakthrough; it enabled them for the first time to describe any aspect of cognition, and of reasoning in particular, in detailed and precise IP terms. Moreover, having hypothesized the steps of any such program, they could translate them from words into computer language and try the result on a computer. If it ran successfully, it meant that the mind did indeed reason by means of something like that program. No wonder Herbert Simon said the computer was as important for psychology as the microscope had been for biology; no wonder other enthusiasts said the human mind and the computer were two species of the genus “information-processing system.”72

  The ability to solve problems is one of the most important applications of human reasoning. Most animals solve such problems as finding food, escaping enemies, and making a nest or lair largely by means of innate or partly innate patterns of behavior; human beings solve or attempt to solve most of their problems by means of either learned or original reasoning.

  In the mid-1950s, when Simon and Newell undertook to create Logic Theorist, the first program that simulated thinking, they posed a problem to themselves: How do human beings solve problems? Logic Theorist took them a year and a half, but the question occupied them for more than fifteen. The resulting theory, published in 1972, has been the foundation of work in that field ever since.

  Their chief method of working on it, according to Simon’s autobiography, was two-man brainstorming. This involved deductive and inductive reasoning, analogical and metaphoric thinking, and flights of fancy—in short, any kind of reasoning, orderly or disorderly:

  From 1955 to the early 1960s, when we met almost daily… [we] worked mostly by conversations together, with the explicit rule that one could talk nonsensically and vaguely, but without criticism unless you intended to talk accurately and sensibly. We could try out ideas that were half-baked or quarter-baked or not baked at all, and just talk and listen and try them again.73

  They also did a good deal of laboratory work. Singly and together they recorded and analyzed the steps by which they and others solved puzzles and then wrote out the steps as programs. A favorite puzzle, of which they made extensive use for some years, is a child’s toy known as the Tower of Hanoi. In its simplest form, it consists of three disks of different sizes (with holes in their centers) piled on one of three vertical rods mounted on flat bases. At the outset, the largest disk is on the bottom, the middle-sized one in the middle, the smallest one on top. The problem is to move them one at a time in the fewest possible moves, never putting any disk on top of one smaller than itself, until they are piled in the same order on another rod.

  The perfect solution takes seven steps, although with errors leading to dead ends and backtracking to correct them, it can take several times that many. In more advanced versions, the solution requires complex strategies and many moves. A perfect five-disk game takes thirty-one moves, a perfect seven-disk game 127 moves, and so on.* Simon has said, quite seriously, that “the Tower of Hanoi was to cognitive science what fruit flies were to modern genetics—an invaluable standard research setting.”74 (Sometimes, however, he ascribes this honor to chess.)
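
  The puzzle’s recursive structure is what makes it such a tidy research setting, and it can be stated in a few lines of code. Here is a sketch in Python (mine, not Simon and Newell’s) that generates the minimal sequence of moves for any number of disks:

    # Tower of Hanoi: to move n disks from source to target, first move the
    # n-1 smaller disks to the spare rod, then move the largest disk, then
    # restack the n-1 disks on top of it. The minimal solution is 2**n - 1 moves.
    def hanoi(n, source, target, spare, moves):
        if n == 0:
            return moves
        hanoi(n - 1, source, spare, target, moves)   # clear the way
        moves.append((source, target))               # move the largest disk
        hanoi(n - 1, spare, target, source, moves)   # restack on top of it
        return moves

    for n in (3, 5, 7):
        print(n, "disks:", len(hanoi(n, "A", "C", "B", [])), "moves")
    # 3 disks: 7 moves / 5 disks: 31 moves / 7 disks: 127 moves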

  Another laboratory tool used by the team was cryptarithmetic, a type of puzzle in which a simple addition problem is presented in letters instead of numbers. The goal is to figure out what digits the letters stand for. This is one of Simon and Newell’s simpler examples:

      S E N D
    + M O R E
    ---------
    M O N E Y

  The obvious first step: M must be 1, since no two digits—S + M in this case—can add up to more than 19, even with a carry.† Simon and Newell had volunteers talk out loud as they worked on such a puzzle, recorded everything they said, and afterward diagrammed the steps of their thought process in the form of a search track of moves, decisions at forks with more than one option, wrong choices pursued to dead ends, reversals to try another route from the last fork, and so on.
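
  The difference between such reasoning and blind search is easy to demonstrate. A computer can solve the puzzle by brute force, trying digit assignments until the addition comes out right, with none of the deductions Simon and Newell’s volunteers made. This Python sketch (mine, not theirs) does exactly that:

    # Exhaustive solver for SEND + MORE = MONEY: try every assignment of
    # distinct digits to the eight letters. This is the uninformed search
    # that human solvers avoid by deducing constraints such as M = 1.
    from itertools import permutations

    def solve():
        letters = "SENDMORY"
        for digits in permutations(range(10), len(letters)):
            value = dict(zip(letters, digits))
            if value["S"] == 0 or value["M"] == 0:   # no leading zeros
                continue
            number = lambda word: int("".join(str(value[c]) for c in word))
            if number("SEND") + number("MORE") == number("MONEY"):
                return number("SEND"), number("MORE"), number("MONEY")

    print(solve())   # (9567, 1085, 10652), the puzzle's unique solution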

  Simon and Newell made particular use of chess, a vastly more complex problem than either the Tower or cryptarithmetic. In a typical chess game of sixty moves, at each step there are on average thirty possible moves; to “look ahead” only three moves would mean visualizing twenty-seven thousand possibilities. A key question for Simon and Newell was how chess players deal with such impossibly large sets of contingencies. The answer: A skilled chess player does not consider all the possible moves he might make next and all the moves his opponent might make in response but only those few moves that make good sense and that follow elementary guidelines like “Guard the King” and “Don’t give away a piece for one of lesser value.” In short, the chess player makes a heuristic search—one guided by broad strategic principles of good sense—rather than a thorough but uninformed one.

  The Newell and Simon theory of problem solving—for alphabetical reasons Newell’s name is first on their joint publications—on which they worked for another fifteen years is that problem solving is a search for a route from an initial state to a goal. To get there, the problem solver has to find a path through a problem space made up of all possible states he might arrive at by making all the moves that obey the path constraints (rules or conditions of the domain).

  In most such searches, the possibilities multiply geometrically, since each decision point offers two or more possibilities, each of which leads to another decision point offering another set of possibilities. In the sixty moves of an average chess game, each move, as already mentioned, has an average of thirty alternatives; the total number of paths in a game is 30⁶⁰—30 million trillion trillion trillion trillion trillion trillion—a number totally beyond human comprehension. Accordingly, as Simon and Newell’s research demonstrated, problem solvers, in finding their way through such problem spaces, make no effort to look at every possibility.

  In the massive tome they published in 1972 and straightforwardly called Human Problem Solving, Newell and Simon presented what they considered the general characteristics of problem solving. Among them:75

  —Because of the limits of short-term memory, we work our way through a problem space in serial fashion, taking one thing at a time.

  —But we do not perform a serial search of every possibility, one after another. We use that method only when there are very few possibilities. (If, for instance, you don’t know which one of a small bunch of keys opens a friend’s front door, you try them one at a time.)

  —In many problem situations trial and error is not practicable; in those cases we search heuristically, and knowledge makes the search very effective. As simple a problem as solving an eight-letter anagram like SPLOMBER would take fifty-six working hours if you wrote out all 40,320 permutations at a rate of one every five seconds, but most people can solve it in seconds or minutes by ignoring invalid beginnings (PB or PM, for instance) and considering only valid ones (SL, PR, etc.).*

  —One important heuristic commonly used to simplify the task is what Newell and Simon call “best-first search.” At any fork in the search path, or “decision tree,” we first try the move that appears to carry us closest to the goal. It is efficient to move toward the goal with every step (although sometimes we have to move away from it to circumvent an obstacle).

  —A complementary and even more important heuristic is “means-end analysis,” which Simon has called “the workhorse of GPS [General Problem Solver].” Means-end analysis is a mixture of forward and backward search. Unlike the chess player, who searches forward, the problem solver in many cases sees that he cannot proceed directly toward the goal but must first reach a subgoal from which the goal is attainable, or perhaps must first reach an even earlier subgoal, or one still earlier than that.

  In a relatively recent review of problem-solving theory, Keith Holyoak offers a homely example of means-end analysis. Your goal is to have your living room freshly painted. The subgoal nearest that goal is the condition in which you can paint it, but that requires you to have paint and a brush, so you must first reach the earlier subgoal of buying them. To do so requires reaching the even earlier subgoal of being at a hardware store. So it goes, backward chaining until you have a complete strategy by which to move from your present state to the state of having a painted living room.76
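
  In program form, means-end analysis amounts to chaining backward from the goal through whatever unmet preconditions stand in the way. The following Python sketch (a toy planner of my own built on Holyoak’s example, not GPS itself) shows the idea:

    # Toy means-end planner: each goal condition is mapped to the subgoals
    # that must be reached first. Chaining backward from the goal produces
    # a plan. The conditions are illustrative, taken from Holyoak's example.
    preconditions = {
        "living room painted":  ["have paint and brush"],
        "have paint and brush": ["at hardware store"],
        "at hardware store":    [],
    }

    def plan(goal, achieved):
        if goal in achieved:
            return []
        steps = []
        for subgoal in preconditions.get(goal, []):   # reach each subgoal first
            steps += plan(subgoal, achieved)
        return steps + ["achieve: " + goal]

    for step in plan("living room painted", set()):
        print(step)
    # achieve: at hardware store
    # achieve: have paint and brush
    # achieve: living room painted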

  As major an achievement as Newell and Simon’s theory of problem solving was, it dealt only with deductive reasoning. Moreover, it considered only “knowledge-poor” problem solving—the kind applicable to puzzles, games, and abstract problems. To what extent the method described problem solving in knowledge-rich domains—the sciences, business, or law, for instance—was unclear.

 
