The Story of Psychology


by Morton Hunt


  For the past several decades, therefore, a number of researchers have been expanding the investigation of reasoning. Some have studied the psychological tendencies on which deductive and inductive reasoning are based; some whether either form, or some other, is what we use in everyday reasoning; some the differences in the kinds of reasoning used by experts and by novices in knowledge-rich situations. These investigations have produced a wealth of insights into the formerly invisible workings of the reasoning human mind. Here are a few of the highlights:

  Deductive reasoning: The traditional idea, going back to Aristotle, is that there are two kinds of reasoning, deduction and induction. Deduction extracts a further belief from one that is given; that is, if the premise or premises are true, so is the conclusion, since it is necessarily included in them. From the premises of Aristotle’s classic syllogism

  All men are mortal.

  Socrates is a man.

  it follows that

  Socrates is mortal.

  This kind of reasoning is tight, strong, easy to follow, and fully convincing. It is exemplified by proofs of logic and geometry theorems.

  Yet many other syllogisms that have only two premises and contain only three terms are not so transparent; some are so difficult that most people cannot draw a valid conclusion from them. Philip Johnson-Laird, who has done research on the psychology of deduction, gives an example that he has used in the laboratory. Imagine that in a room there are some archaeologists, biologists, and chess players, and that these two statements are true:

  None of the archaeologists is a biologist.

  All the biologists are chess players.

  What, if anything, follows from those premises? Johnson-Laird has found that few people can give the right answer.77* Why not? He believes that the ease of drawing the valid conclusion in the Socrates syllogism and the difficulty of doing so in the archaeologist syllogism are due to the way the arguments are represented in the mind—the “mental models” we create of them, a theory he has been developing and testing ever since.78

  People with formal training in logic usually visualize such arguments in the form of geometrical diagrams, the two premises being represented by circles, one inside the other, or overlapping it, or separate. But Johnson-Laird’s theory, based on his research and validated by a computer simulation, is that people without such training use a more homespun model. In the Socrates syllogism, they unconsciously imagine a number of people, all mortal, imagine Socrates as related to that group, and then cast about for any other possibility (anyone outside the set—possibly Socrates). There being no such possibility, they correctly conclude that Socrates is mortal.

  In the archaeologist syllogism, however, they imagine and try out first one, then another, and finally a third model, of increasing difficulty (we will spare ourselves the details). Some people rely on the first, unable to see that the second invalidates it, and others the second, not seeing that it, too, is discredited by the third and most difficult—which leads to the only valid conclusion.79
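  The logic can be checked mechanically. Here is a minimal brute-force sketch (an illustration, not Johnson-Laird’s mental-model procedure; the property flags are invented labels) that enumerates every combination of the three properties the premises allow. Granting that at least one biologist is in the room, it confirms the valid conclusion: some of the chess players are not archaeologists.

```python
from itertools import product

# Each hypothetical person either is or is not an archaeologist (a),
# a biologist (b), and a chess player (c). Keep only the combinations
# consistent with the two premises.
consistent = [
    (a, b, c)
    for a, b, c in product([False, True], repeat=3)
    if not (a and b)     # premise 1: none of the archaeologists is a biologist
    and (not b or c)     # premise 2: all the biologists are chess players
]

# Any biologist must be a chess player who is not an archaeologist,
# which yields "some of the chess players are not archaeologists."
biologists = [(a, b, c) for a, b, c in consistent if b]
assert all(c and not a for a, b, c in biologists)
print(biologists)  # [(False, True, True)]
```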

  Mental modeling is not the only source of erroneous deductions. Experiments have shown that even where the form of a syllogism is simple and its mental model easy to create, people are apt to be misled by their beliefs and information. One research team asked a group of subjects whether these two syllogisms were logically correct:

  All things that have a motor need oil.

  Automobiles need oil.

  Therefore, automobiles have motors.

  All things that have a motor need oil.

  Opprobines need oil.

  Therefore, opprobines have motors.

  More people thought the first syllogism logically correct than the second, although the two are identical in structure, differing only in the substitution of the nonsense word “opprobines” for “automobiles.” They were misled by their knowledge of automobiles; knowing the conclusion of the first syllogism to be true, they thought the argument logically correct. But it is not, as they could see in the case of opprobines, about which they knew nothing and where they could recognize that there is no necessary overlap between opprobines and things with a motor.80
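  The fallacy in that shared form (affirming the consequent) is easy to exhibit with a single countermodel; the objects below are invented for illustration.

```python
# One countermodel is enough to show the form is invalid: something that
# needs oil but has no motor, such as a door hinge.
world = {
    "automobile": {"has_motor": True,  "needs_oil": True},
    "opprobine":  {"has_motor": False, "needs_oil": True},  # hypothetical object
}

premise_1 = all(x["needs_oil"] for x in world.values() if x["has_motor"])
premise_2 = world["opprobine"]["needs_oil"]
conclusion = world["opprobine"]["has_motor"]

print(premise_1, premise_2, conclusion)  # True True False: true premises, false conclusion
```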

  Inductive reasoning: By contrast, inductive reasoning is loose and inexact. It moves from specific beliefs to broader ones, that is, from limited cases to generalizations. From “Socrates is mortal,” “Aristotle is mortal,” and other instances, one infers, with a degree of confidence based on the number of cases, that “all men are mortal,” although even a single case to the contrary would invalidate that conclusion.

  A good deal of important human reasoning is of this type. Categorization and concept formation, crucial to thinking, are the products of induction, as seen in studies of how children arrive at categories and concepts. All the higher knowledge humankind possesses about the world—everything from the inevitability of death to the laws of planetary motion and galactic formation—is the product of the derivation of generalizations from a mass of particulars.

  Induction is also the reasoning used where pattern recognition is the key to solving a problem. A simple example:

  What number comes next?

  2 3 5 6 9 10 14 15 ——

  A ten-year-old can answer correctly after a while; an adult can see the pattern and the answer (20) in a minute or less. It is the very reasoning process employed by economists, public health officials, telephone system planners, and many others whose recognition of patterns is critically important to the survival of modern society.
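  For the record, the pattern alternates a step of 1 with a step that grows by one each time (+1, +2, +1, +3, +1, +4, ...), so the next term after 15 is 20. A minimal sketch:

```python
# Generate the series 2 3 5 6 9 10 14 15 20 ... by alternating a "+1" step
# with a growing jump (+2, +3, +4, ...).
def sequence(n_terms):
    terms, value, jump = [], 2, 2
    while len(terms) < n_terms:
        terms.append(value)        # 2, 5, 9, 14, 20, ...
        terms.append(value + 1)    # its "+1" companion: 3, 6, 10, 15, ...
        value += 1 + jump          # move to the start of the next pair
        jump += 1
    return terms[:n_terms]

print(sequence(9))  # [2, 3, 5, 6, 9, 10, 14, 15, 20]
```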

  (Disconcertingly, researchers have found that many people frequently fail to reason inductively from incoming information. All too often, we notice and add to our memory store only what supports a strongly held belief, ignoring any information that does not. Psychologists call this “confirmation bias.” Dan Russell and Warren Jones gave subjects materials to read, some confirming and some disproving the existence of ESP. Afterward, believers in ESP remembered the confirming materials 100 percent of the time but the negative materials only 39 percent of the time, while skeptics remembered both kinds about 90 percent of the time.81)

  Much of our reasoning combines deduction and induction, each of which serves its own purposes. How we came by both kinds of reasoning ability has been explained, at least hypothetically, by evolutionary psychology: Both methods are assets in the struggle to survive and were the products of natural selection.82 The hypothesis seems validated by a recent study using PET scans: When subjects were asked to solve problems requiring deduction, two small areas on the right side of the brain showed increased activity; when the problems required inductive thinking, two brain structures on the left side showed it.83 Natural selection, in short, developed brain structures capable of both kinds of reasoning.

  Probabilistic reasoning: The human mind’s abilities are the product of evolutionary selection, but we have lived in advanced civilized societies too short a time to have developed an inherited ability for sound reasoning about statistical likelihoods, though it is often called for in modern life.

  Daniel Kahneman and Amos Tversky, who did much of the basic work in this area, asked a group of subjects which they would prefer: a sure gain of $80, or an 85 percent chance of winning $100 along with a 15 percent chance of winning nothing. Most people preferred the sure gain of $80, although statistically the average yield of the risky choice is $85. Kahneman and Tversky concluded that people are “risk-averse”: They prefer a sure thing even when a risky thing is the better bet.

  Turning to the obverse situation, Kahneman and Tversky asked another group of subjects whether they would prefer a sure loss of $80 or an 85 percent chance of losing $100 along with a 15 percent chance of losing nothing. This time a large majority preferred the gamble to the sure thing even though, on average, the gamble is costlier. Kahneman and Tversky’s conclusion: When choosing between gains, people are risk-averse; when choosing between losses, they are risk-seeking—and in both cases are likely to make poor judgments.84
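  The arithmetic behind both findings is a simple expected-value comparison:

```python
# Gains: the gamble is worth more on average, yet most subjects take the sure $80.
sure_gain = 80
risky_gain = 0.85 * 100 + 0.15 * 0     # 85.0
print(risky_gain > sure_gain)          # True

# Losses: the gamble costs more on average, yet most subjects prefer to gamble.
sure_loss = -80
risky_loss = 0.85 * -100 + 0.15 * 0    # -85.0
print(risky_loss < sure_loss)          # True
```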

  An even more disquieting finding came from a later experiment in which they posed two versions of a public-health problem to groups of college students. The versions are mathematically identical but different in wording. The first version:

  Imagine that the U.S. is preparing for the outbreak of a rare Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:

  If Program A is adopted, 200 people will be saved.

  If Program B is adopted, there is a ⅓ probability that 600 people will be saved, and a ⅔ probability that no people will be saved. Which of the two programs would you favor?

  The second version gave the same story but worded the alternatives as follows:

  If Program C is adopted, 400 people will die.

  If Program D is adopted, there is a ⅓ probability that nobody will die, and a ⅔ probability that 600 people will die.

  Subjects responded quite differently to the two versions: 72 percent chose Program A over Program B, but 78 percent (of a different group) chose Program D over Program C. Kahneman and Tversky’s explanation: In the first version, the outcomes are portrayed in terms of gains (lives saved), in the second version in terms of losses (lives lost). The same biases shown by the experiments where money was at stake distorted subjects’ judgment in this case, where lives were at stake.85 (In 2002, Kahneman won the Nobel Prize in economics for his work on probabilistic reasoning; Tversky, who would have shared it, had died by then.)
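  A quick computation confirms that the two framings are mathematically identical: all four programs have the same expected number of survivors.

```python
total = 600

prog_a = 200                       # gain framing: 200 saved for sure
prog_b = (1/3) * 600 + (2/3) * 0   # expected lives saved: 200.0
prog_c = total - 400               # loss framing: 400 die for sure, so 200 saved
prog_d = (1/3) * 600 + (2/3) * 0   # 1/3 chance nobody dies: 200.0 saved on average

print(prog_a, prog_b, prog_c, prog_d)  # 200 200.0 200 200.0
```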

  We reason poorly in these cases because the factors involved are “nonintuitive”; our minds do not readily grasp the reality involved in probabilities. This shortcoming affects us both individually and as a society; the electorate and its leaders often make costly decisions because of poor probabilistic reasoning. As Richard Nisbett and Lee Ross point out in their book Human Inference, many governmental practices and policies adopted during crises are deemed beneficial because of what happens afterward, even though the programs are often useless or worse. The misjudgment is caused by the human tendency to attribute a result to the action meant to produce it, although often the result stems from the normal tendency of events to revert from the unusual to the usual.86
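  That “tendency of events to revert from the unusual to the usual” is what statisticians call regression to the mean, and a small simulation (an invented illustration, not from Nisbett and Ross) makes it visible: outcomes immediately following an unusually bad draw drift back toward the long-run average even when nothing at all is done.

```python
import random

random.seed(1)
# 10,000 independent outcomes around a long-run mean of 100.
series = [random.gauss(100, 15) for _ in range(10_000)]

# Average outcome immediately after an unusually bad one (below 70).
after_crisis = [series[i + 1] for i in range(len(series) - 1) if series[i] < 70]
print(round(sum(after_crisis) / len(after_crisis), 1))  # close to 100, with no "program" at all
```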

  It is reassuring, therefore, that a number of studies have found that unconscious mental processing often yields good evaluations and decisions—sometimes better than the results of conscious deliberation. In a series of studies reported in 2004, a Dutch psychologist asked subjects to make choices about complex real-world matters with many positive and negative features, such as choosing an apartment. One group was told to make an immediate (no thought) choice, another group to think for three minutes and then choose (conscious thought), and a third group to work for three minutes on a difficult distracting task and then choose (unconscious thought). In all three studies, the subjects in the unconscious thought condition made the best choices.87

  Analogical reasoning: By the 1970s, cognitive psychologists had begun to recognize that much of what logicians regard as faulty reasoning is, in fact, “natural” or “plausible” reasoning—inexact, loose, intuitive, and technically invalid, but often competent and effective.

  One such form of thinking is the analogical. Whenever we recognize that a problem is analogous to a different problem, one we are familiar with and know the answer to, we make a leap of thought to a solution. Many people, for instance, when they have to assemble a piece of knocked-down furniture or machinery, ignore the instruction manual and work by “feel”—looking for relationships among the parts that are analogous to the relationships among the parts of different kinds of furniture or machinery they assembled earlier.

  Analogical reasoning is acquired in the later stages of childhood mental development. Dedre Gentner, a cognitive psychologist, asked five-year-olds and adults in what way a cloud is like a sponge. The children replied in terms of similar attributes (“They’re both round and fluffy”), adults in terms of relational similarities (“They both store water and give it back to you”).88

  Gentner interprets analogical reasoning as a “mapping” of high-level relations from one domain to another; she and two colleagues even wrote a computer program, the “Structure-Mapping Engine,” that simulates the process. When it was run on a computer and provided with limited data about both the atom and the solar system, the program, like the great physicist Lord Rutherford, recognized that they are analogous and drew appropriate conclusions.89
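  A toy sketch in the spirit of that account (this is not the Structure-Mapping Engine itself; the relation names and data are invented) shows the idea: align two domains by their shared high-level relations and read off which objects correspond.

```python
# Two domains described by the same high-level relations.
solar_system = {
    "more_massive_than": ("sun", "planet"),
    "revolves_around":   ("planet", "sun"),
}
atom = {
    "more_massive_than": ("nucleus", "electron"),
    "revolves_around":   ("electron", "nucleus"),
}

# Objects filling the same role in the same relation are mapped to each other.
mapping = {}
for relation, (base_1, base_2) in solar_system.items():
    if relation in atom:
        target_1, target_2 = atom[relation]
        mapping[base_1] = target_1
        mapping[base_2] = target_2

print(mapping)  # {'sun': 'nucleus', 'planet': 'electron'}
```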

  With difficult or unfamiliar problems, people generally do not use analogical reasoning because they only rarely spot a distant analogy, even when it would provide the solution to their problem. But if they consciously make the effort to look for an analogy, they are far more apt to see one that is not at all obvious. M. L. Gick and Keith Holyoak used Duncker’s classic problem, of which we read earlier, about how one can use X-rays to destroy a stomach tumor without harming the surrounding healthy tissue. Most of their subjects did not spontaneously discover the solution; Gick and Holyoak then provided them with a story that, they hinted, might prove helpful. It told of an army unable to capture a fortress by a single frontal attack but successful when its general divided it into separate bands that attacked from all sides. Having read this and consciously sought an analogy to the X-ray problem, most subjects saw that many sources of weak X-rays placed all around the body and converging on the tumor would solve the problem.90

  Expert reasoning: Many cognitive psychologists, intrigued by Newell and Simon’s work, assumed that their theory would apply to problem solving by experts in fields of special knowledge, but found, to their surprise, that it did not. In a knowledge-rich domain, experts do more forward searching than backward searching or means-end analysis, and their thinking often proceeds not step by step but in leaps. Rather than starting with details, they perceive overall relationships; they know which category or principle is involved and work top-down. Novices, in contrast, lack perspective and work bottom-up, starting with details and trying to gather enough data to gain an overview.91

  Since the 1980s, a number of cognitive psychologists have been exploring the characteristics of expert reasoning in different fields. They have asked experts in cardiology, commodity trading, law, and many other areas to solve problems; again and again they have found that experts, rather than pursuing a logical, step-by-step search (as a newly trained novice or an artificial intelligence program would do), often leap from a few facts to a correct assessment of the nature of the problem and the probable solution. A cardiologist, for instance, might from only two or three fragments of information correctly diagnose a specific heart disorder, while a newly graduated doctor, presented with the same case, would ask a great many questions and slowly narrow down the range of possibilities. The explanation: Unlike novices, experts have their knowledge organized and arranged in schemas that are full of special shortcuts based on experience.92
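  The forward search mentioned above can be sketched as forward chaining over a rule base; the diagnostic rules here are invented toys, not drawn from the cited studies.

```python
# Each rule pairs a set of required facts with the conclusion it licenses.
rules = [
    ({"chest_pain", "pain_radiates_to_arm"}, "suspect_cardiac_event"),
    ({"suspect_cardiac_event", "elevated_troponin"}, "diagnose_heart_attack"),
]

def forward_chain(facts, rules):
    """Fire every rule whose conditions are met until nothing new is added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"chest_pain", "pain_radiates_to_arm", "elevated_troponin"}, rules))
```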

  Is the Mind a Computer? Is a Computer a Mind?

  Even in the first flush of enthusiasm for IP theory and computer simulations of reasoning, some psychologists, of a more humanistic than computer-technical bent, had reservations about the comparability of mind and machine. There are, indeed, major dissimilarities. For one, the computer searches for and retrieves items as needed—at blinding speed, nowadays—but human beings retrieve many items of information without any search: our own name, for instance, and most of the words we utter. For another, as the cognitive scientist Donald Norman has pointed out, if you are asked “What’s Charles Dickens’s telephone number?” you know right away that it’s a silly question, but a computer would not, and would go looking for the number.93

  For a third, the mind knows the meaning of words and other symbols, but the computer does not; to it they’re only labels. Nor does anything about the computer resemble the unconscious or all that goes on in it.

  These are only a few of the differences that have been obvious since the first experiments in computer reasoning. Yet, no less an authority than Herbert Simon categorically asserted that mind and machine were kin. In 1969, in a series of lectures published as The Sciences of the Artificial, he argued that the computer and the human mind are both “symbol systems”—physical entities that process, transform, elaborate, and generally manipulate symbols of various kinds.

  Throughout the 1970s, small cadres of dedicated psychologists and computer scientists at MIT, Carnegie-Mellon, Stanford, and a handful of other universities, possessed of a zealotlike belief that they were on the verge of a great breakthrough, developed programs that were both theories of how the mind works and machine versions of human thinking. By the 1980s the work had spread to scores of universities and to the laboratories of a number of major companies. The programs carried out such varied activities as playing chess, parsing sentences, deducing the laws of planetary motion from a mass of raw data, translating elementary sentences from one language to another, and inferring the structure of molecules from mass spectrographic data.94

  The enthusiasts saw no limit to the ability of IP theory to explain how the mind works and of AI to verify those explanations by carrying out the same processes—and eventually doing so far better than human beings. In 1981 Robert Jastrow, director of the Goddard Institute for Space Studies, predicted that “around 1995, according to current trends, we will see the silicon brain as an emergent form of life, competitive with man.”95

  But some psychologists felt that the computer was only a mechanical simulation of certain aspects of the mind and that the computational model of mental processing was a poor fit. The eminent cognitivist Ulric Neisser had become “disillusioned” with information-processing models by 1976, when he published Cognition and Reality. Here, much influenced by James Gibson and his “ecological” psychology, Neisser made the case that IP models were narrow and far removed from real-life perception, cognition, and purposeful activity, and failed to take into account the richness of experience and information we continually receive from the world around us.96

 
