The Gap


by Thomas Suddendorf


  For the most part, IQ tests are given to assess someone’s chances of success in training, job performance, and the like, rather than to measure their intellectual wealth per se. IQ tests predict various indicators of “success,” from school drop-out rates to future income. During the twentieth century the tests therefore became increasingly popular in Western societies.2

  Researchers have identified many variables that affect performance on the tests. Prenatal exposure to alcohol, for example, leads to reduced IQ in children, whereas having high-IQ parents predicts high scores. IQ is highly heritable, yet test scores have been increasing overall over the last one hundred years. This has stimulated debate about whether we are actually getting smarter or merely better at taking these tests.

  Most importantly, what do these tests really tell us about the nature of intelligence? A basic consensus in the IQ testing community is that intelligence involves the capacity to learn from experience, to adapt to the surrounding environment, and to reflect on one’s own performance. Because there is such a great wealth of IQ data available, many intelligence researchers have examined the relationship between performance on various subtests for clues about the underlying structure of intelligence.

  Alas, the resulting theories of intelligence have deeply contradicted each other. For example, although many researchers highlight a single general intelligence factor (g), others have shown that we need to distinguish at least two factors: crystallized intelligence and fluid intelligence. The latter refers to processing capacities that decline with advanced age, and the former refers to knowledge of facts, which does not tend to decline. Other theorists have distinguished 7 abilities (verbal comprehension, verbal fluency, inductive reasoning, spatial visualization, number, memory, and perceptual speed) or even 150 (too many to list here). While some researchers subscribe to a hierarchical structure, others see discrete components. Worst of all, there is no clear way of deciding which of these theories is correct. Although intelligence testing has been a resounding success in some sense (such as in terms of predictions and money made), the research on millions of test scores and their correlations has not established consensus on the structure of human intelligence.

  The IQ testing approach to studying intelligence certainly has its critics. One long-standing criticism is that the tests reflect specific Western values of intelligence and measure these only with a restricted range of artificial tasks. Consider, for instance, the fact that tests are often timed. Although speedy decision making may be a hallmark capacity of smart stock market traders or air traffic controllers, in other cultures and contexts speed may not be valued in the same way. In fact, for many cases of intelligent decision making, speed is quite unimportant compared to, say, getting it absolutely right. Consider weighty questions such as whom to marry, what house to buy, and whether to go to war.

  IQs are established with paper and pencil in a quiet testing room. Yet the real world is noisy and often lacking in the luxury of quiet desk space. Universities are full of people with high IQ scores, and yet among them are some who—as an Australian might say—“couldn’t organize a piss-up in a brewery.” As the psychologist Robert Sternberg suggests, practical intelligence is quite distinct from the analytical intelligence measured in IQ tests. You can score low on IQ tests and be very smart in your practical life and vice versa. I suspect that some of the most successful people in life—certain politicians spring to mind—do not score unusually high on a standard IQ test.

  There are in fact a few alternative accounts of intelligence. One scheme recognizes multiple intelligences, including linguistic, logical, musical, spatial, kinesthetic, naturalist, interpersonal, intrapersonal, and existential intelligence. You may also have heard of emotional intelligence as a popular addition. These proposals go beyond the standard tests and acknowledge the manifold capacities that people may have. It is often said that everyone has a talent—you just need to find it. In fact, the term “talent” may be more appropriate for many of these purported intelligences.

  Whatever your view on IQ testing, the tests are not all that helpful for our purposes. We want to know how humans might differ from animals, not how we differ from each other. Since the tests all involve verbal instructions, we cannot simply give them to animals, though I foolhardily tried.3 In order to compare intelligence in humans and animals, we must return to the essential foundation of what intelligence is. We can all recognize it when we see it, but researchers have been so preoccupied with individual differences that many have overlooked what intelligence we have in common. Steven Pinker offers the following definition: “intelligence . . . is the ability to attain goals in the face of obstacles by means of decisions based on rational (truth-obeying) rules.”

  This definition draws attention to two crucial points. First, intelligence is practical: it enables the overcoming of obstacles in pursuit of goals. To judge an act as intelligent you need to take into account what it is the individual wants to achieve. Someone may superficially appear a total fool (e.g., dropping things, forgetting others, making costly mistakes) but may still be acting intelligently. Given our capacity to reason about others’ minds, we may intend to be perceived as stupid—for instance, if we want someone else to believe we are not cut out for a task we do not wish to do. Without a goal an action can hardly be intelligent.4 Second, to intelligently achieve a goal the action must be based on reasoning by rational rules. If you get what you want by chance alone, you can hardly take the credit.

  Man is a rational animal—so at least I have been told. Throughout a long life, I have looked diligently for evidence in favour of this statement . . .

  —BERTRAND RUSSELL

  ALTHOUGH ARISTOTLE PROCLAIMED THAT HUMANS are rational animals, we often fail to live up to expectations. The psychologists Amos Tversky and Daniel Kahneman documented numerous biases and heuristics that people commonly use to reach decisions. For example, we frequently base judgments on how easily we can call to mind relevant information and make a decision as soon as we have a satisfying answer. We therefore often fail to act optimally given available information. Yet we tend to be supremely (over)confident about our judgments and generally resist evidence that demonstrates we are wrong. In hindsight we are sure we would have predicted what we now know to have happened. Some researchers (as well as some fictional characters such as Spock and Dr. Sheldon Cooper) take great delight in pointing out the logical shortcomings of human thinking, and many studies support them. Suffice it to say, I am often irrational—and so are you.

  In spite of this confession, humans evidently are capable of rational thinking. Bertrand Russell certainly was. We can try out potential solutions in our minds. We can infer and deduce, even if we prefer shortcuts. We can reason, even if we are often guided by emotions. We can think scientifically, even if we might prefer mystical explanations. A common bumper sticker in Australia reads, “Magic happens,” and I had to smile when I saw ABC Science retorting with a sticker of its own: “Logic happens.” It does.

  A fundamental capacity involved in any form of reasoning is the ability to store and process information in one’s mind. Differences in this storage capacity explain a lot about differences in reasoning and intelligence. Short-term memory has to be distinguished from long-term memory, because one can be intact even when the other fails. Most information is only briefly held in mind and then is lost forever. Try to recall the biases I listed two paragraphs ago. You may recall the gist, but you have probably lost much of the detail. Yet to follow a written passage, such as this one, you need to keep information in mind long enough to meaningfully link what you are reading with what you read before. Early research suggested that we can only hold up to seven (plus or minus two) chunks of information in short-term memory. When more information has to be considered, some of the earlier encoded information will be lost from short-term memory (unless it is transferred to a more long-term store). If I give you a number to read and you have to close your eyes and repeat it backwards in your mind, you should find it easy to do with five digits (48372) but much less so with ten (3747297497).
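
  As a concrete illustration, here is a minimal Python sketch of such a backward digit span trial; the function names and the fixed random seed are my own choices for the example, not part of any standard test battery.

```python
import random

def backward_digit_span_trial(length, seed=0):
    """Generate a digit sequence of the given length and return it together
    with the response that counts as correct: the same digits in reverse."""
    rng = random.Random(seed)
    digits = [rng.randrange(10) for _ in range(length)]
    return digits, list(reversed(digits))

def passed(response, correct):
    """A trial is passed only if every digit is reported in the right place."""
    return response == correct

# Five digits (like 4 8 3 7 2) sit comfortably within a three-to-five-chunk
# capacity; ten digits (like 3 7 4 7 2 9 7 4 9 7) usually do not.
for length in (5, 10):
    digits, correct = backward_digit_span_trial(length)
    print(length, "digits:", digits, "-> correct response:", correct)
```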

  This is a consistent finding—unless you cheat. One way to cheat is to chunk information together so that the task of remembering, say, the ten-letter sequence AC DCA BCL OL turns into a three-chunk sequence ACDC ABC LOL that is easier to remember because it occupies only three slots by linking together familiar letter sequences. People who excel at memory tasks usually employ a host of such mnemonic strategies to increase performance. When it is made impossible to use these strategies and to rehearse, usually by asking participants to do a distracter task in parallel, recent research suggests that human short-term memory capacity is limited to a mere three to five chunks. Short-term memory is limited indeed.
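
  The same recoding trick can be sketched in a few lines of Python, assuming a hypothetical inventory of familiar patterns (ACDC, ABC, LOL); the ten letters then collapse into three chunks.

```python
# A rough sketch of recoding letters into familiar chunks. The chunk
# inventory and function names are hypothetical, chosen only to mirror
# the example in the text.
FAMILIAR_CHUNKS = {"ACDC", "ABC", "LOL"}

def recode(letters, chunks=FAMILIAR_CHUNKS, max_len=4):
    """Greedily replace runs of letters with known chunks; anything that
    does not match stays as a single-letter item."""
    items, i = [], 0
    while i < len(letters):
        for size in range(max_len, 1, -1):  # try the longest chunks first
            if i + size <= len(letters) and letters[i:i + size] in chunks:
                items.append(letters[i:i + size])
                i += size
                break
        else:
            items.append(letters[i])        # no familiar chunk: one slot per letter
            i += 1
    return items

print(list("ACDCABCLOL"))    # ten separate items to hold in mind
print(recode("ACDCABCLOL"))  # ['ACDC', 'ABC', 'LOL'], only three chunks
```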

  Psychologists these days are inclined to speak of “working memory” rather than short-term memory, because the system is not merely a passive information store.5 Working memory is our capacity to hold and manipulate chunks of information in our minds. We use working memory in all manner of mental activity, from simple tasks such as rehearsing a telephone number to creative endeavors such as designing a house. It is the workbench for our conscious mental operations. We can disengage from perception and imagine alternative scenarios, such as those we need for mind reading and time traveling. In a sense, working memory is the stage in the theater metaphor of mental scenario building. It allows us to reason offline, as it were. With sufficient working-memory capacity we can temporarily bind several concepts and reflect on their relationships. Thinking about thinking and other embedded processes are only possible when one can juggle several chunks of information in working memory.

  Working-memory capacity constrains the number of relations one can consider together. This accounts for major differences in intelligence. Indeed, it has been established that how well people do on working-memory tasks predicts how well they score on reasoning and intelligence tests. As much as half of the variability in IQ can be explained by variability in working memory.

  Children increase their working-memory capacity steadily between ages four and eleven, and these increases have been linked to the kind of tasks they can solve. My colleague Graeme Halford has made the case that toddlers only have the capacity to bind two concepts in working memory and can hence only understand simple relationships, such as the concept “smaller,” as in one thing being smaller than another. Preschoolers develop a capacity to process the relationship between three variables such that they can compute formal additions (e.g., 4 plus 5 equals 9). Only later still do they become able to consider four items and so can compute complex relations such as proportions (e.g., is 2 to 3 equivalent to 6 to 9?). Halford and colleagues have argued that many changes in reasoning capacity during a child’s development can be explained in terms of the growing capacity to deal with processing load.
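
  As a rough illustration (my own rendering, not Halford’s formal relational-complexity model), each of the following functions must relate one more element than the last, matching the progression just described.

```python
from fractions import Fraction

def smaller(a, b):
    """Binary relation: two elements held together (the toddler-level task)."""
    return a < b

def addition_holds(a, b, total):
    """Ternary relation: three elements must be related at once."""
    return a + b == total

def proportion_equivalent(a, b, c, d):
    """Quaternary relation: four elements, as in 'is 2 to 3 equivalent to 6 to 9?'"""
    return Fraction(a, b) == Fraction(c, d)

print(smaller(3, 7))                      # True
print(addition_holds(4, 5, 9))            # True
print(proportion_equivalent(2, 3, 6, 9))  # True, since 2/3 equals 6/9
```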

  One persistent problem with this theory, however, is that processes and concepts may be chunked, just as numbers and letters can. Halford gives the example of the concept “speed,” which can be represented as distance traveled divided by time, but turns into a single variable when simply read as a pointer on a dial. So a three-year-old might talk sensibly about speed without considering the relation between distance and time. But she cannot answer questions such as “How does speed change if we cover the same distance in half the time?” until she can entertain these relationships in working memory. Limits in working-memory capacity constrain reasoning.6

  Temporary storage and processing space are important for our ability to imagine multiple mental scenarios, to integrate them into a larger narrative, and to compare and evaluate them. They are essential for creating any kind of nested, recursive thought. Therefore, sufficient working-memory capacity is critical for language, mental time travel, and theory of mind. It is now widely discussed as a potentially crucial factor in human cognitive evolution. Yet there is more to our smarts than a simple capacity increase.

  One way in which humans radically improve their imaginative capacity is better chunking. We can treat mental scenarios themselves as single chunks of information and embed them in more complex trains of thought. In this way we can use the limited working-memory platform to reflect on scenarios and consider their respective likelihoods and desirability. We can hierarchically organize them and construct higher-order (meta) scenarios. For example, the idea of, say, “getting a degree” consists of numerous scenarios involving lectures, study, and exams. By chunking them under this one heading—represented, for example, by an image of a framed diploma—we can reflect on the conglomeration of all these activities, without all the details. The image acts as a placeholder, allowing us to reason about the value of the achievement and the opportunities it would bring, without having to simulate the day-to-day activities that would get us there. Thus we are able to use placeholders to represent (symbolize) complex propositions and treat them as one mental chunk.

  Clever chunking and embedding allow us to decontextualize: to think abstractly, without the clutter of the concrete. Because this thinking is no longer closely tied to specifics, we can apply what we learn in one context to any other. Cooking, as you may recall, affords us innumerable metaphors. I am not going to mince words: this capacity is an essential ingredient in the recipe of the human mind. Such decontextualized thinking allows us to use metaphors, infer and deduce unseen forces, build general theories, and consider logical coherence. So we come to be able to form, and reason about, abstract concepts such as the economy, nouns, or evolution. This system gives us supreme flexibility and potential.

  Much of our thinking is abstract rather than episodic. Yet it has its roots in our capacity to generate scenarios, substitute them with placeholders, and recursively treat them as chunks of information.

  THERE IS YET ANOTHER PERSPECTIVE on human intelligence and scenario building that requires attention. Robert Sternberg suggests that in addition to analytical and practical intelligence, we need to acknowledge that people differ in their imagination and creativity. These are essential aspects of our intellect. Indeed, one of the most famously smart people, Albert Einstein, once said, “Imagination is more important than knowledge.”

  We can mentally build scenarios of things that are not real (yet). We use imagination to design and innovate in numerous domains, such as architecture, art, fashion, literature, science, and technology. You do not need to be a genius to be creative in all sorts of ordinary contexts, such as cooking, gardening, playing sports, and fixing your car. In sheds and workshops around the world countless functional and aesthetic objects are created every day. As we have seen, when you speak, you easily generate entirely novel sentences. Although some individuals seem more creative than others, every one of us has immense mental power to conjure up ideas, stories, and solutions to problems.

  The imagination is one of the highest prerogatives of man. By this faculty he unites, independently of the will, former images and ideas, and thus creates brilliant and novel results.

  —CHARLES DARWIN

  A RECURRING THEME HAS BEEN that recursion is a key mechanism that unites “former images and ideas,” allowing us to produce novelty in language, music, technology, and art through recombination. Generating novel content is not enough, however, unless we want to grant creativity to a random number generator. Creativity also requires a capacity to assess what is generated.

  We sometimes disagree with each other’s evaluations, of course. Indeed, objectively assessing creativity is notoriously difficult. What I think is creative may seem derivative to you and vice versa. Researchers have developed simple tests in their attempts to quantify creativity. In so-called divergent-thinking tasks, for instance, participants are asked questions such as, “Tell me all the things you can do with a newspaper,” and the researcher records the number of appropriate answers a child comes up with. Sometimes they also give scores for the originality of responses (e.g., if no other person in the sample came up with, say, “make a paper hat,” then that answer receives an originality point). To generate appropriate answers, children need to search their own knowledge base and assess options. This thinking about knowledge may require similar capacities as theory of mind tasks. Indeed, in a couple of early studies Claire Fletcher-Flinn and I found associations between children’s theory of mind and divergent-thinking scores.7 Once children passed false-belief tasks, they generated more answers as well as more original answers.
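
  A minimal sketch of that scoring scheme, with hypothetical children and answers, might look like this: fluency is simply the count of appropriate answers, and an answer earns an originality point if no one else in the sample gave it.

```python
from collections import Counter

def score_divergent_thinking(responses_by_child):
    """Return a fluency and originality score per child, where an answer is
    original if it appears exactly once across the whole sample."""
    all_answers = Counter(
        answer for answers in responses_by_child.values() for answer in answers
    )
    scores = {}
    for child, answers in responses_by_child.items():
        fluency = len(answers)
        originality = sum(1 for a in answers if all_answers[a] == 1)
        scores[child] = {"fluency": fluency, "originality": originality}
    return scores

# Hypothetical sample of answers to "What can you do with a newspaper?"
sample = {
    "child_1": ["read it", "make a paper hat", "wrap fish"],
    "child_2": ["read it", "light a fire"],
    "child_3": ["read it", "light a fire", "line a bird cage"],
}
print(score_divergent_thinking(sample))
```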

  Human generativity coupled with our ability to mentally project ourselves into future scenarios enables us to prudently design aspects of our environment. Designing is the capacity to imagine a new object or situation with a specific function or aesthetic in mind. Design is not limited to professional architects or couturiers but includes everyday activities such as arranging a flower bouquet or one’s living room in a premeditated fashion. When we design objects, we combine and recombine basic elements recursively and appraise their imagined constellation in terms of the desired function. Rather than adapting to the environment, we have increasingly used this design capacity to flexibly shape our world to meet our fancy. We like a challenge, and we even invent novel problems to solve. Sudoku, anyone?

  ANIMALS, LIKE HUMANS, PRODUCE ARTIFACTS that significantly change their environments. Termites create mounds, spiders spin webs, and beavers build dams. Yet even the most impressive of these constructions, such as the elaborate bowers of Vogelkop bowerbirds, may not be based on a reasoned plan. All members of these species (or of the relevant sex) build the objects in question. Furthermore, they all seem to build only one or a few types of items. There is no evidence of the open-ended flexibility that characterizes human design. But perhaps this underestimates their competence. Some animals use tools, and a few species even make tools. Great apes, as we have seen, have demonstrated at least some capacity for imagining alternative worlds. Various creatures act in ways that seem intelligent and creative. Consider the Queensland jumping spider, Portia. It hunts for other spiders, taking detours to abseil on top of them or moving across its prey’s web only when wind and other disturbances offer a smoke screen. Many examples of rather clever-looking behavior exist in a variety of nonhuman taxa. Is this not intelligence?

 
