The Neuroscience of Intelligence


by Richard J Haier


  The Blank Slate: The Modern Denial of Human Nature (Pinker, 2002). This is a comprehensive look at nature versus nurture issues from many perspectives. The argument is decidedly made in favor of nature.

  How Much Can We Boost IQ and Scholastic Achievement? (Jensen, 1969). This is possibly the most infamous paper in psychology and is the basis for most modern intelligence research.

  Cyril Burt: Fraud or Framed? (Mackintosh, 1995). This is a collection of essays on all sides of the Burt controversy.

  The IQ Controversy, the Media and Public Policy (Snyderman and Rothman, 1988). Based on survey data, this is a controversial book that argues that liberal bias systematically distorted the reporting of Jensen’s work and other genetic research on intelligence.

  Intelligence, Race, and Genetics: Conversations with Arthur R. Jensen (Jensen & Miele, 2002). This book offers an update of Jensen’s views in his own words.

  Nature via Nurture: Genes, Experience and What Makes Us Human (Ridley, 2003). Written for the public, this book clearly explains the concepts and techniques of behavioral genetics. Although it was published an eon ago in terms of scientific advancement, it shows that the case for genetic influences on intelligence is not new. This chapter updates his case with even stronger evidence.

  Chapter Three

  Peeking Inside the Living Brain: Neuroimaging Is a Game-changer for Intelligence Research

  The brain is a black box – we cannot see in it and must ignore it. (attributed to B.F. Skinner in the 1950s)

  … if Freud were alive today, he’d trade his couch for an MRI … (Richard Haier, video lecture #9, The Intelligent Brain, 2013)

  Learning Objectives

  How has neuroimaging technology advanced the study of human intelligence beyond psychometric methods?

  How do the basic technologies of PET and MRI differ?

  What was a surprising finding from the early PET studies of intelligence?

  Does imaging research indicate that there is an “intelligence center” in the brain?

  What brain areas are included in the PFIT model of intelligence?

  Introduction

  The next two chapters review brain-imaging studies of intelligence. This chapter gives a somewhat personal historical perspective on the early studies from 1988 to 2006, a period I describe as phase one in the application of modern brain-imaging technology to intelligence research. This phase began with the first positron emission tomography (PET) study of intelligence, published in 1988, and ends with a review of the relevant literature published in 2007. The 37 studies during this period reported several unexpected results and set the direction for current imaging/intelligence research. The chapter proceeds in roughly chronological order of publication to demonstrate how the early research unfolded, including my own; this perspective helps students understand how researchers advance from one set of findings to new questions. There are also basic descriptions of how the main imaging technologies work. In the next chapter (Chapter 4), we’ll see the subsequent and more sophisticated phase two of worldwide brain-imaging research on intelligence. Brain-imaging technology has significantly helped advance intelligence research from mainly psychometric methods (described in Chapter 1) to neuroscience approaches that can quantify brain characteristics. Brain imaging is a key development in this field, and that is why we devote two chapters to it.

  The early quantitative genetic research described in Chapter 2 provided the rationale for a biological component to intelligence and laid the foundation for neuroscience research just as powerful new neuroimaging methods were becoming available. Prior to the introduction of neuroimaging in the early 1980s, brain researchers were limited to indirect measurements of brain chemistry by-products found in blood, urine, and spinal fluid. EEG and evoked potential (EP) research allowed millisecond-by-millisecond measurements of brain activity, but technical issues like distortion of electrical signals by the scalp and poor spatial resolution limited the scope and interpretation of data. Today, EEG-based techniques are more sophisticated and include ways to map cortical activity (see Chapter 4). Inferences from studies of patients with brain damage and from autopsy studies similarly had only limited success in identifying brain/intelligence relationships. For example, some studies of patients with brain damage concluded that the frontal lobes were the seat of intelligence (Duncan et al., 1995), a conclusion we now know is oversimplified based on newer lesion studies (see Chapter 4). These early indirect research methods and their preliminary findings are summarized elsewhere in intelligence textbooks (Hunt, 2011; Mackintosh, 2011).

  3.1 The First PET Studies

  In the early 1980s, PET was a game-changer. Twenty years before the wide availability of magnetic resonance imaging (MRI, to be discussed later in this chapter), PET technology allowed researchers to see inside the brains of living people and make relatively high-resolution measurements of which brain areas are more or less active during mental activity. This differs dramatically from X-ray technology that had been available much earlier, including CAT scans. Whereas X-rays pass through the head and show brain tissue structure, they are silent as to brain activity. A CAT scan of a person looks the same whether the person is awake, asleep, doing mental arithmetic, or dead. Because the brain is soft tissue, X-rays pass through easily and brain pictures are not very detailed. By contrast, PET can quantify brain activity as glucose metabolism, blood flow, or in some cases, neurotransmitter activity. This is accomplished in a conceptually simple way. Radioactive tracers are injected into a person while they perform a cognitive task, and the brain areas that are most active during the task take up the most tracer. The radiation exposure is within limits set for medical uses. The subsequent PET scan detects the radioactivity, and mathematical models allow an image to be constructed showing the spatial locations where the varying amounts of radioactivity have accumulated.

  For example, a positron-emitting isotope like fluorine-18 can be attached to a special glucose called fluorodeoxyglucose (FDG). Because glucose, a sugar, is the energy supply of the brain, the harder any area of the brain is working, the more radioactive glucose is taken up and metabolically fixed in that part of the brain and the more positrons accumulate there. The positrons collide with electrons, which are naturally plentiful everywhere, and each collision releases energy in the form of two gamma rays, always at 180 degrees from each other. The 180-degree angle is a fact of physics, and millions of gamma rays are released from the FDG tracer. When the head is placed inside the PET scanner, which contains one or more rings of gamma ray detectors, the spots in the brain where the gamma rays originated can be reconstructed mathematically: when one gamma ray is detected at the same moment as a coincident gamma ray 180 degrees away, a positron must have decayed somewhere on the straight line connecting the two detections. With millions of these coincident detections, the spatial location of the accumulated positrons can be determined and the areas releasing the most gamma rays can be quantified. These are the areas most active during the FDG uptake, and the activation patterns will differ depending on the mental activity during the uptake. It takes about 32 minutes for the brain to take up the FDG tracer. This means that brain activity is summed over the 32 minutes, so the time resolution of FDG PET scans is very long: you cannot see how brain activity changes from second to second. However, radioactive oxygen instead of glucose can be used in PET to image blood flow with a time resolution of minutes. Other imaging techniques based on MRI have time resolutions of about 1–2 seconds, and newer methods like the magnetoencephalogram (MEG) show changes millisecond by millisecond. Compared to PET, MRI and MEG techniques also are far less intrusive (no injections or exposure to radioactivity), as we will detail in due course as they have been applied to intelligence research.
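  The coincidence-detection geometry just described can be sketched in a few lines of code. This is a toy illustration under simplifying assumptions (a 2D unit-radius detector ring, a single point source, no noise or attenuation), not real PET reconstruction software; it shows how back-projecting many lines of response recovers where the positrons decayed.

```python
import numpy as np

def backproject(lors, grid_size=64, extent=1.0):
    """Accumulate counts along each line of response (LOR) on a 2D grid.
    Each LOR is a pair of detector positions ((x1, y1), (x2, y2)); the
    annihilation happened somewhere on the segment joining them."""
    img = np.zeros((grid_size, grid_size))
    for (x1, y1), (x2, y2) in lors:
        # Sample points along the segment and count the grid cells they cross.
        for t in np.linspace(0.0, 1.0, 200):
            x = x1 + t * (x2 - x1)
            y = y1 + t * (y2 - y1)
            i = int((x + extent) / (2 * extent) * (grid_size - 1))
            j = int((y + extent) / (2 * extent) * (grid_size - 1))
            img[j, i] += 1
    return img

# Simulate a point source: each annihilation emits two gamma rays back to
# back (180 degrees apart), detected where they hit the unit-radius ring.
rng = np.random.default_rng(0)
source = np.array([0.3, -0.2])
lors = []
for _ in range(500):
    theta = rng.uniform(0.0, 2.0 * np.pi)          # random emission direction
    d = np.array([np.cos(theta), np.sin(theta)])
    # Intersections of the line source + s*d with the ring |p| = 1:
    # solve s^2 + 2*b*s + c = 0, where b = source.d and c = |source|^2 - 1.
    b = source @ d
    c = source @ source - 1.0
    s = np.sqrt(b * b - c)
    lors.append((tuple(source + (-b + s) * d), tuple(source + (-b - s) * d)))

img = backproject(lors)
peak_j, peak_i = np.unravel_index(np.argmax(img), img.shape)
# The brightest cell sits at (or next to) the simulated source location.
```

Real scanners use far more sophisticated reconstruction (e.g., filtered back-projection), but the underlying coincidence geometry is the same.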

  An advantage of PET is that the rate of glucose metabolism can be calculated from measurement of radioactivity decay in the blood periodically after the injection of tracer. The PET image shows a quantitative map of glucose metabolic rate (GMR) while the cognitive task was performed. The physics of fluorine-18 give the radioactive glucose a half-life of about 110 minutes, so the logistics of a PET study are formidable. The steps include manufacturing the fluorine-18 in a cyclotron, attaching it to glucose in a nearby hot lab, injecting it into a person while they perform a cognitive task for about 32 minutes, and then scanning for about 45–60 minutes to acquire millions of coincident gamma ray detections (the glucose is metabolically fixed, so scanning happens after the task is complete and the image shows glucose uptake during the task). The expense is similarly formidable, usually about $2,500 per scan. There are other isotopes that can be used to create tracers that show blood flow and some neurotransmitter activity. The PET images are constructed as slices that cover the entire brain, with color-coding showing rates of glucose metabolism. In the same person, PET images will differ depending on whether the person is awake or asleep or doing a cognitive task like solving problems on the Raven’s test of abstract reasoning described in Chapter 1.
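  The half-life arithmetic behind these logistics is simple exponential decay. A minimal sketch (the 110-minute half-life comes from the text above; the task and scan durations are the approximate figures given):

```python
def fraction_remaining(minutes, half_life=110.0):
    """Radioactive decay: the fraction of fluorine-18 left after `minutes`
    is 0.5 raised to the power (elapsed time / half-life)."""
    return 0.5 ** (minutes / half_life)

# After the ~32-minute uptake task plus a 60-minute scan, a bit more than
# half of the original activity is still present.
left = fraction_remaining(32 + 60)
```

This is why the cyclotron and hot lab must be nearby: waiting even a couple of hours costs half the signal.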

  I first learned about PET when I worked in the Intramural Research Program at the NIMH in the early 1980s and recognized the potential for intelligence research. Before NIMH took delivery of one of the very first PET scanners available, however, I left for Brown University, where I did rudimentary EEG/EP mapping of brain activity (proudly with an Apple II Plus) and related it to Raven’s scores (Haier et al., 1983). When my former NIMH colleague, Monte Buchsbaum, relocated to the University of California, Irvine (UCI) and acquired a new PET scanner, I took the opportunity to join him and moved to California. In the early 1980s, most of the first PET research was on schizophrenia and other psychiatric disorders; PET scans for psychological studies were rare. The first research project I was able to undertake in 1987 was based on only eight scans that were provided without charge as a reward for a successful fundraising effort (the politics of scan access also were a formidable challenge, and still are). I used those eight scans to ask a simple question: Where in the brain is intelligence?

  In 1988 we published the first PET study of intelligence (Haier et al., 1988). We had the eight male volunteers take the Raven’s Advanced Progressive Matrices (RAPM) test with 36 items. These included some very hard items to create sufficient variance in a college sample in order to avoid the problem of restricted range. Remember, the Raven’s is a non-verbal test of abstract reasoning that is one of the best single estimates of the g-factor. After each participant completed a practice set of 12 items and began working on the 36 test items, we injected the radioactive glucose used to label the parts of the brain working the hardest while the person was solving the problems. After 32 minutes of working on the items, we moved the person into the PET scanner to see where in the brain there was increased activity compared to other control individuals doing a simple test of attention that required no problem-solving.

  When we did the typical analysis and compared GMR between the group doing the RAPM and the group doing the attention task, several areas across the brain cortex were statistically different. We went a step further that was not typical, but it was logical from the perspective of individual differences. There were a range of RAPM scores, so we correlated the scores to glucose rate in each brain area that was different from the attention control group. There were significant correlations, but to our surprise, all of the correlations were negative. In other words, the individuals with the highest test scores showed the lowest activity in the brain areas that differed between the groups. This inverse relationship is shown in Figure 3.1.
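  The analysis step just described, correlating test scores with regional GMR across individuals, is an ordinary Pearson correlation computed region by region. Here is a sketch with made-up numbers (eight hypothetical volunteers with an efficiency-style inverse relationship built in; these are not the study’s actual data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8  # eight volunteers, as in the 1988 study

raven_scores = rng.uniform(10, 34, n)   # hypothetical RAPM scores
# Hypothetical regional glucose metabolic rate that decreases as scores
# rise, plus noise: an efficiency-style inverse relationship.
gmr = 40.0 - 0.5 * raven_scores + rng.normal(0.0, 1.5, n)

r = np.corrcoef(raven_scores, gmr)[0, 1]
# r comes out strongly negative: higher scorers show lower activity.
```

A negative r for a region means exactly what the text describes: within that region, the people who scored highest used the least glucose.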

  Figure 3.1 Brain activity assessed with PET during Raven’s Test. Red and yellow show greatest activity in units of glucose metabolic rate. The person with the highest test score (images on right) shows lower brain activity during the test, consistent with brain efficiency related to intelligence (courtesy Richard Haier).

  The two images on the right are from one person doing the RAPM and the two images on the left are from another person doing the RAPM. These are horizontal (axial) slices through the top and center of the brain. All images are shown with the same color scale of glucose metabolism so you can compare them easily. Red and yellow show the highest activity, blue and black show the lowest. The person on the left shows much more activity in both slices than the person on the right (top of image is front of the brain). However, the person on the left with the very active brain actually had the lowest RAPM test score of only 11; the person on the right had the highest score of 33. No one saw this coming. It seemed backwards. More brain activity went with worse performance. What could this mean?

  3.2 Brain Efficiency

  At the time, this counter-intuitive result suggested to us that it’s not how hard your brain works that makes you smart, it’s how efficiently it works. Based on this result, we proposed the brain efficiency hypothesis of intelligence: higher intelligence requires less brainwork. About the same time, another group reported inverse correlations in multiple areas of the cortex between GMR and scores on a test of verbal fluency, another test with a high g-loading (Parks et al., 1988). They scanned 16 subjects while the subjects performed a verbal fluency test. During the test, GMR increased compared to another 35 controls scanned in a resting state. The correlations between GMR and scores on verbal fluency were negative in frontal, temporal, and parietal areas. Similarly, a third group of researchers (Boivin et al., 1992) scanned 33 adults who also performed a verbal fluency test. They found both positive and negative correlations between scores and GMR across the cortex. Negative correlations were found in frontal areas (left and right) and positive correlations were in temporal areas, especially in the left hemisphere. Their participants included a wide age range (21–71 years old) and combined males and females, but statistically removing age and IQ had little apparent effect on the results (although no sex-specific analyses were reported). It should be noted that by today’s standards of image analysis, all these studies used rudimentary methods for defining cortical regions. Nonetheless, the negative correlations found during cognitive activation were unexpected and, for many cognitive psychologists, hard to believe.

  Since this surprising finding, many researchers have been trying to understand how exactly brain efficiency might relate to intelligence. We will return to the efficiency concept in Chapter 4 as we detail recent studies that show the concept is still viable. Back in 1988 we started thinking about how learning, a key component of intelligence, might make the brain more efficient. When you learn something like driving a car, for example, doesn’t your brain get more efficient so you now can drive in traffic and have a conversation at the same time, something not possible that very first day you were concentrating on driving back and forth in a big empty parking lot?

  We decided to do a PET study of learning so we turned to Tetris, a computer game just out at the time, and now one of the most popular games of all time. We scanned another eight volunteers before and after 50 days of practice on the original Tetris version (Haier et al., 1992b). The volunteers, all college males, used my office computer to practice because almost no one had computers at home in the early 1990s. Because access to PET was so limited, there were not many data about brain changes after learning a complex task. The natural expectation was that after learning to perform a complex task, brain activity would increase to reflect the harder mental work necessary to perform at a higher level. Based on our RAPM finding and the interpretation of efficiency, we hypothesized the opposite: after learning to perform better, brain activity would decrease.

  In case you don’t know Tetris, here’s how the original version works. Different shapes made from arrangements of four equal squares (there are seven different shapes) appear one at a time at the top of the screen and slowly fall to the bottom. You can move them right or left, rotate them, or drop them immediately by pressing buttons on the keyboard. The object is to place each shape so that the shapes form complete rows with no gaps at the bottom of the screen. When you complete a row, it disappears and all the shapes above drop down, changing the configuration as the shapes continue to drop. The goal is to complete as many rows as possible before the shapes not in complete rows stack up to the top of the play space, which ends the game. The better you do (the more rows you complete), the faster the shapes drop, so with practice, the game is faster and harder. Although the rules are quite simple to learn, playing and improvement are based on complex cognition including visual–spatial ability, planning ahead, attention, motor coordination, and fast reaction time.
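  The row-clearing mechanic described above is easy to express in code. A minimal sketch, assuming the board is a list of rows (top row first) of 0/1 cells, with 1 marking a filled square:

```python
def clear_full_rows(board):
    """Remove every row with no gaps; everything above falls down and
    fresh empty rows appear at the top. Returns (new_board, rows_cleared)."""
    width = len(board[0])
    kept = [row for row in board if 0 in row]       # incomplete rows survive
    cleared = len(board) - len(kept)
    return [[0] * width for _ in range(cleared)] + kept, cleared

board = [
    [0, 1, 0, 1],
    [1, 1, 1, 1],   # a completed row: it disappears
    [1, 0, 1, 1],
]
new_board, n = clear_full_rows(board)   # n == 1
```

The cognitive demand comes not from this rule but from deciding, under time pressure, where each falling shape should go.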

  On day 1, the first time any of the students ever played Tetris except for 10 minutes of practice to be sure they understood the game, they completed 10 rows per game on average while the radioactive glucose was labeling their brains during the first PET scan. This increased to nearly 100 rows per game during their second scan after the 50-day practice period. At the end of the practice period, some of the games were moving so fast you could scarcely believe a human being could make and execute decisions so quickly.

  Figure 3.2 shows what we found.

  Figure 3.2 Playing Tetris naïve vs. practiced PET images. Red and yellow show greatest activity in units of glucose metabolic rate. Brain activity decreases with practice, consistent with the brain becoming more efficient (courtesy Richard Haier).

  The image on the left shows the scan of a person’s first Tetris session. Notice all the high activity in red. The scan on the right is the same person after the 50 days of practice. There is less brain activity after practice even though the game was faster and harder. Our interpretation was that the brain learned what areas NOT to use and became more efficient with practice. We also noticed a trend in this study for the people with the highest intelligence test scores to show the greatest decreases in brain activity after practice (Haier et al., 1992a). In other words, the smartest people became the most brain-efficient after practice. Subsequent studies of this particular observation have been inconsistent, so the jury is still out on what the weight of evidence will show. Many later studies, however, have replicated decreased brain activity after learning, consistent with the brain efficiency hypothesis; others have not, so the conditions and variables relevant to learning and brain activity are still open questions. From the perspective of individual differences, the important variables may be within the person rather than within the task.

 
