Eight years after the initial PNAS report, the weight of evidence from independent studies shows essentially no transfer from memory training to intelligence scores measured independently of the training method (Redick, 2015). At this stage, most positive results about n-back training and intelligence come from Jaeggi and her colleagues. Despite some earlier enthusiasm for the possibility of increasing Gf with memory training (Sternberg, 2008), most researchers remain highly skeptical and have moved on to other projects. The Shmozart paper effectively ended most research on the Mozart Effect. It is not yet clear whether the compelling reports by Bogg and Lasecki and by Redick will have the same impact on the n-back, shman-back intelligence story.
An interesting coincidence is that Jaeggi relocated to the School of Education at my university a few years ago, and we have become friends despite a complete disagreement about whether memory training increases intelligence. Based on the history of similar claims, I suspect memory training research will become less directed at improving intelligence and more directed at other cognitive and educational variables.
In fact, there is growing interest in broader cognitive training using computer games to increase school achievement, as we see in the next case.
5.3 Case 3: Can Computer Games for Children Raise IQ?
There is a large research literature and considerable controversy about whether computer games may have any beneficial cognitive effects (two dueling pro and con “consensus” open letter statements were released in 2014: http://longevity3.stanford.edu/blog/2014/10/15/the-consensus-on-the-brain-training-industry-from-the-scientific-community-2/ and http://www.cognitivetrainingdata.org).
Whatever effects computer games may have on learning, attention, or memory (Bejjanki et al., 2014; Cardoso-Leite & Bavelier, 2014; Gozli et al., 2014), our focus here is on the narrow question of whether computer game training demonstrably increases intelligence. One research group from the University of California, Berkeley, claimed a 10-point increase in Performance IQ following computer game training of basic cognitive skills involved in reasoning and processing speed in a study of children from low socioeconomic backgrounds (Mackey et al., 2011). Reminiscent of the 2008 PNAS n-back study, the Berkeley researchers bluntly concluded that, “Counter to widespread belief, these results indicate that both fluid reasoning and processing speed are modifiable by training.” Let’s see.
The study involved 28 children aged 7–10 years. The students were randomly assigned to one of two training groups. One group (n = 17) trained on commercial computer games thought to foster fluid reasoning (i.e., fluid intelligence, or the g-factor), and the other group (n = 11) trained on commercial computer games thought to foster brain processing speed. Each training intervention took place during school for an hour on two days a week for eight weeks, although the average number of training days was about 12 for each group. On each training day, each group worked on four different computer games for about 15 minutes each. Pre- and post-training assessments for fluid reasoning (FR) were based on the Test of Nonverbal Intelligence (TONI, version 3); for processing speed (PS), two tests were used: Cross Out from the Woodcock–Johnson Revised test battery and Coding B from the Wechsler Intelligence Scale for Children–IV. The test details are not necessary to understand the results.
The group trained on FR showed about a 4.5-point increase in TONI non-verbal intelligence raw score on the post-test and no significant increase on the PS tests. For the group trained on PS, the opposite pattern was found: a significant increase in Coding score but no change in FR score. The authors translated the 4.5-point raw-score increase into an increase of 9.9 IQ points, more than half a standard deviation. Four of the children apparently increased by over 20 IQ points. They concluded that the main message was hope that cognitive gaps in disadvantaged kids, especially any related to FR, could be closed with a “mere 8 weeks of playing commercially available games.” News coverage followed. So did grant funding.
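How does a 4.5-point raw-score gain become 9.9 IQ points? The conversion is simply a rescaling to the standard IQ metric (mean 100, standard deviation 15). The figures below are a rough illustration only; the normative raw-score standard deviation (about 6.8 points) is back-calculated from the reported numbers rather than taken from the TONI-3 manual:

\[
\Delta \mathrm{IQ} \;=\; \frac{\Delta \mathrm{raw}}{SD_{\mathrm{raw}}} \times 15 \;\approx\; \frac{4.5}{6.8} \times 15 \;\approx\; 9.9 \text{ IQ points} \;\approx\; 0.66\,SD.
\]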
The key finding is shown in Figure 5.4. There are several problems that by now should be familiar to you. The sample sizes are very small, and IQ scores at this age often fluctuate by several points. The apparent IQ increases could easily be due to chance effects, which have undue influence in small samples, as noted in Section 5.2 (Bogg & Lasecki, 2015). This is more likely given that the children who improved the most on some training tasks were not the children who showed the greatest FR gains. Moreover, the children with the lowest FR scores before training showed the greatest increases after training, suggesting the effect was due, at least in part, to regression to the mean (statistically, repeat scores on average tend to move back toward the group mean). Overall, the results are interesting, but concluding that they indicate a new finding “counter to widespread belief” is dubious, especially when the widespread belief is based on the weight of evidence from hundreds of other studies.
Figure 5.4 The findings that countered “widespread belief” and were the basis for optimism for closing cognitive gaps for disadvantaged children. Panel (a) shows that computer game training on matrix reasoning (n = 17) increased reasoning scores but not speed of processing scores. Panel (b) shows that cognitive speed training (n = 11) increased coding scores but not reasoning scores.
Reprinted with permission, Mackey et al. (2011).
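The regression-to-the-mean point discussed above is easy to demonstrate with a small simulation. The sketch below is illustrative only; the test-retest correlation, sample size, and score scale are assumptions of mine, not parameters from the Mackey et al. (2011) study. Children selected because they scored lowest at pre-test end up closer to the group mean at post-test even though nothing at all happens between the two testings.

```python
# Minimal regression-to-the-mean simulation (illustrative; all parameters are assumed).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000              # simulated children
r = 0.7                 # assumed test-retest correlation of the measure
mean, sd = 100.0, 15.0  # IQ-style scale

# Each child has a stable ability component plus independent error at each testing,
# which yields pre/post scores correlated at r.
ability = rng.normal(0.0, np.sqrt(r), n)
pre  = mean + sd * (ability + rng.normal(0.0, np.sqrt(1 - r), n))
post = mean + sd * (ability + rng.normal(0.0, np.sqrt(1 - r), n))

# Select the lowest-scoring quartile at pre-test; no "intervention" occurs.
low = pre <= np.quantile(pre, 0.25)
print(f"Lowest quartile, pre-test mean:  {pre[low].mean():.1f}")   # well below 100
print(f"Lowest quartile, post-test mean: {post[low].mean():.1f}")  # drifts back toward 100
```

In this sketch the post-test mean of the low-scoring subgroup rises by several points with no training at all, which is why the pattern of pre-test-low children showing the largest gains is a warning sign rather than evidence of a training effect.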
This study may be the basis of some generic commercial claims that computer games can increase IQ (without specific attribution to this research study). I am unaware of any replication studies of the UC Berkeley findings, positive or negative, either by the original authors or by other researchers. This is odd given the claimed potential for these findings to overturn widely held beliefs. Most intelligence researchers remain highly skeptical of a 10-point IQ increase attributed to general cognitive training. A recent comprehensive study, for example, found virtually no relationship between video game experience and fluid intelligence in a large sample of young adults (Unsworth et al., 2015).
A number of commercial companies market computer-based training programs to parents and to school systems with the explicit or implied goal of closing cognitive gaps, especially for students from disadvantaged backgrounds (see Chapter 6 for more about SES and intelligence). Most reputable companies are careful to avoid making explicit claims about increasing intelligence. One company, however, claims in their 2014 report (downloaded from the Internet) that their brain-training program raises IQ an average of 15 points for their clients. Their clients who start the program with “severe cognitive weakness” show average gains of 22 IQ points. The report has many pages of impressive-looking statistical analyses, tables, and graphs that show apparently amazing results for users of their program, but it does not list a single publication where the statistics and claims have undergone independent peer review. Other companies sometimes cite individual published research reports, especially ones with small samples, as evidence for the validity of computer training programs to increase mental performance. This kind of cherry-picking, in which only studies that support a claim are cited while studies that do not are ignored, is quite common. Neuro-education and brain-based learning are attractive concepts for educators but, in my view, there is not yet a compelling weight of evidence of successful applications, so considerable caution is required (Geake, 2008, 2011; Howard-Jones, 2014). Potential buyers of such programs, especially of those claiming increases in intelligence, are advised to keep three words in mind before signing a contract or making a purchase: independent replication required.
Speaking of independent replication, none of the three studies discussed so far (the Mozart Effect, n-back training, and computer training) included any replication attempt in the original reports. There are other interesting commonalities among these studies. Each claimed a finding that overturned long-standing results from many previous studies. Each was based on small samples. Each measured putative cognitive gains with single test scores rather than extracting a latent factor like g from multiple measures. Each study’s primary author was a young investigator, and the more senior authors had few previous publications that depended on psychometric assessment of intelligence. In retrospect, is it surprising that numerous subsequent studies by independent, experienced investigators failed to replicate the original claims? There is a certain eagerness about showing that intelligence is malleable and can be increased with relatively simple interventions. This eagerness requires researchers to be extra cautious. Peer-reviewed publication of extraordinary claims requires extraordinary evidence, which is not apparent in Figures 5.1, 5.3, and 5.4. In my view, basic requirements for publication of “landmark” findings would start with replication data included along with the original findings. This would save many years of effort and expense trying to replicate provocative claims based on fundamentally flawed studies and weak results. It is a modest proposal, but probably unrealistic given academic pressures to publish and obtain grants.

Before leaving this section on increasing intelligence in children, there is another interesting and more optimistic report to consider. Whereas the three cases discussed so far are presented as cautionary examples, this one is a positive illustration of how progress in the field can be advanced more prudently. This report is based on meta-analyses of “nearly every available intervention involving children from birth to kindergarten” to increase intelligence (Protzko et al., 2013). These researchers from New York University (NYU) maintain the Database of Raising Intelligence, which includes studies designed to increase intelligence that have the following components: a sample drawn from a general, non-clinical population; a pure randomized controlled experimental design; a sustained intervention; and a widely accepted, standardized measure of intelligence as an outcome variable. Four meta-analyses are reported on the effects of dietary supplementation for pregnant mothers and neonates, early educational interventions, interactive reading, and sending a child to preschool. Here is a summary of the main results for each of these four analyses.
The nutrition research was limited mostly to studies of the long-chain fatty acid called PUFA (don’t ask why this name), an ingredient in breast milk necessary for normal brain development and function. This analysis was inspired by earlier evidence of higher IQ in breast-fed children compared to bottle-fed children (Anderson et al., 1999). The 2013 meta-analysis included 10 other studies with 844 total participants. The analysis suggested that long-chain PUFA used as a dietary supplement was associated with a 3.5-point IQ increase. However, a review of 84 related studies suggested several possible confounding factors, including that parents with higher IQs tended to breastfeed more. The conclusion was that the small IQ increase in children attributed to breastfeeding may actually be due to confounding factors, including the genetics of IQ (Walfisch et al., 2013). This was also the conclusion of an earlier prospective study of sibling pairs in which one sibling was breast-fed and the other was not (Der et al., 2006). Thus, the weight of evidence does not support breastfeeding as a way to increase a child’s IQ. Similar analyses for iron, zinc, vitamin B6, and multivitamin supplements were less encouraging for increasing IQ based on the available evidence.
The second meta-analysis focused on early education. In Chapter 2 we described a few key intervention studies that failed to show lasting IQ increases. The NYU analyses incorporated 19 studies going as far back as 1968, some with interventions lasting more than three years. Although some individual studies did show IQ increases for some infants, taken together the meta-analysis indicated no appreciable effect on IQ.

The third meta-analysis focused on interactive reading and incorporated 10 studies totaling 499 participants. For children under 4 years old, the meta-analysis indicated about a 6-point increase in IQ when the child was an active participant in the reading. The authors speculate that this intervention may influence language development, which then indirectly influences IQ. Active reading is now widely recommended to parents.

The fourth meta-analysis focused on preschool and included 16 studies of 7,370 participants, mostly from low-income family backgrounds. Taken together, the analysis indicated a 4-point increase in IQ, but up to a 7-point increase for the subset of programs that included a specific emphasis on language development. Interestingly, longer preschool attendance was not related to greater increases in IQ. How long any of the putative increases may last, and which brain mechanisms might be relevant, are not yet known.
Even if statistically significant, the reported IQ increases still are mostly about the size of the standard error of IQ tests, especially given that intelligence test scores in this young age range are less reliable and often fluctuate for many reasons over short periods of time. Many of the studies included in the four meta-analyses have the same small sample issues that characterized the three case studies and the n-back meta-analysis done by Au and colleagues on studies of memory training (Bogg & Lasecki, 2015; Redick, 2015). It is too early to know if the NYU meta-analyses will hold up as more data become available, so continued skepticism is warranted for any effect these interventions may have on intelligence. Nonetheless, the NYU researchers have provided a systematic, empirical basis for their conclusions and for their suggestions for additional intervention research in children.
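For perspective on the size of these gains, the standard error of measurement (SEM) of an IQ score depends on the test’s reliability. The numbers below are an illustration under assumed values (an IQ standard deviation of 15 and a test-retest reliability of about 0.90, which is plausible for a well-constructed test but is not taken from any specific manual):

\[
SEM \;=\; SD\,\sqrt{1 - r_{xx}} \;=\; 15\sqrt{1 - 0.90} \;\approx\; 4.7 \text{ IQ points},
\]

so single-retest gains on the order of 3–6 points fall within the range expected from measurement error alone, and the margin is wider still for young children, whose scores are less reliable.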
5.4 Where are the IQ Pills?
The genetic and neuroimaging studies described in Chapters 2, 3, and 4 provide compelling evidence that intelligence has a strong basis in neurobiology, neurochemistry, and neurodevelopment. The actual brain mechanisms that influence or control brain structures and functions related to intelligence, however, are not understood to any significant degree. If certain neurotransmitters, for example, are found to play a central role in relevant cognitive mechanisms (say, working memory), then drugs that increase or decrease the activity of those neurotransmitters may show effects on intelligence test scores. Synaptic events regulated by neurotransmitters may be the place for interventions. These include changing the level of a neurotransmitter, how fast it is replenished, or the sensitivity of the receptors that respond to it. On the other hand, if drugs are accidentally found to increase scores on IQ tests, inferences about how those drugs work on neurotransmitters in the synapse can generate new hypotheses about which brain mechanisms might be most relevant to intelligence. This logic for drug effects is the same as that applied in the intervention studies discussed earlier in this chapter. Drugs influence brain mechanisms more directly than memory training, for instance, so drugs may have greater intelligence-boosting potential. The study criteria for showing an effect of any drug on intelligence are also the same: a sample that includes a range of normal IQ scores, multiple measures of intelligence to extract a latent g-factor, double-blind placebo-controlled trials with random assignment, a dose-dependent response for any short-term effect (greater dose shows greater enhancement), a follow-up period to determine any lasting effects, and independent replication. And, of course, a ratio scale of intelligence would make an increase most convincing, although none yet exists (Haier, 2014). (See Textbox 6.1 for a possible way to define a ratio scale for intelligence.)
The Internet has countless entries for IQ-boosting drugs, and there are many peer-reviewed studies of cognitive-enhancing effects on learning, memory, and attention for drugs like nicotine (Heishman et al., 2010). Psychostimulant drugs used to treat attention deficit hyperactivity disorder (ADHD) and other clinical disorders of the brain are particularly popular candidates among students in high school, college, and university, and among adults without clinical conditions who desire cognitive enhancement for academic or vocational achievement. Many surveys show that such drugs already are widely used to enhance aspects of cognition, and a number of surrounding ethical issues have been discussed. Some of these issues are presented in Textbox 5.2. Overall, well-designed research studies do not strongly support such use (Bagot & Kaminer, 2014; Farah et al., 2014; Husain & Mehta, 2011; Ilieva & Farah, 2013; Smith & Farah, 2011). Even fewer studies are designed specifically to investigate drug effects directly on intelligence test scores in samples of people who do not have clinical problems. I could find no relevant meta-analysis that might support such use. In short, there is no compelling scientific evidence yet for an IQ pill. As we learn more about brain mechanisms and intelligence, however, there is every reason to believe that it will be possible to enhance the relevant brain mechanisms with drugs, perhaps existing ones or new ones. Research on treating Alzheimer’s disease, for example, may reveal specific brain mechanisms related to learning and memory that can be enhanced with new drugs significantly better than existing drugs. This prospect fuels intense research at many multinational pharmaceutical companies. If such drugs become available to enhance learning and memory in patients with Alzheimer’s disease, surely the effect of those drugs will be studied in non-patients to boost cognition.
Because there is a paucity of empirical evidence for raising intelligence, and because psychoactive drugs often have serious side effects, especially when their use is not monitored by a physician, no list of drugs claimed to increase intelligence appears in this book. In my view, there are none to list. The potential for drugs to boost intelligence, however, is tied directly to the extent to which the biological bases of intelligence are revealed, and, as described in previous chapters, the pace of discovery is increasing. Drugs, however, may not be the only way to tweak neurobiological processes. There are fascinating hints at other methods. We now turn to what may sound like science fiction efforts to enhance intelligence and related cognition. They are not fiction, and they are mind-blowing, almost literally.