Other studies suggest that the kind of mental calisthenics we engage in online may lead to a small expansion in the capacity of our working memory.45 That, too, would help us to become more adept at juggling data. Such research “indicates that our brains learn to swiftly focus attention, analyze information, and almost instantaneously decide on a go or no-go decision,” says Gary Small. He believes that as we spend more time navigating the vast quantity of information available online, “many of us are developing neural circuitry that is customized for rapid and incisive spurts of directed attention.”46 As we practice browsing, surfing, scanning, and multitasking, our plastic brains may well become more facile at those tasks.
The importance of such skills shouldn’t be taken lightly. As our work and social lives come to center on the use of electronic media, the faster we’re able to navigate those media and the more adroitly we’re able to shift our attention among online tasks, the more valuable we’re likely to become as employees and even as friends and colleagues. As the writer Sam Anderson put it in “In Defense of Distraction,” a 2009 article in New York magazine, “Our jobs depend on connectivity” and “our pleasure-cycles—no trivial matter—are increasingly tied to it.” The practical benefits of Web use are many, which is one of the main reasons we spend so much time online. “It’s too late,” argues Anderson, “to just retreat to a quieter time.” 47
He’s right, but it would be a serious mistake to look narrowly at the Net’s benefits and conclude that the technology is making us more intelligent. Jordan Grafman, head of the cognitive neuroscience unit at the National Institute of Neurological Disorders and Stroke, explains that the constant shifting of our attention when we’re online may make our brains more nimble when it comes to multitasking, but improving our ability to multitask actually hampers our ability to think deeply and creatively. “Does optimizing for multitasking result in better functioning—that is, creativity, inventiveness, productiveness? The answer is, in more cases than not, no,” says Grafman. “The more you multitask, the less deliberative you become; the less able to think and reason out a problem.” You become, he argues, more likely to rely on conventional ideas and solutions rather than challenging them with original lines of thought.48 David Meyer, a University of Michigan neuroscientist and one of the leading experts on multitasking, makes a similar point. As we gain more experience in rapidly shifting our attention, we may “overcome some of the inefficiencies” inherent in multitasking, he says, “but except in rare circumstances, you can train until you’re blue in the face and you’d never be as good as if you just focused on one thing at a time.” 49 What we’re doing when we multitask “is learning to be skillful at a superficial level.”50 The Roman philosopher Seneca may have put it best two thousand years ago: “To be everywhere is to be nowhere.”51
In an article published in Science in early 2009, Patricia Greenfield, a prominent developmental psychologist who teaches at UCLA, reviewed more than fifty studies of the effects of different types of media on people’s intelligence and learning ability. She concluded that “every medium develops some cognitive skills at the expense of others.” Our growing use of the Net and other screen-based technologies has led to the “widespread and sophisticated development of visual-spatial skills.” We can, for example, rotate objects in our minds better than we used to be able to. But our “new strengths in visual-spatial intelligence” go hand in hand with a weakening of our capacities for the kind of “deep processing” that underpins “mindful knowledge acquisition, inductive analysis, critical thinking, imagination, and reflection.”52 The Net is making us smarter, in other words, only if we define intelligence by the Net’s own standards. If we take a broader and more traditional view of intelligence—if we think about the depth of our thought rather than just its speed—we have to come to a different and considerably darker conclusion.
Given our brain’s plasticity, we know that our online habits continue to reverberate in the workings of our synapses when we’re not online. We can assume that the neural circuits devoted to scanning, skimming, and multitasking are expanding and strengthening, while those used for reading and thinking deeply, with sustained concentration, are weakening or eroding. In 2009, researchers from Stanford University found signs that this shift may already be well under way. They gave a battery of cognitive tests to a group of heavy media multitaskers as well as a group of relatively light multitaskers. They found that the heavy multitaskers were much more easily distracted by “irrelevant environmental stimuli,” had significantly less control over the contents of their working memory, and were in general much less able to maintain their concentration on a particular task. Whereas the infrequent multitaskers exhibited relatively strong “top-down attentional control,” the habitual multitaskers showed “a greater tendency for bottom-up attentional control,” suggesting that “they may be sacrificing performance on the primary task to let in other sources of information.” Intensive multitaskers are “suckers for irrelevancy,” commented Clifford Nass, the Stanford professor who led the research. “Everything distracts them.”53 Michael Merzenich offers an even bleaker assessment. As we multitask online, he says, we are “training our brains to pay attention to the crap.” The consequences for our intellectual lives may prove “deadly.”54
The mental functions that are losing the “survival of the busiest” brain cell battle are those that support calm, linear thought—the ones we use in traversing a lengthy narrative or an involved argument, the ones we draw on when we reflect on our experiences or contemplate an outward or inward phenomenon. The winners are those functions that help us speedily locate, categorize, and assess disparate bits of information in a variety of forms, that let us maintain our mental bearings while being bombarded by stimuli. These functions are, not coincidentally, very similar to the ones performed by computers, which are programmed for the high-speed transfer of data in and out of memory. Once again, we seem to be taking on the characteristics of a popular new intellectual technology.
ON THE EVENING of April 18, 1775, Samuel Johnson accompanied his friends James Boswell and Joshua Reynolds on a visit to Richard Owen Cambridge’s grand villa on the banks of the Thames outside London. They were shown into the library, where Cambridge was waiting to meet them, and after a brief greeting Johnson darted to the shelves and began silently reading the spines of the volumes arrayed there. “Dr. Johnson,” said Cambridge, “it seems odd that one should have such a desire to look at the backs of books.” Johnson, Boswell would later recall, “instantly started from his reverie, wheeled about, and replied, ‘Sir, the reason is very plain. Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information upon it.’”55
The Net grants us instant access to a library of information unprecedented in its size and scope, and it makes it easy for us to sort through that library—to find, if not exactly what we were looking for, at least something sufficient for our immediate purposes. What the Net diminishes is Johnson’s primary kind of knowledge: the ability to know, in depth, a subject for ourselves, to construct within our own minds the rich and idiosyncratic set of connections that give rise to a singular intelligence.
A Digression on the Buoyancy of IQ Scores
THIRTY YEARS AGO, James Flynn, then the head of the political science department at New Zealand’s University of Otago, began studying historical records of IQ tests. As he dug through the numbers, stripping out the various scoring adjustments that had been made through the years, he discovered something startling: IQ scores had been rising steadily—and pretty much everywhere—throughout the century. Controversial when originally reported, the Flynn effect, as the phenomenon came to be called, has been confirmed by many subsequent studies. It’s real.
Ever since Flynn made his discovery, it has provided a ready-made brickbat to hurl at anyone who suggests that our intellectual powers may be on the wane: If we’re so dumb, why do we keep getting smarter? The Flynn effect has been used to defend TV shows, video games, personal computers, and, most recently, the Internet. Don Tapscott, in Grown Up Digital, his paean to the first generation of “digital natives,” counters arguments that the extensive use of digital media may be dumbing kids down by pointing out, with a nod to Flynn, that “raw IQ scores have been going up three points a decade since World War II.”1
Tapscott’s right about the numbers, and we should certainly be heartened by the rise in IQ scores, particularly since the gains have been sharpest among segments of the population whose scores have lagged in the past. But there are good reasons to be skeptical of any claim that the Flynn effect proves that people are “smarter” today than they used to be or that the Internet is boosting the general intelligence of the human race. For one thing, as Tapscott himself notes, IQ scores have been going up for a very long time—since well before World War II, in fact—and the pace of increase has remained remarkably stable, varying only slightly from decade to decade. That pattern suggests that the rise probably reflects a deep and persistent change in some aspect of society rather than any particular recent event or technology. The fact that the Internet began to come into widespread use only about ten years ago makes it all the more unlikely that it has been a significant force propelling IQ scores upward.
Other measures of intelligence don’t show anything like the gains we’ve seen in overall IQ scores. In fact, even IQ tests have been sending mixed signals. The tests have different sections, which measure different aspects of intelligence, and performance on them has varied widely. Most of the increase in overall scores can be attributed to strengthening performance in tests involving the mental rotation of geometric forms, the identification of similarities between disparate objects, and the arrangement of shapes into logical sequences. Tests of memorization, vocabulary, general knowledge, and even basic arithmetic have shown little or no improvement.
Scores on other common tests designed to measure intellectual skills also seem to be either stagnant or declining. Scores on PSAT exams, which are given to high school juniors throughout the United States, did not increase at all during the years from 1999 to 2008, a time when Net use in homes and schools was expanding dramatically. In fact, while the average math scores held fairly steady during that period, dropping a fraction of a point, from 49.2 to 48.8, scores on the verbal portions of the test declined significantly. The average critical-reading score fell 3.3 percent, from 48.3 to 46.7, and the average writing-skills score dropped an even steeper 6.9 percent, from 49.2 to 45.8.2 Scores on the verbal sections of the SAT tests given to college-bound students have also been dropping. A 2007 report from the U.S. Department of Education showed that twelfth-graders’ scores on tests of three different kinds of reading—for performing a task, for gathering information, and for literary experience—fell between 1992 and 2005. Literary reading aptitude suffered the largest decline, dropping twelve percent.3
There are signs, as well, that the Flynn effect may be starting to fade even as Web use picks up. Research in Norway and Denmark shows that the rise in intelligence test scores began to slow in those countries during the 1970s and ’80s and that since the mid-1990s scores have either remained steady or fallen slightly.4 In the United Kingdom, a 2009 study revealed that the IQ scores of teenagers dropped by two points between 1980 and 2008, after decades of gains.5 Scandinavians and Britons have been among the world’s pacesetters in adopting high-speed Internet service and using multipurpose mobile phones. If digital media were boosting IQ scores, you’d expect to see particularly strong evidence in their results.
So what is behind the Flynn effect? Many theories have been offered, from smaller families to better nutrition to the expansion of formal education, but the explanation that seems most credible comes from James Flynn himself. Early in his research, he realized that his findings presented a couple of paradoxes. First, the steepness of the rise in test scores during the twentieth century suggests that our forebears must have been dimwits, even though everything we know about them tells us otherwise. As Flynn wrote in his book What Is Intelligence?, “If IQ gains are in any sense real, we are driven to the absurd conclusion that a majority of our ancestors were mentally retarded.”6 The second paradox stems from the disparities in the scores on different sections of IQ tests: “How can people get more intelligent and have no larger vocabularies, no larger stores of general information, no greater ability to solve arithmetical problems?”7
After mulling over the paradoxes for many years, Flynn came to the conclusion that the gains in IQ scores have less to do with an increase in general intelligence than with a transformation in the way people think about intelligence. Up until the end of the nineteenth century, the scientific view of intelligence, with its stress on classification, correlation, and abstract reasoning, remained fairly rare, limited to those who attended or taught at universities. Most people continued to see intelligence as a matter of deciphering the workings of nature and solving practical problems—on the farm, in the factory, at home. Living in a world of substance rather than symbol, they had little cause or opportunity to think about abstract shapes and theoretical classification schemes.
But, Flynn realized, that all changed over the course of the last century when, for economic, technological, and educational reasons, abstract reasoning moved into the mainstream. Everyone began to wear, as Flynn colorfully puts it, the same “scientific spectacles” that were worn by the original developers of IQ tests.8 Once he had that insight, Flynn recalled in a 2007 interview, “I began to feel that I was bridging the gulf between our minds and the minds of our ancestors. We weren’t more intelligent than they, but we had learnt to apply our intelligence to a new set of problems. We had detached logic from the concrete, we were willing to deal with the hypothetical, and we thought the world was a place to be classified and understood scientifically rather than to be manipulated.” 9
Patricia Greenfield, the UCLA psychologist, came to a similar conclusion in her Science article on media and intelligence. Noting that the rise in IQ scores “is concentrated in nonverbal IQ performance,” which is “mainly tested through visual tests,” she attributed the Flynn effect to an array of factors, from urbanization to the growth in “societal complexity,” all of which “are part and parcel of the worldwide movement from smaller-scale, low-tech communities with subsistence economies toward large-scale, high-tech societies with commercial economies.”10
We’re not smarter than our parents or our parents’ parents. We’re just smart in different ways. And that influences not only how we see the world but also how we raise and educate our children. This social revolution in how we think about thinking explains why we’ve become ever more adept at working out the problems in the more abstract and visual sections of IQ tests while making little or no progress in expanding our personal knowledge, bolstering our basic academic skills, or improving our ability to communicate complicated ideas clearly. We’re trained, from infancy, to put things into categories, to solve puzzles, to think in terms of symbols in space. Our use of personal computers and the Internet may well be reinforcing some of those mental skills and the corresponding neural circuits by strengthening our visual acuity, particularly our ability to speedily evaluate objects and other stimuli as they appear in the abstract realm of a computer screen. But, as Flynn stresses, that doesn’t mean we have “better brains.” It just means we have different brains.11
The Church of Google
Not long after Nietzsche bought his mechanical writing ball, an earnest young man named Frederick Winslow Taylor carried a stopwatch into the Midvale Steel plant in Philadelphia and began a historic series of experiments aimed at boosting the efficiency of the plant’s machinists. With the grudging approval of Midvale’s owners, Taylor recruited a group of factory hands, set them to work on various metalworking machines, and recorded and timed their every movement. By breaking down each job into a sequence of small steps and then testing different ways of performing them, he created a set of precise instructions—an “algorithm,” we might say today—for how each worker should work. Midvale’s employees grumbled about the strict new regime, claiming that it turned them into little more than automatons, but the factory’s productivity soared.1
More than a century after the invention of the steam engine, the Industrial Revolution had at last found its philosophy and its philosopher. Taylor’s tight industrial choreography—his “system,” as he liked to call it—was embraced by manufacturers throughout the country and, in time, around the world. Seeking maximum speed, maximum efficiency, and maximum output, factory owners used time-and-motion studies to organize their work and configure the jobs of their workers. The goal, as Taylor defined it in his celebrated 1911 treatise The Principles of Scientific Management, was to identify and adopt, for every job, the “one best method” of work and thereby to effect “the gradual substitution of science for rule of thumb throughout the mechanic arts.”2 Once his system was applied to all acts of manual labor, Taylor assured his many followers, it would bring about a restructuring not only of industry but of society, creating a utopia of perfect efficiency. “In the past the man has been first,” he declared; “in the future the system must be first.”3
Taylor’s system of measurement and optimization is still very much with us; it remains one of the underpinnings of industrial manufacturing. And now, thanks to the growing power that computer engineers and software coders wield over our intellectual and social lives, Taylor’s ethic is beginning to govern the realm of the mind as well. The Internet is a machine designed for the efficient, automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the “one best way”—the perfect algorithm—to carry out the mental movements of what we’ve come to describe as knowledge work.