In discussing the implications of his study, van Nimwegen suggested that programmers might want to design their software to be less helpful in order to force users to think harder. That may well be good advice, but it’s hard to imagine the developers of commercial computer programs and Web applications taking it to heart. As van Nimwegen himself noted, one of the long-standing trends in software programming has been the pursuit of ever more “user-friendly” interfaces. That’s particularly true on the Net. Internet companies are in fierce competition to make people’s lives easier, to shift the burden of problem solving and other mental labor away from the user and onto the microprocessor. A small but telling example can be seen in the evolution of search engines. In its earliest incarnation, the Google engine was a very simple tool: you entered a keyword into the search box, and you hit the Search button. But Google, facing competition from other search engines, like Microsoft’s Bing, has worked diligently to make its service ever more solicitous. Now, as soon as you enter the first letter of your keyword into the box, Google immediately suggests a list of popular search terms that begin with that letter. “Our algorithms use a wide range of information to predict the queries users are most likely to want to see,” the company explains. “By suggesting more refined searches up front, [we] can make your searches more convenient and efficient.”30
Automating cognitive processes in this way has become the modern programmer’s stock-in-trade. And for good reason: people naturally seek out those software tools and Web sites that offer the most help and the most guidance—and shun those that are difficult to master. We want friendly, helpful software. Why wouldn’t we? Yet as we cede to software more of the toil of thinking, we are likely diminishing our own brain power in subtle but meaningful ways. When a ditchdigger trades his shovel for a backhoe, his arm muscles weaken even as his efficiency increases. A similar trade-off may well take place as we automate the work of the mind.
Another recent study, this one on academic research, provides real-world evidence of the way the tools we use to sift information online influence our mental habits and frame our thinking. James Evans, a sociologist at the University of Chicago, assembled an enormous database on 34 million scholarly articles published in academic journals from 1945 through 2005. He analyzed the citations included in the articles to see if patterns of citation, and hence of research, have changed as journals have shifted from being printed on paper to being published online. Considering how much easier it is to search digital text than printed text, the common assumption has been that making journals available on the Net would significantly broaden the scope of scholarly research, leading to a much more diverse set of citations. But that’s not at all what Evans discovered. As more journals moved online, scholars actually cited fewer articles than they had before. And as old issues of printed journals were digitized and uploaded to the Web, scholars cited more recent articles with increasing frequency. A broadening of available information led, as Evans described it, to a “narrowing of science and scholarship.”31
In explaining the counterintuitive findings in a 2008 Science article, Evans noted that automated information-filtering tools, such as search engines, tend to serve as amplifiers of popularity, quickly establishing and then continually reinforcing a consensus about what information is important and what isn’t. The ease of following hyperlinks, moreover, leads online researchers to “bypass many of the marginally related articles that print researchers” would routinely skim as they flipped through the pages of a journal or a book. The quicker that scholars are able to “find prevailing opinion,” wrote Evans, the more likely they are “to follow it, leading to more citations referencing fewer articles.” Though much less efficient than searching the Web, old-fashioned library research probably served to widen scholars’ horizons: “By drawing researchers through unrelated articles, print browsing and perusal may have facilitated broader comparisons and led researchers into the past.”32 The easy way may not always be the best way, but the easy way is the way our computers and search engines encourage us to take.
Before Frederick Taylor introduced his system of scientific management, the individual laborer, drawing on his training, knowledge, and experience, would make his own decisions about how he did his work. He would write his own script. After Taylor, the laborer began following a script written by someone else. The machine operator was not expected to understand how the script was constructed or the reasoning behind it; he was simply expected to obey it. The messiness that comes with individual autonomy was cleaned up, and the factory as a whole became more efficient, its output more predictable. Industry prospered. What was lost along with the messiness was personal initiative, creativity, and whim. Conscious craft turned into unconscious routine.
When we go online, we, too, are following scripts written by others—algorithmic instructions that few of us would be able to understand even if the hidden codes were revealed to us. When we search for information through Google or other search engines, we’re following a script. When we look at a product recommended to us by Amazon or Netflix, we’re following a script. When we choose from a list of categories to describe ourselves or our relationships on Facebook, we’re following a script. These scripts can be ingenious and extraordinarily useful, as they were in the Taylorist factories, but they also mechanize the messy processes of intellectual exploration and even social attachment. As the computer programmer Thomas Lord has argued, software can end up turning the most intimate and personal of human activities into mindless “rituals” whose steps are “encoded in the logic of web pages.”33 Rather than acting according to our own knowledge and intuition, we go through the motions.
WHAT EXACTLY WAS going on in Hawthorne’s head as he sat in the green seclusion of Sleepy Hollow and lost himself in contemplation? And how was it different from what was going through the minds of the city dwellers on that crowded, noisy train? A series of psychological studies over the past twenty years has revealed that after spending time in a quiet rural setting, close to nature, people exhibit greater attentiveness, stronger memory, and generally improved cognition. Their brains become both calmer and sharper. The reason, according to attention restoration theory, or ART, is that when people aren’t being bombarded by external stimuli, their brains can, in effect, relax. They no longer have to tax their working memories by processing a stream of bottom-up distractions. The resulting state of contemplativeness strengthens their ability to control their mind.
The results of the most recent such study were published in Psychological Science at the end of 2008. A team of University of Michigan researchers, led by psychologist Marc Berman, recruited some three dozen people and subjected them to a rigorous, and mentally fatiguing, series of tests designed to measure the capacity of their working memory and their ability to exert top-down control over their attention. The subjects were then divided into two groups. Half of them spent about an hour walking through a secluded woodland park, and the other half spent an equal amount of time walking along busy downtown streets. Both groups then took the tests a second time. Spending time in the park, the researchers found, “significantly improved” people’s performance on the cognitive tests, indicating a substantial increase in attentiveness. Walking in the city, by contrast, led to no improvement in test results.
The researchers then conducted a similar experiment with another set of people. Rather than taking walks between the rounds of testing, these subjects simply looked at photographs of either calm rural scenes or busy urban ones. The results were the same. The people who looked at pictures of nature scenes were able to exert substantially stronger control over their attention, while those who looked at city scenes showed no improvement in their attentiveness. “In sum,” concluded the researchers, “simple and brief interactions with nature can produce marked increases in cognitive control.” Spending time in the natural world seems to be of “vital importance” to “effective cognitive functioning.”34
There is no Sleepy Hollow on the Internet, no peaceful spot where contemplativeness can work its restorative magic. There is only the endless, mesmerizing buzz of the urban street. The stimulations of the Net, like those of the city, can be invigorating and inspiring. We wouldn’t want to give them up. But they are, as well, exhausting and distracting. They can easily, as Hawthorne understood, overwhelm all quieter modes of thought. One of the greatest dangers we face as we automate the work of our minds, as we cede control over the flow of our thoughts and memories to a powerful electronic system, is the one that informs the fears of both the scientist Joseph Weizenbaum and the artist Richard Foreman: a slow erosion of our humanness and our humanity.
It’s not only deep thinking that requires a calm, attentive mind. It’s also empathy and compassion. Psychologists have long studied how people experience fear and react to physical threats, but it’s only recently that they’ve begun researching the sources of our nobler instincts. What they’re finding is that, as Antonio Damasio, the director of USC’s Brain and Creativity Institute, explains, the higher emotions emerge from neural processes that “are inherently slow.”35 In one recent experiment, Damasio and his colleagues had subjects listen to stories describing people experiencing physical or psychological pain. The subjects were then put into a magnetic resonance imaging machine and their brains were scanned as they were asked to remember the stories. The experiment revealed that while the human brain reacts very quickly to demonstrations of physical pain—when you see someone injured, the primitive pain centers in your own brain activate almost instantaneously—the more sophisticated mental process of empathizing with psychological suffering unfolds much more slowly. It takes time, the researchers discovered, for the brain “to transcend immediate involvement of the body” and begin to understand and to feel “the psychological and moral dimensions of a situation.”36
The experiment, say the scholars, indicates that the more distracted we become, the less able we are to experience the subtlest, most distinctively human forms of empathy, compassion, and other emotions. “For some kinds of thoughts, especially moral decision-making about other people’s social and psychological situations, we need to allow for adequate time and reflection,” cautions Mary Helen Immordino-Yang, a member of the research team. “If things are happening too fast, you may not ever fully experience emotions about other people’s psychological states.”37 It would be rash to jump to the conclusion that the Internet is undermining our moral sense. It would not be rash to suggest that as the Net reroutes our vital paths and diminishes our capacity for contemplation, it is altering the depth of our emotions as well as our thoughts.
There are those who are heartened by the ease with which our minds are adapting to the Web’s intellectual ethic. “Technological progress does not reverse,” writes a Wall Street Journal columnist, “so the trend toward multitasking and consuming many different types of information will only continue.” We need not worry, though, because our “human software” will in time “catch up to the machine technology that made the information abundance possible.” We’ll “evolve” to become more agile consumers of data.38 The writer of a cover story in New York magazine says that as we become used to “the 21st-century task” of “flitting” among bits of online information, “the wiring of the brain will inevitably change to deal more efficiently with more information.” We may lose our capacity “to concentrate on a complex task from beginning to end,” but in recompense we’ll gain new skills, such as the ability to “conduct 34 conversations simultaneously across six different media.”39 A prominent economist writes, cheerily, that “the web allows us to borrow cognitive strengths from autism and to be better infovores.”40 An Atlantic author suggests that our “technology-induced ADD” may be “a short-term problem,” stemming from our reliance on “cognitive habits evolved and perfected in an era of limited information flow.” Developing new cognitive habits is “the only viable approach to navigating the age of constant connectivity.”41
These writers are certainly correct in arguing that we’re being molded by our new information environment. Our mental adaptability, built into the deepest workings of our brains, is a keynote of intellectual history. But if there’s comfort in their reassurances, it’s of a very cold sort. Adaptation leaves us better suited to our circumstances, but qualitatively it’s a neutral process. What matters in the end is not our becoming but what we become. In the 1950s, Martin Heidegger observed that the looming “tide of technological revolution” could “so captivate, bewitch, dazzle, and beguile man that calculative thinking may someday come to be accepted and practiced as the only way of thinking.” Our ability to engage in “meditative thinking,” which he saw as the very essence of our humanity, might become a victim of headlong progress.42 The tumultuous advance of technology could, like the arrival of the locomotive at the Concord station, drown out the refined perceptions, thoughts, and emotions that arise only through contemplation and reflection. The “frenziedness of technology,” Heidegger wrote, threatens to “entrench itself everywhere.”43
It may be that we are now entering the final stage of that entrenchment. We are welcoming the frenziedness into our souls.
Human Elements
As I was finishing this book late in 2009, I stumbled on a small story tucked away in the press. Edexcel, the largest educational testing firm in England, had announced it was introducing “artificial intelligence-based, automated marking of exam essays.” The computerized grading system would “read and assess” the essays that British students write as part of a widely used test of language proficiency. A spokesman for Edexcel, which is a subsidiary of the media conglomerate Pearson, explained that the system “produced the accuracy of human markers while eliminating human elements such as tiredness and subjectivity,” according to a report in the Times Educational Supplement. A testing expert told the paper that the computerized evaluation of essays would be a mainstay of education in the future: “The uncertainty is ‘when’ not ‘if.’”1
How, I wondered, would the Edexcel software discern those rare students who break from the conventions of writing not because they’re incompetent but because they have a special spark of brilliance? I knew the answer: it wouldn’t. Computers, as Joseph Weizenbaum pointed out, follow rules; they don’t make judgments. In place of subjectivity, they give us formula. The story revealed just how prescient Weizenbaum had been when, decades ago, he warned that as we grow more accustomed to and dependent on our computers we will be tempted to entrust to them “tasks that demand wisdom.” And once we do that, there will be no turning back. The software will become indispensable to those tasks.
The seductions of technology are hard to resist, and in our age of instant information the benefits of speed and efficiency can seem unalloyed, their desirability beyond debate. But I continue to hold out hope that we won’t go gently into the future our computer engineers and software programmers are scripting for us. Even if we don’t heed Weizenbaum’s words, we owe it to ourselves to consider them, to be attentive to what we stand to lose. How sad it would be, particularly when it comes to the nurturing of our children’s minds, if we were to accept without question the idea that “human elements” are outmoded and dispensable.
The Edexcel story also stirred, once again, my memory of that scene at the end of 2001. It’s a scene that has haunted me ever since I first saw the film as a teenager back in the 1970s, in the midst of my analogue youth. What makes it so poignant, and so weird, is the computer’s emotional response to the disassembly of its mind: its despair as one circuit after another goes dark, its childlike pleading with the astronaut—“I can feel it. I can feel it. I’m afraid”—and its final reversion to what can only be called a state of innocence. HAL’s outpouring of feeling contrasts with the emotionlessness that characterizes the human figures in the film, who go about their business with an almost robotic efficiency. Their thoughts and actions feel scripted, as if they’re following the steps of an algorithm. In the world of 2001, people have become so machinelike that the most human character turns out to be a machine. That’s the essence of Kubrick’s dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.