The Shallows

by Nicholas Carr


  That fact, almost “a tautology,” helps explain how our dependence on digital computers grew steadily and seemingly inexorably after the machines were invented at the end of the Second World War. “The computer was not a prerequisite to the survival of modern society in the post-war period and beyond,” Weizenbaum argued; “its enthusiastic, uncritical embrace by the most ‘progressive’ elements of American government, business, and industry made it a resource essential to society’s survival in the form that the computer itself had been instrumental in shaping.” He knew from his experience with time-sharing networks that the role of computers would expand beyond the automation of governmental and industrial processes. Computers would come to mediate the activities that define people’s everyday lives—how they learn, how they think, how they socialize. What the history of intellectual technologies shows us, he warned, is that “the introduction of computers into some complex human activities may constitute an irreversible commitment.” Our intellectual and social lives may, like our industrial routines, come to reflect the form that the computer imposes on them.13

  What makes us most human, Weizenbaum had come to believe, is what is least computable about us—the connections between our mind and our body, the experiences that shape our memory and our thinking, our capacity for emotion and empathy. The great danger we face as we become more intimately involved with our computers—as we come to experience more of our lives through the disembodied symbols flickering across our screens—is that we’ll begin to lose our humanness, to sacrifice the very qualities that separate us from machines. The only way to avoid that fate, Weizenbaum wrote, is to have the self-awareness and the courage to refuse to delegate to computers the most human of our mental activities and intellectual pursuits, particularly “tasks that demand wisdom.”14

  In addition to being a learned treatise on the workings of computers and software, Weizenbaum’s book was a cri de coeur, a computer programmer’s passionate and at times self-righteous examination of the limits of his profession. The book did not endear the author to his peers. After it came out, Weizenbaum was spurned as a heretic by leading computer scientists, particularly those pursuing artificial intelligence. John McCarthy, one of the organizers of the original Dartmouth AI conference, spoke for many technologists when, in a mocking review, he dismissed Computer Power and Human Reason as “an unreasonable book” and scolded Weizenbaum for unscientific “moralizing.”15 Outside the data-processing field, the book caused only a brief stir. It appeared just as the first personal computers were making the leap from hobbyists’ workbenches to mass production. The public, primed for the start of a buying spree that would put computers into most every office, home, and school in the land, was in no mood to entertain an apostate’s doubts.

  WHEN A CARPENTER picks up a hammer, the hammer becomes, so far as his brain is concerned, part of his hand. When a soldier raises a pair of binoculars to his face, his brain sees through a new set of eyes, adapting instantaneously to a very different field of view. The experiments on pliers-wielding monkeys revealed how readily the plastic primate brain can incorporate tools into its sensory maps, making the artificial feel natural. In the human brain, that capacity has advanced far beyond what’s seen in even our closest primate cousins. Our ability to meld with all manner of tools is one of the qualities that most distinguishes us as a species. In combination with our superior cognitive skills, it’s what makes us so good at using new technologies. It’s also what makes us so good at inventing them. Our brains can imagine the mechanics and the benefits of using a new device before that device even exists. The evolution of our extraordinary mental capacity to blur the boundary between the internal and the external, the body and the instrument, was, says University of Oregon neuroscientist Scott Frey, “no doubt a fundamental step in the development of technology.”16

  The tight bonds we form with our tools go both ways. Even as our technologies become extensions of ourselves, we become extensions of our technologies. When the carpenter takes his hammer into his hand, he can use that hand to do only what a hammer can do. The hand becomes an implement for pounding and pulling nails. When the soldier puts the binoculars to his eyes, he can see only what the lenses allow him to see. His field of view lengthens, but he becomes blind to what’s nearby. Nietzsche’s experience with his typewriter provides a particularly good illustration of the way technologies exert their influence on us. Not only did the philosopher come to imagine that his writing ball was “a thing like me”; he also sensed that he was becoming a thing like it, that his typewriter was shaping his thoughts. T. S. Eliot had a similar experience when he went from writing his poems and essays by hand to typing them. “Composing on the typewriter,” he wrote in a 1916 letter to Conrad Aiken, “I find that I am sloughing off all my long sentences which I used to dote upon. Short, staccato, like modern French prose. The typewriter makes for lucidity, but I am not sure that it encourages subtlety.”17

  Every tool imposes limitations even as it opens possibilities. The more we use it, the more we mold ourselves to its form and function. That explains why, after working with a word processor for a time, I began to lose my facility for writing and editing in longhand. My experience, I later learned, was not uncommon. “People who write on a computer are often at a loss when they have to write by hand,” Norman Doidge reports. Their ability “to translate thoughts into cursive writing” diminishes as they become used to tapping keys and watching letters appear as if by magic on a screen.18 Today, with kids using keyboards and keypads from a very young age and schools discontinuing penmanship lessons, there is mounting evidence that the ability to write in cursive script is disappearing altogether from our culture. It’s becoming a lost art. “We shape our tools,” observed the Jesuit priest and media scholar John Culkin in 1967, “and thereafter they shape us.”19

  Marshall McLuhan, who was Culkin’s intellectual mentor, elucidated the ways our technologies at once strengthen and sap us. In one of the most perceptive, if least remarked, passages in Understanding Media, McLuhan wrote that our tools end up “numbing” whatever part of our body they “amplify.”20 When we extend some part of ourselves artificially, we also distance ourselves from the amplified part and its natural functions. When the power loom was invented, weavers could manufacture far more cloth during the course of a workday than they’d been able to make by hand, but they sacrificed some of their manual dexterity, not to mention some of their “feel” for fabric. Their fingers, in McLuhan’s terms, became numb. Farmers, similarly, lost some of their feel for the soil when they began using mechanical harrows and plows. Today’s industrial farm worker, sitting in his air-conditioned cage atop a gargantuan tractor, rarely touches the soil at all—though in a single day he can till a field that his hoe-wielding forebear could not have turned in a month. When we’re behind the wheel of our car, we can go a far greater distance than we could cover on foot, but we lose the walker’s intimate connection to the land.

  As McLuhan acknowledged, he was far from the first to observe technology’s numbing effect. It’s an ancient idea, one that was given perhaps its most eloquent and ominous expression by the Old Testament psalmist:

  Their idols are silver and gold,
  The work of men’s hands.
  They have mouths, but they speak not;
  Eyes have they, but they see not;
  They have ears, but they hear not;
  Noses have they, but they smell not;
  They have hands, but they handle not;
  Feet have they, but they walk not;
  Neither speak they through their throat.
  They that make them are like unto them;
  So is every one that trusteth in them.

  The price we pay to assume technology’s power is alienation. The toll can be particularly high with our intellectual technologies. The tools of the mind amplify and in turn numb the most intimate, the most human, of our natural capacities—those for reason, perception, memory, emotion. The mechanical clock, for all the blessings it bestowed, removed us from the natural flow of time. When Lewis Mumford described how modern clocks helped “create the belief in an independent world of mathematically measurable sequences,” he also stressed that, as a consequence, clocks “disassociated time from human events.”21 Weizenbaum, building on Mumford’s point, argued that the conception of the world that emerged from timekeeping instruments “was and remains an impoverished version of the older one, for it rests on a rejection of those direct experiences that formed the basis for, and indeed constituted, the old reality.”22 In deciding when to eat, to work, to sleep, to wake up, we stopped listening to our senses and started obeying the clock. We became a lot more scientific, but we became a bit more mechanical as well.

  Even a tool as seemingly simple and benign as the map had a numbing effect. Our ancestors’ navigational skills were amplified enormously by the cartographer’s art. For the first time, people could confidently traverse lands and seas they’d never seen before—an advance that spurred a history-making expansion of exploration, trade, and warfare. But their native ability to comprehend a landscape, to create a richly detailed mental map of their surroundings, weakened. The map’s abstract, two-dimensional representation of space interposed itself between the map reader and his perception of the actual land. As we can infer from recent studies of the brain, the loss must have had a physical component. When people came to rely on maps rather than their own bearings, they would have experienced a diminishment of the area of their hippocampus devoted to spatial representation. The numbing would have occurred deep in their neurons.

  We’re likely going through another such adaptation today as we come to depend on computerized GPS devices to shepherd us around. Eleanor Maguire, the neuroscientist who led the study of the brains of London taxi drivers, worries that satellite navigation could have “a big effect” on cabbies’ neurons. “We very much hope they don’t start using it,” she says, speaking on behalf of her team of researchers. “We believe [the hippocampal] area of the brain increased in grey matter volume because of the huge amount of data [the drivers] have to memorize. If they all start using GPS, that knowledge base will be less and possibly affect the brain changes we are seeing.”23 The cabbies would be freed from the hard work of learning the city’s roads, but they would also lose the distinctive mental benefits of that training. Their brains would become less interesting.

  In explaining how technologies numb the very faculties they amplify, to the point even of “autoamputation,” McLuhan was not trying to romanticize society as it existed before the invention of maps or clocks or power looms. Alienation, he understood, is an inevitable by-product of the use of technology. Whenever we use a tool to exert greater control over the outside world, we change our relationship with that world. Control can be wielded only from a psychological distance. In some cases, alienation is precisely what gives a tool its value. We build houses and sew Gore-Tex jackets because we want to be alienated from the wind and the rain and the cold. We build public sewers because we want to maintain a healthy distance from our own filth. Nature isn’t our enemy, but neither is it our friend. McLuhan’s point was that an honest appraisal of any new technology, or of progress in general, requires a sensitivity to what’s lost as well as what’s gained. We shouldn’t allow the glories of technology to blind our inner watchdog to the possibility that we’ve numbed an essential part of our self.

  AS A UNIVERSAL medium, a supremely versatile extension of our senses, our cognition, and our memory, the networked computer serves as a particularly powerful neural amplifier. Its numbing effects are equally strong. Norman Doidge explains that “the computer extends the processing capabilities of our central nervous system” and in the process “also alters it.” Electronic media “are so effective at altering the nervous system because they both work in similar ways and are basically compatible and easily linked.” Thanks to its plasticity, the nervous system “can take advantage of this compatibility and merge with the electronic media, making a single, larger system.”24

  There’s another, even deeper reason why our nervous systems are so quick to “merge” with our computers. Evolution has imbued our brains with a powerful social instinct, which, as Jason Mitchell, the head of Harvard’s Social Cognition and Affective Neuroscience Laboratory, says, entails “a set of processes for inferring what those around us are thinking and feeling.” Recent neuroimaging studies indicate that three highly active brain regions—one in the prefrontal cortex, one in the parietal cortex, and one at the intersection of the parietal and temporal cortices—are “specifically dedicated to the task of understanding the goings-on of other people’s minds.” Our innate ability for “mind reading,” says Mitchell, has played an important role in the success of our species, allowing us to “coordinate large groups of people to achieve goals that individuals could not.”25 As we’ve entered the computer age, however, our talent for connecting with other minds has had an unintended consequence. The “chronic overactivity of those brain regions implicated in social thought” can, writes Mitchell, lead us to perceive minds where no minds exist, even in “inanimate objects.” There’s growing evidence, moreover, that our brains naturally mimic the states of the other minds we interact with, whether those minds are real or imagined. Such neural “mirroring” helps explain why we’re so quick to attribute human characteristics to our computers and computer characteristics to ourselves—why we hear a human voice when ELIZA speaks.

  Our willingness, even eagerness, to enter into what Doidge calls “a single, larger system” with our data-processing devices is an outgrowth not only of the characteristics of the digital computer as an informational medium but of the characteristics of our socially adapted brains. While this cybernetic blurring of mind and machine may allow us to carry out certain cognitive tasks far more efficiently, it poses a threat to our integrity as human beings. Even as the larger system into which our minds so readily meld is lending us its powers, it is also imposing on us its limitations. To put a new spin on Culkin’s phrase, we program our computers and thereafter they program us.

  Even at a practical level, the effects are not always as beneficial as we want to believe. As the many studies of hypertext and multimedia show, our ability to learn can be severely compromised when our brains become overloaded with diverse stimuli online. More information can mean less knowledge. But what about the effects of the many software tools we use? How do all the ingenious applications we depend on to find and evaluate information, form and communicate our thoughts, and carry out other cognitive chores influence what and how we learn? In 2003, a Dutch clinical psychologist named Christof van Nimwegen began a fascinating study of computer-aided learning that a BBC writer would later call “one of the most interesting examinations of current computer use and the potential downsides of our increasing reliance on screen-based interaction with information systems.”26 Van Nimwegen had two groups of volunteers work through a tricky logic puzzle on a computer. The puzzle involved transferring colored balls between two boxes in accordance with a set of rules governing which balls could be moved at which time. One of the groups used software that had been designed to be as helpful as possible. It offered on-screen assistance during the course of solving the puzzle, providing visual cues, for instance, to highlight permitted moves. The other group used a bare-bones program, which provided no hints or other guidance.

  In the early stages of solving the puzzle, the group using the helpful software made correct moves more quickly than the other group, as would be expected. But as the test proceeded, the proficiency of the members of the group using the bare-bones software increased more rapidly. In the end, those using the unhelpful program were able to solve the puzzle more quickly and with fewer wrong moves. They also reached fewer impasses—states in which no further moves were possible—than did the people using the helpful software. The findings indicated, as van Nimwegen reported, that those using the unhelpful software were better able to plan ahead and plot strategy, while those using the helpful software tended to rely on simple trial and error. Often, in fact, those with the helpful software were found “to aimlessly click around” as they tried to crack the puzzle.27

  Eight months after the experiment, van Nimwegen reassembled the groups and had them again work on the colored-balls puzzle as well as a variation on it. He found that the people who had originally used the unhelpful software were able to solve the puzzles nearly twice as fast as those who had used the helpful software. In another test, he had a different set of volunteers use ordinary calendar software to schedule a complicated series of meetings involving overlapping groups of people. Once again, one group used helpful software that provided lots of on-screen cues, and another group used unhelpful software. The results were the same. The subjects using the unhelpful program “solved the problems with fewer superfluous moves [and] in a more straightforward manner,” and they demonstrated greater “plan-based behavior” and “smarter solution paths.”28

  In his report on the research, van Nimwegen emphasized that he controlled for variations in the participants’ fundamental cognitive skills. It was the differences in the design of the software that explained the differences in performance and learning. The subjects using the bare-bones software consistently demonstrated “more focus, more direct and economical solutions, better strategies, and better imprinting of knowledge.” The more that people depended on explicit guidance from software programs, the less engaged they were in the task and the less they ended up learning. The findings indicate, van Nimwegen concluded, that as we “externalize” problem solving and other cognitive chores to our computers, we reduce our brain’s ability “to build stable knowledge structures”—schemas, in other words—that can later “be applied in new situations.”29 A polemicist might put it more pointedly: The brighter the software, the dimmer the user.
