The Shallows

by Nicholas Carr


  “I PROJECT THE history of the future,” wrote Walt Whitman in one of the opening verses of Leaves of Grass. It has long been known that the culture a person is brought up in influences the content and character of that person’s memory. People born into societies that celebrate individual achievement, like the United States, tend, for example, to be able to remember events from earlier in their lives than do people raised in societies that stress communal achievement, such as Korea.40 Psychologists and anthropologists are now discovering that, as Whitman intuited, the influence goes both ways. Personal memory shapes and sustains the “collective memory” that underpins culture. What’s stored in the individual mind—events, facts, concepts, skills—is more than the “representation of distinctive personhood” that constitutes the self, writes the anthropologist Pascal Boyer. It’s also “the crux of cultural transmission.”41 Each of us carries and projects the history of the future. Culture is sustained in our synapses.

  The offloading of memory to external data banks doesn’t just threaten the depth and distinctiveness of the self. It threatens the depth and distinctiveness of the culture we all share. In a recent essay, the playwright Richard Foreman eloquently described what’s at stake. “I come from a tradition of Western culture,” he wrote, “in which the ideal (my ideal) was the complex, dense and ‘cathedral-like’ structure of the highly educated and articulate personality—a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West.” But now, he continued, “I see within us all (myself included) the replacement of complex inner density with a new kind of self—evolving under the pressure of information overload and the technology of the ‘instantly available.’” As we are drained of our “inner repertory of dense cultural inheritance,” Foreman concluded, we risk turning into “pancake people—spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.”42

  Culture is more than the aggregate of what Google describes as “the world’s information.” It’s more than what can be reduced to binary code and uploaded onto the Net. To remain vital, culture must be renewed in the minds of the members of every generation. Outsource memory, and culture withers.

  A Digression on the Writing of This Book

  I KNOW WHAT you’re thinking. The very existence of this book would seem to contradict its thesis. If I’m finding it so hard to concentrate, to stay focused on a line of thought, how in the world did I manage to write a few hundred pages of at least semicoherent prose?

  It wasn’t easy. When I began writing The Shallows, toward the end of 2007, I struggled in vain to keep my mind fixed on the task. The Net provided, as always, a bounty of useful information and research tools, but its constant interruptions scattered my thoughts and words. I tended to write in disconnected spurts, the same way I wrote when blogging. It was clear that big changes were in order. In the summer of the following year, I moved with my wife from a highly connected suburb of Boston to the mountains of Colorado. There was no cell phone service at our new home, and the Internet arrived through a relatively poky DSL connection. I canceled my Twitter account, put my Facebook membership on hiatus, and mothballed my blog. I shut down my RSS reader and curtailed my skyping and instant messaging. Most important, I throttled back my e-mail application. It had long been set to check for new messages every minute. I reset it to check only once an hour, and when that still created too much of a distraction, I began keeping the program closed much of the day.

  The dismantling of my online life was far from painless. For months, my synapses howled for their Net fix. I found myself sneaking clicks on the “check for new mail” button. Occasionally, I’d go on a daylong Web binge. But in time the cravings subsided, and I found myself able to type at my keyboard for hours on end or to read through a dense academic paper without my mind wandering. Some old, disused neural circuits were springing back to life, it seemed, and some of the newer, Web-wired ones were quieting down. I started to feel generally calmer and more in control of my thoughts—less like a lab rat pressing a lever and more like, well, a human being. My brain could breathe again.

  My case, I realize, isn’t typical. Being self-employed and of a fairly solitary nature, I have the option of disconnecting. Most people today don’t. The Web is so essential to their work and social lives that even if they wanted to escape the network they could not. In a recent essay, the young novelist Benjamin Kunkel mulled over the Net’s expanding hold on his waking hours: “The internet, as its proponents rightly remind us, makes for variety and convenience; it does not force anything on you. Only it turns out it doesn’t feel like that at all. We don’t feel as if we had freely chosen our online practices. We feel instead that they are habits we have helplessly picked up or that history has enforced, that we are not distributing our attention as we intend or even like to.”1

  The question, really, isn’t whether people can still read or write the occasional book. Of course they can. When we begin using a new intellectual technology, we don’t immediately switch from one mental mode to another. The brain isn’t binary. An intellectual technology exerts its influence by shifting the emphasis of our thought. Although even the initial users of the technology can often sense the changes in their patterns of attention, cognition, and memory as their brains adapt to the new medium, the most profound shifts play out more slowly, over several generations, as the technology becomes ever more embedded in work, leisure, and education—in all the norms and practices that define a society and its culture. How is the way we read changing? How is the way we write changing? How is the way we think changing? Those are the questions we should be asking, both of ourselves and of our children.

  As for me, I’m already backsliding. With the end of this book in sight, I’ve gone back to keeping my e-mail running all the time and I’ve jacked into my RSS feed again. I’ve been playing around with a few new social-networking services and have been posting some new entries to my blog. I recently broke down and bought a Blu-ray player with a built-in Wi-Fi connection. It lets me stream music from Pandora, movies from Netflix, and videos from YouTube through my television and stereo. I have to confess: it’s cool. I’m not sure I could live without it.

  A Thing Like Me

  It was one of the odder episodes in the history of computer science, yet also one of the more telling. Over the course of a few months in 1964 and 1965, Joseph Weizenbaum, a forty-one-year-old computer scientist at the Massachusetts Institute of Technology, wrote a software application for parsing written language, which he programmed to run on the university’s new time-sharing system. A student, sitting at one of the system’s terminals, would type a sentence into the computer, and Weizenbaum’s program, following a set of simple rules about English grammar, would identify a salient word or phrase in the sentence and analyze the syntactical context in which it was used. The program would then, following another set of rules, transform the sentence into a new sentence that had the appearance of being a response to the original. The computer-generated sentence would appear almost instantly on the student’s terminal, giving the illusion of a conversation.

  In a January 1966 paper introducing his program, Weizenbaum provided an example of how it worked. If a person typed the sentence “I am very unhappy these days,” the computer would need only know that the phrase “I am” typically comes before a description of the speaker’s current situation or state of mind. The computer could then recast the sentence into the reply “How long have you been very unhappy these days?” The program worked, Weizenbaum explained, by first applying “a kind of template to the original sentence, one part of which matched the two words ‘I am’ and the remainder [of which] isolated the words ‘very unhappy these days.’” It then used an algorithmic “reassembly kit,” tailored to the template, that included a rule specifying that “any sentence of the form ‘I am BLAH’” should be “transformed to ‘How long have you been BLAH,’ independently of the meaning of BLAH.”1
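
  To make the template-and-reassembly mechanism concrete, here is a minimal sketch in Python of the kind of rule Weizenbaum describes. The rule table, function name, and stock reply are illustrative assumptions for this sketch, not Weizenbaum’s actual code, which ran on MIT’s time-sharing system in the mid-1960s.

```python
import re

# Illustrative ELIZA-style "template and reassembly" rules (assumed for this
# sketch; the real program's rule set was far larger).
RULES = [
    # "I am BLAH" -> "How long have you been BLAH?"  (the rule Weizenbaum cites)
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     "How long have you been {0}?"),
    # "... my mother/father/family ..." -> invite the speaker to say more
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     "Tell me more about your {0}."),
]

DEFAULT_RESPONSE = "Please go on."  # stock reply when no template matches


def respond(sentence: str) -> str:
    """Return the first matching template's reassembly, filled with the
    matched fragment, without any regard to what the words mean."""
    sentence = sentence.strip().rstrip(".!?")
    for pattern, reassembly in RULES:
        match = pattern.search(sentence)
        if match:
            return reassembly.format(*match.groups())
    return DEFAULT_RESPONSE


print(respond("I am very unhappy these days"))
# -> How long have you been very unhappy these days?
print(respond("My mother takes care of me."))
# -> Tell me more about your mother.
```

  Even in this toy form, the point holds: the reassembly slot is filled with whatever fragment the template isolated, independently of its meaning.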

  Weizenbaum’s application was a product of its time. During the 1950s and ’60s, the enthusiasm for computers, software programming, and artificial intelligence gave rise not only to the idea that the human brain is a type of computer but to the sense that human language is the output of one of the algorithms running inside that computer. As David Golumbia explains in The Cultural Logic of Computation, a new breed of “computational linguists,” led by Weizenbaum’s MIT colleague Noam Chomsky, posited that the form of the “natural language” that people speak and write reflects “the operation of the computer inside the human mind that performs all linguistic operations.”2 In a 1958 article in the journal Information and Control, Chomsky had written that “one possible method for describing a grammar is in terms of a program for a universal Turing machine.”3 What made the computationalist theory so compelling was that it came wrapped in a seductive “penumbra of technological newness,” writes Golumbia. It offered a “mechanic clarity,” replacing language’s human “messiness” with “a clean internal computer.”4 By reverse-engineering the way people talk, you could discover language’s underlying code, which you could then replicate as software.

  Weizenbaum named his program ELIZA, after Eliza Doolittle, the cockney flower girl who, in George Bernard Shaw’s Pygmalion, learns to speak proper English under the tutelage of the haughty phonetics professor Henry Higgins. To make the simulated conversations a little more interesting, Weizenbaum also gave his artificial interlocutor a persona—that of a Rogerian psychotherapist. The choice, as he explained in his paper, was a pragmatic one. Using a technique developed by the psychologist Carl Rogers in the 1940s, Rogerian therapists pretended, in their conversations with patients, to have no understanding of the world. For the most part, they simply parroted their patients’ statements back to them in the form of banal, open-ended questions or comments. Knowing that the naïveté was a pose, the patients were free to attribute to their therapists “all sorts of background knowledge, insights and reasoning ability.” The Rogerian persona, Weizenbaum recognized, had “a crucial psychological utility” for ELIZA: it suggested that the program’s vacuity masked some sort of real intelligence.

  Weizenbaum provided a transcript of part of an exchange between ELIZA and a young woman who had been an early tester of the program (the woman’s statements alternate with the machine’s responses, which begin with “In what way?”):

  Men are all alike.

  In what way?

  They’re always bugging us about something or other.

  Can you think of a specific example?

  Well, my boyfriend made me come here.

  Your boyfriend made you come here.

  He says I’m depressed much of the time.

  I am sorry to hear you are depressed.

  It’s true. I am unhappy.

  Do you think coming here will help you to be not unhappy?

  I need some help, that much seems certain.

  What would it mean to you if you got some help?

  Perhaps I could learn to get along with my mother.

  Tell me more about your family.

  My mother takes care of me.

  Who else in your family takes care of you?

  My father.

  Your father.

  You are like my father in some ways.

  What resemblance do you see?

  You are not very aggressive but I think you don’t want me to notice that.

  What makes you think I am not very aggressive?

  You don’t argue with me.

  Why do you think I don’t argue with you?

  You are afraid of me.

  Does it please you to believe I am afraid of you?

  My father is afraid of everybody.5

  In commenting on his creation, which he acknowledged to be a modest and even slightly silly contribution to the fledgling field of natural-language processing, Weizenbaum observed how easy it is for computer programmers to make machines “behave in wondrous ways, often sufficient to dazzle even the most experienced observer.” But as soon as a program’s “inner workings are explained in language sufficiently plain to induce understanding,” he continued, “its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible. The observer says to himself ‘I could have written that.’” The program goes “from the shelf marked ‘intelligent’ to that reserved for curios.”6

  But Weizenbaum, like Henry Higgins, was soon to have his equilibrium disturbed. ELIZA quickly found fame on the MIT campus, becoming a mainstay of lectures and presentations about computing and time-sharing. It was among the first software programs able to demonstrate the power and speed of computers in a way that laymen could easily grasp. You didn’t need a background in mathematics, much less computer science, to chat with ELIZA. Copies of the program proliferated at other schools as well. Then the press took notice, and ELIZA became, as Weizenbaum later put it, “a national plaything.”7 While he was surprised by the public’s interest in his program, what shocked him was how quickly and deeply people using the software “became emotionally involved with the computer,” talking to it as if it were an actual person. They “would, after conversing with it for a time, insist, in spite of my explanations, that the machine really understood them.”8 Even his secretary, who had watched him write the code for ELIZA “and surely knew it to be merely a computer program,” was seduced. After a few moments using the software at a terminal in Weizenbaum’s office, she asked the professor to leave the room because she was embarrassed by the intimacy of the conversation. “What I had not realized,” said Weizenbaum, “is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”9

  Things were about to get stranger still. Distinguished psychiatrists and scientists began to suggest, with considerable enthusiasm, that the program could play a valuable role in actually treating the ill and the disturbed. In an article in the Journal of Nervous and Mental Disease, three prominent research psychiatrists wrote that ELIZA, with a bit of tweaking, could be “a therapeutic tool which can be made widely available to mental hospitals and psychiatric centers suffering a shortage of therapists.” Thanks to the “time-sharing capabilities of modern and future computers, several hundred patients an hour could be handled by a computer system designed for this purpose.” Writing in Natural History, the prominent astrophysicist Carl Sagan expressed equal excitement about ELIZA’s potential. He foresaw the development of “a network of computer therapeutic terminals, something like arrays of large telephone booths, in which, for a few dollars a session, we would be able to talk with an attentive, tested, and largely non-directive psychotherapist.”10

  In his 1950 paper “Computing Machinery and Intelligence,” Alan Turing had grappled with the question “Can machines think?” He proposed a simple experiment for judging whether a computer could be said to be intelligent, which he called “the imitation game” but which soon came to be known as the Turing test. It involved having a person, the “interrogator,” sit at a computer terminal in an otherwise empty room and engage in a typed conversation with two other people, one an actual person and the other a computer pretending to be a person. If the interrogator was unable to distinguish the computer from the real person, then the computer, argued Turing, could be considered intelligent. The ability to conjure a plausible self out of words would signal the arrival of a true thinking machine.

  To converse with ELIZA was to engage in a variation on the Turing test. But, as Weizenbaum was astonished to discover, the people who “talked” with his program had little interest in making rational, objective judgments about the identity of ELIZA. They wanted to believe that ELIZA was a thinking machine. They wanted to imbue ELIZA with human qualities—even when they were well aware that ELIZA was nothing more than a computer program following simple and rather obvious instructions. The Turing test, it turned out, was as much a test of the way human beings think as of the way machines think. In their Journal of Nervous and Mental Disease article, the three psychiatrists hadn’t just suggested that ELIZA could serve as a substitute for a real therapist. They went on to argue, in circular fashion, that a psychotherapist was in essence a kind of computer: “A human therapist can be viewed as an information processor and decision maker with a set of decision rules which are closely linked to short-range and long-range goals.”11 In simulating a human being, however clumsily, ELIZA encouraged human beings to think of themselves as simulations of computers.

  The reaction to the software unnerved Weizenbaum. It planted in his mind a question he had never before asked himself but that would preoccupy him for many years: “What is it about the computer that has brought the view of man as a machine to a new level of plausibility?”12 In 1976, a decade after ELIZA’s debut, he provided an answer in his book Computer Power and Human Reason. To understand the effects of a computer, he argued, you had to see the machine in the context of mankind’s past intellectual technologies, the long succession of tools that, like the map and the clock, transformed nature and altered “man’s perception of reality.” Such technologies become part of “the very stuff out of which man builds his world.” Once adopted, they can never be abandoned, at least not without plunging society into “great confusion and possibly utter chaos.” An intellectual technology, he wrote, “becomes an indispensable component of any structure once it is so thoroughly integrated with the structure, so enmeshed in various vital substructures, that it can no longer be factored out without fatally impairing the whole structure.”

 
