
The Shallows


by Nicholas Carr


  Recent research into the neurological effects of deep reading has added a scientific gloss to Stevens’ lyric. In one fascinating study, conducted at Washington University’s Dynamic Cognition Laboratory and published in the journal Psychological Science in 2009, researchers used brain scans to examine what happens inside people’s heads as they read fiction. They found that “readers mentally simulate each new situation encountered in a narrative. Details about actions and sensation are captured from the text and integrated with personal knowledge from past experiences.” The brain regions that are activated often “mirror those involved when people perform, imagine, or observe similar real-world activities.” Deep reading, says the study’s lead researcher, Nicole Speer, “is by no means a passive exercise.”35 The reader becomes the book.

  The bond between book reader and book writer has always been a tightly symbiotic one, a means of intellectual and artistic cross-fertilization. The words of the writer act as a catalyst in the mind of the reader, inspiring new insights, associations, and perceptions, sometimes even epiphanies. And the very existence of the attentive, critical reader provides the spur for the writer’s work. It gives the author the confidence to explore new forms of expression, to blaze difficult and demanding paths of thought, to venture into uncharted and sometimes hazardous territory. “All great men have written proudly, nor cared to explain,” said Emerson. “They knew that the intelligent reader would come at last, and would thank them.”36

  Our rich literary tradition is unthinkable without the intimate exchanges that take place between reader and writer within the crucible of a book. After Gutenberg’s invention, the bounds of language expanded rapidly as writers, competing for the eyes of ever more sophisticated and demanding readers, strived to express ideas and emotions with superior clarity, elegance, and originality. The vocabulary of the English language, once limited to just a few thousand words, expanded to upwards of a million words as books proliferated.37 Many of the new words encapsulated abstract concepts that simply hadn’t existed before. Writers experimented with syntax and diction, opening new pathways of thought and imagination. Readers eagerly traveled down those pathways, becoming adept at following fluid, elaborate, and idiosyncratic prose and verse. The ideas that writers could express and readers could interpret became more complex and subtle, as arguments wound their way linearly across many pages of text. As language expanded, consciousness deepened.

  The deepening extended beyond the page. It’s no exaggeration to say that the writing and reading of books enhanced and refined people’s experience of life and of nature. “The remarkable virtuosity displayed by new literary artists who managed to counterfeit taste, touch, smell, or sound in mere words required a heightened awareness and closer observation of sensory experience that was passed on in turn to the reader,” writes Eisenstein. Like painters and composers, writers were able “to alter perception” in a way “that enriched rather than stunted sensuous response to external stimuli, expanded rather than contracted sympathetic response to the varieties of human experience.”38 The words in books didn’t just strengthen people’s ability to think abstractly; they enriched people’s experience of the physical world, the world outside the book.

  One of the most important lessons we’ve learned from the study of neuroplasticity is that the mental capacities, the very neural circuits, we develop for one purpose can be put to other uses as well. As our ancestors imbued their minds with the discipline to follow a line of argument or narrative through a succession of printed pages, they became more contemplative, reflective, and imaginative. “New thought came more readily to a brain that had already learned how to rearrange itself to read,” says Maryanne Wolf; “the increasingly sophisticated intellectual skills promoted by reading and writing added to our intellectual repertoire.”39 The quiet of deep reading became, as Stevens understood, “part of the mind.”

  Books weren’t the only reason that human consciousness was transformed during the years following the invention of the letterpress—many other technologies and social and demographic trends played important roles—but books were at the very center of the change. As the book came to be the primary means of exchanging knowledge and insight, its intellectual ethic became the foundation of our culture. The book made possible the delicately nuanced self-knowledge found in Wordsworth’s Prelude and Emerson’s essays and the equally subtle understanding of social and personal relations found in the novels of Austen, Flaubert, and Henry James. Even the great twentieth-century experiments in nonlinear narrative by writers like James Joyce and William Burroughs would have been unthinkable without the artists’ presumption of attentive, patient readers. When transcribed to a page, a stream of consciousness becomes literary and linear.

  The literary ethic was not only expressed in what we normally think of as literature. It became the ethic of the historian, illuminating works like Gibbon’s Decline and Fall of the Roman Empire. It became the ethic of the philosopher, informing the ideas of Descartes, Locke, Kant, and Nietzsche. And, crucially, it became the ethic of the scientist. One could argue that the single most influential literary work of the nineteenth century was Darwin’s On the Origin of Species. In the twentieth century, the literary ethic ran through such diverse books as Einstein’s Relativity, Keynes’s General Theory of Employment, Interest and Money, Thomas Kuhn’s Structure of Scientific Revolutions, and Rachel Carson’s Silent Spring. None of these momentous intellectual achievements would have been possible without the changes in reading and writing—and in perceiving and thinking—spurred by the efficient reproduction of long forms of writing on printed pages.

  LIKE OUR FOREBEARS during the later years of the Middle Ages, we find ourselves today between two technological worlds. After 550 years, the printing press and its products are being pushed from the center of our intellectual life to its edges. The shift began during the middle years of the twentieth century, when we started devoting more and more of our time and attention to the cheap, copious, and endlessly entertaining products of the first wave of electric and electronic media: radio, cinema, phonograph, television. But those technologies were always limited by their inability to transmit the written word. They could displace but not replace the book. Culture’s mainstream still ran through the printing press.

  Now the mainstream is being diverted, quickly and decisively, into a new channel. The electronic revolution is approaching its culmination as the computer—desktop, laptop, handheld—becomes our constant companion and the Internet becomes our medium of choice for storing, processing, and sharing information in all forms, including text. The new world will remain, of course, a literate world, packed with the familiar symbols of the alphabet. We cannot go back to the lost oral world, any more than we can turn the clock back to a time before the clock existed.40 “Writing and print and the computer,” writes Walter Ong, “are all ways of technologizing the word,” and once technologized, the word cannot be de-technologized.41 But the world of the screen, as we’re already coming to understand, is a very different place from the world of the page. A new intellectual ethic is taking hold. The pathways in our brains are once again being rerouted.

  A Digression on Lee de Forest and His Amazing Audion

  OUR MODERN MEDIA spring from a common source, an invention that is rarely mentioned today but that had as decisive a role in shaping society as the internal combustion engine or the incandescent lightbulb. The invention was called the Audion. It was the first electronic audio amplifier, and the man who created it was Lee de Forest.

  Even when judged by the high standards set by America’s mad-genius inventors, de Forest was an oddball. Nasty, ill-favored, and generally despised—in high school he was voted “homeliest boy” in his class—he was propelled by an enormous ego and an equally out-sized inferiority complex.1 When he wasn’t marrying or divorcing a wife, alienating a colleague, or leading a business to ruin, he was usually in court defending himself against charges of fraud or patent infringement—or pressing his own suit against one of his many enemies.

  De Forest grew up in Alabama, the son of a schoolmaster. After earning a doctorate in engineering from Yale in 1896, he spent a decade fiddling with the latest radio and telegraph technology, desperately seeking the breakthrough that would make his name and fortune. In 1906, his moment arrived. Without quite knowing what he was doing, he took a standard two-pole vacuum tube, which sent an electric current from one wire (the filament) to a second (the plate), and he added a third wire to it, turning the diode into a triode. He found that when he sent a small electric charge into the third wire—the grid—it boosted the strength of the current running between the filament and the plate. The device, he explained in a patent application, could be adapted “for amplifying feeble electric currents.”2
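  A toy numerical sketch of that amplifying principle, assuming an idealized linear triode with invented values for its transconductance and load resistor (not de Forest’s own figures), shows how a feeble signal on the grid becomes a far larger swing in output voltage:

    # Idealized triode: plate current = bias + transconductance * grid voltage.
    # The values below are illustrative, not historical.
    def plate_current(grid_voltage, transconductance=0.005, bias_current=0.010):
        return bias_current + transconductance * grid_voltage

    LOAD_RESISTOR = 10_000  # ohms

    for v_grid in (-0.1, 0.0, 0.1):  # a feeble input swinging by 0.2 volts
        v_out = plate_current(v_grid) * LOAD_RESISTOR
        print(f"grid {v_grid:+.1f} V -> output {v_out:.0f} V")
    # The 0.2-volt input swing appears as a 10-volt output swing: a gain of 50.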

  De Forest’s seemingly modest invention turned out to be a world changer. Because it could be used to amplify an electrical signal, it could also be used to amplify audio transmissions sent and received as radio waves. Up to then, radios had been of limited use because their signals faded so quickly. With the Audion to boost the signals, long-distance wireless transmissions became possible, setting the stage for radio broadcasting. The Audion became, as well, a critical component of the new telephone system, enabling people on opposite sides of the country, or the world, to hear each other talk.

  De Forest couldn’t have known it at the time, but he had inaugurated the age of electronics. Electric currents are, simply put, streams of electrons, and the Audion was the first device that allowed the intensity of those streams to be controlled with precision. As the twentieth century progressed, triode tubes came to form the technological heart of the modern communications, entertainment, and media industries. They could be found in radio transmitters and receivers, in hi-fi sets, in public address systems, in guitar amps. Arrays of tubes also served as the processing units and data storage systems in many early digital computers. The first mainframes often had tens of thousands of them. When, around 1950, vacuum tubes began to be replaced by smaller, cheaper, and more reliable solid-state transistors, the popularity of electronic appliances exploded. In the miniaturized form of the triode transistor, Lee de Forest’s invention became the workhorse of our information age.

  In the end, de Forest wasn’t quite sure whether to be pleased or dismayed by the world he had helped bring into being. In “Dawn of the Electronic Age,” a 1952 article he wrote for Popular Mechanics, he crowed about his creation of the Audion, referring to it as “this small acorn from which has sprung the gigantic oak that is today world-embracing.” At the same time, he lamented the “moral depravity” of commercial broadcast media. “A melancholy view of our national mental level is obtained from a survey of the moronic quality of the majority of today’s radio programs,” he wrote.

  Looking ahead to future applications of electronics, he grew even gloomier. He believed that “electron physiologists” would eventually be able to monitor and analyze “thought or brain waves,” allowing “joy and grief [to] be measured in definite, quantitative units.” Ultimately, he concluded, “a professor may be able to implant knowledge into the reluctant brains of his 22nd-century pupils. What terrifying political possibilities may be lurking there! Let us be thankful that such things are only for posterity, not for us.”3

  A Medium of the Most General Nature

  In the spring of 1954, as the first digital computers were moving into mass production, the brilliant British mathematician Alan Turing killed himself by eating a cyanide-laced apple—a piece of fruit that had been plucked at incalculable cost, the act begs us to conclude, from the tree of knowledge. Turing, who displayed throughout his short life what one biographer calls an “otherworldly innocence,”1 had during the Second World War played a crucial part in cracking the codes of Enigma, the elaborate typewriter-like cipher machine that the Nazis used to encipher and decipher military commands and other sensitive messages. The breaking of Enigma was an epic achievement that helped turn the tide of the war and ensure an Allied victory, though it didn’t save Turing from the humiliation of being arrested, a few years later, for having sex with another man.

  Today, Alan Turing is best remembered as the creator of an imaginary computing device that anticipated, and served as a blueprint for, the modern computer. He was just twenty-four, a recently elected fellow at Cambridge University, when he introduced what would come to be called the Turing machine in a 1936 paper entitled “On Computable Numbers, with an Application to the Entscheidungsproblem.” Turing’s intent in writing the paper was to show that there is no such thing as a perfect system of logic or mathematics—that no mechanical procedure can settle every mathematical question, and that some well-defined problems will always remain “uncomputable.” To help prove the point, he conjured up a simple digital calculator able to follow coded instructions and to read, write, and erase symbols. Such a computer, he demonstrated, could be programmed to perform the function of any other information-processing device. It was a “universal machine.”2
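  A minimal sketch of such a machine—a tape of symbols, a read/write head, and a table of coded instructions—can be written in a few lines of Python (the rule table here is a made-up example that merely flips the bits of a binary string, not anything from Turing’s paper):

    # Each rule maps (state, symbol) -> (symbol to write, head move, next state).
    def run(tape, rules, state="scan", blank=" "):
        tape, head = list(tape), 0
        while state != "halt":
            symbol = tape[head] if head < len(tape) else blank
            write, move, state = rules[(state, symbol)]
            if head < len(tape):
                tape[head] = write
            else:
                tape.append(write)
            head += 1 if move == "R" else -1
        return "".join(tape).strip()

    flip = {
        ("scan", "0"): ("1", "R", "scan"),   # erase a 0, write a 1, move right
        ("scan", "1"): ("0", "R", "scan"),   # erase a 1, write a 0, move right
        ("scan", " "): (" ", "R", "halt"),   # a blank square ends the computation
    }

    print(run("010011", flip))  # prints 101100

  Swap in a different rule table and the same mechanism performs a different computation; Turing’s deeper result was that one fixed machine, fed another machine’s rule table as data on its tape, can imitate any of them.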

  In a later paper, “Computing Machinery and Intelligence,” Turing explained how the existence of programmable computers “has the important consequence that, considerations of speed apart, it is unnecessary to design various new machines to do various computing processes. They can all be done with one digital computer, suitably programmed for each case.” What that means, he concluded, is that “all digital computers are in a sense equivalent.”3 Turing was not the first person to imagine how a programmable computer might work—more than a century earlier, another English mathematician, Charles Babbage, had drawn up plans for an “analytical engine” that would be “a machine of the most general nature”4—but Turing seems to have been the first to understand the digital computer’s limitless adaptability.

  What he could not have anticipated was the way his universal machine would, just a few decades after his death, become our universal medium. Because the different sorts of information distributed by traditional media—words, numbers, sounds, images, moving pictures—can all be translated into digital code, they can all be “computed.” Everything from Beethoven’s Ninth to a porn flick can be reduced to a string of ones and zeros and processed, transmitted, and displayed or played by a computer. Today, with the Internet, we’re seeing firsthand the extraordinary implications of Turing’s discovery. Constructed of millions of interconnected computers and data banks, the Net is a Turing machine of immeasurable power, and it is, true to form, subsuming most of our other intellectual technologies. It’s becoming our typewriter and our printing press, our map and our clock, our calculator and our telephone, our post office and our library, our radio and our TV. It’s even taking over the functions of other computers; more and more of our software programs run through the Internet—or “in the cloud,” as the Silicon Valley types say—rather than inside our home computers.
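  The reduction is easy to see in miniature. In the Python fragment below, a phrase of text and a few samples of a sine-wave tone (standing in, very loosely, for a bar of the Ninth) both end up as nothing more than strings of ones and zeros:

    import math

    # A phrase of text, encoded byte by byte into bits.
    text = "Ode to Joy"
    text_bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))
    print(text_bits[:24], "...")

    # Eight samples of a 440 Hz tone, quantized to 8-bit values, then to bits.
    samples = [round(127 + 120 * math.sin(2 * math.pi * 440 * t / 8000))
               for t in range(8)]
    audio_bits = "".join(f"{s:08b}" for s in samples)
    print(audio_bits)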

  As Turing pointed out, the limiting factor of his universal machine was speed. Even the earliest digital computer could, in theory, do any information-processing job, but a complicated task—rendering a photograph, say—would have taken it far too long, and cost far too much, to be practicable. A guy in a darkroom with trays of chemicals could do the work much more quickly and cheaply. Computing’s speed limits, though, turned out to be only temporary obstacles. Since the first mainframe was assembled in the 1940s, the speed of computers and data networks has increased at a breakneck pace, and the cost of processing and transmitting data has fallen equally rapidly. Over the past three decades, the number of instructions a computer chip can process every second has doubled about every three years, while the cost of processing those instructions has fallen by almost half every year. Overall, the price of a typical computing task has dropped by 99.9 percent since the 1960s.5 Network bandwidth has expanded at an equally fast clip, with Internet traffic doubling, on average, every year since the World Wide Web was invented.6 Computer applications that were unimaginable in Turing’s day are now routine.
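  The arithmetic behind those figures is worth pausing over. Compounded over the periods quoted above (the twenty-year span for Web traffic is an approximation), the growth looks like this:

    # Instructions per second doubling every three years, over three decades:
    print(f"speed: {2 ** (30 / 3):,.0f}x")             # about 1,024x

    # A 99.9 percent price drop: a task that cost $1,000 in the 1960s...
    print(f"cost now: ${1000 * (1 - 0.999):.0f}")      # about $1

    # Traffic doubling every year for roughly two decades of the Web:
    print(f"traffic: {2 ** 20:,}x")                    # about 1,048,576x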

  The way the Web has progressed as a medium replays, with the velocity of a time-lapse film, the entire history of modern media. Hundreds of years have been compressed into a couple of decades. The first information-processing machine that the Net replicated was Gutenberg’s press. Because text is fairly simple to translate into software code and to share over networks—it doesn’t require a lot of memory to store, a lot of bandwidth to transmit, or a lot of processing power to render on a screen—early Web sites were usually constructed entirely of typographical symbols. The very term we came to use to describe what we look at online—pages—emphasized the connection with printed documents. Publishers of magazines and newspapers, realizing that large quantities of text could, for the first time in history, be broadcast the way radio and TV programs had always been, were among the first businesses to open online outlets, posting articles, excerpts, and other pieces of writing on their sites. The ease with which words could be transmitted led, as well, to the widespread and extraordinarily rapid adoption of e-mail, rendering the personal letter obsolete.

  As the cost of memory and bandwidth fell, it became possible to incorporate photographs and drawings into Web pages. At first, the images, like the text they often accompanied, were in black and white, and their low resolution made them blurry. They looked like the first photos printed in newspapers a hundred years ago. But the capacity of the Net expanded to handle color pictures, and the size and quality of the images increased enormously. Soon, simple animations began to play online, mimicking the herky-jerky motions of the flip books, or kineographs, that were popular at the end of the nineteenth century.

 
