
The Digital Divide


by Mark Bauerlein


  Under those circumstances, the Internet arrived as an incalculable blessing. We should never forget that. It has allowed isolated people to communicate with one another and marginalized people to find one another. The busy parent can stay in touch with far-flung friends. The gay teenager no longer has to feel like a freak. But as the Internet’s dimensionality has grown, it has quickly become too much of a good thing. Ten years ago we were writing e-mail messages on desktop computers and transmitting them over dial-up connections. Now we are sending text messages on our cell phones, posting pictures on our Facebook pages, and following complete strangers on Twitter. A constant stream of mediated contact, virtual, notional, or simulated, keeps us wired in to the electronic hive—though contact, or at least two-way contact, seems increasingly beside the point. The goal now, it seems, is simply to become known, to turn oneself into a sort of miniature celebrity. How many friends do I have on Facebook? How many people are reading my blog? How many Google hits does my name generate? Visibility secures our self-esteem, becoming a substitute, twice removed, for genuine connection. Not long ago, it was easy to feel lonely. Now, it is impossible to be alone.

  As a result, we are losing both sides of the Romantic dialectic. What does friendship mean when you have 532 “friends”? How does it enhance my sense of closeness when my Facebook News Feed tells me that Sally Smith (whom I haven’t seen since high school, and wasn’t all that friendly with even then) “is making coffee and staring off into space”? My students told me they have little time for intimacy. And, of course, they have no time at all for solitude.

  But at least friendship, if not intimacy, is still something they want. As jarring as the new dispensation may be for people in their thirties and forties, the real problem is that it has become completely natural for people in their teens and twenties. Young people today seem to have no desire for solitude, have never heard of it, can’t imagine why it would be worth having. In fact, their use of technology—or to be fair, our use of technology—seems to involve a constant effort to stave off the possibility of solitude, a continuous attempt, as we sit alone at our computers, to maintain the imaginative presence of others. As long ago as 1952, Trilling wrote about “the modern fear of being cut off from the social group even for a moment.” Now we have equipped ourselves with the means to prevent that fear from ever being realized. Which does not mean that we have put it to rest. Quite the contrary. Remember my student, who couldn’t even write a paper by herself. The more we keep aloneness at bay, the less are we able to deal with it and the more terrifying it gets.

  There is an analogy, it seems to me, with the previous generation’s experience of boredom. The two emotions, loneliness and boredom, are closely allied. They are also both characteristically modern. The Oxford English Dictionary’s earliest citations of either word, at least in the contemporary sense, date from the nineteenth century. Suburbanization, by eliminating the stimulation as well as the sociability of urban or traditional village life, exacerbated the tendency to both. But the great age of boredom, I believe, came in with television, precisely because television was designed to palliate that feeling. Boredom is not a necessary consequence of having nothing to do; it is only the negative experience of that state. Television, by obviating the need to learn how to make use of one’s lack of occupation, precludes one from ever discovering how to enjoy it. In fact, it renders that condition fearsome, its prospect intolerable. You are terrified of being bored—so you turn on the television.

  I speak from experience. I grew up in the ’60s and ’70s, the age of television. I was trained to be bored; boredom was cultivated within me like a precious crop. (It has been said that consumer society wants to condition us to feel bored, since boredom creates a market for stimulation.) It took me years to discover—and my nervous system will never fully adjust to this idea; I still have to fight against boredom, am permanently damaged in this respect—that having nothing to do doesn’t have to be a bad thing. The alternative to boredom is what Whitman called idleness: a passive receptivity to the world.

  So it is with the current generation’s experience of being alone. That is precisely the recognition implicit in the idea of solitude, which is to loneliness what idleness is to boredom. Loneliness is not the absence of company; it is grief over that absence. The lost sheep is lonely; the shepherd is not lonely. But the Internet is as powerful a machine for the production of loneliness as television is for the manufacture of boredom. If six hours of television a day creates the aptitude for boredom, the inability to sit still, a hundred text messages a day creates the aptitude for loneliness, the inability to be by yourself. Some degree of boredom and loneliness is to be expected, especially among young people, given the way our human environment has been attenuated. But technology amplifies those tendencies. You could call your schoolmates when I was a teenager, but you couldn’t call them a hundred times a day. You could get together with your friends when I was in college, but you couldn’t always get together with them when you wanted to, for the simple reason that you couldn’t always find them. If boredom is the great emotion of the TV generation, loneliness is the great emotion of the Web generation. We lost the ability to be still, our capacity for idleness. They have lost the ability to be alone, their capacity for solitude.

  And losing solitude, what have they lost? First, the propensity for introspection, that examination of the self that the Puritans, and the Romantics, and the modernists (and Socrates, for that matter) placed at the center of spiritual life—of wisdom, of conduct. Thoreau called it fishing “in the Walden Pond of [our] own natures,” “bait[ing our] hooks with darkness.” Lost, too, is the related propensity for sustained reading. The Internet brought text back into a televisual world, but it brought it back on terms dictated by that world—that is, by its remapping of our attention spans. Reading now means skipping and skimming; five minutes on the same Web page is considered an eternity. This is not reading as Marilynne Robinson described it: the encounter with a second self in the silence of mental solitude.

  But we no longer believe in the solitary mind. If the Romantics had Hume and the modernists had Freud, the current psychological model—and this should come as no surprise—is that of the networked or social mind. Evolutionary psychology tells us that our brains developed to interpret complex social signals. According to David Brooks, that reliable index of the social-scientific zeitgeist, cognitive scientists tell us that “our decision-making is powerfully influenced by social context”; neuroscientists, that we have “permeable minds” that function in part through a process of “deep imitation”; psychologists, that “we are organized by our attachments”; sociologists, that our behavior is affected by “the power of social networks.” The ultimate implication is that there is no mental space that is not social (contemporary social science dovetailing here with postmodern critical theory). One of the most striking things about the way young people relate to one another today is that they no longer seem to believe in the existence of Thoreau’s “darkness.”

  The MySpace page, with its shrieking typography and clamorous imagery, has replaced the journal and the letter as a way of creating and communicating one’s sense of self. The suggestion is not only that such communication is to be made to the world at large rather than to oneself or one’s intimates, or graphically rather than verbally, or performatively rather than narratively or analytically, but also that it can be made completely. Today’s young people seem to feel that they can make themselves fully known to one another. They seem to lack a sense of their own depths, and of the value of keeping them hidden.

  If they didn’t, they would understand that solitude enables us to secure the integrity of the self as well as to explore it. Few have shown this more beautifully than Woolf. In the middle of Mrs. Dalloway, between her navigation of the streets and her orchestration of the party, between the urban jostle and the social bustle, Clarissa goes up, “like a nun withdrawing,” to her attic room. Like a nun, she returns to a state that she herself thinks of as a kind of virginity. This does not mean she’s a prude. Virginity is classically the outward sign of spiritual inviolability, of a self untouched by the world, a soul that has preserved its integrity by refusing to descend into the chaos and self-division of sexual and social relations. It is the mark of the saint and the monk, of Hippolytus and Antigone and Joan of Arc. Solitude is both the social image of that state and the means by which we can approximate it. And the supreme image in Mrs. Dalloway of the dignity of solitude itself is the old woman whom Clarissa catches sight of through her window. “Here was one room,” she thinks, “there another.” We are not merely social beings. We are each also separate, each solitary, each alone in our own room, each miraculously our unique selves and mysteriously enclosed in that selfhood.

  To remember this, to hold oneself apart from society, is to begin to think one’s way beyond it. Solitude, Emerson said, “is to genius the stern friend.” “He who should inspire and lead his race must be defended from traveling with the souls of other men, from living, breathing, reading, and writing in the daily, time-worn yoke of their opinions.” One must protect oneself from the momentum of intellectual and moral consensus—especially, Emerson added, during youth. “God is alone,” Thoreau said, “but the Devil, he is far from being alone; he sees a great deal of company; he is legion.” The university was to be praised, Emerson believed, if only because it provided its charges with “a separate chamber and fire”—the physical space of solitude. Today, of course, universities do everything they can to keep their students from being alone, lest they perpetrate self-destructive acts, and also, perhaps, unfashionable thoughts. But no real excellence, personal or social, artistic, philosophical, scientific or moral, can arise without solitude. “The saint and poet seek privacy,” Emerson said, “to ends the most public and universal.” We are back to the seer, seeking signposts for the future in splendid isolation.

  Solitude isn’t easy, and isn’t for everyone. It has undoubtedly never been the province of more than a few. “I believe,” Thoreau said, “that men are generally still a little afraid of the dark.” Teresa and Tiresias will always be the exceptions, or to speak in more relevant terms, the young people—and they still exist—who prefer to loaf and invite their soul, who step to the beat of a different drummer. But if solitude disappears as a social value and social idea, will even the exceptions remain possible? Still, one is powerless to reverse the drift of the culture. One can only save oneself—and whatever else happens, one can still always do that. But it takes a willingness to be unpopular.

  The last thing to say about solitude is that it isn’t very polite. Thoreau knew that the “doubleness” that solitude cultivates, the ability to stand back and observe life dispassionately, is apt to make us a little unpleasant to our fellows, to say nothing of the offense implicit in avoiding their company. But then, he didn’t worry overmuch about being genial. He didn’t even like having to talk to people three times a day, at meals; one can only imagine what he would have made of text messaging. We, however, have made of geniality—the weak smile, the polite interest, the fake invitation—a cardinal virtue. Friendship may be slipping from our grasp, but our friendliness is universal. Not for nothing does “gregarious” mean “part of the herd.” But Thoreau understood that securing one’s self-possession was worth a few wounded feelings. He may have put his neighbors off, but at least he was sure of himself. Those who would find solitude must not be afraid to stand alone.

  <Clay Shirky>

  means

  Excerpted from Cognitive Surplus (pp. 42–64).

  CLAY SHIRKY is an adjunct professor in NYU’s graduate Interactive Telecommunications Program (ITP). Prior to his appointment at NYU, Shirky was a partner at the investment firm The Accelerator Group from 1999 to 2001. He has written regular columns for Business 2.0 and FEED, and his writings have appeared in the New York Times, the Wall Street Journal, Harvard Business Review, Wired, Release 1.0, Computerworld, and IEEE Computer. His books include Here Comes Everybody: The Power of Organizing Without Organizations (2008) and Cognitive Surplus: Creativity and Generosity in a Connected Age (2010). His website is shirky.com.

  >>> gutenberg economics

  JOHANNES GUTENBERG, a printer in Mainz, in present-day Germany, introduced movable type to the world in the middle of the fifteenth century. Printing presses were already in use, but they were slow and laborious to operate, because a carving had to be made of the full text of each page. Gutenberg realized that if you made carvings of individual letters instead, you could arrange them into any words you liked. These carved letters—type—could be moved around to make new pages, and the type could be set in a fraction of the time that it would take to carve an entire page from scratch.

  Movable type introduced something else to the intellectual landscape of Europe: an abundance of books. Prior to Gutenberg, there just weren’t that many books. A single scribe, working alone with a quill and ink and a pile of vellum, could make a copy of a book, but the process was agonizingly slow, making the output of scribal copying small and the price high. At the end of the fifteenth century, a scribe could produce a single copy of a five-hundred-page book for roughly thirty florins, while Ripoli, a Florentine press, would, for roughly the same price, print more than three hundred copies of the same book. Hence most scribal capacity was given over to producing additional copies of extant works. In the thirteenth century Saint Bonaventure, a Franciscan monk, described four ways a person could make books: copy a work whole, copy from several works at once, copy an existing work with his own additions, or write out some of his own work with additions from elsewhere. Each of these categories had its own name, like scribe or author, but Bonaventure does not seem to have considered—and certainly didn’t describe—the possibility of anyone creating a wholly original work. In this period, very few books were in existence and a good number of them were copies of the Bible, so the idea of bookmaking was centered on re-creating and recombining existing words far more than on producing novel ones.
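  To put those numbers in unit terms (the thirty-florin price and the three-hundred-copy run come from the passage above; the per-copy division is my own illustration, not Shirky’s):

\[
\text{scribal cost per copy} = \frac{30\ \text{florins}}{1\ \text{copy}} = 30\ \text{florins},
\qquad
\text{press cost per copy} \approx \frac{30\ \text{florins}}{300\ \text{copies}} = 0.1\ \text{florin}
\]

  On these figures, movable type cut the unit cost of a book by a factor of roughly three hundred, which is the arithmetic behind the sudden abundance of books.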

  Movable type removed that bottleneck, and the first thing the growing cadre of European printers did was to print more Bibles—lots more Bibles. Printers began publishing Bibles translated into vulgar languages—contemporary languages other than Latin—because priests wanted them, not just as a convenience but as a matter of doctrine. Then they began putting out new editions of works by Aristotle, Galen, Virgil, and others that had survived from antiquity. And still the presses could produce more. The next move by the printers was at once simple and astonishing: print lots of new stuff. Prior to movable type, much of the literature available in Europe had been in Latin and was at least a millennium old. And then in a historical eyeblink, books started appearing in local languages, books whose text was months rather than centuries old, books that were, in aggregate, diverse, contemporary, and vulgar. (Indeed, the word novel comes from this period, when newness of content was itself new.)

  This radical solution to spare capacity—produce books that no one had ever read before—created new problems, chiefly financial risk. If a printer produced copies of a new book and no one wanted to read it, he’d lose the resources that went into creating it. If he did that enough times, he’d be out of business. Printers reproducing Bibles or the works of Aristotle never had to worry that people might not want their wares, but anyone who wanted to produce a novel book faced this risk. How did printers manage that risk? Their answer was to make the people who bore the risk—the printers—responsible for the quality of the books as well. There’s no obvious reason why people who are good at running a printing press should also be good at deciding which books are worth printing. But a printing press is expensive, requiring a professional staff to keep it running, and because the material has to be produced in advance of demand for it, the economics of the printing press put the risk at the site of production. Indeed, shouldering the possibility that a book might be unpopular marks the transition from printers (who made copies of hallowed works) to publishers (who took on the risk of novelty).
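  One way to see why the risk sat at the site of production is a simple expected-value sketch (this framing, and the symbols \(C\), \(p\), and \(R\), are my own gloss on the passage, not Shirky’s):

\[
\mathbb{E}[\text{profit}] = p \cdot R - C
\]

  Here \(C\) is the up-front cost of the print run, paid before any demand exists; \(R\) is the revenue if the book finds readers; and \(p\) is the probability that it does. For a Bible or an Aristotle, \(p\) was close to 1 and the bet was safe; for a wholly new book, \(p\) was unknown, so whoever paid \(C\) also had to judge quality in advance. That is the step that turned printers into publishers.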

  A lot of new kinds of media have emerged since Gutenberg: images and sounds were encoded onto objects, from photographic plates to music CDs; electromagnetic waves were harnessed to create radio and TV. All these subsequent revolutions, as different as they were, still shared the core of Gutenberg economics: enormous investment costs. It’s expensive to own the means of production, whether it is a printing press or a TV tower, which makes novelty a fundamentally high-risk operation. If it’s expensive to own and manage the means of production or if it requires a staff, you’re in a world of Gutenberg economics. And wherever you have Gutenberg economics, whether you are a Venetian publisher or a Hollywood producer, you’re going to have fifteenth-century risk management as well, where the producers have to decide what’s good before showing it to the audience. In this world almost all media was produced by “the media”; it is the world we all lived in until a few years ago.

 
