The Digital Divide: Writings For and Against Facebook, YouTube, Texting, and the Age of Social Networking


by Mark Bauerlein

TV: “Television in the Home, 1998: Third Annual Survey of Parents and Children, Annenberg Policy Center” (June 22, 1998) gives the number of TV hours watched per day as 2.55. M. Chen, in the Smart Parents’ Guide to Kids’ TV (1994), gives the number as 4 hours/day. Taking the average, roughly 3.3 hrs/day × 365 days × 18 years = 21,681 hours.

  Commercials: There are roughly eighteen 30-second commercials in a TV hour. 18 commercials/hour × 3.3 hours/day × 365 days × 20 years (infants love commercials) = 433,620.

  Reading: Eric Leuliette, a voracious (and meticulous) reader who has listed online every book he has ever read (www.csr.utexas.edu/personal/leuliette/fw_table_home.html), read about 1,300 books through college. If we take 1,300 books × 200 pages per book × 400 words per page, we get 104,000,000 words. Read at 400 words per minute, that gives 260,000 minutes, or 4,333 hours, a little over 3 hours per book. Although others may read more slowly, most have read far fewer books than Leuliette.
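  As a quick check, the arithmetic in these three notes can be reproduced directly from the figures quoted above (a minimal sketch, using the rounded 3.3 hrs/day that the notes use):

```python
# Reproduces the back-of-the-envelope estimates quoted in the notes above.
tv_hours_per_day = 3.3                           # rounded average of 2.55 and 4 hrs/day
tv_hours = tv_hours_per_day * 365 * 18           # ~21,681 hours of TV by age 18

commercials = 18 * tv_hours_per_day * 365 * 20   # ~433,620 commercials by age 20

books = 1_300
words_read = books * 200 * 400                   # 104,000,000 words
reading_minutes = words_read / 400               # at 400 words per minute
reading_hours = reading_minutes / 60             # ~4,333 hours, a little over 3 hrs/book

print(f"TV hours: {tv_hours:,.0f}")
print(f"Commercials seen: {commercials:,.0f}")
print(f"Reading: {reading_hours:,.0f} hours ({reading_hours / books:.1f} hrs/book)")
```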

  2 Paul Perry in American Way, May 15, 2000.

  3 Renate Nummela Caine and Geoffrey Caine, Making Connections: Teaching and the Human Brain (Addison-Wesley, 1991), p. 31.

  4 Dr. Mriganka Sur, Nature, April 20, 2000.

  5 Sandra Blakeslee, New York Times, April 24, 2000.

  6 Leslie Ungerleider, National Institutes of Health.

  7 James McClelland, University of Pittsburgh.

  8 Cited in Inferential Focus Briefing, September 30, 1997.

  9 Virginia Berninger, University of Washington, American Journal of Neuroradiology, May 2000.

  10 Dr. Mark Jude Tramo of Harvard. Reported in USA Today, December 10, 1998.

  11 Newsweek, January 1, 2000.

  12 They include Alexandr Romanovich Luria (1902–1977), Soviet pioneer in neuropsychology, author of The Human Brain and Psychological Processes (1963), and, more recently, Dr. Richard Nisbett of the University of Michigan.

  13 Quoted in Erica Goode, “How Culture Molds Habits of Thought,” New York Times, August 8, 2000.

  14 John T. Bruer, The Myth of the First Three Years (The Free Press, 1999), p. 155.

  15 G. Reid Lyon, a neuropsychologist who directs reading research funded by the National Institutes of Health, quoted in Frank D. Roylance, “Intensive Teaching Changes Brain,” SunSpot, Maryland’s Online Community, May 27, 2000.

  16 Alan T. Pope, research psychologist, Human Engineering Methods, NASA. Private communication.

  17 Time, July 5, 1999.

  18 The Economist, December 6, 1997.

  19 Kathleen Baynes, neurology researcher, University of California, Davis, quoted in Robert Lee Hotz, “In Art of Language, the Brain Matters,” Los Angeles Times, October 18, 1998.

  20 Dr. Michael S. Gazzaniga, neuroscientist at Dartmouth College, quoted in Robert Lee Hotz, “In Art of Language, the Brain Matters,” Los Angeles Times, October 18, 1998.

  21 William D. Winn, Director of the Learning Center, Human Interface Technology Laboratory, University of Washington, quoted in Moore, Inferential Focus Briefing (see note 22).

  22 Peter Moore, Inferential Focus Briefing, September 30, 1997.

  23 Ibid.

  24 Patricia Marks Greenfield, Mind and Media: The Effects of Television, Video Games and Computers (Harvard University Press, 1984).

  25 Dr. Edward Westhead, professor of biochemistry (retired), University of Massachusetts.

  26 A. C. Graesser and N. K. Person, “Question Asking During Tutoring,” American Educational Research Journal 31 (1994), pp. 104–107.

  27 Elizabeth Lorch, psychologist, Amherst College, quoted in Malcolm Gladwell, The Tipping Point: How Little Things Can Make a Big Difference (Little Brown & Co., 2000), p. 101.

  28 John Kernan, President, The Lightspan Partnership. Personal communication.

  29 Evaluation of Lightspan, “Research Results from 403 Schools and Over 14,580 Students,” February 2000, CD-ROM.

  30 Debra A. Lieberman, “Health Education Video Games for Children and Adolescents: Theory, Design and Research Findings,” paper presented at the annual meeting of the International Communications Association, Jerusalem, 1998.

  31 Scientific Learning Corporation, National Field Trial Results (pamphlet). See also Merzenich et al., “Temporal Processing Deficits of Language-Learning Impaired Children Ameliorated by Training,” and Tallal et al., “Language Comprehension in Language-Learning Impaired Children Improved with Acoustically Modified Speech,” in Science (January 5, 1996), pp. 27–28 and 77–84.

  32 Michael Parmentier, Director, Office of Readiness and Training, Department of Defense, The Pentagon. Private briefing.

  33 Don Johnson, Office of Readiness and Training, Department of Defense, The Pentagon. Private briefing.

  < Steven Johnson >

  the internet

  Excerpted from Everything Bad Is Good for You (pp. 116–24).

  STEVEN JOHNSON has authored books on science, technology, and personal experience, including The Ghost Map (2006), Everything Bad Is Good for You: How Today’s Popular Culture Is Actually Making Us Smarter (2005), and Mind Wide Open: Your Brain and the Neuroscience of Everyday Life (2005). Johnson is a contributing editor for Wired magazine and a columnist for Discover magazine, as well as cofounder and editor in chief of FEED. He is a Distinguished Writer-in-Residence at the New York University Department of Journalism. He has published in The New York Times, The Wall Street Journal, The Nation, and many other periodicals, and has appeared on The Charlie Rose Show, The Daily Show with Jon Stewart, and The NewsHour with Jim Lehrer. His website is stevenberlinjohnson.com.

  VIEWERS WHO GET LOST in 24’s social network have a resource available to them that Dallas viewers lacked: the numerous online sites and communities that share information about popular television shows. Just as Apprentice viewers mulled Troy’s shady business ethics in excruciating detail, 24 fans exhaustively document and debate every passing glance and brief allusion in the series, building detailed episode guides and lists of Frequently Asked Questions. One Yahoo! site featured at the time of this writing more than forty thousand individual posts from ordinary viewers, contributing their own analysis of last night’s episode, posting questions about plot twists, or speculating on the upcoming season. As the shows have complexified, the resources for making sense of that complexity have multiplied as well. If you’re lost in 24’s social network, you can always get your bearings online.

  All of which brings us to another crucial piece in the puzzle of the Sleeper Curve: the Internet. Not just because the online world offers resources that help sustain more complex programming in other media, but because the process of acclimating to the new reality of networked communications has had a salutary effect on our minds. We do well to remind ourselves how quickly the industrialized world has embraced the many forms of participatory electronic media—from e-mail to hypertext to instant messages and blogging. Popular audiences embraced television and the cinema in comparable time frames, but neither required the learning curve of e-mail or the Web. It’s one thing to adapt your lifestyle to include time for sitting around watching a moving image on a screen; it’s quite another to learn a whole new language of communication and a small army of software tools along with it. It seems almost absurd to think of this now, but when the idea of hypertext documents first entered the popular domain in the early nineties, it was a distinctly avant-garde idea, promoted by an experimentalist literary fringe looking to explode the restrictions of the linear sentence and the page-bound book. Fast-forward less than a decade, and something extraordinary occurs: exploring nonlinear document structures becomes as second nature as dialing a phone for hundreds of millions—if not billions—of people. The mass embrace of hypertext is like the Seinfeld “Betrayal” episode: a cultural form that was once exclusively limited to avant-garde sensibilities, now happily enjoyed by grandmothers and third graders worldwide.

  I won’t dwell on this point, because the premise that increased interactivity is good for the brain is not a new one. (A number of insightful critics—Kevin Kelly, Douglas Rushkoff, Janet Murray, Howard Rheingold, Henry Jenkins—have made variations on this argument over the past decade or so.) But let me say this much: The rise of the Internet has challenged our minds in three fundamental and related ways: by virtue of being participatory, by forcing users to learn new interfaces, and by creating new channels for social interaction.

  Almost all forms of sustained online activity are participatory in nature: writing e-mails, sending IMs, creating photo logs, posting two-page analyses of last night’s Apprentice episode. Steve Jobs likes to describe the difference between television and the Web as the difference between lean-back and sit-forward media. The networked computer makes you lean in, focus, engage, while television encourages you to zone out. (Though not as much as it used to, of course.) This is the familiar interactivity-is-good-for-you argument, and it’s proof that the conventional wisdom is, every now and then, actually wise.

  There was a point several years ago, during the first wave of Internet cheerleading, when it was still possible to be a skeptic about how participatory the new medium would turn out to be. Everyone recognized that the practices of composing e-mail and clicking on hyperlinks were going to be mainstream activities, but how many people out there were ultimately going to be interested in publishing more extensive material online? And if that turned out to be a small number—if the Web turned out to be a medium where most of the content was created by professional writers and editors—was it ultimately all that different from the previous order of things?

  The tremendous expansion of the blogging world over the past two years has convincingly silenced this objection. According to a 2004 study by the Pew Charitable Trust, more than 8 million Americans report that they have a personal weblog or online diary. The wonderful blog-tracking service Technorati reports that roughly 275,000 blog entries are published in the average day—a tiny fraction of them authored by professional writers. After only two years of media hype, the number of active bloggers in the United States alone has reached the audience size of prime-time network television.

  So why were the skeptics so wrong about the demand for self-publishing? Their primary mistake was to assume that the content produced in this new era would look like old-school journalism: op-ed pieces, film reviews, cultural commentary. There’s plenty of armchair journalism out there, of course, but the great bulk of personal publishing is just that, personal: the online diary is the dominant discursive mode in the blogosphere. People are using these new tools not to opine about social security privatization; they’re using the tools to talk about their lives. A decade ago Douglas Rushkoff coined the phrase “screenagers” to describe the first generation that grew up with the assumption that the images on a television screen were supposed to be manipulated; that they weren’t just there for passive consumption. The next generation is carrying that logic to a new extreme: the screen is not just something you manipulate, but something you project your identity onto, a place to work through the story of your life as it unfolds.

  To be sure, that projection can create some awkward or unhealthy situations, given the public intimacy of the online diary and the potential for identity fraud. But every new technology can be exploited or misused to nefarious ends. For the vast majority of those 8 million bloggers, these new venues for self-expression have been a wonderful addition to their lives. There’s no denying that the content of your average online diary can be juvenile. These diaries are, after all, frequently created by juveniles. But thirty years ago those juveniles weren’t writing novels or composing sonnets in their spare time; they were watching Laverne & Shirley. Better to have minds actively composing the soap opera of their own lives than zoning out in front of someone else’s.

  The Net has actually had a positive lateral effect on the tube as well, in that it has liberated television from attempting tasks that the medium wasn’t innately well suited to perform. As a vehicle for narrative and first-person intimacy, television can be a delightful medium, capable of conveying remarkably complex experiences. But as a source of information, it has its limitations. The rise of the Web has enabled television to off-load some of its information-sharing responsibilities to a platform that was designed specifically for the purposes of sharing information. This passage from Neil Postman’s Amusing Ourselves to Death showcases exactly how much has changed over the past twenty years:

  Television . . . encompasses all forms of discourse. No one goes to a movie to find out about government policy or the latest scientific advance. No one buys a record to find out the baseball scores or the weather or the latest murder.... But everyone goes to television for all these things and more, which is why television resonates so powerfully throughout the culture. Television is our culture’s principal mode of knowing about itself.

  No doubt in total hours television remains the dominant medium in American life, but there is also no doubt that the Net has been gaining on it with extraordinary speed. If the early adopters are any indication, that dominance won’t last for long. And for the types of knowledge-based queries that Postman describes—looking up government policy or sports scores—the Net has become the first place that people consult. Google is our culture’s principal way of knowing about itself.

  The second way in which the rise of the Net has challenged the mind runs parallel to the evolving rule systems of video games: the accelerating pace of new platforms and software applications forces users to probe and master new environments. Your mind is engaged by the interactive content of networked media—posting a response to an article online, maintaining three separate IM conversations at the same time—but you’re also exercising cognitive muscles interacting with the form of the media as well: learning the tricks of a new e-mail client, configuring the video chat software properly, getting your bearings after installing a new operating system. This type of problem solving can be challenging in an unpleasant way, of course, but the same can be said for calculus. Just because you don’t like troubleshooting your system when your browser crashes doesn’t mean you aren’t exercising your logic skills in finding a solution. This extra layer of cognitive involvement derives largely from the increased prominence of the interface in digital technology. When new tools arrive, you have to learn what they’re good for, but you also have to learn the rules that govern their use. To be an accomplished telephone user, you needed to grasp the essential utility of being able to have real-time conversations with people physically removed from you, and you had to master the interface of the telephone device itself. That same principle holds true for digital technologies; only the interfaces have expanded dramatically in depth and complexity. There’s only so much cognitive challenge at stake in learning the rules of a rotary dial phone. But you could lose a week exploring all the nooks and crannies of Microsoft Outlook.

  Just as we saw in the world of games, learning the intricacies of a new interface can be a genuine pleasure. This is a story that is not often enough told in describing our evolving relationship with software. There is a kind of exploratory wonder in downloading a new application, and meandering through its commands and dialog boxes, learning its tricks by feel. I’ve often found certain applications are more fun to explore the first time than they actually are to use—because in the initial exploration, you can delight in features that are clever without being terribly helpful. This sounds like something only a hardened tech geek would say, but I suspect the feeling has become much more mainstream over the past few years. Think of the millions of ordinary music fans who downloaded Apple’s iTunes software: I’m sure many of them enjoyed their first walk through the application, seeing all the tools that would revolutionize the way they listened to music. Many of them, I suspect, eschewed the manual altogether, choosing to probe the application the way gamers investigate their virtual worlds: from the inside. That probing is a powerful form of intellectual activity—you’re learning the rules of a complex system without a guide, after all. And it’s all the more powerful for being fun.

  Then there is the matter of social connection. The other concern that Net skeptics voiced a decade ago revolved around a withdrawal from public space: yes, the Internet might connect us to a new world of information, but it would come at a terrible social cost, by confining us in front of barren computer monitors, away from the vitality of genuine communities. In fact, nearly all of the most hyped developments on the Web in the past few years have been tools for augmenting social connection: online personals, social and business network sites such as Friendster, the Meetup.com service so central to the political organization of the 2004 campaign, the many tools designed to enhance conversation between bloggers—not to mention all the handheld devices that we now use to coordinate new kinds of real-world encounters. Some of these tools create new modes of communication that are entirely digital in nature (the cross-linked conversations of bloggers). Others use the networked computer to facilitate a face-to-face encounter (as in Meetup). Others involve a hybrid dance of real and virtual encounters, as in the personals world, where flesh-and-blood dates usually follow weeks of online flirting. Tools like Google have fulfilled the original dream of digital machines becoming extensions of our memory, but the new social networking applications have done something that the visionaries never imagined: they are augmenting our people skills as well, widening our social networks, and creating new possibilities for strangers to share ideas and experiences.

  Television and automobile society locked people up in their living rooms, away from the clash and vitality of public space, but the Net has reversed that long-term trend. After a half-century of technological isolation, we’re finally learning new ways to connect.

  < Maryanne Wolf >

  learning to think in a digital world

  Originally published in the Boston Globe (September 5, 2007).

  MARYANNE WOLF is a professor in the Eliot-Pearson Department of Child Development at Tufts University. She is the author of Proust and the Squid: The Story and Science of the Reading Brain (2007) and the RAVE-O Intervention Program, a fluency comprehension program for struggling readers. She was awarded the Distinguished Professor of the Year Award from the Massachusetts Psychological Association and the Teaching Excellence Award from the American Psychological Association.

 
