The Digital Divide


by Mark Bauerlein


  Meyer’s voice rose. He was hollering. We were back in his university office, and he was attacking the “myth” that we can operate at top speeds on multiple tasks as well as if we were doing one at a time. “That’s ridiculous,” he practically shouted. “That’s ludicrous!” Like William Morris, Meyer often multitasks, reading The New York Times, chatting with his granddaughter, watching television all at once. Indeed, as we talked on the telephone one weekend morning, he interrupted me once or twice to read me the Wimbledon scores off the TV. But that’s for fun, he insisted. “I’m getting little bits and pieces of information as a result of engaging in each of these tasks, but the depth of what I’m getting, the quality of my understanding is nowhere as good as if I was just concentrating on one thing,” he said. “That’s the bottom line.” Once during our many conversations, he sadly confided the reason why he first began to be willing to speak up about multitasking: his seventeen-year-old son, Timothy, was broadsided and killed by a distracted driver who ran a red light one night in 1995. Spreading the word about the inefficiencies of multitasking is a little bit of “payback” for Tim’s death. But that’s only part of the story. Now Meyer speaks out about the costs of multitasking because he’s convinced that it exemplifies a head-down, tunnel vision way of life that values materialism over happiness, productivity over insight and compassion. He’s optimistic, taking a Darwinian view that people eventually will realize that multitasking’s larger costs outweigh its benefits. But first Meyer believes that the few in the know will have to do a whole lot of hollering before we recognize the magnitude of the problem and begin to change. So he’s raising his voice, as much as he can.

  notes

  This chapter has been published in an abridged version. Any notes corresponding to the omitted text have been deleted.

  1 Interview with Daniel Anderson, May 2006.

  2 Daniel Anderson and Heather Kirkorian, “Attention and Television,” in The Psychology of Entertainment, ed. J. Bryant and P. Vorderer (Lawrence Erlbaum, 2006), pp. 35–54.

  3 John E. Richards and Daniel Anderson, “Attentional Inertia in Children’s Extended Looking at Television,” Advances in Child Development and Behavior, ed. R. V. Kail (Academic Press, 2004), p. 168.

  4 Daniel Anderson and Tiffany Pempek, “Television and Very Young Children,” American Behavioral Scientist 48, no. 5 (January 2005), p. 508.

  5 Marie Schmidt et al., “The Effects of Background Television on the Toy Play of Very Young Children,” Child Development, in press.

  6 Heather Kirkorian et al., “The Impact of Background Television on Parent-Child Interaction,” poster presented at the biennial meeting of the Society for Research in Child Development, Atlanta, April 2005.

  7 Victoria Rideout and Donald Roberts, Generation M: Media in the Lives of Eight- to Eighteen-Year-Olds (Henry J. Kaiser Family Foundation, March 2005), p. 9.

  8 Marshall McLuhan, Understanding Me: Lectures and Interviews, ed. Stephanie McLuhan and David Staines (MIT Press, 2003), p. 129.

  9 Barbara Schneider and N. Broege, “Why Working Families Avoid Flexibility: The Costs of Over Working,” paper presented at the Alfred P. Sloan International Conference “Why Workplace Flexibility Matters,” Chicago, May 17, 2006.

  10 Eulynn Shiu and Amanda Lenhart, “How Americans Use Instant Messaging” (Pew Internet & American Life Project, 2004), http://www.pewinternet.org/PPF/r/133/reportdisplay.asp.

  11 Bonka Boneva et al., “Teenage Communication in the Instant Messaging Era,” in Computers, Phones and the Internet: Domesticating Information Technology, ed. Robert Kraut, Malcolm Brynin, and Sara Kiesler (Oxford University Press, 2006), pp. 201–18.

  12 Lisa Guernsey, “In the Lecture Hall, A Geek Chorus,” New York Times, July 24, 2003.

  13 Ibid.

  14 Caryn James, “Splitting. Screens. For Minds. Divided,” New York Times, January 9, 2004.

  15 August Fuhrmann, Das Kaiserpanorama und das Welt-Archiv polychromer Stereo-Urkunden auf Glas (1905), p. 8. Reprinted in Stephan Oettermann, The Panorama: History of a Mass Medium, trans. Deborah Lucas Schneider (Zone Books, 1997), p. 230.

  16 Interview with David Meyer, May 2006.

  17 Jonathan Crary, Suspensions of Perception: Attention, Spectacle and Modern Culture (MIT Press, 1999), p. 29.

  18 Ibid., pp. 11–12 and 27.

  19 Arthur Jersild, “Mental Set and Shift,” Archives of Psychology 29 (1927).

  20 Interviews with Steven Yantis and David Meyer, May, June, and July 2006.

  21 David E. Meyer, Professional Biography Published on the Occasion of His Distinguished Scientific Contribution Award (American Psychological Association, 2002), http://www.umich.edu/~bcalab/Meyer_Biography.html.

  22 John Serences and Steven Yantis, “Selective Visual Attention and Perceptual Coherence,” Trends in Cognitive Sciences 10, no. 1 (2006), pp. 38–45. Also Steven Yantis, “How Visual Salience Wins the Battle for Awareness,” Nature Neuroscience 8, no. 8 (2005), pp. 975–77.

  23 Serences and Yantis, “Selective Visual Attention,” p. 43.

  24 Yantis, “How Visual Salience Wins,” p. 975.

  25 Susan Landry et al., “Early Maternal and Child Influences on Children’s Later Independent Cognitive and Social Functioning,” Child Development 71, no. 2 (2000), p. 370.

  26 Charles O’Connor, Howard Egeth, and Steven Yantis, “Visual Attention: Bottom-Up versus Top-Down,” Current Biology 14, no. 19 (2004), pp. R850–52.

  27 “Linda Stone’s Thoughts on Attention,” http://continuouspartialattention.jot.com/WikiHome.

  28 Alan Lightman, “The World Is Too Much with Me,” in Living with the Genie: Essays on Technology and the Quest for Human Mastery, ed. Alan Lightman, Daniel Sarewitz, and Christine Dresser (Island Press, 2003), pp. 287 and 292.

  29 Joshua Rubenstein, David Meyer, and Jeffrey Evans, “Executive Control of Cognitive Processes in Task-Switching,” Journal of Experimental Psychology: Human Perception and Performance 27, no. 4 (2001), pp. 763–97.

  30 Clive Thompson, “Meet the Life Hackers,” New York Times Magazine, October 16, 2005, pp. 40–45.

  31 Gloria Mark, Victor Gonzalez, and Justin Harris, “No Task Left Behind? Examining the Nature of Fragmented Work,” proceedings of the Conference on Human Factors in Computer Systems (Portland, Oregon, 2005), pp. 321–30. Also interview with Gloria Mark, July 2006.

  32 Ibid.

  33 Thompson, “Meet the Life Hackers,” p. 42.

  34 Tony Gillie and Donald Broadbent, “What Makes Interruptions Disruptive? A Study of Length, Similarity and Complexity,” Psychological Research 50 (1989), pp. 243–50.

  35 Jonathan Spira and Joshua Feintuch, The Cost of Not Paying Attention: How Interruptions Impact Knowledge Worker Productivity (Basex, 2005), pp. 2 and 10.

  36 Suzanne Ross, “Two Screens Are Better Than One,” Microsoft Research News and Highlights, http://research.microsoft.com/displayArticle.aspx?id=433&0sr=a. Also Tara Matthews et al., “Clipping Lists and Change Borders: Improving Multitasking Efficiency with Peripheral Information Design,” Proceedings of the Conference on Human Factors in Computer Systems (April 2006), pp. 989–98.

  37 Scott Brown and Fergus I. M. Craik, “Encoding and Retrieval of Information,” Oxford Handbook of Memory, ed. Endel Tulving and Fergus I. M. Craik (Oxford University Press, 2000), p. 79.

  38 Ibid. See also Sadie Dingfelder, “A Workout for Working Memory,” Monitor on Psychology 36, no. 8 (2005), http://www.apa.org/monitor/sep05/workout.html, and Jan de Fockert et al., “The Role of Working Memory in Visual Selective Attention,” Science 291, no. 5509 (2001), pp. 1803–1804.

  39 Lori Bergen, Tom Grimes, and Deborah Potter, “How Attention Partitions Itself During Simultaneous Message Presentations,” Human Communication Research 31, no. 3 (2005), pp. 311–36.

  40 Interview with Mary Czerwinski, July 2006.

  41 W. Wayt Gibbs, “Considerate Computing,” Scientific American (January 2005), pp. 55–61. See also Peter Weiss, “Minding Your Business,” Science News 163, no. 18 (2006), p. 279.

  42 Horwitz quoted in Gibbs, “Considerate Computing.”

  43 Searle quoted in Weiss, “Minding Your Business.”

  44 Paul Virilio, The Vision Machine (Indiana University Press, 1994), p. 59.

  45 Jane Healy, Endangered Minds: Why Our Children Don’t Think (Simon & Schuster, 1990), p. 153.

  46 Arthur T. Jersild, “Reminiscences of Arthur Thomas Jersild: Oral History 1967,” interviewer T. Hogan (Columbia University, 1972), pp. 2, 20, 40–41, 79, and 246.

  47 Brown and Craik, Oxford Handbook of Memory, pp. 93–97. See also John T. Wixted, “A Theory About Why We Forget What We Once Knew,” Current Directions in Psychological Science 14, no. 1 (2005), pp. 6–9.

  48 Alan Lightman, The Diagnosis (Pantheon Books, 2000), pp. 3–20.

 

  a dream come true

  Excerpted from Against the Machine (pp. 125–37).

  LEE SIEGEL is The Daily Beast’s senior columnist. He publishes widely on culture and politics and is the author of three books: Falling Upwards: Essays in Defense of the Imagination (2006), Not Remotely Controlled: Notes on Television (2007), and, most recently, Against the Machine: How the Web Is Reshaping Culture and Commerce—And Why It Matters (2008). In 2002, he received a National Magazine Award for reviews and criticism.

  “Web 2.0” is the Internet’s characteristically mechanistic term for the participatory culture that it has now consummated and established as a social reality. In this topsy-turvy galaxy, no person, fact, or event is beyond your grasp.

  Web 2.0 is what the Internet calls its philosophy of interactivity. It applies to any online experience that allows the user to help create, edit, or revise the content of a website, interact with other users, share pictures, music, and so on. Amazon.com is a product of 2.0 technology because it allows visitors to write their own reviews of books that Amazon offers for sale, and to sell their own used books as well. eBay is 2.0-based because buyers and sellers interact with each other. Web 2.0 rules the social-networking sites like MySpace, Facebook, and Friendster, and also the blogosphere, whose essence is the online exchange of opinions, ideas—and spleen.

  Although Web 2.0 is the brainchild of businessmen, many of its promoters extol it with the rhetoric of “democracy,” that most sacred of American words. But democracy is also the most common and effective American political and social pretext. While the liberal blogosphere thundered with cries of hypocrisy about Bush’s claim that he was bringing democracy to Iraq, no one bothered to peek behind the Internet’s use of the word “democracy” to see if that was indeed what the Internet was bringing to America.

  Here is Lawrence Lessig, the foremost advocate of Internet freedom in the realm of copyright law, on the Internet’s capacity for “capturing and sharing” content—in other words, for offering full participation in the culture:

  You could send an e-mail telling someone about a joke you saw on Comedy Central, or you could send the clip. You could write an essay about the inconsistencies in the arguments of the politician you most love to hate, or you could make a short film that puts statement against statement. You could write a poem that expresses your love, or you could weave together a string—a mash-up—of songs from your favorite artists in a collage and make it available on the Net . . . This “capturing and sharing” promises a world of extraordinarily diverse creativity that can be easily and broadly shared. And as that creativity is applied to democracy, it will enable a broad range of citizens to use technology to express and criticize and contribute to the culture all around.

  Before you try to figure out what Lessig is saying, you have to get through the Internetese, this new, strangely robotic, automatic-pilot style of writing: “A poem that expresses your love” . . . for what? How do you “express . . . the culture all around”? As usual, the Internet’s supreme self-confidence results in lazy tautology: “This ‘capturing and sharing’ . . . can be easily and broadly shared.” And never mind that elsewhere, in the same book—Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity—Lessig defines democracy, strangely, as “control through reasoned discourse,” which would seem to disqualify Comedy Central from being considered one of the pillars of American democracy.

  More telling is Lessig’s idea of “democracy,” a word that in the American context means government by the people through freely elected representatives. Lessig seems to think it means “creativity,” or, as they like to say on the Internet, “self-expression.” But even tyrants allow their subjects to write love poems or exchange favorite recordings. The Roman emperor Augustus cherished Ovid for the latter’s love poetry—until Ovid’s romantic dallying came too close to the emperor’s own interests. And only tyrants forbid their subjects to make political criticisms—loving to hate a politician in public is hardly an expansion of democracy. It’s the result of democracy. Lessig has confused what makes democracy possible—certain political, not cultural, mechanisms—with what democracy makes possible: free “expression.”

  Lessig isn’t the only one singing 2.0’s praises who seems confused about fundamental terms. Jay Rosen, a professor of journalism at New York University, is maybe the most voluble booster of the “citizen journalism” that he believes fulfills the blogosphere’s social promise.

  Rosen has started a blog-based initiative called Assignment Zero, in which anyone, journalist or not, can file an “investigative” news article. Rosen called this “crowdsourcing” in an interview with The New York Times’s David Carr, who reported the story without expressing the slightest skepticism and without presenting an opposing view to Rosen’s. And there is an opposing point of view. In the world of Assignment Zero, if you are someone working for a politician with an ax to grind, you could use Assignment Zero to expose a pesky journalist. Or you could just go on the blog to take down someone who has rubbed you the wrong way. No institutional layers of scrutiny, such as exist at newspapers, would be there to obstruct you.

  Yet Rosen celebrates the 2.0-based blogosphere for what he portrays as its anticommercial gifts to democracy.

  We’re closer to a vision of “producer democracy” than we are to any of the consumerist views that long ago took hold in the mass media, including much of the journalism presented on that platform. We won’t know what a producer public looks like from looking at the patterns of the media age, in which broadcasting and its one-to-many economy prevailed.

  But we do know what a “producer public” will look like. Alvin Toffler described it thirty years ago. It will look like a totalized “consumerist” society, where everyone’s spare moment is on the market and where journalists in the blogosphere will have their every word quantified and evaluated by vigilant advertisers. Where “producers” are simply consumers made more dependent on the marketplace by the illusion of greater participation in the marketplace. On the blog Assignment Zero, the public pays for the stories it wants to see reported. Rosen hasn’t escaped the constrictions of commerce. He’s made them tighter.

  Lessig and Rosen are true believers in the Internet, people who have staked their professional (and economic) futures on its untrammeled success. It’s in their interest to confuse American democracy’s meaning with what American democracy means to them. Time magazine, on the other hand, has no stake in the triumph of the Internet.

  Yet like every other “old” media news organization, Time is so frightened by the Internet boosters’ claims of “old” media’s impending irrelevance that for its “Person of the Year” in 2006, it put a picture of a computer screen on the magazine’s cover with the single word “You.” Then it went on to celebrate Web 2.0 as “the new digital democracy”:

  It’s a story about community and collaboration on a scale never seen before. It’s about the cosmic compendium of knowledge Wikipedia and the million-channel people’s network YouTube and the online metropolis MySpace. It’s about the many wresting power from the few and helping one another for nothing and how that will not only change the world, but also change the way the world changes.... Silicon Valley consultants call it Web 2.0, as if it were a new version of some old software. But it’s really a revolution.... We’re looking at an explosion of productivity and innovation, and it’s just getting started, as millions of minds that would otherwise have drowned in obscurity get backhauled into the global intellectual economy.

  Who are these people? Seriously, who actually sits down after a long day at work and says, I’m not going to watch Lost tonight. I’m going to turn on my computer and make a movie starring my pet iguana? I’m going to mash up 50 Cent’s vocals with Queen’s instrumentals? I’m going to blog about my state of mind or the state of the nation or the steak-frites at the new bistro down the street? Who has that time and that energy and that passion?

  The answer is, you do. And for seizing the reins of the global media, for founding and framing the new digital democracy, for working for nothing and beating the pros at their own game, Time’s Person of the Year for 2006 is you.

  Yes, seriously, who has the time, energy, and passion to make a movie about his pet iguana and broadcast it over the Internet? Who has reached that level of commitment to democracy? Who has the time, energy, and passion to mash up 50 Cent’s vocals with Queen’s instrumentals, to blog about his state of mind or the state of the nation or steak-frites? Time’s encomium to a brave new world reads like a forced confession’s rote absurdity.

  About one thing, however, Time was right. All this so-called play was not play at all. Everyone was getting “backhauled”—whatever that means—into the “global intellectual economy,” though by “intellectual” Time meant nonmaterial, mental. Deliberately or not, Time was adding its voice to the general gulling of Internet boosterism and giving a helpful push to the facile shift of culture to commerce.

 
