Is the Internet Changing the Way You Think?


by John Brockman


  Digital media and networks can empower only the people who learn how to use them—and pose dangers to those who don’t know what they are doing. Yes, it’s easy to drift into distraction, fall for misinformation, allow attention to fragment rather than focus, but those mental temptations pose dangers only for the untrained mind. Learning the mental discipline to use thinking tools without losing focus is one of the prices I am glad to pay to gain what the Web has to offer.

  Those people who do not gain fundamental literacies of attention, crap detection, participation, collaboration, and network awareness are in danger of all the pitfalls critics point out—shallowness, credulity, distraction, alienation, addiction. I worry about the billions of people who are gaining access to the Net without the slightest clue about how to find knowledge and verify it for accuracy, how to advocate and participate rather than passively consume, how to discipline and deploy attention in an always-on milieu, how and why to use those privacy protections that remain available in an increasingly intrusive environment.

  The realities of my life as a professional writer—if the words didn’t go out, the money didn’t come in—drove me to evolve a set of methods and disciplines. I know that others have mastered far beyond my own practice the mental habits I’ve stumbled upon, and I suspect that learning these skills is less difficult than learning long division. I urge researchers and educators to look more systematically where I’m pointing.

  When I started out as a freelance writer in the 1970s, my most important tools were a library card, a typewriter, a notebook, and a telephone. In the early 1980s, I became interested in the people at Xerox Palo Alto Research Center (PARC) who were using computers to edit text without physically cutting, pasting, and retyping pages. Through PARC, I discovered Douglas Engelbart, who had spent the first decade of his career trying to convince somebody, anybody, that using computers to augment human intellect was not a crazy idea. Engelbart set out in the early 1960s to demonstrate that computers could be used to automate low-level cognitive support tasks, such as cutting, pasting, and revising text, and also to enable intellectual tools, such as the hyperlink, that weren’t possible with Gutenberg-era technology.

  He was convinced that this new way to use computers could lead to

  increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insolvable.*

  Important caveats and unpredicted side effects notwithstanding, Engelbart’s forecasts have come to pass in ways that surprised him. What did not surprise him was the importance of both the know-how and how-to-know that unlock the opportunities afforded by augmentation technology.

  From the beginning, Engelbart emphasized that the hardware and software created at his Stanford Research Institute laboratory, from the mouse to the hyperlink to the word processor, were part of a system that included “humans, language, artifacts, methodology, and training.” Long before the Web came along, Engelbart was frustrated that so much progress had been made in the capabilities of the artifacts but so little study had been devoted to advancing the language, methodology, and training—the literacies that necessarily accompany the technical capabilities.

  Attention is the fundamental literacy. Every second I spend online, I make decisions about where to spend my attention. Should I devote any mind share at all to this comment or that headline?—a question I need to answer each time an attractive link catches my eye. Simply becoming aware of the fact that life online requires this kind of decision making was my first step in learning to tune a fundamental filter on what I allow into my head—a filter that is under my control only if I practice controlling it. The second level of decision making is whether I want to open a tab on my browser because I’ve decided this item will be worth my time tomorrow. The third decision: Do I bookmark this site because I’m interested in the subject and might want to reference it at some unspecified future time? Online attention taming begins with what meditators call mindfulness—the simple, self-influencing awareness of how attention wanders.

  Life online is not solitary. It’s social. When I tag and bookmark a Website, a video, an image, I make my decisions visible to others. I take advantage of similar knowledge curation undertaken by others when I start learning a topic by exploring bookmarks, find an image to communicate an idea by searching for a tag. Knowledge sharing and collective action involve collaborative literacies.

  Crap detection—Hemingway’s name for what digital librarians call credibility assessment—is another essential literacy. If all schoolchildren could learn one skill before they go online for the first time, I think it should be the ability to find the answer to any question and the skills necessary to determine whether the answer is accurate or not.

  Network awareness, from the strength of weak ties and the nature of small-world networks to the power of publics and the how and why of changing Facebook privacy settings, would be the next literacy I would teach, after crap detection. Networks aren’t magic, and knowing the principles by which they operate confers power on the knowledgeable. How could people not use the Internet in muddled, frazzled, fractured ways, when hardly anybody instructs anybody else about how to use the Net salubriously? It is inevitable that people will use it in ways that influence how they think and what they think.

  It is not inevitable that these influences will be destructive. The health of the online commons will depend on whether more than a tiny minority of Net users become literate Netizens.

  Information Metabolism

  Esther Dyson

  Catalyst, information technology start-ups, EDventure Holdings; former chairman, Electronic Frontier Foundation and ICANN; author, Release 2.1

  I love the Internet. It’s a great tool precisely because it is so content- and value-free. Anyone can use it for his own purposes, good or bad, big or small, trivial or important. It impartially transmits all kinds of content, one-way or two-way or broadcast, public or private, text or video or sound or data.

  But it does have one overwhelming feature: immediacy. (And when the immediacy is ruptured, its users gnash their teeth.) That immediacy is seductive: You can get instant answers, instant responses. If you’re lonely, you can go online and find someone to chat with. If you want business, you can send out an e-mail blast and get at least a few responses—a .002 percent response rate means 200 messages back (including some hate mail) for a small list. If you want to do good, there are thousands of good causes competing for your attention at the click of your mouse.

  But sometimes I think much of what we get on the Internet is empty calories. It’s sugar—short videos, pokes from friends, blog posts, Twitter posts (even blogs seem long-winded now), pop-ups, visualizations . . . Sugar is so much easier to digest, so enticing—and ultimately it leaves us hungrier than before.

  Worse than that, over a long period many of us are genetically disposed to lose our ability to digest sugar if we consume too much of it. It makes us sick long-term, as well as giving us indigestion and hypoglycemic fits. Could that be true of information sugar as well? Will we become allergic to it even as we crave it? And what will serve as information insulin?

  In the spirit of brevity if not immediacy, I leave it to the reader to ponder these questions.

  Ctrl + Click to Follow Link

  George Church

  Professor, Harvard University; director, Personal Genome Project

  If time did permit, I’d begin with the “How” of “How is the Internet changing the way you think?” Not “how much?” or “in what manner?” but “for what purpose?” “To be, that is the question.”

  Does the Internet pose an existential risk to all known intelligence in the universe or a path to survival? Yes; we see sea change from I-Ching to e-Change.

  Yes; it (IT) consumes 100 billion watts, but this is only 0.7 percent of human power consumption.

  Yes; it might fragment the attention span of the Twitter generation. (For my world, congenitally shattered by narcolepsy and dyslexia, reading/chatting online in 1968 was no big deal.)

  Before cuneiform, we revered the epic poet. Before Gutenberg, we exalted good handwriting. We still gasp at feats of linear memory, such as Lu Chao reciting 67,890 digits of π or Kim Peek’s recall of 12,000 books (60 gigabytes)—even though these are pathetic compared to the Internet’s 10 exabytes (double that in five years).

  But the Internet is amazing not for storage (or math) but for connections. Going from footnotes to hypertext to search engines dramatically opens doors for evidence-based thinking, modeling, and collaboration. It transforms itself from mere text to Google Goggles for places and Picasa for faces.

  But still it can’t do things that Einstein and Curie could. Primate brains changed dramatically from early apes at 400 cc to Habilis at 750 cc to Neanderthal at 1,500 cc.

  “How did that change the way you think?” and “For what purpose?” How will we think to rebuild the ozone after the next nearby supernova? Or nudge the next earth-targeted asteroid? Or contain a pandemic in our dense and well-mixed population? And how will we prepare for those rare events by solving today’s fuel, food, psychological, and poverty problems, which prevent 6.7 billion brains from achieving our potential? The answer is blowin’ in the Internet wind.

  Replacing Experience with Facsimile

  Eric Fischl and April Gornik

  Visual artists

  We might rephrase the question as “How has the Internet changed the way you see?”

  For the visual artist, seeing is essential to thought. It organizes information and shapes how we develop thoughts and feelings. It’s how we connect.

  So, how has the Internet changed us visually? The changes are subtle yet profound. They did not start with the computer. The changes began with the camera and other film-based media, and the Internet has had an exponential effect on that change.

  The result is a leveling of visual information, whereby it all assumes the same characteristics. One loss is a sense of scale. Another is a loss of differentiation between materials and the process of making. All visual information “looks” the same, with film/photography being the common denominator.

  Art objects contain a dynamism based on scale and physicality that produces a somatic response in the viewer. The powerful visual experience of art locates the viewer very precisely as an integrated self within the artist’s vision. With the flattening of visual information and the randomness of size inherent in reproduction, the significance of scale is eroded. Visual information becomes based on image alone. Experience is replaced with facsimile.

  As admittedly useful as the Internet is, easy access to images of everything and anything creates an illusion of knowledge and experience. The world pictured as pictures does not deliver the experience of art seen and experienced physically. It is possible for an art-experienced person to “translate” what is seen online, but the experience is necessarily remote.

  As John Berger pointed out in his 1978 essay “The Uses of Photography,” photography is by its nature a memory device that allows us to forget. Perhaps something similar can be said about the Internet. In terms of art, the Internet expands the network of reproduction that replaces the way we “know” something. It replaces experience with facsimile.

  Outsourcing the Mind

  Gerd Gigerenzer

  Psychologist; director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development, Berlin; author, Gut Feelings

  When I came to the Center for Advanced Study in Palo Alto in the fall of 1989, I peered into my new cabinlike office. What struck me was the complete absence of technology. No telephone, e-mail, or other communication facilitators. Nothing could interrupt my thoughts. Technology could be accessed outside the offices whenever one wished, but it was not allowed to enter. This protective belt was there to make sure that scholars had time to think, and to think deeply.

  In the meantime, though, the center, like other institutions, has surrendered to technology. Today people’s minds are in a state of constant alert, waiting for the next e-mail, the next SMS, as if these will deliver the final, earth-shattering insight. I find it surprising that scholars in the “thinking profession” would so easily let their attention be controlled from the outside, minute by minute, just like letting a cell phone interrupt a good conversation. Were messages to pop up on my screen every second, I would not be able to think straight. Maintaining the center’s spirit, I check my e-mail only once a day and keep my cell phone switched off when I’m not making a call. An hour or two without interruption is heaven for me.

  But the Internet can be used in an active rather than a reactive way—that is, by not letting it determine how long we can think and when we have to stop. So the question is, Does an active use of the Internet change our way of thinking? I believe so. The Internet shifts our cognitive functions from searching for information inside the mind toward searching outside the mind. But it is not the first technology to do so.

  Consider the invention that changed human mental life more than anything else: writing, and subsequently the printing press. Writing made analysis possible; it allowed us to compare texts, which is difficult in an oral tradition. Writing also made exactitude possible, as in higher-order arithmetic—without any written form, these mental skills quickly meet their limits. But writing makes long-term memory less important than it once was, and schools have largely replaced the art of memorization with training in reading and writing.

  Most of us can no longer memorize hour-long folktales and songs, as in an oral tradition. The average modern mind has a poorly trained long-term memory, forgets rather quickly, and searches for information more often in outside sources, such as books, than in its own memory. The Internet has amplified this trend of shifting knowledge from the inside to the outside and taught us new strategies for finding what we want by using search engines.

  This is not to say that before writing, the printing press, and the Internet our minds did not have the ability to retrieve information from outside sources. But these sources were other people, and the skills were social, such as the art of persuasion and conversation. To retrieve information from Wikipedia, say, social skills are unnecessary.

  The Internet is essentially a huge storage room of information. We are in the process of outsourcing information storage and retrieval from mind to computer, just as many of us have already outsourced doing mental arithmetic to the pocket calculator. We may lose some skills in this process, such as the ability to concentrate over an extended period of time and the ability to store large amounts of information in long-term memory, but the Internet is also teaching us new skills for accessing information.

  It is important to realize that mentality and technology are one extended system. The Internet is a kind of collective memory, to which our minds will adapt until a new technology eventually replaces it. Then we will begin outsourcing other cognitive abilities and—it is to be hoped—learning new ones.

  A Prehistorian’s Perspective

  Timothy Taylor

  Archaeologist, University of Bradford, United Kingdom; author, The Artificial Ape: How Technology Changed the Course of Human Evolution

  I do not think the Internet has significantly changed the way we think: It was designed for people like me, by people like me, most of them English-speakers. Fundamentally reflecting Western, rationalist, objective, data-organizing drives, the Internet simply enhances my ability to think in familiar ways, letting me work longer, more often, with better focus, free from the social tyranny of the library and the uncertainty of the mails. The Internet has changed what I think, however—most notably about where the human race is now headed. From a prehistorian’s perspective, I judge that we have been returned to a point last occupied at the time of our evolutionary origin.

  When the first stone tool was chipped more than 2 million years ago, it signaled a new way of being. The ancestral community learned to make flint axes, and those first artificial objects, in turn, critically framed a shared, reflective consciousness that began to express itself in language. An axe could be both made and said, used and asked for. The invention of technology brought the earliest unitary template for human thought into being. It can even be argued that it essentially created us as characteristically human.

  What happened next is well known: Technology accelerated adaptation. The original ancestral human culture spread out across continents and morphed into cultures, plural—myriad ways of being. While isolated groups unconsciously drifted into ever greater idiosyncrasy, those who found themselves in competition for the same resources consciously strove to differentiate themselves from their neighbors. This ever deepening cultural specificity facilitated the dehumanization of enemies that successful warfare, driven by jealously guarded technological innovation, required.

  Then reunification began, starting 5,000 years ago, with the development of writing—a technology that allowed the transcription of difference. War was not over, but alien thoughts did begin to be translated, at first very approximately, across the boundaries of local incomprehension. The mature Internet marks the completion of this process and thus the reemergence of a fully contiguous human cultural landscape. We now have the same capacity for being united under a common language and shared technology that our earliest human ancestors had.

  So in a crucial sense we are back at the beginning, returned into the presence of a shared template for human thought. From now on, there are vanishingly few excuses for remaining ignorant of objective scientific facts, and ever thinner grounds for cultivating hatred through willful failure to recognize our shared humanity. Respecting difference has its limits, however: The fact of our knowing that there is a humanity to share means we must increasingly work toward agreeing on common moral standards. The Internet means that there is nowhere to hide and no way to shirk responsibility when the whole tribe makes informed decisions (as it now must) about its shared future.

 
