The Internet of Us


by Michael P. Lynch




  The Internet of Us

  Knowing More and Understanding Less in the Age of Big Data

  Michael Patrick Lynch

  Liveright Publishing Corporation

  A Division of W. W. Norton & Company

  Independent Publishers Since 1923

  New York London

  For Rene

  In the past, the things that men could do were very limited. . . . But with every increase in knowledge, there has been an increase in what men could achieve. In our scientific world, and presumably still more in the more scientific world of the not distant future, bad men can do more harm, and good men can do more good, than had seemed possible to our ancestors even in their wildest dreams.

  —Bertrand Russell

  All I know is that I don’t know.

  All I know is that I don’t know nothing.

  —Operation Ivy

  Contents

  Preface

  Part I: The New Old Problems of Knowledge

  1. Our Digital Form of Life

  Neuromedia

  Socrates on the Way to Larissa

  Welcome to the Library

  2. Google-Knowing

  Easy Answers

  Being Receptive: Downloading Facts

  John Locke Agrees with Mom

  Being Reasonable: Uploading Reasons

  3. Fragmented Reasons: Is the Internet Making Us Less Reasonable?

  The Abstract Society

  When Fights Break Out in the Library

  The Rationalist’s Delusion

  Democracy as a Space of Reasons

  4. Truth, Lies and Social Media

  Deleting the Truth

  The Real as Virtual

  Interlude: To SIM or Not to SIM

  Falsehood, Fakes and the Noble Lie

  Objectivity and Our Constructed World

  Part II: How We Know Now

  5. Who Wants to Know: Privacy and Autonomy

  Life in the Panopticon

  The Values of Privacy

  The Pool of Information

  Privacy and the Concept of a Person

  Transparency and Power

  6. Who Does Know: Crowds, Clouds and Networks

  Dead Metaphors

  Knowledge Ain’t Just in (Your) Head

  The Knowing Crowd

  The “Netography” of Knowledge

  7. Who Gets to Know: The Political Economy of Knowledge

  Knowledge Democratized?

  Epistemic Equality

  Walmarting the University

  8. Understanding and the Digital Human

  Big Knowledge

  The End of Theory?

  Understanding Understanding

  Knowing How to Chuck

  Coming to Understand as a Creative Act

  9. The Internet of Us

  Technology and Understanding

  Information and the Ties That Bind

  Acknowledgments

  Notes

  Bibliography

  Index

  Preface

  The changes wrought by the Internet are sometimes compared to those brought about by the printing press. In both cases, technological advances led to new ways of distributing information. Knowledge became more widely and cheaply available, which in turn led to mass education, new economies and even social revolution.

  But in truth, the comparison with the printing press underplays the significance of the changes being brought about by the Internet today. The better comparison is with the written word.

  Writing is a technology, a tool. Yet its invention wasn’t just a change in how information and knowledge were distributed. It was a new way of knowing itself. Writing allows us to communicate across time—both with ourselves and with others. It allows us to outsource memory tasks and therefore lessen our cognitive load.

  Not long ago, for example, I discovered a note my father had written, taped to the back of an old chainsaw I had inherited from him. It was more than a note, really; it was a little essay, detailing good and bad practice with the saw. My dad’s house was peppered with such memos. He would write them as a reminder of how best to go about various tasks that one might do only irregularly—replacing the fuel filter on the lawn mower, shutting down the water heater. He would then tape them in a spot where he would be sure to later run across them. When I was a teenager, I found it embarrassing, but I get it now. He was a busy man and knew that he might forget a trick or lesson he’d learned while doing something for the first time. He was, in short, communicating with his future self, while simultaneously relieving his present self of the burden of remembering. That, in microcosm, is what writing allows us to do, and also why its invention is one of the most important developments in human history. It allows us to time-travel and share the thoughts of those who have come before.

  The Internet is bringing about a similar revolution in our ways of knowing. Where the written word allows us to time-travel, the Internet allows us to teleport—or at least to communicate in an immediate way across spatial gulfs. Changes in information technology are making space increasingly irrelevant. Our libraries are no longer bounded by physical walls, and our ways of processing and accessing what is in those libraries don’t require physical interaction. As a result, we no longer have to travel anywhere to find the information we need. Today, the fastest and easiest way of knowing is Google-knowing, which means not just “knowledge by search engine” but the way we are increasingly dependent on knowing via digital means. That can be a good thing; but it can also weaken and undermine other ways of knowing, ways that require more creative, holistic grasps of how information connects together.

  New technology has always spurred a similar debate—and it should. During the heyday of postwar technological expansion in the 1950s, philosophers and artists worried about what nuclear weapons technology was doing to us, and whether our ethical thinking was keeping up with it. Bertrand Russell, writing in the Saturday Evening Post, argued that we need more than expanded access to knowledge; we need wisdom, which he took as a combination of knowledge, will and feeling.1 Russell’s point was simple: growth in knowledge without a corresponding growth in wisdom is dangerous. This book is motivated by a similar worry, and by a desire to do something about it. Yet where Russell was concerned with a specific kind of knowledge—knowledge of nuclear bombs—my concern is with the expansion of knowledge itself, with how the rapid changes in technology are affecting how we know and the responsibilities we have toward that knowledge.

  Still, this is not an “anti-technology” book. I’m a dedicated user of social media and the platforms that enable it (the rise of which is sometimes called “Web 2.0”). I tweet, I Facebook, I have a smartphone, a tablet, and more computers than I care to admit. I am in no position to write an anti-technology book. Technology itself is not the problem. Unlike nuclear weapons or guns, information technology itself is generally not designed to kill people (although it can certainly lend a hand). Information technologies are more like cars: so fast, sleek and super-useful that we can overrely on them, overvalue them and forget that their use has serious consequences. The problems, such as they are, are due to how we are using such technologies.

  My aim is to examine the philosophical foundations of what I’ll call our digital form of life. And whether or not my conclusions are correct, it is clear that this is a task we must engage in if we want to avoid the fate that worried Russell: being swallowed up by our technology.

  Storrs, CT

  October 2015

  Part I

  The New Old Problems of Knowledge

  1

  Our Digital Form of Life


  Neuromedia

  Imagine a society where smartphones are miniaturized and hooked directly into a person’s brain. With a single mental command, those who have this technology—let’s call it neuromedia—can access information on any subject. Want to know the capital of Bulgaria or the average flight velocity of a swallow? It’s right there. Users of neuromedia can take pictures with a literal blink of the eye, do complex calculations instantly, and access, by thought alone, the contact information for anyone they’ve ever met. If you are part of this society, there is no need to remember the name of the person you were introduced to last night at the dinner party; a sub-cellular computing device does it for you.

  For the people of this society, it is as if the world is in their heads. It is a connected world, one where knowledge can be instantly shared with everyone in an extremely intimate way. From the inside, accessing the collective wisdom of the ages is as simple as accessing one’s own memory. Knowledge is not only easy; everyone knows so much more.

  Of course, as some fusspots might point out, not all the information neuromedia allows its users to mentally access is really “knowledge.” Moreover, they might claim, technological windows are two-way. A device that gives you a world of information also gives the world huge amounts of information about you, and that might seem like a threat to privacy. Others might fret about fragmentation—that neuromedia encourages people to share more information with those who already share their worldview, but less with those who don’t. They would worry that this would make us less autonomous, more dependent on our particular hive-mind—less human.

  But we can imagine that many in the society see these potential drawbacks as a price worth paying for immediate and unlimited access to so much information. New kinds of art and experiences are available, and people can communicate and share their selves in ways never before possible. The users of neuromedia are not only free from the burden of memorization, they are free from having to fumble with their smartphone, since thoughts can be uploaded to the cloud or shared at will. With neuromedia, you have the answer to almost any question immediately without effort—and even if your answers aren’t always right, they are right most of the time. Activities that require successful coordination between many people—bridge building, medicine, scientific inquiry, wars—are all made easier by such pooled shared “knowledge.” You can download your full medical history to a doctor in an emergency room by allowing her access to your own internal files. And of course, some people will become immensely wealthy providing and upgrading the neural transplants that make neuromedia possible. All in all, we can imagine, many people see neuromedia as a net gain.

  Now imagine that an environmental disaster strikes our invented society after several generations have enjoyed the fruits of neuromedia. The electronic communication grid that allows neuromedia to function is destroyed. Suddenly no one can access the shared cloud of information by thought alone. Perhaps backup systems preserved the information and knowledge that people had accumulated, and they can still access that information in other ways: personal computers, even books can be dusted off. But for the inhabitants of the society, losing neuromedia is an immensely unsettling experience; it’s like a normally sighted person going blind. They have lost a way of accessing information on which they’ve come to rely. And that, while terrible, also reveals a certain truth. Just as overreliance on one sense can weaken the others, so overdependence on neuromedia might atrophy the ability to access information in other ways, ways that are less easy and require more creative effort.

  While neuromedia is currently still in the realm of science fiction, it may not be as far off as you think.1 The migration of technology into our bodies—the cyborging of the human—is no longer just fantasy.2 And it shouldn’t surprise anyone that the possibilities are not lost on companies such as Google: “When you think about something and don’t really know much about it, you will automatically get information,” Google CEO Larry Page is quoted as saying in Steven Levy’s recent book In the Plex. “Eventually you’ll have an implant, where if you think about a fact, it will just tell you the answer.”3

  This possibility raises some disquieting questions about society, identity and the mind. But as Larry Page’s remark suggests, the deeper question is about information and knowledge itself. How is information technology affecting what we know and how we know it? And what happens to society if we not only know more about the world but the world knows more about us? Taken seriously, these questions force us to grapple not only with how we know with technology, but with how we should. That’s the really important problem, and it is the philosophical and ethical question at the core of this book—one I’ll argue we ignore at our peril.

  My hypothesis is that information technology, while expanding our ability to know in one way, is actually impeding our ability to know in other, more complex ways; ways that require 1) taking responsibility for our own beliefs and 2) working creatively to grasp and reason about how information fits together. Put differently, information technologies, for all their amazing uses, are obscuring a simple yet crucial fact: greater knowledge doesn’t always bring with it greater understanding.

  So, a large part of my aim in this book is to explore how the Internet is changing our minds and lives. That it is doing so is beyond doubt. If you are like me, you already feel a lot smarter when you have access to Google, and somewhat frustrated when you do not—in almost the exact way you feel when you suddenly can’t remember something you knew just yesterday. Knowing by Google is now so familiar that it has an unnoticed seamlessness that we earlier attached only to perception. Where we used to say that seeing is believing, now we might say “googling is believing.” And yet this very fact also makes it easier for people to believe that Barack Obama is a Muslim, or that the measles vaccine is dangerous. Just as we often see what we want to see, we often google what we want to google.

  The increasingly seamless integration of our digital experiences into our lives is not the result of a single shift but the result of a gradual series of changes. Internet wonks tend to think that we are seeing the arrival of the “third wave” of the Internet. First there was Web 1.0 (the ancient days of “Wow! You should check out this email thing!”). Then, starting in the early 2000s, there was Web 2.0 (“Wow! You should check out this Facebook thing!”). Now we have Web 3.0 (the “smart Web”) and, most significantly, the so-called Internet of Things (“Wow! You should check out my smart . . . watch, refrigerator, lamp, socks!”).

  In essence, the “Internet of Things” is a way of describing the phenomenon of networked objects—objects that are embedded with data-streaming sensors and software that connect them to the Net. The “things” in question run the gamut from autonomous connected devices like smartphones to the tiny radio-frequency identification (RFID) microchips and other sorts of sensors attached to everything from UPS trucks and cargo containers to pets, farm animals, cars, thermostats, and NFL helmets. By 2007 there were already 10 million sensors of all sorts connected to the Internet, and some projections have that number rising to 100 trillion by 2030 if not before.4 These sensors are being used not only for economic purposes but for scientific ones (to track migratory animals, for example), and for security and military purposes (such as tracking human beings). According to Jeremy Rifkin, a leading economist of the digital world, the Internet of Things is even giving rise to a “Third Industrial Revolution,” precipitating huge changes in how human beings around the globe interact with one another, economically and otherwise.5

  The Internet of Things is made possible by—and is also producing—big data. The term “big data” has no fixed definition, but rather three connected uses. First, it names the ever-expanding volume of data that surrounds us. You’ve heard some of the statistics. As long ago as 2009, there were already 260 million page views per month on Facebook; in 2012, there were 2.7 billion likes per day. An estimated 130 million blogs exist; there are around 500 million tweets per day; and billions of video views on YouTube. By some estimates, the amount of data in the world in 2013 was already something around 1,200 exabytes; now it is in the zettabytes. That’s hard to get your mind around. As Viktor Mayer-Schönberger and Kenneth Cukier estimate in their recent book, Big Data: A Revolution That Will Transform How We Live, Work, and Think, if you placed that much information on CD-ROMs (remember them?) it would stretch to the moon five times. It would be like giving every single person on the earth 320 times as much information as was stored in the ancient library of Alexandria.6 And by the time you are reading this, the numbers will be even bigger.
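  As a rough check on that comparison—a sketch only, using assumed figures not given in the text (a standard 700 MB CD-ROM about 1.2 mm thick, and an average Earth-to-moon distance of roughly 384,400 km)—the arithmetic does come out to about five moon-lengths:

```python
# Back-of-the-envelope check of the CD-ROM comparison above.
# Assumed figures (not from the book): 700 MB per CD-ROM, 1.2 mm per disc,
# ~384,400 km average Earth-to-moon distance.
data_bytes = 1200 * 10**18          # 1,200 exabytes, the 2013 estimate cited above
cd_capacity_bytes = 700 * 10**6     # ~700 MB per disc
cd_thickness_m = 1.2e-3             # ~1.2 mm per disc
moon_distance_m = 384_400_000       # average Earth-to-moon distance in meters

discs = data_bytes / cd_capacity_bytes
stack_height_m = discs * cd_thickness_m
print(f"{discs:.2e} discs; stack is ~{stack_height_m / moon_distance_m:.1f} times the distance to the moon")
# Prints roughly: 1.71e+12 discs; stack is ~5.4 times the distance to the moon
```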

  So, one use of the term “big data” refers to the massive amount of data making up our digital form of life. In a second sense, it can be used to talk about the analytic techniques used to extract useful information from that data. Over the last several decades, our analytic methods for information extraction have increased in sophistication along with the increasing size of the data sets we have to work with. And these techniques have been put to a mind-boggling assortment of uses, from Wall Street to science of all sorts. A simple example is the data “exhaust” you are leaving as you read these very words on your Kindle or iPad. How much of this book you read, and the digital notes you take on it, are commercially available information, extracted from the trail of data you leave behind as you access it in the cloud. Booksellers like Barnes and Noble and Amazon can use, and have used, this sort of information to further target the types of products they market.

  As a consequence of the increasing importance of data analytics, we might employ “big data” in a third sense—to refer to firms like Google or Amazon that utilize data analytics as an essential part of their business model, and government agencies like the NSA that use these techniques as an essential part of, well, their business model. In this third sense, Big Data is like Big Oil. Large oil conglomerates are powerful because they control not only how the world’s major energy resource is distributed but also how it is extracted. The tech giants are similar. Energy is not information, but both are resources, and resources by which the world runs. And Big Data, like Big Oil, is big precisely because it can control access to data as well as the extraction of information and knowledge from that data. Big Data refines data into information and knowledge, and we need to pay attention to that fact because knowledge, like energy, is not just a passive, inert resource. It is fuel: fuel for our ideas, our actions, everything. And the power that comes with control over that fuel is therefore formidable. Knowledge, as Sir Francis Bacon said, is power.

 
