The Internet of Us


by Michael P. Lynch


  The big numbers behind big data, and the power inherent in those numbers, are impressive. Not long ago, it was said we were living in a time of information “glut”; we were “flooded”; we were “overloaded.” While some still feel this way, for most of us, the sense of being overwhelmed by information is passing. Digital data is no longer drowning us. We are adapting to life under water; we are breathing it all in, becoming digitally human. Information is the atmosphere—what the philosopher Luciano Floridi calls the infosphere—of our lives.7 But the fact that we live in the infosphere, that it is becoming ordinary, doesn’t mean that we understand it, nor how it is changing us and what Ludwig Wittgenstein might have called our form of life. A form of life, as I mean it here, is the myriad practices of a culture that create their philosophies, but also, in Stanley Cavell’s words, their “routes of interest and feeling, sense of humor, and of significance, and of fulfillment, of what is outrageous, of what is similar to what else.”8 As I read him, Wittgenstein thought that once a set of practices is ingrained enough to become your form of life, it is difficult to substantively critique them or even to recognize them as what they are. That’s because our form of life is “what has to be accepted, the given.”9 We can no longer get outside of it.

  One way of describing the direction in which our own culture is moving is that many of us are starting to adopt what we might call a digital form of life—one which takes life in the infosphere for granted, precisely because the digital is so seamlessly integrated into our lives. The Internet of Things is becoming the Internet of Us, and figuratively, if not yet literally, we are becoming digital humans.

  What is amazing is not that this is happening, but how quickly it is happening, how quickly we are settling in and accepting our new ways of being. That is particularly true, I think, with regard to our new practices of knowing. If anything, recent years have seen a rushing tide of enthusiasm. We’ve been told that the Internet has been a force for undiluted knowledge expansion and democratization. Not only do we know more, but more people know. Our minds work faster, multitask more, and just plain get more stuff done.10

  William James famously said that once a current of thought like this starts to surge, there is little you can do. Trying to stop it is like planting a stick in a river, “round your obstacle flows the water and ‘gets there just the same.’ ”11 He’s got a point, but I endeavor to plant another stick in the river anyway—not because I am unhappy with my iPhone, or hostile to the growth of knowledge, but for a simpler reason. Acceptance without reflection is dangerous, and while our stick may not stop the flow, it can help us measure and assess its depth and direction. As the literary critic and writer Leon Wieseltier remarks, “every technology is used before it is completely understood.”12 There is a lag time, and when we are living in the midst of a lag, that is precisely when we need to pay attention: before the river becomes settled in its course, something we take for granted as part of the natural landscape.

  For another example, think about cars. The automobile remains an incredible invention. It increased autonomy and allowed for the distribution of goods and services into remote areas, and driving one can be a lot of fun. But needless to say, our unthinking commitment to the technology—our willingness from early on to let it swamp other technologies, to treat it as having more value than other means of transportation—has had seriously negative consequences as well. The devaluing (at least in the United States) of public transportation systems like trains and the rise of carbon emissions and pollution are just the more obvious examples. In the United States especially, it has been difficult for us, as a culture, to come to grips with these problems. The technology has become so embedded, so part of our form of life, that we have a hard time even noticing how dependent on it we really are.

  In the same way, paying attention to our digital form of life—seeing it for what it is, both good and bad—is easier said than done. Forms of life are complicated and filled with contradictions. That’s true of our emerging digital form of life too. We digital humans do have access to more information than ever before—whether or not we have neuromedia. But it is also true that in other respects we know less, that the walls of our digital life make real objective knowledge harder to come by, and that the Internet has promoted a more passive, more deferential way of knowing.13 Like our imaginary neuromedians, we are in danger of depending too much on one way of accessing the world and letting our other senses dull.

  Socrates on the Way to Larissa

  Data is not the same thing as information. As the founding father of information theory, Claude Shannon, put it back in the 1940s, data signals are noisy, and if you want to extract what’s meaningful from those signals, you have to filter out at least some of that noise. For Shannon the noise was literal. His groundbreaking work concerned how to extract discernible information from the signals sent across telephone lines.14 But the moral is entirely general: bits of code aren’t themselves information. Information is what we extract from those bits, the meaningful leftovers after we filter out the noise.
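  To give Shannon’s point a rough quantitative shape (a detail of information theory offered here only as an illustrative sketch, not something spelled out in the passage above), his celebrated capacity formula says how much information a noisy line can actually carry:

  C = B \log_2\left(1 + \frac{S}{N}\right)

  where C is the capacity of the channel in bits per second, B is its bandwidth, and S/N is the ratio of signal power to noise power. The more the noise swamps the signal, the fewer bits of genuine information get through, however much raw data is pushed down the line.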

  Yet not all information is good information; information alone still doesn’t amount to knowledge. So, what is knowledge?

  If you want to find out what anything really is—what knowledge is, in this case—a really good way to begin is to ask why anyone should give a damn about it. Plato himself asked this question in a famous dialogue from the fourth century BCE, where he imagines his teacher Socrates asking why it matters that someone should know, rather than merely guess, directions to Larissa. Today as then, Larissa is a busy cultural and urban center, nestled in the mountainous Greek region of Thessaly. Legend had it that Achilles founded the city, and Hippocrates, the famous physician, supposedly died there. It was also the birthplace of the Greek general Meno—a man now more famous for having the starring role in this particular dialogue than for any military victory.

  Near the end of Plato’s piece, Socrates asks Meno: Why does knowledge matter anyway? His questioning is pointed. In particular, he wants Meno to tell him why knowledge matters more than “true opinion.” After all, Socrates says, if I ask some passing stranger directions to Larissa, we’ll get there as long as he has a true opinion about the matter—even if it is a lucky guess. I won’t get there any faster by asking someone who really “knows” the answer—such as someone who has traveled there before. And that brings us to Socrates’ inquiry: why does knowledge seem to matter so much, since having accurate information can often get us to where we want to go? Meno fumbles about, and uncharacteristically, Socrates himself is quick to offer an answer in the form of a metaphor. Opinions without knowledge—even true ones—he says, are like the statues of Daedalus: so lifelike that they would get up and walk away if not tied to the ground. Knowledge, he seems to suggest, is true opinion that is tied down or grounded.

  Plato’s dialogue illustrates three simple points that are good to keep in mind when thinking about knowledge (what the Greeks sometimes called epistêmê, from which we get the word epistemology, or the study of knowledge). It is worth getting these points out in front.

  Knowing something is different from just having an opinion about it. Any old fool can opine, but few can know. We might put this another way by saying that mere information or data isn’t knowledge; information can be better or worse, accurate or inaccurate. When we want to know, we want the right or true information. But we also want something more.

  Having accurate information still isn’t enough to know either. Making a lucky guess isn’t the same as knowing. The lucky guesser doesn’t have any ground or justification for his opinion, and as a result, he is not a reliable source of information on that topic. Ask him again tomorrow and he might guess something else. That’s why his information is ultimately less valuable in most situations. When we want to know, we want more than guesses; we want some sort of basis for trust.

  What grounds our opinions or beliefs matters for action. The old intelligence services adage is that knowledge is actionable information. Actionable information is information you can work with—that, in short, you can trust. Guesses are not actionable—even if they are lucky, precisely because they are guesses. What’s actionable is what is justified, what has some ground.

  So: whatever else it is, knowing is having a correct belief (getting it right, having a true opinion) that is grounded or justified, and which can therefore guide our action. Call this the minimal definition of knowledge.

  The minimal definition of knowledge is helpful to a point. But like a lot of pithy definitions, it obscures as well as illuminates. In particular, it passes over the fact that how a belief is grounded comes in different forms. Suppose I ask you the best way to get to Larissa and you give me the correct answer, not because you guess but because you have some grounds for it. There are lots of different ways that could happen. For example, you might:

  Look at the map on your phone.

  Recall how you got there last year.

  Do both of these things but also explain why certain routes that look good on the map are actually slower because of localized road construction, etc.

  All three of these might allow you to know, but in different ways. They represent three different ways our opinions can be grounded, by being based on:

  Reliable sources.

  Experience or reasons that we possess.

  A grasp of the big picture.

  The first sort of knowing is the sort we do when we absorb information from expert textbooks or good Internet resources. The second is the sort of knowing we value whenever possessing reasons or experience matters. And the third is different still—it is the sort of knowing we expect of our most creative experts—even if those experts are more intuitive than discursive in their abilities. This is what I’ll call understanding.

  Understanding, as in our example, often incorporates the other ways of knowing, but goes farther. It is what people do when they are not only responsive to the evidence, they have creative insight into how that evidence hangs together, into the explanation of the facts, not just the facts themselves. Understanding is what we have when we know not only the “what” but the “why.”15 Understanding is what the scientist is after when trying to find out why Ebola outbreaks happen (not just predict how the disease spreads). It is what you are after when trying to understand why your friend is so often depressed (as opposed to knowing that she is).

  In real life, all the ways we have of knowing are important. But without understanding, something deeper is missing. And our digital form of life, while giving us more facts, is not particularly good at giving us more understanding. Most of us sense this. That is one reason we try to limit our children’s screen time and encourage them to play outside. Interaction with the world brings with it an understanding of how and why things happen physically that no online experience can give. And it is why so many of us who use Facebook are still troubled by its siren song: it is a simulacrum of intimacy, a simulacrum of mutual understanding, not the real thing. The pattern of what people like or don’t like tells us something about them—more, in fact, than they may wish. But it doesn’t tell us why they like what they like. It doesn’t allow us to understand them. Facebook knows, but doesn’t understand.

  As we’ll see in more detail later, understanding not only gets us the “why,” it brings with it the “which”—as in which question to ask. Those who know, do. But those who understand also ask the right question—and therefore can find out what to do next. Asking questions was Socrates’ special skill. It is perhaps for that reason that the Oracle of Delphi famously told Socrates that he was the wisest man in Athens. According to Plato, Socrates himself said that all he knew was that he didn’t know much. And maybe he didn’t. But one can’t read Plato without thinking that the Oracle was on to something. Socrates was a champion not of knowledge per se, but of understanding. That’s the skill we need to remember now. It may sound trite but it is true nonetheless: we need to rediscover our inner Socrates.

  Welcome to the Library

  In one of his most famous stories, the great Argentinian writer Jorge Luis Borges imagined what it would be like to live in a world composed of a single, almost infinite library, containing a virtually uncountable number of books, ranging from tomes of incomprehensible nonsense to treatises on everything from politics to particle physics. In one way, the library seems to make knowledge easy; all the truths of the world would be at your disposal. Of course, so would many falsehoods. And if you lived there, the library would be all you knew. There would be no escaping its walls to find an independent check—no way, except by appeal to the library itself, of knowing which books contain the truth and which do not.

  The story I want to tell is the story of how our culture is dealing with the fact that most of us are living in the library now—the Library of Babel, as Borges dubbed it in the short story by that name. It is a virtual library, and one that may indeed migrate right into our brains, should neuromedia ever come to pass. But whether or not that happens, the story I have to tell is an unapologetically philosophical one. As Borges knew more than most, our philosophies are part of what makes up our culture, our form of life, and we need to come to grips with them if we want to understand ourselves.

  Even in a story of ideas, central characters have a backstory. The backstory of our ideas about knowledge is that they’ve grown up shaped by some very ancient problems, problems that the surging changes in information technology are dragging to the surface of our cultural consciousness and casting in new forms.

  Compare, for example, neuromedia with an old philosophical chestnut—the thought experiment of the Brain in the Vat. It goes like this: How do you know that you aren’t simply a brain hooked up to a computer that is busily making it seem as if you have a body, and are reading this book (and thinking about brains in vats)? If so, the world is just an illusion, manufactured for your benefit, and almost everything you think you know is actually false. This is the philosophical position known as skepticism. The skeptic’s basic idea is that we can’t ever determine whether what seems to us to be the case really is true. If so, then either we punt on knowing what is true, or punt on truth itself.

  The Brain in the Vat is itself an updated version of Descartes’ seventeenth-century story of the evil demon who spends all his time deceiving us. The idea is that all of our experience might be misleading. Descartes thought that even the demon (or the lab-coated evil scientist running the vat) couldn’t fool us about everything: for he can’t trick you into thinking you don’t exist without your existing already. Nonetheless, it seems like he could trick you about almost everything else. If the illusion is so perfect, what further experience could prove to us that our experiences are illusory? Whatever experience it is (including Laurence Fishburne turning up in cool shades with blue and red pills in The Matrix) would just be more experience. It could be an illusion too. And that will hold no matter how hard we work at gathering data, how open-minded we are to new information, or how objective we are in considering the facts. If the world is but a dream, then so too is our best science. And if that is possible, then maybe we don’t really know most of what we think we do.

  Now think about our two stories—neuromedia and the Brain in the Vat—together for a minute. In some ways they are mirror images. The one puts the computer in your brain; the other puts your brain in the computer. The one appears to make knowledge easy, the other makes it impossible. But look closer and you’ll see that they are more alike than they appear at first glance. They raise some of the same underlying philosophical questions. For example, in our earlier discussion, we were assuming that much of the information you’d be able to access through neuromedia is true. But how would we know that exactly? By checking neuromedia? Of course, we might ask someone else. But if everyone—or at least the people nearby—are also hooked up to the same sources, then it is not clear what we would really know. In both cases, it seems, real knowledge—knowledge of what is the case as opposed to what we just happen to think is the case—is possible only by escaping the machine and getting to the world “outside.”

  Yet what if there is no getting “outside” the machine? What if even “brains in vats” aren’t real, and we are all just living a completely simulated life? That worry is closer to Descartes’ original nightmare. Its closest contemporary analogue is the thought that you and I are really just SIMs. A SIM is a “simulated person”—simulated by a computer program, for example. SIMs already exist. Popular Web-based games like Second Life, for example, have allowed people to create artificial “people” with SIM backgrounds, jobs, spouses, etc., for years. These programs even allow your SIM to continue to interact with other SIMs when you aren’t actively playing the game, pursuing its career, relationships and so on. And that, as the philosopher Nick Bostrom has recently suggested, raises the possibility that the universe in which we live is and always has been a simulation run by a computer program created for the amusement of super-beings with superior technology.16 If so, then we aren’t just wrong about whether, for example, we have arms and legs (as opposed to just being brains in vats). We are wrong about the nature of our universe itself: we might be living in a universe completely composed of information, whose underlying particles are really just the 1s and 0s of computer code.

 
