The Internet of Us


by Michael P. Lynch


  The connection between autonomy and privacy may sound surprising to some. After all, one could say: we are in fact willing to trade away our privacy as never before, precisely for the purpose of increasing autonomy. Our willingness—the thought goes—to trade privacy for security is just one example of this phenomenon. Another is our near total passivity when it comes to the trading of our data for profit by private corporations. We want more autonomy and they are providing it, by giving us convenience. Indeed, that’s precisely the business model of corporations like Facebook or Amazon—to maximize convenience and anticipate our needs. Thus, one might say, it is not surprising that we click past all the privacy policies on the Web because we want the choices, the convenience—the autonomy—that only the playground of the infosphere can bring. Privacy suffers, but autonomy increases.

  This argument, however, gets things the wrong way around. When we systematically collect private data about someone, we implicitly adopt what the philosopher Peter Strawson called the “objective” or detached attitude toward her.12 We see her as something to be manipulated or controlled—even if, in fact, we never get around to the actual manipulating or controlling. Where privacy is limited in the detention camp or prison, the adoption of this attitude toward the inmate is of course explicit. It is an intrinsic feature of the enterprise and it is intuitively felt as such by those detained. Crucially, however, it remains implicit in more subtle invasions of privacy. In some cases, this is unsurprising. When a business sells or otherwise profits from your private information—your Web searches, for example, or email address—it intentionally treats you as an object: an object of profit. Indeed, the nominal idea behind the privacy policies none of us read is to inform us of how our information will be used. They are a nod to our status as autonomous beings.

  In truth, however, the Internet of Us is making privacy policies moot. When almost every object we interact with is wired, it becomes useless to assume that we consent to the mining of the data trail attached to our use of that object. That’s because we simply have no way of being able to anticipate how the data being extracted from our refrigerators, for example, might be used in the future—by a company or by a government. Once the data is out there, it is out there. Any illusion we might have had about controlling or owning it gradually disappears. As Sue Halpern, an astute observer of the digital age, remarks: “The Internet of Things creates the perfect conditions to bolster and expand the surveillance state. In the world of the Internet of Things, your car, your heating system, your refrigerator, your fitness apps, your credit card, your television set, your window shades, your scale, your medications, your camera, your heart rate monitor, your electric toothbrush, and your washing machine—to say nothing of your phone—generate a continuous stream of data that resides largely out of reach of the individual but not of those willing to pay for it or in other ways commandeer it.”13

  Earlier I noted there are two marks of information privacy: control and protection. Control over our information may be increasingly under threat from the Internet of Things. But that only makes concentrating on restricting and regulating information flow all the more important. The Internet of Things is enlarging the pool of data and information available for future use; that’s why we need more fencing. We need the fences of regulation not only because they help prevent abuses, but because the pool itself threatens our autonomy.

  There is another point here as well. Surveillance treats us as means, not as ends. And that is another reason the incidental collection of our data should worry us. A government that sees its citizens’ private information as subject to tracking and collection has implicitly adopted a stance toward those citizens inconsistent with the respect due to their inherent dignity as autonomous individuals. It has begun to see them not as persons but as objects to be understood and controlled. That attitude is inconsistent with the demands of democracy itself.

  Transparency and Power

  Invasions of privacy aren’t always wrong. If they were, we wouldn’t have to spend so much time talking about the issue. My point is that they are always pro tanto wrong, as the legal scholars say. They are wrong—but wrong other things being equal.

  Invasions of privacy can therefore be justified in the overall context. Searches of people’s homes are judged “warranted” (that is, justified) for all sorts of reasons by the courts, as are surveillance operations of criminal suspects. Or consider the case of metal detectors and full body scanners at airports. The latter were (and still are) controversial on privacy grounds; moreover, more than one person argued that the scanner violated their dignity. But while scans like this can make you uncomfortable, this sort of directed, publicly known invasion of one’s privacy is not equivalent to the systematic program of incidental collection and meta-analysis of phone call data practiced by the NSA. That’s because full body scans are given to commercial airplane passengers for a very specific reason: to detect whether they have a concealed weapon or explosives. This reason is well understood—or should be—by those given the scans. It is, in fact, a classic case of trading privacy for more security. It is a trade that may be justified, all things considered. Airport body scans are not stored indefinitely and open to the scrutiny of security agencies. They are made, examined, and eliminated. And they aren’t being done secretly either. A better analogy would be this: secret scanners are set up so scans are taken of every person in his or her home. No one is told about the scans. They are stored indefinitely, and a wide range of agencies can examine them without a warrant. Still think that would be justified?

  The possible negative consequences of losses of privacy in the digital age suggest that we must prepare for the worst even as we hope for the best. Think again of the swimming pool example. We need fences around our digital pools of information too. That’s why, for example, some of the steps recently proposed by the Obama administration—to strengthen the FISA court’s powers, and to limit some of the NSA’s surveillance programs—are at least steps in the right direction.14

  No one denies that governments naturally diminish our autonomy in all sorts of ways. Just participating in a government, as Hobbes stressed, is a trade-off. But the point I’ve been making in this chapter is that there is something different in the case of systematic, unknown invasions of privacy. By invading our privacy without our knowledge, governments are making invisible decisions for the citizenry as a whole. That’s not the same as restricting autonomy by asking people to go through a scanner at the airport. That’s power visible to all, applied to all. Nor is it like wiretapping a particular citizen who the courts have decided is a potential danger. Rather, these systematic, unknown invasions of privacy treat the citizenry as a whole in an unhealthy way. We are being regarded as unworthy of making up our own minds, whether we know it or not. That is an attitude that is corrosive of democracy, one made all the more corrosive by not being visible.

  These reflections also give the lie to the idea that privacy of information is a modern creation. It is not. The source of privacy’s value is deeper, lying at the intersection of autonomy and personhood itself. That is why privacy still matters. We are wise not to forget that, even as we trade it away.

  Knowledge may be transparent, but power rarely is.

  6

  Who Does Know: Crowds, Clouds and Networks

  Dead Metaphors

  Truths, Nietzsche once wrote, are worn-out metaphors, “coins that have lost their pictures, and now only matter as metal, not as coins.”1 The word “network” has lost its luster in just this way: we now just accept it as a literal description of the facts. Our economy is a network; our social relations are networked; our brains are composed of neural networks; and of course, the Internet, the World Wide Web, is a network. Thus, we might wonder whether knowledge is too. This idea has become reasonably common in tech circles. Some believe it is a game-changer. Again, David Weinberger is at the forefront: “In a networked world, knowledge lives not in books or in heads but in the network itself.”2 Indeed, in Weinberger’s view, the information age is basically over. We live in the networked age, where information doesn’t come in discrete packets but in structured wholes.

  Let’s start unpacking that notion by looking at the idea of a network itself. Think of the ways in which one can describe—or map—a transportation system, such as a subway. One way is to simply superimpose the path of the train tracks onto an existing street map. That works fine, as long as the street map is not too detailed itself, and as long as there aren’t too many underground tubes and tracks. If, for example, there is just one track, with two stops, then passengers only need to know where these stops are in order to orient themselves. But what if there are dozens of stops, and the lines crisscross and don’t follow the paths of the streets overhead? That was the problem that Harry Beck, an employee of the London Underground, aimed to solve in 1931 by developing a new Tube map—one which, with additions, is still familiar to riders today. What was different about Beck’s map was that he ignored the geography of the city and concentrated solely on showing, without reference to scale, the sequence of stations and the intersection of the Underground lines.

  By doing so, Beck was able to bring to the fore the information that Tube riders really wanted most: how many stops are in between the present stop and the one you want to get to, and where the lines interconnect. By knowing these two facts, you can deduce how to get from A to B.

  As the information theorists Guido Caldarelli and Michele Catanzaro note, Beck’s map is like a graph. As such, it displays a basic feature of a network: “in networks, topology is more important than metrics. That is, what is connected to what is more important than how far apart two things are: in other words, the physical geography is less important than the ‘netography’ of the graph.”3 The reason why, in this case, is pretty clear. The netography or topology of the Underground matters to us because what we are interested in is how information is distributed in that system—or, more bluntly, in how we riders are distributed along the lines of the Underground tracks. What Beck’s map shows is that thinking of something as a network is useful when what matters is a complex pattern of distribution between points rather than the points (the “nodes”) themselves. This is part of the reason it makes sense to say that knowledge is becoming more and more networked. The infosphere has made it possible to distribute information so efficiently, and so quickly, that these facts about the distribution become important in themselves.
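
  To see how little the geometry matters, here is a minimal sketch in Python (my illustration, not anything from Beck or from Caldarelli and Catanzaro; the station names and links are invented). The graph records only what is connected to what, with no coordinates or distances anywhere, yet it answers the rider’s question of how many stops lie between here and there.

      from collections import deque

      # A toy "Tube" as an adjacency list: only connectivity is recorded.
      # No coordinates, distances or scale appear, which is the point:
      # topology, not metrics.
      tube = {
          "Acton": ["Baker"],
          "Baker": ["Acton", "Circus", "Dome"],
          "Circus": ["Baker", "Euston"],
          "Dome": ["Baker", "Euston"],
          "Euston": ["Circus", "Dome"],
      }

      def stops_between(graph, start, goal):
          # Breadth-first search counts the fewest stops from start to goal.
          seen, queue = {start}, deque([(start, 0)])
          while queue:
              station, hops = queue.popleft()
              if station == goal:
                  return hops
              for nxt in graph[station]:
                  if nxt not in seen:
                      seen.add(nxt)
                      queue.append((nxt, hops + 1))
          return None  # the stations are not connected

      print(stops_between(tube, "Acton", "Euston"))  # 3

  Redraw those stations anywhere on the page and the answer does not change; only the pattern of connections does the work.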

  But really, we are more networked than that. We are increasingly composing a knowledge network—or is it composing us?

  Knowledge Ain’t Just in (Your) Head

  Let’s go back to neuromedia. What would happen if it became available to the general population? The nature of communication would change, certainly. But that’s not all. The boundaries between ourselves and others would, in certain key respects, change as well—especially with regard to how we come to know about the world.

  Suppose everyone in a particular community has access to this technology. They can query Google and its riches “internally”; they can comment on one another’s blog posts using “internal” commands. In short, they can share knowledge—they can review one another’s testimony—in a purely internal fashion. This would have, to put it lightly, an explosive effect on each individual’s “body of knowledge.” That’s because whatever I “post” mentally would then be mentally and almost instantly accessible by you (in a way that would be, we might imagine, similar to accessing memory). We’d share a body of knowledge by virtue of being part of a network. But that is not the most drastic fallout of neuromedia. The more radical thought is that we are sharing the very cognitive processes that allow us to form our opinions. And to the extent that those processes are trustworthy and accurate, we can say we are sharing ways of knowing.

  The traditional view has always been that humans know via processes such as vision, hearing, memory and so on. These ways of getting information are internal; they are in the head, so to speak. But if you had neuromedia, the division between ways of forming beliefs that are internal and ways that are not would no longer be clear. The process by which you access posts on a webpage would be as internal as access to your own memory. So, plausibly, if you come to know, or even justifiably believe, something based on information you’ve downloaded via neuromedia, that’s not just a matter of what is happening in your own head. It will depend on whether the source you are downloading from is reliable—and that source will include the neural networks and cognitive processes of other people. In short, were we to have neuromedia, the difference between relying on yourself for knowledge and relying on others for knowledge would be a difference that would make less of a difference.

  Andy Clark and David Chalmers’ “extended mind” hypothesis suggests that, in fact, our minds are already extended past the boundaries of our skin.4 When we remember what we are looking for in a store by consulting a shopping list on our phone, they argue, our mental state of remembering to buy bread is spread out; part of that state is neural, and part of it is digital. The phone’s notes app is part of the remembering. If Clark and Chalmers are right, then neuromedia doesn’t extend the mind any more than it already is extended. We already share minds when I consult your memory and you consult mine.

  The extended mind hypothesis is undoubtedly interesting, and it may just be true. But we don’t actually have to go so far to think knowledge is extended. Even if we don’t literally share minds (now, at least), we do share the processes that ground or justify what our individual minds believe and think. As philosopher Sandy Goldberg has pointed out, when I come to believe something based on information you’ve given me, whether or not I’m justified in that belief doesn’t depend just on what is going on in my brain. Part of what justifies my belief is whether you, the teacher, are a reliable source. What justifies my receptive beliefs on the relevant topic—what grounds them—is the reliability of a process that includes the teacher’s expertise. So whether I know something in the receptive sense can already depend as much on what is going on with the teacher as with the student.5

  Goldberg’s hypothesis seems particularly apt when we form beliefs receptively via digital sources—which, as I said, can be understood as knowing via testimony. In relying on TripAdvisor, or Google Maps, or Reddit, I form beliefs by a process that is essentially socially embedded—a process the elements of which include not just chips and bits but aspects of other people’s minds, social norms and my own cognition and visual cortex. How I know is already entangled with how you know.

  The Knowing Crowd

  So far then, we’ve seen that knowledge has become increasingly networked in at least two discernible ways: Google-knowing is the result of a network. And our cognitive processes are increasingly entangled with those of other people.

  This raises an obvious question. Is it possible that the smartest guy in the room is the room? That is, can networks themselves know?

  There are a few different ways to approach this question. One way has to do with what those in the AI (artificial intelligence) biz call “the singularity”—a term usually credited to the mathematician John von Neumann. The basic idea is that at some point machines—particularly computer networks—will become intelligent enough to become self-aware, and powerful enough to take control.

  The possibility of the singularity raises a host of interesting philosophical questions, but I want to focus on one issue that is already with us. As we’ve discussed, there are reasons to think that we digital humans are, in a very real sense, components of a network already. So, could networked groups literally know things over and above what their individual members know? And if groups know things as a collective—in any sense of “know”—then they have to be able to have their own true, justified beliefs. Is that possible?

  Some philosophers have argued that it is, and cite the fact that groups can pass judgments even when no individual in the group agrees with the judgment. For example, imagine a group of interviewers trying to choose the best person for the job. Suppose they interview four candidates and each of the interviewers ranks the candidates in order (with one being highest). It might turn out that nobody ranks candidate B as number one but that B still comes out as the candidate with the best cumulative ranking (if, for example, everyone ranks B second while splitting their first-place votes among the other candidates). If so, then the group “believes” that B is the best candidate for the job even though no individual in the group has ranked that candidate number one.
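
  The arithmetic is easy to check. Here is a quick sketch in Python (the interviewers, candidates and rankings are invented for illustration; rank 1 is best, so the lowest rank-sum wins):

      # Three interviewers each rank four candidates, best first.
      rankings = [
          ["A", "B", "C", "D"],
          ["C", "B", "D", "A"],
          ["D", "B", "A", "C"],
      ]

      totals = {}
      for order in rankings:
          for position, candidate in enumerate(order, start=1):
              totals[candidate] = totals.get(candidate, 0) + position

      print(totals)                       # {'A': 8, 'B': 6, 'C': 8, 'D': 8}
      print(min(totals, key=totals.get))  # B

  B tops the group’s cumulative ranking even though B is nobody’s first choice: the group’s “belief” is not any member’s.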

  The eminent philosopher of sociology Margaret Gilbert has argued that, if they exist, real group beliefs are the product of what she calls “joint commitments.”6 A joint commitment is the result of two or more people expressing a readiness to do something together as a unit—like dancing a waltz, performing a play, starting a business, or interviewing a job applicant. You don’t, Gilbert emphasizes, always have to engage in a joint commitment deliberately. Often we express our willingness to act together only implicitly, as I might if I just held out my hand to you and gestured toward the dance floor. But however individuals express their readiness to jointly commit, their expression must be common knowledge to all; it must be something that is so taken for granted that everyone knows and everyone knows that everyone knows. In Gilbert’s view, when these conditions are in place and a group has a joint commitment of this sort, it makes sense to think of groups as having a belief just as individuals have beliefs.
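
  Gilbert’s common-knowledge condition has a standard formalization in epistemic logic (a sketch in the usual notation, not Gilbert’s own). Writing E p for “everyone in the group knows p,” common knowledge of p is the infinite ladder

      C p  ⟺  E p ∧ E(E p) ∧ E(E(E p)) ∧ …

  that is, p is known to all, known to all to be known to all, and so on, without end.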

 
