The Internet of Us


by Michael P. Lynch


  A more up-to-date but similar metaphor might be the “wiki.” A wiki is a platform on which numerous people can participate in shaping a document or webpage. There is no single editor with a single “foundational” vision of how the work should turn out: changes are often made piecemeal, dropped in, replaced, reedited and so on. No single bit of information is immune to change. Or we might think of a fabric: Descartes understood knowledge to be secured by the strength of its foundations, but the fabric metaphor sees it being secured by the strength of its connections. What we know—if we are lucky enough to know—is woven together and constructed from many interlocking strands. Each of the strands supports the rest—some directly, some more remotely. In most weaves, no single thread or set of threads supports all of the others on its own. The support they provide is, we might say, holistic, not linear. The same with spiderwebs. Or World Wide Webs.

  The point of all these metaphors is the same. Webs, fabrics, interlocking planks of a raft and wikis are all networks, but they are not networks with foundational nodes; the nodes are where the individual lines and threads cross. And that, of course, is the point. Our beliefs are nodes in a network, supported by the overall coherence of the fabric of beliefs to which they belong.

  The “coherentist” picture of reasons does seem like a better description of how we justify our beliefs to one another in the Internet age. Nowadays, when we want to know whether something is true, we look it up on the Web. Practically speaking, that means checking to see how the relevant proposition hangs together with other things we think we know. Suppose I wanted to know the average size of sea turtles. I google it and find several pages that give me an answer. I pick Wikipedia. I then want to check whether Wikipedia is an accurate source of information on sea turtles. So I google that—and find that Wikipedia has an extensive page on whether Wikipedia is reliable (which is, in fact, the case—check it out). I may indeed find that my original belief about the average size of a sea turtle is justified: it is confirmed by a page whose reliability is itself confirmed. The whole pattern or structure of reasons here takes the form of an interlocking network.

  As we saw in the first part of this book, however, knowledge comes in more than one form. That’s crucial to remember right here, for the simple reason that only using Wikipedia to check on Wikipedia is circular. I’ve never really left the network of information I was consulting. In many cases, that is fine. If the circle of reasons is wide enough, we may not need to worry. But generally speaking, being trapped in a circle of reasons is—or should be—a disquieting fact. For it leaves open the possibility that our networks of reasons are just massive and mutually reinforcing fantasies. If our networks of reasons are really going to be justified, if they are really going to get us knowledge, then at some point they need to be anchored to something else, something beyond themselves. That is why it is a mistake—and I think here Weinberger and I might agree—to think that facts, and justifying our beliefs in light of them, are no longer important to the pursuit of knowledge. Giving up on the Cartesian dream of certain, immutable foundations doesn’t mean that we should give up on anchoring our beliefs altogether.

  How then are they anchored? In two ways. First, by the objective world itself—by what is true and what isn’t. That’s why, as I urged earlier, we don’t want to give up on the idea of truth. Second, reason-giving isn’t all there is to knowing. We can also know by being receptive to the facts outside of ourselves, by having what the contemporary thinker Ernest Sosa calls “animal” knowledge or what Descartes before him called cognitio.15 That’s a good thing to remember in this context. My network of reasons isn’t just floating at sea. Some of the beliefs for which I have reasons are also ones that I know receptively, by responding to the environment in which I live with the senses I have. Others I may know receptively without being able to defend them with reasons.

  Humans can get this kind of anchoring knowledge by getting up off the couch and plunging into the whirlpool of actual experience. It is still the best way, in my view—although not the only way—for us digital humans. To escape your circle of justification, do what you do with any circle: step outside its borders and breathe in the environment on the outside.

  Of course, for our receptive beliefs to be actually anchored to reality, reality must cooperate. And sadly, it often does not. That is why we are always forced back to look for reasons, to standards of reasonableness. We need assurance that the anchor is truly set. That gets us back to our network of reasons; we are back in the circle. As the philosopher Duncan Pritchard has noted, it is probably our lot as knowers to always be in some state of angst about our knowledge.16 There is no getting around the fact that in order to know receptively, we have to be lucky; we either track the facts around us or we don’t and are fooled again. The anchor sets or it doesn’t.

  I’ve argued in this chapter that the zeitgeist takes us to be networked knowledge machines. That’s one of the lessons to draw from reflecting on the neuromedia thought experiment. As I’ve pointed out, knowledge and the process of justification are growing more networked in several ways: in their structure, in their source and, most radically, in the fact that our own cognitive capacities themselves are networked. In and of itself, this increasingly networked nature of knowledge isn’t good or bad. It is just what is happening. What can be good or bad is how we react to this fact. As I’ve been urging, what we don’t want to do is assume that because knowledge is networked, the nodes in the network—the individual knowers—no longer matter.

  7

  Who Gets to Know: The Political Economy of Knowledge

  Knowledge Democratized?

  The Internet of Things and the networked knower are changing not only how we know but also the politics of knowledge. And like all politics, the politics of knowledge is about power. In this case, it is the power over who gets to count as a knower and what gets to count as known. As Larry Sanger, philosopher and cofounder of Wikipedia, says, this is an awesome sort of power, because “it can shape legislative agendas, steer the passions of crowds, educate whole generations, direct reading habits and tar as radical or nutty whole groups of people that otherwise might seem perfectly normal.”1

  For much of Western history, it was the Church that determined what passed for knowledge. The means for exercising this power largely consisted in its ability to control who could read and what was written down—the Church both ran the universities and controlled the copying (by hand) of texts. Of course, after the print revolution, that began to change. The printing press allowed more people the opportunity not only to write down but also to mass-produce and distribute their own thoughts. Thus, what counted as knowledge became more diffuse, but also more accessible. Before long, however, power began to shift toward those who controlled the presses and means of distribution—and state imposition of copyright laws and censorship quickly became more prevalent and important. Since the eighteenth century, liberal societies have slowly (and not without much backsliding) made efforts to curtail state censorship and to allow ideas to spread more freely. Of course they too have had their own gatekeepers, even if their gates were more permeable: libraries, universities, publishers, the media. Yet as anyone who has been paying attention to these trends knows, those gates too have been coming down.

  The Internet, it is often said, is democratizing knowledge. This is perhaps the single most heralded upside of the changes in informational technology we’ve been experiencing for the last two decades. But what does it mean to “democratize” knowledge—and how might current technologies contribute to that process?

  First, and most obviously, the Internet, like the printing press before it, has made bodies of knowledge more widely available. The possibility of mass-produced books lowered the price at which knowledge could be bought and sold. As such, it brought such knowledge—and the possibility of literacy—to millions of people who had previously lacked access to it. Web 2.0 has greatly expanded this process while also changing both the sheer amount of different kinds of information available and the speed at which that information can be accessed.

  A good example concerns this very topic. Try googling “How many people have access to the Internet,” and sources such as Wikipedia and the International Telecommunication Union in Geneva will tell you that while roughly 94 percent of Swedes have Internet access, and 84 percent of Americans, only 2.1 percent of the population of Chad does. Nonetheless, the very availability of these statistics is a great example of the sort of information that just a few years ago you’d have had to go to a large research university to find or rely on journalists to report. While billions of people continue to have no access to it, millions have immediate access to the sort of information they wouldn’t have had just a decade ago. In short: while it is far from ubiquitous, “more information to more people” is one obvious way that the Internet is making knowledge—or its acquisition—“more democratic.”

  The Internet is also democratizing knowledge by making its production more inclusive. One common example here is open source software like Mozilla’s Firefox Web browser. When security vulnerabilities or bugs arise in Firefox software, a diverse and widespread community of volunteers works on fixes and plug-ins. Open source software operates similarly to an online co-op. It is software by the people, for the people.

  Epistemic inclusivity is also a by-product of the growing number of open-access research-sharing sites such as Academia.edu. Founded in 2008, Academia.edu gives its millions of users (I’m one) a platform on which to share and comment on one another’s research. It allows researchers to pass their work directly to those who might be interested in it, or benefit from it.

  Inclusivity of a different sort can come about through what Wired’s Jeff Howe dubbed “crowdsourcing” in 2006. Crowdsourcing is not simply any activity that uses the World Wide Web as a platform for people to network about problems—like Intrade or rankings on Amazon. As computer scientist Daren Brabham defines it, crowdsourcing is an online problem-solving and production model that “leverages the collective intelligence of online communities to serve specific organizational goals.”2 In other words, it is the top-down organized use of the Internet hive-mind. An organization throws out a problem; those who want to (or those granted access to the relevant site or network) contribute solutions; and the organization sees what sticks.
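  To make the shape of that model concrete, here is a minimal sketch in Python of the top-down pattern just described: an organization posts a challenge, an open crowd submits candidate solutions, and the organization decides what “sticks.” The names here (Challenge, Submission, the scoring function) are illustrative placeholders of my own, not the interface of any real crowdsourcing platform.

from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Submission:
    solver: str
    solution: str


@dataclass
class Challenge:
    description: str
    prize: float
    submissions: List[Submission] = field(default_factory=list)

    def submit(self, solver: str, solution: str) -> None:
        # Anyone in the crowd may contribute a candidate solution.
        self.submissions.append(Submission(solver, solution))

    def select_winner(self, score: Callable[[Submission], float]) -> Optional[Submission]:
        # The organization, not the crowd, defines the goal and picks what "sticks."
        if not self.submissions:
            return None
        return max(self.submissions, key=score)


# Hypothetical usage: the organization frames the problem and the selection criterion.
challenge = Challenge("Early-detection marker for inflammatory bowel disease", prize=50000.0)
challenge.submit("solver_a", "proposal based on a blood biomarker panel")
challenge.submit("solver_b", "proposal based on low-cost imaging")
winner = challenge.select_winner(score=lambda s: len(s.solution))  # toy scoring criterion
print(winner.solver if winner else "no submissions yet")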

  The popular incentive-based innovation platform InnoCentive is often cited as an example of the inclusivity of crowdsourcing. (InnoCentive is ancient in Web 2.0 terms: it was founded in 2002.) Here’s how it works: nonprofits and businesses post prize competitions for solutions to challenges. These can run the gamut—from retail product positioning to early detection mechanisms for inflammatory bowel disease. The prizes themselves vary in size, with some approaching a million dollars but many being significantly smaller. InnoCentive is only one example of how crowdsourcing can work, of course. Other famous examples include Amazon’s Mechanical Turk, which allows companies (and scientific researchers) to outsource to a huge network of “Turkers” tasks that humans are still better at than computers, such as image identification and translation. Still another is Threadless, an organization that assigns a crowd of T-shirt designers the job of selecting (and creating) new T-shirt designs.

  Challenge-specific prizes, like those used by InnoCentive, have been useful sources of innovation in scientific research for centuries. The British Crown spurred a huge leap forward in marine navigation in the eighteenth century, for example, by offering a prize for a device that could calculate a ship’s longitude—resulting in the invention of the marine chronometer. Competitions like this work partly because they provide an incentive for “fresh eyes” on the problem. Indeed, researchers Lars Bo Jeppesen and Karim Lakhani, in their 2010 study of InnoCentive, suggested that there is an inverse relationship between a solver’s likelihood of solving a problem and his or her degree of expertise in the field in question.3 As Brabham writes, this means that, for example, “a biologist may fare better than a chemist would at solving a chemical engineering problem.”4 The same study also found that women significantly outperformed men as problem solvers on the site—despite, and possibly because of, the fact that they are often on the edges of the “scientific establishment.” That is inclusivity of a very obvious sort.

  A third way that the Internet has democratized knowledge is by making what is known more transparent—particularly with regard to information held by governments. The most obvious and controversial example of this is WikiLeaks, a nonprofit organization that publishes news leaks and classified governmental information online. Its disclosure of videos and documents related to the Iraq and Afghanistan wars in 2010 and 2011 caused a worldwide uproar. Supporters defended it as a tool for exposing facts that citizens need in order to make informed democratic decisions. Critics denounced the organization as putting the lives of soldiers and diplomats at risk. Of course, both of these claims can be true—and whether WikiLeaks is ultimately beneficial or harmful, it is just the most visible example of the use of the Internet to enforce or encourage transparency. From revelations of NSA spying to videos of police mistreatment, the Internet can be used to shine a light on all sorts of activities, arguably empowering citizens.

  So there is no doubt that the Internet has changed how we distribute, produce and reveal knowledge, and in many ways for the better. But using the language of “democratization” to describe these changes obscures as much as it describes. It ignores the fact that these changes aren’t necessarily leading to more democratic ways of organizing our information society.

  Epistemic Equality

  Changes in how knowledge is distributed, produced and revealed indirectly affect the politics of knowledge—who gets to count as a knower and why—because they directly influence the economy of knowledge. By the economy of knowledge I mean, roughly, the structure of relations that divides epistemic labor and governs its exchange. The simple fact is that not all changes in the economy of knowledge—even those that can be legitimately described as “democratizing”—are leading societies to become more democratic.

  Begin with a familiar economic bedtime story. Once upon a time, if you wanted a new chair, you either made it yourself or you went to a specialized craftsman. That craftsman would make the chair, but in turn received his materials and tools from still other craftsmen, and they in turn from others. In this way their labor—and expertise—was divided. No one person was responsible for the chair. A group—or a network of individuals, all of whom had some specialized knowledge—created it. Then (according to the story) one day the Industrial Revolution came, and the expansion of capital, and the invention of large machines to mass-produce chairs, allowed factories to create chairs without having to employ legions of expert furniture-makers. This lowered the price, despite the fact that the chair you bought from the factory was also produced by a network—indeed, an even larger network—with more specialized nodes. Some nodes were responsible for raw goods, as before, some for transportation, some for building special machine parts, and some for operating the individual machines that jointly produced the parts that another node would assemble into the chair. Then, one day (again according to the story), the global economy was born. And the network responsible for the chair you bought at Walmart got even bigger. It stretched across the globe, with the factory making the chair—or the iPhone on which you shopped for it—now in China, where the workers would be so specialized that some were hired simply for the size of their hands, and could be paid so little that the price of the products back home in the West could be cheaper than ever, despite the distance each product had to travel in order to find its way to your home.

  Just as no one person can build everything, no one person can know everything. As a result, societies have always divided not only manual labor but also intellectual labor among highly skilled laborers, and they have expended significant public resources on training and rewarding those with such skills. That’s precisely the way we still organize the medical, legal and scientific professions, for example. There is a network of knowledge, but it is a network in which the individual nodes are sources of expertise, and hence have a better chance of passing on good information and weeding out the bad.

  The increasingly networked nature of knowledge challenges this model of the economy of knowledge. Indeed, the bestselling economist Jeremy Rifkin thinks that we are seeing the emergence of a new world order and the death of capitalism. We are instead, Rifkin suggests, seeing the rise of the Collaborative Commons:

  The IoT [the Internet of Things] enables billions of people to engage in peer-to-peer social networks and cocreate the many new economic opportunities and practices that constitute life on the emerging Collaborative Commons. That platform turns everyone into a prosumer and every activity into a collaboration . . . allowing social capital to flourish on an unprecedented scale, making a shared economy possible.5

  As a result, Rifkin argues, the Internet of Things and the networked nature of our digital form of life are moving us toward “the zero marginal cost society,” a society in which producing and distributing an additional unit of a good or service costs next to nothing. In turn, that challenges the central capitalist tenet that increased human productivity requires increased human labor. “The traditional dream of rags to riches is being supplanted by a new dream of sustainable quality of life”—a life where we can spend more time engaged in pursuits that interest us, such as making music, cooking better food and thinking about philosophy.6

 
