The Internet of Us

by Michael P. Lynch


  Gilbert’s hypothesis explains why we do sometimes hold groups responsible over and above their members. As I write this, the corporation British Petroleum has just received a billion-dollar-plus fine for its role in the Deepwater Horizon oil spill in the Gulf of Mexico. Corporations, while they might be treated as “legal” people, are actually groups of people jointly committed to a common end of profiting from a particular enterprise or enterprises. When we hold groups who are jointly committed in this way responsible, we are holding the group responsible, not the individuals within it. And it does seem as if we hold groups responsible not just for their actions but for their views—for example, if our job interviewers were, as a group, to believe that a man was the best applicant for the job even though a woman with far better credentials had applied. In such a case, we might hold that the belief—no matter how sincere—was unreasonable.

  In some cases, Gilbert’s view of joint commitments may also explain some group commitments made by digital humans. The digital “groups” that we form are often bolstered by a joint commitment to something, whether it be a political ideology, a hobby, a sport, or the practice and theory of hate. Such groups often do have the sort of common knowledge that joint commitment requires. But it is less clear whether people participating in Internet chat rooms, or posting on a comment thread on a popular blog, are really intending to “do something together.” Sometimes that may indeed be the case—Wikipedia is a good example of a network where posters are committed to a joint enterprise—but often the opposite is true. From the standpoint of Gilbert’s theory, Internet groups and networks may not have any group knowledge at all.7

  Yet even if social networks don’t literally know as individuals do—a view that Weinberger himself shies away from—there is still another way of thinking about the question of whether networks know. Groups can certainly generate knowledge, in the sense that aggregating individual opinions can give us information, possibly accurate, reliable information, that no one individual could provide. Consider that ubiquitous feature of your online life: the ranking. There was a day when the only way to get information on whether a movie, restaurant or book was to your taste was to consult a professional review. Now we also have the star system. Instead of one review, we can get dozens, hundreds or even thousands. And in addition to the “qualitative” comments, we get an overall ranking, the average of the individual rankings assigned to the product. Useful? Certainly. And most of us know some simple facts about such systems as well. To name the most obvious: the more rankings there are, the more reliable we tend to take the average score to be (1,000 rankings with an average of 4 stars is far more impressive than three rankings of 4.5 stars). Of course, we also know that the fact that many people like something doesn’t mean we’ll like it too.
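  Why should a thousand four-star ratings outweigh three ratings of four and a half? Elementary statistics makes the intuition precise: the uncertainty in an average shrinks with the square root of the number of ratings. Here is a minimal sketch in Python, under the simplifying (and unrealistic) assumption that ratings behave like independent samples with a spread of about one star; the function name and all numbers are invented for illustration:

```python
import math

def interval_for_mean(mean, stdev, n, z=1.96):
    """Rough 95% confidence interval for an average rating, treating
    the n individual ratings as independent samples (a simplification;
    for very small n this interval is cruder still)."""
    half_width = z * stdev / math.sqrt(n)
    return (round(mean - half_width, 2), round(mean + half_width, 2))

# Invented numbers: the same spread of opinion (about one star) in both cases.
print(interval_for_mean(4.0, 1.0, 1000))  # ~(3.94, 4.06): the average is trustworthy
print(interval_for_mean(4.5, 1.0, 3))     # ~(3.37, 5.63): tells us almost nothing
```

  The three-review restaurant may really be better, but its average carries almost no evidential weight; the thousand-review average pins the score down to within a tenth of a star.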

  The fact that we so often trust such rankings—at least, in the right conditions—indicates that we already tend to abide by the main lesson of James Surowiecki’s 2004 landmark book The Wisdom of Crowds. Surowiecki’s point was that in certain conditions, the aggregated answers of large groups could be wiser—could display more knowledge—than those of an individual, even an individual expert. Surowiecki’s most famous example comes from the work of Francis Galton, a British scientist. Galton examined a competition in which 787 contestants at a country fair estimated the weight of an ox. The average of all guesses was 1,197 pounds. The ox weighed 1,198 pounds.8
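  The ox story is easy to reproduce in simulation: if hundreds of guesses scatter independently around the true value, their errors largely cancel in the average. A toy sketch, where only the true weight and the number of guessers come from the example above; the 75-pound spread and the random seed are invented:

```python
import random

random.seed(0)
TRUE_WEIGHT = 1198   # pounds, from Galton's example
N_GUESSERS = 787

# Each fairgoer errs independently around the true weight.
guesses = [random.gauss(TRUE_WEIGHT, 75) for _ in range(N_GUESSERS)]

crowd_average = sum(guesses) / len(guesses)
typical_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(f"crowd average: {crowd_average:.0f} lbs")      # lands within a few pounds
print(f"typical guess is {typical_error:.0f} lbs off")  # around sixty pounds
```

  The crowd’s average lands within a few pounds of the truth even though the typical individual is off by sixty. The catch, as the next sections show, is hidden in the word “independently.”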

  Another of the most famous results in social science helps explain both the lessons and the limits we can draw from examples like the weight of the ox. Suppose a group of people vote on a yes-or-no question, where only one of the answers can be right. Suppose too that the probability that any one person gets the answer right is over 50%. According to what is called the Condorcet Jury Theorem, the larger the group, the higher the probability that a majority of the group gives the right answer; as the group grows, that probability approaches 100%. The basic math is intuitive: as the probability that an individual answers correctly rises, so does the probability that the collective answer (decided by majority vote) is correct. So, if you have enough people, then even if each is only a little better than chance at getting it right, the group can be exceedingly good at tracking the right answer.
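  The theorem itself is a statement about the binomial distribution, and a few lines of Python make the effect concrete—along with, anticipating a point made below, its mirror image when voters are slightly worse than chance. A sketch; the function name is mine, and the 51% and 49% figures are chosen only for illustration:

```python
from math import comb

def majority_correct(n, p):
    """Probability that a strict majority of n independent voters is right,
    when each voter is right with probability p (use odd n so there is
    always a clean majority)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (11, 101, 1001):
    print(n,
          round(majority_correct(n, 0.51), 3),  # climbs toward 1 as n grows
          round(majority_correct(n, 0.49), 3))  # sinks toward 0 as n grows
```

  With individuals at 51%, a group of eleven is barely better than a coin flip, but a group of a thousand is right nearly three-quarters of the time, and the probability keeps climbing with size. Flip the individuals to 49% and the same arithmetic runs in reverse.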

  There’s a hitch, however. Groups do better than individuals only under certain conditions, including the assumptions we stated at the outset: each individual is better than chance at getting the right answer, and the answers are aggregated by majority rule. In addition, the theorem applies best when the individuals in question (the “voters”) are independent of one another, and not (at least in a statistically meaningful way) influenced by other voters’ decisions (thus lowering the chances that they are participating in information cascades).
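  The independence condition is not a technicality. A crude simulation suggests why: if most voters simply copy an early “leader” rather than judging for themselves, adding voters stops helping, and the “crowd” is only about as reliable as one person. This is an illustrative toy model of a cascade, not a serious model of real voting; the function name and all parameters are invented:

```python
import random

random.seed(1)

def majority_accuracy(n_voters, p, copy_prob, trials=2000):
    """Estimate how often a majority of n_voters is right when each voter,
    with probability copy_prob, copies one early 'leader' instead of
    judging independently -- a crude stand-in for an information cascade."""
    wins = 0
    for _ in range(trials):
        leader_right = random.random() < p
        correct_votes = sum(
            (leader_right if random.random() < copy_prob else random.random() < p)
            for _ in range(n_voters)
        )
        if correct_votes > n_voters / 2:
            wins += 1
    return wins / trials

print(majority_accuracy(1001, 0.6, 0.0))  # independent voters: ~1.0
print(majority_accuracy(1001, 0.6, 0.9))  # heavy copying: ~0.6, no wiser than one voter
```

  A thousand independent voters who are each right 60% of the time yield a nearly infallible majority; the same thousand, mostly echoing one voice, are no wiser than that voice.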

  There is considerable debate about the extent to which the Condorcet Jury Theorem maps onto real-life situations. One question, for example, is whether it can help justify the thought that democratic institutions are reliable mechanisms, other things being equal, for determining the best public policy. The theorem gives some comfort to that idea—again, assuming that the conditions are met. And sometimes they are. Voters in many elections are unaware of the votes of others at the time they cast their vote. (Although, famously, turnout on the West Coast of the United States can be dampened in presidential elections by the news media’s reporting on voters in earlier time zones. And polling results in general may shift voting patterns.) Relatedly, in some online rankings, consumers may be reasonably independent in their decision-making.

  Of course, it is just as often the case that these assumptions are not met. First, people are often not better than chance at judging the truth. We are all susceptible to bias and prejudice, and our opinions are often not all that independent (causally or statistically). Second, even if the members of a particular crowd are a bit better than chance at judging correctly, that doesn’t help much if the “crowd” in question is pretty small. And third, sometimes we might even be worse than chance. Where people are worse than chance at judging a situation, the more people you get to answer the question, the higher the probability that you’ll get the wrong answer. In that case, the crowd is not wise but unwise.

  This brings us to a key point: whether or not a network “knows” something (even in the nonliteral sense) depends on the cognitive capacities (and incapacities) of the nodes on that network—the individual people who make it up.

  A good example is prediction markets. Markets like this trade in futures, but participants aren’t betting on whether a given company’s monetary value will rise but on whether, for example, a politician will win an election or a particular movie will win an Oscar. These markets had some notable early successes; Intrade, for example, was famously better at predicting the 2006 midterm elections than cable news. (Intrade was one of the most widely cited before it closed in 2013.) In a certain obvious sense, markets like this can be seen as encoding the information of the network of investors that makes them up—not only about what may happen in the future but about the state of an election at any given time. But the way in which this information is aggregated is not, as in the cases above, statistical. Prediction markets don’t average the views of their participants; rather, they work in the same way other markets work: the more confident buyers are that a given candidate will win, the higher his or her “stock” goes—the more value is attached to it, no matter how many people may own that stock.
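  Intrade matched real buyers against real sellers, but the price-as-confidence mechanism is easiest to see in an automated market maker, such as Hanson’s logarithmic market scoring rule (LMSR), which many prediction markets use. The sketch below is a generic two-outcome LMSR, not Intrade’s actual engine; the class name, liquidity parameter and trade size are invented for illustration:

```python
import math

class LMSRMarket:
    """Two-outcome prediction market using Hanson's logarithmic market
    scoring rule. The instantaneous price of an outcome doubles as the
    market's probability estimate that the outcome will occur."""

    def __init__(self, liquidity=100.0):
        self.b = liquidity            # larger b: prices move more slowly
        self.shares = [0.0, 0.0]      # outstanding shares: [YES, NO]

    def _cost(self, shares):
        return self.b * math.log(sum(math.exp(q / self.b) for q in shares))

    def price(self, outcome):
        weights = [math.exp(q / self.b) for q in self.shares]
        return weights[outcome] / sum(weights)

    def buy(self, outcome, amount):
        """Buy `amount` shares of `outcome`; returns what the trader pays."""
        before = self._cost(self.shares)
        self.shares[outcome] += amount
        return self._cost(self.shares) - before

market = LMSRMarket()
print(round(market.price(0), 2))   # 0.50: no information yet
market.buy(0, 80)                  # confident money arrives on YES
print(round(market.price(0), 2))   # ~0.69: the candidate's "stock" has risen
```

  Notice that nothing here counts heads: one confident trader moves the price as much as many timid ones, which is exactly how such a market differs from an average of rankings.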

  But prediction markets also have their limits. A well-known example, noted by David Leonhardt of the New York Times, was the 2012 Supreme Court decision on the Affordable Care Act.9 Right up to the last minute, Intrade was indicating a 75 percent chance that the Act’s mandate would be declared unconstitutional. That was wrong—and, in fact, as Leonhardt noted, many insiders had been going the other way. Arguably the insiders’ information was better, and their take on it more legally sophisticated, than that of the larger crowd. In this sort of case, the larger crowd is not the one you want to listen to. One might think the same goes for predicting something like the success of medical surgery. Unless the crowd has the same information and training as the relevant experts, it is not clear that they have wisdom to impart. As Leonhardt’s colleague Nate Silver noted during the final run-up to the 2012 election, such markets may contain more or less sophisticated participants, and the more sophisticated the average participant, the more other sophisticated participants tend to trust it. Moreover, when a given market is highly cited in the press, “that opens up the possibility that someone could place a wager on [a candidate] in order to influence the news media’s perceptions about which candidate has the momentum.”10 If so, the market may not be reflecting or mapping voter opinion but helping to determine it.

  So, although networks can embody knowledge, or at least true information, not held by any particular individual, the extent to which they do so depends very much on the cognitive capacities of the individuals that make them up. You can’t take the individual out of the equation.

  The importance of the individual remains even though what we know as individuals depends on the social networks to which we belong. Take two indistinguishable people, Alycia and Bri, with the same belief and all the same evidence available by introspection. Stipulate that they are equally good (or bad) bullshit spotters, equally good (or bad) detectors of reliable testimony.11 Suppose each is hooked into a different online community: different friends on Facebook, different Twitter feeds, different news stations and so on. If Alycia’s social network has high standards for belief and Bri’s network has very lax standards, then there will be more unreliable testimony floating around in Bri’s network than in Alycia’s. That’s because Alycia lives in a network where people in general are more critical and discerning. So folks there will just believe less, period. And what that means is that since Alycia will be getting at least some information from her network, a higher percentage of her information will be accurate, even if she has fewer firm opinions and beliefs overall. The opposite is true of Bri’s community. They are more inclined to believe what others say just because they say it. As a result, Bri’s social network will have more opinions—they might, depending on how lucky they are, even have more true opinions. But it is likely that a lower percentage of Bri’s total beliefs will be true. So Bri’s beliefs that are formed on the basis of testimony are less safe than Alycia’s; they are more easily wrong, and more prone to be right only by luck when they are right at all.
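  The Alycia-and-Bri thought experiment can be put in quantitative dress. In the toy model below, each network is exposed to the same pool of testimony but differs in how readily it accepts false claims; the strict network ends up with fewer beliefs, a higher share of them true. The function name and every parameter (10,000 claims, 60% of them true, the two credulity levels) are invented for illustration, and for simplicity true claims are always accepted:

```python
import random

random.seed(2)

def network_beliefs(n_claims, share_true, credulity):
    """Toy model: a network hears n_claims pieces of testimony, share_true
    of which are true. True claims are always accepted (a simplification);
    'credulity' is the chance a *false* claim gets accepted too.
    Returns (number of beliefs formed, fraction of them that are true)."""
    true_beliefs = false_beliefs = 0
    for _ in range(n_claims):
        if random.random() < share_true:
            true_beliefs += 1
        elif random.random() < credulity:
            false_beliefs += 1
    total = true_beliefs + false_beliefs
    return total, round(true_beliefs / total, 2)

# 10,000 claims circulate, 60% of them true.
print(network_beliefs(10_000, 0.6, 0.1))  # strict network: ~6,400 beliefs, ~94% true
print(network_beliefs(10_000, 0.6, 0.9))  # credulous network: ~9,600 beliefs, ~62% true
```

  Bri’s network “knows” more things, in the bare sense of holding more opinions, but a markedly smaller fraction of what it holds is true—which is just the trade-off the thought experiment describes.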

  How much that matters will depend, of course, on what’s at issue. When the question is which cats are cuter than other cats, we can afford to shrug our shoulders. But when the stakes are high—when the questions concern matters like whether climate change is real or whether the measles vaccine causes autism—the situation is different. Our community, our network, is only as smart as its standards for evidence allow. Even if you are as tough-minded as they come, if your social network is gullible, then you are more likely to receive unsafe testimony—and thus you are less likely to know. And that leaves us with a very clear lesson: our standards and epistemic principles matter. Reasonableness matters; being critical matters. And that, in turn, shows that what’s important isn’t just how we “train” our networks, but how we train the individuals that compose them. It is in our joint interest to support institutions that encourage the pursuit of critical public discourse by individuals.

  In other, blunter words: the growing networked nature of knowledge makes the independent thinker more, not less, important than ever before. We need more of them.

  The “Netography” of Knowledge

  So far, we’ve left out a subtler but possibly more important point: our networked lives might be altering the very structure of knowledge.

  Weinberger puts the matter this way:

  Our system of knowledge is a clever adaption to the fact that our environment is too big to be known by any one person. A species that gets answers and can then stop asking is able to free itself for new inquiries. . . . [T]his strategy is perfectly adapted to paper-based knowledge. Books are designed to contain all the information required to stop inquiries within the book’s topic. . . . [But with our new] connective medium . . . our strategy is changing. And it is changing the very shape of knowledge.12

  Weinberger’s point is that we’ve traditionally seen building or expanding the body of knowledge as the expansion of a series of “stopping points.” Inquiry, scientific or otherwise, was aimed at getting to someplace safe, an answer we could trust. Once we got there, we could move on. But in Weinberger’s view, the new mediums for knowledge created by digital technology are changing this picture. That’s because, in his view, the Internet doesn’t really deal with stopping points.

  The notion that knowledge can have a shape—a structure—goes back a long way in Western culture. Plato himself drew something of a graph of knowledge, his so-called “divided line,” which depicted a difference between true knowledge and mere opinion. True knowledge was founded, Plato thought, on a grasp of the eternal essences he called the Forms—another structural metaphor. Ultimate reality was composed of the Forms, and hence the only true knowledge started with knowledge of them.

  Since the seventeenth century, the dominant structural metaphor for knowledge has been architectural. Again, René Descartes deserves much of the credit (or the blame, depending).13 According to so-called Cartesian foundationalism, the structure of knowledge is like a building or pyramid, with the foundation supporting the upper floors. Similarly, our beliefs are supported by other beliefs and ultimately by foundational beliefs and principles. Descartes’ view was that if our house of belief was going to be stable and lasting, it had to end at certain propositions that were so obviously true that they were beyond any shadow of a doubt. These were the foundation stones—the ultimate stopping points.

  The classic problem with Descartes’ own version of this picture was that he was a picky mason. Foundation stones that met his high standards were hard to find. To his mind, a foundational belief had to be certain and self-evident. His most famous example of the perfect foundation stone was a belief about himself: I think, I exist, is necessarily true whenever any of us thinks it. And that seems right: if anything is self-evident, it is that I think, for the simple reason that I can’t doubt that I think without thinking. It is, as it were, a thought I can’t escape. The problem is that this doesn’t get us very far.

  Later philosophers have tended to be less picky, but many stuck with the metaphor. Philosophers like Locke sensibly emphasized that the foundational nodes also had to include experience with an objective world. What grounded our beliefs, ultimately, was logic and experience. This is an idea that has sunk deep into our cultural bones: beliefs are justified, we think, when they are “supported” or “grounded.” These words themselves reflect the foundationalist perspective of Descartes.

  If you think of the body of knowledge as having a foundational structure, you’ll be apt to be careful about what you add to the body of your beliefs. You’ll want to make sure that new additions are secure, safe and overwhelmingly likely to be true. Otherwise they may upset the stability of the structure as a whole. As Weinberger argues, it is a way of thinking about knowledge that would come naturally in a world where knowledge is expensive—where recording and storing knowledge is itself a costly project. As he puts it, “Traditional knowledge has been an accident of paper.” When the results of what we know can only be recorded slowly, when data must be written down, then the cost of that recording raises standards as to what we collect. We might want to collect all that is worth knowing, but libraries are finite physical spaces. They cost money to maintain. And so they require gatekeepers and filters to decide what gets into the library. And that encourages a certain picture of how knowledge fits together.

  But in the infosphere, things look different. First, the library of the Internet is vast. The body of information available to us is so bloated that it would totter on any foundation, no matter how strong. Walls can’t contain the digital library that surrounds us. And, of course, that information is growing every second.

  Moreover, there is very little in the way of “gatekeeping” on the Internet. When ISIS beheaded an American journalist in 2014 and displayed the killing on the Internet, the digital giant YouTube responded by dropping access to the video, and Google blocked searches for it. But that didn’t stop it from continuing to get out. In the digital realm, information, even bad, morally reprehensible information, always finds a way.

  Add these points to the facts we’ve discussed above—that not only what we know but also how we know is networked—and one can sympathize with the thought that it is no longer accurate, or even useful, to think of knowledge and justification as having a pyramid structure. Perhaps knowledge has no foundations. Indeed, Weinberger goes so far as to suggest that knowledge no longer rests on facts of any kind: “the idea that the house of knowledge is built on foundations of facts is not itself a fact. It is an idea with a history that is now taking a sharp turn.”14

  Actually, suspicion of the foundationalist picture of the structure of justification is hardly new. The logical positivist Otto Neurath—a member of the famous Vienna Circle gathering of intellectuals in the early twentieth century—famously suggested another metaphor. He likened justifying our beliefs to rebuilding a raft at sea. If we are to work on one of the planks, we must stand on another. If we later need to repair that second plank, then we must go back to standing on the first. We can’t repair all the planks on a boat at sea at the same time. In other words, when we support our beliefs about one kind of thing, we take other beliefs for granted as justified. But we might later throw those into question, and take the first ones for granted. There is no point outside of the raft—outside our framework of beliefs—on which to stand.

 
