Thought experiments like this show that ancient skeptical worries about knowledge are not only still with us, they are being made anew. But really, we don’t need to appeal to SIMs, Brains in Vats or nearly infinite libraries to see that our culture is facing an intellectual crisis about knowledge. As we’ll see over the next few chapters, the new “old” philosophical problems that make up the backstory of our form of life are actually far more immediate, more pressing, and less abstract. That’s what makes them so unnerving.
2
Google-Knowing
Easy Answers
One day in the summer of 2014, I wrote down four questions to which I didn’t know (or had forgotten) the answers. The challenge: to answer the questions without relying—at all—on the Internet. The four questions were:
1. What is the capital of Bulgaria?
2. Is a four-stroke outboard engine more efficient than a two-stroke?
3. What is the phone number of my U.S. representative?
4. What is the best-reviewed restaurant in Austin, Texas, this week?
Number 1, unsurprisingly, was the easiest. I suspected it was Sofia, and a map of Europe and a large reference dictionary I had in the house confirmed it. (I was briefly worried about how up-to-date the information was, as the dictionary was almost two decades old, the map older.)
Question 2 proved more difficult. I had a (nonfunctioning) four-stroke engine, and it had a manual, but it said nothing about the newer two-stroke engines. Some boating reference books I had lying around were of no help. So I went to the local marina and spoke to a mechanic I knew. He was full of information, and had time to give me the basics. I even got to look at an engine. That was great, until I got home and realized I had not taken any notes. I was coming to think that I would make a very poor investigative journalist.
Initially, I had thought number 3 would be the easiest, until I remembered we no longer had a phone book (with the blue government pages). I started to call information, but then wondered whether they’d be using the Internet. Assuming the answer was yes, I stopped in at the local library. The kid behind the counter looked at me funny when I asked. He suggested, more than a little wryly, that I use one of their computers. I countered by asking whether they had any local phone books. They actually did—it was several years old, but still relevant. Mission accomplished.
It was question 4 that stumped me. I knew no one in Austin well enough to call for an opinion. I thought of calling their local chamber of commerce, but I didn’t have a way to get that number. Besides, how would they even know the answer to such a question? My library in Connecticut didn’t have copies of any Texas papers. Books might help, but the ones I looked at, such as a few travel guides at the local bookstore, were not current enough. In short, I was out of luck.
None of this, I’m sure, surprises you. It is common knowledge that our ways of knowing about the world have changed. Most knowing now is Google-knowing—knowledge acquired online. But my little exercise brought it home for me, made it personal, in a way that I hadn’t before appreciated, and I encourage you to try it yourself. It feels historic, something akin to what I imagine it must be like to dress up in period costume and live in a tent, as some Civil War reenactors do.
Just a dozen or so years ago the processes I went through to answer my questions wouldn’t have seemed at all out of the ordinary. Research involved footwork, and many academics still doubted the veracity of information acquired online. But that battle is long lost. The Internet is the fountain of knowledge and Google is the mouth from which it flows. With the Internet, my challenge is no challenge at all; answering the questions is easy. Just ask the knowledge machine.
Speed is the most obvious distinguishing characteristic of how we know now. Google-knowing is fast. Yet as my exercise brings home, this speed is so dramatic that it does more than just save time. The engine diagram I can call up on my phone can be consulted again and again. I don’t need notes—or need them less, and I can store them on the cloud in case I do. Elected representatives are easier to track down than ever before; I can send my opinion to them (or at least to their addresses and offices) any number of ways and in seconds. Thanks to Google Street View, I can see what Sofia and its inhabitants look like up close and personal. And question 4—a question of a sort that probably wouldn’t even have been posed before the Internet—is addressed by any number of sites giving me rankings and reviews of restaurants.
Not everything about Google-knowing is new, however. And that itself is important to appreciate. One humorous illustration of this came in 2013, when the website College Humor asked: what if Google was a guy? The ensuing video was hilarious and a bit disturbing. The questions we ask our search engines (“Hedgehog, cute,” “Bitcoin unbuy fast,” “college girls?”) seem all the more ridiculous (and creepy) once we imagine asking them of an actual person—like an amiable but overworked bureaucrat behind a desk. But it also reminds us of a fact about how we treat Google and other search engines—a fact that is obvious enough but often overlooked. We treat them like personal reference librarians; we ask them questions, and they deliver up sources that claim to have the answers. And that means that we already treat their deliverances as akin—at least at the level of trust—to the deliverances of actual people. Of course, that is precisely why the bit is funny: Google isn’t a guy (or anyone, male or female); it doesn’t create information, it distributes it. Yet this is also why it makes sense for us to treat Google like a person—why the video rings true. The information we get from the links we access via Google is (mostly) from other people. When we trust it, we almost always trust someone else’s say-so—his or her “testimony.” Indeed, the entire Internet, including, of course, Wikipedia, Facebook, the blogosphere, Reddit, and most especially the Twitterverse, etc., can be described as one giant knowledge-through-testimony machine.
Fig. 1. Courtesy of Barbara Smaller/The New Yorker Collection/The Cartoon Bank
So, “Google-knowing” helps describe how we acquire information and knowledge via the testimony machine of the Internet. It is easy, fast and yet dependent on others. That is a combination that, at least in this extreme form, has never been seen before. Moreover, and as my exercise from last summer indicates, we can essentially no longer operate without it. I Google-know every day, and I’m sure you do too. But partly as a result, Google-knowing is increasingly swamping other ways of knowing. We treat it as more valuable, more natural, than other kinds of knowledge. That’s important, because as we’ll see, the human mind has evolved to be receptive to information in certain environments. As a result, we tend to trust our receptive abilities automatically. That makes sense in all sorts of cases, especially when we are talking about the senses—seeing, hearing etc. The problem is that Google-knowing really shouldn’t be like that; as the New Yorker cartoon implies, we shouldn’t trust it as a matter of course.
Being Receptive: Downloading Facts
You want to sort the good apples from the bad. Someone gives you a device and tells you to use it to do the sorting. If the device is reliable, then most of the apples you sort into the good pile will, in fact, be good. And this will be the case whether you possess any recognizable evidence to think it is so or not. As long as the device really does its job, it will give you useful information about apples whether or not you have any idea about its track record, or about how it is made, or even if you can’t tell a good apple from a pear, or a hole in the ground.
We all need good apple-sorting devices, and not just to sort apples. If you want to find food and avoid predators, which every organism does, you need a way to sort the good (true) information from the bad (false)—and to do so quickly, mechanically and reliably. Call this being receptive. When we know in this way, we are reliably tracking the good apples.
Being receptive is a matter of “taking in” the facts. We are being receptive when we open our eyes in the morning and see the alarm clock, when we smell the coffee, when we remember we are late. As we move about, we “download” a tremendous amount of raw data—data that is processed into information by our sensory and neural systems. This information represents the world around us. And if our visual system, for example, is working as it should, and our representation of the world is accurate—if we see things as they are—then we come to know.
Receptive knowledge isn’t “intellectual.” It is how dogs, dolphins and babies know. To have this sort of knowledge, you don’t have to know that you know, or even be able to spell the word “knowledge” (or know that it is a word)—although if you do, that’s okay too. Receptive states of mind aim to track the organism’s environment, and they are causally connected to the organism’s stimuli and behavior. In human animals, we might call these states beliefs, and say that human beliefs can be true or false.
So, knowing by being receptive is something we have in common with other animals, and it is clear we need such an idea to explain how animals (including us) get around in the world. When we explain, for example, why a particular species can protect its nests by leading predators away, we assume its members can reliably spot predators.1 We take them to have the capacity to accurately recognize features of their environment (“predator!”) in a non-accidental way. So the following seems like a reasonable hypothesis: having representational mechanisms that stably track the environment is more adaptive than having mechanisms that only work on Tuesdays and Thursdays.
This kind of explanation is what biologists call a “just-so” story. It assumes that behavior that contributes to fitness makes informational demands on a species, and that species’ representational capacities were, at least in most cases, selected to play that role.2 But this story’s assumptions are widely held. It leaves us with a pretty clear picture: for purposes of describing animal and human cognition, we need to think of organisms as having the capacity to know about the world by being reliably receptive to their environment, to act as reliable downloaders.3
Here’s the crucial point for our purposes: an organism’s default attitude toward its receptive capacities—like vision or memory—is trust. And that makes sense. Even though we know, for example, that our eyesight and hearing can and do mislead us, perception is simply indispensable for getting around in the world. We can’t survive without it. Receptive thought is also non-reflective. We don’t think about it. That is because ordinarily, most of our receptive processes tick along under the surface of conscious attention. They don’t require active effort. This is most obvious in the case of vision or hearing: as you drive down the road to work, along a route you’ve traveled many times, you are absorbing information about the environment and putting it into immediate action. As we say, much of this happens on autopilot. The processes involved are reliable in most ordinary circumstances, which is why most of us can do something dangerous like drive a car without a major mishap. Of course, each of these processes is itself composed of highly complex sub-processes, and each of those is composed of still more moving parts, most of which do their jobs without our conscious effort. In normal operations, for example, our brain weeds out what isn’t coherent with our prior experiences, feelings and what else we think we know. This happens on various levels. At the most basic one, we—again, unconsciously—tend to compare the delivery of our senses, and we reject the information we are receiving if it doesn’t match.
This sort of automatic filtering that accompanies our receptive states of mind is described by Daniel Kahneman and other researchers as the product of “system 1” cognitive processes. System 1 information processing is automatic and unconscious, without reflective awareness. It includes not only quick situational assessment but also automatic monitoring and inference. Among the jobs of system 1 are “distinguishing the surprising from the normal,” making quick inferences from limited data and integrating current experience into a coherent (or what seems like a coherent) story.4 In many everyday circumstances, this sort of unconscious filtering—coherence and incoherence detection—is an important factor in determining whether our belief-forming practices are reliable. Think again about driving your car to work. Part of what allows you to navigate the various obstacles is not only that your sensory processes are operating effectively to track the facts, but that your coherence filters are working to weed out what is irrelevant and make quick sense of what is.
Yet the very same “fast thinking” processes that help us navigate our environment also lead us into making predictable and systematic errors. System 1, so to speak, “looks” for coherence in the world, looks for it to make sense, even when it has very limited information. That’s why people are so quick to jump to conclusions. Consider: How many animals of each kind did Moses take into the ark? Ask someone this question out of the blue (it is often called the “Moses Illusion”) and most won’t be able to spot what is wrong with it—namely, that it was Noah, not Moses, who supposedly built the ark. Our fast system 1 thinking expects something biblical given the context, and “Moses” fits that expectation well enough to slip by.5 Something similar can happen even on a basic perceptual level; we can fail to perceive what is really there because we selectively attend to some features of our environment and not others. In a famous experiment, researchers Christopher Chabris and Daniel Simons asked people to watch a short video of six people passing a basketball around.6 Subjects were asked to count how many passes the people made. During the video, a person in a gorilla suit walks into the middle of the screen, beats its chest, and then leaves—something you’d think people would notice. But in fact, half of the people asked to count the passes missed the gorilla entirely.
So, the “fast” receptive processes we treat with default trust are reliable in certain circumstances, but they are definitely not reliable in all. This is a lesson we need to remember about the cognitive processing we use as we surf the Internet. Our ways of receiving information online—Google-knowing—are already taking on the hallmarks of receptivity. We are already treating it more like perception. Three simple facts suggest this. First, as the last section illustrated, Google-knowing is quickly starting to feel indispensable. It is our go-to way of forming beliefs about the world. Second, most Google-knowing is already fast. By that, I don’t just mean that our searches are fast—although that is true; if you have a reasonable connection, searches on major engines like Bing and Google deliver results in less than a second. What I mean is that when you look up something on your phone, the information you get isn’t the result of much effort on your part. You are engaging quick, relatively non-reflective cognitive processes. In other words, when we access information online, when we try to “Google-know,” we engage in an activity that is composed of a host of smaller cognitive processes ticking along beneath the surface of attention. Third, and as a result of the first two points, we often adopt an attitude of default trust toward digitally acquired information. It therefore tends to swamp other ways of knowing; we pay attention to it more.
That is not surprising. Google-knowing is often (although not always) fast and easy. If you consult a roughly reliable source (like Wikipedia) and engage cognitive processes that are generally reliable in that specific context, then you are being receptive to the facts out there in the world. You are tracking what is true—and that is what being a receptive knower is all about. You may not be able to explain why that particular bit of information is true; you may not have made a study of whether the source is really reliable; but you are learning. So, can’t we still say that you are knowing in one important sense?
We can. And we do. But Google-knowing is knowing only if you consult a reliable source and your unconscious brain is working the way you’d consciously like it to.
If. There, as always, is the rub.
John Locke Agrees with Mom
The day following the bombing of the Boston Marathon in April 2013, social media was clogged with posts of a man in a red shirt holding a wounded woman. The picture was tragic, and the posts made it more so: they told us that the man had planned to propose to the woman when she finished the marathon—until the bomb went off. Hundreds of thousands of people reposted and tweeted the story, often contributing moving comments of their own.
The story, however, proved to be false. The man had not been planning to propose to the woman. They weren’t even acquainted. Nor was it true, as was widely reported even in the “mainstream” media (I heard it on my local NPR station the day of the bombing), that the authorities had purposefully shut down cell phone service in Boston (the system simply was flooded with too much traffic). These were rumors, circulating at the speed of tweet.
Rumors like this are also examples of the widely discussed phenomenon of information cascades—a phenomenon to which the Internet and social media are particularly susceptible.7 Information cascades happen when people post or otherwise voice their opinions in a sequence. If the first expressions of opinion form a pattern, then this fact alone can begin to outweigh or alter later opinions. People later in the sequence tend to follow the crowd more than their own private evidence. The more people before you have voiced a particular opinion—especially if they are in some sense within your social circle—the more likely it is that you’ll go with that opinion too, or at least give it more weight. Social scientists (and advertising executives) who have studied this phenomenon have used it to explain not only how information often moves around the Internet, but how and why songs and YouTube videos become popular. The more people have “liked” a video, the greater the chance even more people will like it, and pretty soon you end up with “Gangnam Style” and “What Does the Fox Say?”
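For readers who like to see the mechanism laid bare, the sequential dynamic described above can be captured in a few lines of code. This is only a minimal sketch in the spirit of the classic cascade models social scientists use; the function name, its parameters, and the simple majority-vote decision rule are all illustrative assumptions, not anything drawn from the studies cited here:

```python
import random

def run_cascade(true_value, signal_accuracy, n_agents, seed=None):
    """Toy information-cascade simulation.

    Agents choose 0 or 1 in sequence. Each gets a private signal that
    matches `true_value` with probability `signal_accuracy`, sees every
    earlier public choice, and goes with whichever option the combined
    "votes" (earlier choices plus its own signal) favor. Ties are broken
    by the agent's own private signal.
    """
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        # Private evidence: usually right, sometimes wrong.
        signal = true_value if rng.random() < signal_accuracy else 1 - true_value
        votes_for_one = sum(choices) + (1 if signal == 1 else 0)
        votes_for_zero = len(choices) + 1 - votes_for_one
        if votes_for_one > votes_for_zero:
            choices.append(1)
        elif votes_for_zero > votes_for_one:
            choices.append(0)
        else:
            choices.append(signal)  # tie: follow your own signal
    return choices
```

The point of the sketch: once the first two agents happen to agree, every later agent’s private signal is outvoted by the public record, so the whole sequence “cascades” behind the early pattern—whether or not that pattern tracks the truth.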
The Internet of Us