But What If We're Wrong?
Conflicting conceptions of “reality” have no impact on reality. And this does not apply exclusively to conspiracy theorists. It applies to everyone, all the time.
[4] On the evening of February 26, 2015, I (along with millions of other people) experienced a cultural event that—at least for a few hours—seemed authentically unexplainable. By March of that year, most of the world had moved on from this. But I still think about that night. Not because of what happened, but because of how it felt while it was transpiring.
A woman on the Internet posted a photograph of a dress. The dress was potentially going to be worn by someone’s mother at a Scottish wedding, but that detail is irrelevant. What mattered was the color of the dress. The image of the garment was tagged with the following caption:
guys please help me—is this dress white and gold, or blue and black? Me and my friends can’t agree and we are freaking the fuck out
When my wife saw this image, she said, “I don’t get what the joke is here. This is just a picture of a white and gold dress.” When I glanced at the image and told her it was plainly black and blue, she assumed I was playfully lying (which, to be fair, is not exactly outside my character). But I wasn’t. We were looking at the same thing and seeing something completely different. I texted a friend in California, who almost seemed pissed about this—he assumed everyone on Twitter claiming it was anything except blue (specifically periwinkle) and black was consciously trolling society. “I don’t know about that,” I responded. “Something is happening here.” And something was happening. Random pairs of people had differing opinions about something they both perceived to be independently obvious. At first, unscientific surveys suggested that most people thought the dress was gold and white, but the gap rapidly shrank to almost 50-50 (which might have been partially due to the discovery that the actual dress was, in fact, blue and black).
The next day, countless pundits tried to explain why this had transpired. None of their explanations were particularly convincing. Most were rooted in the idea that this happened because we were all looking at a photo of a dress, as opposed to the dress itself. But that only shifts the debate, without really changing it—why, exactly, would two people see the same photograph in two completely different ways? There was a momentary sense that this stupid dress had accidentally collided with some previously unknown optic frequency that lay exactly between the two ways in which color can be perceived, and that—maybe, possibly, somehow—the human race did not see “blue” and “gold” (and perhaps every color) in the same, unified way. Which would mean that color is not a real thing, and that our perception of the color wheel is subjective, and that what we currently classify as “blue” might not be classified as “blue” in a thousand years.
But this, it seems, is not exactly a new debate.
The argument that color is not a static property has been gingerly waged for decades, and it always seems to hinge on the ancient work of a possibly blind, probably imaginary, thoroughly unreliable poet. In both The Iliad and The Odyssey, Homer describes the Aegean Sea. Again and again, he describes this sea as “wine-dark.” He unswervingly asserts that the ocean is the same color as red wine. To some, this suggests that the way we saw and understood color three thousand years ago was radically different from the way we see and understand it now. To others, this is just an example of a poet being poetic (or maybe an example of a blind poet getting bad advice). It’s either meaningful or meaningless, which is probably why no one will ever stop talking about it.
“I think people really overstate the significance of that passage from Homer. He’s mostly just being evocative,” says Zed Adams, an assistant professor of philosophy at the New School for Social Research. “But I think it does hint at one important difference between the Greek use of certain ‘color words’ and our own. The shiny/matte distinction seems like it might have been more central for them than it is for us, so Homer might have been thinking of water and wine as similarly colored, in the sense that they are both shiny. But, beyond that, I think the ocean is sometimes wine-colored, so I don’t think the passage is that big of a deal.”
Adams is the author of On the Genealogy of Color. He believes the topic of color is the most concrete way to consider the question of how much—or how little—our experience of reality is shared with other people. It’s an unwieldy subject that straddles both philosophy and science. On one hand, it’s a physics argument about the essential role light plays in our perception of color; on the other, it’s a semantic argument over how color is linguistically described differently by different people. There’s also a historical component: Up until the discovery of color blindness in the seventeenth century, it was assumed that everyone saw everything the same way (and it took another two hundred years before we realized how much person-to-person variation there is). The real change four hundred years ago came (once again) from the work of Newton and Descartes, this time in the field of optics. Instead of things appearing “red” simply because of their intrinsic “redness” (which is what Aristotle believed), Newton and Descartes realized that color has to do with an object’s relationship to light. This, explains Adams, led to a new kind of separation between the mind and the world. It meant that there are all kinds of things we can’t understand about the world through our own observation, and it made it intellectually conceivable that two people could see the same thing differently.
What’s particularly interesting here is that Adams believes Descartes misunderstood his own discovery about light and experience. The basis for his argument is extremely wonky (and better explained by his own book). But the upshot is this: Adams suspects the way we’ll talk about color in a distant future will be different from the way we talk about it now. And this will be because future conversations will be less interpretative and more precise. It’s an optimistic view of our current inexact state of perception—someday, we might get this right. We might actually agree that “blue” is blue, and arguments about the hue of online dresses will last all of three seconds.
“Descartes thought that the mind, and specifically ‘what mental experience is like,’ somehow stood outside of the physical world, such that [this mental experience] could vary while everything physical about us would stay the same,” Adams says. “I think that idea will gradually become less and less intuitive, and will just start to seem silly. I’d like to imagine that in a hundred years, if I said to you, ‘But how could I ever really know whether your color experience is the same as mine?’ your response would just be, ‘Well, if our eyes and brains are the same, then our color experiences are the same.’ End of story.”
[5] Metaphoric sheep get no love. There’s no worse thing to be compared to, at least among conspiracy theorists. “You’re just a sheep,” they will say. “You believe what they want you to believe.” But this implies that they—the metaphoric shepherds—have something they want you to accept. It implies that these world-altering shepherds are consciously leading their sheeple to a conclusion that plays to their benefit. No one considers the possibility of a shepherd just aimlessly walking around the meadow, pointing his staff in whatever direction he happens to be facing.
On the same day I spoke with Linklater about dreams, there was a story in The New York Times about a violent incident that had occurred a few days prior in Manhattan. A man had attacked a female police officer with a hammer and was shot by the policewoman’s partner. This shooting occurred at ten a.m., on the street, in the vicinity of Penn Station. Now, one assumes seeing a maniac swinging a hammer at a cop’s skull before being shot in broad daylight would be the kind of moment that sticks in a person’s mind. Yet the Times story explained how at least two of the eyewitness accounts of this event ended up being wrong. Linklater was fascinated by this: “False memories, received memories, how we fill in the blanks of conjecture, the way the brain fills in those spaces with something that is technically incorrect—all of these errors allow us to make sense of the world, and are somehow accepted enough to be admissible in a court of law. They are accepted enough to put someone in prison.” And this, remember, was a violent incident that had happened only hours before. The witnesses were describing something that had happened that same day, and they had no incentive to lie. But video surveillance proved their depictions of reality were inaccurate.
This is a level of scrutiny that can’t be applied to the distant past, for purely practical reasons. Most of history has not been videotaped. But what’s interesting is our communal willingness to assume most old stories may as well be true, based on the logic that (a) the story is already ancient, and (b) there isn’t any way to confirm an alternative version, despite the fact that we can’t categorically confirm the original version, either.
A week before Manhattan cops were being attacked by hammer-wielding schizophrenics, Seymour Hersh published a ten-thousand-word story in the London Review of Books headlined “The Killing of Osama bin Laden.” Hersh’s wide-ranging story boiled down to this: The accepted narrative of the 2011 assassination of bin Laden was a fabrication, deliberately perpetrated by the Obama administration. It was not a clandestine black ops attack by Navy SEALs, working off the CIA’s meticulous intelligence gathering; it was the result of a former Pakistani intelligence officer exchanging the whereabouts of bin Laden for money, thereby allowing the SEALs to just walk into his compound and perform an execution. It was not a brazen military gamble; the government of Pakistan knew it was going to happen in advance and quietly allowed the cover-up. During the first thirty-six hours of the story’s publication, it felt like something unthinkable was suddenly transparent: Either we were being controlled by a shadow government where nothing was as it seemed, or the finest investigative reporter of the past half century had lost his goddamn mind. By the end of the week, most readers leaned in the direction of the latter. Some of this was due to a follow-up interview Hersh gave to Slate that made him seem unreliable, slightly crazy, and very old. But most of the skepticism came from a multitude of sources questioning the validity of specific particulars in Hersh’s account, even though the refutation of those various details did not really contradict the larger conspiratorial thesis. Hersh’s alternative narrative was scrutinized far more aggressively than the conventional narrative, even though the mainstream version of bin Laden’s assassination was substantially more dramatic (if film director Kathryn Bigelow had used Hersh’s story as the guide for Zero Dark Thirty, it might have qualified as mumblecore).
By the first week of June, “The Killing of Osama bin Laden” had been intellectually discarded by most people in the United States. Every subsequent conversation I had about the Hersh story (and I had many) drifted further and further from seriousness. Months later, journalist Jonathan Mahler wrote a story for The New York Times Magazine reexamining the dispute from a media perspective. “For many,” wrote Mahler, “[the official bin Laden story] exists in a kind of liminal state, floating somewhere between fact and mythology.” Considering what can be conclusively verified about the assassination, that’s precisely where the story should float. But I don’t believe that it does. Judging from the (mostly incredulous) reaction to Mahler’s story, I don’t think a sizable chunk of the US citizenry distrusts the conventional depiction of how bin Laden was killed. This acceptance is noteworthy for at least two reasons. The first is that—had this kind of alternative story emerged from a country like Russia, and had the man orchestrating the alleged conspiracy been Vladimir Putin—nobody in America would have questioned it at all. It would immediately have been accepted as plausible, and perhaps even probable. The second is a discomfiting example of how “multiple truths” don’t really mesh with the machinations of human nature: Because we were incessantly told one version of a story before hearing the second version, it’s become impossible to overturn the original template. It was unconsciously assumed that Hersh’s alternative story had to both prove itself and disprove the primary story, which automatically cemented the primary version as factual. It took only four years for that thinking to congeal. Extrapolate that phenomenon to forty years, or to four hundred years, or to four thousand years: How much of history is classified as true simply because it can’t be sufficiently proven false? In other words, there’s no way we can irrefutably certify that an event from 1776 didn’t happen in the manner we’ve always believed, so there’s no justification for presenting a counter-possibility. Any counter-possibility would have to use the same methodology, so it would be (at best) equally flawed. This becomes more and more ingrained as we move further and further from the moment of the event. So while it’s absurd to think that all of history never really happened, it’s almost as absurd to think that everything we know about history is real. All of which demands a predictable question: What significant historical event is most likely wrong? And not because of things we know that contradict it, but because of the way wrongness works.
We understand the past through the words of those who experienced it. But those individuals aren’t necessarily reliable, and we are reminded of this constantly. The average person can watch someone attack a cop with a hammer and misdescribe what he saw twenty minutes after it happened. But mistakes are only a fraction of the problem. There’s also the human compulsion to lie—and not just for bad reasons, but for good reasons, and sometimes for no reasons, beyond a desire to seem interesting. When D. T. Max published his posthumous biography of David Foster Wallace, it was depressing to discover that many of the most memorable, electrifying anecdotes from Wallace’s nonfiction were total fabrications. Of course, that accusation would be true for countless essays published before the fact-checking escalation of the Internet. The defining works of Joseph Mitchell, Joan Didion, and Hunter Thompson all contain moments of photographic detail that would never withstand the modern verification process—we’ve just collectively decided to accept the so-called larger truth and ignore the parts that skew implausible. In other words, people who don’t know better are often wrong by accident, and people who do know better are sometimes wrong on purpose—and whenever a modern news story explodes, everyone recognizes that possibility. But we question this far less when the information comes from the past. It’s so hard to get viable info about pre-twentieth-century life that any nugget is reflexively taken at face value. In Ken Burns’s documentary series The Civil War, the most fascinating glimpses of the conflict come from personal letters written by soldiers and mailed to their families. When these letters are read aloud, they almost make me cry. I robotically consume those epistles as personal distillations of historical fact. There is not one moment of The Civil War that feels false. But why is that? Why do I assume the things Confederate soldiers wrote to their wives might not be wildly exaggerated, or inaccurate, or straight-up untruths? Granted, we have loads of letters from lots of unrelated Civil War veterans, so certain claims and depictions can be fact-checked against each other. If multiple letters mention that there were wheat weevils in the bread, we can concede that the bread was infested with wheat weevils. But the American Civil War isn’t exactly a distant historical event (amazingly, a few Civil War veterans were still alive in the 1950s). The further we go back, the harder it becomes to know how seriously any eyewitness account can be taken, particularly in cases where the number of accounts is relatively small.
There’s a game I like to play with people when we’re at the bar, especially if they’re educated and drunk. The game has no name, but the rules are simple: The player tries to answer as many of the following questions as possible, without getting one wrong, without using the same answer twice, and without looking at a phone. The first question is, “Name any historical figure who was alive in the twenty-first century.” (No one has ever gotten this one wrong.) The second question is, “Name any historical figure who was alive in the twentieth century.” (No one has ever gotten this one wrong, either.) The third question is, “Name any historical figure who was alive in the nineteenth century.” The fourth question is, “Name any historical figure who was alive in the eighteenth century.” You continue moving backward through time, in centurial increments, until the player fails. It’s mildly shocking how often highly intelligent people can’t get past the sixteenth century; if they make it down to the twelfth century, it usually means they either know a lot about explorers or a shitload about popes. What this game illustrates is how vague our understanding of history truly is. We know all the names, and we have a rough idea of what those names accomplished—but how much can that be trusted if we can’t even correctly identify when they were alive? How could our abstract synopsis of what they did be internalized if the most rudimentary, verifiable detail of their lives seems tricky?
It’s hard to think of a person whose portrait was painted more often than Napoleon’s. We should definitely know what he looked like. Yet the various firsthand accounts of Napoleon can’t even agree on his height, much less his actual appearance. “None of the portraits that I had seen bore the least resemblance to him,” insisted the poet Denis Davydov when he met Napoleon in 1807. Here again, we’re only going back about two hundred years. What is the realistic probability that the contemporary understanding of Hannibal’s 218 BC crossing of the Alps on the back of war elephants is remotely accurate? The two primary texts that elucidate this story were both composed decades after it happened, by authors who were not there, with motives that can’t be understood. And there’s no conspiracy here; this is just how history is generated. We know the story exists and we know how the Second Punic War turned out. To argue that we know—really, truly know—much more than that is an impossibly optimistic belief. But this is the elephant-based Hannibal narrative we’ve always had, and any story contradicting it would be built on the same kind of modern conjecture and ancient text. As far as the world is concerned, it absolutely happened. Even if it didn’t happen, it happened.