So we pull memories along causative lines, revising them involuntarily and unconsciously. We continuously renarrate past events in the light of what appears to make what we think of as logical sense after these events occur.
By a process called reverberation, a memory corresponds to the strengthening of connections from an increase of brain activity in a given sector of the brain—the more activity, the stronger the memory. While we believe that the memory is fixed, constant, and connected, all this is very far from the truth. What makes sense according to information obtained subsequently will be remembered more vividly. We invent some of our memories—a sore point in courts of law since it has been shown that plenty of people have invented child-abuse stories by dint of listening to theories.
The Madman’s Narrative
We have far too many possible ways to interpret past events for our own good.
Consider the behavior of paranoid people. I have had the privilege to work with colleagues who have hidden paranoid disorders that come to the surface on occasion. When the person is highly intelligent, he can astonish you with the most far-fetched, yet completely plausible interpretations of the most innocuous remark. If I say to them, “I am afraid that …,” in reference to an undesirable state of the world, they may interpret it literally, that I am experiencing actual fright, and it triggers an episode of fear on the part of the paranoid person. Someone hit with such a disorder can muster the most insignificant of details and construct an elaborate and coherent theory of why there is a conspiracy against him. And if you gather, say, ten paranoid people, all in the same state of episodic delusion, the ten of them will provide ten distinct, yet coherent, interpretations of events.
When I was about seven, my schoolteacher showed us a painting of an assembly of impecunious Frenchmen in the Middle Ages at a banquet held by one of their benefactors, some benevolent king, as I recall. They were holding the soup bowls to their lips. The schoolteacher asked me why they had their noses in the bowls and I answered, “Because they were not taught manners.” She replied, “Wrong. The reason is that they are hungry.” I felt stupid at not having thought of this, but I could not understand what made one explanation more likely than the other, or why we weren’t both wrong (there was no, or little, silverware at the time, which seems the most likely explanation).
Beyond our perceptional distortions, there is a problem with logic itself. How can someone have no clue yet be able to hold a set of perfectly sound and coherent viewpoints that match the observations and abide by every single possible rule of logic? Consider that two people can hold incompatible beliefs based on the exact same data. Does this mean that there are possible families of explanations and that each of these can be equally perfect and sound? Certainly not. One may have a million ways to explain things, but the true explanation is unique, whether or not it is within our reach.
In a famous argument, the logician W. V. Quine showed that there exist families of logically consistent interpretations and theories that can match a given series of facts. Such insight should warn us that mere absence of nonsense may not be sufficient to make something true.
Quine’s problem is related to the difficulty he found in translating statements between languages, simply because one could interpret any sentence in an infinity of ways. (Note here that someone splitting hairs could find a self-canceling aspect to Quine’s own writing: I wonder how he expects us to understand this very point in a noninfinity of ways.)
This does not mean that we cannot talk about causes; there are ways to escape the narrative fallacy. How? By making conjectures and running experiments, or as we will see in Part Two (alas), by making testable predictions.* The psychology experiments I am discussing here do so: they select a population and run a test. The results should hold in Tennessee, in China, even in France.
Narrative and Therapy
If narrativity causes us to see past events as more predictable, more expected, and less random than they actually were, then we should be able to make it work for us as therapy against some of the stings of randomness.
Say some unpleasant event, such as a car accident for which you feel indirectly responsible, leaves you with a bad lingering aftertaste. You are tortured by the thought that you caused injuries to your passengers; you are continuously aware that you could have avoided the accident. Your mind keeps playing alternative scenarios branching out of a main tree: if you had not woken up three minutes later than usual, you would have avoided the car accident. It was not your intention to injure your passengers, yet your mind is inhabited by remorse and guilt. People in professions with high randomness (such as in the markets) can suffer more than their share of the toxic effect of look-back stings: I should have sold my portfolio at the top; I could have bought that stock years ago for pennies and I would now be driving a pink convertible; et cetera. If you are a professional, you can feel that you “made a mistake,” or, worse, that “mistakes were made,” when you failed to do the equivalent of buying the winning lottery ticket for your investors, and feel the need to apologize for your “reckless” investment strategy (that is, what seems reckless in retrospect).
How can you get rid of such a persistent throb? Don’t try to willfully avoid thinking about it: this will almost surely backfire. A more appropriate solution is to make the event appear more unavoidable. Hey, it was bound to take place and it seems futile to agonize over it. How can you do so? Well, with a narrative. Patients who spend fifteen minutes every day writing an account of their daily troubles indeed feel better about what has befallen them. You feel less guilty for not having avoided certain events; you feel less responsible for them. Things appear as if they were bound to happen.
If you work in a randomness-laden profession, as we see, you are likely to suffer burnout effects from that constant second-guessing of your past actions in terms of what played out subsequently. Keeping a diary is the least you can do in these circumstances.
TO BE WRONG WITH INFINITE PRECISION
We harbor a crippling dislike for the abstract.
One day in December 2003, when Saddam Hussein was captured, Bloomberg News flashed the following headline at 13:01: U.S. TREASURIES RISE; HUSSEIN CAPTURE MAY NOT CURB TERRORISM.
Whenever there is a market move, the news media feel obligated to give the “reason.” Half an hour later, they had to issue a new headline. As these U.S. Treasury bonds fell in price (they fluctuate all day long, so there was nothing special about that), Bloomberg News had a new reason for the fall: Saddam’s capture (the same Saddam). At 13:31 they issued the next bulletin: U.S. TREASURIES FALL; HUSSEIN CAPTURE BOOSTS ALLURE OF RISKY ASSETS.
So it was the same capture (the cause) explaining one event and its exact opposite. Clearly, this can’t be; these two facts cannot be linked.
Do media journalists repair to the nurse’s office every morning to get their daily dopamine injection so that they can narrate better? (Note the irony that the word dope, used to designate the illegal drugs athletes take to improve performance, has the same root as dopamine.)
It happens all the time: a cause is proposed to make you swallow the news and make matters more concrete. After a candidate’s defeat in an election, you will be supplied with the “cause” of the voters’ disgruntlement. Any conceivable cause can do. The media, however, go to great lengths to make the process “thorough” with their armies of fact-checkers. It is as if they wanted to be wrong with infinite precision (instead of accepting being approximately right, like a fable writer).
Note that in the absence of any other information about a person you encounter, you tend to fall back on her nationality and background as a salient attribute (as the Italian scholar did with me). How do I know that this attribution to the background is bogus? I did my own empirical test by checking how many traders with my background who experienced the same war became skeptical empiricists, and found none out of twenty-six. This nationality business helps you make a great story and satisfies your hunger for ascription of causes. It seems to be the dump site where all explanations go until one can ferret out a more obvious one (such as, say, some evolutionary argument that “makes sense”). Indeed, people tend to fool themselves with their self-narrative of “national identity,” which, in a breakthrough paper in Science by sixty-five authors, was shown to be a total fiction. (“National traits” might be great for movies, they might help a lot with war, but they are Platonic notions that carry no empirical validity—yet, for example, both the English and the non-English erroneously believe in an English “national temperament.”) Empirically, sex, social class, and profession seem to be better predictors of someone’s behavior than nationality (a male from Sweden resembles a male from Togo more than a female from Sweden; a philosopher from Peru resembles a philosopher from Scotland more than a janitor from Peru; and so on).
The problem of overcausation does not lie with the journalist, but with the public. Nobody would pay one dollar to buy a series of abstract statistics reminiscent of a boring college lecture. We want to be told stories, and there is nothing wrong with that—except that we should check more thoroughly whether the story provides consequential distortions of reality. Could it be that fiction reveals truth while nonfiction is a harbor for the liar? Could it be that fables and stories are closer to the truth than is the thoroughly fact-checked ABC News? Just consider that the newspapers try to get impeccable facts, but weave them into a narrative in such a way as to convey the impression of causality (and knowledge). There are fact-checkers, not intellect-checkers. Alas.
But there is no reason to single out journalists. Academics in narrative disciplines do the same thing, but dress it up in a formal language—we will catch up to them in Chapter 10, on prediction.
Besides narrative and causality, journalists and public intellectuals of the sound-bite variety do not make the world simpler. Instead, they almost invariably make it look far more complicated than it is. The next time you are asked to discuss world events, plead ignorance, and give the arguments I offered in this chapter casting doubt on the visibility of the immediate cause. You will be told that “you overanalyze,” or that “you are too complicated.” All you will be saying is that you don’t know!
Dispassionate Science
Now, if you think that science is an abstract subject free of sensationalism and distortions, I have some sobering news. Empirical researchers have found evidence that scientists too are vulnerable to narratives, emphasizing titles and “sexy” attention-grabbing punch lines over more substantive matters. They too are human and get their attention from sensational matters. The way to remedy this is through meta-analyses of scientific studies, in which an überresearcher peruses the entire literature, which includes the less-advertised articles, and produces a synthesis.
THE SENSATIONAL AND THE BLACK SWAN
Let us see how narrativity affects our understanding of the Black Swan. Narrative, as well as its associated mechanism of salience of the sensational fact, can mess up our projection of the odds. Take the following experiment conducted by Kahneman and Tversky, the pair introduced in the previous chapter: the subjects were forecasting professionals who were asked to imagine the following scenarios and estimate their odds.
A massive flood somewhere in America in which more than a thousand people die.
An earthquake in California, causing massive flooding, in which more than a thousand people die.
Respondents estimated the first event to be less likely than the second, even though the second is a special case of the first (an earthquake flood in California is one of the floods that could happen somewhere in America), so it cannot be the more probable of the two. An earthquake in California, however, is a readily imaginable cause, which greatly increases the mental availability—hence the assessed probability—of the flood scenario.
Likewise, if I asked you how many cases of lung cancer are likely to take place in the country, you would supply some number, say half a million. Now, if instead I asked you how many cases of lung cancer are likely to take place because of smoking, odds are that you would give me a much higher number (I would guess more than twice as high). Adding the because makes these matters far more plausible, and far more likely. Cancer from smoking seems more likely than cancer without a cause attached to it—an unspecified cause means no cause at all.
I return to the example of E. M. Forster’s plot from earlier in this chapter, but seen from the standpoint of probability. Which of these two statements seems more likely?
Joey seemed happily married. He killed his wife.
Joey seemed happily married. He killed his wife to get her inheritance.
Clearly the second statement seems more likely at first blush, which is a pure mistake of logic, since the first, being broader, can accommodate more causes, such as he killed his wife because he went mad, because she cheated with both the postman and the ski instructor, because he entered a state of delusion and mistook her for a financial forecaster.
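To spell out the logic in notation (mine, not the book’s): write A for “Joey killed his wife” and B for “Joey killed his wife to get her inheritance.” Every B-scenario is also an A-scenario, so no coherent assignment of probabilities can rank B above A:

```latex
% B entails A: the conjunction can never beat the broader event.
P(B) = P(A \wedge \text{inheritance motive}) \le P(A)
```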
All this can lead to pathologies in our decision making. How?
Just imagine that, as shown by Paul Slovic and his collaborators, people are more likely to pay for terrorism insurance than for plain insurance (which covers, among other things, terrorism).
The Black Swans we imagine, discuss, and worry about do not resemble those likely to be Black Swans. We worry about the wrong “improbable” events, as we will see next.
Black Swan Blindness
The first question about the paradox of the perception of Black Swans is as follows: How is it that some Black Swans are overblown in our minds when the topic of this book is that we mainly neglect Black Swans?
The answer is that there are two varieties of rare events: a) the narrated Black Swans, those that are present in the current discourse and that you are likely to hear about on television, and b) those nobody talks about, since they escape models—those that you would feel ashamed discussing in public because they do not seem plausible. I can safely say that it is entirely compatible with human nature that the incidences of Black Swans would be overestimated in the first case, but severely underestimated in the second one.
Indeed, lottery buyers overestimate their chances of winning because they visualize such a potent payoff—in fact, they are so blind to the odds that they treat odds of one in a thousand and one in a million almost in the same way.
Much of the empirical research agrees with this pattern of overestimation and underestimation of Black Swans. Kahneman and Tversky initially showed that people overreact to low-probability outcomes when you discuss the event with them, when you make them aware of it. If you ask someone, “What is the probability of death from a plane crash?” for instance, they will overestimate it. However, Slovic and his colleagues found that people neglect these same highly improbable events in their insurance purchases. They call it the “preference for insuring against probable small losses”—at the expense of the less probable but larger-impact ones.
Finally, after years of searching for empirical tests of our scorn of the abstract, I found researchers in Israel who ran the experiments I had been waiting for. Greg Barron and Ido Erev provide experimental evidence that agents underweight small probabilities when they engage in sequential experiments in which they derive the probabilities themselves, when they are not supplied with the odds. If you draw from an urn with a very small number of red balls and a high number of black ones, and if you do not have a clue about the relative proportions, you are likely to underestimate the number of red balls. It is only when you are supplied with their frequency—say, by telling you that 3 percent of the balls are red—that you overestimate it in your betting decision.
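To see why learning from experience produces underweighting, here is a minimal simulation; it is my own sketch, not Barron and Erev’s actual protocol. The 3 percent figure comes from the text above; the twenty-draw sample length is an assumption.

```python
# Sketch of experience-based probability learning (my construction,
# not Barron and Erev's protocol): each learner draws twenty balls
# from an urn in which 3 percent of the balls are red.
import random
import statistics

random.seed(42)
P_RED = 0.03      # true share of red balls (figure from the text)
DRAWS = 20        # assumed length of a short learning run
TRIALS = 10_000   # number of simulated learners

experienced = []
for _ in range(TRIALS):
    reds = sum(random.random() < P_RED for _ in range(DRAWS))
    experienced.append(reds / DRAWS)  # the frequency this learner saw

never_saw_red = sum(f == 0 for f in experienced) / TRIALS
print(f"Learners who never saw a red ball: {never_saw_red:.0%}")              # about 54%
print(f"Median experienced frequency: {statistics.median(experienced):.3f}")  # 0.000
```

Since 0.97 raised to the twentieth power is about 0.54, a majority of short runs contain no red ball at all, so most experience-based learners behave as if the odds were zero, whereas a learner told “3 percent” outright tends, per Kahneman and Tversky, to overweight it.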
I’ve spent a lot of time wondering how we can be so myopic and short-termist yet survive in an environment that is not entirely from Mediocristan. One day, looking at the gray beard that makes me look ten years older than I am and thinking about the pleasure I derive from exhibiting it, I realized the following. Respect for elders in many societies might be a kind of compensation for our short-term memory. The word senate comes from senatus, “aged” in Latin; sheikh in Arabic means both a member of the ruling elite and “elder.” Elders are repositories of complicated inductive learning that includes information about rare events. Elders can scare us with stories—which is why we become overexcited when we think of a specific Black Swan. I was excited to find out that this also holds true in the animal kingdom: a paper in Science showed that elephant matriarchs play the role of superadvisers on rare events.
We learn from repetition—at the expense of events that have not happened before. Events that are nonrepeatable are ignored before their occurrence, and overestimated after (for a while). After a Black Swan, such as September 11, 2001, people expect it to recur when in fact the odds of that happening have arguably been lowered. We like to think about specific and known Black Swans when in fact the very nature of randomness lies in its abstraction. As I said in the Prologue, it is the wrong definition of a god.
The economist Hyman Minsky sees the cycles of risk taking in the economy as following a pattern: stability and absence of crises encourage risk taking, complacency, and lowered awareness of the possibility of problems. Then a crisis occurs, resulting in people being shell-shocked and scared of investing their resources. Strangely, both Minsky and his school, dubbed Post-Keynesian, and his opponents, the libertarian “Austrian” economists, have the same analysis, except that the first group recommends governmental intervention to smooth out the cycle, while the second believes that civil servants should not be trusted to deal with such matters. While both schools of thought seem to fight each other, they both emphasize fundamental uncertainty and stand outside the mainstream economic departments (though they have large followings among businessmen and nonacademics). No doubt this emphasis on fundamental uncertainty bothers the Platonifiers.
All the tests of probability I discussed in this section are important; they show how we are fooled by the rarity of Black Swans but not by the role they play in the aggregate, their impact. In a preliminary study, the psychologist Dan Goldstein and I subjected students at the London Business School to examples from two domains, Mediocristan and Extremistan. We selected height, weight, and Internet hits per website. The subjects were good at guessing the role of rare events in Mediocristan-style environments. But their intuitions failed when it came to variables outside Mediocristan, showing that we are effectively not skilled at intuitively gauging the impact of the improbable, such as the contribution of a blockbuster to total book sales. In one experiment, subjects underestimated the effect of a rare event by thirty-three times.
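As a coda, here is a toy contrast between the two domains; it is my own construction with made-up parameters (Gaussian heights, and website hits modeled as a Pareto tail with exponent 1.1), not the data from the Goldstein study.

```python
# Toy contrast between Mediocristan and Extremistan (parameters are
# mine, not the study's): what share of the total does the single
# largest observation account for?
import random

random.seed(7)
N = 10_000

# Mediocristan: heights in centimeters, roughly Gaussian.
heights = [random.gauss(170, 10) for _ in range(N)]
print(f"Tallest person's share of total height: {max(heights) / sum(heights):.4%}")

# Extremistan: hits per website, modeled as a heavy Pareto tail.
hits = [random.paretovariate(1.1) for _ in range(N)]
print(f"Biggest site's share of total hits:     {max(hits) / sum(hits):.1%}")
```

In the Gaussian column the largest observation contributes barely more than one ten-thousandth of the total; in the Pareto column a single site can carry a large, often double-digit, percentage of all hits, which is exactly the kind of impact that intuitions trained in Mediocristan miss.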