
The Black Swan


by Nassim Nicholas Taleb


  The Skeptic, Friend of Religion

  While the ancient skeptics advocated learned ignorance as the first step in honest inquiries toward truth, later medieval skeptics, both Moslems and Christians, used skepticism as a tool to avoid accepting what today we call science. Belief in the importance of the Black Swan problem, worries about induction, and skepticism can make some religious arguments more appealing, though in stripped-down, anticlerical, theistic form. This idea of relying on faith, not reason, was known as fideism. So there is a tradition of Black Swan skeptics who found solace in religion, best represented by Pierre Bayle, a French-speaking Protestant erudite, philosopher, and theologian, who, exiled in Holland, built an extensive philosophical architecture related to the Pyrrhonian skeptics. Bayle’s writings exerted some considerable influence on Hume, introducing him to ancient skepticism—to the point where Hume took ideas wholesale from Bayle. Bayle’s Dictionnaire historique et critique was the most read piece of scholarship of the eighteenth century, but like many of my French heroes (such as Frédéric Bastiat), Bayle does not seem to be part of the French curriculum and is nearly impossible to find in the original French language. Nor is the fourteenth-century Algazelist Nicolas of Autrecourt.

  Indeed, it is not a well-known fact that the most complete exposition of the ideas of skepticism, until recently, remains the work of a powerful Catholic bishop who was an august member of the French Academy. Pierre-Daniel Huet wrote his Philosophical Treatise on the Weaknesses of the Human Mind in 1690, a remarkable book that tears through dogmas and questions human perception. Huet presents arguments against causality that are quite potent—he states, for instance, that any event can have an infinity of possible causes.

  Both Huet and Bayle were erudites and spent their lives reading. Huet, who lived into his nineties, had a servant follow him with a book to read aloud to him during meals and breaks and thus avoid lost time. He was deemed the most read person in his day. Let me insist that erudition is important to me. It signals genuine intellectual curiosity. It accompanies an open mind and the desire to probe the ideas of others. Above all, an erudite can be dissatisfied with his own knowledge, and such dissatisfaction is a wonderful shield against Platonicity, the simplifications of the five-minute manager, or the philistinism of the overspecialized scholar. Indeed, scholarship without erudition can lead to disasters.

  I Don’t Want to Be a Turkey

  But promoting philosophical skepticism is not quite the mission of this book. If awareness of the Black Swan problem can lead us into withdrawal and extreme skepticism, I take here the exact opposite direction. I am interested in deeds and true empiricism. So, this book was not written by a Sufi mystic, or even by a skeptic in the ancient or medieval sense, or even (we will see) in a philosophical sense, but by a practitioner whose principal aim is to not be a sucker in things that matter, period.

  Hume was radically skeptical in the philosophical cabinet, but abandoned such ideas when it came to daily life, since he could not handle them. I am doing here the exact opposite: I am skeptical in matters that have implications for daily life. In a way, all I care about is making a decision without being the turkey.

  Many middlebrows have asked me over the past twenty years, “How do you, Taleb, cross the street given your extreme risk consciousness?” or have stated the more foolish “You are asking us to take no risks.” Of course I am not advocating total risk phobia (we will see that I favor an aggressive type of risk taking): all I will be showing you in this book is how to avoid crossing the street blindfolded.

  They Want to Live in Mediocristan

  I have just presented the Black Swan problem in its historical form: the central difficulty of generalizing from available information, or of learning from the past, the known, and the seen. I have also presented the list of those who, I believe, are the most relevant historical figures.

  You can see that it is extremely convenient for us to assume that we live in Mediocristan. Why? Because it allows you to rule out these Black Swan surprises! The Black Swan problem either does not exist or is of small consequence if you live in Mediocristan!

  Such an assumption magically drives away the problem of induction, which since Sextus Empiricus has been plaguing the history of thinking. The statistician can do away with epistemology.

  Wishful thinking! We do not live in Mediocristan, so the Black Swan needs a different mentality. As we cannot push the problem under the rug, we will have to dig deeper into it. This is not a terminal difficulty—and we can even benefit from it.

  • • •

  Now, there are other themes arising from our blindness to the Black Swan:

  We focus on preselected segments of the seen and generalize from it to the unseen: the error of confirmation.

  We fool ourselves with stories that cater to our Platonic thirst for distinct patterns: the narrative fallacy.

  We behave as if the Black Swan does not exist: human nature is not programmed for Black Swans.

  What we see is not necessarily all that is there. History hides Black Swans from us and gives us a mistaken idea about the odds of these events: this is the distortion of silent evidence.

  We “tunnel”: that is, we focus on a few well-defined sources of uncertainty, on too specific a list of Black Swans (at the expense of the others that do not easily come to mind).

  I will discuss each of the points in the next five chapters. Then, in the conclusion of Part One, I will show how, in effect, they are the same topic.

  * I am safe since I never wear ties (except at funerals).

  * Since Russell’s original example used a chicken, this is the enhanced North American adaptation.

  * Statements like those of Captain Smith are so common that it is not even funny. In September 2006, a fund called Amaranth, ironically named after a flower that “never dies,” had to shut down after it lost close to $7 billion in a few days, the most impressive loss in trading history (another irony: I shared office space with the traders). A few days prior to the event, the company made a statement to the effect that investors should not worry because they had twelve risk managers—people who use models of the past to produce risk measures on the odds of such an event. Even if they had one hundred and twelve risk managers, there would be no meaningful difference; they still would have blown up. Clearly you cannot manufacture more information than the past can deliver; if you buy one hundred copies of The New York Times, I am not too certain that it would help you gain incremental knowledge of the future. We just don’t know how much information there is in the past.

  * The main tragedy of the high impact–low probability event comes from the mismatch between the time taken to compensate someone and the time one needs to be comfortable that he is not making a bet against the rare event. People have an incentive to bet against it, or to game the system since they can be paid a bonus reflecting their yearly performance when in fact all they are doing is producing illusory profits that they will lose back one day. Indeed, the tragedy of capitalism is that since the quality of the returns is not observable from past data, owners of companies, namely shareholders, can be taken for a ride by the managers who show returns and cosmetic profitability but in fact might be taking hidden risks.

  Chapter Five

  CONFIRMATION SHMONFIRMATION!

  I have so much evidence—Can Zoogles be (sometimes) Boogles?—Corroboration shmorroboration—Popper’s idea

  As much as it is ingrained in our habits and conventional wisdom, confirmation can be a dangerous error.

  Assume I told you that I had evidence that the football player O. J. Simpson (who was accused of killing his wife in the 1990s) was not a criminal. Look, the other day I had breakfast with him and he didn’t kill anybody. I am serious, I did not see him kill a single person. Wouldn’t that confirm his innocence? If I said such a thing you would certainly call a shrink, an ambulance, or perhaps even the police, since you might think that I spent too much time in trading rooms or in cafés thinking about this Black Swan topic, and that my logic may represent such an immediate danger to society that I myself need to be locked up immediately.

  You would have the same reaction if I told you that I took a nap the other day on the railroad track in New Rochelle, New York, and was not killed. Hey, look at me, I am alive, I would say, and that is evidence that lying on train tracks is risk-free. Yet consider the following. Look again at Figure 1 in Chapter 4; someone who observed the turkey’s first thousand days (but not the shock of the thousand and first) would tell you, and rightly so, that there is no evidence of the possibility of large events, i.e., Black Swans. You are likely to confuse that statement, however, particularly if you do not pay close attention, with the statement that there is evidence of no possible Black Swans. Even though it is in fact vast, the logical distance between the two assertions will seem very narrow in your mind, so that one can be easily substituted for the other. Ten days from now, if you manage to remember the first statement at all, you will be likely to retain the second, inaccurate version—that there is proof of no Black Swans. I call this confusion the round-trip fallacy, since these statements are not interchangeable.

  Such confusion of the two statements partakes of a trivial, very trivial (but crucial), logical error—but we are not immune to trivial, logical errors, nor are professors and thinkers particularly immune to them (complicated equations do not tend to cohabit happily with clarity of mind). Unless we concentrate very hard, we are likely to unwittingly simplify the problem because our minds routinely do so without our knowing it.

  It is worth a deeper examination here.

  Many people confuse the statement “almost all terrorists are Moslems” with “almost all Moslems are terrorists.” Assume that the first statement is true, that 99 percent of terrorists are Moslems. This would mean that only about .001 percent of Moslems are terrorists, since there are more than one billion Moslems and only, say, ten thousand terrorists, one in a hundred thousand. So the logical mistake makes you (unconsciously) overestimate the odds of a randomly drawn individual Moslem person (between the age of, say, fifteen and fifty) being a terrorist by close to fifty thousand times!
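  The arithmetic behind that factor is easy to reproduce. Below is a minimal back-of-the-envelope sketch in Python using only the rough figures quoted above; the size of the fifteen-to-fifty age band is an assumption made purely for illustration.

```python
# Back-of-the-envelope check of the round-trip fallacy, using the rough
# figures quoted above (illustrative assumptions, not real data).

terrorists = 10_000                   # "say, ten thousand terrorists"
moslems_total = 1_000_000_000         # "more than one billion Moslems"
moslems_aged_15_to_50 = 500_000_000   # assumed size of the fifteen-to-fifty age band

p_moslem_given_terrorist = 0.99       # "99 percent of terrorists are Moslems"

# The two statements people confuse:
p_terrorist_given_moslem = terrorists * p_moslem_given_terrorist / moslems_total
p_terrorist_given_moslem_15_50 = terrorists * p_moslem_given_terrorist / moslems_aged_15_to_50

print(f"P(Moslem | terrorist)             ~ {p_moslem_given_terrorist:.0%}")
print(f"P(terrorist | Moslem)             ~ 1 in {1 / p_terrorist_given_moslem:,.0f}")
print(f"P(terrorist | Moslem, aged 15-50) ~ 1 in {1 / p_terrorist_given_moslem_15_50:,.0f}")

# Mistaking the first statement for the second inflates the perceived odds by:
print(f"overestimate factor ~ {p_moslem_given_terrorist / p_terrorist_given_moslem_15_50:,.0f}x")
```

  With these numbers the first probability is near one, the second near one in fifty thousand to one in a hundred thousand, which is where the factor of roughly fifty thousand comes from.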

  The reader might see in this round-trip fallacy the unfairness of stereotypes—minorities in urban areas in the United States have suffered from the same confusion: even if most criminals come from their ethnic subgroup, most of their ethnic subgroup are not criminals, but they still suffer from discrimination by people who should know better.

  “I never meant to say that the Conservatives are generally stupid. I meant to say that stupid people are generally Conservative,” John Stuart Mill once complained. This problem is chronic: if you tell people that the key to success is not always skills, they think that you are telling them that it is never skills, always luck.

  Our inferential machinery, that which we use in daily life, is not made for a complicated environment in which a statement changes markedly when its wording is slightly modified. Consider that in a primitive environment there is no consequential difference between the statements most killers are wild animals and most wild animals are killers. There is an error here, but it is almost inconsequential. Our statistical intuitions have not evolved for a habitat in which these subtleties can make a big difference.

  Zoogles Are Not All Boogles

  All zoogles are boogles. You saw a boogle. Is it a zoogle? Not necessarily, since not all boogles are zoogles; adolescents who make a mistake in answering this kind of question on their SAT test might not make it to college. Yet another person can get very high scores on the SATs and still feel a chill of fear when someone from the wrong side of town steps into the elevator. This inability to automatically transfer knowledge and sophistication from one situation to another, or from theory to practice, is a quite disturbing attribute of human nature.

  Let us call it the domain specificity of our reactions. By domain-specific I mean that our reactions, our mode of thinking, our intuitions, depend on the context in which the matter is presented, what evolutionary psychologists call the “domain” of the object or the event. The classroom is a domain; real life is another. We react to a piece of information not on its logical merit, but on the basis of which framework surrounds it, and how it registers with our social-emotional system. Logical problems approached one way in the classroom might be treated differently in daily life. Indeed they are treated differently in daily life.

  Knowledge, even when it is exact, does not often lead to appropriate actions because we tend to forget what we know, or forget how to process it properly if we do not pay attention, even when we are experts. Statisticians, it has been shown, tend to leave their brains in the classroom and engage in the most trivial inferential errors once they are let out on the streets. In 1971, the psychologists Danny Kahneman and Amos Tversky plied professors of statistics with statistical questions not phrased as statistical questions. One was similar to the following (changing the example for clarity): Assume that you live in a town with two hospitals—one large, the other small. On a given day 60 percent of those born in one of the two hospitals are boys. Which hospital is it likely to be? Many statisticians made the equivalent of the mistake (during a casual conversation) of choosing the larger hospital, when in fact the very basis of statistics is that large samples are more stable and should fluctuate less from the long-term average—here, 50 percent for each of the sexes—than smaller samples. These statisticians would have flunked their own exams. During my days as a quant I counted hundreds of such severe inferential mistakes made by statisticians who forgot that they were statisticians.
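  To see why the smaller hospital is the right answer, here is a minimal simulation sketch; the daily birth counts (45 and 15) are assumptions chosen only for illustration, since the question does not fix them.

```python
import random

# Minimal simulation of the two-hospitals question. The daily birth counts
# are assumed for illustration; the point is only that the smaller sample
# strays from the 50 percent long-term average far more often.
def share_of_days_with_60pct_boys(births_per_day: int, days: int = 100_000) -> float:
    """Fraction of simulated days on which at least 60% of births are boys."""
    hits = 0
    for _ in range(days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys >= 0.6 * births_per_day:
            hits += 1
    return hits / days

random.seed(1)
print("large hospital (45 births/day):", share_of_days_with_60pct_boys(45))  # roughly 0.1
print("small hospital (15 births/day):", share_of_days_with_60pct_boys(15))  # roughly 0.3
```

  The smaller hospital, not the larger one, is the one far more likely to record such a lopsided day, which is exactly what the stability of large samples implies.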

  For another illustration of the way we can be ludicrously domain-specific in daily life, go to the luxury Reebok Sports Club in New York City, and look at the number of people who, after riding the escalator for a couple of floors, head directly to the StairMasters.

  This domain specificity of our inferences and reactions works both ways: some problems we can understand in their applications but not in textbooks; others we are better at capturing in the textbook than in the practical application. People can manage to effortlessly solve a problem in a social situation but struggle when it is presented as an abstract logical problem. We tend to use different mental machinery—so-called modules—in different situations: our brain lacks a central all-purpose computer that starts with logical rules and applies them equally to all possible situations.

  And as I’ve said, we can commit a logical mistake in reality but not in the classroom. This asymmetry is best visible in cancer detection. Take doctors examining a patient for signs of cancer; tests are typically done on patients who want to know if they are cured or if there is “recurrence.” (In fact, recurrence is a misnomer; it simply means that the treatment did not kill all the cancerous cells and that these undetected malignant cells have started to multiply out of control.) It is not feasible, in the present state of technology, to examine every single one of the patient’s cells to see if all of them are nonmalignant, so the doctor takes a sample by scanning the body with as much precision as possible. Then she makes an assumption about what she did not see. I was once taken aback when a doctor told me after a routine cancer checkup, “Stop worrying, we have evidence of cure.” “Why?” I asked. “There is evidence of no cancer” was the reply. “How do you know?” I asked. He replied, “The scan is negative.” Yet he went around calling himself doctor!

  An acronym used in the medical literature is NED, which stands for No Evidence of Disease. There is no such thing as END, Evidence of No Disease. Yet my experience discussing this matter with plenty of doctors, even those who publish papers on their results, is that many slip into the round-trip fallacy during conversation.

  Doctors in the midst of the scientific arrogance of the 1960s looked down at mothers’ milk as something primitive, as if it could be replicated by their laboratories—not realizing that mothers’ milk might include useful components that could have eluded their scientific understanding—a simple confusion of absence of evidence of the benefits of mothers’ milk with evidence of absence of the benefits (another case of Platonicity as “it did not make sense” to breast-feed when we could simply use bottles). Many people paid the price for this naïve inference: those who were not breast-fed as infants turned out to be at an increased risk of a collection of health problems, including a higher likelihood of developing certain types of cancer—there had to be in mothers’ milk some necessary nutrients that still elude us. Furthermore, benefits to mothers who breast-feed were also neglected, such as a reduction in the risk of breast cancer.

  Likewise with tonsils: the removal of tonsils may lead to a higher incidence of throat cancer, but for decades doctors never suspected that this “useless” tissue might actually have a use that escaped their detection. The same with the dietary fiber found in fruits and vegetables: doctors in the 1960s found it useless because they saw no immediate evidence of its necessity, and so they created a malnourished generation. Fiber, it turns out, acts to slow down the absorption of sugars in the blood and scrapes the intestinal tract of precancerous cells. Indeed medicine has caused plenty of damage throughout history, owing to this simple kind of inferential confusion.

  I am not saying here that doctors should not have beliefs, only that some kinds of definitive, closed beliefs need to be avoided—this is what Menodotus and his school seemed to be advocating with their brand of skeptical-empirical medicine that avoided theorizing. Medicine has gotten better—but many kinds of knowledge have not.

  Evidence

  By a mental mechanism I call naïve empiricism, we have a natural tendency to look for instances that confirm our story and our vision of the world—these instances are always easy to find. Alas, with tools, and fools, anything can be easy to find. You take past instances that corroborate your theories and you treat them as evidence. For instance, a diplomat will show you his “accomplishments,” not what he failed to do. Mathematicians will try to convince you that their science is useful to society by pointing out instances where it proved helpful, not those where it was a waste of time, or, worse, those numerous mathematical applications that inflicted a severe cost on society owing to the highly unempirical nature of elegant mathematical theories.

 
