The Black Swan


by Nassim Nicholas Taleb


  The central idea of this book concerns our blindness with respect to randomness, particularly the large deviations: Why do we, scientists or nonscientists, hotshots or regular Joes, tend to see the pennies instead of the dollars? Why do we keep focusing on the minutiae, not the possible significant large events, in spite of the obvious evidence of their huge influence? And, if you follow my argument, why does reading the newspaper actually decrease your knowledge of the world?

  It is easy to see that life is the cumulative effect of a handful of significant shocks. It is not so hard to identify the role of Black Swans, from your armchair (or bar stool). Go through the following exercise. Look into your own existence. Count the significant events, the technological changes, and the inventions that have taken place in our environment since you were born and compare them to what was expected before their advent. How many of them came on a schedule? Look into your own personal life, to your choice of profession, say, or meeting your mate, your exile from your country of origin, the betrayals you faced, your sudden enrichment or impoverishment. How often did these things occur according to plan?

  What You Do Not Know

  Black Swan logic makes what you don’t know far more relevant than what you do know.* Consider that many Black Swans can be caused and exacerbated by their being unexpected.

  Think of the terrorist attack of September 11, 2001: had the risk been reasonably conceivable on September 10, it would not have happened. If such a possibility were deemed worthy of attention, fighter planes would have circled the sky above the twin towers, airplanes would have had locked bulletproof doors, and the attack would not have taken place, period. Something else might have taken place. What? I don’t know.

  Isn’t it strange to see an event happening precisely because it was not supposed to happen? What kind of defense do we have against that? Whatever you come to know (that New York is an easy terrorist target, for instance) may become inconsequential if your enemy knows that you know it. It may be odd that, in such a strategic game, what you know can be truly inconsequential.*

  This extends to all businesses. Think about the “secret recipe” to making a killing in the restaurant business. If it were known and obvious, then someone next door would have already come up with the idea and it would have become generic. The next killing in the restaurant industry needs to be an idea that is not easily conceived of by the current population of restaurateurs. It has to be at some distance from expectations. The more unexpected the success of such a venture, the smaller the number of competitors, and the more successful the entrepreneur who implements the idea. The same applies to the shoe and the book businesses—or any kind of entrepreneurship. The same applies to scientific theories—nobody has interest in listening to trivialities. The payoff of a human venture is, in general, inversely proportional to what it is expected to be.

  Consider the Indian Ocean tsunami of December 2004. Had it been expected, it would not have caused the damage it did—the areas affected would have been less populated, an early warning system would have been put in place. What you know cannot really hurt you.

  Experts and “Empty Suits”

  The inability to predict outliers implies the inability to predict the course of history, given the share of these events in the dynamics of events.

  But we act as though we are able to predict historical events, or, even worse, as if we are able to change the course of history. We produce thirty-year projections of social security deficits and oil prices without realizing that we cannot even predict these for next summer—our cumulative prediction errors for political and economic events are so monstrous that every time I look at the empirical record I have to pinch myself to verify that I am not dreaming. What is surprising is not the magnitude of our forecast errors, but our absence of awareness of it. This is all the more worrisome when we engage in deadly conflicts: wars are fundamentally unpredictable (and we do not know it). Owing to this misunderstanding of the causal chains between policy and actions, we can easily trigger Black Swans thanks to aggressive ignorance—like a child playing with a chemistry kit.

  Our inability to predict in environments subjected to the Black Swan, coupled with a general lack of the awareness of this state of affairs, means that certain professionals, while believing they are experts, are in fact not. Based on their empirical record, they do not know more about their subject matter than the general population, but they are much better at narrating—or, worse, at smoking you with complicated mathematical models. They are also more likely to wear a tie.

  Black Swans being unpredictable, we need to adjust to their existence (rather than naïvely try to predict them). There are so many things we can do if we focus on antiknowledge, or what we do not know. Among many other benefits, you can set yourself up to collect serendipitous Black Swans (of the positive kind) by maximizing your exposure to them. Indeed, in some domains—such as scientific discovery and venture capital investments—there is a disproportionate payoff from the unknown, since you typically have little to lose and plenty to gain from a rare event. We will see that, contrary to social-science wisdom, almost no discovery, no technologies of note, came from design and planning—they were just Black Swans. The strategy for the discoverers and entrepreneurs is to rely less on top-down planning and focus on maximum tinkering and recognizing opportunities when they present themselves. So I disagree with the followers of Marx and those of Adam Smith: the reason free markets work is because they allow people to be lucky, thanks to aggressive trial and error, not by giving rewards or “incentives” for skill. The strategy is, then, to tinker as much as possible and try to collect as many Black Swan opportunities as you can.

  Learning to Learn

  Another related human impediment comes from excessive focus on what we do know: we tend to learn the precise, not the general.

  What did people learn from the 9/11 episode? Did they learn that some events, owing to their dynamics, stand largely outside the realm of the predictable? No. Did they learn the built-in defect of conventional wisdom? No. What did they figure out? They learned precise rules for avoiding Islamic prototerrorists and tall buildings. Many keep reminding me that it is important for us to be practical and take tangible steps rather than to “theorize” about knowledge. The story of the Maginot Line shows how we are conditioned to be specific. The French, after the Great War, built a wall along the previous German invasion route to prevent reinvasion—Hitler just (almost) effortlessly went around it. The French had been excellent students of history; they just learned with too much precision. They were too practical and exceedingly focused for their own safety.

  We do not spontaneously learn that we don’t learn that we don’t learn. The problem lies in the structure of our minds: we don’t learn rules, just facts, and only facts. Metarules (such as the rule that we have a tendency to not learn rules) we don’t seem to be good at getting. We scorn the abstract; we scorn it with passion.

  Why? It is necessary here, as it is my agenda in the rest of this book, both to stand conventional wisdom on its head and to show how inapplicable it is to our modern, complex, and increasingly recursive environment.*

  But there is a deeper question: What are our minds made for? It looks as if we have the wrong user’s manual. Our minds do not seem made to think and introspect; if they were, things would be easier for us today, but then we would not be here today and I would not have been here to talk about it—my counterfactual, introspective, and hard-thinking ancestor would have been eaten by a lion while his nonthinking but faster-reacting cousin would have run for cover. Consider that thinking is time-consuming and generally a great waste of energy, that our predecessors spent more than a hundred million years as nonthinking mammals and that in the blip in our history during which we have used our brain we have used it on subjects too peripheral to matter. Evidence shows that we do much less thinking than we believe we do—except, of course, when we think about it.

  A NEW KIND OF INGRATITUDE

  It is quite saddening to think of those people who have been mistreated by history. There were the poètes maudits, like Edgar Allan Poe or Arthur Rimbaud, scorned by society and later worshipped and force-fed to schoolchildren. (There are even schools named after high school dropouts.) Alas, this recognition came a little too late for the poet to get a serotonin kick out of it, or to prop up his romantic life on earth. But there are even more mistreated heroes—the very sad category of those who we do not know were heroes, who saved our lives, who helped us avoid disasters. They left no traces and did not even know that they were making a contribution. We remember the martyrs who died for a cause that we knew about, never those no less effective in their contribution but whose cause we were never aware of—precisely because they were successful. Our ingratitude toward the poètes maudits fades completely in front of this other type of thanklessness. This is a far more vicious kind of ingratitude: the feeling of uselessness on the part of the silent hero. I will illustrate with the following thought experiment.

  Assume that a legislator with courage, influence, intellect, vision, and perseverance manages to enact a law that goes into universal effect and employment on September 10, 2001; it imposes the continuously locked bulletproof doors in every cockpit (at high costs to the struggling airlines)—just in case terrorists decide to use planes to attack the World Trade Center in New York City. I know this is lunacy, but it is just a thought experiment (I am aware that there may be no such thing as a legislator with intellect, courage, vision, and perseverance; this is the point of the thought experiment). The legislation is not a popular measure among the airline personnel, as it complicates their lives. But it would certainly have prevented 9/11.

  The person who imposed locks on cockpit doors gets no statues in public squares, not so much as a quick mention of his contribution in his obituary. “Joe Smith, who helped avoid the disaster of 9/11, died of complications of liver disease.” Seeing how superfluous his measure was, and how it squandered resources, the public, with great help from airline pilots, might well boot him out of office. Vox clamantis in deserto. He will retire depressed, with a great sense of failure. He will die with the impression of having done nothing useful. I wish I could go attend his funeral, but, reader, I can’t find him. And yet, recognition can be quite a pump. Believe me, even those who genuinely claim that they do not believe in recognition, and that they separate labor from the fruits of labor, actually get a serotonin kick from it. See how the silent hero is rewarded: even his own hormonal system will conspire to offer no reward.

  Now consider again the events of 9/11. In their aftermath, who got the recognition? Those you saw in the media, on television performing heroic acts, and those whom you saw trying to give you the impression that they were performing heroic acts. The latter category includes someone like the New York Stock Exchange chairman Richard Grasso, who “saved the stock exchange” and received a huge bonus for his contribution (the equivalent of several thousand average salaries). All he had to do was be there to ring the opening bell on television—the television that, we will see, is the carrier of unfairness and a major cause of Black Swan blindness.

  Who gets rewarded, the central banker who avoids a recession or the one who comes to “correct” his predecessors’ faults and happens to be there during some economic recovery? Who is more valuable, the politician who avoids a war or the one who starts a new one (and is lucky enough to win)?

  It is the same logic reversal we saw earlier with the value of what we don’t know; everybody knows that you need more prevention than treatment, but few reward acts of prevention. We glorify those who left their names in history books at the expense of those contributors about whom our books are silent. We humans are not just a superficial race (this may be curable to some extent); we are a very unfair one.

  LIFE IS VERY UNUSUAL

  This is a book about uncertainty; to this author, the rare event equals uncertainty. This may seem like a strong statement—that we need to principally study the rare and extreme events in order to figure out common ones—but I will make myself clear as follows. There are two possible ways to approach phenomena. The first is to rule out the extraordinary and focus on the “normal.” The examiner leaves aside “outliers” and studies ordinary cases. The second approach is to consider that in order to understand a phenomenon, one needs first to consider the extremes—particularly if, like the Black Swan, they carry an extraordinary cumulative effect.

  I don’t particularly care about the usual. If you want to get an idea of a friend’s temperament, ethics, and personal elegance, you need to look at him under the tests of severe circumstances, not under the regular rosy glow of daily life. Can you assess the danger a criminal poses by examining only what he does on an ordinary day? Can we understand health without considering wild diseases and epidemics? Indeed the normal is often irrelevant.

  Almost everything in social life is produced by rare but consequential shocks and jumps; all the while almost everything studied about social life focuses on the “normal,” particularly with “bell curve” methods of inference that tell you close to nothing. Why? Because the bell curve ignores large deviations, cannot handle them, yet makes us confident that we have tamed uncertainty. Its nickname in this book is GIF, Great Intellectual Fraud.

  PLATO AND THE NERD

  At the start of the Jewish revolt in the first century of our era, much of the Jews’ anger was caused by the Romans’ insistence on putting a statue of Caligula in their temple in Jerusalem in exchange for placing a statue of the Jewish god Yahweh in Roman temples. The Romans did not realize that what the Jews (and the subsequent Levantine monotheists) meant by god was abstract, all embracing, and had nothing to do with the anthropomorphic, too human representation that Romans had in mind when they said deus. Critically, the Jewish god did not lend himself to symbolic representation. Likewise, what many people commoditize and label as “unknown,” “improbable,” or “uncertain” is not the same thing to me; it is not a concrete and precise category of knowledge, a nerdified field, but its opposite; it is the lack (and limitations) of knowledge. It is the exact contrary of knowledge; one should learn to avoid using terms made for knowledge to describe its opposite.

  What I call Platonicity, after the ideas (and personality) of the philosopher Plato, is our tendency to mistake the map for the territory, to focus on pure and well-defined “forms,” whether objects, like triangles, or social notions, like utopias (societies built according to some blueprint of what “makes sense”), even nationalities. When these ideas and crisp constructs inhabit our minds, we privilege them over other less elegant objects, those with messier and less tractable structures (an idea that I will elaborate progressively throughout this book).

  Platonicity is what makes us think that we understand more than we actually do. But this does not happen everywhere. I am not saying that Platonic forms don’t exist. Models and constructions, these intellectual maps of reality, are not always wrong; they are wrong only in some specific applications. The difficulty is that a) you do not know beforehand (only after the fact) where the map will be wrong, and b) the mistakes can lead to severe consequences. These models are like potentially helpful medicines that carry random but very severe side effects.

  The Platonic fold is the explosive boundary where the Platonic mind-set enters in contact with messy reality, where the gap between what you know and what you think you know becomes dangerously wide. It is here that the Black Swan is produced.

  TOO DULL TO WRITE ABOUT

  It was said that the artistic filmmaker Luchino Visconti made sure that when actors pointed at a closed box meant to contain jewels, there were real jewels inside. It could be an effective way to make actors live their part. I think that Visconti’s gesture may also come out of a plain sense of aesthetics and a desire for authenticity—somehow it may not feel right to fool the viewer.

  This is an essay expressing a primary idea; it is neither the recycling nor repackaging of other people’s thoughts. An essay is an impulsive meditation, not science reporting. I apologize if I skip a few obvious topics in this book out of the conviction that what is too dull for me to write about might be too dull for the reader to read. (Also, to avoid dullness may help to filter out the nonessential.)

  Talk is cheap. Someone who took too many philosophy classes in college (or perhaps not enough) might object that the sighting of a Black Swan does not invalidate the theory that all swans are white since such a black bird is not technically a swan since whiteness to him may be the essential property of a swan. Indeed those who read too much Wittgenstein (and writings about comments about Wittgenstein) may be under the impression that language problems are important. They may certainly be important to attain prominence in philosophy departments, but they are something we, practitioners and decision makers in the real world, leave for the weekend. As I explain in the chapter called “The Uncertainty of the Phony,” for all of their intellectual appeal, these niceties have no serious implications Monday to Friday as opposed to more substantial (but neglected) matters. People in the classroom, not having faced many true situations of decision making under uncertainty, do not realize what is important and what is not—even those who are scholars of uncertainty (or particularly those who are scholars of uncertainty). What I call the practice of uncertainty can be piracy, commodity speculation, professional gambling, working in some branches of the Mafia, or just plain serial entrepreneurship. Thus I rail against “sterile skepticism,” the kind we can do nothing about, and against the exceedingly theoretical language problems that have made much of modern philosophy largely irrelevant to what is derisively called the “general public.” (In the past, for better or worse, those rare philosophers and thinkers who were not self-standing depended on a patron’s support. Today academics in abstract disciplines depend on one another’s opinion, without external checks, with the severe occasional pathological result of turning their pursuits into insular prowess-showing contests. Whatever the shortcomings of the old system, at least it enforced some standard of relevance.)
