The Black Swan


by Nassim Nicholas Taleb


  † A Gaussian distribution is parsimonious (with only two parameters to fit). But the problem of adding layers of possible jumps, each with a different probability, opens up endless possibilities of combinations of parameters.

  ‡ One of the most common (but useless) comments I hear is that some solutions can come from “robust statistics.” I wonder how using these techniques can create information where there is none.

  * One consequence of the absence of “typicality” for an event on causality is as follows: Say an event can cause a “war.” As we saw, such war will still be undefined, since it may kill three people or a billion. So even in situations where we can identify cause and effect, we will know little, since the effect will remain atypical. I had severe problems explaining this to historians (except for Niall Ferguson) and political scientists (except for Jon Elster). Please explain this point (very politely) to your professor of Near and Middle Eastern studies.

  VI

  THE FOURTH QUADRANT, THE SOLUTION TO THAT MOST USEFUL OF PROBLEMS*

  Did Aristotle walk slowly?—Will they follow the principles?—How to manufacture a Ponzi scheme and get credit for it

  It is much more sound to take risks you can measure than to measure the risks you are taking.

  There is a specific spot on the map, the Fourth Quadrant, in which the problem of induction and the pitfalls of empiricism come alive—the place where, I repeat, absence of evidence does not line up with evidence of absence. This section will allow us to base our decisions on sounder epistemological grounds.

  David Freedman, RIP

  First, I need to pay homage to someone to whom knowledge has a large debt. The late Berkeley statistician David Freedman, who perhaps better than anyone uncovered the defects of statistical knowledge, and the inapplicability of some of the methods, sent me a farewell gift. He was supposed to be present at the meeting of the American Statistical Association that I mentioned earlier, but canceled because of illness. But he prepared me for the meeting, with a message that changed the course of the Black Swan idea: be prepared; they will provide you with a certain set of self-serving arguments and you need to respond to them. The arguments were listed in his book in a section called “The Modelers’ Response.” I list most of them below.

  The Modelers’ Response: We know all that. Nothing is perfect. The assumptions are reasonable. The assumptions don’t matter. The assumptions are conservative. You can’t prove the assumptions are wrong. We’re only doing what everybody else does. The decision-maker has to be better off with us than without us. The models aren’t totally useless. You have to do the best you can with the data. You have to make assumptions in order to make progress. You have to give the models the benefit of the doubt. Where’s the harm?

  This gave me the idea of using the approach “This is where your tools work,” instead of the “This is wrong” approach I was using before. The change in style is what earned me the hugs and supply of Diet Coke and helped me get my message across. David’s comments also inspired me to focus more on iatrogenics, harm caused by the need to use quantitative models.

  David Freedman passed away a few weeks after the meeting.* Thank you, David. You were there when the Black Swan needed you. May you and your memory rest in peace.

  Which brings us to the solution. After all this undecidability, the situation is not dire at all. Why? We, simply, can build a map of where these errors are more severe, what to watch out for.

  DECISIONS

  When you look at the generator of events, you can tell a priori which environment can deliver large events (Extremistan) and which environment cannot deliver them (Mediocristan). This is the only a priori assumption we need to make. The only one.

  So that’s that.

  I. The first type of decision is simple, leading to a “binary” exposure: that is, you just care about whether something is true or false. Very true or very false does not bring you additional benefits or damage. Binary exposures do not depend on high-impact events as their payoff is limited. Someone is either pregnant or not pregnant, so if the person is “extremely pregnant” the payoff would be the same as if she were “slightly pregnant.” A statement is “true” or “false” with some confidence interval. (I call these M0 as, more technically, they depend on what is called the zeroth moment, namely on the probability of events, and not on their magnitude—you just care about “raw” probability.) A biological experiment in the laboratory and a bet with a friend about the outcome of a soccer game belong to this category.

  Clearly, binary outcomes are not very prevalent in life; they mostly exist in laboratory experiments and in research papers. In life, payoffs are usually open-ended, or, at least, variable.

  II. The second type of decision is more complex and entails more open-ended exposures. You do not just care about frequency or probability, but about the impact as well, or, even more complex, some function of the impact. So there is another layer of uncertainty, that of the impact. An epidemic or a war can be mild or severe. When you invest you do not care how many times you gain or lose, you care about the cumulative, the expectation: how many times you gain or lose times the amount made or lost. There are even more complex decisions (for instance, when one is involved with debt) but I will skip them here.
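The contrast between the two decision types can be made concrete with a small simulation. This is only an illustrative sketch, not anything from the text: the two bets, their probabilities, and their payoff sizes are invented for the example. The point it demonstrates is the one above: a bet that wins often can still have a far lower expectation than one that rarely wins, which is why M1 decisions must weigh magnitude, not just frequency.

```python
import random

random.seed(42)

def simulate(payoff, trials=100_000):
    """Return (win frequency, average payoff) over repeated trials."""
    outcomes = [payoff() for _ in range(trials)]
    frequency = sum(1 for x in outcomes if x > 0) / trials
    expectation = sum(outcomes) / trials
    return frequency, expectation

# Hypothetical bet A: wins 90% of the time, but each win pays only 1.
bet_a = lambda: 1 if random.random() < 0.9 else -1

# Hypothetical bet B: wins only 10% of the time, but each win pays 100.
bet_b = lambda: 100 if random.random() < 0.1 else -1

freq_a, exp_a = simulate(bet_a)   # frequency ~0.9, expectation ~0.8
freq_b, exp_b = simulate(bet_b)   # frequency ~0.1, expectation ~9.1

# Judged by M0 (raw probability of being "right"), A looks better;
# judged by M1 (the expectation), B is better by an order of magnitude.
```

Being right nine times out of ten is worth less, in expectation, than being right once out of ten with a large payoff attached.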

  We also care about whether:

  A. Event generators belong to Mediocristan (i.e., it is close to impossible for very large deviations to take place), an a priori assumption.

  B. Event generators belong to Extremistan (i.e., very large deviations are possible, or even likely).

  Which provides the four quadrants of the map.
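The difference between the two generators can also be seen numerically. In the sketch below, the distributions and parameters are my own illustration, not the author's: a Gaussian stands in for a height-like (Mediocristan) quantity and a Pareto with a hypothetical tail exponent for a wealth-like (Extremistan) one. In Mediocristan no single observation matters to the total; in Extremistan a single observation can dominate it.

```python
import random

random.seed(0)
N = 100_000

# Mediocristan-style generator: human heights in meters (Gaussian).
heights = [random.gauss(1.7, 0.1) for _ in range(N)]

# Extremistan-style generator: a wealth-like quantity (Pareto).
# The tail exponent 1.1 is a hypothetical value chosen for illustration.
wealth = [random.paretovariate(1.1) for _ in range(N)]

# Share of the grand total contributed by the single largest observation.
share_heights = max(heights) / sum(heights)  # tiny: around 0.001%
share_wealth = max(wealth) / sum(wealth)     # substantial: often several %
```

The tallest person in a sample of a hundred thousand is invisible in the total height; the richest person in a comparable sample of wealth can account for a visible slice of the total. That asymmetry is the a priori distinction between the two quadrant rows.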

  THE FOURTH QUADRANT, A MAP

  First Quadrant. Simple binary payoffs, in Mediocristan: forecasting is safe, life is easy, models work, everyone should be happy. These situations are, unfortunately, more common in laboratories and games than in real life. We rarely observe such payoffs in economic decision making. Examples: some medical decisions (concerning one single patient, not a population), casino bets, prediction markets.

  TABLE 1: TABLEAU OF DECISIONS BY PAYOFF

  M0 “True/False”:

  Medical results for one person (health, not epidemics)
  Psychology experiments (yes/no answers)
  Life/Death (for a single person, not for n persons)
  Symmetric bets in roulette
  Prediction markets
  Casinos

  M1 Expectations:

  Epidemics (number of persons infected)
  Intellectual and artistic success (defined as book sales, citations, etc.)
  Climate effects (any quantitative metric)
  War damage (number of casualties)
  Security, terrorism, natural catastrophes (number of victims)
  General risk management
  Finance: performance of a nonleveraged investment (say, a retirement account)
  Insurance (measures of expected losses)
  Economics (policy)

  Second Quadrant. Complex payoffs in Mediocristan: statistical methods may work satisfactorily, though there are some risks. True, the use of Mediocristan models may not be a panacea, owing to preasymptotics, lack of independence, and model error. There clearly are problems here, but these have been addressed extensively in the literature, particularly by David Freedman.

  Third Quadrant. Simple payoffs in Extremistan: there is little harm in being wrong, because the possibility of extreme events does not impact the payoffs. Don’t worry too much about Black Swans.

  Fourth Quadrant, the Black Swan Domain. Complex payoffs in Extremistan: that is where the problem resides; opportunities are present too. We need to avoid prediction of remote payoffs, though not necessarily ordinary ones. Payoffs from remote parts of the distribution are more difficult to predict than those from closer parts.*
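The difficulty of estimating payoffs from the remote parts of a distribution can be shown in a short experiment. The generators and parameters below are illustrative assumptions (a standard Gaussian versus a Pareto with tail exponent 1.2), not anything specified in the text. The sample mean of the fat-tailed variable swings from run to run because the rare, huge draws carry a large share of the expectation, and a given sample may or may not contain one.

```python
import random
import statistics

random.seed(1)

def sample_means(draw, n=10_000, runs=20):
    """Sample mean of n draws from a generator, repeated over independent runs."""
    return [statistics.fmean(draw() for _ in range(n)) for _ in range(runs)]

# Mediocristan: Gaussian with true mean 0; estimates cluster tightly.
gauss_means = sample_means(lambda: random.gauss(0.0, 1.0))

# Extremistan: Pareto with tail exponent 1.2 (an illustrative value).
# The true mean exists (alpha/(alpha - 1) = 6), but any estimate of it is
# dominated by whether a remote draw happened to land in the sample.
pareto_means = sample_means(lambda: random.paretovariate(1.2))

spread_gauss = max(gauss_means) - min(gauss_means)
spread_pareto = max(pareto_means) - min(pareto_means)
# spread_pareto dwarfs spread_gauss: the remote tail defeats estimation.
```

Ten thousand observations pin down the Gaussian mean to the second decimal; the same ten thousand observations leave the Pareto mean wandering, which is why forecasts that depend on the remote tail belong in the Fourth Quadrant.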

  Actually, the Fourth Quadrant has two parts: exposures to positive or negative Black Swans. I will focus here on the negative one (exploiting the positive one is too obvious, and has been discussed in the story of Apelles the painter, in Chapter 13).

  TABLE 2: THE FOUR QUADRANTS

  The recommendation is to move from the Fourth Quadrant into the third one. It is not possible to change the distribution; it is possible to change the exposure, as will be discussed in the next section.

  What I can rapidly say about the Fourth Quadrant is that all the skepticism associated with the Black Swan problem should be focused there. A general principle is that, while in the first three quadrants you can use the best model or theory you can find, and rely on it, doing so is dangerous in the Fourth Quadrant: no theory or model should be better than just any theory or model.

  In other words, the Fourth Quadrant is where the difference between absence of evidence and evidence of absence becomes acute.

  Next let us see how we can exit the Fourth Quadrant or mitigate its effects.

  * This section should be skipped by those who are not involved in social science, business, or, something even worse, public policy. Section VII will be less mundane.

  * David left me with a second surprise gift, the best gift anyone gave me during my deserto: he wrote, in a posthumous paper, that “efforts by statisticians to refute Taleb proved unconvincing,” a single sentence which turned the tide and canceled out hundreds of pages of mostly ad hominem attacks, as it alerted the reader that there was no refutation, that the criticisms had no substance. All you need is a single sentence like that to put the message back in place.

  * This is a true philosophical a priori since when you assume events belong to Extremistan (because of the lack of structure to the randomness), no additional empirical observations can possibly change your mind, since the property of Extremistan is to hide the possibility of Black Swan events—what I called earlier the masquerade problem.

  VII

  WHAT TO DO WITH THE FOURTH QUADRANT

  NOT USING THE WRONG MAP: THE NOTION OF IATROGENICS

  So for now I can produce phronetic rules (in the Aristotelian sense of phronesis, decision-making wisdom). Perhaps the story of my life lies in the following dilemma. To paraphrase Danny Kahneman, for psychological comfort some people would rather use a map of the Pyrénées while lost in the Alps than use nothing at all. They do not do so explicitly, but they actually do worse than that while dealing with the future and using risk measures. They would prefer a defective forecast to nothing. So providing a sucker with a probabilistic measure does a wonderful job of making him take more risks. I was planning to do a test with Dan Goldstein (as part of our general research programs to understand the intuitions of humans in Extremistan). Danny (he is great to walk with, but he does not do aimless strolling, “flâner”) insisted that doing our own experiments was not necessary. There is plenty of research on anchoring that proves the toxicity of giving someone a wrong numerical estimate of risk. Numerous experiments provide evidence that professionals are significantly influenced by numbers that they know to be irrelevant to their decision, like writing down the last four digits of one’s social security number before making a numerical estimate of potential market moves. German judges, very respectable people, who rolled dice before sentencing issued sentences 50 percent longer when the dice showed a high number, without being conscious of it.

  Negative Advice

  Simply, don’t get yourself into the Fourth Quadrant, the Black Swan Domain. But it is hard to heed this sound advice.

  Psychologists distinguish between acts of commission (what we do) and acts of omission. Although these are economically equivalent for the bottom line (a dollar not lost is a dollar earned), they are not treated equally in our minds. However, as I said, recommendations of the style “Do not do” are more robust empirically. How do you live long? By avoiding death. Yet people do not realize that success consists mainly in avoiding losses, not in trying to derive profits.

  Positive advice is usually the province of the charlatan. Bookstores are full of books on how someone became successful; there are almost no books with the title What I Learned Going Bust, or Ten Mistakes to Avoid in Life.

  Linked to this need for positive advice is the preference we have to do something rather than nothing, even in cases when doing something is harmful.

  I was recently on TV and some empty-suit type kept bugging me for precise advice on how to pull out of the crisis. It was impossible to communicate my “what not to do” advice, or to point out that my field is error avoidance, not emergency room surgery, and that it could be a stand-alone discipline, just as worthy. Indeed, I spent twelve years trying to explain that in many instances it was better—and wiser—to have no models than to have the mathematical acrobatics we had.

  Unfortunately such lack of rigor pervades the place where we expect it the least: institutional science. Science, particularly its academic version, has never liked negative results, let alone the statement and advertising of its own limits. The reward system is not set up for it. You get respect for doing funambulism or spectator sports—following the right steps to become “the Einstein of Economics” or “the next Darwin”—rather than for giving society something real by debunking myths or by cataloguing where our knowledge stops.

  Let me return to Gödel’s limit. In some instances we accept the limits of knowledge, trumpeting, say, Gödel’s “breakthrough” mathematical limit because it shows elegance in formulation and mathematical prowess—though the importance of this limit is dwarfed by our practical limits in forecasting climate changes, crises, social turmoil, or the fate of the endowment funds that will finance research into such future “elegant” limits. This is why I claim that my Fourth Quadrant solution is the most applied of such limits.

  Iatrogenics and the Nihilism Label

  Let’s consider medicine (that sister of philosophy), which only started saving lives less than a century ago (I am generous), and to a lesser extent than initially advertised in the popular literature, as the drops in mortality seem to arise much more from awareness of sanitation and the (random) discovery of antibiotics than from therapeutic contributions. Doctors, driven by the beastly illusion of control, spent a long time killing patients, not considering that “doing nothing” could be a valid option (it was “nihilistic”)—and research compiled by Spyros Makridakis shows that they still do to some extent, particularly in the overdiagnosis of some diseases.

  The nihilism label has always been used to harm. Practitioners who were conservative and considered the possibility of letting nature do its job, or who stated the limits of our medical understanding, were until the 1960s accused of “therapeutic nihilism.” It was deemed “unscientific” to avoid embarking on a course of action based on an incomplete understanding of the human body—to say, “This is the limit; this is where my body of knowledge stops.” It has been used against this author by intellectual fraudsters trying to sell products.

  The very term iatrogenics, i.e., the study of the harm caused by the healer, is not widespread—I have never seen it used outside medicine. In spite of my lifelong obsession with what is called type 1 error, or the false positive, I was only introduced to the concept of iatrogenic harm very recently, thanks to a conversation with the essayist Bryan Appleyard. How can such a major idea remain hidden from our consciousness? Even in medicine, that is, modern medicine, the ancient concept “Do no harm” sneaked in very late. The philosopher of science Georges Canguilhem wondered why it was not until the 1950s that the idea came to us. This, to me, is a mystery: how professionals can cause harm for such a long time in the name of knowledge and get away with it.

  Sadly, further investigation shows that these iatrogenics were mere rediscoveries after science grew too arrogant by the Enlightenment. Alas, once again, the elders knew better—Greeks, Romans, Byzantines, and Arabs had a built-in respect for the limits of knowledge. There is a treatise by the medieval Arab philosopher and doctor Al-Ruhawi which betrays the familiarity of these Mediterranean cultures with iatrogenics. I have also in the past speculated that religion saved lives by taking the patient away from the doctor. You could satisfy your illusion of control by going to the Temple of Apollo rather than seeing the doctor. What is interesting is that the ancient Mediterraneans may have understood the trade-off very well and may have accepted religion partly as a tool to tame the illusion of control.

  You cannot do anything with knowledge unless you know where it stops, and the costs of using it. Post-Enlightenment science, and its daughter superstar science, were lucky to have done well in (linear) physics, chemistry, and engineering. But at some point we need to give up on elegance to focus on something that was given short shrift for a very long time: the maps showing what current knowledge and current methods do not do for us; and a rigorous study of generalized scientific iatrogenics, what harm can be caused by science (or, better, an exposition of what harm has been done by science). I find it the most respectable of pursuits.

  Iatrogenics of Regulators. Alas, the call for more (unconditional) regulation of economic activity appears to be a normal response. My worst nightmares have been the results of regulators. It was they who promoted the reliance on ratings by credit agencies and the “risk measurement” that fragilized the system as bankers used them to build positions that turned sour. Yet every time there is a problem, we do the Soviet-Harvard thing of more regulation, which makes investment bankers, lawyers, and former-regulators-turned-Wall-Street-advisers rich. They also serve the interest of other groups.

 
