The Black Swan


by Nassim Nicholas Taleb


  But the problem gets more interesting in some domains. Recall the Casanova problem in Chapter 8. For environments that tend to produce negative Black Swans, but no positive Black Swans (these environments are called negatively skewed), the problem of small probabilities is worse. Why? Clearly, catastrophic events will be necessarily absent from the data, since the survivorship of the variable itself depends on the absence of such effects. Thus such distributions make the observer prone to overestimating stability and underestimating potential volatility and risk.

  This point—that things have a bias to appear more stable and less risky in the past, leading us to surprises—needs to be taken seriously, particularly in the medical field. The history of epidemics, narrowly studied, does not suggest the risks of the great plague to come that will dominate the planet. Also, I am convinced that in doing what we are doing to the environment, we greatly underestimate the potential instability we will experience somewhere from the cumulative damage we have done to nature.

  One illustration of this point is playing out just now. At the time of writing, the stock market has proved much, much riskier than innocent retirees were led to believe from historical discourses showing a hundred years of data. It is down close to 23 percent for the decade ending in 2010, while the retirees were told by finance charlatans that it was expected to rise by around 75 percent over that time span. This has bankrupted many pension plans (and the largest car company in the world), for they truly bought into that “empirical” story—and of course it has caused many disappointed people to delay their retirement. Consider that we are suckers and will gravitate toward those variables that are unstable but that appear stable.

  Preasymptotics. Let us return to Platonicity with a discussion of preasymptotics, what happens in the short term. Theories are, of course, a bad thing to start with, but they can be worse in some situations when they were derived in idealized conditions, the asymptote, but are used outside the asymptote (at its limit, say, infinity or the infinitesimal). Mandelbrot and I showed how some asymptotic properties do work well preasymptotically in Mediocristan, which is why casinos do well; matters are different in Extremistan.

  Most statistical education is based on these asymptotic, Platonic properties, yet we live in the real world, which rarely resembles the asymptote. Statistical theorists know it, or claim to know it, but not your regular user of statistics who talks about “evidence” while writing papers. Furthermore, this compounds what I called the ludic fallacy: most of what students of mathematical statistics do is assume a structure similar to the closed structures of games, typically with a priori known probability. Yet the problem we have is not so much making computations once you know the probabilities, but finding the true distribution for the horizon concerned. Many of our knowledge problems come from this tension between a priori and a posteriori.

  Proof in the Flesh

  There is no reliable way to compute small probabilities. I argued philosophically the difficulty of computing the odds of rare events. Using almost all available economic data—and I used economic data because that’s where the clean data was—I showed the impossibility of computing from the data the measure of how far away from the Gaussian one was. There is a measure called kurtosis that the reader does not need to bother with, but that represents “how fat the tails are,” that is, how much rare events play a role. Well, often, with ten thousand pieces of data, forty years of daily observations, one single observation represents 90 percent of the kurtosis! Sampling error is too large for any statistical inference about how non-Gaussian something is, meaning that if you miss a single number, you miss the whole thing. The instability of the kurtosis implies that a certain class of statistical measures should be totally disallowed. This proves that everything relying on “standard deviation,” “variance,” “least square deviation,” etc., is bogus.
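  As an illustration of how fragile the measurement is, here is a minimal sketch (not from the book; the Student-t draw is an assumption standing in for real market data) that simulates ten thousand fat-tailed "daily returns" and reports what share of the sample's fourth moment comes from the single largest observation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Ten thousand fat-tailed "daily returns" (Student-t with 3 degrees of
    # freedom, an assumed stand-in for real market data).
    x = rng.standard_t(df=3, size=10_000)

    # The sample kurtosis is driven by the sum of fourth powers of deviations.
    dev4 = (x - x.mean()) ** 4
    share_of_max = dev4.max() / dev4.sum()

    print(f"share of the fourth moment from the largest observation: {share_of_max:.0%}")
    # A single draw routinely accounts for a sizable fraction, sometimes most,
    # of the measured kurtosis, so missing one day changes the estimate drastically.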

  Further, I also showed that it is impossible to use fractals to get acceptably precise probabilities—simply because a very small change in what I called the “tail exponent” in Chapter 16, coming from observation error, would make the probabilities change by a factor of 10, perhaps more.
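  To see the sensitivity, here is a hedged back-of-the-envelope sketch. It assumes a pure power law for the tail, P(X > x) = (x / x_min)^(-alpha); the particular exponents and the threshold are hypothetical, chosen only to show the order of magnitude involved.

    # Survival function of a pure power law: P(X > x) = (x / x_min) ** (-alpha)
    def tail_probability(x, alpha, x_min=1.0):
        return (x / x_min) ** (-alpha)

    x = 1_000                                  # a "far" deviation, in units of x_min (hypothetical)
    p_low = tail_probability(x, alpha=2.0)     # estimated tail exponent
    p_high = tail_probability(x, alpha=2.3)    # same data, small observation error

    print(p_low, p_high, p_low / p_high)
    # A shift of 0.3 in the exponent moves the tail probability at x = 1,000
    # by a factor of 1000**0.3, roughly 8; farther out the gap passes a factor
    # of 10, which is the instability described above.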

  Implication: the need to avoid exposure to small probabilities in a certain domain. We simply cannot compute them.

  FALLACY OF THE SINGLE EVENT PROBABILITY

  Recall from Chapter 10, with the example of the behavior of life expectancy, that the conditional expectation of additional life drops as one advances in age (as you get older you are expected to live a smaller number of years; this comes from the fact that there is an asymptotic “soft” ceiling to how old a human can get). Expressing it in units of standard deviations, the conditional expectation of a Mediocristani Gaussian variable, conditional on it being higher than a threshold of 0, is .8 (standard deviations). Conditional on it being higher than a threshold of 1, it will be 1.52. Conditional on it being higher than 2, it will be 2.37. As you see, the threshold and the conditional expectation converge to each other as the deviations become large, so, conditional on a random variable being higher than 10 standard deviations, it will be expected to be just 10.
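  The Gaussian figures above can be reproduced directly. The following is a minimal sketch of the standard identity for a standard normal, E[X | X > K] = pdf(K) / P(X > K); it is an illustration, not a computation taken from the book.

    from scipy.stats import norm

    # Conditional expectation of a standard Gaussian above a threshold K:
    # E[X | X > K] = pdf(K) / sf(K)   (the Mills-ratio identity)
    for K in (0, 1, 2, 10):
        cond_mean = norm.pdf(K) / norm.sf(K)
        print(f"K = {K:>2}:  E[X | X > K] = {cond_mean:.3f}")

    # Prints 0.798, 1.525, 2.373 and 10.098: the conditional mean hugs the
    # threshold as the threshold grows, the signature of Mediocristan.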

  In Extremistan, things work differently. The conditional expectation of an increase in a random variable does not converge to the threshold as the variable gets larger. In the real world, say with stock returns (and all economic variables), conditional on a loss being worse than 5 units, using any unit of measure (it makes little difference), it will be around 8 units. Conditional on a move being more than 50 units, it should be around 80 units, and if we go all the way until the sample is depleted, the average move worse than 100 units is 250 units! This extends to all areas in which I found sufficient samples. This tells us that there is “no” typical failure and “no” typical success. You may be able to predict the occurrence of a war, but you will not be able to gauge its effect! Conditional on a war killing more than 5 million people, it should kill around 10 million (or more). Conditional on it killing more than 500 million, it would kill a billion (or more, we don’t know). You may correctly predict that a skilled person will get “rich,” but, conditional on his making it, his wealth can reach $1 million, $10 million, $1 billion, $10 billion—there is no typical number. We have data, for instance, for predictions of drug sales, conditional on getting things right. Sales estimates are totally uncorrelated to actual sales—some drugs that were correctly predicted to be successful had their sales underestimated by up to 22 times.
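  A minimal way to see this "no typical event" property is with a Pareto (power-law) variable, for which the expected value conditional on exceeding a threshold K is a fixed multiple of K at every scale. The tail exponent below is hypothetical, chosen for illustration, not a value fitted to the data mentioned above.

    # For a Pareto variable with tail exponent alpha (and K at or above the
    # minimum), E[X | X > K] = K * alpha / (alpha - 1): a constant multiple
    # of the threshold, with no "typical" exceedance.
    alpha = 2.0   # hypothetical tail exponent, for illustration only

    for K in (5, 50, 500):
        cond_mean = K * alpha / (alpha - 1)
        print(f"K = {K:>3}:  E[X | X > K] = {cond_mean:.0f}  (ratio {cond_mean / K:.1f})")

    # Unlike the Gaussian case, where the ratio falls toward 1 as K grows,
    # here it stays fixed; exceedances never become typical.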

  This absence of “typical” events in Extremistan is what makes something called prediction markets (in which people are assumed to make bets on events) ludicrous, as they consider events to be binary. “A war” is meaningless: you need to estimate its damage—and no damage is typical. Many predicted that the First World War would occur, but nobody really predicted its magnitude. One of the reasons economics does not work is that the literature is almost completely blind to this point.

  Accordingly, Ferguson’s methodology (mentioned in Chapter 1) in looking at the prediction of events as expressed in the price of war bonds is sounder than simply counting predictions, because a bond, reflecting the costs to the governments involved in a war, is priced to cover the probability of an event times its consequences, not just the probability of an event. So we should not focus on whether someone “predicted” an event without his statement having consequences attached to it.

  Associated with the previous fallacy is the mistake of thinking that my message is that these Black Swans are necessarily more probable than assumed by conventional methods. They are mostly less probable, but have bigger effects. Consider that, in a winner-take-all environment, such as the arts, the odds of success are low, since there are fewer successful people, but the payoff is disproportionately high. So, in a fat-tailed environment, rare events can be less frequent (their probability is lower), but they are so powerful that their contribution to the total pie is more substantial.

  The point is mathematically simple, but does not register easily. I’ve enjoyed giving graduate students in mathematics the following quiz (to be answered intuitively, on the spot). In a Gaussian world, the probability of exceeding one standard deviation is around 16 percent. What are the odds of exceeding it under a distribution of fatter tails (with the same mean and variance)? The right answer: lower, not higher—the number of deviations drops, but the few that take place matter more. It was puzzling to see that most graduate students get it wrong.
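  The quiz answer can be checked numerically. The sketch below compares a standard Gaussian with a Student-t distribution rescaled to the same mean and variance; the choice of three degrees of freedom is an assumption, merely a convenient fat-tailed stand-in.

    import numpy as np
    from scipy.stats import norm, t

    # Probability of exceeding one standard deviation under a Gaussian ...
    p_gauss = norm.sf(1)                # about 0.159, i.e. roughly 16 percent

    # ... and under a fatter-tailed Student-t with 3 degrees of freedom,
    # rescaled to unit variance (a raw t_3 has variance 3, so divide by sqrt(3)).
    df = 3
    scale = np.sqrt(df / (df - 2))      # standard deviation of a raw t_3
    p_fat = t.sf(scale, df)             # P(T / scale > 1) = P(T > scale)

    print(f"Gaussian: {p_gauss:.3f}   fat-tailed: {p_fat:.3f}")
    # The fat-tailed probability (about 0.09) is lower: deviations beyond one
    # standard deviation are rarer, but the ones that occur are far larger.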

  Back to stress testing again. At the time of writing, the U.S. government is having financial institutions stress-tested by assuming large deviations and checking the results against the capitalization of these firms. But the problem is, Where did they get the numbers? From the past? This is so flawed, since the past, as we saw, is no indication of future deviations in Extremistan. This comes from the atypicality of extreme deviations. My experience of stress testing is that it reveals little about the risks—but the risks can be used to assess the degree of model error.

  Psychology of Perception of Deviations

  Fragility of Intuitions About the Typicality of the Move. Dan Goldstein and I ran a series of experiments about the intuitions of agents concerning such conditional expectations. We posed questions of the following sort: What is the average height of humans who are taller than six feet? What is the average weight of people heavier than 250 pounds? We tried with a collection of variables from Mediocristan, including the above-mentioned height and weight, to which we added age, and we asked participants to guess variables from Extremistan, such as market capitalization (what is the average size of companies with capitalization in excess of $5 billion?) and stock performance. The results show that, clearly, we have good intuitions when it comes to Mediocristan, but horribly poor ones when it comes to Extremistan—yet economic life is almost all Extremistan. We do not have good intuition for that atypicality of large deviations. This explains both foolish risk taking and how people can underestimate opportunities.

  Framing the Risks. Mathematically equivalent statements, I showed earlier with my example of survival rates, are not psychologically so. Worse, even professionals are fooled and base their decisions on their perceptual errors. Our research shows that the way a risk is framed sharply influences people’s understanding of it. If you say that, on average, investors will lose all their money every thirty years, they are more likely to invest than if you tell them they have a 3.3 percent chance of losing a certain amount every year.

  The same is true of airplane rides. We have asked experimental participants: “You are on vacation in a foreign country and are considering flying a local airline to see a special island. Safety statistics show that, if you fly once a year, there will be on average one crash every thousand years on this airline. If you don’t take the trip, it is unlikely you’ll visit this part of the world again. Would you take the flight?” All the respondents said they would. But when we changed the second sentence so it read, “Safety statistics show that, on average, one in a thousand flights on this airline have crashed,” only 70 percent said they would take the flight. In both cases, the chance of a crash is 1 in 1,000; the latter formulation simply sounds more risky.

  THE PROBLEM OF INDUCTION AND CAUSATION IN THE COMPLEX DOMAIN

  What Is Complexity? I will simplify here with a functional definition of complexity—among many more complete ones. A complex domain is characterized by the following: there is a great degree of interdependence among its elements, temporal (a variable depends on its past changes), horizontal (variables depend on one another), and diagonal (variable A depends on the past history of variable B). As a result of this interdependence, mechanisms are subjected to positive, reinforcing feedback loops, which cause “fat tails.” That is, they prevent the working of the Central Limit Theorem that, as we saw in Chapter 15, establishes Mediocristan thin tails under summation and aggregation of elements and causes “convergence to the Gaussian.” In lay terms, moves are exacerbated over time instead of being dampened by counterbalancing forces. Finally, we have nonlinearities that accentuate the fat tails.
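  A quick way to see the mechanism (a sketch under a strong simplifying assumption, not a model from the book) is to compare independent Gaussian shocks with a process in which today's volatility feeds on yesterday's move, ARCH-style; the reinforcing loop alone is enough to fatten the tails.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    def excess_kurtosis(x):
        z = (x - x.mean()) / x.std()
        return (z ** 4).mean() - 3.0    # roughly 0 for a Gaussian

    # Independent shocks: no interdependence, aggregation stays well behaved.
    iid = rng.standard_normal(n)

    # Simple volatility feedback (ARCH(1)-style): a big move today raises
    # tomorrow's variance, so moves are exacerbated instead of damped.
    omega, alpha = 0.2, 0.8
    r = np.zeros(n)
    for i in range(1, n):
        var = omega + alpha * r[i - 1] ** 2
        r[i] = np.sqrt(var) * rng.standard_normal()

    print(f"excess kurtosis, independent shocks: {excess_kurtosis(iid):.2f}")
    print(f"excess kurtosis, feedback process:   {excess_kurtosis(r):.2f}")
    # The independent series stays near 0; the feedback series comes out
    # strongly positive, i.e. fat-tailed, even though every shock is Gaussian.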

  So, complexity implies Extremistan. (The opposite is not necessarily true.)

  As a researcher, I have focused only on the Extremistan element of complexity theory, ignoring the other elements except as a backup for my considerations of unpredictability. But complexity has other consequences for the conventional analyses, and for causation.

  Induction

  Let us look again, from a certain angle, at the problem of “induction.” It becomes one step beyond archaic in a modern environment, making the Black Swan problem even more severe. Simply, in a complex domain, the discussion of induction versus deduction becomes too marginal to the real problems (except for a limited subset of variables, even then); the entire Aristotelian distinction misses an important dimension (similar to the one discussed earlier concerning the atypicality of events in Extremistan). Even other notions such as “cause” take on a different meaning, particularly in the presence of circular causality and interdependence.* The probabilistic equivalent is the move from a conventional random walk model (with a random variable moving in a fixed terrain and not interacting with other variables around it), to percolation models (where the terrain itself is stochastic, with different variables acting on one another).

  Driving the School Bus Blindfolded

  Alas, at the time of writing, the economics establishment is still ignorant of the presence of complexity, which degrades predictability. I will not get too involved in my outrage—instead of doing a second deserto, Mark Spitznagel and I are designing another risk management program to robustify portfolios against model error, error mostly stemming from the government’s error in the projection of deficits, leading to excessive borrowing and possible hyperinflation.

  I was once at the World Economic Forum in Davos; at one of my sessions, I illustrated interdependence in a complex system and the degradation of forecasting, with the following scheme: unemployment in New York triggered by Wall Street losses, percolating and generating unemployment in, say, China, then percolating back into unemployment in New York, is not analyzable analytically, because the feedback loops produce monstrous estimation errors. I used the notion of “convexity,” a disproportionate nonlinear response stemming from a variation in input (as the tools for measuring error rates go out of the window in the presence of convexity). Stanley Fischer, the head of the central bank of Israel, former IMF hotshot, co-author of a classic macroeconomics textbook, came to talk to me after the session to critique my point about such feedback loops causing unpredictability. He explained that we had input-output matrices that were good at calculating such feedbacks, and he cited work honored by the “Nobel” in economics. The economist in question was one Wassily Leontief, I presume. I looked at him with the look “He is arrogant, but does not know enough to understand that he is not even wrong” (needless to say, Fischer was one of those who did not see the crisis coming). It was hard to get the message across that, even if econometric methods could track the effects of feedback loops in normal times (natural, since errors are small), such models said nothing about large disturbances. And I will repeat, large disturbances are everything in Extremistan.

  The problem is that if I am right, Fisher’s textbook, and his colleagues’ textbooks, should be dispensed with. As should almost every prediction method that uses mathematical equations.

  I tried to explain the problems of errors in monetary policy under nonlinearities: you keep adding money with no result … until there is hyperinflation. Or nothing. Governments should not be given toys they do not understand.

  * The “a priori” I am using here differs from the philosophical “a priori” belief, in the sense that it is a theoretical starting point, not a belief that is nondefeasible by experience.

  * Interestingly, the famous paper by Reverend Bayes that led to what we call Bayesian inference did not give us “probability” but expectation (expected average). Statisticians had difficulties with the concept so extracted probability from payoff. Unfortunately, this reduction led to the reification of the concept of probability, its adherents forgetting that probability is not natural in real life.

  * The intelligent reader who gets the idea that rare events are not computable can skip the remaining parts of this section, which will be extremely technical. It is meant to prove a point to those who studied too much to be able to see things with clarity.

  * This is an extremely technical point (to skip). The problem of the unknown distribution resembles, in a way, Bertrand Russell’s central difficulty in logic with the “this sentence is true” issue—a sentence cannot contain its own truth predicate. We need to apply Tarski’s solution: for every language, a metalanguage will take care of predicates of true and false about that language. With probability, simply, a metaprobability assigns degrees of credence to every probability—or, more generally, a probability distribution needs to be subordinated to a metaprobability distribution giving, say, the probability of a probability distribution being the wrong one. But luckily I have been able to express this with the available mathematical tools. I have played with this metadistribution problem in the past, in my book Dynamic Hedging (1997). I started putting an error rate on the Gaussian (by having my true distribution draw from two or more Gaussians, each with different parameters), leading to nested distributions almost invariably producing some class of Extremistan. So, to me, the variance of the distribution is, epistemologically, a measure of lack of knowledge about the average; hence the variance of variance is, epistemologically, a measure of lack of knowledge about the lack of knowledge of the mean—and the variance of variance is analogous to the fourth moment of the distribution, and its kurtosis, which makes such uncertainty easy to express mathematically. This shows that: fat tails = lack of knowledge about lack of knowledge.
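  A compact way to see the footnote's last claim, that uncertainty about the variance itself manufactures fat tails: draw from two Gaussians whose standard deviations are chosen at random (the particular values are hypothetical, chosen for illustration) and measure the kurtosis of the mixture.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 1_000_000

    # Lack of knowledge about the scale: half the time sigma is 0.5, half the time 2.
    sigma = rng.choice([0.5, 2.0], size=n)
    x = sigma * rng.standard_normal(n)

    z = (x - x.mean()) / x.std()
    excess_kurtosis = (z ** 4).mean() - 3.0

    print(f"excess kurtosis of the Gaussian mixture: {excess_kurtosis:.2f}")
    # A single Gaussian would give roughly 0; the mixture comes out clearly
    # positive (about 2.3), so not knowing the variance is itself a source of fat tails.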

 
