Antifragile: Things That Gain from Disorder


by Taleb, Nassim Nicholas


  THE SOVIET-HARVARD DEPARTMENT OF ORNITHOLOGY

  Now, since a very large share of technological know-how comes from the antifragility, the optionality, of trial and error, some people and some institutions want to hide the fact from us (and themselves), or downplay its role.

  Consider two types of knowledge. The first type is not exactly “knowledge”; its ambiguous character prevents us from associating it with the strict definitions of knowledge. It is a way of doing things that we cannot really express in clear and direct language—it is sometimes called apophatic—but that we do nevertheless, and do well. The second type is more like what we call “knowledge”; it is what you acquire in school, can get grades for, can codify, what is explainable, academizable, rationalizable, formalizable, theoretizable, codifiable, Sovietizable, bureaucratizable, Harvardifiable, provable, etc.

  The error of naive rationalism leads to overestimating the role and necessity of the second type, academic knowledge, in human affairs—and degrading the uncodifiable, more complex, intuitive, or experience-based type.

  There is no proof against the statement that the role such explainable knowledge plays in life is so minor that it is not even funny.

  We are very likely to believe that skills and ideas that we actually acquired by antifragile doing, or that came naturally to us (from our innate biological instinct), came from books, ideas, and reasoning. We get blinded by it; there may even be something in our brains that makes us suckers for the point. Let us see how.

  I recently looked for definitions of technology. Most texts define it as the application of scientific knowledge to practical projects—leading us to believe in a flow of knowledge going chiefly, even exclusively, from lofty “science” (organized around a priestly group of persons with titles before their names) to lowly practice (exercised by uninitiated people without the intellectual attainments to gain membership into the priestly group).

  So, in the corpus, knowledge is presented as derived in the following manner: basic research yields scientific knowledge, which in turn generates technologies, which in turn lead to practical applications, which in turn lead to economic growth and other seemingly interesting matters. The payoff from the “investment” in basic research will be partly directed to more investments in basic research, and the citizens will prosper and enjoy the benefits of such knowledge-derived wealth with Volvo cars, ski vacations, Mediterranean diets, and long summer hikes in beautifully maintained public parks.

  This is called the Baconian linear model, after the philosopher of science Francis Bacon; I am adapting its representation by the scientist Terence Kealey (who, crucially, as a biochemist, is a practicing scientist, not a historian of science) as follows:

  Academia → Applied Science and Technology → Practice

  While this model may be valid in some very narrow (but highly advertised) instances, such as building the atomic bomb, the exact reverse seems to be true in most of the domains I’ve examined. Or, at least, this model is not guaranteed to be true and, what is shocking, we have no rigorous evidence that it is true. It may be that academia helps science and technology, which in turn help practice, but in unintended, nonteleological ways, as we will see later (in other words, it is directed research that may well be an illusion).

  Let us return to the metaphor of the birds. Think of the following event: A collection of hieratic persons (from Harvard or some such place) lecture birds on how to fly. Imagine bald males in their sixties, dressed in black robes, officiating in a form of English that is full of jargon, with equations here and there for good measure. The bird flies. Wonderful confirmation! They rush to the department of ornithology to write books, articles, and reports stating that the bird has obeyed them, an impeccable causal inference. The Harvard Department of Ornithology is now indispensable for bird flying. It will get government research funds for its contribution.

  Mathematics → Ornithological navigation and wing-flapping technologies → (ungrateful) birds fly

  It also happens that birds write no such papers and books, conceivably because they are just birds, so we never get their side of the story. Meanwhile, the priests keep broadcasting theirs to the new generation of humans who are completely unaware of the conditions of the pre-Harvard lecturing days. Nobody discusses the possibility of the birds’ not needing lectures—and nobody has any incentive to look at the number of birds that fly without such help from the great scientific establishment.

  The problem is that what I wrote above looks ridiculous, but a change of domain makes it look reasonable. Clearly, we never think that it is thanks to ornithologists that birds learn to fly—and if some people do hold such a belief, it would be hard for them to convince the birds. But why is it that when we anthropomorphize and replace “birds” with “men,” the idea that people learn to do things thanks to lectures becomes plausible? When it comes to human agency, matters suddenly become confusing to us.

  So the illusion grows and grows, with government funding, tax dollars, swelling (and self-feeding) bureaucracies in Washington all devoted to helping birds fly better. Problems occur when people start cutting such funding—with a spate of accusations of killing birds by not helping them fly.

  As per the Yiddish saying: “If the student is smart, the teacher takes the credit.” These illusions of contribution result largely from confirmation fallacies: in addition to the sad fact that history belongs to those who can write about it (whether winners or losers), a second bias appears, as those who write the accounts can deliver confirmatory facts (what has worked) but not a complete picture of what has worked and what has failed. For instance, directed research would tell you what has worked from funding (like AIDS drugs or some modern designer drugs), not what has failed—so you may have the impression that it fares better than random.

  And of course iatrogenics is never part of the discourse. They never tell you if education hurt you in some places.

  So we are blind to the possibility of the alternative process, or the role of such a process, a loop:

  Random Tinkering (antifragile) → Heuristics (technology) → Practice and Apprenticeship → Random Tinkering (antifragile) → Heuristics (technology) → Practice and Apprenticeship …

  In parallel to the above loop,

  Practice → Academic Theories → Academic Theories → Academic Theories → Academic Theories … (with of course some exceptions, some accidental leaks, though these are indeed rare and overhyped and grossly generalized).

  Now, crucially, one can detect the scam in the so-called Baconian model by looking at events in the days that preceded the Harvard lectures on flying and examining the birds. This is what I accidentally found (indeed, accidentally) in my own career as practitioner turned researcher in volatility, thanks to some lucky turn of events. But before that, let me explain epiphenomena and the arrow of education.

  EPIPHENOMENA

  The Soviet-Harvard illusion (lecturing birds on flying and believing that the lecture is the cause of these wonderful skills) belongs to a class of causal illusions called epiphenomena. What are these illusions? When you spend time on the bridge of a ship or in the coxswain’s station with a large compass in front, you can easily develop the impression that the compass is directing the ship rather than merely reflecting its direction.

  The lecturing-birds-how-to-fly effect is an example of epiphenomenal belief: we see a high degree of academic research in countries that are wealthy and developed, leading us to think uncritically that research is the generator of wealth. In an epiphenomenon, you don’t usually observe A without observing B with it, so you are likely to think that A causes B, or that B causes A, depending on the cultural framework or what seems plausible to the local journalist.

  One rarely has the illusion that, given that so many boys have short hair, short hair determines gender, or that wearing a tie causes one to become a businessman. But it is easy to fall into other epiphenomena, particularly when one is immersed in a news-driven culture.
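The “you rarely observe A without B” trap can be made concrete with a toy simulation (my own illustration, not from the text; the variable names are hypothetical labels): a hidden factor C drives both A and B, so the two correlate strongly even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hidden common cause: neither a nor b acts on the other.
c = rng.normal(size=n)
a = c + 0.5 * rng.normal(size=n)   # say, "volume of academic research"
b = c + 0.5 * rng.normal(size=n)   # say, "national wealth"

# Strong correlation, zero causation in either direction.
print(np.corrcoef(a, b)[0, 1])
```

The correlation here is about 0.8 purely by construction of the common cause; an observer who never measures C has no way, from levels alone, to tell this apart from “A causes B.”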


  And one can easily see the trap of having these epiphenomena fuel action, then justify it retrospectively. A dictator—just like a government—will feel indispensable because the alternative is not easily visible, or is hidden by special interest groups. The Federal Reserve Bank of the United States, for instance, can wreak havoc on the economy yet feel convinced of its effectiveness. People are scared of the alternative.

  Greed as a Cause

  Whenever an economic crisis occurs, greed is pointed to as the cause, which leaves us with the impression that if we could go to the root of greed and extract it from life, crises would be eliminated. Further, we tend to believe that greed is new, since these wild economic crises are new. This is an epiphenomenon: greed is much older than systemic fragility. It existed as far back as the eye can go into history. From Virgil’s mention of greed of gold and the expression radix malorum est cupiditas (from the Latin version of the New Testament), both expressed more than twenty centuries ago, we know that the same problems of greed have been propounded through the centuries, with no cure, of course, in spite of the variety of political systems we have developed since then. Trollope’s novel The Way We Live Now, published close to a century and a half ago, shows the exact same complaint of a resurgence of greed and con operators that I heard in 1988 with cries over the “greed decade,” or in 2008 with denunciations of the “greed of capitalism.” With astonishing regularity, greed is seen as something (a) new and (b) curable. A Procrustean bed approach; we cannot change humans as easily as we can build greed-proof systems, and nobody thinks of simple solutions.1

  Likewise “lack of vigilance” is often proposed as the cause of an error (as we will see with the Société Générale story in Book V, the cause was size and fragility). But lack of vigilance is not the cause of the death of a mafia don; the cause of death is making enemies, and the cure is making friends.

  Debunking Epiphenomena

  We can dig out epiphenomena in the cultural discourse and consciousness by looking at the sequence of events and checking whether one always precedes the other. This is a method refined by the late Clive Granger (himself a refined gentleman), a well-deserved “Nobel” in Economics, that Bank of Sweden (Sveriges Riksbank) prize in honor of Alfred Nobel that has been given to a large number of fragilistas. It is the only rigorously scientific technique that philosophers of science can use to establish causation, as they can now extract, if not measure, the so-called “Granger cause” by looking at sequences. In epiphenomenal situations, you end up seeing A and B together. But if you refine your analysis by considering the sequence, thus introducing a time dimension—which takes place first, A or B?—and analyze evidence, then you will see if truly A causes B.

  Further, Granger had the great idea of studying differences, that is, changes in A and B, not just levels of A and B. While I do not believe that Granger’s method can lead me to believe that “A causes B” with certainty, it can most certainly help me debunk fake causation, and allow me to make the claim that “the statement that B causes A is wrong” or has insufficient evidence from the sequence.

  The important difference between theory and practice lies precisely in the detection of the sequence of events and retaining the sequence in memory. If life is lived forward but remembered backward, as Kierkegaard observed, then books exacerbate this effect—our own memories, learning, and instinct have sequences in them. Someone standing today looking at events without having lived them would be inclined to develop illusions of causality, mostly from being mixed up by the sequence of events. In real life, in spite of all the biases, we do not have the same number of asynchronies that appear to the student of history. Nasty history, full of lies, full of biases!

  For one example of a trick for debunking causality: I am not even dead yet, but am already seeing distortions about my work. Authors theorize about some ancestry of my ideas, as if people read books then developed ideas, not wondering whether perhaps it is the other way around; people look for books that support their mental program. So one journalist (Anatole Kaletsky) saw the influence of Benoît Mandelbrot on my book Fooled by Randomness, published in 2001 when I did not know who Mandelbrot was. It is simple: the journalist noticed similarities of thought in one type of domain, and seniority of age, and immediately drew the false inference. He did not consider that like-minded people are inclined to hang together and that such intellectual similarity caused the relationship rather than the reverse. This makes me suspicious of the master-pupil relationships we read about in cultural history: just about all the people who have been called my pupils have been my pupils because we were like-minded.

  Cherry-picking (or the Fallacy of Confirmation)

  Consider the tourist brochures used by countries to advertise their wares: you can expect that the pictures presented to you will look much, much better than anything you will encounter in the place. And the bias, the difference (for which humans correct, thanks to common sense), can be measured as the country shown in the tourist brochure minus the country seen with your naked eyes. That difference can be small, or large. We also make such corrections with commercial products, not overly trusting advertising.

  But we don’t correct for the difference in science, medicine, and mathematics, for the same reasons we didn’t pay attention to iatrogenics. We are suckers for the sophisticated.

  In institutional research, one can selectively report facts that confirm one’s story, without revealing facts that disprove it or don’t apply to it—so the public perception of science is biased into believing in the necessity of the highly conceptualized, crisp, and purified Harvardized methods. And statistical research tends to be marred with this one-sidedness. Another reason one should trust the disconfirmatory more than the confirmatory.

  Academia is well equipped to tell us what it did for us, not what it did not—hence how indispensable its methods are. This ranges across many things in life. Traders talk about their successes, so one is led to believe that they are intelligent—not looking at the hidden failures. As to academic science: a few years ago, the great Anglo-Lebanese mathematician Michael Atiyah of string theory fame came to New York to raise funds for a research center in mathematics based in Lebanon. In his speech, he enumerated applications in which mathematics turned out to be useful for society and modern life, such as traffic signaling. Fine. But what about areas where mathematics led us to disaster (as in, say, economics or finance, where it blew up the system)? And how about areas out of the reach of mathematics? I thought right there of a different project: a catalog of where mathematics fails to produce results, hence causes harm.

  Cherry-picking has optionality: the one telling the story (and publishing it) has the advantage of being able to show the confirmatory examples and completely ignore the rest—and the more volatility and dispersion, the rosier the best story will be (and the darker the worst story). Someone with optionality—the right to pick and choose his story—is only reporting on what suits his purpose. You take the upside of your story and hide the downside, so only the sensational seems to count.
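The claim that more volatility and dispersion make the best reported story rosier can be checked with a toy simulation (mine, not the author’s; the trial counts and volatilities are arbitrary): if only the best of n trials is ever published, the reported figure grows with dispersion even though the average of all trials stays at zero.

```python
import numpy as np

rng = np.random.default_rng(2)

def best_of(n_trials, sigma, reps=50_000):
    """Average reported outcome when only the best (max) of n_trials,
    each drawn from N(0, sigma), is ever shown."""
    return rng.normal(0.0, sigma, size=(reps, n_trials)).max(axis=1).mean()

low = best_of(20, sigma=1.0)    # calm environment
high = best_of(20, sigma=3.0)   # volatile environment: same mean, wider spread

print(low, high)   # both well above the true mean of 0; high much larger
```

The underlying process has a mean of zero in both cases; the entire reported “performance” is an artifact of the reporter’s option to pick the best draw, and the option is worth more when dispersion is higher.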

  The real world relies on the intelligence of antifragility, but no university would swallow that—just as interventionists don’t accept that things can improve without their intervention. Let us return to the idea that universities generate wealth and the growth of useful knowledge in society. There is a causal illusion here; time to bust it.

  1 Is democracy epiphenomenal? Supposedly, democracy works because of this hallowed rational decision making on the part of voters. But consider that democracy may be something completely accidental to something else, the side effect of people liking to cast ballots for completely obscure reasons, just as people enjoy expressing themselves just to express themselves. (I once put this question at a political science conference and got absolutely nothing beyond blank nerdy faces, not even a smile.)

  CHAPTER 14

  When Two Things Are Not the “Same Thing”

  Green lumber another “blue”—Where we look for the arrow of discovery—Putting Iraq in the middle of Pakistan—Prometheus never looked back

  I am writing these lines in an appropriate place to think about the arrow of knowledge: Abu Dhabi, a city that sprang out of the desert, as if watered by oil.

  It makes me queasy to see the building of these huge universities, funded by the oil revenues of governments, under the postulation that oil reserves can be turned into knowledge by hiring professors from prestigious universities and putting their kids through school (or, as is the case, waiting for their kids to feel the desire to go to school, as many students in Abu Dhabi are from Bulgaria, Serbia, or Macedonia getting a free education). Even better, they can, with a single check, import an entire school from overseas, such as the Sorbonne and New York University (among many more). So, in a few years, members of this society will be reaping the benefits of a great technological improvement.

 
