
Antifragile: Things That Gain from Disorder

by Taleb, Nassim Nicholas


  But these are all dwarfed by the role of optionality in the two evolutions: natural and scientific-technological, the latter of which we will examine in Book IV.

  Roman Politics Likes Optionality

  Even political systems follow a form of rational tinkering, when people are rational hence take the better option: the Romans got their political system by tinkering, not by “reason.” Polybius in his Histories compares the Greek legislator Lycurgus, who constructed his political system while “untaught by adversity,” to the more experiential Romans, who, a few centuries later, “have not reached it by any process of reasoning [emphasis mine], but by the discipline of many struggles and troubles, and always choosing the best by the light of the experience gained in disaster.”

  Next

  Let me summarize. In Chapter 10 we saw the foundational asymmetry as embedded in Seneca’s ideas: more upside than downside and vice versa. This chapter refined the point and presented a manifestation of such asymmetry in the form of an option, by which one can take the upside if one likes, but without the downside. An option is the weapon of antifragility.

  The other point of the chapter and Book IV is that the option is a substitute for knowledge—actually I don’t quite understand what sterile knowledge is, since it is necessarily vague and sterile. So I make the bold speculation that many things we think are derived by skill come largely from options, but well-used options, much like Thales’ situation—and much like nature—rather than from what we claim to be understanding.

  The implication is nontrivial. For if you think that education causes wealth, rather than being a result of wealth, or that intelligent actions and discoveries are the result of intelligent ideas, you will be in for a surprise. Let us see what kind of surprise.

  1 I suppose that the main benefit of being rich (over just being independent) is to be able to despise rich people (a good concentration of whom you find in glitzy ski resorts) without any sour grapes. It is even sweeter when these farts don’t know that you are richer than they are.

  2 We will use nature as a model to show how its operational outperformance arises from optionality rather than intelligence—but let us not fall for the naturalistic fallacy: ethical rules do not have to spring from optionality.

  3 Everyone talks about luck and about trial and error, but it has led to so little difference. Why? Because it is not about luck, but about optionality. By definition luck cannot be exploited; trial and error can lead to errors. Optionality is about getting the upper half of luck.
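The claim that optionality is "the upper half of luck" can be made concrete with a small simulation (illustrative only, not from the book): the same random draws average to nothing on their own, but become strictly positive once one can decline the downside.

```python
# Illustrative sketch: optionality as keeping the upper half of luck.
# Compare raw random outcomes with the same outcomes under an option,
# where negative draws are simply declined (payoff = max(x, 0)).
import random

random.seed(0)
draws = [random.gauss(0, 1) for _ in range(100_000)]

# Luck alone: upside and downside cancel, the average is near zero.
raw_average = sum(draws) / len(draws)

# With an option: take the draw only when favorable; the average is positive.
option_average = sum(max(x, 0) for x in draws) / len(draws)
```

Under these assumptions, `raw_average` hovers near zero while `option_average` is reliably positive: identical luck, but the asymmetry of the payoff does the work.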

  4 I usually hesitate to discuss my career in options, as I worry that the reader will associate the idea with finance rather than the more scientific applications. I go ballistic when I use technical insights derived from derivatives and people mistake it for a financial discussion—these are only techniques, portable techniques, very portable techniques, for Baal’s sake!

  CHAPTER 13

  Lecturing Birds on How to Fly

  Finally, the wheel—Proto–Fat Tony thinking—The central problem is that birds rarely write more than ornithologists—Combining stupidity with wisdom rather than the opposite

  Consider the story of the wheeled suitcase.

  I carry a large wheeled suitcase mostly filled with books on almost all my travels. It is heavy (books that interest me when I travel always happen to be in hardcover).

  In June 2012, I was rolling that generic, heavy, book-filled suitcase outside the JFK international terminal and, looking at the small wheels at the bottom of the case and the metal handle that helps pull it, I suddenly remembered the days when I had to haul my book-stuffed luggage through the very same terminal, with regular stops to rest and let the lactic acid flow out of my sore arms. I could not afford a porter, and even if I could, I would not have felt comfortable doing it. I have been going through the same terminal for three decades, with and without wheels, and the contrast was eerie. It struck me how lacking in imagination we are: we had been putting our suitcases on top of a cart with wheels, but nobody thought of putting tiny wheels directly under the suitcase.

  Can you imagine that it took close to six thousand years between the invention of the wheel (by, we assume, the Mesopotamians) and this brilliant implementation (by some luggage maker in a drab industrial suburb)? And billions of hours spent by travelers like myself schlepping luggage through corridors full of rude customs officers.

  Worse, this took place three decades or so after we put a man on the moon. And consider all this sophistication used in sending someone into space, and its totally negligible impact on my life, and compare it to this lactic acid in my arms, pain in my lower back, soreness in the palms of my hands, and sense of helplessness in front of a long corridor. Indeed, though extremely consequential, we are talking about something trivial: a very simple technology.

  But the technology is only trivial retrospectively—not prospectively. All those brilliant minds, usually disheveled and rumpled, who go to faraway conferences to discuss Gödel, Shmodel, Riemann’s Conjecture, quarks, shmarks, had to carry their suitcases through airport terminals, without thinking about applying their brain to such an insignificant transportation problem. (We said that the intellectual society rewards “difficult” derivations, compared to practice in which there is no penalty for simplicity.) And even if these brilliant minds had applied their supposedly overdeveloped brains to such an obvious and trivial problem, they probably would not have gotten anywhere.

  This tells us something about the way we map the future. We humans lack imagination, to the point of not even knowing what tomorrow’s important things look like. We use randomness to spoon-feed us with discoveries—which is why antifragility is necessary.

  The story of the wheel itself is even more humbling than that of the suitcase: we keep being reminded that the Mesoamericans did not invent the wheel. They did. They had wheels. But the wheels were on small toys for children. It was just like the story of the suitcase: the Mayans and Zapotecs did not make the leap to the application. They used vast quantities of human labor, corn maize, and lactic acid to move gigantic slabs of stone in the flat spaces ideal for pushcarts and chariots where they built their pyramids. They even rolled them on logs of wood. Meanwhile, their small children were rolling their toys on the stucco floors (or perhaps not even doing that, as the toys might have been solely used for mortuary purposes).

  The same story holds for the steam engine: the Greeks had an operating version of it, for amusement, of course: the aeolipyle, a turbine that spins when heated, as described by Hero of Alexandria. But it took the Industrial Revolution for us to discover this earlier discovery.

  Just as great geniuses invent their predecessors, practical innovations create their theoretical ancestry.

  There is something sneaky in the process of discovery and implementation—something people usually call evolution. We are managed by small (or large) accidental changes, more accidental than we admit. We talk big but hardly have any imagination, except for a few visionaries who seem to recognize the optionality of things. We need some randomness to help us out—with a double dose of antifragility. For randomness plays a role at two levels: the invention and the implementation. The first point is not overly surprising, though we play down the role of chance, especially when it comes to our own discoveries.

  But it took me a lifetime to figure out the second point: implementation does not necessarily proceed from invention. It, too, requires luck and circumstances. The history of medicine is littered with the strange sequence of discovery of a cure followed, much later, by the implementation—as if the two were completely separate ventures, the second harder, much harder, than the first. Just taking something to market requires struggling against a collection of naysayers, administrators, empty suits, formalists, mountains of details that invite you to drown, and one’s own discouraged mood on occasion. In other words, to identify the option (again, there is this option blindness). This is where all you need is the wisdom to realize what you have on your hands.
  The Half-Invented. For there is a category of things that we can call half-invented, and taking the half-invented into the invented is often the real breakthrough. Sometimes you need a visionary to figure out what to do with a discovery, a vision that he and only he can have. For instance, take the computer mouse, or what is called the graphical interface: it took Steve Jobs to put it on your desk, then laptop—only he had a vision of the dialectic between images and humans—later adding sounds to a trilectic. The things, as they say, that are “staring at us.”

  Further, the simplest “technologies,” or perhaps not even technologies but tools, such as the wheel, are the ones that seem to run the world. In spite of the hype, what we call technologies have a very high mortality rate, as I will show in Chapter 20. Just consider that of all the means of transportation that have been designed in the past three thousand years or more since the attack weapons of the Hyksos and the drawings of Hero of Alexandria, individual transportation today is limited to bicycles and cars (and a few variants in between the two). Even then, technologies seem to go backward and forward, with the more natural and less fragile superseding the technological. The wheel, born in the Middle East, seems to have disappeared after the Arab invasion introduced to the Levant a more generalized use of the camel and the inhabitants figured out that the camel was more robust—hence more efficient in the long run—than the fragile technology of the wheel. In addition, since one person could control six camels but only one carriage, the regression away from technology proved more economically sound.

  Once More, Less Is More

  This story of the suitcase came to tease me when I realized, looking at a porcelain coffee cup, that there existed a simple definition of fragility, hence a straightforward and practical testing heuristic: the simpler and more obvious the discovery, the less equipped we are to figure it out by complicated methods. The key is that the significant can only be revealed through practice. How many of these simple, trivially simple heuristics are currently looking and laughing at us?

  The story of the wheel also illustrates the point of this chapter: both governments and universities have done very, very little for innovation and discovery, precisely because, in addition to their blinding rationalism, they look for the complicated, the lurid, the newsworthy, the narrated, the scientistic, and the grandiose, rarely for the wheel on the suitcase. Simplicity, I realized, does not lead to laurels.

  Mind the Gaps

  As we saw with the stories of Thales and the wheel, antifragility (thanks to the asymmetry effects of trial and error) supersedes intelligence. But some intelligence is needed. From our discussion on rationality, we see that all we need is the ability to accept that what we have on our hands is better than what we had before—in other words, to recognize the existence of the option (or “exercise the option” as people say in the business, that is, take advantage of a valuable alternative that is superior to what precedes it, with a certain gain from switching from one into the other, the only part of the process where rationality is required). And from the history of technology, this ability to use the option given to us by antifragility is not guaranteed: things can be looking at us for a long time. We saw the gap between the wheel and its use. Medical researchers call such lag the “translational gap,” the time difference between formal discovery and first implementation, which, if anything, owing to excessive noise and academic interests, has been shown by Contopoulos-Ioannidis and her peers to be lengthening in modern times.

  The historian David Wootton relates a gap of two centuries between the discovery of germs and the acceptance of germs as a cause of disease, a delay of thirty years between the germ theory of putrefaction and the development of antisepsis, and a delay of sixty years between antisepsis and drug therapy.

  But things can get bad. In the dark ages of medicine, doctors used to rely on the naive rationalistic idea of a balance of humors in the body, and disease was assumed to originate with some imbalance, leading to a series of treatments that were perceived as needed to restore such balance. In her book on humors, Noga Arikha shows that after William Harvey demonstrated the mechanism of blood circulation in the 1620s, one would have expected that such theories and related practices should have disappeared. Yet people continued to refer to spirit and humors, and doctors continued to prescribe, for centuries more, phlebotomies (bloodletting), enemas (I prefer to not explain), and cataplasms (application of a moist piece of bread or cereal on inflamed tissue). This continued even after Pasteur’s evidence that germs were the cause of these infectious diseases.

  Now, as a skeptical empiricist, I do not consider that resisting new technology is necessarily irrational: waiting for time to operate its testing might be a valid approach if one holds that we have an incomplete picture of things. This is what naturalistic risk management is about. However, it is downright irrational if one holds on to an old technology that is not naturalistic at all yet visibly harmful, or when the switch to a new technology (like the wheel on the suitcase) is obviously free of possible side effects that did not exist with the previous one. And resisting removal is downright incompetent and criminal (as I keep saying, removal of something non-natural does not carry long-term side effects; it is typically iatrogenics-free).

  In other words, I do not give the resistance to the implementation of such discoveries any intellectual credit, or explain it by some hidden wisdom and risk management attitude: this is plainly mistaken. It partakes of the chronic lack of heroism and cowardice on the part of professionals: few want to jeopardize their jobs and reputation for the sake of change.

  Search and How Errors Can Be Investments

  Trial and error has one overriding value people fail to understand: it is not really random, rather, thanks to optionality, it requires some rationality. One needs to be intelligent in recognizing the favorable outcome and knowing what to discard.

  And one needs to be rational in not making trial and error completely random. If you are looking for your misplaced wallet in your living room, in a trial and error mode, you exercise rationality by not looking in the same place twice. In many pursuits, every trial, every failure provides additional information, each more valuable than the previous one—if you know what does not work, or where the wallet is not located. With every trial one gets closer to something, assuming an environment in which one knows exactly what one is looking for. We can, from the trial that fails to deliver, figure out progressively where to go.

  I can illustrate it best with the modus operandi of Greg Stemm, who specializes in pulling long-lost shipwrecks from the bottom of the sea. In 2007, he called his (then) biggest find “the Black Swan” after the idea of looking for positive extreme payoffs. The find was quite sizable, a treasure with precious metals now worth a billion dollars. His Black Swan is a Spanish frigate called Nuestra Señora de las Mercedes, which was sunk by the British off the southern coast of Portugal in 1804. Stemm proved to be a representative hunter of positive Black Swans, and someone who can illustrate that such a search is a highly controlled form of randomness.

  I met him and shared ideas with him: his investors (like mine at the time, as I was still involved in that business) were for the most part not programmed to understand that for a treasure hunter, a “bad” quarter (meaning expenses of searching but no finds) was not indicative of distress, as it would be with a steady cash flow business like that of a dentist or prostitute. By some mental domain dependence, people can spend money on, say, office furniture and not call it a “loss,” rather an investment, but would treat cost of search as “loss.”

  Stemm’s method is as follows. He does an extensive analysis of the general area where the ship could be. That data is synthesized into a map drawn with squares of probability. A search area is then designed, taking into account that they must have certainty that the shipwreck is not in a specific area before moving on to a lower probability area. It looks random but it is not. It is the equivalent of looking for a treasure in your house: every search has incrementally a higher probability of yielding a result, but only if you can be certain that the area you have searched does not hold the treasure.
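The search procedure described here can be sketched as a simple elimination loop (a hypothetical illustration, with made-up squares and probabilities, not Stemm's actual software): search the most probable square first, and each time a square is searched with certainty and found empty, remove it and renormalize, so every remaining square's probability rises.

```python
# Hypothetical sketch of a probability-square search: cells carry prior
# probabilities; each exhaustive search of a cell eliminates it, and the
# remaining cells' probabilities are renormalized upward. Controlled, not random.

def search_squares(priors, wreck_location):
    """Search cells in descending probability; return (cell_found, searches)."""
    remaining = dict(priors)
    searches = 0
    while remaining:
        # Pick the most probable unsearched square.
        cell = max(remaining, key=remaining.get)
        searches += 1
        if cell == wreck_location:
            return cell, searches
        # Certain the cell is empty: remove it and renormalize, so every
        # remaining square becomes incrementally more promising.
        del remaining[cell]
        total = sum(remaining.values())
        remaining = {c: p / total for c, p in remaining.items()}
    return None, searches

# Illustrative priors over four squares of ocean floor.
priors = {"A": 0.4, "B": 0.3, "C": 0.2, "D": 0.1}
found, n = search_squares(priors, wreck_location="C")
```

The design choice is the point of the passage: randomness enters only in where the wreck happens to lie; the search itself is a disciplined narrowing, never revisiting an eliminated square.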

  Some readers might not be too excited about the morality of shipwreck-hunting, and could consider that these treasures are national, not private, property. So let us change domain. The method used by Stemm applies to oil and gas exploration, particularly at the bottom of the unexplored oceans, with a difference: in a shipwreck, the upside is limited to the value of the treasure, whereas oil fields and other natural resources are nearly unlimited (or have a very high limit).

  Finally, recall my discussion of random drilling in Chapter 6 and how it seemed superior to more directed techniques. This optionality-driven method of search is not foolishly random. Thanks to optionality, it becomes tamed and harvested randomness.

  Creative and Uncreative Destructions

  Someone who got a (minor) version of the point that generalized trial and error has, well, errors, but without much grasp of asymmetry (or what, since Chapter 12, we have been calling optionality), is the economist Joseph Schumpeter. He realized that some things need to break for the system to improve—what is labeled creative destruction—a notion developed, among so many other ones, by the philosopher Karl Marx and a concept discovered, we will show in Chapter 17, by Nietzsche. But a reading of Schumpeter shows that he did not think in terms of uncertainty and opacity; he was completely smoked by interventionism, under the illusion that governments could innovate by fiat, something that we will contradict in a few pages. Nor did he grasp the notion of layering of evolutionary tensions. More crucially, both he and his detractors (Harvard economists who thought that he did not know mathematics) missed the notion of antifragility as asymmetry (optionality) effects, hence the philosopher’s stone—on which, later—as the agent of growth. That is, they missed half of life.

 