Antifragile: Things That Gain from Disorder
To satisfy the conditions for such immortality, the organisms need to predict the future with perfection—near perfection is not enough. But by letting the organisms go one lifespan at a time, with modifications between successive generations, nature does not need to predict future conditions beyond the extremely vague idea of which direction things should be heading. Actually, even a vague direction is not necessary. Every random event will bring its own antidote in the form of ecological variation. It is as if nature changed itself at every step and modified its strategy every instant.
Consider this in terms of economic and institutional life. If nature ran the economy, it would not continuously bail out its living members to make them live forever. Nor would it have permanent administrations and forecasting departments that try to outsmart the future—it would not let the scam artists of the United States Office of Management and Budget make such mistakes of epistemic arrogance.
If one looks at history as a complex system similar to nature, then, like nature, it won’t let a single empire dominate the planet forever—even if every superpower from the Babylonians to the Egyptians to the Persians to the Romans to modern America has believed in the permanence of its domination and managed to produce historians to theorize to that effect. Systems subjected to randomness—and unpredictability—build a mechanism beyond the robust to opportunistically reinvent themselves each generation, with a continuous change of population and species.
Black Swan Management 101: nature (and nature-like systems) likes diversity between organisms rather than diversity within an immortal organism, unless you consider nature itself the immortal organism, as in the pantheism of Spinoza or that present in Asian religions, or the Stoicism of Chrysippus or Epictetus. If you run into a historian of civilizations, try to explain it to him.
Let us look at how evolution benefits from randomness and volatility (in some dose, of course). The more noise and disturbances in the system, up to a point, barring those extreme shocks that lead to extinction of a species, the more the effect of the reproduction of the fittest and that of random mutations will play a role in defining the properties of the next generation. Say an organism produces ten offspring. If the environment is perfectly stable, all ten will be able to reproduce. But if there is instability, pushing aside five of these descendants (likely to be on average weaker than their surviving siblings), then those that evolution considers (on balance) the better ones will reproduce, improving the fitness of the gene pool. Likewise, if there is variability among the offspring, thanks to occasional random spontaneous mutation, a sort of copying mistake in the genetic code, then the best should reproduce, increasing the fitness of the species. So evolution benefits from randomness by two different routes: randomness in the mutations, and randomness in the environment—both act in a similar way to cause changes in the traits of the surviving next generations.
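The two routes can be caricatured in a toy simulation (a deliberately crude sketch, not a biological model; the fitness scale, mutation size, and culling fraction are invented for illustration). Each organism leaves ten offspring carrying small random copying mistakes; in the volatile environment the weaker half are pushed aside before reproducing.

```python
import random

random.seed(42)

def next_generation(population, volatility, n_offspring=10):
    # Each organism leaves n_offspring descendants, each carrying a
    # small random "copying mistake" added to its fitness.
    offspring = [f + random.gauss(0, 0.1)
                 for f in population for _ in range(n_offspring)]
    # Environmental instability pushes aside the weakest descendants.
    offspring.sort(reverse=True)
    survivors = offspring[:int(len(offspring) * (1 - volatility))]
    # Hold the population size constant by drawing at random
    # from the survivors.
    return random.sample(survivors, len(population))

def mean(xs):
    return sum(xs) / len(xs)

stable, volatile = [1.0] * 50, [1.0] * 50
for _ in range(30):
    stable = next_generation(stable, volatility=0.0)    # all reproduce
    volatile = next_generation(volatile, volatility=0.5)  # half culled

print(round(mean(stable), 2), round(mean(volatile), 2))
```

With no instability, mean fitness merely drifts; with half the offspring culled each generation, the copying mistakes plus environmental harm steadily raise the fitness of the population, even though no individual organism benefits.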
Even when there is extinction of an entire species after some extreme event, no big deal, it is part of the game. This is still evolution at work, as those species that survive are fittest and take over from the lost dinosaurs—evolution is not about a species, but at the service of the whole of nature.
But note that evolution likes randomness only up to some limit.2 If a calamity completely kills life on the entire planet, the fittest will not survive. Likewise, if random mutations occur at too high a rate, then the fitness gain might not stick, might perhaps even reverse thanks to a new mutation: as I will keep repeating, nature is antifragile up to a point but such point is quite high—it can take a lot, a lot of shocks. Should a nuclear event eradicate most of life on earth, but not all life, some rat or bacteria will emerge out of nowhere, perhaps the bottom of the oceans, and the story will start again, without us, and without the members of the Office of Management and Budget, of course.
So, in a way, while hormesis corresponds to situations by which the individual organism benefits from direct harm to itself, evolution occurs when harm makes the individual organism perish and the benefits are transferred to others, the surviving ones, and future generations.
For an illustration of how families of organisms like harm in order to evolve (again, up to a point), though not the organisms themselves, consider the phenomenon of antibiotic resistance. The harder you try to harm bacteria, the stronger the survivors will be—unless you can manage to eradicate them completely. The same with cancer therapy: quite often cancer cells that manage to survive the toxicity of chemotherapy and radiation reproduce faster and take over the void made by the weaker cells.
Organisms Are Populations and Populations Are Organisms
The idea of viewing things in terms of populations, not individuals, with benefits to the former stemming from harm to the latter, came to me from the works on antifragility by the physicist turned geneticist Antoine Danchin.3 For him, analysis needs to accommodate the fact that an organism is not something isolated and stand-alone: there are layering and hierarchies. If you view things in terms of populations, you must transcend the terms “hormesis” and “Mithridatization” as a characterization of antifragility. Why? To rephrase the argument made earlier, hormesis is a metaphor for direct antifragility, when an organism directly benefits from harm; with evolution, something hierarchically superior to that organism benefits from the damage. From the outside, it looks like there is hormesis, but from the inside, there are winners and losers.
How does this layering operate? A tree has many branches, and these look like small trees; further, these large branches have many more smaller branches that sort of look like even smaller trees. This is a manifestation of what is called fractal self-similarity, a vision by the mathematician Benoît Mandelbrot. There is a similar hierarchy in things and we just see the top layer from the outside. The cell has a population of intracellular molecules; in turn the organism has a population of cells, and the species has a population of organisms. A strengthening mechanism for the species comes at the expense of some organisms; in turn the organism strengthens at the expense of some cells, all the way down and all the way up as well.
For instance, if you drink a poisonous substance in small amounts, the mechanism by which your organism gets better is, according to Danchin, evolutionary within your system, with bad (and weak) proteins in the cells replaced by stronger—and younger—ones and the stronger ones being spared (or some similar operation). When you starve yourself of food, it is the bad proteins that are broken down first and recycled by your own body—a process called autophagy. This is a purely evolutionary process, one that selects and kills the weakest for fitness. But one does not need to accept the specific biological theory (like aging proteins and autophagy) to buy the general idea that survival pressures within the organism play a role in its overall improvement under external stress.
THANK YOU, ERRORS
Now we get into errors and how the errors of some people carry benefits for others.
We can simplify the relationships between fragility, errors, and antifragility as follows. When you are fragile, you depend on things following the exact planned course, with as little deviation as possible—for deviations are more harmful than helpful. This is why the fragile needs to be very predictive in its approach, and, conversely, predictive systems cause fragility. When you want deviations, and you don’t care about the possible dispersion of outcomes that the future can bring, since most will be helpful, you are antifragile.
Further, the random element in trial and error is not quite random, if it is carried out rationally, using error as a source of information. If every trial provides you with information about what does not work, you start zooming in on a solution—so every attempt becomes more valuable, more like an expense than an error. And of course you make discoveries along the way.
Learning from the Mistakes of Others
But recall that this chapter is about layering, units, hierarchies, fractal structure, and the difference between the interest of a unit and those of its subunits. So it is often the mistakes of others that benefit the rest of us—and, sadly, not them. We saw that stressors are information, in the right context. For the antifragile, harm from errors should be less than the benefits. We are talking about some, not all, errors, of course; those that do not destroy a system help prevent larger calamities. The engineer and historian of engineering Henry Petroski presents a very elegant point. Had the Titanic not had that famous accident, as fatal as it was, we would have kept building larger and larger ocean liners and the next disaster would have been even more tragic. So the people who perished were sacrificed for the greater good; they unarguably saved more lives than were lost. The story of the Titanic illustrates the difference between gains for the system and harm to some of its individual parts.
The same can be said of the debacle of Fukushima: one can safely say that it made us aware of the problem with nuclear reactors (and small probabilities) and prevented larger catastrophes. (Note that the errors of naive stress testing and reliance on risk models were quite obvious at the time; as with the economic crisis, nobody wanted to listen.)
Every plane crash brings us closer to safety, improves the system, and makes the next flight safer—those who perish contribute to the overall safety of others. Swissair flight 111, TWA flight 800, and Air France flight 447 allowed the improvement of the system. But these systems learn because they are antifragile and set up to exploit small errors; the same cannot be said of economic crashes, since the economic system is not antifragile the way it is presently built. Why? There are hundreds of thousands of plane flights every year, and a crash in one plane does not involve others, so errors remain confined and highly epistemic—whereas globalized economic systems operate as one: errors spread and compound.
Again, crucially, we are talking of partial, not general, mistakes, small, not severe and terminal ones. This creates a separation between good and bad systems. Good systems such as airlines are set up to have small errors, independent from each other—or, in effect, negatively correlated to each other, since mistakes lower the odds of future mistakes. This is one way to see how one environment can be antifragile (aviation) and the other fragile (modern economic life with “earth is flat” style interconnectedness).
If every plane crash makes the next one less likely, every bank crash makes the next one more likely. We need to eliminate the second type of error—the one that produces contagion—in our construction of an ideal socioeconomic system. Let us examine Mother Nature once again.
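The asymmetry between the two environments can be sketched in a few lines of code (a stylized caricature; the probabilities and the `feedback` parameter are invented for illustration, not calibrated to any real industry):

```python
import random

random.seed(0)

def cumulative_failures(years, p_fail, feedback, n_units=100):
    # n_units independent units, each failing with probability p_fail
    # per year. After every failure, p_fail is multiplied by `feedback`:
    # feedback < 1 means each failure teaches the system (aviation),
    # feedback > 1 means each failure weakens the rest (contagion).
    total = 0
    for _ in range(years):
        failures = sum(random.random() < p_fail for _ in range(n_units))
        total += failures
        p_fail = min(1.0, p_fail * feedback ** failures)
    return total

aviation = cumulative_failures(50, p_fail=0.05, feedback=0.9)
banking = cumulative_failures(50, p_fail=0.05, feedback=1.1)
print(aviation, banking)
```

In the first regime, failures are self-damping and cumulative losses stay small; in the second, each failure raises the odds of the next, and losses compound toward systemic collapse.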
The natural was built from nonsystemic mistake to nonsystemic mistake: my errors lifting stones, when I am well calibrated, translate into small injuries that guide me the next time, as I try to avoid pain—after all, that’s the purpose of pain. Leopards, who move like a true symphony of nature, are not instructed by personal trainers on the “proper form” to lift a deer up a tree. Human advice might work with artificial sports, like, say, tennis, bowling, or gun shooting, not with natural movements.
Some businesses love their own mistakes. Reinsurance companies, who focus on insuring catastrophic risks (and are used by insurance companies to “re-insure” such non-diversifiable risks), manage to do well after a calamity or tail event that causes them to take a hit. If they are still in business and “have their powder dry” (few manage to have plans for such contingency), they make it up by disproportionately raising premia—customers overreact and pay up for insurance. They claim to have no idea about fair value, that is, proper pricing, for reinsurance, but they certainly know that it is overpriced at times of stress, which is sufficient for them to make a long-term shekel. All they need is to keep their mistakes small enough so they can survive them.
How to Become Mother Teresa
Variability causes mistakes and adaptations; it also allows you to know who your friends are. Both your failures and your successes will give you information. But, and this is one of the good things in life, sometimes you only know about someone’s character after you harm them with an error for which you are solely responsible—I have been astonished at the generosity of some persons in the way they forgave me for my mistakes.
And of course you learn from the errors of others. You may never know what type of person someone is unless they are given opportunities to violate moral or ethical codes. I remember a classmate, a girl in high school who seemed nice and honest and part of my childhood group of anti-materialistic utopists. I learned that against my expectations (and her innocent looks) she didn’t turn out to be Mother Teresa or Rosa Luxemburg, as she dumped her first (rich) husband for another, richer person, whom she dumped upon his first financial difficulties for yet another richer and more powerful (and generous) lover. In a nonvolatile environment I (and most probably she, too) would have mistaken her for a utopist and a saint. Some members of society—those who did not marry her—got valuable information while others, her victims, paid the price.
Further, my characterization of a loser is someone who, after making a mistake, doesn’t introspect, doesn’t exploit it, feels embarrassed and defensive rather than enriched with a new piece of information, and tries to explain why he made the mistake rather than moving on. These types often consider themselves the “victims” of some large plot, a bad boss, or bad weather.
Finally, a thought. He who has never sinned is less reliable than he who has only sinned once. And someone who has made plenty of errors—though never the same error more than once—is more reliable than someone who has never made any.
WHY THE AGGREGATE HATES THE INDIVIDUAL
We saw that antifragility in biology works thanks to layers. This rivalry between suborganisms contributes to evolution: cells within our bodies compete; within the cells, proteins compete, all the way through. Let us translate the point into human endeavors. The economy has an equivalent layering: individuals, artisans, small firms, departments within corporations, corporations, industries, the regional economy, and, finally, on top, the general economy—one can even have thinner slicing with a larger number of layers.
For the economy to be antifragile and undergo what is called evolution, every single individual business must necessarily be fragile, exposed to breaking—evolution needs organisms (or their genes) to die when supplanted by others, in order to achieve improvement, or to avoid reproduction when they are not as fit as someone else. Accordingly, the antifragility of the higher level may require the fragility—and sacrifice—of the lower one. Every time you use a coffeemaker for your morning cappuccino, you are benefiting from the fragility of the coffeemaking entrepreneur who failed. He failed in order to help put the superior merchandise on your kitchen counter.
Also consider traditional societies. There, too, we have a similar layering: individuals, immediate families, extended families, tribes, people using the same dialects, ethnicities, groups.
While sacrifice as a modus is obvious in the case of ant colonies, I am certain that individual businessmen are not overly interested in hara-kiri for the greater good of the economy; they are therefore necessarily concerned with seeking antifragility or at least some level of robustness for themselves. That’s not necessarily compatible with the interest of the collective—that is, the economy. So there is a problem in which the property of the sum (the aggregate) varies from that of each one of the parts—in fact, it wants harm to the parts.
It is painful to think about ruthlessness as an engine of improvement.
Now what is the solution? There is none, alas, that can please everyone—but there are ways to mitigate the harm to the very weak.
The problem is graver than you think. People go to business school to learn how to do well while ensuring their survival—but what the economy, as a collective, wants them to do is to not survive, rather to take a lot, a lot of imprudent risks themselves and be blinded by the odds. Their respective industries improve from failure to failure. Natural and naturelike systems want some overconfidence on the part of individual economic agents, i.e., the overestimation of their chances of success and underestimation of the risks of failure in their businesses, provided their failure does not impact others. In other words, they want local, but not global, overconfidence.
We saw that the restaurant business is wonderfully efficient precisely because restaurants, being vulnerable, go bankrupt every minute, and entrepreneurs ignore such a possibility, as they think that they will beat the odds. In other words, some class of rash, even suicidal, risk taking is healthy for the economy—under the condition that not all people take the same risks and that these risks remain small and localized.
Now, by disrupting the model, as we will see, with bailouts, governments typically favor a certain class of firms that are large enough to require being saved in order to avoid contagion to other businesses. This is the opposite of healthy risk-taking; it is transferring fragility from the unfit to the collective. People have difficulty realizing that the solution is building a system in which nobody’s fall can drag others down—for continuous failures work to preserve the system. Paradoxically, many government interventions and social policies end up hurting the weak and consolidating the established.