Convexity is about acceleration. The remarkable thing about measuring convexity effects to detect model errors is that even if the model used for the computation is wrong, it can tell you if an entity is fragile and by how much it is fragile. As with the defective scale, we are only looking for second-order effects.
2 I am simplifying a bit. There may be a few degrees’ variation around 70 at which the grandmother might be better off than just at 70, but I skip this nuance here. In fact younger humans are antifragile to thermal variations, up to a point, benefiting from some variability, then losing such antifragility with age (or disuse, as I suspect that thermal comfort ages people and makes them fragile).
3 I remind the reader that this section is technical and can be skipped.
4 The grandmother does better at 70 degrees Fahrenheit than at an average of 70 degrees with one hour at 0, another at 140 degrees. The more dispersion around the average, the more harm for her. Let us see the counterintuitive effect in terms of x and function of x, f(x). Let us write the health of the grandmother as f(x), with x the temperature. We have a function of the average temperature, f{(0 + 140)/2}, showing the grandmother in excellent shape. But {f(0) + f(140)}/2 leaves us with a dead grandmother at f(0) and a dead grandmother at f(140), for an “average” of a dead grandmother. We can see an explanation of the statement that the properties of f(x) and those of x become divorced from each other when f(x) is nonlinear. The average of f(x) is different from f(average of x).
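The footnote's arithmetic can be made concrete in a few lines. The sketch below assumes a made-up concave "health" response (the quadratic form and the function name are purely illustrative, not the book's model); it shows f(average of x) diverging from the average of f(x), and the second-order test of the main text: if dispersion around a point lowers the average outcome, the entity is fragile there, whatever the exact model.

```python
# Illustrative "health" response: peaks at 70°F, collapses toward the extremes.
# The quadratic form is an assumption for illustration only.
def health(temp_f):
    return max(0.0, 1.0 - ((temp_f - 70.0) / 70.0) ** 2)

temps = [0.0, 140.0]                      # one hour at 0°F, one at 140°F
avg_temp = sum(temps) / len(temps)        # average temperature = 70°F

f_of_average = health(avg_temp)                              # f((0 + 140)/2)
average_of_f = sum(health(t) for t in temps) / len(temps)    # (f(0) + f(140))/2

print(f_of_average)   # 1.0 -> grandmother in excellent shape at the average
print(average_of_f)   # 0.0 -> the "average" of two dead grandmothers

# Second-order (convexity) effect: if the average of f(x - d) and f(x + d)
# falls below f(x), the response is concave there, i.e., fragile to
# variation around x -- even if the model itself is wrong.
d = 20.0
convexity_effect = (health(70.0 - d) + health(70.0 + d)) / 2 - health(70.0)
print(convexity_effect)   # negative (about -0.082): harmed by dispersion
```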
BOOK VI
Via Negativa
Recall that we had no name for the color blue but managed rather well without it—we stayed for a long part of our history culturally, not biologically, color blind. And before the composition of Chapter 1, we did not have a name for antifragility, yet systems have relied on it effectively in the absence of human intervention. There are many things without words, matters that we know and can act on but cannot describe directly, cannot capture in human language or within the narrow human concepts that are available to us. Almost anything around us of significance is hard to grasp linguistically—and in fact the more powerful, the more incomplete our linguistic grasp.
But if we cannot express what something is exactly, we can say something about what it is not—the indirect rather than the direct expression. The “apophatic” focuses on what cannot be said directly in words, from the Greek apophasis (saying no, or mentioning without mentioning). The method began as an avoidance of direct description, leading to a focus on negative description, what is called in Latin via negativa, the negative way, after theological traditions, particularly in the Eastern Orthodox Church. Via negativa does not try to express what God is—leave that to the primitive brand of contemporary thinkers and philosophasters with scientistic tendencies. It just lists what God is not and proceeds by the process of elimination. The idea is mostly associated with the mystical theologian Pseudo-Dionysos the Areopagite. He was some obscure Near Easterner by the name of Dionysos who wrote powerful mystical treatises and was for a long time confused with Dionysos the Areopagite, a judge in Athens who was converted by the preaching of Paul the Apostle. Hence the qualifier of “Pseudo” added to his name.
Neoplatonists were followers of Plato’s ideas; they focused mainly on Plato’s forms, those abstract objects that had a distinct existence on their own. Pseudo-Dionysos was the disciple of Proclus the Neoplatonist (himself the student of Syrianus, another Syrian Neoplatonist). Proclus was known to repeat the metaphor that statues are carved by subtraction. I have often read a more recent version of the idea, with the following apocryphal pun. Michelangelo was asked by the pope about the secret of his genius, particularly how he carved the statue of David, largely considered the masterpiece of all masterpieces. His answer was: “It’s simple. I just remove everything that is not David.”
The reader might thus recognize the logic behind the barbell. Remember from the logic of the barbell that it is necessary to first remove fragilities.
Where Is the Charlatan?
Recall that the interventionista focuses on positive action—doing. Just like positive definitions, we saw that acts of commission are respected and glorified by our primitive minds and lead to, say, naive government interventions that end in disaster, followed by generalized complaints about naive government interventions, as these, it is now accepted, end in disaster, followed by more naive government interventions. Acts of omission, not doing something, are not considered acts and do not appear to be part of one’s mission. Table 3 showed how generalized this effect can be across domains, from medicine to business.
I have used all my life a wonderfully simple heuristic: charlatans are recognizable in that they will give you positive advice, and only positive advice, exploiting our gullibility and sucker-proneness for recipes that hit you in a flash as just obvious, then evaporate later as you forget them. Just look at the “how to” books with, in their title, “Ten Steps for—” (fill in: enrichment, weight loss, making friends, innovation, getting elected, building muscles, finding a husband, running an orphanage, etc.). Yet in practice it is the negative that’s used by the pros, those selected by evolution: chess grandmasters usually win by not losing; people become rich by not going bust (particularly when others do); religions are mostly about interdicts; the learning of life is about what to avoid. You reduce most of your personal risks of accident thanks to a small number of measures.
Further, the point of being fooled by randomness is that in most circumstances fraught with a high degree of randomness, one cannot really tell if a successful person has skills, or if a person with skills will succeed—but we can pretty much predict the negative, that a person totally devoid of skills will eventually fail.
Subtractive Knowledge
Now when it comes to knowledge, the same applies. The greatest—and most robust—contribution to knowledge consists in removing what we think is wrong—subtractive epistemology.
In life, antifragility is reached by not being a sucker. In Peri mystikes theologias, Pseudo-Dionysos did not use these exact words, nor did he discuss disconfirmation, nor did he get the idea with clarity, but in my view he figured out this subtractive epistemology and asymmetries in knowledge. I have called “Platonicity” the love of some crisp abstract forms, the theoretical forms and universals that make us blind to the mess of reality and cause Black Swan effects. Then I realized that there was an asymmetry. I truly believe in Platonic ideas when they come in reverse, like negative universals.
So the central tenet of the epistemology I advocate is as follows: we know a lot more what is wrong than what is right, or, phrased according to the fragile/robust classification, negative knowledge (what is wrong, what does not work) is more robust to error than positive knowledge (what is right, what works). So knowledge grows by subtraction much more than by addition—given that what we know today might turn out to be wrong but what we know to be wrong cannot turn out to be right, at least not easily. If I spot a black swan (not capitalized), I can be quite certain that the statement “all swans are white” is wrong. But even if I have never seen a black swan, I can never hold such a statement to be true. Rephrasing it again: since one small observation can disprove a statement, while millions can hardly confirm it, disconfirmation is more rigorous than confirmation.
This idea has been associated in our times with the philosopher Karl Popper, and I quite mistakenly thought that he was its originator (though he is at the origin of an even more potent idea on the fundamental inability to predict the course of history). The notion, it turned out, is vastly more ancient, and was one of the central tenets of the skeptical-empirical school of medicine of the postclassical era in the Eastern Mediterranean. It was well known to a group of nineteenth-century French scholars who rediscovered these works. And this idea of the power of disconfirmation permeates the way we do hard science.
As you can see, we can link this to the general tableaus of positive (additive) and negative (subtractive): negative knowledge is more robust. But it is not perfect. Popper has been criticized by philosophers for his treatment of disconfirmation as hard, unequivocal, black-and-white. It is not clear-cut: it is impossible to figure out whether an experiment failed to produce the intended results—hence “falsifying” the theory—because of the failure of the tools, because of bad luck, or because of fraud by the scientist. Say you saw a black swan. That would certainly invalidate the idea that all swans are white. But what if you had been drinking Lebanese wine, or hallucinating from spending too much time on the Web? What if it was a dark night, in which all swans look gray? Let us say that, in general, failure (and disconfirmation) are more informative than success and confirmation, which is why I claim that negative knowledge is just “more robust.”
Now, before starting to write this section, I spent some time scouring Popper’s complete works wondering how the great thinker, with his obsessive approach to falsification, completely missed the idea of fragility. His masterpiece, The Poverty of Historicism, in which he presents the limits of forecasting, shows the impossibility of an acceptable representation of the future. But he missed the point that if an incompetent surgeon is operating on a brain, one can safely predict serious damage, even the death of the patient. Yet such subtractive representation of the future is perfectly in line with his idea of disconfirmation, its logical second step. What he calls falsification of a theory should lead, in practice, to the breaking of the object of its application.
In political systems, a good mechanism is one that helps remove the bad guy; it’s not about what to do or who to put in. For the bad guy can cause more harm than the collective actions of good ones. Jon Elster goes further; he recently wrote a book with the telling title Preventing Mischief in which he bases negative action on Bentham’s idea that “the art of the legislator is limited to the prevention of everything that might prevent the development of their [members of the assembly] liberty and their intelligence.”
And, as expected, via negativa is part of classical wisdom. For the Arab scholar and religious leader Ali Bin Abi-Taleb (no relation), keeping one’s distance from an ignorant person is equivalent to keeping company with a wise man.
Finally, consider this modernized version in a saying from Steve Jobs: “People think focus means saying yes to the thing you’ve got to focus on. But that’s not what it means at all. It means saying no to the hundred other good ideas that there are. You have to pick carefully. I’m actually as proud of the things we haven’t done as the things I have done. Innovation is saying no to 1,000 things.”
BARBELLS, AGAIN
Subtractive knowledge is a form of barbell. Critically, it is convex. What is wrong is quite robust, what you don’t know is fragile and speculative, but you do not take it seriously so you make sure it does not harm you in case it turns out to be false.
Now another application of via negativa lies in the less-is-more idea.
Less Is More
The less-is-more idea in decision making can be traced to Spyros Makridakis, Robyn Dawes, Dan Goldstein, and Gerd Gigerenzer, who have all found in various contexts that simpler methods for forecasting and inference can work much, much better than complicated ones. Their simple rules of thumb are not perfect, but are designed not to be perfect; adopting some intellectual humility and abandoning the aim at sophistication can yield powerful effects. The pair of Goldstein and Gigerenzer coined the notion of “fast and frugal” heuristics that make good decisions despite limited time, knowledge, and computing power.
I realized that the less-is-more heuristic fell squarely into my work in two places. First, extreme effects: there are domains in which the rare event (I repeat, good or bad) plays a disproportionate share and we tend to be blind to it, so focusing on the exploitation of such a rare event, or protection against it, changes a lot, a lot of the risky exposure. Just worry about Black Swan exposures, and life is easy.
Less is more has proved to be shockingly easy to find and apply—and “robust” to mistakes and change of minds. There may not be an easily identifiable cause for a large share of the problems, but often there is an easy solution (not to all problems, but good enough; I mean really good enough), and such a solution is immediately identifiable, sometimes with the naked eye rather than the use of complicated analyses and highly fragile, error-prone, cause-ferreting nerdiness.
Some people are aware of the eighty/twenty idea, based on the discovery by Vilfredo Pareto more than a century ago that 20 percent of the people in Italy owned 80 percent of the land, and vice versa. Of these 20 percent, 20 percent (that is, 4 percent) would have owned around 80 percent of the 80 percent (that is, 64 percent). We end up with less than 1 percent representing about 50 percent of the total. These describe winner-take-all Extremistan effects. These effects are very general, from the distribution of wealth to book sales per author.
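The recursive arithmetic is easy to check. A minimal sketch, assuming the 80/20 split holds exactly at every level (Pareto's ratios are of course only rough):

```python
# Apply the 80/20 split recursively within the top slice.
# The 0.20 / 0.80 constants are Pareto's rough ratios, taken as exact here.
people_share, wealth_share = 1.0, 1.0
for _ in range(3):
    people_share *= 0.20   # keep the top 20% of the current slice of people
    wealth_share *= 0.80   # they hold 80% of that slice's wealth
    print(f"top {people_share:.1%} of people hold {wealth_share:.1%} of the total")

# Output:
# top 20.0% of people hold 80.0% of the total
# top 4.0% of people hold 64.0% of the total
# top 0.8% of people hold 51.2% of the total
```

Three iterations land on the figure in the text: less than 1 percent of people holding about 50 percent of the total.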
Few realize that we are moving into the far more uneven distribution of 99/1 across many things that used to be 80/20: 99 percent of Internet traffic is attributable to less than 1 percent of sites, 99 percent of book sales come from less than 1 percent of authors … and I need to stop because numbers are emotionally stirring. Almost everything contemporary has winner-take-all effects, which includes sources of harm and benefits. Accordingly, as I will show, 1 percent modification of systems can lower fragility (or increase antifragility) by about 99 percent—and all it takes is a few steps, very few steps, often at low cost, to make things better and safer.
For instance, a small number of homeless people cost the states a disproportionate share of the bills, which makes it obvious where to look for the savings. A small number of employees in a corporation cause the most problems, corrupt the general attitude—and vice versa—so getting rid of these is a great solution. A small number of customers generate a large share of the revenues. I get 95 percent of my smear postings from the same three obsessive persons, all representing the same prototypes of failure (one of whom has written, I estimate, close to one hundred thousand words in posts—he needs to write more and more and find more and more stuff to critique in my work and personality to get the same effect). When it comes to health care, Ezekiel Emanuel showed that half the population accounts for less than 3 percent of the costs, with the sickest 10 percent consuming 64 percent of the total pie. Bent Flyvbjerg (of Chapter 18) showed in his Black Swan management idea that the bulk of cost overruns by corporations are simply attributable to large technology projects—implying that that’s what we need to focus on instead of talking and talking and writing complicated papers.
As they say in the mafia, just work on removing the pebble in your shoe.
There are some domains, like, say, real estate, in which problems and solutions are crisply summarized by a heuristic, a rule of thumb to look for the three most important properties: “location, location, and location”—much of the rest is supposed to be chickensh***t. Not quite and not always true, but it shows the central thing to worry about, as the rest takes care of itself.
Yet people want more data to “solve problems.” I once testified in Congress against a project to fund a crisis forecasting project. The people involved were blind to the paradox that we have never had more data than we have now, yet have less predictability than ever. More data—such as paying attention to the eye colors of the people around when crossing the street—can make you miss the big truck. When you cross the street, you remove data, anything but the essential threat.1 As Paul Valéry once wrote: que de choses il faut ignorer pour agir—how many things one should disregard in order to act.
Convincing—and confident—disciplines, say, physics, tend to use little statistical backup, while political science and economics, which have never produced anything of note, are full of elaborate statistics and statistical “evidence” (and you know that once you remove the smoke, the evidence is not evidence). The situation in science is similar to detective novels in which the person with the largest number of alibis turns out to be the guilty one. And you do not need reams of paper full of data to destroy the megatons of papers using statistics in economics: the simple argument that Black Swans and tail events run the socioeconomic world—and these events cannot be predicted—is sufficient to invalidate their statistics.
We have further evidence of the potency of less-is-more from the following experiment. Christopher Chabris and Daniel Simons, in their book The Invisible Gorilla, show how people watching a video of a basketball game, when diverted with attention-absorbing details such as counting passes, can completely miss a gorilla stepping into the middle of the court.
I discovered that I had been intuitively using the less-is-more idea as an aid in decision making (contrary to the method of putting a series of pros and cons side by side on a computer screen). For instance, if you have more than one reason to do something (choose a doctor or veterinarian, hire a gardener or an employee, marry a person, go on a trip), just don’t do it. It does not mean that one reason is better than two, just that by invoking more than one reason you are trying to convince yourself to do something. Obvious decisions (robust to error) require no more than a single reason. Likewise the French army had a heuristic to reject excuses for absenteeism that invoked more than one reason, like death of grandmother, cold virus, and being bitten by a boar. If someone attacks a book or idea using more than one argument, you know it is not real: nobody says “he is a criminal, he killed many people, and he also has bad table manners and bad breath and is a very poor driver.”