Nor has the vocation of Levantine prophet been a particularly desirable professional occupation. As I said at the beginning of the chapter, acceptance was far from guaranteed: Jesus, mentioning the fate of Elijah (who warned against Baal, then ironically had to go find solace in Sidon, where Baal was worshipped), announced that no one becomes a prophet in his own land. And the prophetic mission was not necessarily voluntary. Consider Jeremiah’s life, laden with jeremiads (lamentations): his unpleasant warnings about destruction and captivity (and their causes) did not make him particularly popular; he was the personification of the notion of “shoot the messenger” and of the expression veritas odium parit—truth brings hatred. Jeremiah was beaten, punished, persecuted, and the victim of numerous plots, some involving his own brothers. Apocryphal and imaginative accounts even have him stoned to death in Egypt.
Further north of the Semites, in the Greek tradition, we find the same focus on messages, warnings about the present, and the same punishment inflicted on those able to understand things others don’t. For example, Cassandra gets the gift of prophecy, along with the curse of not being believed, when the temple snakes cleaned her ears so she could hear some special messages. Tiresias was made blind and transformed into a woman for revealing the secrets of the gods—but, as a consolation, Athena licked his ears so he could understand secrets in the songs of birds.
Recall the inability we saw in Chapter 2 to learn from past behavior. The problem with lack of recursion in learning—lack of second-order thinking—is as follows. If those delivering some messages deemed valuable for the long term have been persecuted in past history, one would expect that there would be a correcting mechanism, that intelligent people would end up learning from such historical experience so those delivering new messages would be greeted with the new understanding in mind. But nothing of the sort takes place.
This lack of recursive thinking applies not just to prophecy, but to other human activities as well: if you believe that what will work and do well is going to be a new idea that others did not think of, what we commonly call “innovation,” then you would expect people to pick up on it and develop a clearer eye for new ideas, without too much reference to the perception of others. But they don’t: something deemed “original” tends to be modeled on something that was new at the time but is no longer new, so for many scientists being an Einstein means solving a problem similar to the one Einstein solved, when at the time Einstein was not solving a standard problem at all. The very idea of being an Einstein in physics is no longer original. I have detected a similar error in the area of risk management, made by scientists trying to be new in a standard way. People in risk management consider risky only things that have hurt them in the past (given their focus on “evidence”), not realizing that, in the past, before these events took place, the occurrences that hurt them severely were completely without precedent, escaping standards. And my personal efforts to make them step outside this frame and consider such second-order effects have failed—as have my efforts to make them aware of the notion of fragility.
EMPEDOCLES’ DOG
In Aristotle’s Magna Moralia, there is a possibly apocryphal story about Empedocles, the pre-Socratic philosopher, who was asked why a dog prefers to always sleep on the same tile. His answer was that there had to be some likeness between the dog and that tile. (Actually the story might be even twice as apocryphal since we don’t know if Magna Moralia was actually written by Aristotle himself.)
Consider the match between the dog and the tile. A natural, biological, explainable or nonexplainable match, confirmed by long series of recurrent frequentation—in place of rationalism, just consider the history of it.
Which brings me to the conclusion of our exercise in prophecy.
I surmise that those human technologies such as writing and reading that have survived are like the tile to the dog, a match between natural friends, because they correspond to something deep in our nature.
Every time I hear someone trying to make a comparison between a book and an e-reader, or something ancient and a new technology, “opinions” pop up, as if reality cared about opinions and narratives. There are secrets to our world that only practice can reveal, and no opinion or analysis will ever capture in full.
This secret property is, of course, revealed through time, and, thankfully, only through time.
What Does Not Make Sense
Let’s take this idea of Empedocles’ dog a bit further: if something that makes no sense to you (say, religion—if you are an atheist—or some age-old habit or practice called irrational) has been around for a very, very long time, then, irrational or not, you can expect it to stick around much longer, and to outlive those who call for its demise.
1 There is anecdotal evidence from barefoot runners and users of “five finger” style athletic shoes—which includes myself—that one’s feet store some memory of the terrain, remembering where they have been in the past.
2 If something does not have a natural upper bound, then the distribution of any specified event time is constrained only by fragility.
3 The phrase originates, it seems, with a June 13, 1964, article in The New Republic, though the article made the mistake of applying it to perishable items. The author wrote that “the future career expectations of a television comedian is proportional to the total amount of his past exposure on the medium.” This would work for a young comedian, not an older one (comedians are, alas, perishable items). But technologies and books do not have such a constraint.
4 This is where my simplification lies: I am assuming that every year survived doubles the additional life expectancy. It can actually get better, increasing it by 2½ times or more. So the Lindy effect says, mathematically, that the nonperishable has a life expectancy that increases with every day it survives.
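The arithmetic in this footnote can be made concrete with a short sketch (the language, the function name, and the proportionality constant `c` are my illustrative assumptions, not the book's; the book only asserts that expectancy grows at least in proportion to age):

```python
# Minimal sketch of the Lindy effect as stated in footnote 4: for a
# nonperishable item, expected ADDITIONAL life is proportional to the
# time already survived. c = 1.0 corresponds to "every year survived
# doubles the additional life expectancy"; the footnote notes c can
# be 2.5 or more. These values are illustrative, not estimates.

def lindy_expectancy(age_years: float, c: float = 1.0) -> float:
    """Expected additional lifetime of a nonperishable item that has
    already survived `age_years`, under the proportionality model."""
    return c * age_years

# A technology in use for 40 years is expected to last ~40 more;
# one in use for 400 years, ~400 more.
print(lindy_expectancy(40))   # 40.0
print(lindy_expectancy(400))  # 400.0
```

The point of the sketch is only the shape of the rule: unlike a perishable item (whose remaining expectancy falls with age), here remaining expectancy rises with age.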
5 Note also that the Lindy effect is invariant to the definition of the technology. You can define a technology as a “convertible car,” a more general “car,” a “bound book,” or a broadly defined “book” (which would include electronic texts); the life expectancy will concern the item as defined.
6 By the same Lindy effect, diseases and conditions that were not known to be diseases a hundred or so years ago are likely to be either (1) diseases of civilization, curable by via negativa, or (2) not diseases, just invented conditions. This applies most to psychological “conditions” and buzzwords putting people in silly buckets: “Type A,” “passive aggressive,” etc.
7 I have had the privilege of reading a five-hundred-year-old book, an experience hardly different from that of reading a modern book. Compare such robustness to the lifespan of electronic documents: some of the computer files of my manuscripts that are less than a decade old are now irretrievable.
CHAPTER 21
Medicine, Convexity, and Opacity
What they call nonevidence—Where medicine fragilizes humans, then tries to save them—Newton’s law or evidence?
The history of medicine is the story—largely documented—of the dialectic between doing and thinking, and of how to make decisions under opacity. In the medieval Mediterranean, Maimonides, Avicenna, Al-Ruhawi, and the Syriac doctors such as Hunain Ibn Ishaq were at once philosophers and doctors. A doctor in the medieval Semitic world was called Al-Hakim, “the wise,” or “practitioner of wisdom,” a synonym for philosopher or rabbi (hkm is the Semitic root for “wisdom”). Even in the earlier period there was a crop of Hellenized fellows who stood in the exact middle between medicine and the practice of philosophy—the great skeptic philosopher Sextus Empiricus was himself a doctor and a member of the skeptical empirical school. So was Menodotus of Nicomedia, of the experience-based school that was the predecessor of evidence-based medicine—on whom a bit more in a few pages. The works of these thinkers, or whatever of them remains extant, are quite refreshing for those of us who distrust those who talk without doing.
Simple, quite simple decision rules and heuristics emerge from this chapter. Via negativa, of course (by removal of the unnatural):
only resort to medical techniques when the health payoff is very large (say, saving a life) and visibly exceeds its potential harm, such as incontrovertibly needed surgery or lifesaving medicine (penicillin). It is the same as with government intervention. This is squarely Thalesian, not Aristotelian (that is, decision making based on payoffs, not knowledge). For in these cases medicine has positive asymmetries—convexity effects—and the outcome will be less likely to produce fragility. Otherwise, in situations in which the benefits of a particular medicine, procedure, or nutritional or lifestyle modification appear small—say, those aiming for comfort—we have a large potential sucker problem (hence putting us on the wrong side of convexity effects). Actually, one of the unintended side benefits of the theorems that Raphael Douady and I developed in our paper mapping risk detection techniques (in Chapter 19) is an exact link between (a) nonlinearity in exposure or dose-response and (b) potential fragility or antifragility.
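The link the paragraph draws between nonlinearity of dose-response and fragility can be sketched with a toy heuristic (this is my simplified illustration of the idea, not the actual estimator of the Taleb-Douady paper; the function names and the two sample dose-response curves are hypothetical):

```python
# Toy second-difference test for the convexity of a dose-response
# curve: perturb the dose x symmetrically by +/- delta and compare
# the average outcome to the outcome at x. A negative bias (concave
# response) marks fragility to variation in dose; a positive bias
# (convex response) marks antifragility; ~0 means locally linear.

def convexity_bias(response, x, delta):
    """0.5*(f(x+d) + f(x-d)) - f(x): sign gives local convexity."""
    return 0.5 * (response(x + delta) + response(x - delta)) - response(x)

# Hypothetical dose-response curves, for illustration only:
concave_harm = lambda d: -d**2   # losses accelerate with dose: fragile
convex_gain = lambda d: d**2     # gains accelerate with dose: antifragile

print(convexity_bias(concave_harm, 1.0, 0.5) < 0)  # True: fragile
print(convexity_bias(convex_gain, 1.0, 0.5) > 0)   # True: antifragile
```

The design mirrors the chapter's decision rule: when the response is concave (small visible benefits, accelerating hidden harm), variation and intervention put you on the sucker side; when it is convex, they work in your favor.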
I also extend the problem to epistemological grounds and make rules for what should be considered evidence: as with whether a cup should be considered half-empty or half-full, there are situations in which we focus on absence of evidence, others in which we focus on evidence. In some cases one can be confirmatory, in others not—it depends on the risks. Take smoking, which was, at some stage, viewed as bringing small gains in pleasure and even health (truly, people thought it was a good thing). It took decades for its harm to become visible. Yet had someone questioned it, he would have faced the canned-naive-academized and faux-expert response “do you have evidence that this is harmful?” (the same type of response as “is there evidence that polluting is harmful?”). As usual, the solution is simple, an extension of via negativa and Fat Tony’s don’t-be-a-sucker rule: the non-natural needs to prove its benefits, not the natural—according to the statistical principle outlined earlier that nature is to be considered much less of a sucker than humans. In a complex domain, only time—a long time—is evidence.
For any decision, the unknown will preponderate on one side more than the other.
The “do you have evidence” fallacy, mistaking no evidence of harm for evidence of no harm, is similar to misinterpreting NED (no evidence of disease) as evidence of no disease. This is the same error as mistaking absence of evidence for evidence of absence, the one that tends to affect smart and educated people, as if education made people more confirmatory in their responses and more liable to fall into simple logical errors.
And recall that under nonlinearities, the simple statements “harmful” or “beneficial” break down: it is all in the dosage.
HOW TO ARGUE IN AN EMERGENCY ROOM
I once broke my nose … walking. For the sake of antifragility, of course. I was trying to walk on uneven surfaces, as part of my antifragility program, under the influence of Loic Le Corre, who believes in naturalistic exercise. It was exhilarating; I felt the world was richer, more fractal, and when I contrasted this terrain with the smooth surfaces of sidewalks and corporate offices, those felt like prisons. Unfortunately, I was carrying something much less ancestral, a cellular phone, which had the insolence to ring in the middle of my walk.
In the emergency room, the doctor and staff insisted that I should “ice” my nose, meaning apply an ice-cold patch to it. In the middle of the pain, it hit me that the swelling Mother Nature gave me was most certainly not directly caused by the trauma. It was my own body’s response to the injury. It seemed to me an insult to Mother Nature to override her programmed reactions unless we had a good reason to do so, backed by proper empirical testing showing that we humans can do better; the burden of evidence falls on us humans. So I asked the emergency room doctor whether he had any statistical evidence of benefits from applying ice to my nose, or whether the practice was just a naive version of interventionism.
His response was: “You have a nose the size of Cleveland and you are now interested in … numbers?” I recall developing from his blurry remarks the thought that he had no answer.
Effectively, he had no answer, because as soon as I got to a computer, I was able to confirm that there is no compelling empirical evidence in favor of the reduction of swelling. At least, not outside of the very rare cases in which the swelling would threaten the patient, which was clearly not the case. It was pure sucker-rationalism in the mind of doctors, following what made sense to boundedly intelligent humans, coupled with interventionism, this need to do something, this defect of thinking that we knew better, and denigration of the unobserved. This defect is not limited to our control of swelling: this confabulation plagues the entire history of medicine, along with, of course, many other fields of practice. The researchers Paul Meehl and Robin Dawes pioneered a tradition of cataloging the tension between “clinical” and actuarial (that is, statistical) knowledge, and of examining how many things believed to be true by professionals and clinicians aren’t so and don’t match empirical evidence. The problem is of course that these researchers did not have a clear idea of where the burden of empirical evidence lies (the difference between naive or pseudo empiricism and rigorous empiricism)—the onus is on the doctors to show us why reducing fever is good, why eating breakfast before engaging in activity is healthy (there is no evidence), or why bleeding patients is the best alternative (they’ve stopped doing so). Sometimes the answers I get reveal that they have no clue, as when they defensively utter “I am a doctor” or “are you a doctor?” But worse, I sometimes get letters of support and sympathy from the alternative medicine fellows, which makes me go postal: the approach in this book is ultra-orthodox, ultra-rigorous, and ultra-scientific, certainly not in favor of alternative medicine.
The hidden costs of health care are largely in the denial of antifragility. But it may not be just medicine—what we call diseases of civilization result from the attempt by humans to make life comfortable for ourselves against our own interest, since the comfortable is what fragilizes. The rest of this chapter focuses on specific medical cases with hidden negative convexity effects (small gains, large losses)—and reframes the ideas of iatrogenics in connection with my notion of fragility and nonlinearities.
FIRST PRINCIPLE OF IATROGENICS (EMPIRICISM)
The first principle of iatrogenics is as follows: we do not need evidence of harm to claim that a drug or an unnatural via positiva procedure is dangerous. Recall my comment earlier with the turkey problem that harm is in the future, not in the narrowly defined past. In other words, empiricism is not naive empiricism.
We saw the smoking argument. Now consider the adventure of a human-invented fat: trans fat. Somehow, humans discovered how to make fat products, and, as it was the great era of scientism, they were convinced they could make them better than nature. Not just equal; better. Chemists assumed that they could produce a fat replacement superior to lard or butter from many standpoints. First, it was more convenient: synthetic products such as margarine stay soft in the refrigerator, so you can spread them on a piece of bread immediately, without the usual wait while listening to the radio. Second, it was economical, as the synthetic fats were derived from vegetables. Finally, and worst of all, trans fat was assumed to be healthier. Its use propagated very widely, and after a few hundred thousand years of consumption of animal fat, people suddenly started getting scared of it (particularly something called “saturated” fat), mainly on the basis of shoddy statistical interpretations. Today trans fat is widely banned, as it turned out to kill people: it is behind heart disease and cardiovascular problems.
For another murderous example of such sucker (and fragilizing) rationalism, consider the story of Thalidomide. It was a drug meant to reduce the nausea episodes of pregnant women. It led to birth defects. Another drug, Diethylstilbestrol, silently harmed the fetus and led to delayed gynecological cancer among daughters.
These two mistakes are quite telling because, in both cases, the benefits appeared to be obvious and immediate, though small, and the harm remained delayed for years, at least three-quarters of a generation. The next discussion will be about the burden of evidence, as you can easily imagine that someone defending these treatments would have immediately raised the objection, “Monsieur Taleb, do you have evidence for your statement?”
Now we can see the pattern: iatrogenics, being a cost-benefit situation, usually results from the treacherous condition in which the benefits are small, and visible—and the costs very large, delayed, and hidden. And of course, the potential costs are much worse than the cumulative gains.
For those into graphs, the appendix shows the potential risks from different angles and expresses iatrogenics as a probability distribution.
SECOND PRINCIPLE OF IATROGENICS (NONLINEARITY IN RESPONSE)
Second principle of iatrogenics: it is not linear. We should not take risks with near-healthy people; but we should take a lot, a lot more risks with those deemed in danger.1
Why do we need to focus treatment on more serious cases, not marginal ones? Take this example showing nonlinearity (convexity). When hypertension is mild, say marginally higher than the zone accepted as “normotensive,” the chance of benefiting from a certain drug is close to 5.6 percent (only one person in eighteen benefits from the treatment). But when blood pressure is considered to be in the “high” or “severe” range, the chances of benefiting are now 26 and 72 percent, respectively (that is, one person in four and two persons out of three will benefit from the treatment). So the treatment benefits are convex to condition (the benefits rise disproportionally, in an accelerated manner). But consider that the iatrogenics should be constant for all categories! In the very ill condition, the benefits are large relative to iatrogenics; in the borderline one, they are small. This means that we need to focus on high-symptom conditions and ignore, I mean really ignore, other situations in which the patient is not very ill.
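The paragraph's own figures can be restated as a "number needed to treat," the number of patients one must treat for a single one to benefit (the percentages below are the text's, not independently sourced; the variable names are mine):

```python
# Restating the hypertension figures from the text: probability that
# a treated patient benefits, by severity of the condition. NNT
# (number needed to treat) is the reciprocal: how many patients must
# be treated, each bearing the roughly constant iatrogenic risk, for
# one to benefit. Figures are the chapter's, used as given.

cases = {
    "mild":   0.056,  # ~1 in 18 benefits
    "high":   0.26,   # ~1 in 4
    "severe": 0.72,   # ~2 in 3
}

for severity, p_benefit in cases.items():
    nnt = 1.0 / p_benefit
    print(f"{severity:>6}: {p_benefit:.1%} chance of benefit, NNT ~ {nnt:.0f}")
```

Since roughly eighteen mild cases must carry the drug's side effects for one to gain, versus one or two severe cases, the asymmetry the text describes falls out of the reciprocals directly.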