Let me phrase the last point a bit differently. If there is something in nature you don’t understand, odds are it makes sense in a deeper way that is beyond your understanding. So there is a logic to natural things that is much superior to our own. Just as there is a dichotomy in law: innocent until proven guilty as opposed to guilty until proven innocent, let me express my rule as follows: what Mother Nature does is rigorous until proven otherwise; what humans and science do is flawed until proven otherwise.
Let us close on this business of b***t “evidence.” If you want to talk about the “statistically significant,” nothing on the planet can be as close to “statistically significant” as nature. This is in deference to her track record and the sheer statistical significance of her massively large experience—the way she has managed to survive Black Swan events. So overriding her requires some very convincing justification on our part, rather than the reverse, as is commonly done, and it is very hard to beat her on statistical grounds—as I wrote in Chapter 7 in the discussion on procrastination, we can invoke the naturalistic fallacy when it comes to ethics, not when it comes to risk management.5
Let me repeat violations of logic in the name of “evidence” owing to their gravity. I am not joking: just as I face the shocking request “Do you have evidence?” when I question a given unnatural treatment, such as icing one’s swollen nose, in the past many faced the question “Do you have evidence that trans fat is harmful?” and needed to produce proofs—which they were obviously unable to do because it took decades before the harm became apparent. These questions are asked more often than not by smart people, even doctors. So when the (present) inhabitants of Mother Earth want to do something counter to nature, they are the ones who need to produce the evidence, if they can.
Everything nonstable or breakable has had ample chance to break over time. Further, the interactions between components of Mother Nature had to modulate in such a way as to keep the overall system alive. What emerges over millions of years is a wonderful combination of solidity, antifragility, and local fragility, sacrifices in one area made in order for nature to function better. We sacrifice ourselves in favor of our genes, trading our fragility for their survival. We age, but they stay young and get fitter and fitter outside us. Things break on a small scale all the time, in order to avoid large-scale generalized catastrophes.
Plead Ignorance of Biology: Phenomenology
I have explained that phenomenology is more potent than theories—and should lead to more rigorous policy making. Let me illustrate here.
I was in a gym in Barcelona next to the senior partner of a consulting firm, a profession grounded in building narratives and naive rationalization. Like many people who have lost weight, the fellow was eager to talk about it—it is easier to talk about weight loss theories than to stick to them. The fellow told me that he did not believe in such diets as the low-carbohydrate Atkins or Dukan diet, until he was told of the mechanism of “insulin,” which convinced him to embark on the regimen. He then lost thirty pounds—he had to wait for a theory before taking any action. That was in spite of the empirical evidence showing people losing one hundred pounds by avoiding carbohydrates, without changing their total food intake—just the composition! Now, being the exact opposite of the consultant, I believe that “insulin” as a cause is a fragile theory but that the phenomenology, the empirical effect, is real. Let me introduce the ideas of the postclassical school of the skeptical empiricists.
We are built to be dupes for theories. But theories come and go; experience stays. Explanations change all the time, and have changed all the time in history (because of causal opacity, the invisibility of causes) with people involved in the incremental development of ideas thinking they always had a definitive theory; experience remains constant.
As we saw in Chapter 7, what physicists call the phenomenology of the process is the empirical manifestation, without looking at how it glues to existing general theories. Take for instance the following statement, entirely evidence-based: if you build muscle, you can eat more without getting more fat deposits in your belly and can gorge on lamb chops without having to buy a new belt. Now in the past the theory to rationalize it was “Your metabolism is higher because muscles burn calories.” Currently I tend to hear “You become more insulin-sensitive and store less fat.” Insulin, shminsulin; metabolism, shmetabolism: another theory will emerge in the future and some other substance will come about, but the exact same effect will continue to prevail.
The same holds for the statement Lifting weights increases your muscle mass. In the past they used to say that weight lifting caused the “micro-tearing of muscles,” with subsequent healing and increase in size. Today some people discuss hormonal signaling or genetic mechanisms, tomorrow they will discuss something else. But the effect has held forever and will continue to do so.
When it comes to narratives, the brain seems to be the last province of the theoretician-charlatan. Add neurosomething to a field, and suddenly it rises in respectability and becomes more convincing as people now have the illusion of a strong causal link—yet the brain is too complex for that; it is both the most complex part of the human anatomy and the one that seems most susceptible to sucker-causation. Christopher Chabris and Daniel Simons brought to my attention the evidence I had been looking for: whatever theory has a reference in it to brain circuitry seems more “scientific” and more convincing, even when it is just randomized psychoneurobabble.
But this causation is highly rooted in orthodox medicine as it was traditionally built. Avicenna in his Canon (which in Arabic means law): “We must know the causes of health and illness if we wish to make [medicine] a scientia.”
I am writing about health, but I do not want to rely on biology beyond the minimum required (not in the theoretical sense)—and I believe that my strength will lie there. I just want to understand as little as possible to be able to look at regularities of experience.
So the modus operandi in every venture is to remain as robust as possible to changes in theories (let me repeat that my deference to Mother Nature is entirely statistical and risk-management-based, i.e., again, grounded in the notion of fragility). The doctor and medical essayist James Le Fanu showed how our understanding of the biological processes was coupled with a decline of pharmaceutical discoveries, as if rationalistic theories were blinding and somehow a handicap.
In other words, we have in biology a green lumber problem!
Now, a bit of history of ancient and medieval medicine. Traditionally, medicine used to be split into three traditions: rationalists (based on preset theories, the need of global understanding of what things were made for), skeptical empiricists (who refused theories and were skeptical of ideas making claims about the unseen), and methodists (who taught each other some simple medical heuristics stripped of theories and found an even more practical way to be empiricists). While differences can be overplayed by the categorization, one can look at the three traditions not as entirely dogmatic approaches, but rather ones varying in their starting point, the weight of the prior beliefs: some start with theories, others with evidence.
Tensions among the three tendencies have always existed over time—and I put myself squarely in the camp attempting to vindicate the empiricists, who, as a philosophical school, were swallowed by late antiquity. I have been trying to bring alive these ideas of Aenesidemus of Knossos, Antiochus of Laodicea, Menodotus of Nicomedia, Herodotus of Tarsus, and of course Sextus Empiricus. The empiricists insisted on the “I did not know” while facing situations not exactly seen in the past, that is, in nearly identical conditions. The methodists did not have the same strictures against analogy, but were still careful.
The Ancients Were More Caustic
This problem of iatrogenics is not new—and doctors have been traditionally the butt of jokes.
Martial in his epigrams gives us an idea of the perceived expert problem in medicine in his time: “I thought that Diaulus was a doctor, not an undertaker—but for him it appears to be the same job” (Nuper erat medicus, nunc est uispillo Diaulus: quod uispillo facit, fecerat et medicus) or “I did not feel ill, Symmache; now I do (after your ministrations)” (Non habui febrem, Symmache, nunc habeo).
The Greek term pharmakon is ambiguous, as it can mean both “poison” and “cure” and has been used as a pun to warn against iatrogenics by the Arab doctor Ruhawi.
An attribution problem arises when the person imputes his positive results to his own skills and his failures to luck. Nicocles, as early as the fourth century B.C., asserts that doctors claimed responsibility for success and blamed failure on nature, or on some external cause. The very same idea was rediscovered by psychologists some twenty-four centuries later, and applied to stockbrokers, doctors, and managers of companies.
According to an ancient anecdote, the Emperor Hadrian continually exclaimed, as he was dying, that it was his doctors who had killed him.
Montaigne, mostly a synthesizer of classical writers, has his Essays replete with anecdotes: A Lacedaemonian was asked what had made him live so long; he answered, “Ignoring medicine.” Montaigne also detected the agency problem, or why the last thing a doctor needs is for you to be healthy: “No doctor derives pleasure from the health of his friends, wrote the ancient Greek satirist, no soldier from the peace of his city, etc.” (Nul médecin ne prent plaisir à la santé de ses amis mesmes, dit l’ancien Comique Grec, ny soldat à la paix de sa ville: ainsi du reste.)
How to Medicate Half the Population
Recall how a personal doctor can kill you.
We saw in the story of the grandmother our inability to distinguish in our logical reasoning (though not in intuitive actions) between average and other, richer properties of what we observe.
I was once attending a lunch party at the country house of a friend when someone produced a handheld blood pressure measuring tool. Tempted, I measured my arterial pressure, and it turned out to be slightly higher than average. A doctor, who was part of the party and had a very friendly disposition, immediately pulled out a piece of paper prescribing some medication to lower it—which I later threw in the garbage can. I subsequently bought the same measuring tool and discovered that my blood pressure was much lower (hence better) than average, except once in a while, when it peaked episodically. In short, it exhibits some variability. Like everything in life.
This random variability is often mistaken for information, hence leading to intervention. Let us play a thought experiment, without making any assumption on the link between blood pressure and health. Further, assume that “normal” pressure is a certain, known number. Take a cohort of healthy persons. Suppose that because of randomness, half the time a given person’s pressure will be above that number, and half the time, for the same person, the measurement will be below. So on about half the doctor’s visits they will show the alarming “above normal.” If the doctor automatically prescribes medication on the days the patients are above normal, then half the normal population will be on medication. And note that we are quite certain that their life expectancy will be reduced by unnecessary treatments. Clearly I am simplifying here; sophisticated doctors are aware of the variable nature of the measurements and do not prescribe medication when the numbers are not compelling (though it is easy to fall into the trap, and not all doctors are sophisticated). But the thought experiment can show how frequent visits to the doctor, particularly outside the cases of a life-threatening ailment or an uncomfortable condition—just like frequent access to information—can be harmful. This example also shows us the process outlined in Chapter 7 by which a personal doctor ends up killing the patient—simply by overreacting to noise.
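The arithmetic of the thought experiment can be sketched in a few lines (a toy simulation, not data; the “normal” value, the size of the noise, and the Gaussian shape are all assumptions of the illustration):

```python
import random

random.seed(42)

NORMAL = 120          # the "known normal" pressure of the thought experiment (hypothetical)
NOISE_SD = 10         # assumed visit-to-visit measurement variability (hypothetical)
N_PATIENTS = 100_000

# Every patient is perfectly healthy: true pressure equals NORMAL exactly.
# Each doctor's visit adds symmetric, zero-mean measurement noise.
prescribed = sum(
    1 for _ in range(N_PATIENTS)
    if NORMAL + random.gauss(0, NOISE_SD) > NORMAL
)

print(f"Fraction medicated after one visit: {prescribed / N_PATIENTS:.3f}")
```

With any symmetric noise around the true value, half the measurements land above “normal,” so the naive prescribing rule ends up medicating roughly half of a perfectly healthy cohort.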
This is more serious than you think: it seems that medicine has a hard time grasping normal variability in samples—it is hard sometimes to translate the difference between “statistically significant” and “significant” in effect. A certain disease might marginally lower your life expectancy, but it can be deemed to do so with “high statistical significance,” prompting panics when in fact all these studies might be saying is they established with a significant statistical margin that in some cases, say, 1 percent of the cases, patients are likely to be harmed by it. Let me rephrase: the magnitude of the result, the importance of the effect, is not captured by what is called “statistical significance,” something that tends to deceive specialists. We need to look in two dimensions: how much a condition, say, blood pressure a certain number of points higher than normal, is likely to affect your life expectancy; and how significant the result is.
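The gap between “statistically significant” and significant in effect can be made concrete with a toy simulation (all numbers hypothetical: lifespans drawn from Gaussians, a condition that costs about a tenth of a year of life expectancy):

```python
import math
import random

random.seed(1)

N = 1_000_000  # with a huge sample, statistical significance comes cheap

# Hypothetical lifespans: the "disease" shortens life by ~0.1 year (about 36 days).
control = [random.gauss(80.0, 10.0) for _ in range(N)]
disease = [random.gauss(79.9, 10.0) for _ in range(N)]

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

diff = mean(control) - mean(disease)                       # size of the effect
se = math.sqrt(variance(control) / N + variance(disease) / N)
z = diff / se                                              # "significance" of the effect

print(f"effect: {diff:.2f} years, z-statistic: {z:.1f}")
# The z-statistic sails far past the 1.96 threshold ("highly significant"),
# yet the effect itself is a mere tenth of a year of life expectancy.
```

The two dimensions come apart: the z-statistic measures only how confident we are that the effect is nonzero, not whether the effect is worth acting on.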
Why is this serious? If you think that the statistician really understands “statistical significance” in the complicated texture of real life (the “large world,” as opposed to the “small world” of textbooks), you are in for some surprises. Kahneman and Tversky showed that statisticians themselves made practical mistakes in real life in violation of their teachings, forgetting that they were statisticians (thinking, I remind the reader, requires effort). My colleague Daniel Goldstein and I did some research on “quants,” professionals of quantitative finance, and realized that the overwhelming majority did not understand the practical effect of elementary notions such as “variance” or “standard deviation,” concepts they used in about every one of their equations. A recent powerful study by Emre Soyer and Robin Hogarth showed that many professionals and experts in the field of econometrics, supplying pompous quantities such as “regressions” and “correlations,” made egregious mistakes translating into practice the numbers they were producing themselves—they get the equation right but make severe translation mistakes when expressing it in reality. In all cases they underestimate randomness and underestimate the uncertainty in the results. And we are talking about errors of interpretation made by the statisticians, not by the users of statistics such as social scientists and doctors.
Alas, all these biases lead to action, almost never inaction.
In addition, we now know that the craze against fats and the “fat free” slogans result from an elementary mistake in interpreting the results of a regression: when two variables are jointly responsible for an effect (here, carbohydrates and fat), sometimes one of them shows sole responsibility. Many fell into the error of attributing problems under joint consumption of fat and carbohydrates to fat rather than carbohydrates. Further, the great statistician and debunker of statistical misinterpretation David Freedman showed (very convincingly) with a coauthor that the link everyone is obsessing about between salt and blood pressure has no statistical basis. It may exist for some hypertensive people, but it is more likely the exception than the rule.
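The attribution error can be reproduced in a few lines (a hypothetical setup, not dietary data: fat and carbohydrates are consumed together, only the carbohydrates cause the harm, yet a naive one-variable regression pins the effect on fat):

```python
import random

random.seed(7)
N = 50_000

# Hypothetical data: fat and carbs are consumed jointly (correlated),
# but only the carbs drive the harm.
fat = [random.gauss(0, 1) for _ in range(N)]
carbs = [f + random.gauss(0, 0.5) for f in fat]   # joint consumption
harm = [c + random.gauss(0, 1) for c in carbs]    # caused by carbs alone

def slope(x, y):
    """Univariate least-squares slope of y on x: cov(x, y) / var(x)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

print(f"apparent effect of fat on harm: {slope(fat, harm):.2f}")
# Close to 1: fat looks guilty, though it has no direct effect at all.
```

Because the two intakes move together, a regression that omits carbohydrates hands their entire effect to fat; the variable left out of the equation decides who gets blamed.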
The “Rigor of Mathematics” in Medicine
Those of us who laugh at the charlatanism hidden behind fictional mathematics in social science may wonder why the same did not happen to medicine.
And indeed the cemetery of bad ideas (and hidden ideas) shows that mathematics fooled us there. There have been many forgotten attempts to mathematize medicine. There was a period during which medicine derived its explanatory models from the physical sciences. Giovanni Borelli, in De motu animalium, compared the body to a machine consisting of animal levers—hence we could apply the rules of linear physics.
Let me repeat: I am not against rationalized learned discourse, provided it is not fragile to error; I am first and last a decision maker hybrid and will never separate the philosopher-probabilist from the decision maker, so I am that joint person all the time, in the morning when I drink the ancient liquid called coffee, at noon when I eat with my friends, and at night when I go to bed clutching a book. What I am against is naive rationalized, pseudolearned discourse, with green lumber problems—one that focuses solely on the known and ignores the unknown. Nor am I against the use of mathematics when it comes to gauging the importance of the unknown—this is the robust application of mathematics. Actually the arguments in this chapter and the next are all based on the mathematics of probability—but it is not a rationalistic use of mathematics and much of it allows the detection of blatant inconsistencies between statements about severity of disease and intensity of treatment. On the other hand, the use of mathematics in social science is like interventionism. Those who practice it professionally tend to use it everywhere except where it can be useful.
The only condition for such a brand of more sophisticated rationalism: to believe and act as if one does not have the full story—to be sophisticated you need to accept that you are not so.
Next
This chapter has introduced the idea of convexity effects and burden of evidence into medicine and into the assessment of risk of iatrogenics. Next, let us look at more applications of convexity effects and discuss via negativa as a rigorous approach to life.
1 A technical comment. This is a straightforward result of convexity effects on the probability distribution of outcomes. By the “inverse barbell effect,” when the gains are small relative to iatrogenics, uncertainty harms the situation. But by the “barbell effect,” when the gains are large in relation to potential side effects, uncertainty tends to be helpful. An explanation with ample graphs is provided in the Appendix.
2 In other words, the response for, say, 50 percent of a certain dose during one period, followed by 150 percent of the dose in a subsequent period in convex cases, is superior to 100 percent of the dose in both periods. We do not need much empiricism to estimate the convexity bias: by theorem, such bias is a necessary result of convexity.
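The convexity bias of this footnote is Jensen’s inequality in miniature; with a hypothetical convex dose-response f(d) = d², the 50/150 percent regimen beats the steady 100 percent dose:

```python
# Convexity bias (footnote 2): for a convex dose-response f, alternating
# 50% and 150% doses beats a steady 100% dose -- a direct instance of
# Jensen's inequality. f(d) = d**2 is a hypothetical convex response.
def response(dose: float) -> float:
    return dose ** 2

steady = response(1.0) + response(1.0)    # 100% of the dose in both periods
varied = response(0.5) + response(1.5)    # 50% in one period, 150% in the next

print(steady, varied)   # 2.0 vs 2.5: the varied regimen yields more
assert varied > steady  # holds for any strictly convex response, by Jensen
```

No data are needed: as the footnote says, the bias follows by theorem from convexity alone.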
3 Stuart McGill, an evidence-based scientist who specializes in back conditions, describes the self-healing process as follows: the sciatic nerve, when trapped in too narrow a cavity, causing the common back problem that is thought (by doctors) to be curable only by (lucrative) surgery, produces acid substances that cut through the bone and, over time, carves itself a larger passage. The body does a better job than surgeons.
Antifragile: Things That Gain from Disorder Page 41