CHAPTER 7
Naive Intervention
A tonsillectomy to kill time—Never do today what can be left to tomorrow—Let’s predict revolutions after they happen—Lessons in blackjack
Consider this need to “do something” through an illustrative example. In the 1930s, 389 children were presented to New York City doctors; 174 of them were recommended tonsillectomies. The remaining 215 children were again presented to doctors, and 99 were said to need the surgery. When the remaining 116 children were shown to yet a third set of doctors, 52 were recommended the surgery. Note that morbidity occurs in 2 to 4 percent of cases (today, not then, as the risks of surgery were much worse at the time) and that about one death occurs in every 15,000 such operations, and you get an idea of the break-even point between medical gains and detriment.
This story allows us to witness probabilistic homicide at work. Every child who undergoes an unnecessary operation has a shortening of her life expectancy. This example not only gives us an idea of harm done by those who intervene, but, worse, it illustrates the lack of awareness of the need to look for a break-even point between benefits and harm.
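The break-even arithmetic implied above can be made concrete. Here is a minimal back-of-the-envelope sketch, not from the book itself: it assumes, purely for illustration, that all the recommended surgeries in the 1930s study had been performed, and applies the quoted rates (2 to 4 percent morbidity, roughly one death per 15,000 operations) to that cohort.

```python
# Back-of-the-envelope sketch (illustrative assumption, not the author's
# calculation): expected harm if every recommended tonsillectomy in the
# 1930s study had been performed, using the rates quoted in the text.
recommended = [174, 99, 52]   # surgeries recommended in each successive round
total = sum(recommended)      # 325 of the original 389 children

death_rate = 1 / 15_000       # roughly one death per 15,000 operations
morbidity_low, morbidity_high = 0.02, 0.04  # 2-4 percent morbidity (modern rates)

expected_deaths = total * death_rate
print(total)                         # 325 children recommended surgery
print(f"{expected_deaths:.3f}")      # ~0.022 expected deaths in this cohort
print(total * morbidity_low, total * morbidity_high)  # 6.5 to 13.0 morbid outcomes
```

Even with today's improved rates, a few hundred unnecessary operations carry an expected cost of several morbid outcomes, against zero medical benefit: the trade-off the text asks us to notice.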
Let us call this urge to help “naive interventionism.” Next we examine its costs.
INTERVENTION AND IATROGENICS
In the case of tonsillectomies, the harm to the children undergoing unnecessary treatment is coupled with the trumpeted gain for some others. The name for such net loss, the (usually hidden or delayed) damage from treatment in excess of the benefits, is iatrogenics, literally, “caused by the healer,” iatros being a healer in Greek. We will posit in Chapter 21 that every time you visit a doctor and get a treatment, you incur risks of such medical harm, which should be analyzed the way we analyze other trade-offs: probabilistic benefits minus probabilistic costs.
For a classic example of iatrogenics, consider the death of George Washington in December 1799: we have enough evidence that his doctors greatly helped, or at least hastened, his death, thanks to the then standard treatment that included bloodletting (between five and nine pounds of blood).
Now these risks of harm by the healer can be so overlooked that, depending on how you account for it, until penicillin, medicine had a largely negative balance sheet—going to the doctor increased your chance of death. But it is quite telling that medical iatrogenics seems to have increased over time, along with knowledge, to peak sometime late in the nineteenth century. Thank you, modernity: it was “scientific progress,” the birth of the clinic and its substitution for home remedies, that caused death rates to shoot up, mostly from what was then called “hospital fever”—Leibniz had called these hospitals seminaria mortis, seedbeds of death. The evidence of increase in death rates is about as strong as they come, since all the victims were now gathered in one place: people were dying in these institutions who would have survived outside them. The famously mistreated Austro-Hungarian doctor Ignaz Semmelweis had observed that more women died giving birth in hospitals than giving birth on the street. He called the establishment doctors a bunch of criminals—which they were: the doctors who kept killing patients could not accept his facts or act on them since he “had no theory” for his observations. Semmelweis entered a state of depression, helpless to stop what he saw as murders, disgusted at the attitude of the establishment. He ended up in an asylum, where he died, ironically, from the same hospital fever he had been warning against.
Semmelweis’s story is sad: a man who was punished, humiliated, and even killed for shouting the truth in order to save others. The worst punishment was his state of helplessness in the face of risks and unfairness. But the story is also a happy one—the truth came out eventually, and his mission ended up paying off, with some delay. And the final lesson is that one should not expect laurels for bringing the truth.
Medicine is comparatively the good news, perhaps the only good news, in the field of iatrogenics. We see the problem there because things are starting to be brought under control today; it is now just what we call the cost of doing business, although medical error currently kills between three times (as accepted by doctors) and ten times as many people as car accidents in the United States. It is generally accepted that harm from doctors—not including risks from hospital germs—accounts for more deaths than any single cancer.

The methodology used by the medical establishment for decision making is still innocent of proper risk-management principles, but medicine is getting better. We have to worry about the incitation to overtreatment on the part of pharmaceutical companies, lobbies, and special interest groups, and the production of harm that is not immediately salient and not accounted for as an “error.” Pharma plays the game of concealed and distributed iatrogenics, and it has been growing. It is easy to assess iatrogenics when the surgeon amputates the wrong leg or operates on the wrong kidney, or when the patient dies of a drug reaction. But when you medicate a child for an imagined or invented psychiatric disease, say, ADHD or depression, instead of letting him out of the cage, the long-term harm is largely unaccounted for.

Iatrogenics is compounded by the “agency problem” or “principal-agent problem,” which emerges when one party (the agent) has personal interests that are divorced from those of the one using his services (the principal). An agency problem, for instance, is present with the stockbroker and the medical doctor, whose ultimate interest is their own checking account, not your financial and medical health, respectively, and who give you advice that is geared to benefit themselves. Or with politicians working on their career.
First, Do No Harm
Medicine has known about iatrogenics since at least the fourth century before our era—primum non nocere (“first do no harm”) is a first principle attributed to Hippocrates and integrated in the so-called Hippocratic Oath taken by every medical doctor on his commencement day. It just took medicine about twenty-four centuries to properly execute the brilliant idea. In spite of the recitations of non nocere through the ages, the term “iatrogenics” only came into frequent use very, very late, a few decades ago—after so much damage had been done. I for myself did not know the exact word until the writer Bryan Appleyard introduced me to it (I had used “harmful unintended side effects”). So let us leave medicine (to return to it in a dozen chapters or so), and apply this idea born in medicine to other domains of life. Since no intervention implies no iatrogenics, the source of harm lies in the denial of antifragility and in the impression that we humans are so necessary to making things function.
Enforcing consciousness of generalized iatrogenics is a tall order. The very notion of iatrogenics is quite absent from the discourse outside medicine (which, to repeat, has been a rather slow learner). But just as with the color blue, having a word for something helps spread awareness of it. We will push the idea of iatrogenics into political science, economics, urban planning, education, and more domains. Not one of the consultants and academics in these fields with whom I tried discussing it knew what I was talking about—or thought that they could possibly be the source of any damage. In fact, when you approach the players with such skepticism, they tend to say that you are “against scientific progress.”
But the concept can be found in some religious texts. The Koran mentions “those who are wrongful while thinking of themselves that they are righteous.”
To sum up, anything in which there is naive interventionism, nay, even just intervention, will have iatrogenics.
The Opposite of Iatrogenics
While we now have a word for causing harm while trying to help, we don’t have a designation for the opposite situation, that of someone who ends up helping while trying to cause harm. Just remember that attacking the antifragile will backfire. For instance, hackers make systems stronger. Or as in the case of Ayn Rand, obsessive and intense critics help a book spread.
Incompetence is double-sided. In the Mel Brooks movie The Producers, two New York theater fellows get in trouble by finding success instead of the intended failure. They had sold the same shares to multiple investors in a Broadway play, reasoning that should the play fail, they would keep the excess funds—their scheme would not be discovered if the investors got no return on their money. The problem was that they tried so hard to have a bad play—called Springtime for Hitler—and they were so bad at it that it turned out to be a huge hit. Uninhibited by their common prejudices, they managed to produce interesting work. I also saw similar irony in trading: a fellow was so upset with his year-end bonus that he started making huge bets with his employer’s portfolio—and ended up making them considerable sums of money, more than if he had tried to do so on purpose.
Perhaps the idea behind capitalism is an inverse-iatrogenic effect, the unintended-but-not-so-unintended consequences: the system facilitates the conversion of selfish aims (or, to be correct, not necessarily benevolent ones) at the individual level into beneficial results for the collective.
Iatrogenics in High Places
Two areas have been particularly infected with absence of awareness of iatrogenics: socioeconomic life and (as we just saw in the story of Semmelweis) the human body, matters in which we have historically combined a low degree of competence with a high rate of intervention and a disrespect for spontaneous operation and healing—let alone growth and improvement.
As we saw in Chapter 3, there is a distinction between organisms (biological or nonbiological) and machines. People with an engineering-oriented mind will tend to look at everything around as an engineering problem. This is a very good thing in engineering, but when dealing with cats, it is a much better idea to hire veterinarians than circuits engineers—or even better, let your animal heal by itself.
Table 3 provides a glimpse of these attempts to “improve matters” across domains and their effects. Note the obvious: in all cases they correspond to the denial of antifragility.
Can a Whale Fly Like an Eagle?
Social scientists and economists have no built-in consciousness of iatrogenics, and of course no name for it—when I decided to teach a class on model error in economics and finance, nobody took me or the idea seriously, and the few who did tried to block me, asking for “a theory” (as in Semmelweis’s story) and not realizing that it was precisely the errors of theory that I was addressing and cataloguing, as well as the very idea of using a theory without considering the impact of the possible errors from theory.
For a theory is a very dangerous thing to have.
And of course one can rigorously do science without it. What scientists call phenomenology is the observation of an empirical regularity without a visible theory for it. In the Triad, I put theories in the fragile category, phenomenology in the robust one. Theories are superfragile; they come and go, then come and go, then come and go again; phenomenologies stay, and I can’t believe people don’t realize that phenomenology is “robust” and usable, and theories, while overhyped, are unreliable for decision making—outside physics.
Physics is privileged; it is the exception, which makes its imitation by other disciplines similar to attempts to make a whale fly like an eagle. Errors in physics get smaller from theory to theory—so saying “Newton was wrong” is attention grabbing, good for lurid science journalism, but ultimately mendacious; it would be far more honest to say “Newton’s theory is imprecise in some specific cases.” Predictions made by Newtonian mechanics are of astonishing precision except for items traveling close to the speed of light, something you don’t expect to do on your next vacation. We also read nonsense-with-headlines to the effect that Einstein was “wrong” about that speed of light—and the tools used to prove him wrong are of such complication and such precision that they’ve demonstrated how inconsequential such a point will be for you and me in the near and far future.
On the other hand, social science seems to diverge from theory to theory. During the cold war, the University of Chicago was promoting laissez-faire theories, while the University of Moscow taught the exact opposite—but their respective physics departments were in convergence, if not total agreement. This is the reason I put social science theories in the left column of the Triad, as something superfragile for real-world decisions and unusable for risk analyses. The very designation “theory” is even upsetting. In social science we should call these constructs “chimeras” rather than theories.
We will have to construct a methodology to deal with these defects. We cannot afford to wait an additional twenty-four centuries. Unlike with medicine, where iatrogenics is distributed across the population (hence with Mediocristan effects), because of concentration of power, social science and policy iatrogenics can blow us up (hence, Extremistan).
Not Doing Nothing
A main source of the economic crisis that started in 2007 lies in the iatrogenics of the attempt by Überfragilista Alan Greenspan—certainly the top economic iatrogenist of all time—to iron out the “boom-bust cycle” which caused risks to go hide under the carpet and accumulate there until they blew up the economy. The most depressing part of the Greenspan story is that the fellow was a libertarian and seemingly convinced of the idea of leaving systems to their own devices; people can fool themselves endlessly. The same naive interventionism was also applied by the U.K. government of Fragilista Gordon Brown, a student of the Enlightenment whose overt grand mission was to “eliminate” the business cycle. Fragilista Prime Minister Brown, a master iatrogenist though not nearly in the same league as Greenspan, is now trying to lecture the world on “ethics” and “sustainable” finance—but his policy of centralizing information technology (leading to massive cost overruns and delays in implementation) instead of having decentralized small units has proven difficult to reverse. Indeed, the U.K. health service was operating under the principle that a pin falling somewhere in some remote hospital should be heard in Whitehall (the street in London where the government buildings are centralized). The technical argument about the dangers of concentration is provided in Chapter 18.
These attempts to eliminate the business cycle lead to the mother of all fragilities. Just as a little bit of fire here and there gets rid of the flammable material in a forest, a little bit of harm here and there in an economy weeds out the vulnerable firms early enough to allow them to “fail early” (so they can start again) and minimize the long-term damage to the system.
An ethical problem arises when someone is put in charge. Greenspan’s actions were harmful, but even if he knew that, it would have taken a bit of heroic courage to justify inaction in a democracy where the incentive is to always promise a better outcome than the other guy, regardless of the actual, delayed cost.
Ingenuous interventionism is very pervasive across professions. Just as with the tonsillectomy, if you supply a typical copy editor with a text, he will propose a certain number of edits, say about five changes per page. Now accept his “corrections” and give this text to another copy editor who tends to have the same average rate of intervention (editors vary in interventionism), and you will see that he will suggest an equivalent number of edits, sometimes reversing changes made by the previous editor. Find a third editor, same.
Incidentally, those who do too much somewhere do too little elsewhere—and editing provides a quite fitting example. Over my writing career I’ve noticed that those who overedit tend to miss the real typos (and vice versa). I once pulled an op-ed from The Washington Post owing to the abundance of completely unnecessary edits, as if every word had been replaced by a synonym from the thesaurus. I gave the article to the Financial Times instead. The editor there made one single correction: 1989 became 1990. The Washington Post had tried so hard that they missed the only relevant mistake. As we will see, interventionism depletes mental and economic resources; it is rarely available when it is needed the most. (Beware what you wish for: small government might in the end be more effective at whatever it needs to do. Reduction in size and scope may make it even more intrusive than large government.)
Non-Naive Interventionism
Let me warn against misinterpreting the message here. The argument is not against the notion of intervention; in fact I showed above that I am equally worried about underintervention when it is truly necessary. I am just warning against naive intervention and lack of awareness and acceptance of harm done by it.
It is certain that the message will be misinterpreted, for a while. When I wrote Fooled by Randomness, which argues—a relative of this message—that we have a tendency to underestimate the role of randomness in human affairs, summarized as “it is more random than you think,” the message in the media became “it’s all random” or “it’s all dumb luck,” an illustration of the Procrustean bed that changes by reducing. During a radio interview, when I tried explaining to the journalist the nuance and the difference between the two statements I was told that I was “too complicated”; so I simply walked out of the studio, leaving them in the lurch. The depressing part is that those people who were committing such mistakes were educated journalists entrusted to represent the world to us lay persons. Here, all I am saying is that we need to avoid being blind to the natural antifragility of systems, their ability to take care of themselves, and fight our tendency to harm and fragilize them by not giving them a chance to do so.