Antifragile: Things That Gain from Disorder
Governments Should Spend on Nonteleological Tinkering, Not Research
Note that I do not believe that the argument set forth above should logically lead us to say that no money should be spent by government. This reasoning is more against teleology than research in general. There has to be a form of spending that works. By some vicious turn of events, governments have gotten huge payoffs from research, but not as intended—just consider the Internet. And look at the recapture we’ve had of military expenditures with innovations, and, as we will see, medical cures. It is just that functionaries are too teleological in the way they look for things (particularly the Japanese), and so are large corporations. Most large corporations, such as Big Pharma, are their own enemies.
Consider blue sky research, whereby research grants and funding are given to people, not projects, and spread in small amounts across many researchers. The sociologist of science Steve Shapin, who spent time in California observing venture capitalists, reports that investors tend to back entrepreneurs, not ideas. Decisions are largely a matter of opinion strengthened with “who you know” and “who said what,” as, to use the venture capitalist’s lingo, you bet on the jockey, not the horse. Why? Because innovations drift, and one needs flâneur-like abilities to keep capturing the opportunities that arise, not stay locked up in a bureaucratic mold. The significant venture capital decisions, Shapin showed, were made without real business plans. So if there was any “analysis,” it had to be of a backup, confirmatory nature. I myself spent some time with venture capitalists in California, with an eye on investing myself, and sure enough, that was the mold.
Visibly the money should go to the tinkerers, the aggressive tinkerers who you trust will milk the option.
Let us use statistical arguments and get technical for a paragraph. Payoffs from research are from Extremistan; they follow a power-law type of statistical distribution, with big, near-unlimited upside but, because of optionality, limited downside. Consequently, the payoff from research should necessarily be linear in the number of trials, not in the total funds involved in the trials. Since, as in Figure 7, the winner will have an explosive payoff, uncapped, the right approach requires a certain style of blind funding. It means the right policy would be what is called “one divided by n” or “1/N” style, spreading attempts in as large a number of trials as possible: if you face n options, invest in all of them in equal amounts. Small amounts per trial, lots of trials, broader than you want. Why? Because in Extremistan, it is more important to be in something in a small amount than to miss it. As one venture capitalist told me: “The payoff can be so large that you can’t afford not to be in everything.”
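For readers who want to see the arithmetic, here is a minimal simulation sketch (mine, not the book’s; the hit rate, the Pareto tail exponent, and the payoff scale are arbitrary assumptions). With the same budget and the same fat-tailed payoff per attempt, spreading the money “1/N” style over many small trials almost always catches at least one winner, while concentrating it on a handful of attempts usually misses.

```python
# A rough sketch of the "1/N" logic under a fat-tailed (Extremistan) payoff.
# Not a model from the book: hit rate, tail exponent, and payoff scale are made up.
import random

random.seed(7)

def trial_payoff(hit_rate=0.01, alpha=1.3):
    """One tinkering attempt per unit spent: bounded loss, rare uncapped Pareto win."""
    if random.random() < hit_rate:
        return 100 * random.paretovariate(alpha)  # explosive, uncapped upside
    return 0.0                                    # most trials return nothing

def fund(budget=1000.0, n_trials=1000):
    """Spread the budget equally over n_trials attempts and return the net payoff."""
    stake = budget / n_trials
    return sum(stake * trial_payoff() for _ in range(n_trials)) - budget

broad  = [fund(n_trials=1000) for _ in range(500)]   # many small bets
narrow = [fund(n_trials=5)    for _ in range(500)]   # a few large bets, same budget

def summarize(label, results, budget=1000.0):
    hit_some_winner = sum(r > -budget for r in results) / len(results)
    print(f"{label}: mean net payoff {sum(results)/len(results):8.0f}, "
          f"caught at least one winner in {hit_some_winner:.0%} of runs")

summarize("broad  (1/N, 1000 trials)", broad)
summarize("narrow (5 trials)        ", narrow)
```

Both allocations have the same expected value; the difference is that the broad one reliably has exposure to the rare explosive payoff, which is the whole point of the “1/N” rule.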
THE CASE IN MEDICINE
Unlike technology, medicine has a long history of domestication of luck; it now has accepted randomness in its practice. But not quite.
Medical data allow us to assess the performance of teleological research compared to randomly generated discoveries. The U.S. government provides us with the ideal dataset for that: the activities of the National Cancer Institute that came out of the Nixon “war on cancer” in the early 1970s. Morton Meyers, a practicing doctor and researcher, writes in his wonderful Happy Accidents: Serendipity in Modern Medical Breakthroughs: “Over a twenty-year period of screening more than 144,000 plant extracts, representing about 15,000 species, not a single plant-based anticancer drug reached approved status. This failure stands in stark contrast to the discovery in the late 1950s of a major group of plant-derived cancer drugs, the Vinca alkaloids—a discovery that came about by chance, not through directed research.”
John LaMattina, an insider who described what he saw after leaving the pharmaceutical business, shows statistics illustrating the gap between public perception of academic contributions and truth: private industry develops nine drugs out of ten. Even the tax-funded National Institutes of Health found that out of forty-six drugs on the market with significant sales, about three had anything to do with federal funding.
We have not digested the fact that cures for cancer have been coming from other branches of research. You search for noncancer drugs (or noncancer nondrugs) and find something you were not looking for (and vice versa). But the interesting constant is that when a result is initially discovered by an academic researcher, he is likely to disregard the consequences because it is not what he wanted to find—an academic has a script to follow. So, to put it in option terms, he does not exercise his option in spite of its value, a strict violation of rationality (no matter how you define rationality), like someone who is greedy yet does not pick up a large sum of money found in his garden. Meyers also shows the lecturing-birds-how-to-fly effect as discoveries are ex post narrated back to some academic research, contributing to our illusion.
In some cases, because the source of the discovery is military, we don’t know exactly what’s going on. Take for instance chemotherapy for cancer, as discussed in Meyers’s book. An American ship carrying mustard gas off Bari in Italy was bombed by the Germans in 1943. The episode helped develop chemotherapy: the gas wiped out the white blood cells of the exposed men, which pointed to a treatment for liquid cancers (cancers of the blood cells). But mustard gas was banned by the Geneva Protocol, so the story was kept secret—Churchill purged all mention from U.K. records, and in the United States, the information was stifled, though not the research on the effect of nitrogen mustard.
James Le Fanu, the doctor and writer about medicine, wrote that the therapeutic revolution, or the period in the postwar years that saw a large number of effective therapies, was not ignited by a major scientific insight. It came from the exact opposite, “the realization by doctors and scientists that it was not necessary to understand in any detail what was wrong, but that synthetic chemistry blindly and randomly would deliver the remedies that had eluded doctors for centuries.” (He uses as a central example the sulfonamides identified by Gerhard Domagk.)
Further, the increase in our theoretical understanding—the “epistemic base,” to use Mokyr’s term—came with a decrease in the number of new drugs. This is something Fat Tony or the green lumber fellow could have told us. Now, one can argue that we depleted the low-hanging fruits, but I go further, with more cues from other parts (such as the payoff from the Human Genome Project or the stalling of medical cures of the past two decades in the face of the growing research expenditures)—knowledge, or what is called “knowledge,” in complex domains inhibits research.
Or, another way to see it, studying the chemical composition of ingredients will make you neither a better cook nor a more expert taster—it might even make you worse at both. (Cooking is particularly humbling for teleology-driven fellows.)
One can make a list of medications that came Black Swan–style from serendipity and compare it to the list of medications that came from design. I was about to embark on such a list until I realized that the notable exceptions, that is, drugs that were discovered in a teleological manner, are too few—mostly AZT, AIDS drugs. Designer drugs have a main property—they are designed (and are therefore teleological). But it does not look as if we are capable of designing a drug while taking into account the potential side effects. Hence a problem for the future of designer drugs. The more drugs there are on the market, the more interactions with one another—so we end up with a swelling number of possible interactions with every new drug introduced. If there are twenty unrelated drugs, the twenty-first would need to consider twenty interactions, no big deal. But if there are a thousand, we would need to predict a little less than a thousand. And there are tens of thousands of drugs available today. Further, there is research showing that we may be underestimating the interactions of current drugs, those already on the market, by a factor of four so, if anything, the pool of available drugs should be shrinking rather than growing.
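To make the swelling concrete, here is a back-of-the-envelope count (my illustration, and it covers only pairwise interactions, ignoring three-drug and higher-order combinations, which make matters worse): each new drug must be checked against every drug already on the market, and the total number of possible pairs grows roughly with the square of the number of drugs.

```python
# Back-of-the-envelope: pairwise interaction checks as the drug count grows.
# Only an illustration of the combinatorics, not pharmacological data.
from math import comb

for n in (20, 1_000, 10_000):
    print(f"{n:>6} drugs already on the market -> "
          f"{n:>6} new checks for the next drug, "
          f"{comb(n, 2):>11,} possible pairs in total")
```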
There is an obvious drift in that business, as a drug can be invented for something and find new applications, what the economist John Kay calls obliquity—aspirin, for instance, changed many times in uses; and the ideas of Judah Folkman about restricting the blood supply of tumors (angiogenesis inhibitors) have led to the treatment of macular degeneration (bevacizumab, known as Avastin), a use that has proved more effective than the one originally intended.
Now, instead of giving my laundry list of drugs here (too inelegant), I refer the reader to, in addition to Meyers’s book, Claude Bohuon and Claude Monneret, Fabuleux hasards, histoire de la découverte des médicaments, and Jie Jack Li’s Laughing Gas, Viagra and Lipitor.
Matt Ridley’s Anti-Teleological Argument
The great medieval Arabic-language skeptic philosopher Algazel, aka Al-Ghazali, who tried to destroy the teleology of Averroes and his rationalism, came up with the famous metaphor of the pin—now falsely attributed to Adam Smith. The pin doesn’t have a single maker but some twenty-five persons involved, all collaborating in the absence of a central planner—a collaboration guided by an invisible hand. For not a single one of them knows how to produce it on his own.
In the eyes of Algazel, a skeptic fideist (i.e., a skeptic with religious faith), knowledge was not in the hands of humans, but in those of God, while Adam Smith calls it the law of the market and some modern theorist presents it as self-organization. If the reader wonders why fideism is epistemologically equivalent to pure skepticism about human knowledge and embracing the hidden logics of things, just replace God with nature, fate, the Invisible, Opaque, and Inaccessible, and you mostly get the same result. The logic of things stands outside of us (in the hands of God or natural or spontaneous forces); and given that nobody these days is in direct communication with God, even in Texas, there is little difference between God and opacity. Not a single individual has a clue about the general process, and that is central.
The author Matt Ridley produces a more potent argument thanks to his background in biology. The difference between humans and animals lies in the ability to collaborate, engage in business, let ideas, pardon the expression, copulate. Collaboration has explosive upside, what is mathematically called a superadditive function, i.e., one plus one equals more than two, and one plus one plus one equals much, much more than three. That is pure nonlinearity with explosive benefits—we will get into details on how it benefits from the philosopher’s stone. Crucially, this is an argument for unpredictability and Black Swan effects: since you cannot forecast collaborations and cannot direct them, you cannot see where the world is going. All you can do is create an environment that facilitates these collaborations, and lay the foundation for prosperity. And, no, you cannot centralize innovations, we tried that in Russia.
Remarkably, to get a bit more philosophical with the ideas of Algazel, one can see religion’s effect here in reducing dependence on the fallibility of human theories and agency—so Adam Smith meets Algazel in that sense. For one the invisible hand is the market, for the other it is God. It has been difficult for people to understand that, historically, skepticism has been mostly skepticism of expert knowledge rather than skepticism about abstract entities like God, and that all the great skeptics have been largely either religious or, at least, pro-religion (that is, in favor of others being religious).
Corporate Teleology
When I was in business school I rarely attended lectures in something called strategic planning, a required course, and when I showed my face in class, I did not listen for a nanosecond to what was said there; I did not even buy the books. There is something about the common sense of student culture; we knew that it was all babble. I passed the required classes in management by confusing the professors, playing with complicated logics, and I felt it intellectually dishonest to enroll in more classes than strictly necessary.
Corporations are in love with the idea of the strategic plan. They need to pay to figure out where they are going. Yet there is no evidence that strategic planning works—we even seem to have evidence against it. A management scholar, William Starbuck, has published a few papers debunking the effectiveness of planning—it makes the corporation option-blind, as it gets locked into a non-opportunistic course of action.
Almost everything theoretical in management, from Taylorism to all productivity stories, upon empirical testing, has been exposed as pseudoscience—and like most economic theories, lives in a world parallel to the evidence. Matthew Stewart, who, trained as a philosopher, found himself in a management consultant job, gives a pretty revolting, if funny, inside story in The Management Myth. It is similar to the self-serving approach of bankers. Abrahamson and Friedman, in their beautiful book A Perfect Mess, also debunk many of these neat, crisp, teleological approaches. It turns out, strategic planning is just superstitious babble.
For an illustration of business drift, rational and opportunistic business drift, take the following. Coca-Cola began as a pharmaceutical product. Tiffany & Co., the fancy jewelry store company, started life as a stationery store. The last two examples are close, perhaps, but consider next: Raytheon, which made the first missile guidance system, was a refrigerator maker (one of the founders was none other than Vannevar Bush, who conceived the teleological linear model of science we saw earlier; go figure). Now, worse: Nokia, which used to be the top mobile phone maker, began as a paper mill (at some stage it was into rubber shoes). DuPont, now famous for Teflon nonstick cooking pans, Corian countertops, and the durable fabric Kevlar, actually started out as an explosives company. Avon, the cosmetics company, started out in door-to-door book sales. And, the strangest of all, Oneida Silversmiths began as a religious community, which for regulatory reasons needed to use a joint stock company as cover.
THE INVERSE TURKEY PROBLEM
Now some plumbing behind what I am saying—epistemology of statistical statements. The following discussion will show how the unknown, what you don’t see, can contain good news in one case and bad news in another. And in Extremistan territory, things get even more accentuated.
To repeat (it is necessary to repeat because intellectuals tend to forget it), absence of evidence is not evidence of absence, a simple point that has the following implications: for the antifragile, good news tends to be absent from past data, and for the fragile it is the bad news that doesn’t show easily.
Imagine going to Mexico with a notebook and trying to figure out the average wealth of the population from talking to people you randomly encounter. Odds are that, without Carlos Slim in your sample, you have little information. For out of the hundred or so million Mexicans, Slim would (I estimate) be richer than the bottom seventy to ninety million all taken together. So you may sample fifty million persons and unless you include that “rare event,” you may have nothing in your sample and underestimate the total wealth.
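A toy version of the same point, for the statistically inclined (my sketch; the Pareto distribution and its tail exponent are arbitrary stand-ins for a fat-tailed wealth distribution): when wealth is dominated by a few huge observations, the typical survey understates the true average, because the rare “Carlos Slim” observation is usually not in the sample.

```python
# Toy notebook-in-Mexico experiment: under fat tails the sample mean of a small
# survey usually sits below the true mean. Parameters are arbitrary assumptions.
import random

random.seed(11)
alpha = 1.1                       # tail exponent close to 1: a very fat tail
true_mean = alpha / (alpha - 1)   # analytical mean of a Pareto(alpha) with minimum 1

surveys, n = 10_000, 100          # 10,000 surveys of 100 randomly met people each
estimates = [sum(random.paretovariate(alpha) for _ in range(n)) / n
             for _ in range(surveys)]
undershoot = sum(e < true_mean for e in estimates) / surveys

print(f"true mean of the wealth distribution : {true_mean:.1f}")
print(f"median survey estimate               : {sorted(estimates)[surveys // 2]:.1f}")
print(f"surveys that undershoot the true mean: {undershoot:.0%}")
```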
Remember the graphs in Figures 6 or 7 illustrating the payoff from trial and error. When engaging in tinkering, you incur a lot of small losses, then once in a while you find something rather significant. Such methodology will show nasty attributes when seen from the outside—it hides its qualities, not its defects.
In the antifragile case (of positive asymmetries, positive Black Swan businesses), such as trial and error, the sample track record will tend to underestimate the long-term average; it will hide the qualities, not the defects.
(A chart is included in the appendix for those who like to look at the point graphically.)
Recall our mission to “not be a turkey.” The take-home is that, when facing a long sample subjected to turkey problems, one tends to estimate a lower number of adverse events—simply, rare events are rare, and tend not to show up in past samples, and given that the rare is almost always negative, we get a rosier picture than reality. But here we face the mirror image, the reverse situation. Under positive asymmetries, that is, the antifragile case, the “unseen” is positive. So “empirical evidence” tends to miss positive events and underestimate the total benefits.
As to the classic turkey problem, the rule is as follows.
In the fragile case of negative asymmetries (turkey problems), the sample track record will tend to overestimate the long-term average; it will hide the defects and display the qualities.
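The two rules can be seen in a small simulation (mine, with arbitrary numbers for the jump sizes and probabilities): a fragile stream earns a little in most periods and loses big on rare occasions; an antifragile stream bleeds a little in most periods and wins big on rare occasions. Over a short observation window, the track record flatters the first and hides the merit of the second.

```python
# Sketch of the two asymmetries: observed track records vs. true long-term means.
# Jump sizes and probabilities are arbitrary assumptions, chosen for illustration.
import random

random.seed(3)

def fragile(p=0.005):       # steady small gain, rare large loss (negative asymmetry)
    return -300.0 if random.random() < p else 1.0

def antifragile(p=0.005):   # steady small loss, rare large gain (positive asymmetry)
    return 300.0 if random.random() < p else -1.0

def track_record(stream, periods=100):
    """Average per-period result over a short observation window."""
    return sum(stream() for _ in range(periods)) / periods

windows = 10_000
frag = [track_record(fragile) for _ in range(windows)]
anti = [track_record(antifragile) for _ in range(windows)]

# True per-period means: fragile ~ 0.995*1 - 0.005*300 = -0.5 ; antifragile ~ +0.5
print("fragile     (true mean about -0.5): "
      f"{sum(x > 0 for x in frag) / windows:.0%} of 100-period records look profitable")
print("antifragile (true mean about +0.5): "
      f"{sum(x < 0 for x in anti) / windows:.0%} of 100-period records look like losers")
```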
The consequences make life simple. But since standard methodologies do not take asymmetries into account, just about anyone who studied conventional statistics without getting very deep into the subject (just to theorize in social science or teach students) will get the turkey problem wrong. I have a simple rule, that those who teach at Harvard should be expected to have much less understanding of things than cab drivers or people innocent of canned methods of inference (it is a heuristic, it can be wrong, but it works; it came to my attention as the Harvard Business School used to include Fragilista Robert C. Merton on its staff).
So let us pick on Harvard Business School professors who deserve it quite a bit. When it comes to the first case (the error of ignoring positive asymmetries), one Harvard Business School professor, Gary Pisano, writing about the potential of biotech, made the elementary inverse-turkey mistake, not realizing that in a business with limited losses and unlimited potential (the exact opposite of banking), what you don’t see can be both significant and hidden from the past. He writes: “Despite the commercial success of several companies and the stunning growth in revenues for the industry as a whole, most biotechnology firms earn no profit.” This may be correct, but the inference from it is wrong, possibly backward, on two counts, and it helps to repeat the logic owing to the gravity of the consequences. First, “most companies” in Extremistan make no profit—the rare event dominates, and a small number of companies generate all the shekels. Second, whatever point he may have, in the presence of the kind of asymmetry and optionality we see in Figure 7, it is inconclusive, so it is better to write about another subject, something less harmful that may interest Harvard students, like how to make a convincing PowerPoint presentation or the difference in managerial cultures between the Japanese and the French. Again, he may be right about the pitiful potential of biotech investments, but not on the basis of the data he showed.