So opinion makers who were so proudly and professionally providing idle babble will eventually appear to win an argument, since they are the ones writing, and suckers who got in trouble from reading them will again look to them for future guidance, and will again get in trouble.
The past is fluid, marred with selection biases and constantly revised memories. It is a central property of suckers that they will never know they were the suckers because that’s how our minds work. (Even so, one is struck with the following fact: the fragilista crisis that started in 2007–2008 had many, many fewer near-predictors than random.)
The asymmetry (antifragility of postdictors): postdictors can cherry-pick and produce instances in which their opinions played out and discard mispredictions into the bowels of history. It is like a free option—to them; we pay for it.
Since they have the option, the fragilistas are personally antifragile: volatility tends to benefit them: the more volatility, the higher the illusion of intelligence.
But evidence of whether one has been a sucker or a nonsucker is easy to ferret out by looking at actual records, actions. Actions are symmetric, do not allow cherry-picking, remove the free option. When you look at the actual history of someone’s activities, instead of what thoughts he will deliver after the facts, things become crystal clear. The option is gone. Reality removes the uncertainty, the imprecision, the vagueness, the self-serving mental biases that make us appear more intelligent. Mistakes are costly, no longer free, but being right brings actual rewards. Of course, there are other checks one can do to assess the b***t component of life: investigate people’s decisions as expressed through their own investments. You would discover that many people who claim to have foreseen the collapse of the financial system had financial companies in their portfolios. Indeed, there was no need to “profit” from events, as Tony and Nero did, to show nonsuckerness: just avoiding being hurt by them would have been sufficient.
I want predictors to have visible scars on their body from prediction errors, not distribute these errors to society.
You cannot sit and moan about the world. You need to come out on top. So Tony was right to insist that Nero take a ritual look at the physical embodiment of the spoils, like a bank account statement—as we said, it had nothing to do with financial value, nor purchasing power, just symbolic value. We saw in Chapter 9 how Julius Caesar needed to incur the cost of having Vercingetorix brought to Rome and paraded. An intangible victory has no value.
Verba volant, words fly. Never have people who talk and don’t do been more visible, and played a larger role, than in modern times. This is the product of modernism and division of tasks.
Recall that I said that America’s strength was risk taking and harboring risk takers (the right kind, the Thalesian kind of high-failure, long-optionality type). Sorry, but we have been moving away from this model.
The Stiglitz Syndrome
There is something more severe than the problem with Thomas Friedman, which can be generalized to represent someone causing action while being completely unaccountable for his words.
The phenomenon I will call the Stiglitz syndrome, after an academic economist of the so-called “intelligent” variety called Joseph Stiglitz, is as follows.
Remember the fragility detection in Chapter 19 and my obsession with Fannie Mae. Luckily, I had some skin in the game for my opinions, if only through exposure to a smear campaign. And in 2008, no surprise, Fannie Mae went bust, costing the U.S. taxpayer hundreds of billions (and counting); the rest of the financial system, carrying similar exposures, exploded along with it.
But around the same period, Joseph Stiglitz, with two colleagues, the Orszag brothers (Peter and Jonathan), looked at the very same Fannie Mae. They assessed, in a report, that “on the basis of historical experience, the risk to the government from a potential default on GSE debt is effectively zero.”1 Supposedly, they ran simulations—but didn’t see the obvious. They also said that the probability of a default was found to be “so small that it is difficult to detect.” It is statements like these and, to me, only statements like these (intellectual hubris and the illusion of understanding of rare events) that caused the buildup of these exposures to rare events in the economy. This is the Black Swan problem that I was fighting. This is Fukushima.
The culmination came in 2010, when Stiglitz claimed, in his I-told-you-so book, to have “predicted” the crisis that started in 2007–2008.
Look at this aberrant case of antifragility provided to Stiglitz and his colleagues by society. It turns out that Stiglitz was not just a nonpredictor (by my standards) but was also part of the problem that caused the events, these accumulations of exposures to small probabilities. But he did not notice it! An academic is not designed to remember his opinions because he doesn’t have anything at risk from them.
At the core, people are dangerous when they have that strange skill that allows their papers to be published in journals but decreases their understanding of risk. So the very same economist who caused the problem then postdicted the crisis, and then became a theorist on what happened. No wonder we will have larger crises.
The central point: had Stiglitz been a businessman with his own money on the line, he would have blown up, terminated. Or had he been in nature, his genes would have been made extinct—so people with such misunderstanding of probability would eventually disappear from our DNA. What I found nauseating was the government hiring one of his coauthors.2
I am reluctantly calling the syndrome by Stiglitz’s name because I find him the smartest of economists, one with the most developed intellect for things on paper—except that he has no clue about the fragility of systems. And Stiglitz symbolizes harmful misunderstanding of small probabilities by the economics establishment. It is a severe disease, one that explains why economists will blow us up again.
The Stiglitz syndrome corresponds to a form of cherry-picking, the nastiest variety because the perpetrator is not aware of what he is doing. It is a situation in which someone doesn’t just fail to detect a hazard but contributes to its cause while ending up convincing himself—and sometimes others—of the opposite, namely, that he predicted it and warned against it. It corresponds to a combination of remarkable analytical skills, blindness to fragility, selective memory, and absence of skin in the game.
Stiglitz Syndrome = fragilista (with good intentions) + ex post cherry-picking
There are other lessons here, related to the absence of penalty. This is an illustration of the academics-who-write-papers-and-talk syndrome in its greatest severity (unless, as we will see, they have their soul in it). So many academics propose something in one paper, then the opposite in another paper, without penalty to themselves from having been wrong in the first paper since there is a need only for consistency within a single paper, not across one’s career. This would be fine, as someone may evolve and contradict earlier beliefs, but then the earlier “result” should be withdrawn from circulation and superseded with a new one—with books, the new edition supersedes the preceding one. This absence of penalty makes them antifragile at the expense of the society that accepts the “rigor” of their results. Further, I am not doubting Stiglitz’s sincerity, or some weak form of sincerity: I believe he genuinely thinks he predicted the financial crisis, so let me rephrase the problem: the problem with people who do not incur harm is that they can cherry-pick from statements they’ve made in the past, many of them contradictory, and end up convincing themselves of their intellectual lucidity on the way to the World Economic Forum at Davos.
There is the iatrogenics of the medical charlatan and snake oil salesperson causing harm, but he sort of knows it and lies low after he is caught. And there is a far more vicious form of iatrogenics by experts who use their more acceptable status to claim later that they warned of harm. As these did not know they were causing iatrogenics, they cure iatrogenics with iatrogenics. Then things explode.
Finally, the cure to many ethical problems maps to the exact cure for the Stiglitz effect, which I state now.
Never ask anyone for their opinion, forecast, or recommendation. Just ask them what they have—or don’t have—in their portfolio.
We now know that many innocent retirees have been harmed by the incompetence of the rating agencies—it was a bit more than incompetence. Many subprime loans were toxic waste dressed as “AAA,” meaning near-government grade in safety. People were innocently led into putting their savings into them—and, further, regulators were forcing portfolio managers to use the assessment of the rating agencies. But rating agencies are protected: they present themselves as press—without the noble mission of the press to expose frauds. And they benefit from the protection of free speech—the “First Amendment” so ingrained in American habits. My humble proposal: one should say whatever he wants, but one’s portfolio needs to line up with it. And, of course, regulators should not be fragilistas by giving their stamp to predictive approaches—hence junk science.
The psychologist Gerd Gigerenzer has a simple heuristic. Never ask the doctor what you should do. Ask him what he would do if he were in your place. You would be surprised at the difference.
The Problem of Frequency, or How to Lose Arguments
Recall that Fat Tony was in favor of just “making a buck” as opposed to being “proven right.” The point has a statistical dimension. Let us return to the distinction between Thalesian and Aristotelian for a minute and look at evolution from the following point of view. The frequency, i.e., how often someone is right, is largely irrelevant in the real world, but alas, one needs to be a practitioner, not a talker, to figure it out. On paper, the frequency of being right matters, but only on paper. Typically, fragile payoffs have little (sometimes no) upside, while antifragile payoffs have little downside. This means that one makes pennies to lose dollars in the fragile case, and makes dollars to lose pennies in the antifragile one. So the antifragile can lose for a long time with impunity, so long as he happens to be right once; for the fragile, a single loss can be terminal.
Accordingly, if you were betting on the downfall of, say, a portfolio of financial institutions because of their fragilities, it would have cost you pennies over the years preceding their eventual demise in 2008, as it did Nero and Tony. (Note again that taking the other side of fragility makes you antifragile.) You were wrong for years, right for a moment, losing small, winning big—so vastly more successful than the other way around (indeed, the other way around would have meant going bust). So you would have made the Thekels, like Thales, because betting against the fragile is antifragile. But someone who had merely “predicted” the event with words would have been called by the journalists “wrong for years,” “wrong most of the time,” and so on.
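The arithmetic of this asymmetry can be sketched with invented numbers—a toy tally, not a model of any actual trade—showing how a bet that is “wrong” almost every period can still come out ahead:

```python
# Toy illustration (made-up numbers): frequency of being "right"
# versus total payoff, for a fragile and an antifragile bet.

PERIODS = 1000  # hypothetical betting periods; the rare event hits once

# Fragile bet: wins a penny almost every period, loses big once.
fragile_payoffs = [1] * (PERIODS - 1) + [-1000]

# Antifragile bet: loses a penny almost every period, wins big once.
antifragile_payoffs = [-1] * (PERIODS - 1) + [1000]

def summarize(name, payoffs):
    right = sum(1 for p in payoffs if p > 0)  # how often the bet "won"
    total = sum(payoffs)                      # what actually matters
    print(f"{name}: right {right}/{len(payoffs)} periods, total payoff {total}")

summarize("fragile", fragile_payoffs)          # right 999/1000, total -1
summarize("antifragile", antifragile_payoffs)  # right 1/1000, total +1
```

The fragile bettor is “right” 99.9 percent of the time and ends up a net loser; the antifragile bettor is “wrong” 99.9 percent of the time and comes out ahead. Frequency of correctness and payoff are different quantities.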
Should we keep tally of opinion makers’ “right” and “wrong,” the proportion does not matter, as we need to include consequences. And given that this is impossible, we are now in a quandary.
Look at it again, the way we looked at entrepreneurs. They are usually wrong and make “mistakes”—plenty of mistakes. They are convex. So what counts is the payoff from success.
Let me rephrase again. Decision making in the real world, that is, deeds, is Thalesian, while forecasting in words is Aristotelian. As we saw in the discussion in Chapter 12, one side of a decision has larger consequences than the other—we don’t have evidence that people are terrorists but we check them for weapons; we don’t believe the water is poisonous but we avoid drinking it; something that would be absurd for someone narrowly applying Aristotelian logic. To put it in Fat Tony terms: suckers try to be right, nonsuckers try to make the buck, or:
Suckers try to win arguments, nonsuckers try to win.
To put it again in other words: it is rather a good thing to lose arguments.
The Right Decision for the Wrong Reason
More generally, for Mother Nature, opinions and predictions don’t count; surviving is what matters.
There is an evolutionary argument here. It appears to be the most underestimated argument in favor of free enterprise and a society driven by individual doers, what Adam Smith called “adventurers,” not central planners and bureaucratic apparatuses. We saw that bureaucrats (whether in government or large corporations) live in a system of rewards based on narratives, “tawk,” and the opinion of others, with job evaluation and peer reviews—in other words, what we call marketing. Aristotelian, that is. Yet the biological world evolves by survival, not opinions and “I predicted” and “I told you so.” Evolution dislikes the confirmation fallacy, endemic in society.
The economic world should, too, but institutions mess things up, as suckers may get bigger—institutions block evolution with bailouts and statism. Note that, in the long term, social and economic evolution nastily takes place by surprises, discontinuities, and jumps.3
We mentioned earlier Karl Popper’s ideas on evolutionary epistemology; not being a decision maker, he was under the illusion that ideas compete with each other, with the least wrong surviving at any point in time. He missed the point that it is not ideas that survive, but people who have the right ones, or societies that have the correct heuristics, or the ones, right or wrong, that lead them to do the good thing. He missed the Thalesian effect, the fact that a wrong idea that is harmless can survive. Those who have wrong heuristics—but with a small harm in the event of error—will survive. Behavior called “irrational” can be good if it is harmless.
Let me give an example of a type of false belief that is helpful for survival. In your opinion, which is more dangerous, to mistake a bear for a stone, or mistake a stone for a bear? It is hard for humans to make the first mistake; our intuitions make us overreact at the smallest probability of harm and fall for a certain class of false patterns—those who overreact upon seeing what may look like a bear have had a survival advantage, those who made the opposite mistake left the gene pool.
Our mission is to make talk less cheap.
THE ANCIENTS AND THE STIGLITZ SYNDROME
We saw how the ancients understood the Stiglitz syndrome—and associated ones—rather well. In fact they had quite sophisticated mechanisms to counter most aspects of agency problems, whether individual or collective (the circular effect of hiding behind the collective). Earlier, I mentioned the Romans forcing engineers to spend time under the bridge they built. They would have had Stiglitz and Orszag sleep under the bridge of Fannie Mae and exit the gene pool (so they wouldn’t harm us again).
The Romans had even more powerful heuristics for situations few today have thought about, solving potent game-theoretic problems. Roman soldiers were forced to sign a sacramentum accepting punishment in the event of failure—a kind of pact between the soldier and the army spelling out commitment for upside and downside.
Assume that you and I are facing a small leopard or a wild animal in the jungle. The two of us can possibly overcome it by joining forces—but each one of us is individually weak. Now, if you run away, all you need to be is just faster than me, not faster than the animal. So it would be optimal for the one who can run away the fastest, that is, the most cowardly, to just be a coward and let the other one perish.
The Romans removed the soldiers’ incentive to be cowards and hurt others thanks to a process called decimation. If a legion lost a battle and there was suspicion of cowardice, 10 percent of the soldiers and commanders were put to death, usually by random lottery. Decimation—meaning the elimination of one in ten—has been corrupted by modern language. The magic number is one in ten (or something equivalent): putting more than 10 percent to death would weaken the army; too few, and cowardice would remain a dominant strategy.
And the mechanism must have worked well as a deterrent against cowardice, since it was not commonly applied.
The English applied a version of it. Admiral John Byng was court-martialed and sentenced to death in 1757, found guilty of failing to “do his utmost” to prevent Minorca from falling to the French at the Battle of Minorca the previous year.
To Burn One’s Vessels
Playing on one’s inner agency problem can go beyond symmetry: give soldiers no options and see how antifragile they can get.
On April 29, 711, the armies of the Arab commander Tarek crossed the Strait of Gibraltar from Morocco into Spain with a small army (the name Gibraltar is derived from the Arabic Jabal Tarek, meaning “mount of Tarek”). Upon landing, Tarek had his ships set on fire. He then made a famous speech, which every schoolchild memorized during my school days, and which I translate loosely: “Behind you is the sea, before you, the enemy. You are vastly outnumbered. All you have is sword and courage.”
And Tarek and his small army took control of Spain. The same heuristic seems to have played out throughout history, from Cortés in Mexico, eight hundred years later, to Agathocles of Syracuse, eight hundred years earlier—ironically, Agathocles was heading southward, in the opposite direction from Tarek, as he was fighting the Carthaginians and landed in Africa.
Never put your enemy’s back to the wall.
How Poetry Can Kill You
Ask a polyglot who knows Arabic who he considers the best poet—in any language—and odds are that he would answer Almutanabbi, who lived about a thousand years ago; his poetry in the original has a hypnotic effect on the reader (listener), rivaled only by the grip of Pushkin on Russian speakers. The problem is that Almutanabbi knew it; his name was literally “He who thinks of himself as a prophet,” on account of his perceived oversized ego. For a taste of his bombast, one of his poems informs us that his poetry is so potent “that blind people can read it” and “deaf people can listen to it.” Well, Almutanabbi was that rare case of a poet with skin in the game, dying for his poetry.
Antifragile: Things That Gain from Disorder Page 45