We have a paradox. Not only have forecasters generally failed dismally to foresee the drastic changes brought about by unpredictable discoveries, but incremental change has turned out to be generally slower than forecasters expected. When a new technology emerges, we either grossly underestimate or severely overestimate its importance. Thomas Watson, the founder of IBM, once predicted that there would be no need for more than just a handful of computers.
That the reader of this book is probably reading these lines not on a screen but in the pages of that anachronistic device, the book, would seem quite an aberration to certain pundits of the “digital revolution.” That you are reading them in archaic, messy, and inconsistent English, French, or Swahili, instead of in Esperanto, defies the predictions of half a century ago that the world would soon be communicating in a logical, unambiguous, and Platonically designed lingua franca. Likewise, we are not spending long weekends in space stations as was universally predicted three decades ago. In an example of corporate arrogance, after the first moon landing the now-defunct airline Pan Am took advance bookings for round-trips between earth and the moon. Nice prediction, except that the company failed to foresee that it would be out of business not long after.
A Solution Waiting for a Problem
Engineers tend to develop tools for the pleasure of developing tools, not to induce nature to yield its secrets. It so happens that some of these tools bring us more knowledge; because of the silent evidence effect, we forget to consider tools that accomplished nothing but keeping engineers off the streets. Tools lead to unexpected discoveries, which themselves lead to other unexpected discoveries. But rarely do our tools seem to work as intended; it is only the engineer’s gusto and love for the building of toys and machines that contribute to the augmentation of our knowledge. Knowledge does not progress from tools designed to verify or help theories, but rather the opposite. The computer was not built to allow us to develop new, visual, geometric mathematics, but for some other purpose. It happened to allow us to discover mathematical objects that few cared to look for. Nor was the computer invented to let you chat with your friends in Siberia, but it has caused some long-distance relationships to bloom. As an essayist, I can attest that the Internet has helped me to spread my ideas by bypassing journalists. But this was not the stated purpose of its military designer.
The laser is a prime illustration of a tool made for a given purpose (actually no real purpose) that then found applications that were not even dreamed of at the time. It was a typical “solution looking for a problem.” Among the early applications was the surgical stitching of detached retinas. Half a century later, The Economist asked Charles Townes, the alleged inventor of the laser, if he had had retinas on his mind. He had not. He was satisfying his desire to split light beams, and that was that. In fact, Townes’s colleagues teased him quite a bit about the irrelevance of his discovery. Yet just consider the effects of the laser in the world around you: compact disks, eyesight corrections, microsurgery, data storage and retrieval—all unforeseen applications of the technology.*
We build toys. Some of those toys change the world.
Keep Searching
In the summer of 2005 I was the guest of a biotech company in California that had found inordinate success. I was greeted with T-shirts and pins showing a bell-curve buster and the announcement of the formation of the Fat Tails Club (“fat tails” is a technical term for Black Swans). This was my first encounter with a firm that lived off Black Swans of the positive kind. I was told that a scientist managed the company and that he had the instinct, as a scientist, to just let scientists look wherever their instinct took them. Commercialization came later. My hosts, scientists at heart, understood that research involves a large element of serendipity, which can pay off big as long as one knows how serendipitous the business can be and structures it around that fact. Viagra, which changed the mental outlook and social mores of retired men, was meant to be a hypertension drug. Another hypertension drug led to a hair-growth medication. My friend Bruce Goldberg, who understands randomness, calls these unintended side applications “corners.” While many worry about unintended consequences, technology adventurers thrive on them.
The biotech company seemed to follow implicitly, though not explicitly, Louis Pasteur’s adage about creating luck by sheer exposure. “Luck favors the prepared,” Pasteur said, and, like all great discoverers, he knew something about accidental discoveries. The best way to get maximal exposure is to keep researching. Collect opportunities—on that, later.
To predict the spread of a technology implies predicting a large element of fads and social contagion, which lie outside the objective utility of the technology itself (assuming there is such an animal as objective utility). How many wonderfully useful ideas have ended up in the cemetery, such as the Segway, an electric scooter that, it was prophesied, would change the morphology of cities, and many others. As I was mentally writing these lines I saw a Time magazine cover at an airport stand announcing the “meaningful inventions” of the year. These inventions seemed to be meaningful as of the issue date, or perhaps for a couple of weeks after. Journalists can teach us how to not learn.
HOW TO PREDICT YOUR PREDICTIONS!
This brings us to Sir Doktor Professor Karl Raimund Popper’s attack on historicism. As I said in Chapter 5, this was his most significant insight, but it remains his least known. People who do not really know his work tend to focus on Popperian falsification, which addresses the verification or nonverification of claims. This focus obscures his central idea: he made skepticism a method, he made of a skeptic someone constructive.
Just as Karl Marx wrote, in great irritation, a diatribe called The Misery of Philosophy in response to Proudhon’s The Philosophy of Misery, Popper, irritated by some of the philosophers of his time who believed in the scientific understanding of history, wrote, as a pun, The Misery of Historicism (which has been translated as The Poverty of Historicism).*
Popper’s insight concerns the limitations in forecasting historical events and the need to downgrade “soft” areas such as history and social science to a level slightly above aesthetics and entertainment, like butterfly or coin collecting. (Popper, having received a classical Viennese education, didn’t go quite that far; I do. I am from Amioun.) What we call here soft historical sciences are narrative-dependent studies.
Popper’s central argument is that in order to predict historical events you need to predict technological innovation, itself fundamentally unpredictable.
“Fundamentally” unpredictable? I will explain what he means using a modern framework. Consider the following property of knowledge: If you expect that you will know tomorrow with certainty that your boyfriend has been cheating on you all this time, then you know today with certainty that your boyfriend is cheating on you and will take action today, say, by grabbing a pair of scissors and angrily cutting all his Ferragamo ties in half. You won’t tell yourself, This is what I will figure out tomorrow, but today is different so I will ignore the information and have a pleasant dinner. This point can be generalized to all forms of knowledge. There is actually a law in statistics called the law of iterated expectations, which I outline here in its strong form: if I expect to expect something at some date in the future, then I already expect that something at present.
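In symbols, a minimal formal statement (standard conditional-expectation notation; the symbols are mine, with $\mathcal{F}_s \subseteq \mathcal{F}_t$ standing for the information available at an earlier date $s$ and a later date $t$):

$$\mathbb{E}\big[\,\mathbb{E}[X \mid \mathcal{F}_t]\,\big|\,\mathcal{F}_s\big] = \mathbb{E}[X \mid \mathcal{F}_s], \qquad s \le t.$$

In words: my best estimate today of my best estimate tomorrow is already my best estimate today. An expectation you can foresee yourself holding is one you already hold.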
Consider the wheel again. If you are a Stone Age historical thinker called on to predict the future in a comprehensive report for your chief tribal planner, you must project the invention of the wheel or you will miss pretty much all of the action. Now, if you can prophesy the invention of the wheel, you already know what a wheel looks like, and thus you already know how to build a wheel, so you are already on your way. The Black Swan needs to be predicted!
But there is a weaker form of this law of iterated knowledge. It can be phrased as follows: to understand the future to the point of being able to predict it, you need to incorporate elements from this future itself. If you know about the discovery you are about to make in the future, then you have almost made it. Assume that you are a special scholar in Medieval University’s Forecasting Department specializing in the projection of future history (for our purposes, the remote twentieth century). You would need to hit upon the inventions of the steam machine, electricity, the atomic bomb, and the Internet, as well as the institution of the airplane onboard massage and that strange activity called the business meeting, in which well-fed, but sedentary, men voluntarily restrict their blood circulation with an expensive device called a necktie.
This incapacity is not trivial. The mere knowledge that something has been invented often leads to a series of inventions of a similar nature, even though not a single detail of this invention has been disseminated—there is no need to find the spies and hang them publicly. In mathematics, once a proof of an arcane theorem has been announced, we frequently witness the proliferation of similar proofs coming out of nowhere, with occasional accusations of leakage and plagiarism. There may be no plagiarism: the information that the solution exists is itself a big piece of the solution.
By the same logic, we are not easily able to conceive of future inventions (if we were, they would have already been invented). On the day when we are able to foresee inventions we will be living in a state where everything conceivable has been invented. Our own condition brings to mind the apocryphal story from 1899 when the head of the U.S. patent office resigned because he deemed that there was nothing left to discover—except that on that day the resignation would be justified.*
Popper was not the first to go after the limits to our knowledge. In Germany, in the late nineteenth century, Emil du Bois-Reymond claimed that ignoramus et ignorabimus—we are ignorant and will remain so. Somehow his ideas went into oblivion. But not before causing a reaction: the mathematician David Hilbert set out to defy him by drawing up a list of problems that mathematicians would need to solve over the next century.
Even du Bois-Reymond was wrong. We are not even good at understanding the unknowable. Consider the statements we make about things that we will never come to know—we confidently underestimate what knowledge we may acquire in the future. Auguste Comte, the founder of the school of positivism, which is (unfairly) accused of aiming at the scientization of everything in sight, declared that mankind would forever remain ignorant of the chemical composition of the fixed stars. But, as Charles Sanders Peirce reported, “The ink was scarcely dry upon the printed page before the spectroscope was discovered and that which he had deemed absolutely unknowable was well on the way of getting ascertained.” Ironically, Comte’s other projections, concerning what we would come to learn about the workings of society, were grossly—and dangerously—overstated. He assumed that society was like a clock that would yield its secrets to us.
I’ll summarize my argument here: Prediction requires knowing about technologies that will be discovered in the future. But that very knowledge would almost automatically allow us to start developing those technologies right away. Ergo, we do not know what we will know.
Some might say that the argument, as phrased, seems obvious, that we always think that we have reached definitive knowledge but don’t notice that those past societies we laugh at also thought the same way. My argument is trivial, so why don’t we take it into account? The answer lies in a pathology of human nature. Remember the psychological discussions on asymmetries in the perception of skills in the previous chapter? We see flaws in others and not in ourselves. Once again we seem to be wonderful self-deceit machines.
[Photograph: Monsieur le professeur Henri Poincaré. Somehow they stopped making this kind of thinker. Courtesy of Université Nancy-2.]
THE NTH BILLIARD BALL
Henri Poincaré, in spite of his fame, is regularly considered to be an undervalued scientific thinker, given that it took close to a century for some of his ideas to be appreciated. He was perhaps the last great thinking mathematician (or possibly the reverse, a mathematical thinker). Every time I see a T-shirt bearing the picture of the modern icon Albert Einstein, I cannot help thinking of Poincaré—Einstein is worthy of our reverence, but he has displaced many others. There is so little room in our consciousness; it is winner-take-all up there.
Third Republic–Style Decorum
Again, Poincaré is in a class by himself. I recall my father recommending Poincaré’s essays, not just for their scientific content, but for the quality of his French prose. The grand master wrote these wonders as serialized articles and composed them like extemporaneous speeches. As in every masterpiece, you see a mixture of repetitions, digressions, everything a “me too” editor with a prepackaged mind would condemn—but these make his text even more readable owing to an iron consistency of thought.
Poincaré became a prolific essayist in his thirties. He seemed in a hurry and died prematurely, at fifty-eight; he was in such a rush that he did not bother correcting typos and grammatical errors in his text, even after spotting them, since he found doing so a gross misuse of his time. They no longer make geniuses like that—or they no longer let them write in their own way.
Poincaré’s reputation as a thinker waned rapidly after his death. His idea that concerns us took almost a century to resurface, but in another form. It was indeed a great mistake that I did not carefully read his essays as a child, for in his magisterial La science et l’hypothèse, I discovered later, he angrily disparages the use of the bell curve.
I will repeat that Poincaré was the true kind of philosopher of science: his philosophizing came from his witnessing the limits of the subject itself, which is what true philosophy is all about. I love to tick off French literary intellectuals by naming Poincaré as my favorite French philosopher. “Him a philosophe? What do you mean, monsieur?” It is always frustrating to explain to people that the thinkers they put on the pedestals, such as Henri Bergson or Jean-Paul Sartre, are largely the result of fashion production and can’t come close to Poincaré in terms of sheer influence that will continue for centuries to come. In fact, there is a scandal of prediction going on here, since it is the French Ministry of National Education that decides who is a philosopher and which philosophers need to be studied.
I am looking at Poincaré’s picture. He was a bearded, portly and imposing, well-educated patrician gentleman of the French Third Republic, a man who lived and breathed general science, looked deep into his subject, and had an astonishing breadth of knowledge. He was part of the class of mandarins that gained respectability in the late nineteenth century: upper middle class, powerful, but not exceedingly rich. His father was a doctor and professor of medicine, his uncle was a prominent scientist and administrator, and his cousin Raymond became a president of the republic of France. These were the days when the grandchildren of businessmen and wealthy landowners headed for the intellectual professions.
However, I can hardly imagine him on a T-shirt, or sticking out his tongue like in that famous picture of Einstein. There is something non-playful about him, a Third Republic style of dignity.
In his day, Poincaré was thought to be the king of mathematics and science, except of course by a few narrow-minded mathematicians like Charles Hermite who considered him too intuitive, too intellectual, or too “hand-waving.” When mathematicians say “hand-waving,” disparagingly, about someone’s work, it means that the person has: a) insight, b) realism, c) something to say, and it means that d) he is right because that’s what critics say when they can’t find anything more negative. A nod from Poincaré made or broke a career. Many claim that Poincaré figured out relativity before Einstein—and that Einstein got the idea from him—but that he did not make a big deal out of it. These claims are naturally made by the French, but there seems to be some validation from Einstein’s friend and biographer Abraham Pais. Poincaré was too aristocratic in both background and demeanor to complain about the ownership of a result.
Poincaré is central to this chapter because he lived in an age when we had made extremely rapid intellectual progress in the fields of prediction—think of celestial mechanics. The scientific revolution made us feel that we were in possession of tools that would allow us to grasp the future. Uncertainty was gone. The universe was like a clock and, by studying the movements of the pieces, we could project into the future. It was only a matter of writing down the right models and having the engineers do the calculations. The future was a mere extension of our technological certainties.
The Three-Body Problem
Poincaré was the first known big-gun mathematician to understand and explain that there are fundamental limits to our equations. He introduced nonlinearities, small effects that can lead to severe consequences, an idea that later became popular, perhaps a bit too popular, as chaos theory. Why is this popularity poisonous? Because Poincaré’s entire point is about the limits that nonlinearities put on forecasting; chaos theory is not an invitation to use mathematical techniques to make extended forecasts. Mathematics can show us its own limits rather clearly.
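To see how little nonlinearity it takes, here is a minimal sketch, using the logistic map rather than Poincaré’s celestial mechanics (a standard classroom stand-in for a chaotic system, chosen for convenience and not anything Poincaré himself computed): two trajectories that start one part in ten billion apart.

```python
# A toy illustration of the point about nonlinearity: in a chaotic system,
# a difference of one part in ten billion between two starting points
# swamps any forecast within a few dozen steps. The logistic map below is
# a standard textbook example, not Poincare's own celestial mechanics.

def logistic(x, r=4.0):
    """One iteration of the logistic map x -> r * x * (1 - x)."""
    return r * x * (1.0 - x)

x, y = 0.4, 0.4 + 1e-10  # two nearly identical initial conditions
for step in range(1, 51):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: gap = {abs(x - y):.10f}")
```

By roughly the thirtieth iteration the gap is of order one and the two futures bear no resemblance to each other; adding decimal places to the initial measurement only postpones the divergence, it never eliminates it.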
There is (as usual) an element of the unexpected in this story. Poincaré initially responded to a competition organized by the mathematician Gösta Mittag-Leffler to celebrate the sixtieth birthday of King Oscar of Sweden. Poincaré’s memoir, which was about the stability of the solar system, won the prize that was then the highest scientific honor (as these were the happy days before the Nobel Prize). A problem arose, however, when a mathematical editor checking the memoir before publication realized that there was a calculation error, and that, after consideration, it led to the opposite conclusion—unpredictability, or, more technically, nonintegrability. The memoir was discreetly pulled and reissued about a year later.