Farsighted


by Steven Johnson


  The networked personal computer was a different kind of problem. For roughly a century and a half, transportation speed was more predictable because the problem of designing engines involved a limited and relatively stable number of disciplines. It was thermodynamics and mechanical engineering, and maybe a little chemistry in experimenting with different kinds of propulsion sources. But the fields that converged to invent the modern digital computer were far more diverse. Computing began in math, but it came to rely on electrical engineering, robotics, and microwave signal processing, not to mention entirely new fields like user interface design. When all of those different fields are raising their game at a regular pace, category-changing breakthroughs can arise, precisely the kind of step change that is difficult to predict in advance. The fact that the chips got so cheap can be attributed both to the advances in solid state physics that enabled us to use semiconductors as logic gates and, eventually, to advances in supply chain management that enable a device like the iPhone to be assembled out of components made on another continent.

  This is why so many smart people had a blind spot for the personal computer. To see it coming, you had to understand that the symbolic languages of programming would advance beyond the simple mathematical calculations of early computation; you had to understand that silicon and integrated circuits would replace vacuum tubes; that radio waves could be manipulated to convey binary information, instead of analog waveforms; that the electrons firing at cathode-ray screens could be controlled so precisely that they could form legible alphanumeric characters; that stable networks could form on top of decentralized nodes without any master device controlling the entire system. To make sense of it all, you had to be a mathematician, and a supply chain manager, and an information theorist, and a solid state physicist. For all its achievements, the physical acceleration of the nineteenth and twentieth centuries was all just a variation on a single theme: burn something, and convert the energy released into movement. But the computer was a symphony.

  Is the myopia of Tetlock’s pundits and the sci-fi authors perhaps just a sign that complex systems like geopolitics and information technology are fundamentally unpredictable because they involve too many variables, distributed across too many different fields? And if that’s the case, how can we ever make better long-term decisions? To make successful decisions, you need to have a better-than-chance understanding of where the paths you’re choosing between are going to take you. You can’t be farsighted if the road ahead is blurry.

  Are there fields where we have made meaningful advances in predicting the future behavior of complex systems, not just incremental improvements of the superforecasters? And if so, can we learn something from their success?

  THE WATER CURE

  A few years after Darwin made his fateful decision to marry, he began experiencing mysterious bouts of vomiting, a condition that would plague him for the rest of his life. Eventually his physicians recommended that he leave London to recuperate. The doctors weren’t simply sending him to the country for a little rest and relaxation. They had a much more specific intervention in mind. They were sending him to the water cure.

  In taking their advice, Darwin was following the lead of many of his intellectual peers: Alfred Tennyson, Florence Nightingale, Charles Dickens, and George Eliot’s companion, George Henry Lewes. Situated near a legendary natural spring in the town of Malvern, the water cure clinic had been founded a decade before by two doctors named James Manby Gully and James Wilson. In modern terms, the Malvern clinic would be categorized at the extreme end of “holistic” health practices, but at the time, it was largely indistinguishable from genuine medicine. Darwin made several visits to Malvern and wrote many letters mulling the scientific validity of the water cure. (As we will see, he had reason to be concerned. The great tragedy of his life unfurled during a trip to Malvern.) The treatments Gully and Wilson devised included dumping a large quantity of freezing-cold water onto their patients and then wrapping them in wet sheets and forcing them to lie still for hours. Gully, in particular, seems to have been receptive to just about every holistic or spiritual treatment one could imagine. In a letter, Darwin mocked his doctor for the “medical” interventions he arranged for his own family: “When his daughter was very ill,” he wrote of Gully, “he had a clairvoyant girl to report on internal changes, a mesmerist to put her to sleep, a homeopathist . . . and himself as hydropathist.”

  The fact that Darwin kept returning to Malvern despite his misgivings suggests that he still believed there was something genuinely therapeutic about the water cure. While it is likely true that the simple act of leaving the polluted chaos of London and drinking uncontaminated water for a few weeks would have had health benefits, the specific treatments employed at the clinic almost certainly had no positive effect on the patients’ condition other than perhaps the small bonus of the placebo effect. The fact that the water cure appears to have had no medical value whatsoever did not stop Gully and Wilson from developing a national reputation as miracle healers.

  That reputation may have had something to do with the fact that the water cure outperformed many common medical interventions from the period: arsenic, lead, and bloodletting were all still commonly prescribed by the most highly regarded physicians. When you think of some of the engineering and scientific achievements of the period—Darwin’s dangerous idea, the railroads—it seems strangely asynchronous that medical expertise would still be lingering in such dark age mysticism. Darwin faced one of the most challenging decisions one can imagine—What treatment should I seek for this debilitating illness?—and his options were fundamentally, Should I let this doctor dump a bucket of ice water on me, or should I opt for the leeches?

  That choice seems laughable to us today, but how did it come about in the first place? The Victorians were great overachievers in many fields. Why were they so incompetent with medicine? There is a good argument to be made that, in sum, the medical professions of the Victorian era broke the Hippocratic oath and did more harm than good with their interventions. The average Victorian trying to stay alive in that environment would have done better ignoring all medical advice than paying attention to any of it.

  There are many reasons for that strange deficit, but one of them is this: Victorian doctors were incapable of predicting the future in any reliable way, at least in terms of the effects of their treatments. They might have promised you that being doused in ice water or poisoned with arsenic would cure your tuberculosis. But they had no real way of knowing whether their actions had any effect on the disease. Every medical prophecy was based on anecdote, intuition, and hearsay. And that lack of foresight was only partly due to the fact that Victorian doctors didn’t have access to the medical tools that we now take for granted: X-ray machines, fMRI scanners, electron microscopes. They also lacked a conceptual tool: randomized controlled trials.

  In 1948, the British Medical Journal published a paper called “Streptomycin treatment of pulmonary tuberculosis”—an analysis of the effects of a new antibiotic in treating victims of tuberculosis, coauthored by many researchers but led by the British statistician and epidemiologist Austin Bradford Hill. Streptomycin was, in fact, a step forward in treating the disease, but what made Hill’s research so revolutionary was not the content of the study but rather its form. “Streptomycin treatment of pulmonary tuberculosis” is widely considered to be the first randomized controlled trial in the history of medical research.

  There are inventions that shape how we manipulate matter in the world. And then there are inventions that shape how we manipulate data, new methods that let us see patterns in that data that we couldn’t have seen before. The tuberculosis experiment, like all randomized controlled trials, relied on a kind of crowd wisdom. It wasn’t enough to just give the antibiotic to one or two patients and report whether they lived or died. Hill’s streptomycin study involved more than a hundred subjects, randomly divided into two groups, one given the antibiotic and one given a placebo.

  Once you put those elements together—a large enough sample size, and a randomly selected control group—something extraordinary happened: you had a tool for separating genuine medical interventions from quackery. You could make a prediction about future events—in this case, you could predict the outcome of prescribing streptomycin to a patient suffering from pulmonary tuberculosis. Your forecast wasn’t always 100 percent accurate, of course, but for the first time doctors could map out chains of cause and effect with genuine rigor, even if they didn’t understand all the underlying forces that made those causal chains a reality. If someone proposed that the water cure offered a better treatment for tuberculosis, you could test the hypothesis empirically. Almost immediately, the randomized controlled trial began changing the course of medical history. Just a few years after the tuberculosis trial, Hill went on to do a landmark RCT analyzing the health effects of cigarette smoking, arguably the first methodologically sound study to prove that tobacco smoke was harmful to our health.
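  The logic of that design can be sketched in a few lines of code. What follows is a minimal toy simulation in Python, not Hill’s data or his analysis: the sample size and recovery probabilities are invented for illustration. Subjects are randomly split into a treatment arm and a control arm, binary outcomes are simulated for each, and the difference in recovery rates is checked with a standard two-proportion z-test.

    # A toy sketch of a randomized controlled trial. The sample size and
    # recovery probabilities are invented; this is not Hill's 1948 data.
    import math
    import random

    random.seed(42)

    def run_trial(n_subjects=100, p_treated=0.55, p_control=0.30):
        """Randomly assign subjects to two arms and simulate binary outcomes."""
        subjects = list(range(n_subjects))
        random.shuffle(subjects)                  # the "randomized" part
        treated = subjects[: n_subjects // 2]
        control = subjects[n_subjects // 2 :]
        # Outcome = 1 if the (simulated) patient improves, 0 otherwise.
        treated_outcomes = [1 if random.random() < p_treated else 0 for _ in treated]
        control_outcomes = [1 if random.random() < p_control else 0 for _ in control]
        return treated_outcomes, control_outcomes

    def two_proportion_z(success_a, n_a, success_b, n_b):
        """Classic two-proportion z-statistic for comparing recovery rates."""
        p_a, p_b = success_a / n_a, success_b / n_b
        pooled = (success_a + success_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        return (p_a - p_b) / se

    treated, control = run_trial()
    z = two_proportion_z(sum(treated), len(treated), sum(control), len(control))
    print(f"treatment recovery rate: {sum(treated) / len(treated):.2f}")
    print(f"control recovery rate:   {sum(control) / len(control):.2f}")
    print(f"z-statistic: {z:.2f} (|z| > 1.96 is roughly significant at the 5% level)")

  The point is not the arithmetic but the structure: without the randomly assigned control arm, any difference in recovery rates could not be attributed to the treatment itself.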

  The interesting thing about the RCT is how late it arrived on the stage of scientific progress. The germ theory didn’t become an established idea until we had microscopes powerful enough to see bacteria and viruses. Freud famously gave up his study of the physiological workings of the brain because he didn’t have access to scanning tools like fMRI machines. But the idea of a randomized controlled experiment wasn’t impeded by some not-yet-invented tool. You could have easily conducted one in 1748. (In fact, the British ship doctor James Lind almost stumbled upon the methodology right around then while investigating the cause of scurvy, but his technique never caught on, and Lind himself seems to have not fully believed the results of his experiment.)

  You could see Darwin straining toward the structure of the RCT in his interactions with Gully and the water cure. He took to maintaining a ledger of sorts that tracked the date and time of each treatment he applied, his physical state before the intervention, and his subsequent state the night after. (One gets the sense that Darwin would have been an avid Fitbit user.) This early rendition of what we would now call the “quantified self” had an earnest scientific question at its core: Darwin was looking for patterns in the data that could help him decide whether Gully was a quack or a visionary. He was running a sequential experiment on his own body. The architecture of the experiment was lacking a few foundational elements: you can’t run an RCT on a single subject, and you need some kind of “control” to measure the effects of the intervention. However meticulous Darwin was in recording his water cure experiment, by definition he couldn’t give himself a placebo.

  In the decades that followed, a small but growing chorus of voices began to argue that a new statistical approach to evaluating the efficacy of different medical interventions might be possible, but it was not at all clear how revolutionary the technique was going to be. As late as 1923, The Lancet asked the question, “Is the application of the numerical method to the subject-matter of medicine a trivial and time-wasting ingenuity as some hold, or is it an important stage in the development of our art, as others proclaim it?” Read now, those lines seem remarkably naive. (“Will this new alphabetic writing technology really make a difference, or will it just turn out to be a fad? Experts disagree.”) But we now know, beyond a shadow of a doubt, that the randomized controlled experiment was not just “an important stage in the development of our art,” as The Lancet put it. It was, in fact, the breakthrough that turned medicine from an art into a science. For the first time, a patient confronting a bewildering choice about how to treat a disease or ailment could learn from the experiences of hundreds or thousands of other people who had faced a similar challenge. The RCT gave human beings a new superpower, not unlike the unimaginably fast calculations of the digital computer or the breathtaking propulsion of the jet engine. In this one area of complex decision-making—What treatment should I pursue to rid myself of this illness?—we can now predict the future with an acuity that would have been unimaginable just four generations ago.

  THE FIRST FORECAST

  The ironclad ocean steamer Royal Charter, its cargo holds weighed down with bounty from the Australian gold rush, had almost reached the end of its 14,000-mile journey from Melbourne to Liverpool when the winds began to rise late in the afternoon of October 25, 1859. Legend has it that the captain, Thomas Taylor, overruled a suggestion that they take harbor after the barometers began to drop precipitously. It seemed preposterous not to simply outrun the storm with Liverpool so close. Within hours, though, the storm exploded into one of the most powerful ever recorded in the Irish Sea. Quickly surrendering his Liverpool plan, the captain lowered the sails and anchored near the coast, but the winds and rough sea soon overpowered the ship. Battered by hurricane-force gales, the Royal Charter smashed against the rocks off the Welsh island of Anglesey, only seventy miles from Liverpool. The ship broke into three pieces and sank. Around 450 passengers and crew perished, many of them killed violently on the rocky shores.

  The Royal Charter storm, as it came to be called, ultimately claimed almost a thousand lives and destroyed hundreds of ships along the coasts of England, Scotland, and Wales. In the weeks that followed the storm, Robert FitzRoy—Darwin’s captain from the voyage of the Beagle—read the reports with growing outrage from his office in London. FitzRoy had traded in his career as a captain for a desk job running the Meteorological Department of the Board of Trade (now called, colloquially, the Met Office), which he had founded in 1854.

  Today, the Met Office is the government agency responsible for weather forecasting in the United Kingdom, the equivalent of the National Weather Service in the United States, but the initial purview of the office had nothing to do with predicting future weather events. Instead, FitzRoy had established the department to calculate faster shipping routes by studying wind patterns around the globe. The science of the Met Office wasn’t trying to determine what the weather was going to do tomorrow. It simply wanted to know what the weather generally did. Predicting the weather belonged entirely to the world of folk wisdom and sham almanacs. When a member of Parliament suggested in 1854 that it might be scientifically possible to predict London’s weather twenty-four hours in advance, he was greeted with howls of laughter. But FitzRoy and a few other visionaries had begun to imagine turning the charade of weather prognostication into something resembling a science. FitzRoy was assisted by three important developments, all of which had come into place in the preceding decade: a crude but functional understanding of the connection between storm winds and troughs of low pressure, increasingly accurate barometers that could measure changes in atmospheric pressure, and a network of telegraphs that could transmit those readings back to the Met Office headquarters in London.

  Galvanized by the disaster of the Royal Charter storm, FitzRoy established a network of fourteen stations in towns on the English coast, recording weather data and transmitting it to headquarters for analysis. Working with a small team in the Met Office, transcribing the data by hand, FitzRoy created the first generation of meteorological charts, offering maritime travelers the advance warning that the lost souls of the Royal Charter had lacked.

  Initially the Met Office used the data exclusively to warn ships of upcoming storms, but it quickly became apparent that they were assembling predictions that would be of interest for civilian life on land as well. FitzRoy coined a new term for these predictions to differentiate them from the quack soothsaying that had been standard up until that point. He called his weather reports “forecasts.” “Prophecies and predictions they are not,” he explained. “The term forecast is strictly applicable to such an opinion as is the result of scientific combination and calculation.” The first scientifically grounded forecast appeared in the Times (London) on August 1, 1861, predicting a temperature in London of 62°F, clear skies, and a southwesterly wind. The forecast proved to be accurate—the temperature peaked at 61°F that day—and before long, weather forecasts became a staple of most newspapers, even if they were rarely as accurate as FitzRoy’s initial prediction.

  Despite the telegraph network and the barometers—and FitzRoy’s bravado about “scientific combination and calculation”—the predictive powers of meteorology in the nineteenth century were still very limited. FitzRoy published a tome explaining his theories of weather formation in 1862, most of which have not stood the test of time. A review of the Met Office forecasting technique—allegedly conducted by the brilliant statistician Francis Galton—found that “no notes or calculations are made. The operation takes about half an hour and is conducted mentally.” (Stung by the critiques, and by his implicit role in supporting what he considered the sacrilegious theory of evolution, FitzRoy committed suicide in 1865.) Weather forecasters couldn’t build real-time models of the atmosphere, so instead they relied on a kind of historical pattern recognition. They created charts that documented data received from all the observational stations, mapping the reported temperature, pressure, humidity, wind, and precipitation. Those charts were then stored as a historical record of past configurations. When a new configuration emerged, the forecasters would consult earlier charts that resembled the current pattern and use that as a guide for predicting the next day’s weather. If there was a low pressure system and cool southern winds off the coast of Wales, with a warm high over Surrey, the forecasters would go back and find a comparable day from the past, and figure out what the weather did over the next few days in that previous instance. It was more of an educated guess than a proper forecast, and its predictive powers went steadily downhill outside the twenty-four-hour range, but it was a meaningful leap forward from the tea-leaf reading that had characterized all weather predictions before that point.
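  In modern terms, that chart-matching procedure is a nearest-neighbor lookup: compare today’s observations against an archive of past configurations, find the closest match, and assume tomorrow will behave the way the day after that analog did. The sketch below is a toy illustration of the idea in Python; the station readings, the distance measure, and the recorded outcomes are all invented for the example, not drawn from the Met Office’s charts.

    # A toy "analog" forecast: find the archived day whose readings most
    # resemble today's, and predict what followed that day. All numbers
    # and outcomes here are invented for illustration.
    import math

    # Each record: (pressure in millibars, temperature in F, wind direction
    # in degrees) paired with what the weather did the following day.
    history = [
        ((1012, 58, 220), "light rain, moderate southwesterly wind"),
        ((1025, 64, 90),  "clear skies, light easterly wind"),
        ((996, 52, 200),  "gales and heavy rain"),
        ((1018, 61, 240), "fair, occasional showers"),
    ]

    def distance(a, b):
        """Crude Euclidean distance between two sets of station readings."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def analog_forecast(today):
        """Return the outcome that followed the closest historical analog."""
        closest_chart, outcome = min(history, key=lambda record: distance(record[0], today))
        return outcome

    print(analog_forecast((1010, 57, 225)))  # -> light rain, moderate southwesterly wind

  The weakness of the method is visible even in this toy version: it can only predict weather that resembles something already in the archive, which is one reason its accuracy fell away so quickly beyond the twenty-four-hour range.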

 
