The Signal and the Noise


by Nate Silver


  Rarely should a forecaster be judged on the basis of a single prediction—but this case may warrant an exception. By the weekend before the election, perhaps the only plausible hypothesis under which McCain could still win was that massive racial animus against Obama had gone undetected in the polls.4 None of the panelists offered this hypothesis, however. Instead they seemed to be operating in an alternate universe in which the polls didn’t exist, the economy hadn’t collapsed, and President Bush was still reasonably popular rather than dragging down McCain.

  Nevertheless, I decided to check to see whether this was some sort of anomaly. Do the panelists on The McLaughlin Group—who are paid to talk about politics for a living—have any real skill at forecasting?

  I evaluated nearly 1,000 predictions that were made on the final segment of the show by McLaughlin and the rest of the panelists. About a quarter of the predictions were too vague to be analyzed or concerned events in the far future. But I scored the others on a five-point scale ranging from completely false to completely true.

  The panel may as well have been flipping coins. I determined 338 of their predictions to be either mostly or completely false. The exact same number—338—were either mostly or completely true.5

  Nor were any of the panelists—including Clift, who at least got the 2008 election right—much better than the others. For each panelist, I calculated a percentage score, essentially reflecting the number of predictions they got right. Clift and the three other most frequent panelists—Buchanan, the late Tony Blankley, and McLaughlin himself—each received almost identical scores ranging from 49 percent to 52 percent, meaning that they were about as likely to get a prediction right as wrong.7 They displayed about as much political acumen as a barbershop quartet.
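  The tallying behind these percentage scores is simple enough to sketch. The example below is a minimal illustration with made-up grades, not the actual dataset; the way the 0-to-4 scale is coded and the half-credit rule for toss-up grades are assumptions for demonstration only.

```python
# Minimal sketch of the scoring described above, using made-up records.
# Each prediction gets a grade on a five-point scale:
#   0 = completely false ... 4 = completely true (2 = a toss-up).
from collections import defaultdict

# (panelist, grade) pairs -- illustrative only, not the real dataset
graded_predictions = [
    ("McLaughlin", 0), ("Clift", 4), ("Buchanan", 3), ("Blankley", 1),
    ("Clift", 2), ("McLaughlin", 4), ("Buchanan", 0), ("Blankley", 3),
]

totals = defaultdict(lambda: [0.0, 0])  # panelist -> [points, predictions scored]
for panelist, grade in graded_predictions:
    points, scored = totals[panelist]
    # Mostly/completely true counts as a hit; a toss-up gets half credit
    # (the half-credit rule is an assumption, not from the book).
    points += 1.0 if grade >= 3 else 0.5 if grade == 2 else 0.0
    totals[panelist] = [points, scored + 1]

for panelist, (points, scored) in sorted(totals.items()):
    print(f"{panelist}: {100 * points / scored:.0f}% over {scored} predictions")
```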

  The McLaughlin Group, of course, is more or less explicitly intended as slapstick entertainment for political junkies. It is a holdover from the shouting match era of programs, such as CNN’s Crossfire, that featured liberals and conservatives endlessly bickering with one another. Our current echo chamber era isn’t much different from the shouting match era, except that the liberals and conservatives are confined to their own channels, separated in your cable lineup by a demilitarized zone demarcated by the Food Network or the Golf Channel.* This arrangement seems to produce higher ratings if not necessarily more reliable analysis.

  But what about those who are paid for the accuracy and thoroughness of their scholarship—rather than the volume of their opinions? Are political scientists, or analysts at Washington think tanks, any better at making predictions?

  Are Political Scientists Better Than Pundits?

  The disintegration of the Soviet Union and other countries of the Eastern bloc occurred at a remarkably fast pace—and all things considered, in a remarkably orderly way.*

  On June 12, 1987, Ronald Reagan stood at the Brandenburg Gate and implored Mikhail Gorbachev to tear down the Berlin Wall—an applause line that seemed as audacious as John F. Kennedy’s pledge to send a man to the moon. Reagan was prescient; less than two years later, the wall had fallen.

  On November 16, 1988, the parliament of the Republic of Estonia, a nation about the size of the state of Maine, declared its independence from the mighty USSR. Less than three years later, Gorbachev parried a coup attempt from hard-liners in Moscow and the Soviet flag was lowered for the last time before the Kremlin; Estonia and the other Soviet Republics would soon become independent nations.

  If the fall of the Soviet empire seemed predictable after the fact, however, almost no mainstream political scientist had seen it coming. The few exceptions were often the subject of ridicule.8 If political scientists couldn’t predict the downfall of the Soviet Union—perhaps the most important event in the latter half of the twentieth century—then what exactly were they good for?

  Philip Tetlock, a professor of psychology and political science, then at the University of California at Berkeley,9 was asking some of the same questions. As it happened, he had undertaken an ambitious and unprecedented experiment at the time of the USSR’s collapse. Beginning in 1987, Tetlock started collecting predictions from a broad array of experts in academia and government on a variety of topics in domestic politics, economics, and international relations.10

  Political experts had difficulty anticipating the USSR’s collapse, Tetlock found, because a prediction that not only forecast the regime’s demise but also understood the reasons for it required different strands of argument to be woven together. There was nothing inherently contradictory about these ideas, but they tended to emanate from people on different sides of the political spectrum,11 and scholars firmly entrenched in one ideological camp were unlikely to have embraced them both.

  On the one hand, Gorbachev was clearly a major part of the story—his desire for reform had been sincere. Had Gorbachev chosen to become an accountant or a poet instead of entering politics, the Soviet Union might have survived at least a few years longer. Liberals were more likely to hold this sympathetic view of Gorbachev. Conservatives were less trusting of him, and some regarded his talk of glasnost as little more than posturing.

  Conservatives, on the other hand, were more instinctually critical of communism. They were quicker to understand that the USSR’s economy was failing and that life was becoming increasingly difficult for the average citizen. As late as 1990, the CIA estimated—quite wrongly12—that the Soviet Union’s GDP was about half that of the United States13 (on a per capita basis, tantamount to where stable democracies like South Korea and Portugal are today). In fact, more recent evidence has found that the Soviet economy—weakened by its long war with Afghanistan and the central government’s inattention to a variety of social problems—was roughly $1 trillion poorer than the CIA had thought and was shrinking by as much as 5 percent annually, with inflation well into the double digits.

  Take these two factors together, and the Soviet Union’s collapse is fairly easy to envision. By opening the country’s media and its markets and giving his citizens greater democratic authority, Gorbachev had provided his people with the mechanism to catalyze a regime change. And because of the dilapidated state of the country’s economy, they were happy to take him up on his offer. The center was too weak to hold: not only were Estonians sick of Russians, but Russians were nearly as sick of Estonians, since the satellite republics contributed less to the Soviet economy than they received in subsidies from Moscow.14 Once the dominoes began falling in Eastern Europe—Czechoslovakia, Poland, Romania, Bulgaria, Hungary, and East Germany were all in the midst of revolution by the end of 1989—there was little Gorbachev or anyone else could do to prevent them from caving the country in. A lot of Soviet scholars understood parts of the problem, but few experts had put all the puzzle pieces together, and almost no one had forecast the USSR’s sudden collapse.

  Tetlock, inspired by the example of the Soviet Union, began to take surveys of expert opinion in other areas—asking the experts to make predictions about the Gulf War, the Japanese real-estate bubble, the potential secession of Quebec from Canada, and almost every other major event of the 1980s and 1990s. Was the failure to predict the collapse of the Soviet Union an anomaly, or does “expert” political analysis rarely live up to its billing? His studies, which spanned more than fifteen years, were eventually published in the 2005 book Expert Political Judgment.

  Tetlock’s conclusion was damning. The experts in his survey—regardless of their occupation, experience, or subfield—had done barely any better than random chance, and they had done worse than even rudimentary statistical methods at predicting future political events. They were grossly overconfident and terrible at calculating probabilities: about 15 percent of events that they claimed had no chance of occurring in fact happened, while about 25 percent of those that they said were absolutely sure things in fact failed to occur.15 It didn’t matter whether the experts were making predictions about economics, domestic politics, or international affairs; their judgment was equally bad across the board.
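  That kind of calibration failure can be measured whenever forecasts come with stated probabilities: group the forecasts by the probability claimed, then compare that figure with how often the events actually occurred. The sketch below uses invented forecasts purely to show the bookkeeping; it is not Tetlock’s code or data.

```python
# Generic calibration tally: compare stated probabilities with observed
# outcome frequencies. The forecast list is invented for illustration.
forecasts = [
    # (stated probability that the event occurs, did it occur?)
    (0.0, False), (0.0, True), (1.0, True), (1.0, False),
    (0.7, True), (0.7, True), (0.7, False), (0.3, False),
]

buckets = {}  # stated probability -> (events that occurred, forecasts made)
for stated, occurred in forecasts:
    hits, total = buckets.get(stated, (0, 0))
    buckets[stated] = (hits + int(occurred), total + 1)

for stated in sorted(buckets):
    hits, total = buckets[stated]
    print(f"stated {stated:.0%} -> observed {hits / total:.0%} ({hits}/{total})")

# A well-calibrated forecaster's observed frequencies track the stated ones.
# Tetlock's experts saw roughly 15% of their "no chance" events occur and
# only about 75% of their "sure things" come true.
```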
  The Right Attitude for Making Better Predictions: Be Foxy

  While the experts’ performance was poor in the aggregate, however, Tetlock found that some had done better than others. On the losing side were those experts whose predictions were cited most frequently in the media. The more interviews that an expert had done with the press, Tetlock found, the worse his predictions tended to be.

  Another subgroup of experts had done relatively well, however. Tetlock, with his training as a psychologist, had been interested in the experts’ cognitive styles—how they thought about the world. So he administered some questions lifted from personality tests to all the experts.

  On the basis of their responses to these questions, Tetlock was able to classify his experts along a spectrum between what he called hedgehogs and foxes. The reference to hedgehogs and foxes comes from the title of an Isaiah Berlin essay on the Russian novelist Leo Tolstoy—The Hedgehog and the Fox. Berlin had in turn borrowed his title from a passage attributed to the Greek poet Archilochus: “The fox knows many little things, but the hedgehog knows one big thing.”

  Unless you are a fan of Tolstoy—or of flowery prose—you’ll have no particular reason to read Berlin’s essay. But the basic idea is that writers and thinkers can be divided into two broad categories:

  Hedgehogs are type A personalities who believe in Big Ideas—in governing principles about the world that behave as though they were physical laws and undergird virtually every interaction in society. Think Karl Marx and class struggle, or Sigmund Freud and the unconscious. Or Malcolm Gladwell and the “tipping point.”

  Foxes, on the other hand, are scrappy creatures who believe in a plethora of little ideas and in taking a multitude of approaches toward a problem. They tend to be more tolerant of nuance, uncertainty, complexity, and dissenting opinion. If hedgehogs are hunters, always looking out for the big kill, then foxes are gatherers.

  Foxes, Tetlock found, are considerably better at forecasting than hedgehogs. They had come closer to the mark on the Soviet Union, for instance. Rather than seeing the USSR in highly ideological terms—as an intrinsically “evil empire,” or as a relatively successful (and perhaps even admirable) example of a Marxist economic system—they instead saw it for what it was: an increasingly dysfunctional nation that was in danger of coming apart at the seams. Whereas the hedgehogs’ forecasts were barely any better than random chance, the foxes’ demonstrated predictive skill.

  FIGURE 2-2: ATTITUDES OF FOXES AND HEDGEHOGS

  How Foxes Think

  Multidisciplinary: Incorporate ideas from different disciplines, regardless of their origin on the political spectrum.

  Adaptable: Find a new approach—or pursue multiple approaches at the same time—if they aren’t sure the original one is working.

  Self-critical: Sometimes willing (if rarely happy) to acknowledge mistakes in their predictions and accept the blame for them.

  Tolerant of complexity: See the universe as complicated, perhaps to the point of many fundamental problems being irresolvable or inherently unpredictable.

  Cautious: Express their predictions in probabilistic terms and qualify their opinions.

  Empirical: Rely more on observation than theory.

  Foxes are better forecasters.

  How Hedgehogs Think

  Specialized: Often have spent the bulk of their careers on one or two great problems. May view the opinions of “outsiders” skeptically.

  Stalwart: Stick to the same “all-in” approach—new data is used to refine the original model.

  Stubborn: Mistakes are blamed on bad luck or on idiosyncratic circumstances—a good model had a bad day.

  Order-seeking: Expect that the world will be found to abide by relatively simple governing relationships once the signal is identified through the noise.

  Confident: Rarely hedge their predictions and are reluctant to change them.

  Ideological: Expect that solutions to many day-to-day problems are manifestations of some grander theory or struggle.

  Hedgehogs are weaker forecasters.

  Why Hedgehogs Make Better Television Guests

  I met Tetlock for lunch one winter afternoon at the Hotel Durant, a stately and sunlit property just off the Berkeley campus. Naturally enough, Tetlock revealed himself to be a fox: soft-spoken and studious, with a habit of pausing for twenty or thirty seconds before answering my questions (lest he provide me with too incautiously considered a response).

  “What are the incentives for a public intellectual?” Tetlock asked me. “There are some academics who are quite content to be relatively anonymous. But there are other people who aspire to be public intellectuals, to be pretty bold and to attach nonnegligible probabilities to fairly dramatic change. That’s much more likely to bring you attention.”

  Big, bold, hedgehog-like predictions, in other words, are more likely to get you on television. Consider the case of Dick Morris, a former adviser to Bill Clinton who now serves as a commentator for Fox News. Morris is a classic hedgehog, and his strategy seems to be to make as dramatic a prediction as possible when given the chance. In 2005, Morris proclaimed that George W. Bush’s handling of Hurricane Katrina would help Bush to regain his standing with the public.16 On the eve of the 2008 elections, he predicted that Barack Obama would win Tennessee and Arkansas.17 In 2010, Morris predicted that the Republicans could easily win one hundred seats in the U.S. House of Representatives.18 In 2011, he said that Donald Trump would run for the Republican nomination—and had a “damn good” chance of winning it.19

  All those predictions turned out to be horribly wrong. Katrina was the beginning of the end for Bush—not the start of a rebound. Obama lost Tennessee and Arkansas badly—in fact, they were among the only states in which he performed worse than John Kerry had four years earlier. Republicans had a good night in November 2010, but they gained sixty-three seats, not one hundred. Trump officially declined to run for president just two weeks after Morris insisted he would do so.

  But Morris is quick on his feet, entertaining, and successful at marketing himself—he remains in the regular rotation at Fox News and has sold his books to hundreds of thousands of people.

  Foxes sometimes have more trouble fitting into type A cultures like television, business, and politics. Their belief that many problems are hard to forecast—and that we should be explicit about accounting for these uncertainties—may be mistaken for a lack of self-confidence. Their pluralistic approach may be mistaken for a lack of conviction; Harry Truman famously demanded a “one-handed economist,” frustrated that the foxes in his administration couldn’t give him an unqualified answer.

  But foxes happen to make much better predictions. They are quicker to recognize how noisy the data can be, and they are less inclined to chase false signals. They know more about what they don’t know.

  If you’re looking for a doctor to predict the course of a medical condition or an investment adviser to maximize the return on your retirement savings, you may want to entrust a fox. She might make more modest claims about what she is able to achieve—but she is much more likely to actually realize them.

  Why Political Predictions Tend to Fail

  Fox-like attitudes may be especially important when it comes to making predictions about politics. There are some particular traps that can make suckers of hedgehogs in the arena of political prediction and which foxes are more careful to avoid.

  One of these is simply partisan ideology. Morris, despite having advised Bill Clinton, identifies as a Republican and raises funds for Republican candidates—and his conservative views fit in with those of his network, Fox News. But liberals are not immune from the propensity to be hedgehogs. In my study of the accuracy of predictions made by McLaughlin Group members, Eleanor Clift—who is usually the most liberal member of the panel—almost never issued a prediction that would imply a more favorable outcome for Republicans than the consensus of the group. That may have served her well in predicting the outcome of the 2008 election, but she was no more accurate than her conservative counterparts over the long run.

  Academic experts like the ones that Tetlock studied can suffer from the same problem. In fact, a little knowledge may be a dangerous thing in the hands of a hedgehog with a Ph.D. One of Tetlock’s more remarkable findings is that, while foxes tend to get better at forecasting with experience, the opposite is true of hedgehogs: their performance tends to worsen as they pick up additional credentials. Tetlock believes the more facts hedgehogs have at their command, the more opportunities they have to permute and manipulate them in ways that confirm their biases. The situation is analogous to what might happen if you put a hypochondriac in a dark room with an Internet connection. The more time that you give him, the more information he has at his disposal, the more ridiculous the self-diagnosis he’ll come up with; before long he’ll be mistaking a common cold for the bubonic plague.

  But while Tetlock found that left-wing and right-wing hedgehogs made especially poor predictions, he also found that foxes of all political persuasions were more immune from these effects.20 Foxes may have emphatic convictions about the way the world ought to be. But they can usually separate that from their analysis of the way that the world actually is and how it is likely to be in the near future.

  Hedgehogs, by contrast, have more trouble distinguishing their rooting interest from their analysis. Instead, in Tetlock’s words, they create “a blurry fusion between facts and values all lumped together.” They take a prejudicial view toward the evidence, seeing what they want to see and not what is really there.

  You can apply Tetlock’s test to diagnose whether you are a hedgehog: Do your predictions improve when you have access to more information? In theory, more information should give your predictions a wind at their back—you can always ignore the information if it doesn’t seem to be helpful. But hedgehogs often trap themselves in the briar patch.

 
