Farsighted


by Steven Johnson


  Of course, with military exercises, red teams can involve more active simulations than simply sitting around in a conference room dreaming up stories. McRaven developed elaborate red teams to simulate possible responses both from the residents of the compound and from the Pakistani military in the event that they detected the helicopters during their incursion into Pakistani airspace. According to Peter Bergen, the raid was “constantly ‘red-teamed’” to simulate defenses the SEALs had encountered in other, similar situations: “armed women, people with suicide jackets hidden under their pajamas, insurgents hiding in ‘spider holes,’ and even buildings entirely rigged with explosives.” By the end of the exercise, one colleague observed, “McRaven had a backup for every possible failure, and a backup to the failure of the backup, and a backup to the failure of the backup of the backup.”

  It’s true that people naturally attempt to anticipate objections or possible failure points when they contemplate challenging decisions. The expression “let’s play devil’s advocate for a second” is often heard in conference rooms and casual conversations. The difference with strategies like premortems and red teams lies in the formal nature of the process: giving people a specific task and identity to role-play. It’s not enough to ask someone, “Can you think of any way this plan might fail?” Premortems and red teams force you to take on a new perspective, or consider an alternate narrative, one that might not easily come to mind in a few minutes of casual devil’s advocacy. In a way, the process is similar to the strategy of assigning expert roles that we explored in the mapping stage of the decision. When you take on a new identity and see the world through a simulated worldview, new outcomes become visible.

  Experimenting with different identities is more than just a way of uncovering new opportunities or pitfalls. Hard choices are often hard because they impact other people’s lives in meaningful ways, and so our ability to imagine that impact—to think through the emotional and material consequences from someone else’s perspective—turns out to be an essential talent. New research suggests that this kind of psychological projection is part of what the brain’s default network does in daydreaming. When we simulate potential futures in our wandering minds, we often shift the mental camera from one consciousness to another without even realizing it, testing different scenarios and the emotional reactions they might provoke. You’re driving to work and thinking about a new job opportunity, and your mind flashes onto the image of your boss responding to the news. It’s a fantasy, a simulation, because it’s an event that hasn’t happened yet. But the work that goes into that fantasy is truly sublime. You’re mapping out all the reasons why you might reasonably be considering leaving your current job and you’re mapping out all the reasons why your boss might be alarmed or hurt (or both) at the news, and you’re building a mental forecast of what kind of response the collision of those two maps might trigger in him. That is a very rich and complicated form of mental simulation, but we run those calculations so fast we don’t appreciate them.

  Still, some of us do it better than others. And that ability to shift our imagination between different perspectives may be one of the core attributes of a farsighted mind. Part of being a smart decision-maker is being open-minded enough to realize that other people might have a different way of thinking about the decision. Recall Lydgate contemplating the way small-minded Middlemarch gossip would respond to his choice of which vicar to support. Lydgate himself is above the gossip, but he is farsighted enough to realize that the disapprobation of the town will make a meaningful difference if he makes the wrong choice, given that his practice as a local physician depends on being well-regarded by the community. Lydgate’s mind shifts effortlessly from the self-centric question “Which candidate do I like the most?” to an external frame of reference: “What will the town gossips think of me if I choose my patron’s candidate to be the vicar?” In that moment, he is running a rough simulation not just of the consequences of his choice, but of something more remarkable: a simulation of other minds, with their own quirks and obsessions and values.

  This shifting of perspective played a key role in what was arguably the most impressive bit of long-term scenario planning in the hunt for bin Laden. An attack on a private compound raised a huge number of logistical questions: How do we determine who is inside? Should we capture or kill bin Laden? But it also raised a question that required the team to venture outside their default American perspective: What will the Pakistanis think if we launch an attack inside their borders without alerting them first? While a coordinated attack with Pakistani forces was still being considered, it was generally thought to be the least appealing option, given the risk of the plan leaking out in some fashion and alerting bin Laden that his hideout had been compromised. A stealth attack by Black Hawks through Pakistani airspace posed a different kind of risk. First, the helicopters might be detected—and potentially shot down—by Pakistani forces, though McRaven and his team believed they could get in and out without Pakistani patrols noticing them. The real risk was the downstream one. Pakistan, after all, was at least nominally an ally of the United States in the war on terror. The United States relied heavily on the good graces of the Pakistani government to get supplies into landlocked Afghanistan. More than three hundred daily flights by US planes, delivering supplies and personnel to American and NATO troops in Afghanistan, were permitted over Pakistani territory. Once the Pakistanis discovered the United States had invaded their airspace to attack a suburban residence without their permission—particularly if the residence turned out not to be the home of a certain terrorist ringleader—it was an open question whether they would continue to grant America and its allies the same access.

  On March 21, 2011, weeks before McRaven began his simulated attacks at Fort Bragg and months before Obama made the final decision to send in SEAL Team 6, Defense Secretary Robert Gates announced a new partnership to strengthen the so-called Northern Distribution Network, a route into Afghanistan running from ports on the Baltic Sea through Russia and other countries—a route that, crucially, bypassed Pakistan altogether. No one realized it at the time, but that expanded distribution network was a direct result of the perspective-shifting scenario planning behind the bin Laden raid. The administration realized that even if they got their man, the downstream effects on Pakistan-US relations might be catastrophic, which would threaten a vital route relied on by the United States and allied troops immersed in active combat. And so they took the time to ensure that another route would be available if that scenario did, in fact, come to pass.

  * * *


  In the end, the predictive exercises that shaped the bin Laden mission turned out to be as full spectrum as the mapping exercises. To build a coherent set of scenarios, it was necessary to think like a meteorologist, assessing the impact of the desert heat and altitude on the helicopters. They had to study the smallest architectural details of the compound to determine how the SEALs could successfully get inside. They had to wrestle with the juridical question of whether and where to hold a trial for bin Laden if he were captured alive. They had to imagine the conspiracy theories and folklore that might erupt if al-Qaeda’s leader were immolated in a B-2 bombing run, leaving no proof of his demise. They had to put themselves in the shoes of the Pakistani government and imagine what kind of response a violation of their airspace might engender. They collected DNA from bin Laden’s relatives so they would have genetic evidence to identify his remains. They even had to study Islamic burial rituals so that they could dispose of bin Laden’s body in a way that would not outrage moderate Muslims. Air pressure, international law, religious customs, the slant of a roof, genetic fingerprints, geopolitical backlash—all these variables and more found their way into the scenarios plotted in the late spring of 2011. They’d told stories that imagined different outcomes; they’d assembled red teams to challenge their assumptions. By early May, the divergence of all these different perspectives and possibilities had reached its logical limit. The decision had been mapped, the options identified, the scenarios planned. It was time to decide.

  3

  DECIDING

  Mapping, predicting, simulating: they don’t quite add up to deciding. Once you’ve mapped the landscape, determined a full range of potential options, and simulated the outcomes for those options with as much certainty as you can—how, then, do you choose?

  Ever since Ben Franklin outlined his “moral algebra” to Joseph Priestley, people have concocted increasingly elaborate systems for adjudicating decisions based on some kind of calculation. Priestley himself played a defining role in one of the most influential of these strategies. A few years before he wrote his letter to Franklin, Priestley published a political treatise that suggested a different approach for making the final call on group decisions, like the creation of laws and regulations: “It must necessarily be understood,” Priestley wrote, “that all people live in society for their mutual advantage; so that the good and happiness of the members, that is the majority of the members of any state, is the great standard by which every thing relating to that state must finally be determined.” A few decades later, the line would plant the seed of an idea in the mind of the political philosopher Jeremy Bentham, who used it as the cornerstone of the utilitarian ideology that would become one of the most influential political ideas of the nineteenth century. Moral decisions—both public and private—should be based on actions that produced the “greatest happiness for the greatest number,” in Bentham’s famous phrase. The problem of doing good in the world was a problem that could, in theory at least, be solved by doing a kind of emotional census of all those connected to a given choice.

  The “greatest happiness for the greatest number” sounds like a vague platitude, but Bentham’s aim was to try to calculate those values with as much precision as possible. At first, he divided our experience of the world into two broad categories:

  Nature has placed mankind under the governance of two sovereign masters, pain and pleasure. It is for them alone to point out what we ought to do, as well as to determine what we shall do. On the one hand the standard of right and wrong, on the other the chain of causes and effects, are fastened to their throne. They govern us in all we do, in all we say, in all we think: every effort we can make to throw off our subjection, will serve but to demonstrate and confirm it.

  Bentham ultimately recognized that there were subcategories of pain and pleasure that would have to be brought into the equation: the intensity of pain or pleasure, the duration of the experience, how certain the outcome was, the proximity of the pain or pleasure to the action that triggered it, the “fecundity” of the experience—in other words, the likelihood that it would trigger more experiences of pain or pleasure—the purity of the experience, and the sheer number of people affected by the decision. A utilitarian confronted a decision by building a kind of mental map of all the waves of pain and pleasure that would ripple out from the various options under discussion. The moral choice would be the one that led to the greatest increase in the sum total of human happiness.
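  Bentham never reduced his calculus to something a reader could actually run, but the structure he describes lends itself to a rough sketch. The fragment below, written in Python purely for illustration, borrows his dimension names; the scoring rule and every number in it are invented assumptions, not anything Bentham specified.

```python
# A minimal sketch of Bentham's "felicific calculus," with made-up weights and
# scores purely for illustration. Each affected person rates an option's expected
# pleasure (positive) or pain (negative) along Bentham's dimensions; the option
# with the largest total is, on this account, the moral choice.

from dataclasses import dataclass


@dataclass
class Experience:
    intensity: float    # how strong the pleasure (+) or pain (-) is
    duration: float     # how long it lasts (say, in years)
    certainty: float    # probability the experience actually occurs (0-1)
    propinquity: float  # discount for how far in the future it lies (0-1)
    fecundity: float    # expected follow-on pleasures or pains, same sign convention
    purity: float       # fraction not offset by sensations of the opposite kind (0-1)

    def value(self) -> float:
        # One of many possible ways to combine the dimensions into a single score.
        core = self.intensity * self.duration * self.certainty * self.propinquity
        return core * self.purity + self.fecundity


def greatest_happiness(options: dict) -> str:
    """Pick the option whose summed value across all affected people is largest."""
    totals = {name: sum(e.value() for e in experiences)
              for name, experiences in options.items()}
    return max(totals, key=totals.get)


# Hypothetical example: two options, each affecting two people.
options = {
    "option_a": [Experience(3, 2, 0.9, 1.0, 0.5, 0.8),
                 Experience(-1, 1, 0.5, 0.9, 0.0, 1.0)],
    "option_b": [Experience(2, 5, 0.6, 0.7, 1.0, 0.9),
                 Experience(1, 1, 0.9, 1.0, 0.2, 1.0)],
}
print(greatest_happiness(options))
```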

  The clarity of that formula—like the rational choice of classical economics—necessarily grows cloudy when confronted with actual decisions in the world, for all the reasons we have explored. It is easy to imagine why Bentham (and John Stuart Mill, his fellow utilitarian) might have imagined this kind of emotional census would be possible. The first century or two of the Enlightenment had demonstrated how powerful and illuminating new ways of measuring the world could be. Why couldn’t the same rational approach be applied to the choices that individual human beings and societies confront? The problem, of course, is the problem of bounded rationality that Herbert Simon observed more than a century later: hard choices send waves out into the world that are difficult to map and predict in advance, particularly when the calculation involves the future happiness of thousands or millions of people.

  But while the utilitarians might have been overly optimistic in thinking that those outcomes could be clearly measured, the truth is, we rely on the descendants of that moral calculation in many facets of modern life. In the United States, one of the most influential of those descendants was put into place on February 17, 1981, when Ronald Reagan signed Executive Order 12291 as one of the first actions of his administration. EO 12291 mandated that every new rule or regulation proposed by any agency of the government undergo what was called a “regulatory impact analysis.” By law, the analysis had to include:

  A description of the potential benefits of the rule, including any beneficial effects that cannot be quantified in monetary terms, and the identification of those likely to receive the benefits;

  A description of the potential costs of the rule, including any adverse effects that cannot be quantified in monetary terms, and the identification of those likely to bear the costs;

  A determination of the potential net benefits of the rule, including an evaluation of effects that cannot be quantified in monetary terms;

  A description of alternative approaches that could substantially achieve the same regulatory goal at lower cost, together with an analysis of this potential benefit and costs and a brief explanation of the legal reasons why such alternatives, if proposed, could not be adopted.

  Regulatory impact analysis was, in practice, what we commonly call cost-benefit analysis. In deciding whether to implement a new regulation, agencies would have to calculate the potential costs and benefits of the regulation, in part by predicting the downstream consequences of implementing it. The executive order effectively compelled government agencies, eventually overseen by the Office of Information and Regulatory Affairs (OIRA), to walk through the key steps of decision-making that we have explored—mapping all the potential variables and predicting the long-term effects—and it even pushed them to explore other decision paths that might not have been initially visible when the proposed regulation was originally being drafted. If, at the end of the analysis, the regulation could be shown to “maximize net benefits”—in other words, not just do more good than harm, but do more good than any other comparable option on the table—the agency would be free to implement it. “Reagan’s ideas applied across a spectacularly wide range, covering regulations meant to protect the environment, increase food safety, reduce risks on the highways and in the air, promote health care, improve immigration, affect the energy supply, or increase homeland security,” writes Cass Sunstein, who ran OIRA for several years during the Obama administration.
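  To see what “maximize net benefits” means in practice, consider the minimal sketch below. The option names and dollar figures are invented; a real regulatory impact analysis runs to hundreds of pages, and the unquantified effects the executive order insists on describing are only flagged here, not priced.

```python
# A toy sketch of the "maximize net benefits" test: the proposed rule clears the
# bar only if it does more net good than every alternative on the table. All
# figures below are hypothetical placeholders.

def net_benefit(option: dict) -> float:
    return sum(option["benefits"].values()) - sum(option["costs"].values())

options = {
    "proposed_rule": {
        "benefits": {"health_improvements": 120e6, "avoided_damages": 40e6},
        "costs": {"compliance": 90e6, "enforcement": 10e6},
        "unquantified": ["improved quality of life near facilities"],
    },
    "lighter_touch_alternative": {
        "benefits": {"health_improvements": 70e6},
        "costs": {"compliance": 30e6, "enforcement": 5e6},
        "unquantified": [],
    },
    "no_action": {"benefits": {}, "costs": {}, "unquantified": []},
}

best = max(options, key=lambda name: net_benefit(options[name]))
for name, opt in options.items():
    print(f"{name}: net benefit ${net_benefit(opt) / 1e6:.0f}M, "
          f"unquantified effects: {opt['unquantified'] or 'none'}")
print("maximizes net benefits:", best)
```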

  When it was first proposed, regulatory impact analysis was seen as a conservative intervention, an attempt to rein in runaway government spending. But the basic framework has persisted, largely unmodified, through six administrations. It is one of the rarest of creatures in the Washington ecosystem: an institutional practice with bipartisan support that leads to better government. Cost-benefit analysis turned out to have genuine potential as a tool for progressive values, and not just anti–Big Government cutbacks. Under the Obama administration, an interagency group formulated a monetary figure measuring “the social cost of carbon”—a cost that many environmentalists felt had been long overlooked in our decisions about energy policy. Experts were drawn from the Council on Environmental Quality, the National Economic Council, the Office of Energy and Climate Change, the Office of Science and Technology Policy, the EPA, and the Departments of Agriculture, Commerce, Energy, Transportation, and Treasury. Collectively, they mapped all the downstream effects of releasing carbon into the atmosphere, from agricultural disruptions triggered by climate change to the economic cost of increasingly severe weather events to the geographic dislocation triggered by rising sea levels. In the end, they calculated the social cost of carbon to be $36 per ton released into the atmosphere. The figure itself was only an estimate—a more recent Stanford study suggests it may be several times higher—but it provided a baseline cost for any government regulation that involved carbon-generating technology. The calculation, for instance, was an essential justification for the aggressive targets for fuel economy standards that the EPA mandated for automobiles and trucks during the Obama administration. In a sense, by assigning a dollar value to the cost of carbon, regulators were adding a predictive stage to decisions that involved fossil fuels, one that offered a long-term view. Their decision was no longer limited to the present-tense benefit of using those fuels as a source of energy. That $36/ton cost gave them a way of measuring the future impact of the decision as well. It was, at its core, a calculation: If we choose this option, how much carbon will that release into the atmosphere, and how much will it cost for us to deal with the consequences of those emissions in the years to come? But that calculation made the choice far more farsighted than it would have been without it.
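  Stripped to its arithmetic, that calculation looks something like the sketch below. Only the $36-per-ton figure comes from the interagency estimate cited above; the project’s benefits, costs, and emissions are hypothetical placeholders.

```python
# The carbon calculation reduced to its arithmetic: price today's emissions at
# the social cost of carbon and fold that future damage into the present decision.

SOCIAL_COST_PER_TON = 36.0  # dollars per ton of CO2, the Obama-era estimate cited above

def long_run_net_benefit(present_benefit: float, other_costs: float,
                         tons_co2_emitted: float) -> float:
    """Net benefit once the future cost of today's emissions is priced in."""
    carbon_cost = tons_co2_emitted * SOCIAL_COST_PER_TON
    return present_benefit - other_costs - carbon_cost

# Hypothetical energy project: $50M in present-day benefits, $30M in ordinary
# costs, and 500,000 tons of CO2 released over its lifetime.
print(long_run_net_benefit(50e6, 30e6, 500_000))  # 50M - 30M - 18M = 2M
```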

  THE VALUE MODEL

  The ultimate output of a regulatory impact analysis is a financial statement—the net costs and benefits reported in dollars—but the original executive order did recognize that not all effects could be quantified in purely monetary terms, and subsequent changes have made the formal analysis more sensitive to noneconomic outcomes. Within the government, this has led to some thorny economic translations, most famously the question of how agencies should appropriately measure the cost of a human life. (As it happens, OIRA values a single human life at approximately $9 million in its regulatory analyses.) If this seems inhumane, keep in mind that the government is forced to make tradeoffs every day that clearly result in the deaths of human beings. We would certainly save thousands of lives every year if we universally set the speed limit at 25 mph, but we have decided, as a society, that the transportation and commercial benefits that come from higher speed limits are “worth” the cost in traffic fatalities.
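  The speed-limit tradeoff can be run as the same kind of back-of-the-envelope calculation. In the sketch below, only the $9 million figure is the OIRA value cited above; the lives-saved and economic-cost numbers are hypothetical, chosen simply to show how the comparison works.

```python
# A crude version of the speed-limit tradeoff: do the monetized lives saved by a
# lower limit exceed the economic cost of slower travel? All inputs except the
# value of a statistical life are invented.

VALUE_OF_STATISTICAL_LIFE = 9e6  # dollars, per the OIRA figure cited above

def lowered_limit_worth_it(lives_saved_per_year: float,
                           annual_economic_cost: float) -> bool:
    return lives_saved_per_year * VALUE_OF_STATISTICAL_LIFE > annual_economic_cost

# Hypothetical: 10,000 lives saved vs. $200 billion a year in lost time and commerce.
print(lowered_limit_worth_it(10_000, 200e9))  # $90B < $200B -> False
```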

 
