You can enhance the diversity of an existing group—without bringing in outsiders—simply by assigning “expert roles” to each of the participants based on the knowledge they happen to bring to the discussion. In the 1990s, a team of psychologists at Miami University conducted a murder-mystery experiment of their own, recruiting college students to participate in a series of three-person mock investigations of a crime. Control groups in the experiment were given all the relevant clues required to correctly identify the perpetrator. In these group decisions, there was no unshared information. Each individual in the group had access to all the information required to crack the case. Not surprisingly, those teams made for successful detectives, identifying the correct suspect 70 percent of the time. In the other groups, hidden profiles were introduced: each team member had unshared information about one of the potential suspects, information that was not possessed by other members of the team. When those groups deliberated without any guidance as to their roles in the investigation, their detective skills deteriorated dramatically: they identified the correct suspect only a third of the time. But when each team member was specifically informed that they possessed knowledge about one of the suspects—that they were, in effect, an expert on Miss Scarlett or Professor Plum—their sleuthing improved to the point where they were almost indistinguishable from the control group that possessed all the information from the outset. By defining expertise, the scientists had subtly altered the group dynamics of the decision: instead of looking for the common ground of shared knowledge, the participants were empowered to share their unique perspectives on the choice.
Introducing expert roles turns out to be a particularly effective technique in addressing the challenges of full-spectrum thinking, because in many cases the different bands or layers of the full-spectrum perspective correspond to different fields of expertise. In a formal hearing like a design charrette or the Vancouver water review, those expert roles may be relatively intuitive ones: the economist is there to talk about the economic impact of developing a reservoir in one community; the environmental scientist is there to talk about the environmental impact. But in less formal group deliberation, the different kinds of expertise in the room can easily remain unacknowledged, making it more likely that hidden profiles will remain hidden.
THE CONE OF UNCERTAINTY
In 2008, management professor Katherine Phillips led a decision-making study that replaced the mock trial format with a framework that was closer to CSI than 12 Angry Men. Participants were asked to evaluate a collection of interviews from a detective’s investigation of a homicide and decide, based on that assessment, which of several suspects actually committed the crime. Predictably enough, the introduction of outsiders made the teams better detectives, more attentive to the clues and more willing to share their own hidden profiles. But Phillips and her team discovered an additional, seemingly counterintuitive finding, one that has since become a key assumption in the science of decision-making (and, as we will see, of prediction). While the diverse groups were better detectives—they identified the correct suspect more frequently than their homogeneous equivalents—they were also far less confident in the decisions they made. They were both more likely to be right and, at the same time, more open to the idea that they might be wrong. That might seem to be a paradox, but there turns out to be a strong correlation between astute decision-making and a willingness to recognize—and even embrace—uncertainty. Phillips’s findings echo the famous Dunning-Kruger effect from cognitive psychology, in which low-ability individuals have a tendency to overestimate their skills. Sometimes the easiest way to be wrong is to be certain you are right.
If you have read a reasonable amount of the recent popular literature on decision-making or intuition, you are already well acquainted with the Tale of the Fire Commander and the Basement Fire. The story appeared originally in the 1998 book Sources of Power, written by the research psychologist Gary Klein, but it entered into the popular consciousness a few years later when Malcolm Gladwell featured it in his mega-bestseller Blink. Klein had spent many years exploring what he called “naturalistic decision-making”—breaking from the long-standing tradition of studying people’s mental routines through clever lab experiments, and instead observing people making decisions, particularly decisions that involved intense time pressure, out in the real world. He spent a long stretch of time traveling with firefighters in Dayton, Ohio, watching them respond to emergencies, and also interviewing them about past decisions. One commander told Klein a story about what had initially appeared to be a relatively straightforward blaze in a single-story suburban home. Flames had been reported in the kitchen, near the back of the house, and so the commander brought his men to the kitchen, where they tried to subdue the fire. But the situation quickly began to confound the commander’s expectations: the flames proved harder to extinguish than they should have been, and the house seemed both hotter and quieter than he would normally have expected for a fire of that scale. In a flash, he ordered his men to leave the building. Seconds later, the floor collapsed. A much larger fire had been burning in the basement all along. In his original account, Klein described the commander’s thinking as follows:
The whole pattern did not fit right. His expectations were violated, and he realized he did not quite know what was going on. That was why he ordered his men out of the building. With hindsight, the reasons for the mismatch were clear. Because the fire was under him and not in the kitchen, it was not affected by his crew’s attack, the rising heat was much greater than he had expected, and the floor acted like a baffle to muffle the noise, resulting in a hot but quiet environment.
For Klein, the mysterious basement fire is a parable of sorts, illustrating the power of what he came to call “recognition-primed decision-making.” Over years on the job, the Dayton commander had accumulated enough wisdom about how fires behaved that he was able to make a snap assessment of a novel situation, without being fully conscious of why he was making that assessment. It was a gut decision, but one that was primed by countless hours fighting fires in the past. But compare Klein’s original account to the retelling that appears in Malcolm Gladwell’s book. In Gladwell’s hands, the story becomes not just an argument for the surprising power of “blink” judgments but also a cautionary tale about the cost of overthinking:
The fireman’s internal computer effortlessly and instantly found a pattern in the chaos. But surely the most striking fact about that day is how close it all came to disaster. Had the lieutenant stopped and discussed the situation with his men, had he said to them, let’s talk this over and try to figure out what’s going on, had he done, in other words, what we often think leaders are supposed to do to solve difficult problems, he might have destroyed his ability to jump to the insight that saved their lives.
Gladwell is absolutely correct that holding a planning charrette in the middle of the inferno would have been a disastrous strategy for fighting the fire. In situations that involve intense time pressure, gut instincts—shaped by experience—are inevitably going to play an important role. Our concern, of course, is with decisions that by definition do not involve such intense time restrictions, decisions where we have the luxury of not being slaves to our intuitive assessments because our deliberation time is weeks or months, not seconds. But there is still an important lesson for us in Klein’s parable of the basement fire. Note the two different ways Klein and Gladwell describe the fateful decision point in the kitchen. In Klein’s account, the signal moment comes when the fire chief “realized he did not quite know what was going on.” But in Gladwell’s retelling of the story, the moment takes on a different aspect: “The fireman . . . instantly found a pattern in the chaos.” In Klein’s original account, the fireman doesn’t correctly diagnose the situation, and he doesn’t hit upon a brilliant strategy for fighting the fire. Instead, he literally runs away from the problem. (As he should have, given the situation.) In Gladwell’s hands, the commander has an “insight that saved . . . lives.”
There is no contesting the premise that the commander saved lives with his actions. The question is whether he had an “insight.” To me, the parable of the basement fire teaches us how important it is to be aware of our blind spots, to recognize the elements of a situation that we don’t understand. The commander’s many years of experience fighting fires didn’t prime him to perceive the hidden truth of the basement fire; it simply allowed him to recognize that he was missing something. And that recognition was enough to compel him to retreat from the building until he had a better understanding of what was going on.
Years ago, former secretary of defense Donald Rumsfeld was widely mocked for talking about the “known unknowns” of the Iraq War during a press conference, but the concept he was alluding to is actually a critical one in complex decision-making. There is wisdom in building an accurate mental map of the system you are trying to navigate, but there is also a crucial kind of wisdom in identifying the blank spots on the map, the places where you don’t have clarity, either because you don’t have the right set of stakeholders advising you (as Washington experienced with the loss of Nathanael Greene) or because some element of the situation is fundamentally unknowable.
Complex situations can present very different kinds of uncertainty. Several years ago, the scholars Helen Regan, Mark Colyvan, and Mark Burgman published an essay that attempted to classify all the variations of uncertainty that might confront an environmental planning project, like the Vancouver Water Authority review or the decision to fill Collect Pond. They came up with thirteen distinct species: measurement error, systematic error, natural variation, inherent randomness, model uncertainty, subjective judgment, linguistic uncertainty, numerical vagueness, nonnumerical vagueness, context dependence, ambiguity, indeterminacy in theoretical terms, and underspecificity. For nonspecialists, however, there are three primary forms that uncertainty can take, each with different challenges—and opportunities. Borrowing from Donald Rumsfeld, you can think of them as knowable unknowns, inaccessible unknowns, and unknowable unknowns. There are uncertainties that arise from some failure in our attempt to map the situation, failures that can be remedied by building better maps. Washington’s incomplete understanding of the geography of Long Island falls into that category; had he been able to consult with Nathanael Greene during the crucial days leading up to the British assault, he would almost certainly have had a clearer map of the possible routes that Howe might take. There are uncertainties that involve information that exists but, for whatever reason, is inaccessible to us. It was entirely apparent to Washington and his subordinates that General Howe was planning some kind of attack on New York, but the specific plans he was considering were inaccessible to the Americans, assuming they had no spies within the British forces. And finally, there are uncertainties that result from the inherent unpredictability of the system being analyzed. Even if Washington had assembled the most advanced team of advisors on the planet, he would not have been able to predict, more than twenty-four hours in advance, the unusual fog that formed on the morning he evacuated Brooklyn, given the crude state of the art in weather forecasting in 1776.
Recognizing and separating these different forms of uncertainty is an essential step in building an accurate map of a hard choice. We all suffer from a tendency to overvalue the importance of the variables of a given system that we do understand, and to undervalue the elements that are opaque to us, for whatever reason. It’s the old joke of the drunk looking for his keys under a streetlamp, far from where he actually dropped them, because “the light is better over here.” For the knowable unknowns, the best strategy is to widen and diversify the team of advisors or stakeholders, to track down your General Greene and get a more accurate map of the terrain, or build a scale model of the compound based on satellite imaging. But it’s also crucial to keep track of the stubborn blind spots—the places where uncertainty can’t be reduced with better maps or scouts on the ground. Weather forecasters talk about the “cone of uncertainty” in tracking hurricanes. They map out the most likely path the storm is going to take, but they also chart a much wider range of potential paths, all of which remain within the realm of possibility. That wider range is the cone of uncertainty, and weather organizations go to great lengths to remind everyone living inside that cone to take precautions, even if they are outside the most likely path. Mapping decisions requires a similar vigilance. You can’t simply focus on the variables that you are confident about; you also need to acknowledge the blank spots, the known unknowns.
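The cone-of-uncertainty idea can be made concrete with a toy calculation. The sketch below is not drawn from any forecasting agency’s actual method; it simply assumes that track error grows with lead time (the growth rate, lead times, and the 90 percent band are made-up parameters) and shows how the band of plausible storm positions widens the further out you look.

# Toy sketch of a "cone of uncertainty": a central forecast track plus a
# widening band of plausible positions. All numbers here are illustrative
# assumptions, not real hurricane-forecast statistics.
import random

random.seed(1)

LEAD_TIMES_H = [12, 24, 48, 72, 96, 120]  # forecast lead times, in hours
ERROR_GROWTH_KM_PER_H = 1.5               # assumed growth rate of track error
N_SIMULATIONS = 10_000

for lead in LEAD_TIMES_H:
    spread_km = ERROR_GROWTH_KM_PER_H * lead  # std. dev. of cross-track error
    # Simulate many plausible cross-track offsets around the central forecast.
    offsets = sorted(random.gauss(0.0, spread_km) for _ in range(N_SIMULATIONS))
    # The "cone" at this lead time: the band holding roughly 90% of the tracks.
    low = offsets[int(0.05 * N_SIMULATIONS)]
    high = offsets[int(0.95 * N_SIMULATIONS)]
    print(f"{lead:>3} h out: plausible tracks span a band ~{high - low:.0f} km wide")

The output is the point of the exercise: the band is narrow a day out and hundreds of kilometers wide at five days, which is why forecasters warn everyone inside the cone, not just the people sitting on the center line.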
In a way, this embrace of uncertainty echoes the fundamental techniques of the scientific method, as Richard Feynman describes it in a famous passage from his book The Meaning of It All:
When the scientist tells you he does not know the answer, he is an ignorant man. When he tells you he has a hunch about how it is going to work, he is uncertain about it. When he is pretty sure of how it is going to work, and he tells you, “This is the way it’s going to work, I’ll bet,” he still is in some doubt. And it is of paramount importance, in order to make progress, that we recognize this ignorance and this doubt. Because we have the doubt, we then propose looking in new directions for new ideas. The rate of the development of science is not the rate at which you make observations alone but, much more important, the rate at which you create new things to test. If we were not able or did not desire to look in any new direction, if we did not have a doubt or recognize ignorance, we would not get any new ideas.
One of the defining properties of the decision process that led to the capture of bin Laden was its relentless focus on uncertainty levels. In many respects, this focus on uncertainty was a direct response to the WMD fiasco of the previous administration, where circumstantial evidence had led the intelligence community to what proved to be an irrationally high confidence that Saddam Hussein was actively working on nuclear and chemical weapons. With the bin Laden decision, at almost every stage of the process—from the first surveillance of the compound to the final planning of the raid itself—analysts were specifically asked to rate their level of confidence in the assessment they were presenting. In November 2010, a consensus had developed among the analysts that bin Laden was, in fact, likely to be residing in the compound, but when Leon Panetta polled the analysts and other CIA officials, certainty levels varied from 60 to 90 percent. Not surprisingly, the analysts claiming less certainty were career officers who had participated in the Iraq WMD investigation and knew firsthand how unknown variables can transform what seems like a slam-dunk case into a much murkier affair.
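As a purely illustrative aside, the practice of polling a team for confidence levels is easy to operationalize; the sketch below uses hypothetical names and figures (chosen only to sit inside the 60-to-90-percent range reported above) and reports the spread of estimates alongside the average, so that the disagreement itself stays visible in the briefing.

# Minimal sketch of polling a decision team for confidence levels and
# reporting the spread, not just the average. The names and figures are
# hypothetical, chosen only to sit inside the 60-90 percent range above.
from statistics import mean, median

confidence_poll = {
    "analyst_a": 0.90,
    "analyst_b": 0.80,
    "analyst_c": 0.75,
    "veteran_of_wmd_review": 0.60,  # career officer burned by the Iraq WMD case
}

estimates = sorted(confidence_poll.values())
print(f"range:  {estimates[0]:.0%} to {estimates[-1]:.0%}")
print(f"median: {median(estimates):.0%}")
print(f"mean:   {mean(estimates):.0%}")
# A wide range is itself information: it signals a murky case, not a slam dunk.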
Asking people to rate their confidence level turns out to be a productive strategy on multiple levels, not just because it allows others to gauge how seriously to take their information, but because the very act of thinking about how certain you are about something makes you think about what you might be missing. It serves as a kind of antidote to the often fatal disease of overconfidence that plagues so many complex decisions. The decision process that led to bin Laden’s death didn’t stop at asking people to rate their uncertainty. The senior officials involved in the decision—from Panetta to John Brennan, Obama’s counterterrorism advisor—stoked the coals of uncertainty by demanding that the analysts challenge their assumptions. In the early stages of the investigation, when so much hinged on the question of whether al-Kuwaiti was still directly connected to bin Laden, the agency analysts were asked to come up with alternate explanations for al-Kuwaiti’s suspicious behavior—plausible scenarios that would not connect him to bin Laden. Analysts suggested that perhaps he had left al-Qaeda and was now working for some other criminal enterprise—a drug ring, perhaps—that might have had a need for a high-security compound. Others proposed that he had stolen money from the terrorist network and was using the compound for his own protection. In another scenario, al-Kuwaiti was still working for al-Qaeda, but the compound merely housed bin Laden’s relatives, not the terrorist mastermind himself.
Over time, the agency shot down each of these alternate explanations and became increasingly convinced that the compound had some direct connection to al-Qaeda. But the analysts continued to have their assumptions challenged by their supervisors. As Peter Bergen writes:
Brennan pushed them to come up with intelligence that disproved the notion that bin Laden was living in the Abbottabad compound, saying, “I’m tired of hearing why everything you see confirms your case. What we need to look for are the things that tell us what’s not right about our theory. So what’s not right about your inferences?” The analysts came back to the White House one day and started their intelligence update, saying, “Looks like there’s a dog on the compound.” Denis McDonough, Obama’s deputy national security advisor, remembers thinking, “Oh, that’s a bummer. You know, no self-respecting Muslim’s gonna have a dog.” Brennan, who had spent much of his career focused on the Middle East and spoke Arabic, pointed out that bin Laden, in fact, did have dogs when he was living in Sudan in the mid-1990s.
What began as an explicit search for contradictory evidence—evidence that might undermine the interpretation around which the group was slowly coalescing—turned out, in the end, to generate evidence that made that interpretation even stronger. Either way, the exercise forces you to see the situation with more clarity, to detect the whorls of the fingerprint with greater accuracy.