More generally, making the effort to understand complex systems better through systems thinking—whether it be by using diagrams, running simulations, or employing other mental models—not only helps you get a broad picture of the system and its range of outcomes, but also can help you become aware of the best possible outcomes. Without such knowledge, you can get stuck chasing a local optimum solution, which is an admittedly good solution, but not the best one.
If you can, you want to work toward that best solution, which would be the global optimum. Think of rolling hills: the top of a nice nearby hill would be a good success (local optimum), though in the distance there is a much bigger hill that would be a much better success (global optimum). You want to be on that bigger hill. But first you have to have a full view of the system to know the bigger hill exists.
Local vs. Global Optimum
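To make the hills metaphor concrete, here is a minimal sketch in Python (the landscape function and all numbers are invented for illustration): a greedy hill climber that only ever steps uphill gets stranded on the nearby hill, while surveying more of the landscape, here via random restarts, reveals the bigger one.

```python
import random

# A landscape with two hills: a small one near x = 2 (local optimum)
# and a taller one near x = 8 (global optimum). Illustrative only.
def height(x):
    return max(0.0, 3 - (x - 2) ** 2) + max(0.0, 6 - 0.5 * (x - 8) ** 2)

def hill_climb(x, step=0.1, iters=1000):
    """Greedily step uphill until no neighbor is higher."""
    for _ in range(iters):
        best = max([x - step, x, x + step], key=height)
        if best == x:
            break
        x = best
    return x

# Starting near the small hill strands you at the local optimum...
print(round(hill_climb(1.0), 1))  # ~2.0, height ~3

# ...while a fuller view of the system (here, random restarts)
# finds the bigger hill: the global optimum.
starts = [random.uniform(0, 10) for _ in range(20)]
print(round(max((hill_climb(s) for s in starts), key=height), 1))  # ~8.0, height ~6
```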
BEWARE OF UNKNOWN UNKNOWNS
In 1955, psychologists Joseph Luft and Harrington Ingham originated the concept of unknown unknowns, which was made popular by former U.S. Secretary of Defense Donald Rumsfeld at a news briefing on February 12, 2002, with this exchange:
Jim Miklaszewski: In regard to Iraq, weapons of mass destruction, and terrorists, is there any evidence to indicate that Iraq has attempted to or is willing to supply terrorists with weapons of mass destruction? Because there are reports that there is no evidence of a direct link between Baghdad and some of these terrorist organizations.
Rumsfeld: Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones.
The context and evasiveness of the exchange aside, the underlying model is useful in decision making. When faced with a decision, you can use a handy 2 × 2 matrix (see Chapter 4) as a starting point to envision these four categories of things you know and don’t know.
Knowns & Unknowns

                 Known                           Unknown
Known            What you know you know          What you know you don't know
Unknown          What you don't know you know    What you don't know you don't know
This model is particularly effective when thinking more systematically about risks, such as risks to a project’s success. Each category deserves its own attention and process:
Known knowns: These might be risks to someone else, but not to you since you already know how to deal with them based on your previous experience. For example, a project might require a technological solution, but you already know what that solution is and how to implement it; you just need to execute that known plan.
Known unknowns: These are also known risks to the project, but because of some uncertainty, it isn’t exactly clear how they will be resolved. An example is the risk of relying on a third party: until you engage with them directly, it is unknown how they will react. You can turn some of these into known knowns by doing de-risking exercises (see Chapter 1), getting rid of the uncertainty.
Unknown knowns: These are the risks you're not thinking about, but for which there exist clear mitigation plans. For example, your project might involve starting to do business in Europe over the summer, but you don't yet know that European companies do very little business in August. An adviser with more experience can help identify these risks from the start and turn them into known knowns. That way they will not take you by surprise later on and potentially throw off your project.
Unknown unknowns: These are the least obvious risks, which require a concerted effort to uncover. For example, maybe something elsewhere in the organization or in the industry could dramatically change this project (like budget cuts or an acquisition or new product announcement). Even if you identify an unknown unknown (turning it into a known unknown), you still remain unsure of its likelihood or consequences. You must then still do de-risking exercises to finally turn it into a known known.
As you can see, you enumerate items in each of the four categories, and then work to make them all known knowns. This model is about giving yourself more complete knowledge of a situation. It’s similar to systems thinking, from the last section, in that you are attempting to get a full picture of the system so you can make better decisions.
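As a minimal sketch of that enumeration process (the field names and example risks below are illustrative assumptions, not from a real project), you could keep a simple risk register that tags each risk by quadrant and then track the work of moving everything toward known known:

```python
# Each risk is tagged by whether you are aware of it and whether it
# is understood (i.e., a clear resolution or mitigation plan exists).
risks = [
    {"risk": "implement the known tech solution", "aware": True,  "understood": True},
    {"risk": "how the third party will react",    "aware": True,  "understood": False},
    {"risk": "Europe slows down in August",       "aware": False, "understood": True},
    {"risk": "surprise industry acquisition",     "aware": False, "understood": False},
]

def quadrant(r):
    awareness = "known" if r["aware"] else "unknown"
    understanding = "known" if r["understood"] else "unknown"
    return f"{awareness} {understanding}"

# Advisers raise awareness; de-risking exercises build understanding.
# The goal is for every item to eventually read "known known."
for r in risks:
    print(f'{quadrant(r):16} -> {r["risk"]}')
```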
As a personal example, consider having a new baby. From reading all the books, you know the first few weeks will be harrowing, you’ll want to take some time off work, you’ll need to buy a car seat, crib, diapers, etc.—these are the known knowns. You also know that how your baby might sleep and eat (or not) can be an issue, but until the baby is born, their proclivities remain uncertain—they are known unknowns. You might not yet know that swaddling a baby is a thing, but you’ll be shown how soon enough by a nurse or family member, turning this unknown known into a known known. And then there are things that no one knows yet or is even thinking about, such as whether your child could have a learning disability.
A related model that can help you uncover unknown unknowns is scenario analysis (also known as scenario planning), which is a method for thinking about possible futures more deeply. It gets its name because it involves analyzing different scenarios that might unfold. That sounds simple enough, but it is deceptively complicated in practice. That’s because thinking up possible future scenarios is a really challenging exercise, and thinking through their likelihoods and consequences is even more so.
Governments and large corporations have dedicated staff for scenario analysis. They are continually thinking up and writing reports about what the world could look like in the future and how their citizenry or shareholders might fare under those scenarios. Many academics, especially in political science, urban planning, economics, and related fields, similarly engage in prognosticating about the future. And of course, science fiction is essentially an entire literary genre dedicated to scenario analysis.
To do scenario analysis well, you must conjure plausible yet distinct futures, ultimately considering several possible scenarios. This process is difficult because you tend to latch onto your first thoughts (see anchoring in Chapter 1), which usually depict a direct extrapolation of your current trajectory (the present), without challenging your own assumptions.
One technique to ensure that you do challenge your assumptions is to list major events that could transpire (e.g., stock market crash, government regulation, major industry merger, etc.) and then trace their possible effects back to your situation. Some may have little to no effect, whereas others might form the basis for a scenario you should consider deeply.
Another technique for thinking more broadly about possible future scenarios is the thought experiment, literally an experiment that occurs just in your thoughts, i.e., not in the physical world. The most famous thought experiment is probably “Schrödinger’s cat,” named after Austrian physicist Erwin Schrödinger, who thought it up in 1935 to explore the implications of different interpretations of the physics of quantum mechanics. From his 1935 paper “The Present Situation in Quantum Mechanics”:
A cat is penned up in a steel chamber, along with the following device (which must be secured against direct interference by the cat): in a Geiger counter, there is a tiny bit of radioactive substance, so small, that perhaps in the course of the hour one of the atoms decays, but also, with equal probability, perhaps none; if it happens, the counter tube discharges and through a relay releases a hammer that shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The first atomic decay would have poisoned it.
So, you have a cat in a box, and if a radioactive atom decayed in the last hour, it would have killed the cat. This thought experiment poses some seemingly unanswerable questions: Until you observe the cat by opening the box, is it alive or dead, or in an in-between state, as certain interpretations of quantum mechanics would suggest? And what exactly happens when you open the box?
Schrödinger’s Cat Thought Experiment
Answers to this thought experiment are beyond the scope of this book and were argued over for decades after it was posed. Therein lies the power of the thought experiment.
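One part of the setup is itself a small quantitative scenario worth checking: "with equal probability" implies that the amount of radioactive substance was chosen so that one hour is exactly one half-life. A minimal sketch, assuming standard exponential (memoryless) decay:

```python
import math

# P(at least one decay by time t) = 1 - exp(-lam * t),
# where lam = ln(2) / half_life for exponential decay.
half_life = 1.0                     # hours, per Schrödinger's setup
lam = math.log(2) / half_life       # decay constant
p_decay = 1 - math.exp(-lam * 1.0)  # probability of a decay in 1 hour
print(p_decay)                      # 0.5: equal odds, alive or dead
```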
Thought experiments are particularly useful in scenario analysis. Posing questions that start with “What would happen if . . .” is a good practice in this way: What would happen if life expectancy jumped forty years? What would happen if a well-funded competitor copied our product? What would happen if I switched careers?
These types of what-if questions can also be applied to the past, in what is called counterfactual thinking, which means thinking about the past by imagining that the past was different, counter to the facts of what actually occurred. You’ve probably seen this model in books and movies about scenarios such as what would have happened if Germany had won World War II (e.g., Philip K. Dick’s The Man in the High Castle). Examples from your own life can help you improve your decision making when you think through the possible consequences of your past decisions. What if I had taken that job? What if I had gone to that other school? What if I hadn’t done that side project?
When reconsidering your past decisions, though, it is important not to imagine only the positive consequences that might have occurred had you made a different life choice. The butterfly effect (see Chapter 4) reminds us that one small change can have ripple effects, so when considering a counterfactual scenario, remember that if you change one thing, it is unlikely that everything else would stay the same.
Posing what-if questions can nevertheless help you think more creatively, coming up with scenarios that diverge from your intuition. More generally, this technique is one of many associated with lateral thinking, a type of thinking that helps you move laterally from one idea to another, as opposed to critical thinking, which is more about judging an idea in front of you. Lateral thinking is thinking outside the box.
Another helpful lateral-thinking technique involves adding some randomness when you are generating ideas. For example, you can choose an object at random from your surroundings or a noun from the dictionary and try to associate it in some way with your current idea list, laterally forming new offshoot ideas in the process.
No matter what techniques you use, however, it is extremely difficult to perform scenario analysis alone. Seeking outside input produces better results, as different people with different perspectives bring new ideas to the table.
It is therefore tempting to involve multiple people in brainstorming sessions from the get-go. However, studies show this is not the right approach because of groupthink, a bias that emerges because groups tend to think in harmony. Within group settings, members often strive for consensus, avoiding conflict, controversial issues, or even alternative solutions once it seems a solution is already favored by the group.
The bandwagon effect describes the phenomenon whereby consensus can take hold quickly, as other group members “hop on the bandwagon” as an idea gains popularity. More generally, it describes people’s tendency to take social cues and follow the decisions of others. In this way, the probability of a person adopting an idea increases the more other people have already done so.
In some cases, this is rational behavior, as when you follow the bandwagon and adopt a product based on well-researched reviews from owners of the product. In other cases, though, fads and trends can be based on little substance.
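To see how quickly a bandwagon can take hold, consider a toy simulation (a Pólya-urn-style model assumed here for illustration; it is not from any study cited above) in which each newcomer adopts an idea with probability equal to that idea's current share of adopters:

```python
import random

# Two competing ideas start with one adopter each. Each of 998
# newcomers copies the crowd: they pick idea "A" with probability
# equal to A's current share of all adopters.
random.seed(1)  # fixed seed so the run is reproducible
adopters = {"A": 1, "B": 1}
for _ in range(998):
    share_a = adopters["A"] / (adopters["A"] + adopters["B"])
    choice = "A" if random.random() < share_a else "B"
    adopters[choice] += 1
print(adopters)  # often ends lopsided: early random luck snowballs
```

Runs like this routinely end with one idea holding a large majority even though the two ideas are identical in merit; early luck alone decides the winner.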
Groupthink is terrible for scenario analysis and can have much wider implications, leading to bad group decision making in general if not actively managed. There are many ways to manage groupthink, though, including setting a culture of questioning assumptions, making sure to evaluate all ideas critically, establishing a Devil’s advocate position (see Chapter 1), actively recruiting people with differing opinions, reducing leadership’s influence on group recommendations, and splitting the group into independent subgroups.
It is this last recommendation that is particularly relevant for scenario analysis, as it forms the basis for divergent thinking, where you actively try to get thinking to diverge in order to discover multiple possible solutions, as opposed to convergent thinking, where you actively try to get thinking to converge on one solution. One tactic is to meet once without brainstorming at all, just to go over the goal of the scenario analysis. Then send everyone off individually or in small groups. You could give them a prompt to react to, such as survey data, or have them come up with their own thought experiments and scenario ideas from scratch (divergent thinking). Finally, you bring everyone back together to go over all the proposed scenarios in order to narrow them down to just a few scenarios to explore further (convergent thinking).
It is additionally likely that people close to you, such as those within your organization, share similar cultural traits, and therefore you should look beyond your normal contacts and venture outside your organization to get as much lateral and divergent thinking as you can. One way to do so is actively to seek out people from different backgrounds to participate. Another way, easily enabled by the internet, is to crowdsource ideas, where you seek (source) ideas quite literally from anyone who would like to participate (the crowd).
Crowdsourcing has been effective across a wide array of situations, from soliciting tips in journalism, to garnering contributions to Wikipedia, to solving the real-world problems of companies and governments. For example, Netflix ran a contest from 2006 to 2009 in which crowdsourced researchers beat Netflix's own recommendation algorithm.
Crowdsourcing can help you get a sense of what a wide array of people think about a topic, which can inform your future decision making, updating your prior beliefs (see Bayesian statistics in Chapter 5). It can also help you uncover unknown unknowns and unknown knowns as you get feedback from people with previous experiences you might not have had.
In The Wisdom of Crowds, James Surowiecki examines situations where input from crowds can be particularly effective. The book opens with a story about how the crowd at a country fair in 1906, attended by statistician Francis Galton, correctly guessed the weight of an ox. Almost eight hundred people participated, each guessing individually, and the average of their guesses was 1,197 pounds, within a pound of the ox's actual weight of 1,198 pounds! While you cannot expect similar results in all situations, Surowiecki explains the key conditions in which you can expect good results from crowdsourcing:
Diversity of opinion: Crowdsourcing works well when it draws on different people’s private information based on their individual knowledge and experiences.
Independence: People need to be able to express their opinions without influence from others, avoiding groupthink.
Aggregation: The entity doing the crowdsourcing needs to be able to combine the diverse opinions in such a way as to arrive at a collective decision.
If you can design a system with these properties, then you can draw on the collective intelligence of the crowd. This allows you to glean the useful bits of information that might be hidden among a group of diverse participants. In the ox example, a butcher may notice something a farmer would miss, and a vet something different still. All this knowledge was captured in the collective weight guessed. A more modern example of making use of collective intelligence would be an audience poll as done on the television show Who Wants to Be a Millionaire?
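A quick simulation shows why aggregation works when these three conditions hold (the number of guessers matches Galton's story, but the error size is an invented assumption): each individual guess may be far off, yet independent errors largely cancel in the average.

```python
import random

# 800 independent guessers, each with private, noisy information
# about the ox's true weight (errors of ~100 pounds assumed).
random.seed(6)  # fixed seed for reproducibility
true_weight = 1198
guesses = [true_weight + random.gauss(0, 100) for _ in range(800)]

average = sum(guesses) / len(guesses)
print(round(average))  # typically within a few pounds of 1198
```

Note that this only works because the errors are independent; if the guessers had influenced one another (groupthink), their errors would correlate and the average would drift.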
In general, drawing on collective intelligence makes sense when the group’s collective pool of knowledge is greater than what you could otherwise get access to; this helps you arrive at a more intelligent decision than you would arrive at on your own. “The crowd” can help systematically think through various scenarios, get new data and ideas, or simply help improve existing ideas.
One direct application of crowdsourcing to scenario analysis is the use of a prediction market, which is like a stock market for predictions. In a simple formulation of this concept, the price of each stock can range between $0 and $1 and represents the market's current probability of an event taking place, such as whether a certain candidate will be elected. For example, a price of $0.59 would represent a 59 percent probability that the candidate would be elected.
If you think the probability is significantly higher than 59 percent, then you could buy a yes share at that price. Alternatively, if you think the probability is significantly lower than 59 percent, then you could buy a no share, which in this formulation costs $1 minus the yes price (here, $0.41). If the candidate actually gets elected, the market pays holders of yes shares $1 per share and the no shares become worthless. Conversely, if the candidate doesn't get elected, the market pays holders of no shares $1 per share and the yes shares become worthless.
If more people are making yes predictions than no predictions, then the price of the stock rises, and vice versa. By looking at the current prices in the prediction market, you can get a sense of what the market thinks will happen, based on how people are betting (buying shares). Many big companies operate similar prediction markets internally, where employees can predict the outcome of things like sales forecasts and marketing campaigns.
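As a minimal sketch of the payout logic just described (the function name and the 70 percent belief below are illustrative assumptions), here is how a trader might evaluate whether a share is worth buying:

```python
def settle(side, price, event_happened):
    """Profit per share bought at the given yes price (0 < price < 1).
    A yes share costs `price`; a no share costs 1 - price.
    The winning side pays out $1 per share; the losing side pays $0."""
    if side == "yes":
        return (1.0 - price) if event_happened else -price
    return -(1.0 - price) if event_happened else price

price = 0.59  # market's implied probability of election: 59%

# If you believe the true probability is 70%, buying yes has positive
# expected value: 0.70 * (1 - 0.59) + 0.30 * (-0.59) = +$0.11 / share.
ev_yes = 0.70 * settle("yes", price, True) + 0.30 * settle("yes", price, False)
print(round(ev_yes, 2))  # 0.11
```

Traders acting on beliefs like this are what move the price: profitable yes buying pushes it up toward the crowd's best estimate of the true probability.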