Super Thinking

by Gabriel Weinberg


  One way to accelerate building up useful intuition like this is to try consistently to argue from first principles. Another is to take every opportunity you can to figure out what is actually causing things to happen. The remaining mental models in this chapter can help you do just that.

  At 11:39 A.M. EST on January 28, 1986, the space shuttle Challenger disintegrated over the Atlantic Ocean, just seventy-three seconds into its flight, killing the seven crew members on board. It was a sad day we both remember vividly. A U.S. presidential commission was appointed to investigate the incident, ultimately producing the Rogers Commission Report, named after its chairman, William Rogers.

  When something happens, the proximate cause is the thing that immediately caused it to happen. In the case of the Challenger, the Rogers Commission Report showed that the proximate cause was the external hydrogen tank igniting.

  The root cause, by contrast, is what you might call the real reason something happened. People’s explanations for their behavior are no different: anyone can give you a reason for their behavior, but that might not be the real reason they did something. For example, consistent underperformers at work usually have a plausible excuse for each incident, but the real reason is something more fundamental, such as lack of skills, motivation, or effort.

  The Rogers Commission, in its June 6, 1986, report to the president, concluded that the root cause of the Challenger disaster was organizational failure:

  Failures in communication . . . resulted in a decision to launch 51-L based on incomplete and sometimes misleading information, a conflict between engineering data and management judgments, and a NASA management structure that permitted internal flight safety problems to bypass key Shuttle managers.

  As part of its work, the commission conducted a postmortem. In medicine, a postmortem is an examination of a dead body to determine the root cause of death. As a metaphor, postmortem refers to any examination of a prior situation to understand what happened and how it could go better next time. At DuckDuckGo, it is mandatory to conduct a postmortem after every project so that the organization can collectively learn and become stronger (antifragile).

  One technique commonly used in postmortems is called 5 Whys, where you keep asking the question “Why did that happen?” until you reach the root causes.

  Why did the Challenger’s hydrogen tank ignite? Hot gases were leaking from the solid rocket motor.

  Why was hot gas leaking? A seal in the motor broke.

  Why did the seal break? The O-ring that was supposed to protect the seal failed.

  Why did the O-ring fail? It was used at a temperature outside its intended range.

  Why was the O-ring used outside its temperature range? Because on launch day, the temperature was below freezing, at 29 degrees Fahrenheit. (Previously, the coldest launch had been at 53 degrees.)

  Why did the launch go forward when it was so cold? Safety concerns were ignored at the launch meeting.

  Why were safety concerns ignored? There was a lack of proper checks and balances at NASA. That was the root cause, the real reason the Challenger disaster occurred.
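  If it helps to make the procedure concrete, here is a minimal Python sketch of running a 5 Whys session as an interactive script. The five_whys helper, its default depth, and the example problem statement are illustrative assumptions, not anything from the book or the commission's actual process:

```python
# A minimal, illustrative sketch of a 5 Whys exercise as a script.
# The helper name, default depth, and starting problem are hypothetical.

def five_whys(problem: str, max_depth: int = 7) -> list[str]:
    """Keep asking 'Why?' until the facilitator decides a root cause is reached."""
    chain = [problem]
    for _ in range(max_depth):
        answer = input(f"Why did this happen: '{chain[-1]}'? (blank = root cause) ")
        if not answer.strip():
            break  # the last answer recorded is treated as the root cause
        chain.append(answer.strip())
    return chain

if __name__ == "__main__":
    chain = five_whys("Hot gas leaked from the solid rocket motor.")
    print("Proximate cause:", chain[0])
    print("Root cause:     ", chain[-1])
```

  The default depth of seven in this sketch is as arbitrary as five; the point is simply to keep a written chain from proximate cause to root cause.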

  As you can see, you can ask as many questions as you need in order to get to the root cause—five is just an arbitrary number. Nobel Prize–winning physicist Richard Feynman was on the Rogers Commission, agreeing to join upon specific request even though he was then dying of cancer. He uncovered the organizational failure within NASA and threatened to resign from the commission unless its report included an appendix consisting of his personal thoughts around root cause, which reads in part:

  It appears that there are enormous differences of opinion as to the probability of a failure with loss of vehicle and of human life. The estimates range from roughly 1 in 100 to 1 in 100,000. The higher figures come from the working engineers, and the very low figures from management. . . .

  It would appear that, for whatever purpose, be it for internal or external consumption, the management of NASA exaggerates the reliability of its product, to the point of fantasy. . . .

  For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.

  Sometimes you may want something to be true so badly that you fool yourself into thinking it is likely to be true. This feeling is known as optimistic probability bias, because you are too optimistic about the probability of success. NASA managers were way too optimistic about the probability of success, whereas the engineers who were closer to the analysis were much more on target.

  Root cause analysis, whether you use 5 Whys or some other framework, helps you cut through optimistic probability bias, forcing you to slow down your thinking, push through your intuition, and deliberately uncover the truth.

  The reason that root causes are so important is that, by addressing them, you can prevent the same mistakes from happening in the future. An apt analogy is that by investigating root causes, you are not just treating the symptoms but treating the underlying disease.

  We started this chapter explaining that to be wrong less, you need to both work at getting better over time (antifragile) and make fewer avoidable mistakes in your thinking (unforced errors). Unfortunately, there are a lot of mental traps that you actively need to try to avoid, such as relying too much on recent information (availability bias), being too wed to your existing position (confirmation bias), and overstating the likelihood of your desired outcome (optimistic probability bias). As Feynman warned Caltech graduates in 1974: “You must not fool yourself—and you are the easiest person to fool.”

  KEY TAKEAWAYS

  To avoid mental traps, you must think more objectively. Try arguing from first principles, getting to root causes, and seeking out the third story.

  Realize that your intuitive interpretations of the world can often be wrong due to availability bias, fundamental attribution error, optimistic probability bias, and other related mental models that explain common errors in thinking.

  Use Ockham’s razor and Hanlon’s razor to begin investigating the simplest objective explanations. Then test your theories by de-risking your assumptions, avoiding premature optimization.

  Attempt to think gray in an effort to consistently avoid confirmation bias.

  Actively seek out other perspectives by including the Devil’s advocate position and bypassing the filter bubble. Consider the adage “You are what you eat.” You need to take in a variety of foods to be a healthy person. Likewise, taking in a variety of perspectives will help you become a super thinker.

  2

  Anything That Can Go Wrong, Will

  ALL YOUR ACTIONS HAVE CONSEQUENCES, but sometimes those consequences are unexpected. On the surface, these unintended consequences seem unpredictable. However, if you dig deeper, you will find that unintended consequences often follow predictable patterns and can therefore be avoided in many situations. You just need to know which patterns to look out for—the right mental models.

  Here is an example. In 2016, the UK government asked the public to help name a new polar research ship. Individuals could submit names and then vote on them in an online poll. More than seven thousand names were submitted, but one name won easily, with 124,109 votes: RSS Boaty McBoatface. (The ship was eventually named RSS Sir David Attenborough instead.)

  Could the government have predicted this result? Well, maybe not that the exact name RSS Boaty McBoatface would triumph. But could they have guessed that someone might turn the contest into a joke, that the joke would be well received by the public, and that the joke answer might become the winner? You bet.

  People turn open contests like this into jokes all the time. In 2012, Mountain Dew held a similar campaign to name a new soda, but they quickly closed it down when “Diabeetus” and “Hitler Did Nothing Wrong” appeared near the top of the rankings. Also that year, Walmart teamed up with Sheets Energy Strips and offered to put on a concert by international recording artist Pitbull at the Walmart location that received the most new Facebook likes. After an internet prankster took hold of the contest, Walmart’s most remote store, in Kodiak, Alaska, won. Walmart and Pitbull still held the concert there, and they even had the prankster who rigged the contest join Pitbull on the trip!

  Unintended consequences are not a laughing matter under more serious circumstances. For instance, medical professionals routinely prescribe opioids to help people with chronic pain. Unfortunately, these drugs are also highly addictive. As a result, pain patients may abuse their prescribed medication or even seek out similar, cheaper, and more dangerous drugs like street heroin. According to the National Institutes of Health, in the U.S., nearly half of young people who inject heroin started abusing prescription opioids first.

  Patients’ susceptibility to opioid addiction and abuse has substantially contributed to the deadliest drug crisis in American history. As reported by The New York Times on November 29, 2018, more people died from drug overdoses in 2017 than from HIV/AIDS, car crashes, or gun deaths in the years of their respective peaks. Of course, no doctor prescribing painkillers intends for their patients to die—these deaths are unintended consequences.

  Through this chapter, we want to help you avoid unintended consequences like these. You will be much less likely to fall into their traps if you are equipped with the right mental models to help you better predict and deal with these situations.

  HARM THY NEIGHBOR, UNINTENTIONALLY

  There is a class of unintended consequences that arise when a lot of people choose what they think is best for them individually, but the sum total of the decisions creates a worse outcome for everyone. To illustrate how this works, consider Boston Common, the oldest public park in the United States.

  Before it was a park, way back in the 1630s, this fifty-acre plot of land in downtown Boston, Massachusetts, was a grazing pasture for cows, with local families using it collectively as common land. In England, this type of land is referred to legally as commons.

  Pasture commons present a problem, though: Each additional cow that a farmer gets benefits their family, but if all the farmers keep getting new cows, then the commons can be depleted. All farmers would experience the negative effects of overgrazing on the health of their herds and land.

  In an 1833 essay, “Two Lectures on the Checks to Population,” economist William Forster Lloyd described a similar, but hypothetical, overgrazing scenario, now called the tragedy of the commons. However, unbeknownst to him, his hypothetical situation had really occurred in Boston Common two hundred years earlier (and many other times before and since). More affluent families did in fact keep buying more cows, leading to overgrazing, until, in 1646, a limit of seventy cows was imposed on Boston Common.

  Any shared resource, or commons, is vulnerable to this tragedy. Overfishing, deforestation, and dumping waste have obvious parallels to overgrazing, though this model extends far beyond environmental issues. Each additional spam message benefits the spammer who sends it while simultaneously degrading the entire email system. Collective overuse of antibiotics in medicine and agriculture is leading to dangerous antibiotic resistance. People make self-serving edits to Wikipedia articles, diminishing the overall reliability of the encyclopedia.

  In each of these cases, an individual makes what appears to be a rational decision (e.g., prescribing an antibiotic to a patient who might have a bacterial infection). They use the common resource for their own benefit at little or no cost (e.g., each course of treatment has only a small chance of increasing resistance). But as more and more people make the same decision, the common resource is collectively depleted, reducing the ability for everyone to benefit from it in the future (e.g., the antibiotic becomes much less useful).

  More broadly, the tragedy of the commons arises from what is called the tyranny of small decisions, where a series of small, individually rational decisions ultimately leads to a system-wide negative consequence, or tyranny. It’s death by a thousand cuts.

  You’ve probably gone out to dinner with friends expecting that you will equally split the check. At dinner, each person is faced with a decision to order an expensive meal or a cheaper one. When dining alone, people often order the cheaper meal. However, when they know that the cost of dinner is shared by the whole group, people tend to opt for the expensive meal. If everyone does this, then everyone ends up paying more!
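  To see the arithmetic, here is a toy Python sketch; the menu prices and party size are made-up assumptions for illustration only:

```python
# Toy split-check arithmetic with made-up prices: each diner's share when
# everyone orders cheap, when one person splurges, and when everyone splurges.

CHEAP, EXPENSIVE = 15.00, 25.00   # assumed menu prices
PARTY_SIZE = 6                    # assumed party size

def share(orders):
    """Each person's payment when the check is split evenly."""
    return sum(orders) / len(orders)

all_cheap   = share([CHEAP] * PARTY_SIZE)                        # 15.00 each
one_splurge = share([EXPENSIVE] + [CHEAP] * (PARTY_SIZE - 1))    # ~16.67 each
all_splurge = share([EXPENSIVE] * PARTY_SIZE)                    # 25.00 each

print(f"everyone orders cheap:     ${all_cheap:.2f} each")
print(f"one person splurges:       ${one_splurge:.2f} each")
print(f"everyone splurges:         ${all_splurge:.2f} each")
```

  With these assumed numbers, each individual splurge costs the splurger only about $1.67 extra for a $10 nicer meal, but once the whole table reasons the same way, everyone pays the full $10 more.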

  Ecologist William E. Odum made the connection between the tyranny of small decisions and environmental degradation in his 1982 BioScience article: “Much of the current confusion and distress surrounding environmental issues can be traced to decisions that were never consciously made, but simply resulted from a series of small decisions.”

  It’s the individual decision to place a well here, cut down some trees there, build a factory over there—over time these isolated decisions aggregate to create widespread problems in our environment that are increasingly difficult to reverse.

  You can also find the tyranny of small decisions in your own life. Think of those small credit card purchases or expenses that seem individually warranted at the time, but collectively add up to significant credit card bills or cash crunches. Professionally, it may be the occasional distractions and small procrastinations that, in aggregate, make your deadlines hard to reach.

  The tyranny of small decisions can be avoided when someone who has a view over the whole system can veto or curb particular individual decisions when broad negative impacts can be foreseen. When the decisions are all your own, you could do this for yourself. For example, to stop your out-of-control spending, you could self-impose a budget, checking each potential purchase against the budget to see if it’s compatible with your spending plan. You could do the same for your time management, by more strictly regulating your calendar.

  When decisions are made by more than just you, then a third party is usually needed to fill this role, just as the city of Boston did when it restricted the number of cows on Boston Common. Company expense policies that help prevent overspending are an organizational example.

  Another cause of issues like the tragedy of the commons is the free rider problem, where some people get a free ride by using a resource without paying for it. People or companies who cheat on their taxes are free riders to government services they use, such as infrastructure and the legal system. If you’ve ever worked on a team project where one person didn’t do anything substantive, that person was free-riding on the rest of the group. Another familiar example: Has anyone ever leeched off your wi-fi or Netflix account? Or perhaps you’ve been the free rider?

  Free-riding is commonplace with public goods, such as national militaries, broadcast television, even the air we breathe. As you can see from these examples, it is usually difficult to exclude people from using public goods, because they are broadly available (public). Since one person’s use does not significantly reduce a public good’s availability to others, it might seem as though there is no harm in free-riding. However, if enough people free-ride on a public good, then it can degrade to the point of creating a tragedy of the commons.

  Vaccinations provide an illustrative example that combines all these models (tragedy of the commons, free rider problem, tyranny of small decisions, public goods), plus one more: herd immunity. Diseases can spread only when they have an eligible host to infect. However, when the vast majority of people are vaccinated against a disease, there are very few eligible new hosts, since most people (in the herd) are immune from infection due to getting vaccinated. As a result, the overall public is less susceptible to outbreaks of the disease.

  In this example, the public good is a disease-free environment due to herd immunity, and the free riders are those who take advantage of this public good by not getting vaccinated. The tyranny of small decisions can arise when enough individuals choose not to get vaccinated, resulting in an outbreak of the disease, creating a tragedy of the commons.

  In practice, the percentage of people who need to be vaccinated for a given disease to achieve herd immunity varies by how contagious the disease is. For measles, an extremely contagious disease, the threshold is about 95 percent. That means an outbreak is possible if the measles vaccination rate in a community falls below 95 percent!
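  The link between contagiousness and that threshold can be sketched with the standard 1 − 1/R0 rule of thumb from epidemiology, which the passage above does not spell out; the R0 values in this rough Python sketch are commonly cited approximate figures, used here only as illustrative assumptions:

```python
# Rough herd-immunity thresholds from the basic reproduction number R0,
# using the standard 1 - 1/R0 approximation. The R0 values below are
# commonly cited rough figures, included only as illustrative assumptions.

def herd_immunity_threshold(r0: float) -> float:
    """Approximate fraction of the population that must be immune."""
    return 1 - 1 / r0

for disease, r0 in [("measles", 15.0), ("polio", 6.0), ("seasonal flu", 1.5)]:
    print(f"{disease}: roughly {herd_immunity_threshold(r0):.0%} need to be immune")
# measles comes out near 93 percent, consistent with the ~95 percent cited above
```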

  Before the measles vaccine was introduced in 1963, more than 500,000 people a year contracted measles in the United States, resulting in more than 400 annual deaths. After the vaccine was in popular use, measles deaths dropped to literally zero.

  In recent years, some parents have refused to vaccinate their kids for measles and other diseases due to the belief that vaccines are linked to autism, based on since-discredited and known-to-be-fraudulent research. These people who choose not to vaccinate are free-riding on the herd immunity from the people who do choose to vaccinate.

  Herd Immunity

  Disease       Pre-vaccine: average annual deaths    Post-vaccine: annual deaths (2004)
  Diphtheria    1,822 (1936-1945)                     0
  Measles       440 (1953-1962)                       0
  Mumps         39 (1963-1968)                        0
  Pertussis     4,034 (1934-1943)                     27
  Polio         3,272 (1941-1954)                     0
  Rubella       17 (1966-1968)                        0
  Smallpox      337 (1900-1949)                       0
  Tetanus
