The expected value for your usual contractor (Contractor 2 in the decision tree) is just $2,500, since there is only one possible outcome. The expected value for the new contractor (Contractor 1 in the decision tree) is the sum of the probability-weighted payments across their four possible outcomes: $1,000 + $562.50 + $500 + $150 = $2,212.50. Even though the new contractor has an outcome that might cost you $3,000, the expected value of what you'd pay is still less than you'd pay your usual contractor.
Expected Value
What this means is that if these probabilities are accurate, and you could run the scenario one hundred times in the real world where you pick the new contractor each time, your average payment to them would be expected to be $2,212.50. That’s because half the time you’d pay only $2,000, and the other half, more. You’d never pay exactly $2,212.50, since that isn’t a possible outcome, but overall your payments would average out to that expected value over many iterations.
If you find this confusing, the following example might be helpful. In 2015, U.S. mothers had 2.4 kids on average. Does any particular mother have exactly 2.4 kids? We hope not. Some have one child, some two, some three, and so on, and it all averages out to 2.4. Likewise, the various contractor payment outcomes, weighted by their probabilities, average out to the expected value, even though you never pay that exact amount.
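If you want to check the arithmetic yourself, here is a minimal sketch in Python. The probabilities and payments for the two middle outcomes (assumed here to be 25 percent at $2,250 and 20 percent at $2,500) are reconstructed from the products given above and are illustrative only.

```python
# Expected value = sum of (probability x payment) over all possible outcomes.
# The two middle rows are assumptions inferred from the $562.50 and $500 figures.
new_contractor = [
    (0.50, 2000),  # finishes on time
    (0.25, 2250),  # one week late (assumed probability and payment)
    (0.20, 2500),  # two weeks late (assumed probability and payment)
    (0.05, 3000),  # major problems
]

expected_value = sum(p * payment for p, payment in new_contractor)
print(expected_value)  # 2212.5, versus the usual contractor's certain 2500
```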
In any case, from this lens of the decision tree and the resulting expected values, you might rationally choose the new contractor, even with all their potential issues. That’s because your expected outlay is lower with that contractor.
Of course, this result could change with different probabilities and/or potential outcome payments. For example, if you thought that, instead of a 5 percent chance of a $3,000 bill, there was a 50 percent chance you could end up in this highest-cost outcome, then the expected value for the new contractor would become higher than your usual contractor's bid. Remember that you can always run a sensitivity analysis on any inputs that you think might significantly influence the decision, as we discussed in the last section. Here you would vary the probabilities and/or potential outcome payments and see how the expected values change accordingly.
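A quick sensitivity sketch along these lines, using the same assumed tree as above; it also assumes the extra probability mass for the $3,000 outcome comes out of the on-time outcome, so other reallocations would give different break-even points.

```python
# Sensitivity analysis: raise the probability of the $3,000 outcome and watch
# when the new contractor's expected value climbs past the usual $2,500 bid.
# Assumption: added probability is taken from the on-time ($2,000) outcome.
for p_worst in [0.05, 0.20, 0.35, 0.50]:
    p_on_time = 0.50 - (p_worst - 0.05)
    ev = p_on_time * 2000 + 0.25 * 2250 + 0.20 * 2500 + p_worst * 3000
    print(f"P(worst) = {p_worst:.2f} -> expected value = ${ev:,.2f}")
```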
Additionally, consider another way the decision could change. Suppose you’ve already scheduled a pool party a few weeks out. Now, if the lower-bid contractor pushes into that second week, you’re going to be faced with a lot of anxiety about your party. You will have to put pressure on the contractor to get the job done, and you might even have to bring in reinforcements to help finish the job at a much higher cost. That’s a lot of extra hassle.
If, like many wealthier people, you associate a high opportunity cost with your time, all this extra anxiety and hassle may be worth an extra $1,000 in cost to you, even though you aren't paying that $1,000 directly to the contractor. Accounting for this possible extra burden would move the two-week-late outcome up from $2,500 (previously a $500 overrun) to $3,500 (now a $1,500 overrun).
Similarly, if this new contractor really messes up the job and you do have to bring in your regular contractor to do most everything over again on short notice, it will cost you the extra $1,000 in anxiety and hassle, as well as literally more payment to the other contractor. So, that small 5 percent chance of a $3,000 outcome might end up costing the equivalent of an extra $2,000, moving it to $5,000 in total.
By using these increased values in your decision tree, you can effectively “price in” the extra costs. Because these new values include more than the exact cost you'd have to pay out, they are called utility values, reflecting your total relative preferences across the various scenarios. We already saw this idea in the last section when we discussed putting a price on the preference of not having a landlord; utility values are the mental model that encapsulates that concept.
Utility values can be disconnected from actual prices in that you can value something more than something else, even though it costs the same on the open market. Think about your favorite band—it’s worth more to you to see them in concert than another band that offers their concerts at the same price, simply because you like them more. You would get more utility out of that concert because of your preference. In the pool case, the stress involved with scrambling to fix the pool before your party is an extra cost of lost utility in addition to the actual cost you would have to pay out to the contractors.
In terms of the decision tree, the outcome values for the leaves can become the utility values, incorporating all the costs and benefits (tangible and intangible) into one number for each possible outcome. If you do that here, the conclusion flips: you should now use your usual contractor (Contractor 2 in the decision tree below).
Utility Values
However, note that it is still a really close decision, as both contractors now have almost the same expected value! This closeness illustrates the power of probabilistic outcomes. Even though the new contractor is now associated with much higher potential “costs,” 50 percent of the time you’d still expect to pay them a much smaller amount. This lower cost drives the expected value down a lot because it happens so frequently.
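To see how close it is, here is the same sketch as before with the two bad outcomes replaced by their utility values; the probabilities remain the same illustrative assumptions.

```python
# Same assumed probabilities, but the two bad outcomes now carry the extra
# $1,000 and $2,000 of anxiety and hassle as utility costs.
new_contractor_utility = [
    (0.50, 2000),
    (0.25, 2250),  # assumed probability and payment, as before
    (0.20, 3500),  # two weeks late, now includes $1,000 of hassle
    (0.05, 5000),  # major problems, now includes $2,000 of extra cost
]

ev_new = sum(p * u for p, u in new_contractor_utility)
print(ev_new)  # 2512.5, just above the usual contractor's 2500
```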
Just as in cost-benefit analysis and scoring pro-con lists, we recommend using utility values whenever possible because they paint a fuller picture of your underlying preferences, and therefore should result in more satisfactory decisions. In fact, more broadly, there is a philosophy called utilitarianism that expresses the view that the most ethical decision is the one that creates the most utility for all involved.
Utilitarianism as a philosophy has various drawbacks, though. Primarily, decisions involving multiple people that increase overall utility can seem quite unfair when that utility is not equally distributed among the people involved (e.g., income inequality despite rising standards of living). Also, utility values can be hard to estimate. Nevertheless, utilitarianism is a useful philosophical model to be aware of, if only to consider what decision would increase overall utility the most.
In any case, decision trees will help you start to make sense of what to do in situations with an array of diverse, probabilistic outcomes. Think about health insurance: should you go for a higher-deductible plan with lower premiums or a lower-deductible plan with higher premiums? It depends on your expected level of care, and whether you can afford the lower-probability scenario where you will need to pay out a high deductible. (Note that the answer isn't obvious, because with the lower-deductible plan you are making higher monthly premium payments. This increase in premiums could be viewed as paying out a portion of your deductible each month.) You can examine this scenario and others like it via a decision tree, accounting for your preferences along with the actual costs.
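A toy sketch of that comparison is below. Every number in it (premiums, deductibles, and the chance of a claim large enough to hit the deductible) is a made-up assumption, purely to show the structure of the calculation.

```python
# Hypothetical insurance comparison: expected annual cost of two plans.
def expected_annual_cost(monthly_premium, deductible, p_big_claim):
    # Ignores partial claims and coinsurance for simplicity.
    return 12 * monthly_premium + p_big_claim * deductible

p_big_claim = 0.10  # assumed chance of care that uses up the full deductible

high_deductible = expected_annual_cost(200, 7000, p_big_claim)
low_deductible = expected_annual_cost(350, 1500, p_big_claim)
print(high_deductible, low_deductible)  # lower on average vs. lower worst case
```

Note that the plan with the lower expected cost can still be the one that exposes you to the unaffordable worst case, which is exactly where utility values come back in.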
Decision trees are especially useful to help you think about unlikely but severely impactful events. Consider more closely the scenario where you have a medical incident that requires you to pay out your full deductible. For some people, that amount of outlay could equate to bankruptcy, and so the true cost of this event occurring to them is much, much higher than the actual cost of the deductible.
As a result, if you were in this situation, you would want to make the loss in utility value for this scenario extremely high to reflect your desire to avoid bankruptcy. Doing so would likely push you into a higher-premium plan with a lower deductible (that you can still afford), and more assurance that you would avoid bankruptcy. In other words, if there is a chance of financial ruin, you might want to avoid that plan even though on average it would lead to a better financial outcome.
One thing to watch out for in this type of analysis is the possibility of black swan events, which are extreme, consequential events (that end in things like financial ruin), but which have significantly higher probabilities than you might initially expect. The name is derived from the false belief, held for many centuries in Europe and other places, that black swans did not exist, when in fact they were (and still are) common birds in Australia.
As applied to decision tree analysis, a conservative approach would be to increase your probability estimates of low-probability but highly impactful scenarios like the bankruptcy one. This revision would account for the fact that the scenario might represent a black swan event, and that you might therefore be wrong about its probability.
One reason that the probability of black swan events may be miscalculated relates to the normal distribution (see Chapter 5), which is the bell-curve-shaped probability distribution that explains the frequency of many natural phenomena (e.g., people’s heights). In a normal distribution, rare events occur on the tails of the distribution (e.g., really tall or short people), far from the middle of the bell curve. Black swan events, though, often come from fat-tailed distributions, which literally have fatter tails, meaning that events way out from the middle have a much higher probability when compared with a normal distribution.
Fat-Tailed Distribution
Many naturally occurring distributions are fat-tailed as well, and people sometimes incorrectly assume they are dealing with a normal distribution when in fact the distribution has a fatter tail, which means that events in the tail occur with higher probability. In practice, these are distributions where the biggest outliers happen more often than a normal distribution would predict, as occurs with insurance payouts or in the U.S. income distribution (see the histogram in Chapter 5).
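To get a feel for how large the difference can be, here is a small sketch using SciPy that compares tail probabilities of a normal distribution with a Student's t distribution, one common fat-tailed distribution; the particular distributions and parameters are illustrative choices, not anything prescribed by the text.

```python
# Compare tail probabilities of a normal distribution and a fat-tailed
# Student's t distribution (3 degrees of freedom) at several cutoffs.
from scipy.stats import norm, t

for k in [2, 4, 6]:
    p_normal = norm.sf(k)      # P(X > k) under the normal distribution
    p_fat = t.sf(k, df=3)      # P(X > k) under the fat-tailed t distribution
    print(f"beyond {k}: normal {p_normal:.2e}, fat-tailed {p_fat:.2e}")
```

The farther out you go, the more the fat-tailed probability dwarfs the normal one, which is why assuming normality can badly underestimate extreme events.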
Another reason why you might miscalculate the probability of a black swan event is that you misunderstand the reasons for its occurrence. This can happen when you think a situation arises from one distribution, when in reality multiple distributions are involved. For example, there are genetic reasons (e.g., dwarfism and Marfan syndrome) why there might be many more shorter or taller people than you would expect from a regular normal distribution alone, which doesn't account for these rarer genetic variations.
A third reason is that you may underestimate the possibility and impact of cascading failures (see Chapter 4). As you recall, in a cascading-failure scenario, parts of the system are correlated: if one part falters, the next part falters, and so on. The 2007/2008 financial crisis is an example, where the failure of mortgage-backed securities cascaded all the way to the banks and associated insurance companies.
Our climate presents another example. The term one-hundred-year flood denotes a flood that has a 1 percent chance of occurring in any given year. Unfortunately, climate change is raising the probability of what was once considered a one-hundred-year flood, so in many areas it no longer has just a 1 percent chance. The dice are loaded. Houston, Texas, for example, has had three so-called five-hundred-year floods in the last three years! The probabilities of these events clearly need to be adjusted as the cascading effects of climate change continue to unfold.
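A quick calculation makes the point concrete: even a true 1 percent annual chance adds up over a thirty-year mortgage, and a loaded-dice annual probability adds up much faster. The alternative annual probabilities below are illustrative assumptions.

```python
# Chance of at least one "100-year" flood over a 30-year horizon,
# for the nominal 1% annual probability and for loaded-dice alternatives.
years = 30
for p_annual in [0.01, 0.02, 0.05]:
    p_at_least_one = 1 - (1 - p_annual) ** years
    print(f"annual {p_annual:.0%} -> {p_at_least_one:.0%} chance over {years} years")
```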
To better determine the outcome probabilities in highly complex systems like banking or climate, you may first have to take a step back and try to make sense of the whole system before you can even try to create a decision tree or cost-benefit analysis for a particular subset or situation. This act of attempting to think about the entire system at once is called systems thinking. By thinking about the overall system, you are more likely to understand and account for subtle interactions between components that could otherwise lead to unintended consequences from your decisions. For example, when thinking about making an investment, you might start to appreciate how seemingly unrelated parts of the economy might affect its outcome.
Some systems are fairly simple, and you can picture the whole system in your head. Others are so complex that it is too challenging to hold all the interlocking pieces in your head at once. One solution is literally to diagram the system visually. Drawing diagrams can help you get a better sense of complex systems and how their parts interact with one another.
Techniques for effectively diagramming complex systems are beyond the scope of this book, but know that there are many you can learn, including causal loop diagrams (which showcase feedback loops in a system) and stock and flow diagrams (which showcase how things accumulate and flow in a system). Gabriel's master's thesis involved diagramming the email spam system. The picture on the next page is one of his causal loop diagrams. You aren't meant to understand it; it's just an example of what these diagrams can end up looking like. Just know that it was really helpful in gaining a much better understanding of this complex system.
Email Spam Causal Loop Diagram
As a further step, you can use software to imitate the system's behavior, creating what is called a simulation. In fact, software exists that allows you to compose a diagram of a system on your screen and then immediately turn it into a working simulation. (Two such programs that do this online are Insight Maker and True-World.) In the process, you can set initial conditions and then see how the system unfolds over time.
Simulations help you more deeply understand a complex system and lead to better predictions of black swans and other events. Simulations can also help you identify how a system will adjust when faced with changing conditions. Le Chatelier's principle, named after French chemist Henri-Louis Le Chatelier, states that when any chemical system at equilibrium is subject to a change in conditions, such as a shift in temperature, volume, or pressure, it readjusts itself into a new equilibrium state, usually one that partially counteracts the change.
For example, if someone hands you a box to carry, you don’t immediately topple over; you instead shift your weight distribution to account for the new weight. Or in economics, if a new tax is introduced, tax revenues from that tax end up being lower in the long run than one would expect under current conditions because people adjust their behavior to avoid the tax.
If this sounds like a familiar concept, it's because Le Chatelier's principle is similar to the mental model homeostasis (see Chapter 4), which comes from biology: recall how your body automatically shivers and sweats in response to external conditions in order to regulate its internal temperature. Le Chatelier's principle doesn't necessarily mean the system will regulate around a predetermined value, but that it will react to externally imposed conditions, usually in a way that partially counteracts the external stimulus. You can see the principle in action in real time with simulations because they allow you to calculate how your simulated system will adjust to various changes.
A related mental model that also arises in dynamic systems and simulations is hysteresis, which describes how a system’s current state can be dependent on its history. Hysteresis is also a naturally occurring phenomenon, with examples across most scientific disciplines. In physics, when you magnetize a material in one direction, such as by holding a magnet to another piece of metal, the metal does not fully demagnetize after you remove the magnet. In biology, the T cells that help power your immune system, once activated, thereafter require a lower threshold to reactivate. Hysteresis describes how both the metal and the T cells partially remember their states, such that what happened previously can impact what will happen next.
Again, this may already seem like a familiar concept, because it is similar to the mental model of path dependence (see Chapter 2), which more generally describes how choices have consequences in terms of limiting what you can do in the future. Hysteresis is one type of path dependence, as applied to systems.
In engineering systems, for example, it is useful to build some hysteresis into the system to avoid rapid changes. Modern thermostats do this by allowing for a range of temperatures around the set point: if you want to maintain 70 degrees Fahrenheit, a thermostat might be set to turn the heater on when the temperature drops to 68 degrees and back off when it hits 72 degrees. In this way, it isn’t kicking on and off constantly. Similarly, on websites, designers and developers often build in a lag for when you move your mouse off page elements like menus. They build their programs to remember that you were on the menu so that when you move off, it doesn’t abruptly go away, which can appear jarring to the eye.
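A minimal sketch of such a thermostat in Python is below, using the same 68/72 degree band; the history dependence lives in the line that keeps the previous heater state while the temperature is inside the band.

```python
# A thermostat with a hysteresis band: heat turns on at 68F and off at 72F,
# so it doesn't flip on and off constantly around a single 70F set point.
def update_heater(temperature, heater_on, low=68.0, high=72.0):
    if temperature <= low:
        return True      # too cold: turn (or keep) the heater on
    if temperature >= high:
        return False     # warm enough: turn (or keep) the heater off
    return heater_on     # inside the band: remember the previous state

heater = False
for temp in [71, 69, 68, 69, 71, 72, 70]:
    heater = update_heater(temp, heater)
    print(temp, "->", "heating" if heater else "off")
```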
You can use all these mental models around visualizing complex systems and simulating them to help you better assess potential outcomes and their associated probabilities. Then you can feed these results into a more straightforward decision model like a decision tree or cost-benefit analysis.
A particular type of simulation that can be especially useful in this way is a Monte Carlo simulation. Like critical mass (see Chapter 4), this is a model that emerged from the Manhattan Project in Los Alamos during the development of the atomic bomb. Physicist Stanislaw Ulam was struggling to use traditional mathematics to determine how far neutrons would travel through various materials and came up with this new method after playing solitaire (yes, the card game). In his words, quoted in Los Alamos Science:
The first thoughts and attempts I made to practice [the Monte Carlo method] were suggested by a question which occurred to me in 1946 as I was convalescing from an illness and playing solitaires. The question was what are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than “abstract thinking” might not be to lay it out say one hundred times and simply observe and count the number of successful plays.
A Monte Carlo simulation is actually many simulations run independently, with random initial conditions or other uses of random numbers within the simulation itself. By running a simulation of a system many times, you can begin to understand how probable different outcomes really are. Think of it as a dynamic sensitivity analysis.
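As a small sketch of the idea, here is the contractor decision tree again, this time sampled many times rather than solved exactly; the probabilities and payments are the same illustrative assumptions used earlier.

```python
# Monte Carlo version of the contractor decision tree: sample the outcome
# many times and average, instead of computing the expected value directly.
import random

outcomes = [(0.50, 2000), (0.25, 2250), (0.20, 2500), (0.05, 3000)]
probs = [p for p, _ in outcomes]
costs = [c for _, c in outcomes]

samples = random.choices(costs, weights=probs, k=100_000)
print(sum(samples) / len(samples))  # lands near the exact value of 2212.5
```

With many runs, the simulated average converges on the expected value, and the spread of the samples shows you how probable the different outcomes really are.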
Monte Carlo simulations are used in nearly every branch of science. But they are useful outside science as well. For example, venture capitalists often use Monte Carlo simulations to determine how much capital to reserve for future financings. When a venture fund invests in a company, that company, if successful, will probably raise more money in the future, and the fund will often want to participate in some of those future financings to maintain its ownership percentage. How much money should it reserve for a company? Not all companies are successful, and different companies raise different amounts, so the answer is not straightforward at the time of the initial investment. Many funds use Monte Carlo simulations to understand how much they ought to reserve, given their current fund history and the estimates of company success and size of potential financings.
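A toy version of that reserve question is sketched below. Every probability and dollar figure in it is a made-up assumption; real funds would base these inputs on their own history and portfolio models.

```python
# Toy Monte Carlo of follow-on reserves for a hypothetical 10-company portfolio.
import random

def simulate_reserves_needed(n_companies=10, trials=10_000):
    totals = []
    for _ in range(trials):
        total = 0.0
        for _ in range(n_companies):
            if random.random() < 0.4:              # assumed 40% raise again
                round_size = random.uniform(5, 20)  # assumed $5M-$20M round
                total += 0.15 * round_size          # assumed 15% pro-rata share
        totals.append(total)
    totals.sort()
    return totals[int(0.5 * trials)], totals[int(0.9 * trials)]

median, p90 = simulate_reserves_needed()
print(f"median reserves ~${median:.1f}M, 90th percentile ~${p90:.1f}M")
```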