The Great Mental Models
You all know the person who has all the answers on how to improve your organization, or the friend who has the cure to world hunger. While pontificating with friends over a bottle of wine at dinner is fun, it won’t help you improve. The only way you’ll know the extent to which you understand reality is to put your ideas and understanding into action. If you don’t test your ideas against the real world—keep contact with the earth—how can you be sure you understand?
Getting in our own way
The biggest barrier to learning from contact with reality is ourselves. It’s hard to understand a system that we are part of because we have blind spots, where we can’t see what we aren’t looking for, and don’t notice what we don’t notice.
« There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says “Morning, boys. How’s the water?” And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes “What the hell is water?” »
David Foster Wallace5
Our failures to update from interacting with reality spring primarily from three things: not having the right perspective or vantage point, ego-induced denial, and distance from the consequences of our decisions. As we will learn in greater detail throughout the volumes on mental models, these can all get in the way. They make it easier to keep our existing and flawed beliefs than to update them accordingly. Let’s briefly flesh these out:
The first flaw is perspective. We have a hard time seeing any system that we are in. Galileo had a great analogy to describe the limits of our default perspective. Imagine you are on a ship that has reached constant velocity (meaning without a change in speed or direction). You are below decks and there are no portholes. You drop a ball from your raised hand to the floor. To you, it looks as if the ball is dropping straight down, thereby confirming gravity is at work.
Now imagine you are a fish (with special x-ray vision) and you are watching this ship go past. You see the scientist inside, dropping a ball. You register the vertical change in the position of the ball. But you are also able to see a horizontal change. As the ball was pulled down by gravity it also shifted its position east by about 20 feet. The ship moved through the water and therefore so did the ball. The scientist on board, with no external point of reference, was not able to perceive this horizontal shift.
This analogy shows us the limits of our perception. We must be open to other perspectives if we truly want to understand the results of our actions. Even when we feel we have all the information, if we’re on the ship, the fish in the ocean can see something we can’t.
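To see the arithmetic behind the analogy, here is a minimal sketch in Python. The numbers are assumptions made for illustration (the text says only “about 20 feet”): a drop height of 1.5 metres and a ship speed chosen so the horizontal shift roughly matches that figure.

```python
import math

G = 9.81            # gravitational acceleration, m/s^2
DROP_HEIGHT = 1.5   # metres from raised hand to deck (assumed)
SHIP_SPEED = 11.0   # metres per second (assumed, picked so the
                    # horizontal shift comes out near 20 feet)

# Time for the ball to fall, from h = (1/2) * g * t^2
fall_time = math.sqrt(2 * DROP_HEIGHT / G)

# Scientist's frame (on board): the ball lands directly below the hand.
shift_on_board = 0.0

# Fish's frame (in the water): the ball keeps the ship's forward speed
# while it falls, so it also moves horizontally.
shift_in_water = SHIP_SPEED * fall_time  # metres

print(f"fall time: {fall_time:.2f} s")
print(f"shift seen on board: {shift_on_board:.1f} m")
print(f"shift seen by the fish: {shift_in_water:.1f} m "
      f"({shift_in_water * 3.281:.0f} ft)")
```

Both observers are right about what they measure; they simply measure from different frames. That is exactly the limit of perspective the analogy points at.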
The second flaw is ego. Many of us tend to have too much invested in our opinions of ourselves to see the world’s feedback—the feedback we need to update our beliefs about reality. This creates a profound ignorance that keeps us banging our head against the wall over and over again. Our inability to learn from the world because of our ego happens for many reasons, but two are worth mentioning here. First, we’re so afraid of what others will say about us that we fail to put our ideas out there and subject them to criticism. This way we can always be right. Second, if we do put our ideas out there and they are criticized, our ego steps in to protect us. We become invested in defending instead of upgrading our ideas.
The third flaw is distance. The further we are from the results of our decisions, the easier it is to keep our current views rather than update them. When you put your hand on a hot stove, you quickly learn the natural consequence. You pay the price for your mistakes. Since you are a pain-avoiding creature, you update your view. Before you touch another stove, you check to see if it’s hot. But you don’t just learn a micro lesson that applies in one situation. Instead, you draw a general abstraction, one that tells you to check before touching anything that could potentially be hot.
Organizations over a certain size often remove us from the direct consequences of our decisions. When we make decisions that other people carry out, we are one or more levels removed and may not immediately be able to update our understanding. We come a little off the ground, if you will. The further we are from the feedback of the decisions, the easier it is to convince ourselves that we are right and avoid the challenge, the pain, of updating our views.
Admitting that we’re wrong is tough. It’s easier to fool ourselves that we’re right at a high level than at the micro level, because at the micro level we see and feel the immediate consequences. When we touch that hot stove, the feedback is powerful and instantaneous. At a high or macro level we are removed from the immediacy of the situation, and our ego steps in to create a narrative that suits what we want to believe, instead of what really happened.
These flaws are the main reasons we keep repeating the same mistakes, and why we need to keep our feet on the ground as much as we can. As Confucius said, “A man who has committed a mistake and doesn’t correct it, is committing another mistake.”
The majority of the time, we don’t even perceive what conflicts with our beliefs. It’s much easier to go on thinking what we’ve already been thinking than go through the pain of updating our existing, false beliefs. When it comes to seeing what is—that is, understanding reality—we can follow Charles Darwin’s advice to notice things “which easily escape attention,” and ask why things happened.
We also tend to undervalue the elementary ideas and overvalue the complicated ones. Most of us get jobs based on some form of specialized knowledge, so this makes sense. We don’t think we have much value if we know the things everyone else does, so we focus our effort on developing unique expertise to set ourselves apart. The problem, then, is that we reject the simple to make sure that what we offer can’t be contributed by someone else. But simple ideas are of great value because they can help us prevent complex problems.
In identifying the Great Mental Models we have looked for elementary principles, the ideas from multiple disciplines that form a time-tested foundation. It may seem counterintuitive to work on developing knowledge that is available to everyone, but the universe works in the same way no matter where you are in it. What you need is to understand the principles, so that when the details change you are still able to identify what is really going on. This is part of what makes the Great Mental Models so valuable: once you understand the principles, you can easily change tactics to apply the ones you need.
« Most geniuses—especially those who lead others—prosper not by deconstructing intricate complexities but by exploiting unrecognized simplicities. »
Andy Benoit6
These elementary ideas, so often overlooked, come from multiple disciplines: biology, physics, chemistry, and more. They help us understand the interconnections of the world and see it as it really is. This understanding allows us to identify causal relationships, which let us match patterns, which in turn let us draw analogies. All of this so we can navigate reality with more clarity and comprehension of the real dynamics involved.
Understanding is not enough
However, understanding reality is not everything. The pursuit of understanding fuels meaning and adaptation, but this understanding, by itself, is not enough.
Understanding only becomes useful when we adjust our behavior and actions accordingly. The Great Models are not just theory. They are actionable insights that can be used to effect positive change in your life. What good is it to know that you constantly interrupt people if you fail to adjust your behavior in light of this? In fact, if you know and don’t change your behavior it often has a negative effect. People around you will tell themselves the simplest story that makes sense to them given what they see: that you just don’t care. Worse still, because you understand that you interrupt people, you’re surprised when you get the same results over and over. Why? You’ve failed to reflect on your new understanding and adjust your behavior.
In the real world you will either understand and adapt to find success or you will fail.
Now you can see how we make suboptimal decisions and repeat mistakes. We are afraid to learn and admit when we don’t know enough. This is the mindset that leads to poor decisions. They are a source of stress and anxiety, and consume massive amounts of time. Not when we’re making them—no, when we’re making them they seem natural because they align with our view of how we want things to work. We get tripped up when the world doesn’t work the way we want it to or when we fail to see what is. Rather than update our views, we double down on our effort, accelerating our frustrations and anxiety. It’s only weeks or months later, when we’re spending massive amounts of time fixing our mistakes, that they start to increase their burden on us. Then we wonder why we have no time for family and friends and why we’re so consumed by things outside of our control.
We are passive, thinking these things just happened to us and not that we did something to cause them. This passivity means that we rarely reflect on our decisions and the outcomes. Without reflection we cannot learn. Without learning we are doomed to repeat mistakes, become frustrated when the world doesn’t work the way we want it to, and wonder why we are so busy. The cycle goes on.
But we are not passive participants in our decisions. The world does not act on us as much as it reveals itself to us, and we respond. Ego gets in the way, locking reality behind a door it controls. Only through persistence in the face of having that door slammed on us over and over can we begin to see the light on the other side.
Ego, of course, is more than the enemy. It’s also our friend. If we had a perfect view of the world and made decisions rationally, we would never attempt to do the amazing things that make us human. Ego propels us. Why, without ego, would we even attempt to travel to Mars? After all, it’s never been done before. We’d never start a business because most of them fail. We need to learn to understand when ego serves us and when it hinders us. Wrapping ego up in outcomes instead of in ourselves makes it easier to update our views.
We optimize for short-term ego protection over long-term happiness. Increasingly, our understanding of things becomes black and white rather than shades of grey. When things happen in accord with our view of the world we naturally think they are good for us and others. When they conflict with our views, they are wrong and bad. But the world is smarter than we are and it will teach us all we need to know if we’re open to its feedback—if we keep our feet on the ground.
Despite having consistently bad results for patients, bloodletting was practiced for over 2,000 years.
Mental models and how to use them
Perhaps an example will help illustrate the mental models approach. Think of gravity, something we learned about as kids and perhaps studied more formally in university as adults. We each have a mental model about gravity, whether we know it or not. And that model helps us to understand how gravity works. Of course we don’t need to know all of the details, but we know what’s important. We know, for instance, that if we drop a pen it will fall to the floor. If we see a pen on the floor we come to a probabilistic conclusion that gravity played a role.
This model plays a fundamental role in our lives. It explains the movement of the Earth around the sun. It informs the design of bridges and airplanes. It’s one of the models we use to evaluate the safety of leaning on a guard rail or repairing a roof. But we also apply our understanding of gravity in other, less obvious ways. We use the model as a metaphor to explain the influence of strong personalities, as when we say, “He was pulled into her orbit.” This is a reference to our basic understanding of the role of mass in gravity—the more there is the stronger the pull. It also informs some classic sales techniques. Gravity diminishes with distance, and so too does your propensity to make an impulse buy. Good salespeople know that the more distance you get, in time or geography, between yourself and the object of desire, the less likely you are to buy. Salespeople try to keep the pressure on to get you to buy right away.
Gravity has been around since before humans, so we can consider it to be time-tested, reliable, and representing reality. And yet, can you explain gravity with a ton of detail? I highly doubt it. And you don’t need to for the model to be useful to you. Our understanding of gravity, in other words, our mental model, lets us anticipate what will happen and also helps us explain what has happened. We don’t need to be able to describe the physics in detail for the model to be useful.
However, not every model is as reliable as gravity, and all models are flawed in some way. Some are reliable in some situations but useless in others. Some are too limited in their scope to be of much use. Others are unreliable because they haven’t been tested and challenged, and yet others are just plain wrong. In every situation, we need to figure out which models are reliable and useful. We must also discard or update the unreliable ones, because unreliable or flawed models come with a cost.
For a long time people believed that bloodletting cured many different illnesses. This mistaken belief actually led doctors to contribute to the deaths of many of their patients. When we use flawed models we are more likely to misunderstand the situation, the variables that matter, and the cause and effect relationships between them. Because of such misunderstandings we often take suboptimal actions, like draining so much blood out of patients that they die from it.
Better models mean better thinking. The degree to which our models accurately explain reality is the degree to which they improve our thinking. Understanding reality is the name of the game. Understanding not only helps us decide which actions to take; it also helps us remove or avoid actions with big downsides we would otherwise not be aware of. Not only do we understand the immediate problem with more accuracy, but we can begin to see the second-, third-, and higher-order consequences. This understanding helps us eliminate avoidable errors. Sometimes making good decisions boils down to avoiding bad ones.
Flawed models, regardless of intentions, cause harm when they are put to use. When it comes to applying mental models we tend to run into trouble either when our model of reality is wrong, that is, it doesn’t survive real world experience, or when our model is right and we apply it to a situation where it doesn’t belong.
Models that don’t hold up to reality cause massive mistakes. Consider that the model of bloodletting as a cure for disease caused unnecessary death because it weakened patients when they needed all their strength to fight their illnesses. It hung around for such a long time because it was part of a package of flawed models, such as those explaining the causes of sickness and how the human body worked, that made it difficult to determine exactly where it didn’t fit with reality.
We compound the problem of flawed models when we fail to update our models when evidence indicates they are wrong. Only by repeated testing of our models against reality and being open to feedback can we update our understanding of the world and change our thinking. We need to look at the results of applying the model over the largest sample size possible to be able to refine it so that it aligns with how the world actually works.
The power of acquiring new models
The quality of our thinking is largely influenced by the mental models in our heads. While we want accurate models, we also want a wide variety of models to uncover what’s really happening. The key here is variety. Most of us study something specific and don’t get exposure to the big ideas of other disciplines. We don’t develop the multidisciplinary mindset that we need to accurately see a problem. And because we don’t have the right models to understand the situation, we overuse the models we do have and use them even when they don’t belong.
You’ve likely experienced this firsthand. An engineer will often think in terms of systems by default. A psychologist will think in terms of incentives. A business person might think in terms of opportunity cost and risk-reward. Through their disciplines, each of these people sees part of the situation, the part of the world that makes sense to them. None of them, however, sees the entire situation unless they are thinking in a multidisciplinary way. In short, they have blind spots. Big blind spots. And they’re not aware of their blind spots. There is an old adage that encapsulates this: “To the man with only a hammer, everything starts looking like a nail.” Not every problem is a nail. The world is full of complications and interconnections that can only be explained through an understanding of multiple models.
What Can the Three Buckets of Knowledge Teach Us About History?
“Every statistician knows that a large, relevant sample size is their best friend. What are the three largest, most relevant sample sizes for identifying universal principles? Bucket number one is inorganic systems, which are 13.7 billion years in size. It’s all the laws of math and physics, the entire physical universe. Bucket number two is organic systems, 3.5 billion years of biology on Earth. And bucket number three is human history, you can pick your own number, I picked 20,000 years of recorded human behavior. Those are the three largest sample sizes we can access and the most relevant.” —Peter Kaufman
The larger and more relevant the sample size, the more reliable the model built on it. But the key to sample sizes is to look for them not just across space, but across time. You need to reach back into the past as far as you can to contribute to your sample. We have a tendency to think that how the world is, is how it always was, and so we get caught up validating our assumptions from what we find in the here and now. But the continents used to be pushed against each other, dinosaurs walked the planet for millions of years, and we are not the only hominid to have evolved. Looking to the past provides essential context for understanding where we are now.
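To make that statistical point concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration (Gaussian noise, the particular numbers); it simply shows that estimates built on larger samples land reliably closer to the truth.

```python
import random
import statistics

# Toy model: estimate one true quantity from noisy observations,
# using samples of different sizes.
random.seed(42)
TRUE_MEAN = 10.0   # the value we are trying to recover (assumed)
NOISE = 5.0        # standard deviation of each observation (assumed)

def estimate(n):
    """Average of n noisy observations of the same quantity."""
    return statistics.fmean(random.gauss(TRUE_MEAN, NOISE) for _ in range(n))

for n in (10, 1_000, 10_000):
    # Typical error of the estimate, averaged over 100 repetitions.
    errors = [abs(estimate(n) - TRUE_MEAN) for _ in range(100)]
    print(f"n = {n:>6}: typical error ~ {statistics.fmean(errors):.3f}")
```

The typical error shrinks roughly with the square root of the sample size, which is one way to read Kaufman’s point: the bigger and more relevant the bucket, the harder it is for noise and coincidence to masquerade as a principle.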
Removing blind spots means thinking through the problem using different lenses or models. When we do this the blind spots slowly go away and we gain an understanding of the problem.
We’re much like the blind men in the classic parable of the elephant, going through life trying to explain everything through one limited lens. Too often that lens is driven by our particular field, be it economics, engineering, physics, mathematics, biology, chemistry, or something else entirely. Each of these disciplines holds some truth and yet none of them contain the whole truth.