Super Thinking


by Gabriel Weinberg


  Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better.

  Just as it pays off to make your financial portfolio antifragile in the face of economic shocks, it similarly pays off to make your thinking antifragile in the face of new decisions. If your thinking is antifragile, then it gets better over time as you learn from your mistakes and interact with your surroundings. It’s like working out at the gym—you are shocking your muscles and bones so they grow stronger over time. We’d like to improve your thought process by helping you incorporate mental models into your day-to-day thinking, increasingly matching the right models to a given situation.

  By the time you’ve finished reading this book, you will have more than three hundred mental models floating around in your head from dozens of disciplines, eager to pop up at just the right time. You don’t have to be an expert at tennis or financial analysis to benefit from these concepts. You just need to understand their broader meaning and apply them when appropriate. If you apply these mental models consistently and correctly, your decisions will be wrong much less often or, inverted, right much more often. That’s super thinking.

  In this chapter we’re going to explore solving problems without bias. Unfortunately, evolution has hardwired us with several mind traps. If you are not aware of them, you will make poor decisions by default. But if you can recognize these traps from afar and avoid them by using some tried-and-true techniques, you will be well on the path to super thinking.

  KEEP IT SIMPLE, STUPID!

  Any science or math teacher worth their salt stresses the importance of knowing how to derive every formula that you use, because only then do you really know it. It’s the difference between being able to attack a math problem with a blank sheet of paper and needing a formula handed to you to begin with. It’s also the difference between being a chef—someone who can take ingredients and turn them into an amazing dish without looking at a cookbook—and being the kind of cook who just knows how to follow a recipe.

  Lauren was the teaching assistant for several statistics courses during her years at MIT. One course had a textbook that came with a computer disk, containing a simple application that could be used as a calculator for the statistical formulas in the book. On one exam, a student wrote the following answer to one of the statistical problems posed: “I would use the disk and plug the numbers in to get the answer.” The student was not a chef.

  The central mental model to help you become a chef with your thinking is arguing from first principles. It’s the practical starting point to being wrong less, and it means thinking from the bottom up, using basic building blocks of what you think is true to build sound (and sometimes new) conclusions. First principles are the group of self-evident assumptions that make up the foundation on which your conclusions rest—the ingredients in a recipe or the mathematical axioms that underpin a formula.

  Given a set of ingredients, a chef can adapt and create new recipes, as on Chopped. If you can argue from first principles, then you can do the same thing when making decisions, coming up with novel solutions to hard problems. Think MacGyver, or the true story depicted in the movie Apollo 13 (which you should watch if you haven’t), where a malfunction on board the spacecraft necessitated an early return to Earth and the creation of improvised devices to make sure, among other things, that there was enough usable air for the astronauts to breathe on the trip home.

  NASA engineers figured out a solution using only the “ingredients” on the ship. In the movie, an engineer dumps all the parts available on the spacecraft on a table and says, “We’ve got to find a way to make this [holding up square canister] fit into the hole for this [holding up round canister] using nothing but that [pointing to parts on the table].”

  If you can argue from first principles, then you can more easily approach unfamiliar situations, or approach familiar situations in innovative ways. Understanding how to derive formulas helps you to understand how to derive new formulas. Understanding how molecules fit together enables you to build new molecules. Tesla founder Elon Musk illustrates how this process works in practice in an interview on the Foundation podcast:

  First principles is kind of a physics way of looking at the world. . . . You kind of boil things down to the most fundamental truths and say, “What are we sure is true?” . . . and then reason up from there. . . .

  Somebody could say . . . “Battery packs are really expensive and that’s just the way they will always be. . . . Historically, it has cost $600 per kilowatt-hour, and so it’s not going to be much better than that in the future.” . . .

  With first principles, you say, “What are the material constituents of the batteries? What is the spot market value of the material constituents?” . . . It’s got cobalt, nickel, aluminum, carbon, and some polymers for separation, and a steel can. Break that down on a material basis and say, “If we bought that on the London Metal Exchange, what would each of those things cost?” . . .

  It’s like $80 per kilowatt-hour. So clearly you just need to think of clever ways to take those materials and combine them into the shape of a battery cell and you can have batteries that are much, much cheaper than anyone realizes.
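  To make that arithmetic concrete, here is a minimal sketch of the calculation in Python. The material list follows the quote, but every quantity and price below is an illustrative placeholder, not a real London Metal Exchange figure:

```python
# First-principles estimate of battery cost: price the raw materials
# instead of accepting the historical $600-per-kWh figure.
# All quantities and prices are illustrative placeholders, NOT real data.

materials = {
    # material: (kg needed per kWh of cells, price in $ per kg)
    "cobalt":   (0.2, 30.0),
    "nickel":   (0.8, 15.0),
    "aluminum": (0.7, 2.5),
    "carbon":   (1.0, 1.0),
    "polymers": (0.3, 5.0),
    "steel":    (0.5, 1.0),
}

cost_per_kwh = sum(kg * price for kg, price in materials.values())
print(f"Raw-material cost: ${cost_per_kwh:.2f} per kWh")

# If this number sits far below the finished pack's market price, the gap
# is manufacturing and margin: the "clever ways to combine those
# materials" that Musk describes.
```

  The point is not the placeholder numbers but the method: price the irreducible inputs, then treat everything above that floor as a design problem rather than a law of nature.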

  When arguing from first principles, you are deliberately starting from scratch. You are explicitly avoiding the potential trap of conventional wisdom, which could turn out to be wrong. Even if you end up in agreement with conventional wisdom, by taking the first-principles approach, you will gain a much deeper understanding of the subject at hand.

  Any problem can be approached from first principles. Take your next career move. Most people looking for work will apply to too many jobs and take the first job that is offered to them, which is likely not the optimal choice. When using first principles, you’ll instead begin by thinking about what you truly value in a career (e.g., autonomy, status, mission, etc.), your required job parameters (financial, location, title, etc.), and your previous experience. When you add those up, you will get a much better picture of what might work best for your next career move, and then you can actively seek that out.

  Thinking alone, though, even from first principles, only gets you so far. Your first principles are merely assumptions that may be true, false, or somewhere in between. Do you really value autonomy in a job, or do you just think you do? Is it really true you need to go back to school to switch careers, or might it actually be unnecessary?

  Ultimately, to be wrong less, you also need to be testing your assumptions in the real world, a process known as de-risking. There is a risk that one or more of your assumptions are untrue, and so the conclusions you reach could also be false.

  As another example, any startup business idea is built upon a series of principled assumptions:

  My team can build our product.

  People will want our product.

  Our product will generate profit.

  We will be able to fend off competitors.

  The market is large enough for a long-term business opportunity.

  You can break these general assumptions down into more specific assumptions:

  My team can build our product. We have the right number and type of engineers; our engineers have the right expertise; our product can be built in a reasonable amount of time; etc.

  People will want our product. Our product solves the problem we think it does; our product is simple enough to use; our product has the critical features needed for success; etc.

  Our product will generate profit. We can charge more for our product than it costs to make and market it; we have good messaging to market our product; we can sell enough of our product to cover our fixed costs; etc.

  We will be able to fend off competitors. We can protect our intellectual property; we are doing something that is difficult to copy; we can build a trusted brand; etc.

  The market is large enough for a long-term business opportunity. There are enough people out there who will want to buy our product; the market for our product is growing rapidly; the bigger we get, the more profit we can make; etc.

  Once you get specific enough with your assumptions, then you can devise a plan to test (de-risk) them. The most important assumptions to de-risk first are the ones that are necessary conditions for success and that you are most uncertain about. For example, in the startup context, take the assumption that your solution sufficiently solves the problem it was designed to solve. If this assumption is untrue, then you will need to change what you are doing immediately before you can proceed any further, because the whole endeavor won’t work otherwise.
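  One way to operationalize that rule is to score each assumption on whether it is necessary for success and on how uncertain you are about it, then test the necessary, high-uncertainty ones first. A minimal sketch in Python; the assumptions are drawn from the startup list above, and the scores are hypothetical:

```python
# Rank assumptions for de-risking: test the ones that are necessary
# for success AND highly uncertain first. All scores are hypothetical.

assumptions = [
    # (assumption, necessary for success?, uncertainty from 0.0 to 1.0)
    ("Our product solves the problem we think it does", True, 0.9),
    ("The market is growing rapidly",                   True, 0.6),
    ("Our engineers have the right expertise",          True, 0.3),
    ("We can protect our intellectual property",        False, 0.7),
]

# Necessary assumptions sort ahead of optional ones; within each group,
# the most uncertain come first.
for text, necessary, uncertainty in sorted(
    assumptions, key=lambda a: (not a[1], -a[2])
):
    tag = "test first" if necessary and uncertainty > 0.5 else "later"
    print(f"[{tag:10}] uncertainty {uncertainty:.1f}: {text}")
```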

  Once you identify the critical assumptions to de-risk, the next step is actually going out and testing these assumptions, proving or disproving them, and then adjusting your strategy appropriately.

  Just as the concept of first principles is universally applicable, so is de-risking. You can de-risk anything: a policy idea, a vacation plan, a workout routine. When de-risking, you want to test assumptions quickly and easily. Take a vacation plan. Assumptions could be around cost (“I can afford this vacation”), satisfaction (“I will enjoy this vacation”), coordination (“my relatives can join me on this vacation”), etc. Here, de-risking is as easy as doing a few minutes of online research, reading reviews, and sending an email to your relatives.

  Unfortunately, people often make the mistake of doing way too much work before testing assumptions in the real world. In computer science this trap is called premature optimization, where you tweak or perfect code or algorithms (optimize) too early (prematurely). If your assumptions turn out to be wrong, you’re going to have to throw out all that work, rendering it ultimately a waste of time.

  It’s as if you booked an entire vacation assuming your family could join you, only to finally ask them and they say they can’t come. Then you have to go back and change everything, but all this work could have been avoided by a simple communication up front.

  Back in startup land, there is another mental model to help you test your assumptions, called minimum viable product, or MVP. The MVP is the product you are developing with just enough features, the minimum amount, to be feasibly, or viably, tested by real people.

  The MVP keeps you from working by yourself for too long. LinkedIn cofounder Reid Hoffman put it like this: “If you’re not embarrassed by the first version of your product, you’ve launched too late.”

  As with many useful mental models, you will frequently be reminded of the MVP now that you are familiar with it. An oft-quoted military adage says: “No battle plan survives contact with the enemy.” Boxer Mike Tyson put it more bluntly before his 1996 bout against Evander Holyfield: “Everybody has a plan until they get punched in the mouth.” No matter the context, what they’re all saying is that your first plan is probably wrong. While it is the best starting point you have right now, you must revise it often based on the real-world feedback you receive. And we recommend doing as little work as possible before getting that real-world feedback.

  As with de-risking, you can extend the MVP model to fit many other contexts: minimum viable organization, minimum viable communication, minimum viable strategy, minimum viable experiment. Since we have so many mental models to get to, we’re trying to do minimum viable explanations!

  [Figure: Minimum Viable Product (Vision, MVP, 2.0)]

  The MVP forces you to evaluate your assumptions quickly. One way you can be wrong with your assumptions is by coming up with too many or too complicated assumptions up front when there are clearly simpler sets you can start with. Ockham’s razor helps here. It advises that the simplest explanation is most likely to be true. When you encounter competing explanations that plausibly explain a set of data equally well, you probably want to choose the simplest one to investigate first.

  This model is a razor because it “shaves off” unnecessary assumptions. It’s named after fourteenth-century English philosopher William of Ockham, though the underlying concept has much older roots. The Greco-Roman astronomer Ptolemy (circa A.D. 90–168) stated, “We consider it a good principle to explain the phenomena by the simplest hypotheses possible.” More recently, the composer Roger Sessions, paraphrasing Albert Einstein, put it like this: “Everything should be made as simple as it can be, but not simpler!” In medicine, it’s known by this saying: “When you hear hoofbeats, think of horses, not zebras.”

  A practical tactic is to look at your explanation of a situation, break it down into its constituent assumptions, and for each one, ask yourself: Does this assumption really need to be here? What evidence do I have that it should remain? Is it a false dependency?

  For example, Ockham’s razor would be helpful in the search for a long-term romantic partner. We’ve seen firsthand that many people have a long list of extremely specific criteria for their potential mates, enabled by online dating sites and apps. “I will only date a Brazilian man with blue eyes who loves hot yoga and raspberry ice cream, and whose favorite Avengers character is Thor.”

  However, this approach leads to an unnecessarily small dating pool. If instead people reflected on whom they’ve dated in the past in terms of what underlying characteristics drove their past relationships to fail, a much simpler set of dating criteria would probably emerge. It is usually okay for partners to have more varied cultural backgrounds and looks, and even to prefer different Avengers characters, but they probably do need to make each other think and laugh and find each other attractive.

  Therefore, a person shouldn’t narrow their dating pool unnecessarily with overly specific criteria. If it turns out that dating someone who doesn’t share their taste in superheroes really does doom the relationship, then they can always add that specific filter back in.

  Ockham’s razor is not a “law” that is always true; it just offers guidance. Sometimes the true explanation can indeed be quite complicated. However, there is no reason to jump immediately to the complex explanation when you have simpler alternatives to explore first.

  If you don’t simplify your assumptions, you can fall into a couple of traps, described in our next mental models. First, most people are, unfortunately, hardwired to latch onto unnecessary assumptions, a predilection called the conjunction fallacy, studied by Amos Tversky and Daniel Kahneman, who provided this example in the October 1983 Psychological Review:

  Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

  Which is more probable?

  1. Linda is a bank teller.

  2. Linda is a bank teller and is active in the feminist movement.

  In their study, most people answered that number 2 is more probable, but that cannot be the case: every bank teller who is active in the feminist movement is still a bank teller, so the second option can never be more likely than the first. The fallacy arises because the probability of two events in conjunction is always less than or equal to the probability of either one of the events occurring alone, a concept illustrated in the Venn diagram on the next page.
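  You can verify the inequality directly. A minimal sketch with made-up probabilities; whatever the conditional probability that a bank teller is also a feminist, the conjunction can never exceed the single event:

```python
# The conjunction rule: P(A and B) <= P(A) for any events A and B.
# The probabilities below are made up for illustration.

p_teller = 0.05                  # P(Linda is a bank teller)
p_feminist_given_teller = 0.40   # P(active feminist | bank teller)

# P(teller and feminist) = P(teller) * P(feminist | teller)
p_both = p_teller * p_feminist_given_teller

print(f"P(teller)              = {p_teller:.3f}")   # 0.050
print(f"P(teller and feminist) = {p_both:.3f}")     # 0.020

# Holds for any inputs, since a conditional probability can't exceed 1:
assert p_both <= p_teller
```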

  You not only have a natural tendency to think something specific is more probable than something general, but you also have a similarly fallacious tendency to explain data using too many assumptions. The mental model for this second fallacy is overfitting, a concept from statistics. Adding in all those overly specific dating requirements is overfitting your dating history. Similarly, believing you have cancer when you have a cold is overfitting your symptoms.

  [Figure: Conjunction Fallacy (Venn diagram)]

  Overfitting occurs when you use an overly complicated explanation when a simpler one will do. It’s what happens when you don’t heed Ockham’s razor, when you get sucked into the conjunction fallacy or make a similar unforced error. It can occur in any situation where an explanation introduces unnecessary assumptions.

  As a visual example, the data depicted on the next page can be easily explained by a straight line, but you could also overfit the data by creating a curved one that moves through every single point, as the wavy line does.
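  Here is a minimal sketch of the same picture in code, using NumPy with synthetic data: a two-parameter straight line captures the trend, while a polynomial with as many parameters as data points threads through every point, fitting the noise, and then extrapolates wildly:

```python
import numpy as np

# Synthetic data: a straight-line trend plus random noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 8)
y = 2.0 * x + 1.0 + rng.normal(scale=0.2, size=x.size)

# Simple model: a straight line (2 parameters).
line = np.polyfit(x, y, deg=1)

# Overfit model: a degree-7 polynomial (8 parameters for 8 points)
# passes through every data point, noise included.
wavy = np.polyfit(x, y, deg=7)

# The wavy fit has ~zero error on the points it has already seen...
print("line max residual:", np.abs(np.polyval(line, x) - y).max())
print("wavy max residual:", np.abs(np.polyval(wavy, x) - y).max())

# ...but look what each predicts just beyond the data, at x = 1.2:
print("line at x=1.2:", np.polyval(line, 1.2))
print("wavy at x=1.2:", np.polyval(wavy, 1.2))
```

  The wavy model’s near-zero error on the observed points is exactly the trap: it has memorized the noise, so the straight line generalizes better even though it fits the data it has seen less perfectly.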

  One approach to combating both traps is to ask yourself: How much does my data really support my conclusion versus other conclusions? Do my symptoms really point only to cancer, or could they also point to a variety of other ailments, such as the common cold? Do I really need the curvy line to explain the data, or would a simple straight line explain just as much?

  A pithy mnemonic for this advice, and for all the advice in this section, is KISS: Keep It Simple, Stupid! When crafting a solution to a problem, whether making a decision or explaining data, you want to start with the simplest set of assumptions you can think of and de-risk them as simply as possible.

  [Figure: Overfitting (a straight line vs. a wavy line through the same data)]

  IN THE EYE OF THE BEHOLDER

  You go through life seeing everything from your perspective, which varies widely depending on your particular life experiences and current situation.

  In physics your perspective is called your frame of reference, a concept central to Einstein’s theory of relativity. Here’s an example from everyday life: If you are in a moving train, your reference frame is inside the train, which appears at rest to you, with objects inside the train not moving relative to one another, or to yourself. However, to someone outside the train looking in, you and all the objects in the train are moving at great speed, as seen from their different frame of reference, which is stationary to them. In fact, everything but the speed of light—even time—appears different in different frames of reference.

  If you’re trying to be as objective as possible when making a decision or solving a problem, you always want to account for your frame of reference. You will of course be influenced by your perspective, but you don’t want to be unknowingly influenced. And if you think you may not have the full understanding of a situation, then you must actively try to get it by looking from a variety of different frames of reference.

 
