
More Than You Know


by Michael J. Mauboussin


  A Useful Analogy

  Long-term success in any of these probabilistic exercises shares some common features. I summarize four of them:

  • Focus. Professional gamblers do not play a multitude of games—they don’t stroll into a casino and play a little blackjack, a little craps, spend a little time on the slot machine. They focus on a specific game and learn the ins and outs. Similarly, most investors must define a circle of competence—areas of relative expertise. Seeking a competitive edge across a spectrum of industries and companies is a challenge, to say the least. Most great investors stick to their circle of competence.

  • Lots of situations. Players of probabilistic games must examine lots of situations because the market price is usually pretty accurate. Investors, too, must evaluate lots of situations and gather lots of information. For example, the very successful president and CEO of Geico’s capital operations, Lou Simpson, tries to read five to eight hours a day and trades very infrequently.

  • Limited opportunities. As Thorp notes in Beat the Dealer, even when you know what you’re doing and play under ideal circumstances, the odds still favor you less than 10 percent of the time. And rarely does anyone play under ideal circumstances. The message for investors is that even when you are competent, favorable situations—where you have a clear-cut variant perception vis-à-vis the market—don’t appear very often.

  • Ante. In the casino, you must bet every time to play. Ideally, you can bet a small amount when the odds are poor and a large sum when the odds are favorable, but you must ante to play the game. In investing, on the other hand, you need not participate when you perceive the expected value as unattractive, and you can bet aggressively when a situation appears attractive (within the constraints of an investment policy, naturally). In this way, investing is much more favorable than other games of probability.

  Constantly thinking in expected-value terms requires discipline and is somewhat unnatural. But the leading thinkers and practitioners from varied fields have converged on the same formula: focus not on the frequency of correctness but on the magnitude of correctness.
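
  A minimal sketch of this idea, with invented, illustrative numbers: a position that pays off only 30 percent of the time can still carry a positive expected value if the wins are large enough relative to the losses.

```python
# Hypothetical numbers (not from the text): right only 30% of the time,
# but the payoff when right is three times the loss when wrong.
p_win, p_lose = 0.30, 0.70
gain_if_right = 3.00    # assumed payoff per $1 at risk
loss_if_wrong = -1.00   # assumed loss per $1 at risk

# Expected value = sum of (probability x payoff) over the outcomes.
expected_value = p_win * gain_if_right + p_lose * loss_if_wrong
print(f"Expected value per $1 at risk: {expected_value:+.2f}")  # +0.20
```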

  4

  Sound Theory for the Attribute Weary

  The Importance of Circumstance-Based Categorization

  One reason why platitudes and fads in management come and go with such predictability is they typically are not grounded in a robust categorization scheme. They are espoused as one-size-fits-all statements of cause and effect. Hence, managers try the fad out because it sounds good, and then discard it when they encounter circumstances in which the recommended actions do not yield the predicted results. Their conclusion most often is, “It doesn’t work”—when the reality often is that it works well in some (as yet undefined) circumstances, but not others.

  —Clayton M. Christensen, Paul Carlile, and David Sundahl, “The Process of Theory-Building”

  Circumstance Over Attributes

  You’d probably guess it isn’t too hard to categorize slime mold, the somewhat yucky stuff you see on walks through cool, damp parts of the forest. But you’d be wrong. As it turns out, slime mold has some strange behavior—so strange, in fact, that it stumped scientists for centuries.

  When food is abundant, slime mold cells operate as independent single-celled units. They move around, eat bacteria, and divide to reproduce. When food is in short supply, however, the slime-mold cells converge and form a cluster of tens of thousands of cells. The cells literally stop acting as individuals and start acting like a collective. That’s why slime mold is so hard to categorize: it is an “it” or a “they” depending on the circumstances.1

  Investment approaches based solely on attributes, without considering the circumstances, also don’t make sense. Sometimes a stock that looks expensive is cheap, and what looks cheap is expensive. It’s context dependent.

  Yet investment consultants encourage, nay, compel most investment professionals to articulate an attribute-based investment approach and stick with it. The game is pretty straightforward. Growth investors strive to beat the market by filling their portfolios with companies that are rapidly increasing sales and earnings, without too much concern about valuation. Value investors load up on cheap stocks with a decent yield and consider corporate growth gravy.

  Organizational or external constraints aside, most money managers actually believe their attribute-based investment style—combined with their skill—will generate market-beating results.2 These various investment approaches are grounded in theory: a belief that investor actions will lead to satisfactory outcomes.

  The word “theory,” however, makes most investors and corporate managers leery because they associate theory with theoretical, which implies impractical. But if you define theory as a contingent explanation of cause and effect, it is eminently practical. A sound theory helps predict how actions or events lead to specific outcomes across a broad range of circumstances.3

  The main message is that much of investment theory is unsound because it is based on poor categorization. We can say the same about much of management theory.4 More specifically, investors generally dwell on attribute-based categorizations (like low multiples) versus circumstance-based categorizations. A shift from attribute- to circumstance-based thinking can be of great help to investors and managers. Take a lesson from the slime mold.

  The Three Steps of Theory Building

  In a thought-provoking paper, Clayton Christensen, Paul Carlile, and David Sundahl break the process of theory building into three stages (see exhibit 4.1). I discuss each of these stages and provide some perspective on how this general theory-building process applies specifically to investing:

  1. Describe what you want to understand in words and numbers. In this stage, the goal is to carefully observe, describe, and measure a phenomenon to be sure that subsequent researchers can agree on the subject.

  Stock market performance is an example of a phenomenon that requires good theory. Today we largely take for granted this descriptive phase for the market, but the first comprehensive study of the performance of all stocks wasn’t published until 1964. In that paper, University of Chicago professors Lawrence Fisher and James Lorie documented that stocks delivered about a 9 percent return from 1926 to 1960. Peter Bernstein notes that the article was a “bombshell” that “astonished” academics and practitioners alike. The description itself caused a stir in the finance and investing worlds.5

  2. Classify the phenomena into categories based on similarities. Categorization simplifies and organizes the world so as to clarify differences between phenomena. An example of categorization in physics is solids, liquids, and gases. In innovation research—Christensen’s specialty—the categories are sustaining and disruptive innovations.

  Investing has many variations of categorization, including value versus growth stocks, high risk versus low risk, and large- versus small-capitalization stocks. These categories are deeply ingrained in the investment world, and many investment firms and their products rely on these categories.

  3. Build a theory that explains the behavior of the phenomena. A robust theory based on sound categorization explains cause and effect, why the cause and effect works, and most critically under what circumstances the cause and effect operates. Importantly, a theory must be falsifiable.

  The investment world is filled with theories about investment returns. Proponents of the efficient-market theory argue that no strategy exists to generate superior risk-adjusted investment returns. Active money managers pursue myriad strategies—many tied to specific style boxes—based on the theory that their approach will lead to excess returns.

  How does a theory improve? Once researchers develop a theory, they can then use it to predict what they will see under various circumstances.

  EXHIBIT 4.1 The Process of Building Theory

  Source: Christensen, Carlile, and Sundahl, “The Process of Theory-Building.” Reproduced with permission.

  In so doing, they often find anomalies, or results that are inconsistent with the theory (see the right side of exhibit 4.1). Anomalies force researchers to revisit the description and categorization stages. The goal is to be able to explain the phenomenon in question more accurately and thoroughly than in the prior theory. Proper theory building requires researchers to cycle through the stages in search of greater and greater predictive power.

  That a theory must be falsifiable is a challenge for economists because a number of economic constructs assume an answer in their definitions. One example is utility maximization, which asserts that individuals act so as to maximize utility. But since we can define utility in any way that is consistent with the outcome, we can’t falsify the construct.

  An example from finance is the capital asset pricing model. Economists use the CAPM to test market efficiency, while the CAPM assumes market efficiency. In the words of noted financial economist Richard Roll, any test of CAPM is “really a joint test of CAPM and market efficiency.”6 Christensen et al. suggest that a number of central concepts in economics should be properly labeled as “constructs” rather than “theories” precisely because they cannot be directly falsified.

  To be sure, not all researchers are committed to improving theory. Many are satisfied to develop a theory and demonstrate that it is not false. Much of the advice that management consultants dole out fits this description. For instance, the consultants may argue that “outsourcing is good” and find a few examples to “confirm” the theory. Since researchers have not refined the theory by iterating it through the description/categorization/improved-theory process, the theory may not be at all robust. The theory looks good on paper but fails upon implementation.7

  When, Not What

  Perhaps the single most important message from Christensen et al. is that proper categorization is essential to good theory. More specifically, theories evolve from attribute-based categories to circumstance-based categories as they improve. Theories that rest on circumstance-based categories tell practitioners what to do in different situations. In contrast, attribute-based categories prescribe action based on the traits of the phenomena.

  This message is critical for investors, who often rely heavily on attribute-based categories. One example is low-price-earnings-multiple investing, often a central plank in the value investor’s theory. An investor would have fared poorly using the P/E as a tool to time moves into the market (when the ratio is low) and out of it (when the ratio is high) over the past 125 years.8 This doesn’t mean that low P/Es are bad, but it does mean that buying the market when the P/E is low is not a valid theory for generating superior long-term returns.

  Indeed, onlookers often describe the investment strategy of successful investors as eclectic. Perhaps it is more accurate to describe their approach as circumstance-based, not attribute-based. Legg Mason Value Trust’s Bill Miller, the only fund manager in the past four decades to beat the S&P 500 fifteen years in a row, is a good case in point. Miller’s approach is decidedly circumstance based, yet he is routinely criticized for straying from an attribute-based mindset:

  Legg Mason Value’s portfolio has hardly reflected the stocks with low price-to-book and price-to-earnings ratios you would expect to find in a value fund. According to Morningstar, at the end of 1999 its price-to-book ratio was 178 percent higher than the value category average, and its price-to-earnings ratio was 45 percent higher than average.9

  All investors use theory, either wittingly or unwittingly. The lesson from the process of theory building is that sound theories reflect context. Too many investors cling to attribute-based approaches and wring their hands when the market doesn’t conform to what they think it should do.

  5

  Risky Business

  Risk, Uncertainty, and Prediction in Investing

  The practical difference between . . . risk and uncertainty . . . is that in the former the distribution of the outcome in a group of instances is known . . . while in the case of uncertainty, this is not true . . . because the situation dealt with is in high degree unique.

  —Frank H. Knight, Risk, Uncertainty, and Profit

  Our knowledge of the way things work, in society or in nature, comes trailing clouds of vagueness. Vast ills have followed a belief in certainty.

  —Kenneth Arrow, “I Know a Hawk from a Handsaw”

  Rocket Science

  Cognitive scientist Gerd Gigerenzer noted something unusual when he took a guided tour through Daimler-Benz Aerospace, maker of the Ariane rocket. A poster tracking the performance of all ninety-four launches of Ariane 4 and 5 showed eight accidents, including launches sixty-three, seventy, and eighty-eight. Curious, Gigerenzer asked his guide what the risk of accident was. The guide replied that the security factor was around 99.6 percent.

  When Gigerenzer asked how eight accidents in ninety-four launches could translate into 99.6 percent certainty, the guide noted that they didn’t consider human error in the computation. Rather, DASA calculated the security factor based on the design features of the individual rocket parts.1

  This DASA story smacks of the probabilities surrounding the 2003 space shuttle catastrophe. NASA engineers estimated the rate of failure for the shuttle at 1-in-145 (0.7 percent), but the program suffered two complete losses in its first 113 launches.2 The DASA and NASA calculations call into question how we relate uncertainty and risk to probability.
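
  The arithmetic behind the mismatch is simple to check. A minimal sketch, using only the launch figures quoted above:

```python
# Contrast the empirical (frequency-based) failure rate of Ariane with the
# design-based figure the guide quoted. Both numbers come from the text.
launches, accidents = 94, 8
empirical_failure_rate = accidents / launches   # what actually happened
design_failure_rate = 1 - 0.996                 # implied by the 99.6% security factor

print(f"Empirical failure rate:    {empirical_failure_rate:.1%}")  # ~8.5%
print(f"Design-based failure rate: {design_failure_rate:.1%}")     # 0.4%
```

  The roughly twentyfold gap between the two numbers is the gap between an estimate built from the properties of the parts and a frequency observed in practice.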

  So how should we think about risk and uncertainty? A logical starting place is Frank Knight’s distinction: Risk has an unknown outcome, but we know what the underlying outcome distribution looks like. Uncertainty also implies an unknown outcome, but we don’t know what the underlying distribution looks like. So games of chance like roulette or blackjack are risky, while the outcome of a war is uncertain. Knight said that objective probability is the basis for risk, while subjective probability underlies uncertainty.

  To see another distinction between risk and uncertainty, we consult the dictionary: Risk is “the possibility of suffering harm or loss.” Uncertainty is “the condition of being uncertain,” and uncertain is “not known or established.” So risk always includes the notion of loss, while something can be uncertain but might not include the chance of loss.

  Why should investors care about the distinctions between risk and uncertainty? The main reason is that investing is fundamentally an exercise in probability. Every day, investors must translate investment opportunities into probabilities—indeed, this is an essential skill. So we need to think carefully about how we come up with probabilities for various situations and where the potential pitfalls lie.

  From Uncertainty to Probability

  In his book Calculated Risks, Gigerenzer provides three ways to get to a probability. These classifications follow a progression from least to most concrete and can help investors classify probability statements (a toy sketch follows this list):3

  • Degrees of belief. Degrees of belief are subjective probabilities and are the most liberal means to translate uncertainty into a probability. The point here is that investors can translate even onetime events into probabilities provided they satisfy the laws of probability—the exhaustive and exclusive set of alternatives adds up to one. Also, investors can frequently update probabilities based on degrees of belief when new, relevant information becomes available.

  • Propensities. Propensity-based probabilities reflect the properties of the object or system. For example, if a die is symmetrical and balanced, then you have a one-in-six probability of rolling any particular side. The risk assessment in the DASA and NASA cases appears to be propensity-based. This method of probability assessment does not always consider all the factors that may shape an outcome (such as human error in the rocket launchings).

  • Frequencies. Here the probability is based on a large number of observations in an appropriate reference class. Without an appropriate reference class, there can be no frequency-based probability assessment. So frequency users would not care what someone believes the outcome of a die roll will be, nor would they care about the design of the die. They would focus only on the yield of repeated die rolls.
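
  A toy sketch of the three routes, using a die roll; the belief number is an invented stand-in, and the simulation stands in for a real record of repeated rolls.

```python
import random

random.seed(42)

# 1. Degree of belief: a subjective probability; the only constraint is that
#    the exhaustive, exclusive set of faces sums to one.
belief_p_six = 1 / 6          # an assumed belief, here matching the fair-die case

# 2. Propensity: read off the object's properties -- a symmetric, balanced die
#    has six equivalent faces, so each face gets 1/6.
propensity_p_six = 1 / 6

# 3. Frequency: the observed yield of many repeated rolls in the reference class.
rolls = [random.randint(1, 6) for _ in range(100_000)]
frequency_p_six = rolls.count(6) / len(rolls)

print(belief_p_six, propensity_p_six, round(frequency_p_six, 4))
```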

  What about long-term stock market returns? Much of the ink spilled on market prognostications is based on degrees of belief, with the resulting probabilities heavily colored by recent experience. Degrees of belief have a substantial emotional component.

  We can also approach the stock market from a propensity perspective. According to Jeremy Siegel’s Stocks for the Long Run, U.S. stocks have generated annual real returns just under 7 percent over the past 200 years, including many subperiods within that time.4 The question is whether there are properties that underlie the economy and profit growth that support this very consistent return result.

  We can also view the market from a frequency perspective. For example, we can observe the market’s annual returns from 1926 through 2006. This distribution of returns has an arithmetic average of 12.0 percent, with a standard deviation of 20.1 percent (provided that the statistics of normal distributions apply). If we assume that the distribution of future annual returns will be similar to that of the past (i.e., that the last eighty years is a legitimate reference class), we can make statements about the probabilities of future annual returns.5
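
  A minimal sketch of what such statements look like, assuming normality and the 1926–2006 figures cited above; the normality and reference-class assumptions do the heavy lifting here.

```python
# Frequency-based probability statements about annual market returns, assuming
# returns are normal with the historical mean (12.0%) and standard deviation
# (20.1%) from the text.
from statistics import NormalDist

annual_returns = NormalDist(mu=0.120, sigma=0.201)

p_down_year = annual_returns.cdf(0.0)        # P(annual return below zero)
p_up_20plus = 1 - annual_returns.cdf(0.20)   # P(annual return above 20%)

print(f"P(return < 0):   {p_down_year:.1%}")   # ~27.5%
print(f"P(return > 20%): {p_up_20plus:.1%}")   # ~34.5%
```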

  Of the three ways to come up with probabilities, the academic finance community is largely in the last camp. Most of the models in finance assume that price changes follow a normal distribution. One example is the Black-Scholes options-pricing model, where one of the key inputs is volatility—or the standard deviation of future price changes.
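
  For illustration, here is the standard Black-Scholes call-price calculation, included to show where volatility enters; the inputs below are invented, not drawn from the text.

```python
# Black-Scholes European call price. Volatility (sigma) -- the standard
# deviation of future price changes -- is the input the text highlights.
from math import log, sqrt, exp
from statistics import NormalDist

def bs_call(S, K, T, r, sigma):
    """Price a European call: spot S, strike K, years to expiry T, rate r, volatility sigma."""
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Illustrative inputs: at-the-money call, one year out, 5% rate, 20% volatility.
print(round(bs_call(S=100.0, K=100.0, T=1.0, r=0.05, sigma=0.20), 2))  # ~10.45
```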

 
