
Trillion Dollar Economists: How Economists and Their Ideas Have Transformed Business


by Robert Litan


  There are important real-world exceptions to both of these notions—that prices should be set at marginal cost and that diminishing returns rule the world. I lump the exceptions under a single label: learning by doing.

  The phrase suggests its own definition, but first I want you to know what it is not. First, learning by doing is different from economies of scale. The latter arise when the average cost of producing each unit keeps falling the more a firm produces, typically because large up-front fixed costs are spread over a growing volume of output. As output expands, there are more units over which to divide those fixed costs, so average costs fall. Note that this can happen even if the marginal cost of additional units remains the same (we will see in a moment what happens when this assumption is relaxed). Economies of scale are typically found in industries or sectors requiring large capital investments at the front end, as in utilities, oil refineries, and railroads, and, in the Internet age, in businesses built around heavy advertising to establish brand awareness.
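  To make the distinction concrete, here is a minimal numeric sketch (the dollar figures are made up for illustration and are not from the book): a plant with a large up-front investment and a constant marginal cost per unit. Average cost falls steadily as volume grows even though the marginal cost of each extra unit never changes.

```python
# Hypothetical illustration of economies of scale: a $10 million up-front
# investment spread over more and more units, with a constant $5 marginal cost.
FIXED_COST = 10_000_000   # up-front capital investment (made-up figure)
MARGINAL_COST = 5.0       # cost of each additional unit, assumed constant

for units in (100_000, 500_000, 1_000_000, 5_000_000):
    average_cost = FIXED_COST / units + MARGINAL_COST
    print(f"{units:>9,} units -> average cost per unit = ${average_cost:,.2f}")

# Average cost falls from $105.00 toward $7.00 as volume grows, even though
# the marginal cost of every unit stays at $5.00.
```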

  Learning by doing also is not the same as another widely used term often associated with the Internet: network externalities, which are roughly the demand-side equivalent of economies of scale on the supply (or cost) side. Network externalities exist when a service, such as telephone service or a social media network, becomes more valuable as the number of users increases (Metcalfe’s law, named after the computer scientist Robert Metcalfe, refers to the observation that the value of a network increases with the square of the number of users).
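  A rough way to see the arithmetic behind Metcalfe's observation (my own illustration, not the book's) is to count the possible connections among n users, which grows approximately with the square of n:

```python
# Rough sketch of Metcalfe's law: the number of possible pairwise connections
# among n users is n * (n - 1) / 2, which grows roughly with the square of n.
def possible_connections(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6,} users -> {possible_connections(n):>12,} possible connections")

# Ten times as many users yields roughly a hundred times as many connections,
# one way to rationalize why a network's value scales with the square of users.
```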

  Network externalities are prevalent in the software world and in mobile communications, which each have operating systems on which applications software, or apps, are overlaid. Here, too, a virtuous circle operates: The more apps that exist for a particular operating system, the greater will be the demand for the system. Likewise, the more popular the operating system, the greater are the incentives for applications programmers to develop apps that run on that system. In the presence of network externalities, it is hard to dislodge the dominant player—not always the first one, by the way—since new entrants must somehow persuade users or applications developers to abandon a platform in which they have invested time and money and one in which they already have a high degree of comfort (and for these reasons, they are sometimes said to be locked in).

  So, if we’re not talking about economies of scale or network externalities, then what do economists mean when they refer to learning by doing, and why is the concept so important? As you might guess, the answer is actually pretty simple. When workers engage in some task more frequently and routinely, they are likely to get better at it, just as most athletes, musicians, and other professionals improve with practice.13 In the workplace, the worker who learns by doing may be getting paid the same rate per hour (and very likely will continue to be until an annual or other regular pay review is conducted), but because he or she is becoming more productive, the cost per unit is falling. That’s learning by doing: when marginal costs fall as output increases.

  In fact, this is generally what happens in most manufacturing plants. It would be surprising if it didn’t. At the same time, however, at some point the learning levels off, the productivity advances diminish, and the effect slowly comes to a halt. If you were graphing the impact of learning by doing, you might see a steep upward slope in productivity (or conversely a steep downward slope in per unit cost) as production increases, followed by a leveling off, so that the overall productivity graph looks like an S (or an upside down S for unit costs).
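  One common way to capture this pattern, offered here as an illustrative sketch rather than the author's own model, is an experience curve in which the unit cost falls by a fixed fraction every time cumulative output doubles, down to a floor where the learning peters out:

```python
# Illustrative learning curve (not the author's model): the unit cost falls about
# 20 percent each time cumulative output doubles, but only down to an assumed
# floor, so the productivity gains eventually level off as the text describes.
import math

FIRST_UNIT_COST = 100.0                   # hypothetical cost of the first unit
LEARNING_EXPONENT = math.log2(1 / 0.8)    # ~0.32, an "80 percent" experience curve
COST_FLOOR = 20.0                         # assumed point where learning peters out

def unit_cost(cumulative_output: int) -> float:
    learned = FIRST_UNIT_COST * cumulative_output ** (-LEARNING_EXPONENT)
    return max(learned, COST_FLOOR)

for q in (1, 10, 100, 1_000, 10_000, 100_000):
    print(f"cumulative output {q:>7,} -> unit cost ${unit_cost(q):6.2f}")
```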

  But there is at least one notable product that has long been an exception to this pattern—integrated circuits or computer chips. Gordon Moore, one of the cofounders of Intel, uttered a famous statement some time ago that has been enshrined as Moore’s law: Computing power doubles roughly every 18 months, which means that the cost of computing comes down by 50 percent over the same period. This has been happening for decades, despite some skeptics who thought the trend would stop. So far, there has been no S curve in semiconductors.
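  The arithmetic compounds quickly. A back-of-the-envelope sketch of the claim in the text, using round numbers:

```python
# Back-of-the-envelope compounding of Moore's law: computing power doubles every
# 18 months, so the cost per unit of computing halves over the same period.
doublings_per_decade = 10 * 12 / 18           # about 6.7 doublings in ten years
power_multiple = 2 ** doublings_per_decade    # roughly 100x the computing power
cost_fraction = 0.5 ** doublings_per_decade   # roughly 1 percent of the old cost

print(f"doublings in a decade: {doublings_per_decade:.1f}")
print(f"computing power multiple: about {power_multiple:.0f}x")
print(f"cost per unit of computing: about {cost_fraction:.1%} of its level a decade earlier")
```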

  This fact has important macroeconomic implications because so much hardware and software is linked to the processing capability of integrated circuits. But for my purpose here, I want to stress one key and often overlooked aspect of Moore’s law: what it suggests as an optimal pricing strategy, which in turn has important implications for cost control.

  The standard economics textbook tells readers that prices in competitive markets equal marginal costs. It turns out, however, that this equality does not describe the optimal pricing strategy for an innovative firm, perhaps in a new market, where learning by doing is expected to last for a long time—as in semiconductors. If a firm is reasonably certain that its marginal costs will continue to fall as more output is produced and purchased, then the firm will want to set a price for its product below its current marginal cost. Doing so will generate additional cost-reducing demand, which in turn will help establish the firm as a market leader. Admittedly, Moore’s law may not have been evident when semiconductors were first used, but once the silicon revolution was underway, it became safer for the two major firms in the industry (Intel and AMD) to set their prices with each new generation of chips on the assumption that firms and their workers would learn by doing.
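  A small numeric sketch may help; the demand and cost curves below are invented for illustration and are not from the book. At a cumulative volume of 1,000 units, the marginal cost is $10. A price set below that figure attracts enough additional volume that the cost of later units falls well under the price, which is the logic the text describes:

```python
# Hypothetical sketch (my numbers, not the book's) of learning-by-doing pricing.
# Today's marginal cost, at a cumulative volume of 1,000 units, is $10.
def marginal_cost(cumulative_units: float) -> float:
    # assumed learning curve: $10 at 1,000 cumulative units, falling with volume
    return 10.0 * (cumulative_units / 1_000) ** -0.3

def demand(price: float) -> int:
    # assumed demand curve: lower prices attract disproportionately more buyers
    return int(20_000 * (10.0 / price) ** 2)

print(f"marginal cost today (1,000 units so far): ${marginal_cost(1_000):.2f}")
for price in (10.0, 8.0, 6.0):
    units = demand(price)
    end_cost = marginal_cost(1_000 + units)
    print(f"price ${price:4.1f}: {units:>7,} units sold, "
          f"marginal cost of the last unit falls to ${end_cost:.2f}")
```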

  But how many firms are in industries that will end up having Moore’s law equivalents of their own—decades of falling marginal costs that will give firms sufficient confidence to price below marginal cost at the outset? It is doubtful that there are that many, but who really knows?

  The only way to find out is for truly entrepreneurial firms to try learning-by-doing pricing and see what happens. I’m not optimistic that this will be a successful strategy for many firms, based on Norm Brodsky’s observation cited at the beginning of Chapter 3: It’s a lot easier to lower prices after having set them too high than to raise them once you’ve realized that you’ve set them too low. Where you set prices at first establishes important expectations among purchasers about your price points, and once those expectations are established, it is dangerous to upset them.

  There is another rationale for setting prices below marginal cost, however, which is a loose intellectual cousin of learning by doing. This is the notion of the first mover’s advantage—the idea being that if a company with a truly new idea or way of doing things gets a sufficient head start on future competitors, it can establish a brand presence that is impossible, or at least very difficult, for others to overcome. Amazon in Internet retailing (first starting with books) fits this model.

  Yet even this example has its limits. Other companies that have not been first movers have refined earlier companies’ efforts to become dominant players in their industries. Microsoft is perhaps the most notable example. While it did not charge for its Internet browser, the company made its fortune on operating systems and applications software for personal computers, and later did well in video game consoles. In each of these segments, Microsoft was not the first mover, but financially among the best movers. And as far as I know, in none of these profit-making markets did Microsoft set a price based on learning by doing.

  The Bottom Line

  There comes a time in many businesses when the common-sense maxim—just keep your costs down—just isn’t good enough. As this chapter has only begun to sketch, powerful mathematical techniques, developed just prior to and during World War II by mathematicians and later refined by economists, have enabled firms in many industries with multiple outputs and inputs, or with business models built around transportation networks of some kind (of goods, people, or electrons), to figure out on a continuous basis how best to deploy limited resources to minimize costs or maximize profits.
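  For readers who want to see what such a technique looks like in practice, here is a minimal linear-programming sketch in the spirit of the diet problem cited in the notes; the foods, costs, and nutrient requirements are made up, and the heavy lifting is done by an off-the-shelf solver:

```python
# Stylized diet-style linear program with made-up numbers: choose quantities of
# two foods to minimize cost while meeting minimum calorie and protein targets.
from scipy.optimize import linprog

cost = [0.50, 0.80]          # cost per unit of food A and food B
# "At least" constraints are written as <= by negating both sides.
A_ub = [[-300, -200],        # calories per unit of A and B (need >= 2,000 total)
        [-10,  -30]]         # grams of protein per unit (need >= 60 total)
b_ub = [-2_000, -60]

result = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                 bounds=[(0, None), (0, None)], method="highs")
print("optimal quantities of A and B:", result.x)
print("minimum cost:", round(result.fun, 2))
```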

  Importantly, these techniques do not require a lot of data. Problems or challenges growing out of the need to make sense of masses of data are another area where economists have made great contributions, and they are the subject of the next chapter.

  That is not all. In some cases, what it costs to produce an item depends on how much of it is made. In the semiconductor industry, and perhaps in a few others, it can make sense (indeed, it is the profit-maximizing strategy) to price what a firm is producing so low that the market literally forces the firm to produce more than it initially planned for, or thought possible to produce, in order to ride the learning curve down and reduce costs.

  The propositions in this chapter are not self-evident. They take some economic knowledge. Hopefully, you are beginning to see that economists are even more important to success in business than you may have thought before picking up this book. That conclusion is reinforced in subsequent chapters.

  Notes

  1. I am indebted to Roger Noll, who made valuable suggestions regarding the organization of this chapter, and to William Baumol, who reviewed the chapter for accuracy and made a number of valuable substantive and editorial suggestions.

  2. See, for example, Joseph Czyzyk and Timothy J. Wisniewski, “The Diet Problem: A WWW-Based Interactive Case Study in Linear Programming,” Mathematics and Computer Science Division, Argonne National Laboratory, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.47.5308. For an earlier version of this challenge, see George Dantzig, “The Diet Problem,” http://dl.dropbox.com/u/5317066/1990-dantzig-dietproblem.pdf.

  3. George Stigler, “The Cost of Subsistence,” www.jstor.org/stable/1231810.

  4. See, for example, this video, which tells one how to solve a linear programming problem in Excel: www.youtube.com/watch?v=I3pckP_8T-k.

  5. This profile draws on his obituary in The New York Times by Jeremy Pearce, May 23, 2005.

  6. Sira M. Allende and Carlos N. Bouza, “Professor George Bernard Dantzig, Life and Legend,” Revista Investigación Operacional, 26, no. 3, http://rev-inv-ope.univ-paris1.fr/files/26305/IO-26305-1.pdf.

  7. Joe Holley, “Obituaries of George Dantzig,” Washington Post, May 19, 2005, p. B6, http://supernet.som.umass.edu/photos/gdobit.html.

  8. William J. Baumol, Economic Theory and Operations Analysis (Englewood Cliffs, NJ: Prentice Hall, 1972), 70.

  9. Ibid.

  10. Ibid., 83.

  11. One of the pioneering works in the field is James Kelly, “Critical Path Planning and Scheduling: Mathematical Basis,” Operations Research 9, no. 3 (May–June 1961).

  12. This profile is based on the author’s personal knowledge about Professor Baumol and correspondence with him.

  13. The writer Malcolm Gladwell made this observation a central theme of his book Outliers: The Story of Success (New York: Little, Brown & Co., 2008). Gladwell argued that to do anything really well requires at least 10,000 hours of practice.

  Chapter 5

  Beyond Moneyball

  All sciences—social and natural—have their techniques for validating hypotheses. Economics is no exception. Its practitioners make heavy use of increasingly sophisticated statistical techniques to try to sort out cause and effect and to make predictions.

  Except in the situations I discuss in the next chapter, empirically oriented economists do not have the luxury of conducting experiments to test their hypotheses, as do their counterparts in the hard physical sciences. Instead, economists must try to tease out relationships and infer behaviors from what is already going on in the real world, which cannot be stopped and restarted to suit the needs of economists who want to know what is really happening.

  In this chapter, I will take you on a tour of the statistical method economists most commonly use—regression analysis—first by briefly explaining the concept and its origins, then discussing its use during the heyday of large forecasting models (the era when I learned economics) and later during the waning popularity of those models. I will also discuss how the tools of economics have been used to analyze complex challenges and resolve real-world business disputes, in what is called the economic consulting industry. I will then introduce you to the exciting world of sports analytics, in which statistical and economic methods have played a central role. I conclude with an application of the Moneyball concept, popularized by Michael Lewis, to policy and business.1

  Consider this introduction to a basic statistical technique (which I promise will be painless) as a worthwhile investment in understanding the really fun stuff in the latter half of the chapter.

  A Brief Guide to Regression Analysis

  Suppose you are a farmer with a reasonably large amount of acreage and you grow corn. You have historical data on the amount you plant, the volume of your crop in bushels (we will ignore prices, since they are outside your control because you compete in a highly competitive market), and the amounts of fertilizer and insecticide you apply. Now suppose an agribusiness conglomerate comes to you and talks you into buying its special supplement, to be applied after planting, which it says will enhance your crop volume, based on data from the company’s experience with other farms.

  Months pass and you reap your crop. Amazingly, it’s up 10 percent compared to the year before. Can you say with confidence that the application of the supplement did it?

  Of course you can’t. A whole lot of things influence your crop output: some are within your control, like the fertilizer, the insecticide, and the supplement; others are not, such as the amount of rain, the days of sun, daily temperatures during the growing season, and so on. Ideally, you’d like to control for all factors other than the application of the supplement, so you can know with some degree of confidence whether and to what extent that supplement worked or didn’t.

  How would you go about addressing this challenge? Well, it turns out that some very smart statisticians in the early part of the nineteenth century developed techniques to enable you to do precisely this. Furthermore, these same techniques, known as multivariate regression analysis, have been taught to undergraduate and graduate students for decades in statistics and social science classes, in some cases even to advanced high school students.

  Regression analysis enables economists (or other social and physical scientists) to understand the relationships among different variables. In the farming example above, an economist or statistician would estimate an equation to understand how different independent factors (such as the special supplement, the fertilizer, the amount of rain, the days of sun, the temperature, and so on) affect crop output—the dependent factor.

  It turns out that when the data are collected and organized, an economist or statistician, or, frankly, many analysts with even less formal training, can estimate an equation to find out whether the 10 percent increase in your crop output was in fact caused primarily by the application of the special supplement, or whether instead the other factors had a larger effect. The beauty of regression analysis is that it enables the analyst to estimate the effect of each causal variable on the key dependent variable—here, crop output—while controlling for the effects of all the other causal variables.
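  For readers who want to see what estimating such an equation looks like, here is a minimal sketch in Python; the data are fabricated for illustration (the true relationship is built into the simulation), and the variable names and magnitudes are hypothetical rather than the book's:

```python
# Minimal regression sketch with fabricated data: crop output (bushels) regressed
# on fertilizer, rainfall, days of sun, and whether the supplement was applied.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60                                  # hypothetical field-year observations
fertilizer = rng.uniform(50, 150, n)    # pounds per acre
rain = rng.uniform(10, 30, n)           # inches during the growing season
sun = rng.uniform(60, 120, n)           # days of sun
supplement = rng.integers(0, 2, n)      # 1 if the supplement was applied, else 0

# The "true" relationship used only to simulate the data: the supplement adds little.
output = (40 + 0.8 * fertilizer + 3.0 * rain + 0.5 * sun
          + 2.0 * supplement + rng.normal(0, 10, n))

X = sm.add_constant(np.column_stack([fertilizer, rain, sun, supplement]))
model = sm.OLS(output, X).fit()
print(model.params)     # estimated effect of each factor, holding the others fixed
print(model.pvalues)    # a rough guide to which estimated effects are statistically solid
```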

  It could be the case, for example, that the special supplement contributed very little to your increased crop yield and that instead the amounts of fertilizer, rain, and sun were the most important factors. If that is true, you have valuable information: There’s no need to buy the special supplement. Simply keep using the fertilizer and make sure your corn gets enough water (the sun you can’t control). You could even cut back on the insecticide, since the regression results may have shown a negligible effect.

  Besides helping you to understand how different factors relate to each other, regression analysis can be used to predict the value of one variable if you know the values of the factors that you think affect that variable. For example, with regression analysis, you can get a fairly good idea of how much output your crop will yield next month under different scenarios (a lot of rain, few days of sun, and so on) based on historical data that you’ve collected. That is the magic of statistical analysis.
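  Continuing the hypothetical sketch above, prediction amounts to plugging scenario values into the estimated equation:

```python
# Continuing the fabricated example above: predict crop output under two weather
# scenarios, holding fertilizer use and the supplement fixed.
scenarios = sm.add_constant(np.array([
    [100.0, 25.0, 110.0, 1.0],   # wet, sunny season with the supplement
    [100.0, 12.0,  70.0, 1.0],   # dry, cloudy season with the supplement
]), has_constant="add")
print(model.predict(scenarios))  # predicted bushels under each scenario
```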

  In the real world, knowing not only the direction of the effect of a given factor on another (positively or negatively), but also the approximate size of that effect, can be extremely valuable. In a sense, many decisions in business, sports, public policy, and life involve questions about the unknown effect of changing one factor. Will adding the special supplement cause your crop output to increase? Will revenues increase if we raise ticket prices, and if so, by how much? What are the main determinants of economic growth? Regression analysis can be used to answer these and many other questions.

  Although I spend the rest of this chapter highlighting some of the ways regression analysis and other statistical techniques have been used in the business world, I want to add a word of caution. There is an old saying: “There are lies, damn lies, and statistics.” The implication is that, given enough time and ingenuity, one can prove just about anything with statistics. That overstates things, but there is some truth to the saying. Careful choice of the time periods and the specifications of the equations to be estimated can generate the results a particular researcher wants. Or, better yet, in this age of essentially costless computing, it is easier than ever to engage in data mining: running regressions mindlessly to see which equations best fit the data and then proclaiming that one has found the truth.

  The best way to guard against data mining, used in its pejorative sense (there is a more positive use of the term I will discuss later), is to test estimated equations out of sample, or in future periods after the period used to estimate the equation. Clearly, an equation that does a poor job predicting future values of the variable of interest, or the dependent variable, calls into question the value of the explanations it purports to advance from the historical data. Conversely, equations that do relatively well predicting future values inspire confidence in the validity of the regression results, which brings us to the first business use of regression analysis—predicting the future.
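  An out-of-sample check, again using the fabricated data from the sketch above, might look like this: estimate the equation on the earlier observations and see how well it predicts the held-out later ones.

```python
# Out-of-sample check on the fabricated example above: estimate the equation on
# the first 45 observations, then see how well it predicts the held-out 15.
train, test = slice(0, 45), slice(45, None)
fitted = sm.OLS(output[train], X[train]).fit()
prediction_errors = output[test] - fitted.predict(X[test])
rmse = float(np.sqrt(np.mean(prediction_errors ** 2)))
print(f"out-of-sample root-mean-squared prediction error: {rmse:.1f} bushels")
```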

 
