Being in the Right Place at the Right Time
We noted earlier that the pure forms of interdependence and modularity are the extremes on a continuum, and companies may choose strategies anywhere along the spectrum at any point in time. A company will not necessarily fail if it starts with a prematurely modular architecture when the basis of competition is functionality and reliability. It will simply suffer from a significant competitive disadvantage until the basis of competition shifts and modularity becomes the predominant architectural form. This was the experience of IBM and its clones in the personal computer industry. The superior performance of Apple’s computers did not preclude IBM from succeeding; IBM just had to fight its performance disadvantage because it opted prematurely for a modular architecture.
What happens to the initial leaders when they overshoot, after having jumped ahead of the pack with performance and reliability advantages that were grounded in proprietary architecture? The answer is that they need to modularize and open up their architectures and begin aggressively selling their subsystems as modules to other companies whose low-cost assembly capability can help grow the market. Had good theory been available to provide guidance, for example, there is no reason why the executives of Apple Computer could not have modularized their design and begun selling their operating system, with its interdependent applications, to other computer assemblers, preempting Microsoft’s development of Windows. Nokia appears today to be facing the same decision. We sense that adding even more features and functions to standard wireless handsets is overshooting what less-demanding customers can utilize, and a dis-integrated handset industry that utilizes Symbian’s operating system is rapidly gaining traction. The next chapter will show that a company can begin with a proprietary architecture when disruptive circumstances mandate it, and then, when the basis of competition changes, open its architecture to become a supplier of key subsystems to low-cost assemblers. If it does this, it can avoid the traps of becoming a niche player on the one hand and the supplier of an undifferentiated commodity on the other. The company can become capitalism’s equivalent of Wayne Gretzky, the hockey great. Gretzky had an instinct not to skate to where the puck presently was on the ice, but instead to where the puck was going to be. Chapter 6 can help managers steer their companies not to the profitable businesses of the past, but to where the money will be.
There are few decisions in building and sustaining a new-growth business that scream more loudly for sound, circumstance-based theory than those addressed in this chapter. When the functionality and reliability of a product are not good enough to meet customers’ needs, then the companies that will enjoy significant competitive advantage are those whose product architectures are proprietary and that are integrated across the performance-limiting interfaces in the value chain. When functionality and reliability become more than adequate, so that speed and responsiveness are the dimensions of competition that are not now good enough, then the opposite is true. A population of nonintegrated, specialized companies whose rules of interaction are defined by modular architectures and industry standards holds the upper hand.
At the beginning of a wave of new-market disruption, the companies that initially will be the most successful will be integrated firms whose architectures are proprietary because the product isn’t yet good enough. After a few years of success in performance improvement, those disruptive pioneers themselves become susceptible to hybrid disruption by a faster and more flexible population of nonintegrated companies whose focus gives them lower overhead costs.
For a company that serves customers in multiple tiers of the market, managing the transition is tricky, because the strategy and business model that are required to successfully reach unsatisfied customers in higher tiers are very different from those that are necessary to compete with speed, flexibility, and low cost in lower tiers of the market. Pursuing both ends at once and in the right way often requires multiple business units—a topic that we address in the next two chapters.
Notes
1. We are indebted to a host of thoughtful researchers who have framed the existence and the role of core and competence in making these decisions. These include C. K. Prahalad and Gary Hamel, “The Core Competence of the Corporation,” Harvard Business Review, May–June 1990, 79–91; and Geoffrey Moore, Living on the Fault Line (New York: HarperBusiness, 2002). It is worth noting that “core competence,” as the term was originally coined by C. K. Prahalad and Gary Hamel in their seminal article, was actually an apology for the diversified firm. They were developing a view of diversification based on the exploitation of established capabilities, broadly defined. We interpret their work as consistent with a well-respected stream of research and theoretical development that goes all the way back to Edith Penrose’s 1959 book The Theory of the Growth of the Firm (New York: Wiley). This line of thinking is very powerful and useful. As it is used now, however, the term “core competence” has become synonymous with “focus”; that is, firms that seek to exploit their core competence do not diversify—if anything, they focus their business on those activities that they do particularly well. It is this “meaning in use” that we feel is misguided.
2. IBM arguably had much deeper technological capability in integrated circuit and operating system design and manufacturing than did Intel or Microsoft at the time IBM put these companies into business. It probably is more correct, therefore, to say that this decision was based more on what was core than on what was competence. The sense that IBM needed to outsource was based on the new venture’s managers’ correct perception that, to be acceptably profitable to the corporation, they needed a far lower overhead cost structure, and that they needed to develop new products much faster than the company’s established internal development processes, which had been honed in a world of complicated, interdependent products with longer development cycles, could support.
3. In the past decade there has been a flowering of important studies on these concepts. We have found the following ones to be particularly helpful: Rebecca Henderson and Kim B. Clark, “Architectural Innovation: The Reconfiguration of Existing Product Technologies and the Failure of Established Firms,” Administrative Science Quarterly 35 (1990): 9–30; K. Monteverde, “Technical Dialog as an Incentive for Vertical Integration in the Semiconductor Industry,” Management Science 41 (1995): 1624–1638; Karl Ulrich, “The Role of Product Architecture in the Manufacturing Firm,” Research Policy 24 (1995): 419–440; Ron Sanchez and J. T. Mahoney, “Modularity, Flexibility and Knowledge Management in Product and Organization Design,” Strategic Management Journal 17 (1996): 63–76; and Carliss Baldwin and Kim B. Clark, Design Rules: The Power of Modularity (Cambridge, MA: MIT Press, 2000).
4. The language we have used here characterizes the extremes of interdependence, and we have chosen the extreme end of the spectrum simply to make the concept as clear as possible. In complex product systems, there are varying degrees of interdependence, which differ over time, component by component. The challenges of interdependence can also be dealt with to some degree through the nature of supplier relationships. See, for example, Jeffrey Dyer, Collaborative Advantage: Winning Through Extended Enterprise Supplier Networks (New York: Oxford University Press, 2000).
5. Many readers have equated in their minds the terms disruptive and breakthrough. It is extremely important, for purposes of prediction and understanding, not to confuse the terms. Almost invariably, what prior writers have termed “breakthrough” technologies have, in our parlance, a sustaining impact on the trajectory of technological progress. Some sustaining innovations are simple, incremental year-to-year improvements. Other sustaining innovations are dramatic, breakthrough leapfrogs ahead of the competition, up the sustaining trajectory. For predictive purposes, however, the distinction between incremental and breakthrough technologies rarely matters. Because both types have a sustaining impact, the established firms typically triumph. Disruptive innovations usually do not entail technological breakthroughs. Rather, they package available technologies in a disruptive business model. New breakthrough technologies that emerge from research labs are almost always sustaining in character, and almost always entail unpredictable interdependencies with other subsystems in the product. Hence, there are two powerful reasons why the established firms have a strong advantage in commercializing these technologies.
6. Professor Alfred Chandler’s The Visible Hand (Cambridge, MA: Belknap Press, 1977) is a classic study of how and why vertical integration was critical to the growth of many industries during their early periods.
7. Economists’ concept of utility, or the satisfaction that customers receive when they buy and use a product, is a good way to describe how competition in an industry changes when this happens. The marginal utility that customers receive is the incremental addition to satisfaction that they get from buying a better-performing product. The increased price that customers are willing to pay for a better product will be proportional to the increased utility they receive from using it; in other words, the price premium customers will pay rises and falls with the marginal utility they gain. When customers can no longer utilize further improvements in a product, marginal utility falls toward zero, and as a result customers become unwilling to pay higher prices for better-performing products.
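To make the note’s logic concrete, here is a minimal sketch in symbols; U, q, and k are illustrative notation introduced only for this sketch and do not appear elsewhere in the text. Let U(q) be a customer’s total utility from a product with performance level q, and let k translate utility into willingness to pay:

\[
MU(q) = \frac{dU}{dq}, \qquad \Delta P \approx k \cdot MU(q)\,\Delta q .
\]

When a customer is overserved, MU(q) approaches zero, so the premium ΔP that the customer will pay for a further improvement Δq also approaches zero.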
8. Sanchez and Mahoney, in “Modularity, Flexibility and Knowledge Management in Product and Organization Design,” were among the first to describe this phenomenon.
9. The landmark work of Professors Carliss Baldwin and Kim B. Clark, cited in note 3, describes the process of modularization in a cogent, useful way. We recommend it to those who are interested in studying the process in greater detail.
10. Many students of IBM’s history will disagree with our statement that competition forced the opening of IBM’s architecture, contending instead that the U.S. government’s antitrust litigation forced IBM open. The antitrust action clearly influenced IBM, but we would argue that government action or not, competitive and disruptive forces would have brought an end to IBM’s position of near-monopoly power.
11. Tracy Kidder’s Pulitzer Prize–winning account of product development at Data General, The Soul of a New Machine (New York: Avon Books, 1981), describes what life was like as the basis of competition began to change in the minicomputer industry.
12. MIT Professor Charles Fine has written an important book on this topic as well: Clockspeed (Reading, MA: Perseus Books, 1998). Fine observed that industries go through cycles of integration and nonintegration in a sort of “double helix” cycle. We hope that the model outlined here and in chapter 6 both confirms and adds causal richness to Fine’s findings.
13. The evolving structure of the lending industry offers a clear example of these forces at work. Integrated banks such as J.P. Morgan Chase have powerful competitive advantages in the most complex tiers of the lending market. Integration is key to their ability to knit together huge, complex financing packages for sophisticated and demanding global customers. Decisions about whether and how much to lend cannot be made according to fixed formulas and measures; they can only be made through the intuition of experienced lending officers.
Credit scoring technology and asset securitization, however, are disrupting and dis-integrating the simpler tiers of the lending market. In these tiers, lenders know and can measure precisely those attributes that determine whether borrowers will repay a loan. Verifiable information about borrowers—such as how long they have lived where they live, how long they have worked where they work, what their income is, and whether they’ve paid other bills on time—is combined to make algorithm-based lending decisions. Credit scoring took root in the 1960s in the simplest tier of the lending market, in department stores’ decisions to issue their own credit cards. Then, unfortunately for the big banks, the disruptive horde moved inexorably up-market in pursuit of profit—first to general consumer credit card loans, then to automobile loans and mortgage loans, and now to small business loans. The lending industry in these simpler tiers of the market has largely dis-integrated. Specialist nonbank companies have emerged to provide each slice of added value in these tiers of the lending industry. Whereas integration is a big advantage in the most complex tiers of the market, in overserved tiers it is a disadvantage.
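As a rough illustration of what "algorithm-based lending decisions" can look like, here is a minimal sketch of an additive scorecard of the kind credit-scoring systems use. The attributes, weights, and approval cutoff below are illustrative assumptions invented for this sketch, not the parameters of any actual lender.

def credit_score(years_at_address, years_at_job, annual_income, on_time_payment_rate):
    """Combine verifiable borrower attributes into a single numeric score."""
    score = 0.0
    score += min(years_at_address, 10) * 5        # stability of residence
    score += min(years_at_job, 10) * 6            # stability of employment
    score += min(annual_income / 10_000, 15) * 4  # income, capped
    score += on_time_payment_rate * 50            # payment history, 0.0 to 1.0
    return score

def approve_loan(score, cutoff=120):
    """A fixed-formula decision; no loan officer's intuition is involved."""
    return score >= cutoff

# Example: a borrower with a steady address, job, income, and payment record.
s = credit_score(years_at_address=4, years_at_job=6,
                 annual_income=55_000, on_time_payment_rate=0.97)
print(round(s, 1), approve_loan(s))

The point of the sketch is structural: because every input is specifiable and measurable, the lending decision can migrate out of an integrated bank and into specialist firms handling each slice of the value chain.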
14. Our conclusions support those of Stan J. Liebowitz and Stephen E. Margolis in Winners, Losers & Microsoft: Competition and Antitrust in High Technology (Oakland, CA: Independent Institute, 1999).
15. Another good illustration of this is the push being made by Apple Computer, at the time of this writing, to be the gateway to the consumer for multimedia entertainment. Apple’s interdependent integration of the operating system and applications creates convenience, which customers value at this point because convenience is not yet good enough.
16. Specifiability, measurability, and predictability constitute what an economist would term “sufficient information” for an efficient market to emerge at an interface, allowing organizations to deal with each other at arm’s length. A fundamental tenet of capitalism is that the invisible hand of market competition is superior to the visible hand of managerial oversight as a coordinating mechanism between actors in a market. This is why, when a modular interface becomes defined, an industry will dis-integrate at that interface. However, when specifiability, measurability, and predictability do not exist, efficient markets cannot function. It is under these circumstances that managerial oversight and coordination perform better than market competition as a coordinating mechanism.
This is an important underpinning of the award-winning findings of Professor Tarun Khanna and his colleagues, which show that in developing economies, diversified business conglomerates outperform focused, independent companies, whereas the reverse is true in developed economies. See, for example, Tarun Khanna and Krishna G. Palepu, “Why Focused Strategies May Be Wrong for Emerging Markets,” Harvard Business Review, July–August 1997, 41–51; and Tarun Khanna and Jan Rivkin, “Estimating the Performance Effects of Business Groups in Emerging Markets,” Strategic Management Journal 22 (2001): 45–74.
A bedrock set of concepts for understanding why organizational integration is critical when the conditions of modularity are not met is developed in the transaction cost economics (TCE) school of thought, which traces its origins to the work of Ronald Coase (R. H. Coase, “The Nature of the Firm,” Economica 4 [1937]: 386–405). Coase argued that firms were created when it became “too expensive” to negotiate and enforce contracts between otherwise “independent” parties. More recently, the work of Oliver Williamson has proven seminal in the exploration of transaction costs as a determinant of firm boundaries. See, for example, O. E. Williamson, Markets and Hierarchies (New York: Free Press, 1975); “Transaction Cost Economics,” in The Economic Institutions of Capitalism (New York: Free Press, 1985), 15–42; and “Transaction-Cost Economics: The Governance of Contractual Relations,” in Organizational Economics, ed. J. B. Barney and W. G. Ouchi (San Francisco: Jossey-Bass, 1986). In particular, TCE has been used to explain the various ways in which firms might expand their operating scope: through unrelated diversification (C. W. L. Hill et al., “Cooperative Versus Competitive Structures in Related and Unrelated Diversified Firms,” Organization Science 3, no. 4 [1992]: 501–521); related diversification (D. J. Teece, “Economics of Scope and the Scope of the Enterprise,” Journal of Economic Behavior and Organization 1 [1980]: 223–247; and D. J. Teece, “Toward an Economic Theory of the Multiproduct Firm,” Journal of Economic Behavior and Organization 3 [1982]: 39–63); or vertical integration (K. Arrow, The Limits of Organization [New York: W. W. Norton, 1974]; B. Klein, R. G. Crawford, and A. A. Alchian, “Vertical Integration, Appropriable Rents, and the Competitive Contracting Process,” Journal of Law and Economics 21 [1978]: 297–326; and K. R. Harrigan, “Vertical Integration and Corporate Strategy,” Academy of Management Journal 28, no. 2 [1985]: 397–425). More generally, this line of research is known as the “market failures” paradigm for explaining changes in firm scope (K. N. M. Dundas and P. R. Richardson, “Corporate Strategy and the Concept of Market Failure,” Strategic Management Journal 1, no. 2 [1980]: 177–188). Our hope is that we have advanced this line of thinking by elaborating more precisely the considerations that give rise to the contracting difficulties that lie at the heart of the TCE school.
17. Even if the incumbent local exchange carriers’ (ILECs’) engineers didn’t understand all the complexities and unintended consequences any better than the CLECs’ engineers did, organizationally the ILECs were much better positioned to resolve difficulties, since they could appeal to internal organizational mechanisms rather than rely on cumbersome and likely incomplete ex ante contracts.
18. See Jeffrey Lee Funk, The Mobile Internet: How Japan Dialed Up and the West Disconnected (Hong Kong: ISI Publications, 2001). This is an extraordinarily insightful study from which a host of lessons can be gleaned. In his own language, Funk shows that another important reason why DoCoMo and J-Phone were so successful in Japan is that they followed the pattern that we describe in chapters 3 and 4 of this book. They initially targeted customers who were largely non-Internet users (teenaged girls) and helped them do better a job that they had already been trying to do: have fun with their friends. Western entrants into this market, in contrast, envisioned sophisticated offerings to be sold to current customers of mobile phones (who primarily used them for business) and current users of the wire-line Internet. An insider’s perspective on this development can be found in Mari Matsunaga, The Birth of I-Mode: An Analogue Account of the Mobile Internet (Singapore: Chuang Yi Publishing, 2001). Matsunaga was one of the key players in the development of i-mode at DoCoMo.
19. See “Integrate to Innovate,” a Deloitte Research study by Michael E. Raynor and Clayton M. Christensen, available at <http://www.dc.com/vcd> or upon request from [email protected].