
The Innovator's Solution


by Clayton Christensen

20. Some readers who are familiar with the different experiences of the European and American mobile telephony industries may take issue with this paragraph. Very early on, the Europeans coalesced around a prenegotiated standard called GSM, which enabled mobile phone users to use their phones in any country. Mobile phone usage in Europe took off more rapidly and achieved higher penetration rates than in America, where several competing standards were battling it out. Many analysts have drawn the general conclusion from the Europeans’ strategy of quickly coalescing around a standard that it is always advisable to avoid the wasteful duplication of competing, mutually incompatible architectures. We believe that the benefits of a single standard have been greatly exaggerated, and that other important differences between the United States and Europe that contributed significantly to the differential adoption rates have not been given their due.

  First, the benefits of a single standard appear to have manifested themselves largely in terms of supply-side rather than demand-side benefits. That is, by stipulating a single standard, European manufacturers of network equipment and handsets were able to achieve greater scale economies than companies manufacturing for the North American markets. This might well have manifested itself in the form of lower prices to consumers; however, the relevant comparison is not the cost of mobile telephony in Europe versus North America—these services were not competing with each other. The relevant comparison is with wireline telephony in each respective market. And here it is worth noting that wireline local and long-distance telephony services are much more expensive in Europe than in North America, and as a result, wireless telephony was a much more attractive substitute for wireline in Europe than in North America. The putative demand-side benefit of transnational usage has not, to our knowledge, been demonstrated in the usage patterns of European consumers. Consequently, we would be willing to suggest that a far more powerful cause of the relative success of mobile telephony in Europe was not that schoolgirls from Sweden could use their handsets when on holiday in Spain, but rather the relative improvement in ease of use and cost provided by mobile telephony versus the wireline alternative.

  Second, and perhaps even more important, European regulation mandated that “calling party pays” with respect to mobile phone usage, whereas North American regulators mandated that “mobile party pays.” In other words, in Europe, if you call someone’s mobile phone number, you pay the cost of the call; to the recipient, it’s free. In North America, if someone calls you on your mobile phone, it’s on your dime. As a result, Europeans were far freer in giving out their mobile phone numbers, hence increasing the likelihood of usage. For more on this topic, see Strategis Group, “Calling Party Pays Case Study Analysis”; the ITU-BDT Telecommunication Regulatory Database; and the ITU Web site.

  Teasing out the effects of each of these contributors (the GSM standard, lower relative price versus wireline, and calling party pays regulation), as well as others that might be adduced is not a trivial task. But we would suggest that the impact of the single standard is far less than typically implied, and certainly is not the principal factor in explaining higher mobile phone penetration rates in Europe versus North America.

  CHAPTER SIX

  HOW TO AVOID COMMODITIZATION

  What causes commoditization? Is it the inevitable end-state of all companies in competitive markets? Can companies take action at any point in their development to arrest its onset? Once the tide of commoditization has swept through an industry, can the flow reverse back toward proprietary, differentiated, profitable products? And how should managers respond?

  Many executives have resigned themselves to the belief that, no matter how miraculous their innovations, their inevitable fate is to be “commoditized.” These fears are grounded in painful experience. Here’s a frightening example: The first one-gigabyte 3.5-inch disk drives were introduced to the world in 1992 at prices that enabled their manufacturers to earn 60 percent gross margins. These days, disk drive companies are struggling to eke out 15 percent margins on drives that are sixty times better. This isn’t fair, because these things are mechanical and microelectronic marvels. How many of us could mechanically position the head so that it stored and retrieved data in circular tracks that are only 0.00008 inch apart on the surface of disks, without ever reading data off the wrong track? And yet disk drives of this genre are regarded today as undifferentiable commodities. If products this precise and complicated can be commoditized, is there any hope for the rest of us?

  It turns out that there is hope. One of the most exciting insights from our research about commoditization is that whenever it is at work somewhere in a value chain, a reciprocal process of de-commoditization is at work somewhere else in the value chain.1 And whereas commoditization destroys a company’s ability to capture profits by undermining differentiability, de-commoditization affords opportunities to create and capture potentially enormous wealth. The reciprocity of these processes means that the locus of the ability to differentiate shifts continuously in a value chain as new waves of disruption wash over an industry. As this happens, companies that position themselves at a spot in the value chain where performance is not yet good enough will capture the profit.

  Our purpose in this chapter is to help managers understand how these processes of commoditization and de-commoditization work, so that they can detect when and where they are beginning to happen. We hope that this understanding can help those who are building growth businesses to do so in a place in the value chain where the forces of de-commoditization are at work. We also hope it helps those who are running established businesses to reposition their firms in the value chain to catch these waves of de-commoditization as well. To return to Wayne Gretzky’s insight about great hockey playing, we want to help managers develop the intuition for skating not to where the money presently is in the value chain, but to where the money will be.2

  The Processes of Commoditization and De-commoditization

  The process that transforms a profitable, differentiated, proprietary product into a commodity is the process of overshooting and modularization we described in chapter 5. At the leftmost side of the disruption diagram, the companies that are most successful are integrated companies that design and assemble the not-good-enough end-use products. They make attractive profits for two reasons. First, the interdependent, proprietary architecture of their products makes differentiation straightforward. Second, the high ratio of fixed to variable costs that often is inherent in the design and manufacture of architecturally interdependent products creates steep economies of scale that give larger competitors strong cost advantages and create formidable entry barriers against new competitors.

  This is why, for example, IBM, as the most integrated competitor in the mainframe computer industry, held a 70 percent market share but made 95 percent of the industry’s profits: It had proprietary products, strong cost advantages, and high entry barriers. For the same reasons, from the 1950s through the 1970s, General Motors, with about 55 percent of the U.S. automobile market, garnered 80 percent of the industry’s profits. Most of the firms that were suppliers to IBM and General Motors, in contrast, had to make do with subsistence profits year after year. These firms’ experiences are typical. Making highly differentiable products with strong cost advantages is a license to print money, and lots of it.3

  We must emphasize that the reason many companies don’t reach this nirvana or remain there for long is that it is the not-good-enough circumstance that enables managers to offer products with proprietary architectures that can be made with strong cost advantages versus competitors. When that circumstance changes—when the dominant, profitable companies overshoot what their mainstream customers can use—then this game can no longer be played, and the tables begin to turn. Customers will not pay still-higher prices for products they already deem too good. Before long, modularity rules, and commoditization sets in. When the relevant dimensions of your product’s performance are determined not by you but by the subsystems that you procure from your suppliers, it becomes difficult to earn anything more than subsistence returns in a product category that used to make a lot of money. When your world becomes modular, you’ll need to look elsewhere in the value chain to make any serious money.

  The natural and inescapable process of commoditization occurs in six steps:

  As a new market coalesces, a company develops a proprietary product that, while not good enough, comes closer to satisfying customers’ needs than any of its competitors. It does this through a proprietary architecture, and earns attractive profit margins.

  As the company strives to keep ahead of its direct competitors, it eventually overshoots the functionality and reliability that customers in lower tiers of the market can utilize.

  This precipitates a change in the basis of competition in those tiers, which . . .

  . . . precipitates an evolution toward modular architectures, which . . .

  . . . facilitates the dis-integration of the industry, which in turn . . .

  . . . makes it very difficult to differentiate the performance or costs of the product versus those of competitors, who have access to the same components and assemble according to the same standards. This condition begins at the bottom of the market, where functional overshoot occurs first, and then moves up inexorably to affect the higher tiers.

  Note that it is overshooting—the more-than-good-enough circumstance—that connects disruption and the phenomenon of commoditization. Disruption and commoditization can be seen as two sides of the same coin. A company that finds itself in a more-than-good-enough circumstance simply can’t win: Either disruption will steal its markets, or commoditization will steal its profits. Most incumbents eventually end up victims of both, because, although the pace of commoditization varies by industry, it is inevitable, and nimble new entrants rarely miss an opportunity to exploit a disruptive foothold.

  There can still be prosperity around the corner, however. The attractive profits of the future are often to be earned elsewhere in the value chain, in different stages or layers of added value. That’s because the process of commoditization initiates a reciprocal process of de-commoditization. Ironically, this de-commoditization—with the attendant ability to earn lots of money—occurs in places in the value chain where attractive profits were hard to attain in the past: in the formerly modular and undifferentiable processes, components, or subsystems.4

  To visualize the reciprocal process, remember the steel minimills from chapter 2. As long as the minimills were competing against integrated mills in the rebar market, they made a lot of money because they had a 20 percent cost advantage relative to the integrated mills. But as soon as they drove the last high-cost competitor out of the rebar market, the low-cost minimills found themselves slugging it out against equally low-cost minimills in a commodity market, and competition among them caused pricing to collapse. The assemblers of modular products generally receive the same reward for victory as the minimills did whenever they succeed in driving the higher-cost competitors and their proprietary architectures out of a tier in their market: The victorious disruptors are left to slug it out against equally low-cost disruptors who are assembling modular components procured from a common supplier base. Lacking any basis for competitive differentiation, only subsistence levels of profit remain. A low-cost strategy works only as long as there are higher-cost competitors left in the market.5

  The only way that modular disruptors can keep profits healthy is to carry their low-cost business models up-market as fast as possible so that they can keep competing at the margin against higher-cost makers of proprietary products. Assemblers of modular products do this by finding the best performance-defining components and subsystems and incorporating them in their products faster than anyone else.6 The assemblers need the very best performance-defining components in order to race up-market where they can make money again. Their demand for improvements in performance-defining components, as a result, throws the suppliers of those components back to the not-good-enough side of the disruption diagram.

  Competitive forces consequently compel suppliers of these performance-defining components to create architectures that, within the subsystems, are increasingly interdependent and proprietary. Hence, the performance-defining subsystems become de-commoditized as the result of the end-use products becoming modular and commoditized.

  Let us summarize the steps in this reciprocal process of de-commoditization:

  The low-cost strategy of modular product assemblers is only viable as long as they are competing against higher-cost opponents. This means that as soon as they drive the high-cost suppliers of proprietary products out of a tier of the market, they must move up-market to take them on again in order to continue to earn attractive profits.

  Because the mechanisms that constrain or determine how rapidly they can move up-market are the performance-defining subsystems, these elements become not good enough and are flipped to the left side of the disruption diagram.

  Competition among subsystem suppliers causes their engineers to devise designs that are increasingly proprietary and interdependent. They must do this as they strive to enable their customers to deliver better performance in their end-use products than the customers could if they used competitors’ subsystems.

  The leading providers of these subsystems therefore find themselves selling differentiated, proprietary products with attractive profitability.

  This creation of a profitable, proprietary product is the beginning, of course, of the next cycle of commoditization and de-commoditization.

  Figure 6-1 illustrates more generally how this worked in the product value chain of the personal computer industry in the 1990s. Starting at the top of the diagram, money flowed from the customer to the companies that designed and assembled computers; as the decade progressed, however, less and less of the total potential profit stayed with the computer makers—most of it flowed right through these companies to their suppliers.7

  As a result, quite a bit of the money that the assemblers got from their customers flowed over to Microsoft and lodged there. Another chunk flowed to Intel and stopped there. Money also flowed to the makers of dynamic random access memory (DRAM), such as Samsung and Micron, but not much of it stopped at those stages in the value chain in the form of profit. It flowed through and accumulated instead at firms like Applied Materials, which supplied the manufacturing equipment that the DRAM makers used. Similarly, money flowed right through the assemblers of modular disk drives, such as Maxtor and Quantum, and tended to lodge at the stage of value added where heads and disks were made.

  FIGURE 6-1

  Where the Money Was Made in the PC Industry’s Product Value Chain

  What is different about the baskets in the diagram that held money, versus those through which the money seemed to leak? The tight baskets in which profit accumulated for most of this period were products that were not yet good enough for what their immediate customers in the value chain needed. The architectures of those products therefore tended to be interdependent and proprietary. Firms in the leaky-basket situation could only hang onto subsistence profits because the functionality of their products tended to be more than good enough. Their architectures therefore were modular.

  If a company supplies a performance-defining but not-yet-good-enough input for its customers’ products or processes, it has the power to capture attractive profit. Consider the DRAM industry as an example. While the architecture of their own chips was modular, DRAM makers could not be satisfied even with the very best manufacturing equipment available. In order to succeed, DRAM makers needed to make their products at ever-higher yields and ever-lower costs. This rendered the functionality of equipment made by firms such as Applied Materials not good enough. The architecture of this equipment became interdependent and proprietary as a consequence, as the equipment makers strove to inch closer to the functionality that their customers needed.

  It is important never to conclude that an industry such as disk drives or DRAMs is inherently unprofitable, whereas others such as microprocessors or semiconductor manufacturing equipment are inherently profitable. “Industry” is usually a faulty categorization scheme.8 What makes an industry appear to be attractively profitable is the circumstance in which its companies happen to be at a particular point in time, at each point in the value-added chain, because the law of conservation of attractive profits is almost always at work (see the appendix to this chapter). Let’s take a deeper look at the disk drive industry to see why this is so.

  For most of the 1990s, in the market tiers where disk drives were sold to makers of desktop personal computers, the capacity and access times of the drives were more than adequate. The drives’ architectures consequently became modular, and the gross margins that the nonintegrated assemblers of 3.5-inch drives could eke out in the desktop PC segment declined to around 12 percent. Nonintegrated disk drive assemblers such as Maxtor and Quantum dominated this market (their collective market share exceeded 90 percent) because integrated manufacturers such as IBM could not survive on such razor-thin margins.

  The drives had adequate capacity, but the assemblers could not be satisfied even with the very best heads and disks available, because if they maximized the amount of data they could store per square inch of disk surface, they could use fewer disks and heads in the drives—a powerful lever for reducing cost. The heads and disks, consequently, became not good enough and evolved toward complex, interdependent subassemblies. Head and disk manufacturing became so profitable, in fact, that many major drive makers integrated backward into making their own heads and disks.9

  But it wasn’t the disk drive industry that was marginally profitable—it was the modular circumstance in which the 3.5-inch drive makers found themselves. The evidence: The much smaller 2.5-inch disk drives used in notebook computers tended not to have enough capacity during this same era. True to form, their architectures were interdependent, and the products had to be made by integrated companies. As the most integrated manufacturer and the one with the most advanced head and disk technology in the 1990s, IBM made 40 percent gross margins in 2.5-inch drives and controlled 80 percent of that market. In contrast, IBM had less than 3 percent of the unit volume in drives sold to the desktop PC market, where its integration rendered it uncompetitive.10

 
