One example is in diagnosing and resolving colon disorders. Until now, if a patient appeared to have a possible lesion or tumor in the colon, the physician would perform a colonoscopy in a relatively expensive clinic or hospital. Threading the flexible scope through a serpentine colon requires the skill of a very capable specialist. If the colonoscopy revealed a problem, the patient would be referred to an even higher-cost surgeon, who would operate to correct the problem in an even higher-cost hospital. This company is introducing a technology that is much easier to use and that will enable less-specialized diagnosing physicians to perform these procedures safely and effectively right in their offices—and thereby to pull into the cost structure of their offices value-added procedures that historically could be done only in more expensive channels.
This device could be marketed as a sustaining innovation to the specialists who already have mastered the difficult-to-use traditional scopes. You can imagine what the physician would ask the salesperson: “Why do I need this? Does it allow me to see better or do more than what I have right now? Is the scope cheaper? Won’t this thing break?” This is a sustaining-technology conversation.
If the company marketed this as a disruptive technology enabling less-specialized physicians to do this procedure in their offices, however, the physician would likely ask, “What will it take to get trained on this thing?” This is a disruptive conversation.
What kinds of customers will provide the most solid foundation for future growth? You want customers who have long wanted your product but were not able to get one until you arrived on the scene. You want to be able to easily delight these customers, and you want them to need you. You want customers whom you can have all to yourself, protected from the advances of competitors. And you want your customers to be so attractive to those you work with that everyone in your value network is motivated to cooperate in pursuing the opportunity.
The search for customers like this is not a quixotic quest. These are the kinds of customers that you find when you shape innovative ideas to fit the four elements of the pattern of competing against nonconsumption.
Despite how appealing these kinds of customers appear to be on paper, the resource allocation process forces most companies, when faced with an opportunity like this, to pursue exactly the opposite kinds of customers: those who already are using a product to which they have become accustomed. To escape this dilemma, managers need to frame the disruption as a threat in order to secure resource commitments, and then, for the team charged with building the business, switch the framing to a search for growth opportunities. Carefully managing this process in order to focus on these ideal customers can give new-growth ventures a solid foundation for future growth.
Notes
1. Economists have great language for this phenomenon. As the performance of a product overshoots what customers are able to utilize, the customers experience diminishing marginal utility with each increment in product performance. Over time the marginal price that customers are willing to pay for an improvement comes to equal the marginal utility that they receive from consuming the improvement. When the marginal increase in price that a company can sustain in the market for an improved product approaches zero, it suggests that the marginal utility that customers derive from using the product also is approaching zero.
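Stated formally (a sketch in notation of our own, not the authors’): if $U(q)$ denotes the utility a customer derives from performance level $q$, the premium the customer will pay for an increment $\Delta q$ is roughly the marginal utility it delivers,
$$\Delta P \approx U'(q)\,\Delta q,$$
so as overshooting drives $U'(q)$ toward zero, the sustainable price premium $\Delta P$ falls toward zero as well.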
2. We stated earlier that few technologies are intrinsically sustaining or disruptive in character. These are extremes in a continuum, and the disruptiveness of an innovation can only be described relative to various companies’ business models, to customers, and to other technologies. What the transistor case illustrates is that attempting to commercialize some technologies as sustaining innovations in large and obvious markets is very costly.
3. Figure 4-2 was constructed from data provided by the American Heart Association National Center. Because these data measure only those procedures performed in hospitals, angioplasty procedures that were performed in outpatient and other nonhospital settings are not included. This means that the angioplasty numbers in the chart are underestimated, and that the underestimation becomes more significant over time.
4. There are many other examples of this, in addition to those cited in the text. For example, full-service stock brokers such as Merrill Lynch continue to move up-market in their original value network toward clients of even larger net worth, and their top and bottom lines improve as they do so. They do not yet feel the pain that they ultimately will experience as the online discount brokers find ways to provide ever-better service.
5. See Clark Gilbert and Joseph L. Bower, “Disruptive Change: When Trying Harder Is Part of the Problem,” Harvard Business Review, May 2002, 94–101; and Clark Gilbert, “Can Competing Frames Co-exist? The Paradox of Threatened Response,” working paper 02-056 (Boston: Harvard Business School, 2002).
6. Daniel Kahneman and Amos Tversky, “Choices, Values, and Frames,” American Psychologist 39 (1984): 341–350. Kahneman and Tversky published prodigiously on these issues. This reference is simply an example of their work.
7. The phenomenon of threat rigidity has been examined by a number of scholars, notably Jane Dutton and her colleagues. See, for example, Jane E. Dutton and Susan E. Jackson, “Categorizing Strategic Issues: Links to Organizational Action,” Academy of Management Review 12 (1987): 76–90; and Jane E. Dutton, “The Making of Organizational Opportunities: An Interpretive Pathway to Organizational Change,” Research in Organizational Behavior 15 (1992): 195–226.
8. Arthur Stinchcombe has written eloquently on the proposition that getting the initial conditions right is key to causing subsequent events to happen as desired. See Arthur Stinchcombe, “Social Structure and Organizations,” in Handbook of Organizations, ed. James March (Chicago: Rand McNally, 1965), 142–193.
9. Clark Gilbert, “Pandesic—The Challenges of a New Business Venture,” case 9-399-129 (Boston: Harvard Business School, 2000).
CHAPTER FIVE
GETTING THE SCOPE OF THE BUSINESS RIGHT
Which activities should a new-growth venture do internally in order to be as successful as possible as fast as possible, and which should it outsource to a supplier or a partner? Will success be best built around a proprietary product architecture, or should the venture embrace modular, open industry standards? What causes the evolution from closed and proprietary product architectures to open ones? Might companies need to adopt proprietary solutions again, once open standards have emerged?
Decisions about what to in-source and what to procure from suppliers and partners have a powerful impact on a new-growth venture’s chances for success. A widely used theory to guide this decision is built on the concept of core competence: If an activity fits your core competence, you should do it inside. If it is not your core competence and another firm can do it better, the theory goes, you should rely on that firm to provide it.1
Right? Well, sometimes. The problem with the core-competence/not-your-core-competence categorization is that what might seem to be a noncore activity today might become an absolutely critical competence to have mastered in a proprietary way in the future, and vice versa.
Consider, for example, IBM’s decision to outsource the microprocessor for its PC business to Intel, and its operating system to Microsoft. IBM made these decisions in the early 1980s in order to focus on what it did best—designing, assembling, and marketing computer systems. Given its history, these choices made perfect sense. Component suppliers to IBM historically had lived a miserable, profit-free existence, and the business press widely praised IBM’s decision to outsource these components of its PC. Outsourcing dramatically reduced the cost and time required for development and launch. And yet in the process of outsourcing what it did not perceive to be core to the new business, IBM put into business the two companies that subsequently captured most of the profit in the industry.
How could IBM have known in advance that such a sensible decision would prove so costly? More broadly, how can any executive who is launching a new-growth business, as IBM was doing with its PC division in the early 1980s, know which value-added activities are those in which future competence needs to be mastered and kept inside?2
Because evidence from the past can be such a misleading guide to the future, the only way to see accurately what the future will bring is to use theory. In this case, we need a circumstance-based theory to describe the mechanism by which activities become core or peripheral. Describing this mechanism and showing how managers can use the theory is the purpose of chapters 5 and 6.
Integrate or Outsource?
IBM and others have demonstrated—inadvertently, of course—that the core/noncore categorization can lead to serious and even fatal mistakes. Instead of asking what their company does best today, managers should ask, “What do we need to master today, and what will we need to master in the future, in order to excel on the trajectory of improvement that customers will define as important?”
The answer begins with the job-to-be-done approach: Customers will not buy your product unless it solves an important problem for them. But what constitutes a “solution” differs across the two circumstances in figure 5-1: whether products are not good enough or are more than good enough. The advantage, we have found, goes to integration when products are not good enough, and to outsourcing—or specialization and dis-integration—when products are more than good enough.
FIGURE 5-1
Product Architectures and Integration
To explain, we need to explore the engineering concepts of interdependence and modularity and their importance in shaping a product’s design. We will then return to figure 5-1 to see these concepts at work in the disruption diagram.
Product Architecture and Interfaces
A product’s architecture determines its constituent components and subsystems and defines how they must interact—fit and work together—in order to achieve the targeted functionality. The place where any two components fit together is called an interface. Interfaces exist within a product, as well as between stages in the value-added chain. For example, there is an interface between design and manufacturing, and another between manufacturing and distribution.
An architecture is interdependent at an interface if one part cannot be created independently of the other part—if the way one is designed and made depends on the way the other is being designed and made. When there is an interface across which there are unpredictable interdependencies, then the same organization must simultaneously develop both of the components if it hopes to develop either component.
Interdependent architectures optimize performance, in terms of functionality and reliability. By definition, these architectures are proprietary because each company will develop its own interdependent design to optimize performance in a different way. When we use the term interdependent architecture in this chapter, readers can substitute as synonyms optimized and proprietary architecture.
In contrast, a modular interface is a clean one, in which there are no unpredictable interdependencies across components or stages of the value chain. Modular components fit and work together in well-understood and highly defined ways. A modular architecture specifies the fit and function of all elements so completely that it doesn’t matter who makes the components or subsystems, as long as they meet the specifications. Modular components can be developed in independent work groups or by different companies working at arm’s length.
Modular architectures optimize flexibility, but because they require tight specification, they give engineers fewer degrees of freedom in design. As a result, modular flexibility comes at the sacrifice of performance.3
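In software, where both ends of this spectrum are familiar, the contrast is easy to make concrete. The sketch below is purely illustrative (the interface, its names, and the TypeScript rendering are ours, not the book’s): because the interface’s fit and function are fully specified, any conforming implementation can be built and swapped in by an independent team.

```typescript
// A modular interface: fit and function are specified completely, so
// independent teams can build interchangeable implementations.
interface StorageModule {
  put(key: string, value: string): Promise<void>;
  get(key: string): Promise<string | null>;
}

// One conforming implementation, developed at arm's length.
class InMemoryStore implements StorageModule {
  private data = new Map<string, string>();
  async put(key: string, value: string): Promise<void> {
    this.data.set(key, value);
  }
  async get(key: string): Promise<string | null> {
    return this.data.get(key) ?? null;
  }
}

// Client code depends only on the published interface, so it does not
// matter who supplies the module, as long as it meets the specification.
async function demo(store: StorageModule): Promise<void> {
  await store.put("order-42", "shipped");
  console.log(await store.get("order-42"));
}

demo(new InMemoryStore());
```

An interdependent design would instead let the client reach directly into the store’s internal data structures for speed: performance improves, but the two pieces can no longer be designed or upgraded separately, which is exactly the trade-off described above.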
Pure modularity and interdependence are the ends of a spectrum: Most products fall somewhere between these extremes. As we shall see, companies are more likely to succeed when they match product architecture to their competitive circumstances.
Competing with Interdependent Architecture in a Not-Good-Enough World
The left side of figure 5-1 indicates that when there is a performance gap—when product functionality and reliability are not yet good enough to address the needs of customers in a given tier of the market—companies must compete by making the best possible products. In the race to do this, firms that build their products around proprietary, interdependent architectures enjoy an important competitive advantage against competitors whose product architectures are modular, because the standardization inherent in modularity takes too many degrees of design freedom away from engineers, and they cannot optimize performance.
To close the performance gap with each new product generation, competitive forces compel engineers to fit the pieces of their systems together in ever-more-efficient ways in order to wring the most performance possible out of the technology that is available. When firms must compete by making the best possible products, they cannot simply assemble standardized components, because from an engineering point of view, standardization of interfaces (meaning fewer degrees of design freedom) would force them to back away from the frontier of what is technologically possible. When the product is not good enough, backing off from the best that can be done means that you’ll fall behind.
Companies that compete with proprietary, interdependent architectures must be integrated: They must control the design and manufacture of every critical component of the system in order to make any piece of the system. As an illustration, during the early days of the mainframe computer industry, when functionality and reliability were not yet good enough to satisfy the needs of mainstream customers, you could not have existed as an independent contract manufacturer of mainframe computers because the way the machines were designed depended on the art that would be used in manufacturing, and vice versa. There was no clean interface between design and manufacturing. Similarly, you could not have existed as an independent supplier of operating systems, core memory, or logic circuitry to the mainframe industry because these key subsystems had to be interdependently and iteratively designed, too.4
New, immature technologies are often drafted into use as sustaining improvements when functionality is not good enough. One reason why entrant companies rarely succeed in commercializing a radically new technology is that breakthrough sustaining technologies are rarely plug-compatible with existing systems of use.5 There are almost always many unforeseen interdependencies that mandate change in other elements of the system before a viable product that incorporates a radically new technology can be sold. This makes the new product development cycle tortuously long when breakthrough technology is expected to be the foundation for improved performance. The use of advanced ceramic materials in engines, the deployment of high-bandwidth DSL lines at the “last mile” of the telecommunications infrastructure, the building of superconducting electric motors for ship propulsion, and the transition from analog to digital to all-optical telecommunications networks could be accomplished only by extensively integrated companies whose scope could encompass all of the interdependencies that needed to be managed. This is treacherous terrain for entrants.
For these reasons it wasn’t just IBM that dominated the early computer industry by virtue of its integration. Ford and General Motors, as the most integrated companies, were the dominant competitors during the not-good-enough era of the automobile industry’s history. For the same reasons, RCA, Xerox, AT&T, Standard Oil, and US Steel dominated their industries at similar stages. These firms enjoyed near-monopoly power. Their market dominance was the result of the not-good-enough circumstance, which mandated interdependent product or value chain architectures and vertical integration.6 But their hegemony proved only temporary, because ultimately, companies that have excelled in the race to make the best possible products find themselves making products that are too good. When that happens, the intricate fabric of success of integrated companies like these begins to unravel.
Overshooting and Modularization
One symptom that these changes are afoot—that the functionality and reliability of a product have become too good—is that salespeople will return to the office cursing a customer: “Why can’t they see that our product is better than the competition? They’re treating it like a commodity!” This is evidence of overshooting. Such companies find themselves on the right side of figure 5-1, where there is a performance surplus. Customers are happy to accept improved products, but they’re unwilling to pay a premium price to get them.7
Overshooting does not mean that customers will no longer pay for improvements. It just means that the type of improvement for which they will pay a premium price will change. Once their requirements for functionality and reliability have been met, customers begin to redefine what is not good enough. What becomes not good enough is that customers can’t get exactly what they want exactly when they need it, as conveniently as possible. Customers become willing to pay premium prices for improved performance along this new trajectory of innovation in speed, convenience, and customization. When this happens, we say that the basis of competition in a tier of the market has changed.
The pressure of competing along this new trajectory of improvement forces a gradual evolution in product architecture, as depicted in figure 5-1—away from the interdependent, proprietary architectures that had the advantage in the not-good-enough era toward modular designs in the era of performance surplus. Modular architectures help companies to compete on the dimensions that matter in the lower-right portions of the disruption diagram. Companies can introduce new products faster because they can upgrade individual subsystems without having to redesign everything. Although standard interfaces invariably force compromise in system performance, firms have the slack to trade away some performance with these customers because functionality is more than good enough.