Everything Is Obvious
This is the strategy paradox. The main cause of strategic failure, Raynor argues, is not bad strategy, but great strategy that just happens to be wrong. Bad strategy is characterized by lack of vision, muddled leadership, and inept execution—not the stuff of success for sure, but more likely to lead to persistent mediocrity than colossal failure. Great strategy, by contrast, is marked by clarity of vision, bold leadership, and laser-focused execution. When applied to just the right set of commitments, great strategy can lead to resounding success—as it did for Apple with the iPod—but it can also lead to resounding failure. Whether great strategy succeeds or fails therefore depends entirely on whether the initial vision happens to be right or not. And that is not just difficult to know in advance, but impossible.
STRATEGIC FLEXIBILITY
The solution to the strategy paradox, Raynor argues, is to acknowledge openly that there are limits to what can be predicted, and to develop methods for planning that respect those limits. In particular, he recommends that planners look for ways to integrate what he calls strategic uncertainty—uncertainty about the future of the business you’re in—into the planning process itself. Raynor’s solution, in fact, is a variant of a much older planning technique called scenario planning, which was developed by Herman Kahn of the RAND Corporation in the 1950s as an aid for cold war military strategists. The basic idea of scenario planning is to create what strategy consultant Charles Perrottet calls “detailed, speculative, well thought out narratives of ‘future history.’ ” Critically, however, scenario planners attempt to sketch out a wide range of these hypothetical futures, where the main aim is not so much to decide which of these scenarios is most likely as to challenge possibly unstated assumptions that underpin existing strategies.18
In the early 1970s, for example, the economist and strategist Pierre Wack led a team at Royal Dutch/Shell that used scenario planning to test senior management’s assumptions about the future success of oil exploration efforts, the political stability of the Middle East, and the emergence of alternative energy technologies. Although the main scenarios were constructed in the relatively placid years of energy production before the oil shocks of the 1970s and the subsequent rise of OPEC—events that definitely fall into the black swan category—Wack later claimed that the main trends had indeed been captured in one of his scenarios, and that the company was as a result better prepared both to exploit emerging opportunities and to hedge against potential pitfalls.19
Once these scenarios have been sketched out, Raynor argues that planners should formulate not one strategy, but rather a portfolio of strategies, each of which is optimized for a given scenario. In addition, one must differentiate core elements that are common to all these strategies from contingent elements that appear in only one or a few of them. Managing strategic uncertainty is then a matter of creating “strategic flexibility” by building strategies around the core elements and hedging the contingent elements through investments in various strategic options. In the Betamax case, for example, Sony expected the dominant use of VCRs would be to tape TV shows for later viewing, but it did have some evidence from the CTI experiment that the dominant use might instead turn out to be home movie viewing. Faced with these possibilities, Sony adopted a traditional planning approach, deciding first which of these outcomes they considered more likely, and then optimizing their strategy around that outcome. Optimizing for strategic flexibility, by contrast, would have led Sony to identify elements that would have worked no matter which version of the future played out, and then to hedge the residual uncertainty, perhaps by tasking different operating divisions to develop higher- and lower-quality models to be sold at different price points.
Raynor’s approach to managing uncertainty through strategic flexibility is certainly intriguing. However, it is also a time-consuming process—constructing scenarios, deciding what is core and what is contingent, devising strategic hedges, and so on—that necessarily diverts attention from the equally important business of running a company. According to Raynor, the problem with most companies is that their senior management, meaning the board of directors and the top executives, spends too much time managing and optimizing their existing strategies—what he calls operational management—and not enough thinking through strategic uncertainty. Instead, he argues that they should devote all their time to managing strategic uncertainty, leaving the operational planning to division heads. As he puts it, “The board of directors and CEO of an organization should not be concerned primarily with the short-term performance of the organization, but instead occupy themselves with creating strategic options for the organization’s operating divisions.”20
Raynor’s justification for this radical proposal is that the only way to deal adequately with strategic uncertainty is to manage it continuously—“Once an organization has gone through the process of building scenarios, developing optimal strategies, and identifying and acquiring the desired portfolio of strategic options, it is time to do it all over again.” And if indeed strategic planning requires such a continuous loop, it does make a kind of sense that the best people to be doing it are senior management. Nevertheless, it is hard to imagine how senior managers can suddenly stop doing the sort of planning that got them promoted to senior management in the first place and start acting like an academic think tank. Nor does it seem likely that shareholders or even employees would tolerate a CEO who didn’t consider it his or her business to execute strategy or to worry about short-term performance.21 This isn’t to say that Raynor isn’t right—he may be—just that his proposals have not exactly been embraced by corporate America.
FROM PREDICTION TO REACTION
A more fundamental concern is that even if senior management did embrace Raynor’s brand of strategic management as their primary task, it may still not work. Consider the example of a Houston-based oilfield drilling company that engaged in a scenario-planning exercise around 1980. As shown in the figure on this page, the planners identified three different scenarios that they considered to represent the full range of possible futures, and they plotted out the corresponding predicted yields—exactly what they were supposed to do. Unfortunately, none of the scenarios considered the possibility that the boom in oil exploration that had begun in 1980 might be a historical aberration. In fact, that’s exactly what it turned out to be, and as a result the actual future that unfolded wasn’t anywhere within the ballpark of possibilities that the participants had envisaged. Scenario planning, therefore, left the company just as unprepared for the future as if they hadn’t bothered to use the method at all. Arguably, in fact, the exercise had left them in an even worse position. Although it had accomplished its goal of challenging their initial assumptions, it had ultimately increased their confidence that they had considered the appropriate range of scenarios, which of course they hadn’t, and therefore left them even more vulnerable to surprise than before.22
Scenario planning gone wrong (reprinted from Schoemaker 1991)
Possibly, this bad outcome was merely a consequence of poor execution of scenario planning, not a fundamental limitation of the method.23 But how is a firm in the throes of a scenario analysis supposed to know that it isn’t making the same mistake as the oil producer? Perhaps Sony could have taken the home video market more seriously, but what killed them was really the speed with which it exploded. It’s hard to see how they could have anticipated that. Even worse, when developing the MiniDisc, it’s unclear how Sony could possibly have anticipated the complicated combination of technological, economic, and cultural changes that arrived in short order with the explosive growth of the Internet. As Raynor puts it, “Not only did everything that could go wrong for Sony actually go wrong, everything that went wrong had to go wrong in order to sink what was in fact a brilliantly conceived and executed strategy.”24 So although more flexibility in their strategy might have helped, it’s unclear how much flexibility they would have needed in order to adapt to such a radically shifting marketplace, or how they could have accomplished the requisite hedging without undermining their ability to execute any one strategy in particular.
Ultimately, the main problem with strategic flexibility as a planning approach is precisely the same problem that it is intended to solve—namely that in hindsight the trends that turned out to shape a given industry always appear obvious. And as a result, when we revisit history it is all too easy to persuade ourselves that had we been faced with a strategic decision “back then,” we could have boiled down the list of possible futures to a small number of contenders—including, of course, the one future that did in fact transpire. But when we look to our own future, what we see instead is myriad potential trends, any one of which could be game changing and most of which will prove fleeting or irrelevant. How are we to know which is which? And without knowing what is relevant, how wide a range of possibilities should we consider? Techniques like scenario planning can help managers think through these questions in a systematic way. Likewise, an emphasis on strategic flexibility can help them manage the uncertainty that the scenarios expose. But no matter how you slice it, strategic planning involves prediction, and prediction runs into the fundamental “prophecy” problem I discussed in the previous chapter—that we just can’t know what it is that we should be worrying about until after its importance has been revealed to us. An alternative approach, therefore—and the subject of the next chapter—is to rethink the whole philosophy of planning altogether, placing less emphasis on anticipating the future, or even multiple futures, and more on reacting to the present.
CHAPTER 8
The Measure of All Things
Of all the prognosticators, forecasters, and fortune-tellers, few are at once more confident and yet less accountable than those in the business of predicting fashion trends. Every year, the various industries in the business of designing, producing, selling, and commenting on shoes, clothing, and apparel are awash in predictions for what could be, might be, should be, and surely will be the next big thing. That these predictions are almost never checked for accuracy, that so many trends arrive unforeseen, and that the explanations given for them are only possible in hindsight, seems to have little effect on the breezy air of self-assurance that the arbiters of fashion so often exude. So it’s encouraging that at least one successful fashion company pays no attention to any of it.
That company is Zara, the Spanish clothing retailer that has made business press headlines for over a decade with its novel approach to satisfying consumer demand. Rather than trying to anticipate what shoppers will buy next season, Zara effectively acknowledges that it has no idea. Instead, it adopts what we might call a measure-and-react strategy. First, it sends out agents to scour shopping malls, town centers, and other gathering places to observe what people are already wearing, thereby generating lots of ideas about what might work. Second, drawing on these and other sources of inspiration, it produces an extraordinarily large portfolio of styles, fabrics, and colors—where each combination is initially made in only a small batch—and sends them out to stores, where it can then measure directly what is selling and what isn’t. And finally, it has a very flexible manufacturing and distribution operation that can react quickly to the information that is coming directly from stores, dropping those styles that aren’t selling (with relatively little left-over inventory) and scaling up those that are. All this depends on Zara’s ability to design, produce, ship, and sell a new garment anywhere in the world in just over two weeks—a stunning accomplishment to anyone who has waited in limbo for just about any designer good that isn’t on the shelf.1
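The core of Zara’s approach is a simple feedback loop: seed the market with many small batches, measure what sells, drop the losers, and scale up the winners. A minimal sketch of one cycle of that loop follows; the function name, thresholds, and scaling rule are illustrative assumptions of mine, not a description of Zara’s actual systems.

```python
def react_to_sales(weekly_sales, drop_below=10, scale=3):
    """One measure-and-react cycle.

    weekly_sales: {style_name: units_sold} measured directly in stores.
    Styles selling below `drop_below` are discontinued (the small initial
    batch means little leftover inventory); the rest are reordered at a
    multiple of observed demand. Returns the new production plan.
    """
    return {style: sold * scale
            for style, sold in weekly_sales.items()
            if sold >= drop_below}

# Example: one slow-moving style is dropped, one fast seller is scaled up.
plan = react_to_sales({"floral": 50, "plaid": 2}, drop_below=10, scale=3)
```

The point of the sketch is that no prediction appears anywhere in it: the only input is what customers have already done.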
Ten years before Zara became a business-school case study, management theorist Henry Mintzberg anticipated their measure-and-react approach in a concept that he called “emergent strategy.” Reflecting on the problem raised in the previous chapter—that traditional strategic planning invariably requires planners to make predictions about the future, leaving them vulnerable to inevitable errors—Mintzberg recommended that planners should rely less on making predictions about long-term strategic trends and more on reacting quickly to changes on the ground. Rather than attempting to anticipate correctly what will work in the future, that is, they should instead improve their ability to learn about what is working right now. Then, like Zara, they should react to it as rapidly as possible, dropping alternatives that are not working—no matter how promising they might have seemed in advance—and diverting resources to those that are succeeding, or even developing new alternatives on the fly.2
BUCKETS, MULLETS, AND CROWDS
Nowhere are the virtues of a measure-and-react strategy more apparent than in the online world, where the combination of low-cost development, large numbers of users, and rapid feedback cycles allows for many variants of virtually everything to be tested and selected on the basis of performance. Before Yahoo! rolled out its new home page in 2009, for example, the company spent months “bucket testing” every element of the design. Roughly 100 million people have Yahoo! as their home page, which in turn drives a great deal of traffic to other Yahoo! properties; so any changes have to be made with caution. Throughout the redesign process, therefore, whenever the home-page team came up with an idea for a new design element, a tiny percentage of users—the “bucket”—would be randomly chosen to see a version of the page containing the element. Then through a combination of user feedback and observational metrics like how long the users in the bucket stayed on the page, or what they clicked on, and comparing them to ordinary users, the home-page team could assess whether the element created a positive or negative effect. In this way, the company was able to learn what would work and what would not in real time and with real audience data.3
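The mechanics of a bucket test are straightforward: deterministically assign a small random fraction of users to see the variant, then compare a behavioral metric between the bucket and everyone else. The sketch below shows one common way to do this; the hash-based assignment and the time-on-page metric are my own illustrative assumptions, not Yahoo!’s implementation.

```python
import hashlib

def in_bucket(user_id, bucket_pct=1.0, salt="homepage-redesign"):
    """Assign roughly bucket_pct percent of users to the test bucket.

    Hashing user_id together with an experiment-specific salt gives a
    stable, effectively random assignment: the same user always sees the
    same version, and different experiments get independent buckets.
    """
    h = int(hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest(), 16)
    return (h % 10000) < bucket_pct * 100

def metric_lift(bucket_values, control_values):
    """Difference in mean metric (e.g., seconds on page) between groups."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(bucket_values) - mean(control_values)
```

In practice the comparison would also involve a significance test, but the essential move is the same: let real audience behavior, not a designer’s prediction, decide whether the element stays.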
Bucket testing is now routine. Major Web companies like Google, Yahoo!, and Microsoft use it to optimize ad placement, content selection, search results, recommendations, pricing, even page layout.4 A growing number of startup companies have also begun to offer advertisers automated services that winnow down a large suite of potential ads to those that perform the best, as measured by click-through rate.5 But the measure-and-react philosophy of planning is not restricted to learning how consumers will respond to options they are presented with—it can also include the consumer as a producer of content. In the media world, this view is exemplified by what Huffington Post cofounder Jonah Peretti calls the Mullet Strategy, after the much-maligned hairstyle characterized by “business up front, party in the back.”
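Winnowing a suite of ads down to the best performers by click-through rate is, in essence, a multi-armed bandit problem. One textbook approach is an epsilon-greedy policy: mostly show the ad with the best observed CTR, but occasionally show a random one so that underexposed ads still get a chance. The sketch below is my own illustration of that general technique, not any particular vendor’s service.

```python
import random

class EpsilonGreedyAds:
    """Serve mostly the best-CTR ad, but keep exploring alternatives."""

    def __init__(self, ad_ids, epsilon=0.1, seed=None):
        self.epsilon = epsilon                      # exploration rate
        self.rng = random.Random(seed)
        self.shows = {ad: 0 for ad in ad_ids}
        self.clicks = {ad: 0 for ad in ad_ids}

    def ctr(self, ad):
        """Observed click-through rate; zero until the ad has been shown."""
        return self.clicks[ad] / self.shows[ad] if self.shows[ad] else 0.0

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.shows))    # explore at random
        return max(self.shows, key=self.ctr)            # exploit best CTR

    def record(self, ad, clicked):
        """Update counts after an impression."""
        self.shows[ad] += 1
        self.clicks[ad] += int(clicked)
```

The appeal for advertisers is the same as for Zara: the winnowing is driven entirely by measured behavior, with no need to predict in advance which creative will resonate.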
The Mullet Strategy starts from the conventional view that user-generated content is a potential gold mine for media companies, in part because users can greatly amplify and extend the content of, say, a news story. But even more, it allows users to participate in conversations around the story that change the nature of the experience—from pure consumption to participation—thereby increasing their engagement and loyalty. Just as is true in real gold mines, however, a lot of user-generated content is closer to dirt than gold. As anyone who reads popular blogs or news sites can attest, many user comments are wildly inaccurate or simply dumb, and some of them are downright mean. Regardless, they do not constitute the kind of content that publishers want to promote or that advertisers want to be seen next to. Moderating online comments is the obvious solution to this problem, but it tends to alienate users, who resent the oversight and want to see their comments posted without filters. Editorial oversight also doesn’t scale well, as the Huffington Post quickly discovered: A handful of editors simply can’t read fast enough to keep up with potentially hundreds of blog posts every day. The solution is the Mullet Strategy: In the back pages where few people will see any particular story, let a thousand flowers bloom (or a million of them); then selectively promote material from the back to the front page, with all its premium advertising space, and keep that under strict editorial control.6
The Mullet Strategy is also an example of “crowdsourcing,” a term coined in a 2006 Wired article by Jeff Howe to describe the outsourcing of small jobs to potentially very large numbers of individual workers. Online journalism, in fact, is increasingly moving toward a crowdsourced model—not just for generating community activity around a news story but also for creating the stories themselves, or even deciding what topics to cover in the first place. The Huffington Post, for example, relies on thousands of unpaid bloggers who contribute content either out of passion for the topic they write about or else to benefit from the visibility they receive from being published on a widely read news site. Other sites, like Examiner.com, meanwhile, retain armies of contributors to write about specific topics of interest to them, and pay them by the page view. And finally, sites like Yahoo!’s news blog “The Upshot” and Associated Content not only crowdsource the writing work but also track search queries and other indicators of current crowd interest to decide which topics to write about.7
The idea of measuring audience interest and reacting to it in close to real time has also started to gain traction beyond the revenue-challenged world of news media. For example, the cable channel Bravo regularly spins off new reality TV shows from its existing shows by tracking online buzz surrounding different characters. The shows can be launched quickly and at relatively low cost—and if they don’t perform, the channel can quickly pull the plug. Following a similar principle, Cheezburger Network—a collection of nearly fifty websites featuring goofy user-contributed photos and videos, most with funny captions—is capable of launching a site within a week of noticing a new trend, and kills unsuccessful sites just as quickly. And BuzzFeed—a platform for launching “contagious media”—keeps track of hundreds of potential hits and only promotes those that are already generating enthusiastic responses from users.8