The Design of Business: Why Design Thinking Is the Next Competitive Advantage


by Roger L. Martin


  A business that is overweighted toward reliability will erect organizational structures, processes, and norms that drive out the pursuit of valid answers to new questions. It fails to balance its pursuit of reliability with the equally important pursuit of validity, leaving it ill-positioned to solve mysteries and move knowledge along the funnel. Such organizations inevitably come to see maintenance of the status quo as an end in itself, short-circuiting their ability to design and redesign themselves continuously. This wouldn’t be such a big problem if the world never changed; in those circumstances, continuing to replicate the success model would make lots of sense. However, as we all know, the world is continuously changing, and with every change, crucial new mysteries spring up that reliable systems simply won’t address or even acknowledge. By implicitly or explicitly focusing on reliability only, organizations deny themselves the immense value that can be unleashed by balancing reliability and validity in a design-thinking organization and expose themselves to the risk of being outflanked by a new entrant. The business that fails to balance reliability and validity will find itself flat-footed when rivals advance knowledge through the funnel.

  But why do so many businesses have such a pronounced tilt toward reliability if that tilt does not serve their long-term interest? The short answer is that the modes of reasoning that produce reliable outcomes are familiar to businesspeople from long exposure and experience. The mode of reasoning that produces valid outcomes is sufficiently unfamiliar that it is often seen as no reasoning at all. Given those baseline attitudes, it is no surprise that most firms put reliability at the center of the business universe and drive validity to the margins.

  In most large business organizations, three forces converge to enshrine reliability and marginalize validity: the demand that an idea be proved before it is implemented, an aversion to bias, and the constraints of time.

  The Persistence of the Past

  The demand for proof might be the most powerful of those forces, dominating as it does senior management’s deliberations about allocating capital. Embedded in the verb prove are meanings that implicitly privilege reliability over validity. In corporations, to prove something means to look at the past and apply one of two forms of logic—inductive or deductive—to produce a declaration that something is or is not true.

  If, in 2007, General Motors marketing executives wanted to prove that the corporation should focus on producing and marketing full-size pickup trucks and SUVs, they could cite sales, margin, and cost and profit data from the previous ten years to inductively prove their case. Indeed, those vehicles had generated the company’s highest returns in the past. Alternatively, if executives at PepsiCo wanted to gain approval of a marketing plan, they would invoke a principle established by more than a century of continuous operation: increase market share and profits will follow as night follows day. That is deductive logic—application of a general rule derived from past experience—and Pepsi executives invoke it to prove that if their plan produces market-share growth, it will of necessity increase profits.

  Both these forms of analytical logic draw on past experience to predict the future. It is no accident that the future predicted through analytical methods closely resembles the past, differing in degree but not in kind. If a system has produced a consistent result over time—either over such a long period and so universally that it becomes a deductive rule, or over enough repetitions to support a statistically significant induction—it is by definition reliable, and past data can be adduced to prove its reliability.

  In an environment that relies primarily on analytical reasoning as a guide to action, past experience carries great argumentative weight. It nearly always prevails against proposals that can only be proven by future events. Because it is so well suited to satisfying the organizational demand for proof, reliability almost always trumps validity. But it is all too often a hollow victory. When the future takes a different course than the path the data predicted for it, all the proofs in the world are unavailing. Just ask the GM executives who invoked data from the recent past to make pickups and SUVs their production and marketing priority in 2008.

  The Attempt to Eliminate Bias

  Data, though imperfect as a predictor of the future, prevailed in part because it satisfied another demand of business: that decisions be free of the taint of bias. By eliminating bias and subjective judgment from common business decisions, corporations can achieve massive scale and efficiency by honing their decision-making apparatus into an algorithm, indeed into the ultimate algorithm: computer code. Remember, computers don’t exercise judgment. They are fast because they don’t think. At any point as they process the source code that dictates their operation, they ask only one question: am I looking at a one or a zero? Such algorithms have given the world credit-scoring systems, insurance pricing systems, and targeted marketing systems such as Amazon’s product recommendations, all of which process masses of bias-free empirical data to allocate credit, set premium prices, or place product offerings in front of individual consumers.
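  To make that concrete, here is a minimal sketch of such an algorithm, assuming a purely hypothetical credit-scoring rule with invented variables and weights (it is not any real scoring model); it applies the same arithmetic to every applicant, with no judgment anywhere in the loop.

```python
# Illustrative only: a toy credit-scoring algorithm with invented weights.
# It applies the same fixed formula to every applicant, with no room for
# human judgment, which is exactly what makes it reliable.
def credit_score(on_time_payment_rate, credit_utilization,
                 years_of_history, recent_inquiries):
    score = 300.0
    score += 300 * on_time_payment_rate        # payment history
    score += 150 * (1 - credit_utilization)    # lower utilization scores higher
    score += min(years_of_history, 20) * 5     # capped length-of-history bonus
    score -= min(recent_inquiries, 5) * 10     # penalty for recent inquiries
    return round(min(score, 850))

# Identical inputs always produce an identical score.
print(credit_score(on_time_payment_rate=0.98, credit_utilization=0.25,
                   years_of_history=12, recent_inquiries=1))
```

  The output is perfectly reliable; whether the weights themselves are valid is a question the code never asks.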

  But the market’s response suggests that such reliability-based approaches are less than fully satisfactory. No one can argue that credit-scoring systems and the like show bias or lack objectivity. But that does not make them popular. People who have been subjected to any of these systems feel the depersonalizing, dehumanizing effect of seeing their character and experience reduced to a numerical score. They object to the notion that something as personal as one’s tastes in books or music can be reduced to a formula. It is little consolation that the formula is bias-free.

  The Pressures of Time

  The third reason that reliability tends to trump validity in business settings is, quite simply, time. A reliable system can generate tremendous time savings; once designed, it eliminates the need for subjective and thoughtful analysis by an expensive and time-pressed manager or professional. Hence the appeal of automated asset-allocation systems at investment advisory firms: before new clients even meet an adviser, they complete a questionnaire designed to reliably assess their investment horizons, risk tolerance, and investment goals. The data feeds into a program that impersonally graphs the recommended mix of stocks, bonds, and other investments. It takes the massively complex job of understanding individual investment needs out of the hands of the adviser. Where there was once an adviser consulting with clients at length and in depth, and then tailoring a portfolio by applying a heuristic and subjective judgment, there is now an algorithm that quickly produces reliable answers.
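  A minimal sketch of the kind of questionnaire-driven allocator described above, with hypothetical scoring thresholds and canned asset mixes invented for illustration; the point is only that a fixed lookup replaces the adviser’s judgment.

```python
# Hypothetical questionnaire-driven asset allocator: answers are summed into
# a risk score, and the score is mapped to a pre-built portfolio mix.
def recommend_portfolio(horizon_years, risk_tolerance, needs_income):
    # Translate questionnaire answers into a single score
    # (risk_tolerance is on a 1-5 scale, 1 = very cautious, 5 = very aggressive).
    score = 3 if horizon_years >= 20 else 2 if horizon_years >= 10 else 1
    score += risk_tolerance
    score -= 1 if needs_income else 0

    # Map the score to a fixed stock/bond/cash mix.
    if score >= 7:
        return {"stocks": 0.80, "bonds": 0.15, "cash": 0.05}
    if score >= 4:
        return {"stocks": 0.60, "bonds": 0.30, "cash": 0.10}
    return {"stocks": 0.30, "bonds": 0.50, "cash": 0.20}

# The same answers always yield the same mix, with no adviser in the loop.
print(recommend_portfolio(horizon_years=25, risk_tolerance=4, needs_income=False))
```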

  Of course, the demand for proof, the absence of bias, and the pressures of time affect a good deal more than the forms customers have to fill out. They strongly influence the very shape of the corporation itself, and the structures, processes, and norms that guide its daily activities. If the goal of the reliability-oriented business is to ensure that tomorrow consistently and predictably replicates yesterday, then it follows that the business will be organized as a permanent structure with long-term ongoing job assignments. Daily work will consist of a series of permanent, continuous tasks: make stuff, sell it, ship it, follow up with customers, and service the installed base. There are few if any limited-term projects on the organizational chart, and for good reason. In most corporations, “special projects” is a euphemism for the purgatory reserved for terminated executives hunting for a new job.

  In such an environment, the organizational goal evolves toward managing permanent, continuous tasks to the highest possible level of reliability. Think of General Electric during the Jack Welch era, when the company’s flagship product was not an industrial turbine or a refrigerator or a medical imaging device but a quarterly earnings number that reliably met or ever so slightly exceeded earnings guidance. Because of the environment’s demands for reliability, work is only secondarily the business of making stuff and selling it. It is primarily a matter of ensuring that the existing heuristic or algorithm produces a consistent result time and time again. (See “Counterproductive Pressure from the Public Capital Markets.”)

  The reliability bias is deeply embedded in organizational processes related to planning and budgeting, executive skill development, and the use of analytical technology. In all those processes, conventional wisdom says that reliability equals success. In most corporations, for example, the first measure of an operation’s success is whether it reliably meets a predetermined quantitative goal: the budget. Anything new and different that threatens the overriding goal of making budget is rejected out of hand. Constraints such as rising materials costs are equally threatening, as they add complexity, undermining the algorithm that produces the desired consistent result.

  The managerial skills that are built and rewarded are those of running heuristics or algorithms to produce reliable outcomes. Consider the cottage industry that has grown up around Six Sigma. Six Sigma relentlessly simplifies algorithms to the bare minimum, taking reliability to its logical extreme. Its statistical measures plane away from the algorithm any nuance that would sacrifice consistency of result. Many organizations—most famously, General Electric—promote Six Sigma techniques and reward managers who become Six Sigma “Black Belts.” These Black Belts are reliability masters.
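  For readers wondering what the “six sigma” label actually promises, the name itself encodes a reliability target: under the conventional assumption of a 1.5-sigma long-term shift, a process whose specification limit lies six standard deviations from its mean produces roughly 3.4 defects per million opportunities. A brief, illustrative sketch of that arithmetic:

```python
# Defects per million opportunities (DPMO) implied by a given sigma level,
# using the conventional 1.5-sigma long-term shift (one-sided).
from scipy.stats import norm

def dpmo(sigma_level, shift=1.5):
    # Probability mass beyond the shifted specification limit, per million.
    return norm.sf(sigma_level - shift) * 1_000_000

print(round(dpmo(3)))     # ~66,807 DPMO at three sigma
print(round(dpmo(6), 1))  # ~3.4 DPMO at six sigma
```

  The appeal to a reliability-oriented organization is obvious: a sigma level translates directly into a defect rate that can be measured, tracked, and rewarded.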

  In even wider use than Six Sigma is a tool that was virtually unknown to corporate boardrooms just a generation ago: linear regression, which is used to “prove” statistically the relationship between one factor (e.g., store hours) and another (e.g., sales per square foot). Managers prove the value of their ideas by invoking the size of their regression’s R². Proficiency in regression analysis, as well as in large-scale analytical systems such as ERP and CRM, is a prerequisite for senior executives in corporations. When you consider the amount of resources that individuals and businesses invest to develop those analytical skills, compared to the relatively paltry resources invested in the intuitive skills that produce valid answers, it is easy to see why most corporations tilt so strongly in favor of reliability.
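  A minimal sketch of the kind of statistical “proof” described above, using invented store-hours and sales figures; it fits the regression and reports the R² a manager might cite.

```python
# Illustrative linear regression: "proving" that longer store hours drive
# higher sales per square foot by pointing at the R-squared of the fit.
# All figures are invented for the example.
import numpy as np

hours = np.array([60, 65, 70, 72, 75, 80, 84, 90])          # weekly store hours
sales = np.array([310, 322, 340, 348, 355, 370, 383, 401])  # sales per square foot

slope, intercept = np.polyfit(hours, sales, 1)
predicted = slope * hours + intercept
r_squared = 1 - ((sales - predicted) ** 2).sum() / ((sales - sales.mean()) ** 2).sum()

print(f"sales/sq.ft. ≈ {slope:.2f} * hours + {intercept:.2f},  R² = {r_squared:.3f}")
```

  A high R² shows only that the past pattern was consistent; it says nothing about whether the relationship will hold once conditions change, which is precisely the limitation described here.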

  Reinforcing that tilt are organizational norms that govern status and the style of reasoning that the organization considers acceptable. Rewards and high status flow to those managers who analyze past performance to refine heuristics and algorithms, and the highest status and biggest rewards accrue to the executive who reliably runs the most important heuristic or algorithm, importance being measured by revenue and profit. Think of Goldman Sachs’s sales and trading heuristic or McDonald’s U.S. business algorithm. Managers do their best to dodge tricky smaller businesses that face complicated mysteries, which are seen as detours to advancement, if not career dead ends.

  Counterproductive Pressure from the Public Capital Markets

  All too often, companies mismanage the resources freed up by movement along the knowledge funnel. Tragically, the public capital markets encourage this inefficiency, which can be fatal. Reliability-oriented themselves, the markets push companies toward excessive exploitation, though not necessarily by intention.

  The capital markets reward certainty. Nothing is surer to win analysts’ favor than a record of delivering predictable revenue and earnings, and nothing is surer to arouse their ire than a failure to meet earnings forecasts. Even a penny’s shortfall in quarterly earnings per share can trigger negative analyst reports, downgrades, and sell-offs. For example, on September 25, 2008, Research In Motion announced that its second-quarter profit had risen to 86 cents per share from 50 cents per share the previous year. Profits were $496 million for the quarter; revenue was $2.6 billion. The earnings-per-share results were just one cent below the consensus analyst estimate. How did the market respond? The stock dropped by almost 30 percent, destroying some $16.1 billion in value in a single day. 5

  Analysts don’t see the consequences of elevating precision and certainty to the be-all and end-all of business. They fail to recognize that their own demands push businesses to stop investing in innovative, validity-oriented activities. Remember that mysteries have no production process. Not even the most plugged-in analyst can predict with any certainty when a mystery will yield to a heuristic, or a heuristic to an algorithm. Validity can be demonstrated only by the passage of time. Unlike investments in exploiting the current heuristic or algorithm, it does not pay off on a strict quarterly schedule.

  The longer-term effect of the capital markets’ preference for remaining at the same knowledge stage is stagnation. At some point, exploitation activities will run out of steam, and the company will be outflanked by competitors taking more exploratory approaches. Earnings will stop growing or even decline, and the analysts will savage the company for its lack of innovation. As James March points out, “An organization that engages exclusively in exploitation will ordinarily suffer from obsolescence.” 6

  Publicly traded companies have great difficulty resisting the capital markets’ pressure to hone and refine within a single knowledge stage. Companies that balance exploitation with exploration, reliability with validity, and refinement with innovation will find themselves targets of heavy criticism from analysts. These analysts think they are being constructive. They’re not. They’re discouraging the very activity—moving knowledge through the funnel faster than competitors, driving down costs of current activities, and freeing up time and capital to engage in new activities—that creates enduring competitive advantage.

  The public capital markets also discourage innovation by demanding that companies divert the savings generated by advancing across the funnel to shareholders. Of course, shareholders have legitimate claim on corporate cash, whether it takes the form of dividends or stock buybacks. But by demanding that they be served first, they work against their own long-term interests. Like the analysts, they prevent the company from achieving the competitive advantage gained from advancing knowledge faster than the competition.

  The private capital markets have the opposite effect on companies. They like nothing better than a company that relentlessly advances knowledge from one stage to the next, as long as the advance creates value that is captured at the end point of private capital investment, the highly coveted “liquidity event.” Yes, private capital has its share of failures and write-offs. But there is a reason that the private capital markets are growing much more quickly than the public capital markets: private capital embraces knowledge advance, while public capital, knowingly or unknowingly, discourages it.

  In such an organization, personal success is achieved by running existing heuristics and algorithms. Self-interest dictates that managers refrain from cycling back to the first stage of the knowledge funnel. The organization’s own reward systems and processes practically dictate that it exploit knowledge at its current stage in the funnel, particularly, perhaps, if it is at the heuristic stage.

  In corporate settings, high-level heuristics are generally in the hands of highly paid executives or specialists. Out of sheer self-interest, they are reluctant to relinquish their enigmatic and valuable capability. Whether they are brand managers, investment bankers, acquisitions editors, CFOs, research scientists, or star salespeople, they are in a constant tug-of-war with the owners of their company over the spoils of their work. They have the skill—the heuristic inside their heads—and the company has the capital. The company would like maximum compensation for providing the capital. The talent would like maximum compensation for running the heuristic. As long as the talent keeps its heuristic shrouded in priestly secrecy, it can bargain successfully for a bigger share of the value it creates. If the talent were to advance the heuristic to the algorithm stage, the company could hand the specialist’s job to a much less expensive person.

  In many organizations, including professional service organizations such as law firms, consulting firms, investment firms, and most entertainment and media firms, talent is winning this battle. And the price of maintaining an ongoing monopoly on important heuristics is high. These heuristic-running high priests create a big bottleneck in the middle of the knowledge funnel, blocking the movement forward to algorithm. Their desire to collect monopoly rents sharply limits the speed at which the organization can advance knowledge.

  No organization sets out to limit its ability to innovate and create additional value. No board would vote to drive out movement along the knowledge funnel. But to paraphrase Winston Churchill, first we shape our tools and then our tools shape us. 7 The structures, processes, and norms of the contemporary business organization all but condemn it to remain within a single knowledge stage. When a validity-oriented advance comes to an important organizational decision gate, someone in authority inevitably asks reliability-oriented questions: “But can we prove this will work?” or, “How can we be sure of the outcome?” Typically the answers are no, it cannot be proven, and we cannot be sure. So design thinking is suppressed without explicit intent, a victim of organizational bias toward reliability.

  Making Room for Validity

  Both reliability and validity are important for an organization. Without validity, an organization has little chance of moving knowledge across the funnel. Without reliability, an organization will struggle to exploit the rewards of its advances. As with exploration and exploitation, the optimal approach to validity and reliability is not to choose but to seek a balance of both. (See figure 2-1.)

  The precise method for balancing validity and reliability will vary from situation to situation and from organization to organization. It may be that some areas of an organization (accounting, for example) will emphasize reliable measures, while others (R&D) will embrace valid ones. Still other departments, marketing for instance, may seek to design new measures that in themselves strike a balance, incorporating reliable structures around qualitative research methods, for example. But, given that companies have very real and powerful reasons to favor reliability over validity, and that this preference for reliability is often enshrined in the organization’s structures, processes, and norms, the challenge will typically be to incorporate a validity orientation into a reliability culture. To do so, the organization must open up new definitions of proof, embrace some degree of subjectivity as not just inevitable but valuable, and acknowledge that getting the right answer is worth taking a little more time. It must open itself up to a new way of thinking. (See “Reliability Versus Validity: A Note on Prediction.”) The next chapter will introduce the often-overlooked reasoning skill that is crucial to redressing the imbalance toward reliability and to achieving a productive balance of exploration and exploitation. That skill is abductive reasoning, which drives the intuitive spark that leaps across the gap separating the world as it is from the world as it might be.

 
