Here Comes Everybody

by Clay Shirky


  Cheap failure, valuable as it is on its own, is also a key part of a more complex advantage: the exploration of multiple possibilities. Imagine a vast, unmapped desert with a handful of oases randomly scattered throughout. Traveling through such a place, you would be likely to stick with the first oasis you found, simply because the penalty for leaving it and not finding another oasis would be quite high. You’d like to have several people explore the landscape simultaneously and communicate their findings to one another, but you’d need lots of resources and would have to be able to tolerate vastly different success rates between groups. This metaphorical environment is sometimes called a “fitness landscape”—the idea is that for any problem or goal, there is a vast area of possibilities to explore but few valuable spots within that environment to discover. When a company or indeed any organization finds a strategy that works, the drive to adopt it and stick with it is strong. Even if there is a better strategy out there, finding it can be prohibitively expensive.
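
  (A side note for the technically inclined: the logic of parallel exploration is easy to simulate. The following Python sketch is mine, not Shirky's, and every number in it is illustrative. A lone explorer settles for the first acceptable oasis it finds, while a thousand cheap, failure-tolerant groups explore independently and only the best find counts.)

    import random

    def landscape_value(spot):
        """Toy fitness landscape: almost every spot is ordinary,
        but about 1 in 1,000 is a jackpot oasis."""
        rng = random.Random(spot)      # every location has a fixed value
        return 100.0 if rng.random() < 0.001 else rng.random()

    def lone_explorer(trials=100, good_enough=0.9):
        """Settle for the first spot that clears a modest bar, since
        leaving a known oasis to hunt for a better one is too risky."""
        for _ in range(trials):
            value = landscape_value(random.randrange(10**9))
            if value >= good_enough:
                return value
        return 0.0

    def parallel_explorers(groups=1_000, trials=100):
        """Many independent groups: most find nothing special,
        but only the single best find matters."""
        return max(lone_explorer(trials) for _ in range(groups))

    random.seed(42)
    print("one cautious institution:", lone_explorer())       # usually ~0.9
    print("a thousand cheap groups: ", parallel_explorers())  # almost always 100.0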

  For work that relies on newly collapsed transaction costs, however, providing basic resources to the groups exploring the fitness landscape costs little, and the failure of even a sizable number of groups also carries little penalty. Don Tapscott and Anthony Williams tell a story of an almost literal fitness landscape in Wikinomics. The mining firm Goldcorp made its proprietary data about a mining site in Ontario public, then challenged outsiders to tell them where to dig next, offering prize money. The participants in the contest suggested more than a hundred possible sites to explore, many of which had not been mined by Goldcorp and many of which yielded new gold. Harnessing the participation of many outsiders was a better way to explore the fitness landscape than relying on internal experts.

  Meetup reaps the benefit of this kind of exploration by enlisting its users in finding useful new offerings. By not committing to helping any individual group succeed, and by not directing users in their exploration of possible topics, Meetup has been consistently able to find those groups without needing to predict their existence in advance and without having to bear the cost of experimentation. By creating an enabling service that lets groups set out on their own, Meetup is able to explore a greater section of the fitness landscape, at less cost, than any institution could do by hiring and directing its employees. As with the weblog world operating as an entire ecosystem, services that tolerate failure as a normal case create a kind of value that is simply unreachable by institutions that try to ensure the success of most of their efforts.

  The cost of trying things is where Coasean theory about transaction costs and power law distributions of participation intersect. Institutions exist because they lower transaction costs, relative to what a market could support. However, because every institution requires some formal structure to remain coherent, and because this formal structure itself requires resources, there are a considerable number of potentially valuable actions that no institution can afford to undertake. For these actions, the resources invested in trying them will often cost more than the outcome is worth. This in turn means that there are many actions that might pay off but won’t be tried, even by innovative firms, because their eventual success is not predictable enough.
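
  (To see the arithmetic, compare the same long-shot idea inside and outside an institution. The numbers below are mine and purely illustrative: the idea's expected value is unchanged, but the overhead an institution must pay to attempt it flips the decision.)

    # Illustrative numbers only: an idea that pays V with probability p.
    p, V = 0.01, 5_000            # a 1% chance of a $5,000 payoff
    expected_value = p * V        # $50 on average

    institutional_overhead = 500  # meetings, evaluation, management time
    amateur_overhead = 1          # a free evening and a web connection

    print(f"firm:    {expected_value - institutional_overhead:+.0f}  (rationally skipped)")
    print(f"amateur: {expected_value - amateur_overhead:+.0f}  (worth a try)")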

  It is this gap that distributed exploration takes advantage of: in a world where anyone can try anything, even the risky stuff can be tried eventually. If a large enough population of users is trying things, then the happy accidents have a much higher chance of being discovered.
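
  (The odds behind happy accidents are worth making explicit: if each independent attempt succeeds with probability p, the chance that at least one of n attempts succeeds is 1 - (1 - p)^n, which approaches certainty as n grows. A quick calculation, with numbers that are mine rather than Shirky's:)

    def p_at_least_one(p, n):
        """Chance that at least one of n independent trials,
        each succeeding with probability p, pays off."""
        return 1 - (1 - p) ** n

    # A long-shot idea with a 0.1% chance of working out:
    for n in (10, 1_000, 100_000):
        print(f"{n:>7,} attempts -> {p_at_least_one(0.001, n):.1%} chance of a hit")
    # prints roughly 1.0%, 63.2%, and 100.0%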

  This presents a conundrum for business. Coasean economics being what they are, a firm cannot try everything. Management overhead is real, and the costs of failures can’t simply be laid at the feet of the employees; the firm has to absorb them somehow. As a result, peer production must necessarily go on outside of any firm’s ability to either direct or capture all of its value.

  This happens in part because the respective costs of filtering versus publishing have reversed. In the traditional world, the cost of publishing anything creates not just an incentive but a requirement to filter the good from the bad in advance. In the open source world, trying something is often cheaper than making a formal decision about whether to try it.

  In business, the investment cost of producing anything can create a bias toward accepting the substandard. You have experienced this effect if you have ever sat through a movie you didn’t particularly like in order to “get your money’s worth.” The money is already gone, and whether you continue watching Rocky XVII or not won’t change that fact. By the time you are sitting in the theater, the only thing you can decide to spend or not spend is your time. Curiously, in that moment many people choose to keep watching the movie they’ve already decided they don’t like, partly as a way to avoid admitting that they’ve wasted their money.

  Because of transaction costs, organizations cannot afford to hire employees who make only one important contribution—they need to hire people who have good ideas day after day. Yet as we know, most people are not so prolific, and in any given field many people have only one or a few good ideas, just as most contributors documenting the Mermaid Parade or Hurricane Katrina contribute only one photo each (the power law distribution again). The institutional response to this imbalance is to ignore the people with only one good contribution; the dictates of 80/20 optimization force a firm to maximize its output by ignoring casual participants. As a result, many good ideas (or good photos or good music) are simply inaccessible in an institutional framework, because most of the time most institutions have to choose “steady performer” over “brilliant but erratic.” It’s not that organizations wouldn’t like to take advantage of the idea of the occasional participant—it’s that they can’t. Transaction costs make it too expensive.
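
  (How much does ignoring the casual participants actually cost? Here is a sketch assuming a Zipf-style power law of contributions; the distribution and the numbers are mine, not the book's. Even if a firm hires the top one percent of contributors, nearly half of the total output remains out of reach.)

    # Zipf-style contributions: the k-th most active contributor makes
    # roughly 1/k as many contributions as the most active one.
    # All numbers are illustrative.
    N = 10_000
    contributions = [1 / k for k in range(1, N + 1)]
    total = sum(contributions)

    top = N // 100                          # the top 1% a firm might hire
    head = sum(contributions[:top]) / total
    print(f"top 1% of contributors produce {head:.0%} of the output")
    print(f"the other 99% produce {1 - head:.0%}")  # roughly 53% vs. 47%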

  In 2005 Nick McGrath, a Microsoft executive in the U.K., had this to say about Linux:

  There is a myth in the market that there are hundreds of thousands of people writing code for the Linux kernel. This is not the case; the number is hundreds, not thousands. If you look at the number of people who contribute to the kernel tree [the core part of Linux], you see that a significant amount of the work is just done by a handful.

  If you listen carefully, you can hear McGrath outlining a power law distribution—only hundreds, not thousands, with the significant work being done by a handful of people.

  It’s easy to see, from McGrath’s point of view, why the open source model is the wrong way to design an operating system: when you hire programmers, they drain your resources through everything from salary to health care to free Cokes in the break room. In that kind of environment, a programmer who has only one good idea, ever, is a distinctly bad hire. But employees don’t drain Linux’s resources, because Linux doesn’t have employees; it just has contributors. Microsoft simply cannot afford to take any good idea wherever it finds it; the transaction costs that come from being Microsoft see to that. The seemingly obvious advantage of owning the source code carries with it all the overhead of managing that ownership. When Microsoft’s competitors were all commercial firms that faced the same problems, this overhead was just the cost of doing business, and bigger firms could rely on economies of scale to compete on overhead costs. The development of Linux, on the other hand, is not based on the idea of corporate ownership, which vastly reduces that overhead. Linux can take a good idea from anyone, and frequently does. This does more than give Microsoft a new competitor; it changes Microsoft’s competitive environment, as the disadvantages of the institutional dilemma are no longer uniformly borne by all entrants.

  In 2005 Microsoft was desperate to suggest that having an anointed group of professionals, paid to write software, was the only sensible model of development, largely because it had no real alternative. Microsoft operates in a world defined by the 80/20 rule; the cost of pursuing every possible idea is simply too high, so Microsoft must optimize the resources it has. The open source development model, on the other hand, turns the 80/20 rule on its head, asking, “Why forgo the last twenty percent?” If transaction costs are a barrier to taking advantage of the individual with one good idea (and in a commercial context they are), then one possible response is to lower the transaction costs by radically rearranging the relations between the contributors.

  The open source movement introduced this way of working, but the pattern of aggregating individual contributions into something more valuable has become general. One example of the expansion into other domains is Groklaw, a site for discussing legal issues related to the digital realm. When the SCO Group, a software publisher, threatened a lawsuit against IBM, claiming that IBM’s offering Linux to its customers violated SCO’s intellectual property, SCO clearly expected that IBM wouldn’t want to face either the cost of fighting the suit or the chance of losing and would either pay for a license or simply buy SCO outright. Instead, IBM chose to fight and set about the complex process of uncovering and aggregating what was known about SCO’s claims and legal arguments. What SCO hadn’t counted on was that Groklaw, a site run by a paralegal named Pamela Jones, would become a kind of third party in the fight. When IBM called SCO’s bluff and the threatened suit went forward, Groklaw would post and then explain all the various legal documents being filed. This in turn made Groklaw required reading for everyone interested in the case. The knowledgeable audience that Jones assembled began to post comments related to the case, including, most damningly, comments from former SCO engineers that explicitly contradicted the version of events that SCO was alleging in the trial. Groklaw functioned as a kind of distributed and free friend-of-the-court brief, uncovering material that would have been too difficult and too expensive for IBM to get any other way. The normal course for such a lawsuit would have been that SCO and IBM fought the case in court, while the open source community looked on. What Groklaw did was assemble that community in a way that actually changed the landscape of the case.

  Cooperation as Infrastructure

  Emblematic of the dilemmas created by group life, the phrase “free-for-all” does not literally mean free for all but rather chaos. Too much freedom, with too little management, has generally been a recipe for a free-for-all. Now, however, it isn’t. With the right kinds of collaborative tools and the right sort of bargain with users, it is possible to get a large group working on a project that is free for all. McGrath should have been terrified that a handful of developers, working alongside a thousand casual contributors, could create an operating system at all, much less one successful enough to compete with Microsoft’s commercial offerings. What he misunderstood (or at least publicly misconstrued) was that the imbalance between a few highly active developers and a thousand casual contributors was possible only because Linux had lowered the threshold for finding and integrating good ideas (it reduced the cost of exploring the fitness landscape) in a way that Microsoft simply could not. (Microsoft’s Encarta failed to capture user contributions—compare that to Wikipedia.) This problem is not peculiar to Microsoft; as Bill Joy, one of the founders of Sun Microsystems, once put it, “No matter who you are, most of the smartest people work for someone else.” What the open source model does is to allow those people to work together. This pattern is spreading to other domains; one of the most critical is public health.

  Severe acute respiratory syndrome (SARS), a frequently fatal flu-like disease, first broke out in China in 2002. SARS was, in a way, the first “post-network” virus; enough was known about both the virus and travel networks to allow airports to prevent travelers from taking the disease with them from continent to continent. These kinds of interdictions kept the disease localized, but they were mere holding actions. What was really needed was an understanding of the disease itself; the race was on to find the genetic sequence of SARS, as a precursor to a vaccine or a cure.

  The Chinese had the best chance of sequencing the virus. The threat of SARS was most significant in Asia, and especially in China, which had most of the world’s confirmed cases; China is also home to brilliant biologists with significant expertise in distributed computing. Despite these resources and incentives, however, the solution didn’t come from China.

  On April 12, 2003, the Genome Sciences Centre (GSC), a small Canadian lab specializing in the genetics of pathogens, published the genetic sequence of SARS. Along the way, they had participated in not just one open network but several. Almost the entire computational installation of GSC is open source: bioinformatics tools with names like BLAST, Phrap, Phred, and Consed, all running on Linux. GSC checked their work against GenBank, a public database of genetic sequences. They published their findings on their own site (run, naturally, using open source tools) and published the finished sequence to GenBank, for everyone to see. The story is shot through with involvement in various participatory networks.

  But if China had the superior intellectual throw-weight and biological research infrastructure, and a bigger incentive than any other nation in the world to master the threat, what kept it from winning the race to sequence the virus? One clue came in the form of an interview with Yang Huanming of the Beijing Genomics Institute, a month after GSC sequenced SARS. Yang said that the barriers in China were not limits on talent or resources but obstacles to cooperation: the government simply put too many restrictions on sharing either samples of the virus or information about it. With considerably fewer resources, GSC outperformed its Chinese counterparts because it had plugged into so many different cooperative and collaborative networks.

  “Do the People Who Like It Take Care of Each Other?”

  In the mid-1990s, at the dawn of the commercial use of the Web, I was in charge of technology for a small Web design firm in Manhattan called Site Specific—there were a dozen of us, working out of the founder’s living room. Like the proverbial dog that caught the bus, we landed AT&T as a client. After the ink dried on the contract, AT&T started bringing its engineers around to work with us on programming for the new sites; when we sat down to talk with them, the culture clash was immediate. Site Specific was mostly twentysomethings (at thirty-one, I was the oldest person in the company), and the AT&T guys (they were all guys) were grizzled veterans who’d been at AT&T longer than most of us had been out of college.

  The first real argument we had was around programming languages (a common source of disagreement among techies). AT&T used an industrial-strength language called C++. We used a much simpler language called Perl. The AT&T guys were aghast, and we argued the merits of the two languages, but for them the real sticking point was support. C++ had been invented at AT&T, and they had people paid to support software developers should they run into difficulties. Where, they asked, did we get our commercial support for Perl? We told them we didn’t have any, which brought on yet more shocked reactions: We didn’t have any support? “We didn’t say that,” we replied. “We just don’t have any commercial support. We get our support from the Perl community.”

  It was as if we’d told them, “We get our Thursdays from a banana”; putting “support” in the same sentence as “community” simply didn’t make any sense. Community was touchy-feely stuff; support was something you paid for. We explained that there was a discussion group for Perl programmers, called comp.lang.perl.misc, where the Perl community hung out, asking and answering questions. Commercial support was often slow, we pointed out, while there were people on the Perl discussion group all day and night answering questions. We explained that when newcomers had been around long enough to know what they were doing, they in turn started to answer questions, so although the system wasn’t commercial, it was self-sustaining. The AT&T guys didn’t believe us. We even showed them how it worked; we thought up a reasonably hard question and posted it to comp.lang.perl.misc. Someone answered it before the meeting with AT&T was over. But not even that could convince them. They didn’t care if it worked in fact, because they were already sure it wouldn’t work in theory. Support didn’t come from evanescent things like an unspoken bargain among a self-assembled community. Support came from solid things, like a contract with a company.

  That fight took place a dozen years ago. What’s happening today? With the explosion of social tools, the Perl community now has many places to gather, so comp.lang.perl.misc is no longer the epicenter of the community, but it is still a place where people are asking and answering questions, and it’s doing fine. AT&T, on the other hand, is not doing as well. Despite round after round of massive layoffs and alternative strategies, the company shrank to the point of irrelevance, selling itself off to another phone company in 2005 for $16 billion, only a fifth of its value in 1995, the year it hired us. Perl is a viable programming language today because millions of people woke up today loving Perl and, more important, loving one another in the context of Perl. Members of the community listen to each other’s problems and offer answers as a way of taking care of one another. This is not pure altruism; the person who teaches learns twice, the person who answers questions gets an improved reputation in the community, and the overall pattern of distributed and delayed payback—if I take care of you now, someone will take care of me later—is a very practical way of creating the social capital that makes Perl useful in the first place. Between 1995 and 2005 Perl did better as a viable structure than AT&T did, because communal interest turned out to be a better predictor of longevity than commercial structure.
