
Here Comes Everybody


by Clay Shirky


  Though it seems funny for a service business, Meetup actually does best not by trying to do things on behalf of its users, but by providing a platform for them to do things for one another. There are hundreds of thousands of Meetup users, and each is presented with many possible Meetups that they could attend. In a midsize city the potential combinations among people interested in Meetup groups are overwhelming. The only sensible way to solve this problem is to turn it over to the users.

  The most basic service that Meetup provides is to let its users propose groups and to let other users vote with their feet, like the apocryphal university that lets the students wear useful paths through the grass before it lays any walkways. Most proposed Meetup groups fail because they are too generic, or too specific, or too boring. Most of the rest have only moderate success, leaving only a relative handful of very popular groups, like Stay at Home Moms. This distribution—lots of failure, some modest success, and a few extremely popular groups—is the same pattern (the power law distribution) that we have seen elsewhere. Since failure is normal and significant success rare, Meetup must continually readjust to its current context. It does this by deferring to its users’ judgment. The standing question that Meetup poses to its members is “What kind of group is a good idea right now?” Not in the twenty-first century generally, but right now, this month, today. The rise of new groups and the retirement of old ones are not business decisions; they are a by-product of user behavior. Meetup didn’t have to establish or even predict the popularity of the Wiccan or LiveJournal groups; nor did it have to predict the time when those groups would be displaced as the most popular. Users are free to propose and pass judgment on groups, and this freedom gives Meetup a paradoxical aspect. First, it is host to thousands of successful groups, groups of between half a dozen and a couple dozen people who are willing to pay Meetup to help them meet regularly, usually monthly, with other people in their community. Second, most of the proposed Meetup groups never take off, or they meet once and never again.

  These two facts are not incompatible. Meetup is succeeding not in spite of the failed groups, but because of the failed groups. This sounds strange to our ears. Particularly in the world of business, with its Pollyanna-ish attitude toward all public pronouncements, we rarely hear about failure. Meetup’s core offer—an invitation for a group of people to get together at a particular place and time—fails with remarkable frequency, as user-proposed groups often don’t materialize. Yet Meetup, the company, is doing fine, because the successful groups meet regularly, gain more members, and often spawn new groups in new locations. Meetup is a giant information-processing tool, a kind of market where the groups are the products and where the market expresses its judgment not in cash but in expenditure of energy. Failure is free, high-quality research, offering direct evidence of what works and what doesn’t. Groups that people want to join are sorted from groups that people don’t want to join, every day. By dispensing with the right to direct what its users try to create, Meetup sheds the costs and distorting effects of managing each individual effort. Trial and error, in a system like Meetup, has both a lower cost and a higher value than in traditional institutions, where failure often comes with some employee’s name attached. From a conventional business perspective, Meetup has no quality control, but from another perspective Meetup is all quality control. All that’s required to take advantage of this sort of market are passionate users and an appetite for repeated public failure.

  Meetup shows that with low enough barriers to participation, people are not just willing but eager to join together to try things, even if most of those things end up not working. Meetup is not unusual here. Most pictures posted to Flickr get very few viewers. Most weblogs are abandoned within a year. Most weblog posts get very few readers. On Yahoo Groups, an enormous collection of mailing lists on topics from macramé to classic TV shows to geopolitics, about half the proposed mailing lists fail to get enough members to be viable. And so on. The power law distribution of many failures and a few remarkable successes is general. Like many of the effects of social tools, this pattern of experimentation appeared first not in services offered to the general public but among software programmers.
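
  The shape being described here is easy to make concrete. The short Python sketch below is purely illustrative: every constant in it (the number of proposed groups, the power-law exponent, the size of the biggest group, the viability cutoff) is invented for the example rather than drawn from Meetup’s or Flickr’s actual data.

    # A minimal sketch of a power law distribution of group popularity.
    # All constants are hypothetical, chosen only to make the shape visible.

    N_PROPOSED = 10_000   # hypothetical number of proposed groups
    ALPHA = 1.2           # hypothetical power-law exponent
    C = 1_000             # hypothetical size of the single most popular group
    VIABLE = 6            # "half a dozen" members, the smallest viable group above

    # The k-th most popular group ends up with roughly C / k**ALPHA members.
    members = [int(C / k**ALPHA) for k in range(1, N_PROPOSED + 1)]

    viable = sum(m >= VIABLE for m in members)
    empty = sum(m == 0 for m in members)

    print(f"proposed groups:         {N_PROPOSED}")
    print(f"viable (>= {VIABLE} members):  {viable} ({100 * viable / N_PROPOSED:.1f}%)")
    print(f"attracted nobody at all: {empty} ({100 * empty / N_PROPOSED:.1f}%)")
    print(f"five most popular:       {members[:5]}")

  Run it and the pattern of the preceding paragraphs falls out of a single formula: a handful of hits, a thin band of modest successes, and a long tail of groups that nobody joined.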

  The Global Talent Pool

  An interesting effect of digital archiving is that much casual conversation is now captured and stored for posterity, so it is possible to look back and find simple messages whose importance becomes obvious only with the passage of time. In the world of software programmers, one of the most important messages ever sent had exactly this casual feel, but it kicked off a revolution. In 1991 a young Finnish programmer named Linus Torvalds posted a note to a discussion group on the topic of operating systems, the basic software that runs computers. In his note he announced his intention to work on a simple and freely licensed system:

  I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) . . . I’d like to know what features most people would want. Any suggestions are welcome, but I won’t promise I’ll implement them :-)

  The operating system Torvalds proposed that day went on to become Linux, which now runs something like 40 percent of the world’s servers (large-scale computers). The existence of Linux has almost single-handedly kept Microsoft from dominating the server market the way it dominates the PC market. Torvalds’s brief note contains hints of Linux’s future success, hints that can be read with the benefit of hindsight. He announced in the first sentence that his new project was to be free. (In a later message he specifically said he intended to use a special software license, the GNU General Public License, or GPL, to ensure that it stayed free.) The guarantee of freedoms contained in the GPL was critical for encouraging communal involvement; it provided a promise to anyone who wanted to help that their work could not later be taken away. It also ensured that, if Torvalds lost interest in the project, others could pick up where he left off. (He hasn’t lost interest, as it turns out, but no one knew what would happen in 1991, nor what will happen in the future.)

  Another essential component of Torvalds’s original message was that he disavowed world-changing goals. He did not say, “I intend to write software that will prevent Microsoft from monopolizing server operating systems.” Instead he made a plausible request—“Help me get this little project started.” Linux got to be world-changingly good not by promising to be great, or by bringing paid developers together under the direction of some master plan, but by getting incrementally better, through voluntary contributions, one version at a time.

  Finally, Torvalds opened the door, in his first public message, to user participation: “I’d like to know what features most people would want. Any suggestions are welcome, but I won’t promise I’ll implement them :-).” This kind of openness is the key to any project relying on peer production. The original message got only a few responses. (The population of the internet was only around a million total when Torvalds posted it, less than one-tenth of one percent of its size today.) But an early response, from someone at a university in Austria, indicated some of what was to come.

  I am very interested in this OS. I have already thought of writing my own OS, but decided I wouldn’t have the time to write everything from scratch. But I guess I could find the time to help raising a baby OS :-)

  The number of people who are willing to start something is smaller, much smaller, than the number of people who are willing to contribute once someone else starts something. This pattern is the same as in the creation of Wikipedia articles, where a simple seven-word entry on asphalt can, through repeated improvement, become a pair of detailed and informative articles. Similarly, enough people have volunteered to help improve Linux that it has gone from a hobby project to an essential piece of digital infrastructure and has also helped propel the idea of collaboratively created (or “open source”) software into the world.

  Open source software has been one of the great successes of the digital age. The phrase refers to source code, the set of computer instructions written
by programmers that then gets turned into software. Because software exists as source code first, anyone distributing software has to decide whether to distribute the source code as well, in order to allow users to read and modify it. The alternative, of course, is to distribute only the software itself, without the source code, thus keeping the ability to read and modify the code with the original creators.
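
  The distinction is easy to demonstrate. The Python sketch below is an analogy only, using Python bytecode to stand in for a compiled binary; the file names and the printed message are invented for the illustration (the dispute in the story that follows involved a printer driver, not Python).

    # Demonstrate "distributing software without the source": compile a
    # trivial program to bytecode, delete the source, and show that the
    # program still runs but can no longer be usefully read or modified.
    import pathlib
    import py_compile
    import subprocess
    import sys

    source = pathlib.Path("hello.py")
    binary = pathlib.Path("hello.pyc")

    # The source code: human-readable, and therefore human-modifiable.
    source.write_text('print("your document has finished printing")\n')

    # Ship only the compiled artifact; the recipient never sees hello.py.
    py_compile.compile(str(source), cfile=str(binary))
    source.unlink()

    # The software still works...
    subprocess.run([sys.executable, str(binary)], check=True)

    # ...but the artifact is opaque bytecode, not readable instructions.
    print(binary.read_bytes()[:16])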

  Prior to the 1980s, software was something that generally came free with a computer, and much of it was distributed with the source code. As software became a business in its own right, however, the economic logic shifted, and companies began distributing only the software. One of the first people to recognize this shift was Richard Stallman. In 1980 Stallman was working in an MIT lab that had access to Xerox’s first commercial laser printer, the 9700. The lab wanted to modify the printer to send a message to users when their document had finished printing. Xerox, however, had not sent the source code for the 9700, so no one at MIT could make the improvement. Recognizing a broader trend in the industry, Stallman started advocating for free software (“free as in speech,” as he puts it). In 1983 he announced the GNU project, and two years later he founded the Free Software Foundation (FSF) to support it, with a twofold mission. First, he wanted to produce high-quality free software that was compatible with an operating system called Unix. (The project’s name, GNU, is a playful acronym for “GNU’s Not Unix.”) The second part of the FSF mission was to create a legal framework for ensuring that software stayed free. (This effort led to the GNU General Public License, or GPL, which Torvalds was to adopt almost a decade later.)

  The year 1983 was a bad time to be arguing for this kind of freedom, as the big computing news was the advent of the personal computer, which was distributed under the “no source code included” model. In the first decade of its existence, FSF seemed to be fighting a losing battle. GPL-licensed software made up an insignificant fraction of the total software in the world, and all of it was used by small, technically adept user communities rather than by the rapidly growing population of home and business users. By the late 1980s it looked like the free software movement was going to be limited to a tiny niche.

  That didn’t happen, to put it mildly, because the GPL proved useful for holding together much looser groups of collaborators than had ever worked together before, groups like the global tribe now working on Linux. Almost a decade passed between the launch of the GNU project and Torvalds’s original message. Why did Stallman’s vision not spread earlier? And why, after a decade of marginal adoption, did it become a global phenomenon in the 1990s? In that time not much had changed about either the software or the arguments in favor of freedom. What did change was that programmers had been given a global medium to communicate in. Linux is Exhibit A. When Torvalds announced the effort to build a tiny operating system, he received immediate responses from Austria, Iceland, the United States, Finland, and the U.K., a global collection of potential contributors assembled in twenty-four hours. Within months a simple version of the operating system was up and running, and by then conversations about Linux (as it came to be called) included people in Brazil, Canada, Australia, Germany, and the Netherlands. This had simply been less possible in the 1980s; while there were people online from all those places, they weren’t numerous. More is different, and the increased density of people using the internet made the early 1990s a much more fertile time for free software than any previous era.

  As Eric Raymond put it in “The Cathedral and the Bazaar,” the essay that introduced open source to the world:

  Linux was the first project to make a conscious and successful effort to use the entire world as its talent pool. I don’t think it’s a coincidence that the gestation period of Linux coincided with the birth of the World Wide Web, and that Linux left its infancy during the same period in 1993-1994 that saw . . . the explosion of mainstream interest in the Internet. Linus was the first person who learned how to play by the new rules that pervasive Internet made possible.

  What happened between the founding of the FSF and the creation of Linux, in other words, was a precursor to the things that happened between the two Catholic abuse scandals in Boston, or the stranded planes in 1999 and 2007. Some threshold of transaction cost for group coordination was crossed, and on the far side, a new way of working went from inconceivable to ridiculously easy. All that remains when costs fall is for someone to recognize what has recently become possible. And it was Torvalds who recognized it.

  Though the FSF pioneered many of the methods and tools adopted for the creation of Linux, the working methods of Linux were radically different from those of GNU. Stallman is one of the most brilliant programmers ever to have lived, and much of GNU was written by him, or with the help of a few others. Torvalds, by contrast, was crazily promiscuous in soliciting input, though quite judicious in which suggestions he would heed, as he noted in his very first message: “I won’t promise I’ll implement them.” This willingness to listen to a wide group of programmers, coupled with a brutally judgmental meritocracy as to which proposals were worth including, was a radical break with the FSF working method, a break occasioned by the changed transaction costs of gathering the like-minded without a traditional organizational structure. It wasn’t just the philosophical commitment to freedom but the scale of the collaboration that made Linux work as software and as a beacon for other open source projects.

  Lowering the Cost of Failure

  The Linux project, the most visible open source project in history, has turned the work of a distributed group of programmers, contributing their efforts for free, into world-class products. Over the years software produced in this manner has forced significant strategy changes on Microsoft and on other high-tech firms like IBM, Sun, Hewlett-Packard, and Oracle, all of whom have had to grapple not just with Linux but with other open source programs like Web servers and word processors that are freely available and, more important, freely improvable. But it would be a mistake to assume that because Linux is an open source project, all open source projects are like Linux. In fact, when we look closely at the open source ecosystem, the picture that emerges is characterized more by failure than by success. The largest collection of open source projects in the world is on SourceForge.net, which provides free hosting for software projects. SourceForge boasts more than a hundred thousand open source projects; the most popular of them have been downloaded millions of times cumulatively, and several of them are currently getting more than ten thousand downloads a day. This is the kind of popular attention that the press has focused on when covering open source.

  Just beneath these top-performing projects, however, the picture changes. SourceForge ranks hosted projects by order of activity. The projects in the ninety-fifth percentile of activity don’t get ten thousand downloads a day; in fact, most haven’t gotten even a thousand downloads, ever. These projects are more active than all but 5 percent of what’s hosted on SourceForge, and yet they are downloaded less than one-tenth of 1 percent as often as the most popular ones.

  Projects below the seventy-fifth percentile of activity have no recorded downloads at all. None. Almost three-quarters of proposed open source projects on SourceForge have never gotten to the degree of completeness and utility necessary to garner even a single user. The most popular projects, with millions of users, are in fact so anomalous as to be flukes. (This is, yet again, a rough power law distribution.)
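
  These percentile claims can be reproduced, roughly, under an assumed model. The sketch below is illustrative only: it assumes a single Zipf-like law with invented parameters, not SourceForge’s real numbers, and a one-exponent model only loosely fits real data. But it is enough to show how a tiny elite and a zero-download majority coexist in one distribution.

    # Illustrative only: assume project downloads follow TOP / rank**ALPHA
    # with invented constants, then read off the percentiles by activity.
    N = 100_000          # "more than a hundred thousand" hosted projects
    TOP = 5_000_000      # hypothetical lifetime downloads of the #1 project
    ALPHA = 1.6          # hypothetical power-law exponent

    downloads = [int(TOP / k**ALPHA) for k in range(1, N + 1)]

    def downloads_at_percentile(p):
        """Downloads of a project more active than p percent of projects."""
        rank = max(int(N * (100 - p) / 100), 1)  # high percentile = low rank
        return downloads[rank - 1]

    p95 = downloads_at_percentile(95)
    p75 = downloads_at_percentile(75)
    print(f"95th percentile: {p95} downloads ({100 * p95 / TOP:.4f}% of #1)")
    print(f"75th percentile: {p75} downloads")

  With these made-up constants, the ninety-fifth percentile project has a handful of downloads, a vanishingly small fraction of the leader’s total, and the seventy-fifth percentile project has none at all, which is the shape the real rankings show.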

  Has the press, then, gotten it wrong about open source? Has it mischaracterized the movement, based on the successes like Linux, when the normal condition of an open source effort is failure? The answer is yes, obviously and measurably yes. The bulk of open source projects fail, and most of the remaining successes are quite modest. But does that mean the threat from open systems generally is overrated and the commercial software industry can breathe easy? Here the answer is no. Open source is a profound threat, not because the open source ecosystem is outsucceeding commercial efforts but because it is outfailing them. Because the open source ecosystem, and by extension open social systems generally, rely on peer production, the work on
those systems can be considerably more experimental, at considerably less cost, than any firm can afford. Why? The most important reasons are that open systems lower the cost of failure, they do not create biases in favor of predictable but substandard outcomes, and they make it simpler to integrate the contributions of people who contribute only a single idea.

  The overall effect of failure is its likelihood times its cost. Most organizations attempt to reduce the effect of failure by reducing its likelihood. Imagine that you are spearheading an effort for a firm that wants to become more innovative. You are given a list of promising but speculative ideas, and you have to choose some subset of them for investment. You thus have to guess the likelihood of success or failure for each project. The obvious problem is that no one knows for certain what will succeed and what will fail. A less obvious but potentially more significant problem is that the possible value of various projects is unconnected to anything their designers say about them. (Remember that Linus specifically stated that his operating system would be a hobby.) In these circumstances, you will inevitably green-light failures and pass on potential successes. Worse still, more people will remember you saying yes to a failure than saying no to a radical but promising idea. Given this asymmetry, you will be pushed to make safe choices, thus systematically undermining the rationale for trying to be more innovative in the first place.
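
  The asymmetry can be put into numbers. The Python sketch below is a toy model in which every value is invented for the illustration: a firm that must filter ideas in advance, at a meaningful cost per attempt and with no real insight into which ideas are good, against an open system that simply tries everything because failure is nearly free.

    # Toy model of "effect of failure = likelihood x cost". All numbers are
    # hypothetical; the firm's picks are random because, as the text argues,
    # stated plans predict little about which projects will succeed.
    import random

    random.seed(0)

    ideas = [random.random() < 0.05 for _ in range(1_000)]  # ~5% would succeed
    PAYOFF = 100.0     # hypothetical value of one success
    COST_FIRM = 5.0    # hypothetical cost per attempt inside a firm
    COST_OPEN = 0.05   # hypothetical cost per attempt in an open system
    BUDGET = 50        # the firm can afford only this many bets

    firm_picks = random.sample(range(len(ideas)), BUDGET)
    firm_value = sum(PAYOFF for i in firm_picks if ideas[i]) - BUDGET * COST_FIRM

    # The open system tries every idea and keeps whatever works.
    open_value = sum(PAYOFF for ok in ideas if ok) - len(ideas) * COST_OPEN

    print(f"firm (filter, then try): {firm_value:8.1f}")
    print(f"open (try, then filter): {open_value:8.1f}")

  With these made-up numbers the firm hovers around breaking even while the open system captures nearly every success, which is exactly the reversal the next paragraph describes: the cost of deciding what to try exceeds the cost of trying.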

  The open source movement makes neither kind of mistake, because it doesn’t have employees, it doesn’t make investments, it doesn’t even make decisions. It is not an organization, it is an ecosystem, and one that is remarkably tolerant of failure. Open source doesn’t reduce the likelihood of failure, it reduces the cost of failure; it essentially gets failure for free. This reversal, where the cost of deciding what to try is higher than the cost of actually trying it, is true of open systems generally. As with the mass amateurization of media, open source relies on the “publish-then-filter” pattern. In traditional organizations, trying anything is expensive, even if just in staff time to discuss the idea, so someone must make some attempt to filter the successes from the failures in advance. In open systems, the cost of trying something is so low that handicapping the likelihood of success is often an unnecessary distraction. Even in a firm committed to experimentation, considerable work goes into reducing the likelihood of failure. This doesn’t mean that open source communities don’t discuss—on the contrary, they have more discussions than managed production does, because no one is in a position to compel work on a particular project. Open systems, by reducing the cost of failure, enable their participants to fail like crazy, building on the successes as they go.

 
