“Minsky had assumed that the green blobs were pieces of food, placed throughout the turtles’ world. In fact, the green blobs were created by the turtles themselves. But Minsky didn’t see it that way. Instead of seeing creatures organizing themselves, he saw the creatures organized around some preexisting pieces of food. He assumed that the pattern of aggregation was determined by the placement of food. And he stuck with that interpretation—at least for a while—even after I told him that the program involved self-organization.”
Minsky had fallen for the myth of the ant queen: the assumption that collective behavior implied some kind of centralized authority—in this case, that the food was dictating the behavior of the slime mold cells. Minsky assumed that you could predict where the clusters would form by looking at where the food was placed when the simulation began. But there wasn’t any food. Nor was there anything dictating that clusters should form in specific locations. The slime mold cells were self-organizing, albeit within parameters that Resnick had initially defined.
“Minsky has thought more—and more deeply—about self-organization and decentralized systems than almost anyone else,” Resnick writes. “When I explained the rules underlying the slime mold program to him, he understood immediately what was happening. But his initial assumption was revealing. The fact that even Marvin Minsky had this reaction is an indication of the powerful attraction of centralized explanations.”
Of course, on the most fundamental level, StarLogo is itself a centralized system: it obeys rules laid down by a single authority—the programmer. But the route from Resnick’s code to those slime mold clusters is indirect. You don’t program the slime mold cells to form clusters; you program them to follow patterns in the trails left behind by their neighbors. If you have enough cells, and if the trails last long enough, you’ll get clusters, but they’re not something you can control directly. And predicting the number of clusters—or their longevity—is almost impossible without extensive trial-and-error experimentation with the system. Kevin Kelly called his groundbreaking book on decentralized behavior Out of Control, but the phrase doesn’t quite do justice to emergent systems—or at least the ones that we’ve deliberately set out to create on the computer screen. Systems like StarLogo are not utter anarchies: they obey rules that we define in advance, but those rules only govern the micromotives. The macrobehavior is another matter. You don’t control that directly. All you can do is set up the conditions that you think will make that behavior possible. Then you press play and see what happens.
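The indirection is easiest to see in code. What follows is a minimal Python sketch of the trail-following idea, not Resnick's StarLogo program; the grid size, deposit amount, evaporation rate, and follow threshold are all invented parameters. Notice that nothing in it mentions clusters: each turtle only sniffs its neighboring cells, follows the strongest trail, and leaves a trail of its own.

```python
import random

SIZE, TURTLES, TICKS = 40, 200, 500          # invented parameters
EVAPORATION, DEPOSIT, THRESHOLD = 0.9, 10.0, 1.0

trail = [[0.0] * SIZE for _ in range(SIZE)]
turtles = [(random.randrange(SIZE), random.randrange(SIZE))
           for _ in range(TURTLES)]

def neighbors(x, y):
    # The eight surrounding cells, wrapping at the edges of the grid.
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

for _ in range(TICKS):
    for n, (x, y) in enumerate(turtles):
        options = neighbors(x, y)
        best = max(options, key=lambda c: trail[c[0]][c[1]])
        # The whole rule: follow a strong neighboring trail if there is
        # one; otherwise wander at random. No turtle is told to cluster.
        if trail[best[0]][best[1]] > THRESHOLD:
            turtles[n] = best
        else:
            turtles[n] = random.choice(options)
        nx, ny = turtles[n]
        trail[nx][ny] += DEPOSIT             # leave a chemical trail behind
    for row in trail:                        # trails slowly evaporate
        for j in range(SIZE):
            row[j] *= EVAPORATION

print(f"{TURTLES} turtles now occupy {len(set(turtles))} distinct cells")
```

In runs of a sketch like this, the turtles tend to pile into far fewer cells than they started in, but how many clusters form, and where, varies from run to run, which is exactly the unpredictability described above.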
That kind of oblique control is a funny thing to encounter in the world of software, but it is becoming increasingly common. Programming used to be thought of as a domain of pure control: you told the computer what to do, and the computer had no choice but to obey your orders. If the computer failed to do your bidding, the fault inevitably lay in a bug in your code, not in any autonomy on the machine’s part. The best programmers were the ones who had the most control over the system, the ones who could compel the machines to do the work with the least amount of code. It’s no accident that Norbert Wiener derived the term cybernetics from the Greek word for “steersman”: the art of software has from the beginning been about control systems and how best to drive them.
But that control paradigm is slowly giving way to a more oblique form of programming: software that you “grow” instead of engineer, software that learns to solve problems autonomously, the way Oliver Selfridge envisioned with his Pandemonium model. The new paradigm borrows heavily from the playbook of natural selection, breeding new programs out of a varied gene pool. The first few decades of software were essentially creationist in philosophy—an almighty power wills the program into being. But the next generation is profoundly Darwinian.
*
Consider the program for number sorting devised several years ago by supercomputing legend Danny Hillis, a program that undermines all of our conventional assumptions about how software should be produced. For years, number sorting has served as one of the benchmark tests for ingenious programmers, like chess-playing applications. Throw a hundred random numbers at a program and see how many steps it takes to sort them into the correct order. Using traditional programming techniques, the record for number sorting stood at sixty steps when Hillis decided to try his hand. But Hillis didn’t just sit down to write a number-sorting application. What Hillis created was a recipe for learning, a program for creating another program. In other words, he didn’t teach the computer how to sort numbers. He taught the computer to figure out how to sort numbers on its own.
Hillis pulled off this sleight of hand by connecting the formidable powers of natural selection to a massively parallel supercomputer—the Connection Machine that he himself had helped design. Instead of authoring a number-sorting program himself—writing out lines of code and debugging—Hillis instructed the computer to generate thousands of miniprograms, each composed of random combinations of instructions, creating a kind of digital gene pool. Each program was confronted with a disorderly sequence of numbers, and each tried its hand at putting them in the correct order. The first batch of programs was, as you might imagine, utterly inept at number sorting. (In fact, the overwhelming majority of the programs were good for nothing at all.) But some programs were better than others, and because Hillis had established a quantifiable goal for the experiment—numbers arranged in the correct order—the computer could select the few programs that were in the ballpark. Those programs became the basis for the next iteration, except that Hillis would also mutate their code slightly and crossbreed them with the other promising programs. And then the whole process would repeat itself: the most successful programs of the new generation would be chosen, then subjected to the same transformations. Mix, mutate, evaluate, repeat.
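That loop of generate, test, select, and breed is easy to render in miniature. The sketch below is emphatically not Hillis's Connection Machine code: it is a toy genetic algorithm in Python with invented parameters, in which each "miniprogram" is a fixed-length list of compare-and-swap instructions, and fitness is how close a program's output comes to sorted order. (Hillis's real experiment also pressured programs toward fewer steps; this sketch breeds only for correctness.)

```python
import random

LIST_LEN, GENOME_LEN = 8, 40        # invented sizes, far smaller than Hillis's
POP, GENERATIONS, TESTS = 100, 100, 10

def random_genome():
    # A "miniprogram": a fixed-length list of compare-and-swap instructions.
    return [(random.randrange(LIST_LEN), random.randrange(LIST_LEN))
            for _ in range(GENOME_LEN)]

def run(genome, data):
    # Execute the miniprogram: each gene compares two positions and
    # swaps them if they are out of order.
    data = data[:]
    for i, j in genome:
        a, b = min(i, j), max(i, j)
        if data[a] > data[b]:
            data[a], data[b] = data[b], data[a]
    return data

def fitness(genome):
    # Score: across several random inputs, how many adjacent pairs
    # end up in the correct order?
    score = 0
    for _ in range(TESTS):
        result = run(genome, [random.random() for _ in range(LIST_LEN)])
        score += sum(x <= y for x, y in zip(result, result[1:]))
    return score

def breed(a, b):
    # Crossbreed two parents at a random cut point, then mutate one gene.
    cut = random.randrange(GENOME_LEN)
    child = a[:cut] + b[cut:]
    child[random.randrange(GENOME_LEN)] = (random.randrange(LIST_LEN),
                                           random.randrange(LIST_LEN))
    return child

population = [random_genome() for _ in range(POP)]
for _ in range(GENERATIONS):
    # Mix, mutate, evaluate, repeat: keep the best quarter, breed the rest.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 4]
    population = parents + [breed(random.choice(parents), random.choice(parents))
                            for _ in range(POP - len(parents))]

best = max(population, key=fitness)
print(f"best program scores {fitness(best)} of {TESTS * (LIST_LEN - 1)}")
```

The expected shape of the experiment is the one Hillis describes: the earliest generations score barely better than chance, and the survivors improve from there, even though no one ever writes a sorting algorithm.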
After only a few minutes—and thousands of cycles—this evolutionary process resulted in a powerful number-sorting program, capable of arranging a string of random numbers in seventy-five steps. Not a record breaker, by any means, but impressive nonetheless. The problem, though, was that the digital gene pool was maxing out at the seventy-five-step mark. Each time Hillis ran the sequence, the computer would quickly evolve a powerful and efficient number sorter, but it would run out of steam at around seventy-five steps. After enough experimentation, Hillis recognized that his system had encountered a hurdle often discussed by evolutionary theorists: the software had stumbled across a local maximum in the fitness landscape.
Imagine the space of all possible number-sorting programs spread out like a physical landscape, with more successful programs residing at higher elevations, and less successful programs lurking in the valleys. Evolutionary software is a way of blindly probing that space, looking for gradients that lead to higher elevations. Think of an early stage in Hillis’s cycle: one evolved routine sorts a few steps faster than its “parent” and so it survives into the next round. That survival is the equivalent of climbing up one notch on the fitness landscape. If its “descendant” sorts even more efficiently, its “genes” are passed on to the next generation, and it climbs another notch higher.
The problem with this approach is that there are false peaks in the fitness landscape. There are countless ways to program a computer to sort numbers with tolerable efficiency, but only a few ways to sort numbers if you’re intent on setting a world record. And those different programs vary dramatically in the way they tackle the problem. Think of those different approaches as peaks on the fitness landscape: there are thousands of small ridges, but only a few isolated Everests. If a program evolves using one approach, its descendants may never find their way to another approach—because Hillis’s system only rewarded generations that improved on the work done by their ancestors. Once the software climbs all the way to the top of a ridge, there’s no reward in descending and looking for another, higher peak, because a less successful program—one that drops down a notch on the fitness landscape—would instantly be eliminated from the gene pool. Hillis’s software was settling in at the seventy-five-step ridges because the penalty for searching out the higher points was too severe.
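The trap is easy to reproduce in code. Below is a minimal, hypothetical sketch of pure hill climbing on a deliberately rugged one-dimensional landscape; the fitness function is invented for illustration. Because every downhill step is rejected, the climber almost always settles on whichever ridge it happens to start near, not on the highest peak.

```python
import math
import random

def fitness(x):
    # An invented, deliberately rugged landscape: many small ridges,
    # with one tallest peak of height 1.4 at x = pi/2.
    return math.sin(x) + 0.4 * math.sin(5 * x)

x = random.uniform(0, 10)                  # drop the climber somewhere at random
for _ in range(10_000):
    candidate = x + random.gauss(0, 0.05)  # a slight mutation of the current spot
    if fitness(candidate) > fitness(x):    # survival only for improvements:
        x = candidate                      # downhill steps are always rejected

# Most runs settle on whichever ridge was nearest the starting point,
# not on the global peak of the landscape.
print(f"settled at x = {x:.2f} with fitness {fitness(x):.3f} (best possible 1.4)")
```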
Hillis’s stroke of genius was to force his miniprograms out of the ridges by introducing predators into the mix. Just as in real-world ecosystems, predators effectively raised the bar for evolved programs that became lazy because of their success. Before the introduction of predators, a miniprogram that had reached a seventy-five-step ridge knew that its offspring had a chance of surviving if it stayed at that local maximum, but faced almost certain death if it descended to search out higher ground. But the predators changed all that. They hunted down ridge dwellers and forced them to improvise: if a miniprogram settled into the seventy-five-step range, it could be destroyed by predator programs. Once the predators appeared on the scene, it became more productive to descend to lower altitudes to search out a new peak than to stay put at a local maximum.
Hillis structured the predator-prey relationship as an arms race: the higher the sorting programs climbed, the more challenging the predators became. If the system stumbled across a seventy-step peak, then predators were introduced that hunted down seventy-step programs. Anytime the software climbers decided to rest on their laurels, a predator appeared to scatter them off to find higher elevations.
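In miniature, the arms race looks something like the toy sketch below. Again, this is an invented Python illustration with made-up parameters, not Hillis's code: here the "predators" are input lists that evolve to stay unsorted, the sorters are scored only against those predators, and both populations breed every generation. A sorter that camps on a ridge soon finds its particular weaknesses bred into the test set.

```python
import random

LIST_LEN, GENOME_LEN = 6, 30            # invented sizes
POP, GENERATIONS = 40, 100

def run(net, data):
    # Same miniprogram idea as before: a list of compare-and-swap steps.
    data = data[:]
    for i, j in net:
        a, b = min(i, j), max(i, j)
        if data[a] > data[b]:
            data[a], data[b] = data[b], data[a]
    return data

def sortedness(data):
    # Adjacent pairs in the right order (maximum is LIST_LEN - 1).
    return sum(x <= y for x, y in zip(data, data[1:]))

def mutate_sorter(net):
    child = net[:]
    child[random.randrange(GENOME_LEN)] = (random.randrange(LIST_LEN),
                                           random.randrange(LIST_LEN))
    return child

def mutate_predator(lst):
    # Predators are input lists; mutation swaps two of their positions.
    child = lst[:]
    i, j = random.randrange(LIST_LEN), random.randrange(LIST_LEN)
    child[i], child[j] = child[j], child[i]
    return child

sorters = [[(random.randrange(LIST_LEN), random.randrange(LIST_LEN))
            for _ in range(GENOME_LEN)] for _ in range(POP)]
predators = [random.sample(range(LIST_LEN), LIST_LEN) for _ in range(POP)]

def sorter_score(net):
    # A sorter is graded only on the predators' inputs: lists that have
    # been bred to embarrass the current champions.
    return sum(sortedness(run(net, p)) for p in predators)

def predator_score(p):
    # A predator is graded by how unsorted the best sorters leave it.
    return -sum(sortedness(run(net, p)) for net in sorters[:10])

half = POP // 2
for _ in range(GENERATIONS):
    sorters.sort(key=sorter_score, reverse=True)
    predators.sort(key=predator_score, reverse=True)
    sorters = sorters[:half] + [mutate_sorter(random.choice(sorters[:half]))
                                for _ in range(half)]
    predators = predators[:half] + [mutate_predator(random.choice(predators[:half]))
                                    for _ in range(half)]

print("best sorter scores", sorter_score(sorters[0]),
      "of", POP * (LIST_LEN - 1), "against its own predators")
```

The design choice mirrors the one just described: the reward for sitting still disappears, because the fitness test itself is evolving.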
After only thirty minutes of running this new system, the computer had evolved a batch of programs that could sort numbers in sixty-two steps, just two shy of the all-time record. Hillis’s system functioned, in biological terms, more like an environment than an organism: it created a space where intelligent programs could grow and adapt, exceeding the capacities of all but the most brilliant flesh-and-blood programmers. “One of the interesting things about the sorting programs that evolved in my experiment is that I do not understand how they work,” Hillis writes in his book The Pattern on the Stone. “I have carefully examined their instruction sequences, but I do not understand them: I have no simpler explanation of how the programs work than the instruction sequences themselves. It may be that the programs are not understandable.”
Proponents of emergent software have made some ambitious claims for their field, including scenarios where a kind of digital Darwinism leads to a simulated intelligence, capable of open-ended learning and complex interaction with the outside world. (Most advocates don’t think that such an intelligence will necessarily resemble human smarts, but that’s another matter, one that we’ll examine in the conclusion.) In the short term, though, emergent software promises to transform the way that we think about creating code: in the next decade, we may well see a shift from top-down, designed programs to bottom-up, evolved versions, like Hillis’s number-sorting applet—“less like engineering a machine,” Hillis says, “than baking a cake, or growing a garden.”
That transformation may be revolutionary for the programmers, but if it does its job, it won’t necessarily make much of a difference for the end users. We might notice our spreadsheets recalculating a little faster and our grammar checker finally working, but we’ll be dealing with the end results of emergent software, not the process itself. (The organisms, in Darwinian terms, and not the environment that nurtured them.) But will ordinary computer users get a chance to experiment with emergent software firsthand, to grapple with its more oblique control systems? Will growing gardens of code ever become a mainstream activity?
In fact, we can get our hands dirty already. And we can do it just by playing a game.
*
It’s probably fair to say that digital media has been wrestling with “control issues” from its very origins. The question of control, after all, lies at the heart of the interactive revolution, since making something interactive entails a shift in control, from the technology—or the puppeteers behind the technology—to the user. Most recurring issues in interactive design hover above the same underlying question: Who’s driving here, human or machine? Programmer or user? These may seem like esoteric questions, but they have implications that extend far beyond design-theory seminars or cybercafé philosophizing. I suspect that we’re only now beginning to understand how complicated these issues are, as we acclimate to the strange indirection of emergent software.
In a way, we’ve been getting our sea legs for this new environment for the past few years now. Some of the most interesting interactive art and games of the late nineties explicitly challenged our sense of control or made us work to establish it. Some of these designs belonged to the world of avant-garde or academic experimentation, while others had more mainstream appeal. But in all these designs, the feeling of wrestling with or exploring the possibilities of the software—the process of mastering the system—was transformed from a kind of prelude into the core experience of the design. It went from a bug to a feature.
There are different ways to go about challenging our sense of control. Some programs, such as the ingenious Tap, Type, Write—created by MIT’s John Maeda—make it immediately clear that the user is driving. The screen starts off with an array of letters; hitting a specific key triggers a sudden shift in the letterforms presented on-screen. The overall effect is like a fireworks show sponsored by Alphabet Soup. Press a key, and the screen explodes, ripples, reorders itself. It’s hypnotic, but also a little mystifying. What algorithm governs this interaction? Something happens on-screen when you type, but it takes a while to figure out what rules of transformation are at work here. You know you’re doing something, you just don’t know what it is.
The OSS code, created by the European avant-punk group Jodi.org, messes with our sense of control on a more profound—some would say annoying—level. A mix of anarchic screen-test patterns and eclectic viral programming, the Jodi software is best described as the digital equivalent of an aneurysm. Download the software and the desktop overflows with meaningless digits; launch one of the applications, and your screen descends instantly into an unstable mix of static and structure. Move the mouse in one direction, or double click, and there’s a fleeting sense of something changing. Did the flicker rate shift? Did those interlaced patterns reverse themselves? At hard-to-predict moments, the whole picture show shuts down—invariably after a few frantic keystrokes and command clicks—and you’re left wondering, Did I do that?
No doubt many users are put off by the dislocations of Tap, Type, Write and OSS, and many walk away from the programs feeling as though they never got them to work quite right, precisely because their sense of control remained so elusive. I find these programs strangely empowering; they challenge the mind in the same way distortion challenged the ear thirty-five years ago when the Beatles and the Velvet Underground first began overloading their amps. We find ourselves reaching around the noise—the lack of structure—for some sort of clarity, only to realize that it’s the reaching that makes the noise redemptive. Video games remind us that messing with our control expectations can be fun, even addictive, as long as the audience has recognized that the confusion is part of the show. For a generation raised on MTV’s degraded images, that recognition comes easily. The Nintendo generation, in other words, has been well prepared for the mediated control of emergent software.
Take, as an example, one of the most successful titles from the Nintendo 64 platform, Shigeru Miyamoto’s Zelda: Ocarina of Time. Zelda embodies the uneven development of late-nineties interactive entertainment. The plot belongs squarely to the archaic world of fairy tales—a young boy armed with magic spells sets off to rescue the princess. As a control system, though, Zelda is an incredibly complex structure, with hundreds of interrelated goals and puzzles dispersed throughout the game’s massive virtual world. Moving your character around is simple enough, but figuring out what you’re supposed to do with him takes hours of exploration and trial and error. By traditional usability standards, Zelda is a complete mess: you need a hundred-page guidebook just to establish what the rules are. But if you see that opacity as part of the art—like John Cale’s distorted viola—then the whole experience changes: you’re exploring the world of the game and the rules of the game at the same time.
Think about the ten-year-olds who willingly immerse themselves in Zelda’s world. For them, the struggle for mastery over the system doesn’t feel like a struggle. They’ve been decoding the landscape on the screen—guessing at causal relations between actions and results, building working hypotheses about the system’s underlying rules—since before they learned how to read. The conventional wisdom about these kids is that they’re more nimble at puzzle solving and more manually dexterous than the TV generation, and while there’s certainly some truth to that, I think we lose something important in stressing how talented this generation is with their joysticks. I think they have developed another skill, one that almost looks like patience: they are more tolerant of being out of control, more tolerant of that exploratory phase where the rules don’t all make sense, and where few goals have been clearly defined. In other words, they are uniquely equipped to embrace the more oblique control system of emergent software. The hard work of tomorrow’s interactive design will be exploring the tolerance—that suspension of control—in ways that enlighten us, in ways that move beyond the insulting residue of princesses and magic spells.
*
With these new types of games, a new type of game designer has arisen as well. The first generation of video games may have indirectly influenced a generation of artists, and a handful were adopted as genuine objets d’art, albeit in a distinctly campy fashion. (Tabletop Ms. Pac-Man games started to appear at downtown Manhattan clubs in the early nineties, around the time the Museum of the Moving Image created its permanent game collection.) But artists themselves rarely ventured directly into the game-design industry. Games were for kids, after all. No self-respecting artist would immerse himself in that world with a straight face.
But all this has changed in recent years, and a new kind of hybrid has appeared—a fusion of artist, programmer, and complexity theorist—creating interactive projects that challenge the mind and the thumb at the same time. And while Tap, Type, Write and Zelda were not, strictly speaking, emergent systems, the new generation of game designers and artists have begun explicitly describing their work using the language of self-organization. This too brings to mind the historical trajectory of the rock music genre. For the first fifteen or twenty years, the charts are dominated by lowest-common-denominator titles, rarely venturing far from the established conventions or addressing issues that would be beyond the reach of a thirteen-year-old. And then a few mainstream acts begin to push at the edges—the Beatles or the Stones in the music world, Miyamoto and Peter Molyneux in the gaming community—and the expectations about what constitutes a pop song or a video game start to change. And that transformation catches the attention of the avant-garde—the Velvet Underground, say, or the emergent-game designers—who suddenly start thinking of pop music or video games as a legitimate channel for self-expression. Instead of writing beat poetry or staging art happenings, they pick up a guitar—or a joystick.