Emergence


by Steven Johnson


  Running through one hundred generations took about two hours; Jefferson and Taylor rigged the system to give them realtime updates on the most talented ants of each generation. Like a stock ticker, the Connection Machine would spit out an updated number at the end of each generation: if the best trail-follower of one generation managed to hit fifteen squares in a hundred cycles, the Connection Machine would report that 15 was the current record and then move on to the next generation. After a few false starts because of bugs, Jefferson and Taylor got the Tracker system to work—and the results exceeded even their most optimistic expectations.

  “To our wonderment and utter joy,” Jefferson recalls, “it succeeded the first time. We were sitting there watching these numbers come in: one generation would produce twenty-five, then twenty-five, and then it would be twenty-seven, and then thirty. Eventually we saw a perfect score, after only about a hundred generations. It was mind-blowing.” The software had evolved an entire population of expert trail-followers, despite the fact that Jefferson and Taylor had endowed their first generation of ants with no skills whatsoever. Rather than engineer a solution to the trail-following problem, the two UCLA professors had evolved a solution; they had created a random pool of possible programs, then built a feedback mechanism that allowed more successful programs to emerge. In fact, the evolved programs were so successful that they’d developed solutions custom-tailored to their environments. When Jefferson and Taylor “dissected” one of the final champion ants to see what trail-following strategies he had developed, they discovered that the software had evolved a preference for making right-hand turns, in response to the three initial right turns that Jefferson had built into the John Muir Trail. It was like watching an organism living in water evolving gills: even in the crude, abstract grid of Tracker, the virtual ants evolved a strategy for survival that was uniquely adapted to their environment.
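The selection loop Jefferson and Taylor describe, score every ant on the trail, keep the fittest, and mutate them to seed the next generation, can be sketched as a toy genetic algorithm. Everything below (the wraparound 10×10 grid, the four-state controller encoding, the 10 percent mutation rate) is an illustrative simplification of that loop, not the actual Tracker system:

```python
import random

# A toy trail on a 10x10 wraparound grid: a stand-in for the John Muir Trail.
TRAIL = ({(0, c) for c in range(6)} | {(r, 5) for r in range(1, 6)}
         | {(5, c) for c in range(5, 10)})   # 15 trail squares in all

N_STATES = 4     # states in each ant's finite-state "program"
STEPS = 100      # cycles each ant is allowed to run
POP = 100        # ants per generation

def random_genome():
    # For each (state, trail-ahead?) pair: an action and a next state.
    # Actions: 0 = step forward, 1 = turn left, 2 = turn right.
    return [(random.randrange(3), random.randrange(N_STATES))
            for _ in range(N_STATES * 2)]

def fitness(genome):
    # Run one ant for STEPS cycles; score = distinct trail squares entered.
    r, c, heading, state = 0, 0, 0, 0          # headings: 0=E, 1=S, 2=W, 3=N
    moves = [(0, 1), (1, 0), (0, -1), (-1, 0)]
    visited = set()
    for _ in range(STEPS):
        ahead = ((r + moves[heading][0]) % 10, (c + moves[heading][1]) % 10)
        action, state = genome[state * 2 + (1 if ahead in TRAIL else 0)]
        if action == 0:
            r, c = ahead
            if (r, c) in TRAIL:
                visited.add((r, c))
        elif action == 1:
            heading = (heading - 1) % 4
        else:
            heading = (heading + 1) % 4
    return len(visited)

def mutate(genome, rate=0.1):
    return [(random.randrange(3), random.randrange(N_STATES))
            if random.random() < rate else gene for gene in genome]

def evolve(generations=50):
    # Keep the top fifth of each generation; fill the rest with mutants.
    pop = [random_genome() for _ in range(POP)]
    best = 0
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        best = max(best, fitness(scored[0]))
        survivors = scored[:POP // 5]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(POP - len(survivors))]
    return best

print(evolve())   # best trail-coverage score found (out of 15 squares)
```

Even this crude version shows the ticker-tape effect the two researchers describe: the best score per generation climbs as fitter controllers crowd out their random ancestors.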

  By any measure, Tracker was a genuine breakthrough. Finally the tools of modern computing had advanced to the point where you could simulate emergent intelligence, watch it unfold on the screen in real time, as Turing and Selfridge and Shannon had dreamed of doing years before. And it was only fitting that Jefferson and Taylor had chosen to simulate precisely the organism most celebrated for its emergent behavior: the ant. They began, of course, with the most elemental form of ant intelligence—sniffing for pheromone trails—but the possibilities suggested by the success of Tracker were endless. The tools of emergent software had been harnessed to model and understand the evolution of emergent intelligence in real-world organisms. In fact, watching those virtual ants evolve on the computer screen, learning and adapting to their environments on their own, you couldn’t help wondering if the division between the real and the virtual was becoming increasingly hazy.

  *

  In Mitch Resnick’s computer simulation of slime mold behavior, there are two key variables, two elements that you can alter in your interaction with the simulation. The first is the number of slime mold cells in the system; the second is the physical and temporal length of the pheromone trail left behind by each cell as it crawls across the screen. (You can have long trails that take minutes to evaporate, or short ones that disappear within seconds.) Because slime mold cells collectively decide to aggregate based on their encounters with pheromone trails, altering these two variables can have a massive impact on the simulated behavior of the system. Keep the trails short and the cells few, and the slime molds will steadfastly refuse to come together. The screen will look like a busy galaxy of shooting stars, with no larger shapes emerging. But turn up the duration of the trails, and the number of agents, and at a certain clearly defined point, a cluster of cells will suddenly form. The system has entered a phase transition, moving from one discrete state to another, based on the “organized complexity” of the slime mold cells. This is not gradual, but sudden, as though a switch had been flipped. But there are no switch-flippers, no pacemakers—just a swarm of isolated cells colliding with one another, and leaving behind their pheromone footprints.
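Resnick’s two knobs, population size and trail durability, can be captured in a minimal sketch. The grid size, the move-toward-strongest-pheromone rule, and the cluster measure below are assumptions of this toy model, not details of the StarLogo original:

```python
import random

SIZE = 40   # toroidal grid, an arbitrary choice

def simulate(n_cells, evaporation, steps=500, seed=1):
    """Random-walking cells deposit pheromone; each tick a cell moves to
    whichever adjacent square holds the most pheromone (ties broken at
    random, so an empty field means a pure random walk). Returns the
    largest number of cells sharing one square at the end: a crude
    stand-in for Resnick's aggregation clusters."""
    rng = random.Random(seed)
    pher = [[0.0] * SIZE for _ in range(SIZE)]
    cells = [(rng.randrange(SIZE), rng.randrange(SIZE)) for _ in range(n_cells)]
    for _ in range(steps):
        moved = []
        for (x, y) in cells:
            nbrs = [((x + dx) % SIZE, (y + dy) % SIZE)
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0)]
            best = max(pher[nx][ny] for nx, ny in nbrs)
            moved.append(rng.choice([p for p in nbrs
                                     if pher[p[0]][p[1]] == best]))
        cells = moved
        for (x, y) in cells:
            pher[x][y] += 1.0                  # leave a footprint
        for x in range(SIZE):
            for y in range(SIZE):
                pher[x][y] *= (1.0 - evaporation)   # long trails = slow decay
    counts = {}
    for c in cells:
        counts[c] = counts.get(c, 0) + 1
    return max(counts.values())
```

With few cells and fast evaporation, `simulate(20, 0.5)`, the trails vanish before anyone can follow them and the cells wander alone; with many cells and durable trails, `simulate(200, 0.05)`, the footprints reinforce one another and larger clusters tend to form, a rough analogue of the phase transition described above.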

  Histories of intellectual development—the origin and spread of new ideas—usually come in two types of packages: either the “great man” theory, where a single genius has a eureka moment in the lab or the library and the world is immediately transformed; or the “paradigm shift” theory, where the occupants of the halls of science awake to find an entirely new floor has been built on top of them, and within a few years, everyone is working out of the new offices. Both theories are inadequate: the great-man story ignores the distributed, communal effort that goes into any important intellectual advance, and the paradigm-shift model has a hard time explaining how the new floor actually gets built. I suspect Mitch Resnick’s slime mold simulation may be a better metaphor for the way idea revolutions come about: think of those slime mold cells as investigators in the field; think of those trails as a kind of institutional memory. With only a few minds exploring a given problem, the cells remain disconnected, meandering across the screen as isolated units, each pursuing its own desultory course. With pheromone trails that evaporate quickly, the cells leave no trace of their progress—like an essay published in a journal that sits unread on a library shelf for years. But plug more minds into the system and give their work a longer, more durable trail—by publishing their ideas in bestselling books, or founding research centers to explore those ideas—and before long the system arrives at a phase transition: isolated hunches and private obsessions coalesce into a new way of looking at the world, shared by thousands of individuals.

  This is exactly what happened with the bottom-up mind-set over the past three decades. After years of disconnected investigations, the varied labors of Turing, Shannon, Wiener, Selfridge, Weaver, Jacobs, Holland, and Prigogine had started a revolution in the way we thought about the world and its systems. By the time Jefferson and Taylor started tinkering with their virtual ants in the mideighties, the trails of intellectual inquiry had grown long and interconnected enough to create a higher-level order. (Call it the emergence of emergence.) A field of research that had been characterized by a handful of early-stage investigations blossomed overnight into a densely populated and diverse landscape, transforming dozens of existing disciplines and inventing a handful of new ones. In 1969, Marvin Minsky and Seymour Papert published “Perceptrons,” which built on Selfridge’s Pandemonium device for distributed pattern recognition, leading the way for Minsky’s bottom-up Society of Mind theory developed over the following decade. In 1972, a Rockefeller University professor named Gerald Edelman won the Nobel prize for his work decoding the language of antibody molecules, leading the way for an understanding of the immune system as a self-learning pattern-recognition device. Prigogine’s Nobel followed five years later. At the end of the decade, Douglas Hofstadter published Gödel, Escher, Bach, linking artificial intelligence, pattern recognition, ant colonies, and “The Goldberg Variations.” Despite its arcane subject matter and convoluted rhetorical structure, the book became a bestseller and won the Pulitzer prize for nonfiction.

  By the mideighties, the revolution was in full swing. The Santa Fe Institute was founded in 1984; James Gleick’s book Chaos arrived three years later to worldwide adulation, quickly followed by two popular-science books each called Complexity. Artificial-life studies flourished, partially thanks to the success of software programs like Tracker. In the humanities, critical theorists such as Manuel De Landa started dabbling with the conceptual tools of self-organization, abandoning the then-trendy paradigm of post-structuralism or cultural studies. The phase transition was complete; Warren Weaver’s call for the study of organized complexity had been vigorously answered. His “middle region” had at last been occupied by the scientific vanguard.

  *

  We are now living through the third phase of that revolution. You can date it back to the day in the early nineties when Will Wright released a program called SimCity, which would go on to become one of the bestselling video-game franchises of all time. SimCity would also inaugurate a new phase in the developing story of self-organization: emergent behavior was no longer purely an object of study, something to interpret and model in the lab. It was also something you could build, something you could interact with, and something you could sell. While SimCity came out of the developing web of the bottom-up worldview, it suggested a whole new opening: SimCity was a work of culture, not science. It aimed to entertain, not explain.

  Ten years after Wright’s release of SimCity, the world now abounds with these man-made systems: online stores use them to recognize our cultural tastes; artists use them to create a new kind of adaptive cultural form; Web sites use them to regulate their online communities; marketers use them to detect demographic patterns in the general public. The video-game industry itself has exploded in size, surpassing Hollywood in terms of raw sales numbers—with many of the bestselling titles relying on the powers of digital self-organization. And with that popular success has come a subtle, but significant, trickle-down effect: we are starting to think using the conceptual tools of bottom-up systems. Just like the clock maker metaphors of the Enlightenment, or the dialectical logic of the nineteenth century, the emergent worldview belongs to this moment in time, shaping our thought habits and coloring our perception of the world. As our everyday life becomes increasingly populated by artificial emergence, we will find ourselves relying more and more on the logic of these systems—both in corporate America, where “bottom-up intelligence” has started to replace “quality management” as the mantra of the day, and in the radical, antiglobalization protest movements, which explicitly model their pacemakerless, distributed organizations after ant colonies and slime molds. Former vice president Al Gore is himself a devotee of complexity theory and can talk for hours about what the bottom-up paradigm could mean for reinventing government. Almost two centuries after Engels wrestled with the haunting of Manchester’s city streets, and fifty years after Turing puzzled over the mysteries of a flower’s bloom, the circle is finally complete. Our minds may be wired to look for pacemakers, but we are steadily learning how to think from the bottom up.

  PART TWO

  StarLogo slime mold simulation (Courtesy of Mitch Resnick)

  Look to the ant, thou sluggard;

  Consider her ways and be wise:

  Which having no chief, overseer, or ruler,

  Provides her meat in the summer,

  And gathers her food in the harvest.

  —PROVERBS 6:6–8

  2

  Street Level

  Say what you will about global warming or the Mona Lisa, Apollo 9 or the canals of Venice—human beings may seem at first glance to be the planet’s most successful species, but there’s a strong case to be made for the ants. Measured by sheer numbers, ants—and other social insects such as termites—dominate the planet in a way that makes human populations look like an evolutionary afterthought. Ants and termites make up 30 percent of the Amazonian rain forest biomass. With nearly ten thousand known species, ants rival modern humans in their global reach: the only large landmasses free of native ants are Antarctica, Iceland, Greenland, and Polynesia. And while they have yet to invent aerosol spray, ant species have a massive environmental impact, moving immense amounts of soil and distributing nutrients even in the most hostile environments. They lack our advanced forebrains, of course, but human intelligence is only one measure of evolutionary success.

  All of which raises the question: if evolution didn’t see fit to endow ants with the computational powers of the human brain, how did they become such a dominant presence on the planet? While there’s no single key to the success of the social insects, the collective intelligence of the colony system certainly played an essential role. Call it swarm logic: ten thousand ants—each limited to a meager vocabulary of pheromones and minimal cognitive skills—collectively engage in nuanced and improvisational problem-solving. A harvester ant colony in the field will not only ascertain the shortest route to a food source, it will also prioritize food sources, based on their distance and ease of access. In response to changing external conditions, worker ants switch from nest-building to foraging to raising ant pupae. Their knack for engineering and social coordination can be downright spooky—particularly because none of the individual ants is actually “in charge” of the overall operation. It’s this connection between micro and macro organization that got Deborah Gordon into ants in the first place. “I was interested in systems where individuals who are unable to assess the global situation still work together in a coordinated way,” she says now. “And they manage to do it using only local information.”

  Local turns out to be the key term in understanding the power of swarm logic. We see emergent behavior in systems like ant colonies when the individual agents in the system pay attention to their immediate neighbors rather than wait for orders from above. They think locally and act locally, but their collective action produces global behavior. Take the relationship between foraging and colony size. Harvester ant colonies constantly adjust the number of ants actively foraging for food, based on a number of variables: overall colony size (and thus mouths needed to be fed); amount of food stored in the nest; amount of food available in the surrounding area; even the presence of other colonies in the near vicinity. No individual ant can assess any of these variables on her own. (I use her deliberately—all worker ants are females.) The perceptual world of an ant, in other words, is limited to the street level. There are no bird’s-eye views of the colony, no ways to perceive the overall system—and indeed, no cognitive apparatus that could make sense of such a view. “Seeing the whole” is both a perceptual and conceptual impossibility for any member of the ant species.

  Indeed, in the ant world, it’s probably misguided to talk about “views” at all. While some kinds of ants have surprisingly well-developed optical equipment (the South American formicine ant Gigantiops destructor has massive eyes), the great bulk of ant information-processing relies on the chemical compounds of pheromones, also known as semiochemicals for the way they create a functional sign system among the ants. Ants secrete a finite number of chemicals from their rectal and sternal glands—and occasionally regurgitate recently digested food—as a means of communicating with other ants. Those chemical signals turn out to be the key to understanding swarm logic. “The sum of the current evidence,” E. O. Wilson and Bert Hölldobler write in their epic work, The Ants, “indicates that pheromones play the central role in the organization of colonies.”

  Compared to human languages, ant communication can seem crude, typically possessing only ten or twenty signs. Communication between workers in colonies of the fire ant Solenopsis invicta—studied intensely by Wilson in the early sixties—relies on a vocabulary of ten signals, nine of which are based on pheromones. (The one exception is tactile communication directly between ants.) Among other things, these semiochemicals code for task-recognition (“I’m on foraging duty”); trail attraction (“There’s food over here”); alarm behavior (“Run away!”); and necrophoric behavior (“Let’s get rid of these dead comrades”).

  While the vocabulary is simple, and complex syntactical structures impossible, the language of the ants is nevertheless characterized by some intriguing twists that add to its expressive capability. Many semiochemicals operate in a relatively simple binary fashion—signaling, for instance, whether another ant is a friend or a foe. But ants can also detect gradients in pheromones, revealing which way the scent is growing stronger, not unlike the olfactory skills of bloodhounds. Gradient detection is essential for forming those food delivery lines that play such a prominent role in the popular imagination of ant life: the seemingly endless stream of ants, each comically overburdened with seeds, marching steadily across sidewalk or soil. (As we will see in Chapter 5, Mitch Resnick’s program StarLogo can also model the way colonies both discover food sources and transport the goods back to the home base.) Gradients in the pheromone trail are the difference between saying “There’s food around here somewhere” and “There’s food due north of here.”
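The two-sample trick behind gradient detection, compare the concentration at two nearby points and turn toward the stronger reading, is easy to sketch. The paired “antenna” offsets and the example pheromone field below are purely illustrative assumptions:

```python
def step_toward_source(x, y, concentration, delta=1.0):
    """Crude gradient detection: sample the field slightly ahead-left and
    ahead-right (a stand-in for an ant's paired antennae) and sidestep
    toward the stronger reading while always advancing "north"."""
    left = concentration(x - delta, y + delta)
    right = concentration(x + delta, y + delta)
    x += -delta if left > right else delta
    y += delta
    return x, y

def field(x, y):
    # Example pheromone field: strongest at (0, 10), fading with distance.
    return 1.0 / (1.0 + x ** 2 + (y - 10) ** 2)

pos = (6.0, 0.0)
for _ in range(10):
    pos = step_toward_source(*pos, field)
# pos ends at the source column: (0.0, 10.0)
```

Each individual sample says only “scent here”; it is the comparison between the two samples that converts “food around here somewhere” into a direction.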

  Like most of their relatives, the harvester ants that Deborah Gordon studies are also particularly adept at measuring the frequency of certain semiochemicals, a talent that also broadens the semantic range of the ant language. Ants can sense the difference between encountering ten foraging ants in an hour and encountering a hundred. Gordon believes this particular skill is critical to the colony’s formidable ability to adjust task allocation according to colony size or food supply—a local talent, in other words, that engenders global behavior.

  “I don’t think that the ants are assessing the size of the colony,” she tells me, “but I think that the colony size affects what an ant experiences, which is different. I don’t think that an ant is keeping track of how big the whole colony is, but I think that an ant in a big colony has a different experience from an ant in a small colony. And that may account for why large, old colonies act differently than small, young ones.” Ants, in Gordon’s view, conduct a kind of statistical sample of the overall population size, based on their random encounters with other ants. A foraging ant might expect to meet three other foragers per minute—if she encounters more than three, she might follow a rule that has her return to the nest. Because larger, older colonies produce more foragers, ants may behave differently in larger colonies because they are more likely to encounter other ants.

  This local feedback may well prove to be the secret to the ant world’s decentralized planning. Individual ants have no way of knowing how many foragers or nest-builders or trash collectors are on duty at any given time, but they can keep track of how many members of each group they’ve stumbled across in their daily travels. Based on that information—both the pheromone signal itself, and its frequency over time—they can adjust their own behavior accordingly. The colonies take a problem that human societies might solve with a command system (some kind of broadcast from mission control announcing that there are too many foragers) and instead solve it using statistical probabilities. Given enough ants moving randomly through a finite space, the colony will be able to make an accurate estimate of the overall need for foragers or nest-builders. Of course, it’s always possible that an individual ant might randomly stumble across a disproportionate number of foragers and thus overestimate the global foraging state and change her behavior accordingly. But because the decision-making process is spread out over thousands of individuals, the margin of error is vanishingly small. For every ant that happens to overestimate the number of foragers on duty, there’s one that underestimates. With a large enough colony, the two will eventually cancel each other out, and an accurate reading will emerge.
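Gordon’s point, that noisy individual samples average out into an accurate colony-wide estimate, can be demonstrated in a few lines. The population size, forager fraction, and encounters-per-ant below are arbitrary stand-ins chosen only to show the statistical effect:

```python
import random

def colony_estimate(n_ants=10_000, forager_fraction=0.3,
                    encounters=20, seed=2):
    """Each ant estimates the foraging rate from a handful of random
    encounters; the colony-wide average of those noisy local samples is
    far more accurate than any single ant's guess."""
    rng = random.Random(seed)
    # 1 = forager, 0 = anything else (nest-builder, trash collector...)
    roles = [1 if rng.random() < forager_fraction else 0
             for _ in range(n_ants)]
    estimates = []
    for _ in range(n_ants):
        met = [rng.choice(roles) for _ in range(encounters)]
        estimates.append(sum(met) / encounters)   # one ant's local sample
    return sum(estimates) / n_ants, estimates

mean, estimates = colony_estimate()
# individual estimates swing widely, but the colony-wide mean lands
# near the true foraging rate of 0.3
```

Any one ant may badly overestimate or underestimate the number of foragers on duty, exactly as the paragraph above describes; it is only when thousands of such errors are pooled that they cancel and an accurate reading emerges.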

 
