The blind spot doesn’t jump out at us because the brain isn’t expecting information from that zone, and there’s no other signal struggling to fill in the blanks for us, or pointing out that there is a blank in the first place. As the philosopher Daniel Dennett describes it, there are no centers of the visual cortex “responsible for receiving reports from this area, so when no reports arrive, there is no one to complain. An absence of information is not the same as information about an absence.” We’re blind to our blindness.
Perhaps the same goes for the theory of other minds. Without that awareness of other mental states reminding us of our own limitations, we might well be aware of the world, yet unaware of our own mental life. The lack of self-awareness wouldn’t jump out at us for the same reason that the blind spot remains invisible: there’s no feedback mechanism to sound the alarm that something’s missing. Only when we begin to speculate on the mental life of others do we discover that we have a mental life ourselves.
If self-awareness is a by-product of our mind-reading skills, what propelled us to start building those theories of other minds in the first place? That answer comes more easily. The battle of nature-versus-nurture may have many skirmishes to come, but by now only the most blinkered anti-essentialist disagrees with the premise that we are social animals by nature. The great preponderance of human populations worldwide—both modern and “primitive”—live in extended bands and form complex social systems. Among the apes, we are an anomaly in this respect: only the chimps share our compulsive mixed-sex socializing. (Orangutans live mostly solitary lives; gibbons as isolated couples; gorillas travel in harems dominated by a single male.) That social complexity demands formidable mental skills: instead of outfoxing a single predator, or caring for a single infant, humans mentally track the behavior of dozens of individuals, altering their own behavior based on that information. Some evolutionary psychologists believe that the extraordinary expansion of brain size between Homo habilis and Homo sapiens (brain mass trebled over the 2-million-year period that separates the two species) was at least in part triggered by an arms race between Pleistocene-era extroverts. If successfully passing on your genes to another generation depended on a nuanced social intelligence that competed with other social intellects for reproductive privileges, then it’s not hard to imagine natural selection generating a Machiavellian mental toolbox in a surprisingly short period.
The group element may even explain the explosion in sheer cranial size: social complexity is a problem that scales well—build a module that can analyze one person’s mind, and all you need to do is throw more resources at the problem, and you can analyze a dozen minds with the same tools. The brain didn’t need to invent any complicated new routines once it figured out how to read a single mind—it just needed to devote more processing power. That power came in the form of brain mass: more neurons to model the behavior of other brains, which themselves contained more neurons, for the same reason. It’s a classic case of positive feedback, only it seems to have run into a ceiling of 150 people, according to the latest anthropological studies. We have a natural gift for building theories of other minds, so long as there aren’t too many of them.
Perhaps if human evolution had continued on for another million years or so, we’d all be mentally modeling the behavior of entire cities. But for whatever reason, we stopped short at 150, and that’s where we remained—until the new technologies of urban living pushed our collectivities beyond the magic number. Those oversize communities appeared too quickly for our minds to adapt to them using the tools of natural selection, and so we hit upon another solution, one engineered by the community itself, and not by its genes. We started building neighborhoods, groups within groups. When our lived communities extended beyond the ceiling of human comprehension, we started building new floors.
*
Mirror neurons and mind reading have an immense amount to teach us about our talents and limitations as a species, and there’s no doubt we’ll be untangling the “theory of other minds” for years to come. Whatever the underlying mechanism turns out to be, the faculty of mind reading—and its close relation, self-awareness—is clearly an emergent property of the brain’s neural networks. We don’t know precisely how that higher-level behavior comes into being, but we do know that it is conjured up by the local, feedback-heavy interactions of unwitting agents, by the complex adaptive system that we call the human mind. No individual neuron is sentient, and yet somehow the union of billions of neurons creates self-awareness. It may turn out that the brain gets to that self-awareness by first predicting the behavior of neurons residing in other brains—the way, for instance, our brains are hardwired to predict the behavior of light particles and sound waves. But whichever one came first—the extroverted chicken or the self-aware egg—those faculties are prime examples of emergence at work. You wouldn’t be able to read these words, or speculate about the inner workings of your mind, were it not for the protean force of emergence.
But there are limits to that force, and to its handiwork. Natural selection endowed us with cognitive tools uniquely equipped to handle the social complexity of Stone Age groups on the savannas of Africa, but once the agricultural revolution introduced the first cities along the banks of the Tigris and Euphrates, the Homo sapiens mind naturally recoiled from the sheer scale of those populations. A mind designed to handle the maneuverings of fewer than two hundred individuals suddenly found itself immersed in a community of ten or twenty thousand. To solve that problem, we once again leaned on the powers of emergence, although the solution resided one level up from the individual human brain: instead of looking to swarms of neurons to deal with social complexity, we looked to swarms of individual humans. Instead of reverberating neuronal circuits, neighborhoods emerged out of traffic patterns. By following the footprints, and learning from their behavior, we built another ceiling on top of the one imposed on us by our frontal lobes. Managing complexity became a problem to be solved on the level of the city itself.
Over the last decade we have run up against another ceiling. We are now connected to hundreds of millions of people via the vast labyrinth of the World Wide Web. A community of that scale requires a new solution, one beyond our brains or our sidewalks, but once again we look to self-organization for the tools, this time built out of the instruction sets of software: Alexa, Slashdot, Epinions, Everything2, Freenet. Our brains first helped us navigate larger groups of fellow humans by allowing us to peer into the minds of other individuals and to recognize patterns in their behavior. The city allowed us to see patterns of group behavior by recording and displaying those patterns in the form of neighborhoods. Now the latest software scours the Web for patterns of online activity, using feedback and pattern-matching tools to find neighbors in an impossibly oversize population. At first glance, these three solutions—brains, cities, and software—would seem to belong to completely different orders of experience. But as we have seen over the preceding pages, they are all instances of self-organization at work, local interactions leading to global order. They exist on a continuum of sorts. The materials change as you jump from the scale of a hundred humans to a million to 100 million. But the system remains the same.
Amazingly, this process has come full circle. Hundreds of thousands—if not millions—of years ago, our brains developed a feedback mechanism that enabled them to construct theories of other minds. Today, we are beginning to create software applications that are capable of developing a theory of our minds. All those fluid, self-organizing programs tracking our tastes and interests, and measuring them against the behavior of larger populations—these programs are the beginning of a progression that will, in a matter of years, lead to a world where we regularly interact with media that seems to know us in some fundamental way. Software will recognize our habits, anticipate our needs, adapt to our changing moods. The first generation of emergent software—programs like SimCity and StarLogo—displayed a captivatingly organic quality; they seemed more like life-forms than the sterile instruction sets and command lines of early code. The next generation will take that organic feel one step further: the new software will use the tools of self-organization to build models of our own mental states. These programs won’t be self-aware, and they won’t pass any Turing tests, but they will make the media experiences we’ve grown accustomed to seem autistic in comparison. They will be mind readers.
From a certain angle, this is an old story. The great software revolution of the seventies and eighties—the invention of the graphic interface—was itself predicated on a theory of other minds. The design principles behind the graphic interface were based on predictions about the general faculties of the human perceptual and cognitive systems. Our spatial memory, for instance, is more powerful than our textual memory, so graphic interfaces emphasize icons over commands. We have a natural gift for associative thinking, thanks to the formidable pattern-matching skills of the brain’s distributed network, so the graphic interface borrowed visual metaphors from the real world: desktops, folders, trash cans. Just as certain drugs are designed specifically as keys to unlock the neurochemistry of our gray matter, the graphic interface was designed to exploit the innate talents of the human mind and to rely as little as possible on our shortcomings. If the ants had been the first species to invent personal computers, they would no doubt have built pheromone interfaces, but because we inherited the exceptional visual skills of the primate family, we have adopted spatial metaphors on our computer screens.
To be sure, the graphic interface’s mind-reading talents are ruthlessly generic. Scrolling windows and desktop metaphors are based on predictions about a human mind, not your mind. They’re one-size-fits-all theories, and they lack any real feedback mechanism to grow more familiar with your particular aptitudes. What’s more, their predictions are decidedly the product of top-down engineering. The software didn’t learn on its own that we’re a visual species; researchers at Xerox PARC and MIT already knew about our visual memory, and they used that knowledge to create the first generation of spatial metaphors. But these limitations will soon go the way of vacuum tubes and punch cards. Our software will develop nuanced and evolving models of our individual mental states, and that learning will emerge out of a bottom-up system. And while this software will deliver information tailored to our interests and appetites, its mind-reading skills will be far less insular than today’s critics would have us believe. You may read something like the Daily Me in the near future, but that digital newspaper will be compiled by tracking the interests and reading habits of millions of other humans. Interacting with emergent software is already more like growing a garden than driving a car or reading a book. In the near future, though, you’ll be working alongside a million other gardeners. We will have more powerful personalization tools than we ever thought possible—but those tools will be created by massive groups scattered all across the world. When Pattie Maes first began developing recommendation software at MIT in the early nineties, she called it collaborative filtering. The term has only grown more resonant. In the next few years, we will have personalized filters beyond our wildest dreams. But we will also be collaborating on a scale rivaled only by the cities we first started building six thousand years ago.
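The mechanics behind such filters are simpler than the rhetoric might suggest. Here is a minimal sketch of user-based collaborative filtering in Python; the three readers and their ratings are invented for illustration, and real systems (Maes’s included) are vastly more elaborate. The filter scores an item you haven’t tried by finding the neighbors whose recorded tastes most resemble yours and weighting their opinions by that resemblance.

```python
from math import sqrt

# A toy user-based collaborative filter. The users, items, and ratings are
# hypothetical; a real system would track millions of people and items.
ratings = {
    "alice": {"jazz": 5, "ambient": 4, "punk": 1},
    "bob":   {"jazz": 4, "ambient": 5, "opera": 2},
    "carol": {"punk": 5, "opera": 1, "jazz": 2},
}

def similarity(a, b):
    """Cosine similarity over the items two users have both rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[i] * b[i] for i in shared)
    return dot / (sqrt(sum(a[i] ** 2 for i in shared)) *
                  sqrt(sum(b[i] ** 2 for i in shared)))

def predict(user, item):
    """Estimate a rating as the similarity-weighted average of neighbors' ratings."""
    weighted = total = 0.0
    for other, theirs in ratings.items():
        if other == user or item not in theirs:
            continue
        w = similarity(ratings[user], theirs)
        weighted += w * theirs[item]
        total += w
    return weighted / total if total else None

print(predict("alice", "opera"))  # alice's likely opinion, borrowed from her neighbors
```

The individual steps are trivial; whatever intelligence the recommendation has lives entirely in the accumulated behavior of the other users.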
Those collaborations will build more than just music-recommendation tools and personalized newspapers. Our new ability to capture the power of emergence in code will be closer to the revolution unleashed when we figured out how to distribute electricity a century ago. Almost every region of our cultural life was transformed by the power grid; the power of self-organization—coupled with the connective technology of the Internet—will usher in a revolution every bit as significant. Applied emergence will go far beyond simply building more user-friendly applications. It will transform our very definition of a media experience and challenge many of our habitual assumptions about the separation between public and private life. A few decades from now, the forces unleashed by the bottom-up revolution may well dictate that we redefine intelligence itself, as computers begin to convincingly simulate the human capacity for open-ended learning. But in the next five years alone, we’ll have plenty of changes to keep us busy. Our computers and television sets and refrigerators won’t be thinking themselves, but they’ll have a pretty good idea what we’re thinking about.
*
Technology analysts never tire of reminding us that pornography is the ultimate early adopter. New technologies, in other words, are assimilated by the sex industries more quickly than by the mainstream—it was true for the printing press, for the VCR, for Web-based broadband. But video games are challenging that old adage. Because part of their appeal lies in their promise of new experiences, and because their audience is willing to scale formidable learning curves in pursuit of those new experiences, games often showcase cutting-edge technology before the tech makes its way over to the red-light district. Certainly that has been the case with emergent software. Gamers have been experimenting with self-organizing systems at least since SimCity’s release in 1989, but the digital porn world remains, as it were, a top-down affair—despite the hype about putatively “interactive” DVDs.
In fact, video-game culture is the one arena today where you can see the “theory of other minds” integrated into a genuinely engaging media experience. Play any advanced first-person shooter such as Quake or Unreal against computer opponents and you’ll witness astonishingly lifelike behavior from the simulated gunslingers battling against you. They’ll learn to anticipate your idiosyncrasies as a player; they’ll form complicated flocking patterns with other computer “bots”; they’ll grow familiar with new environments as they explore them. There’s not much art to these talents, since they are mostly in service of blowing things up, but there is an undeniable intelligence to those computer opponents—an intelligence that is only indirectly controlled by the games’ original programmers.
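Flocking, at least, is a well-documented trick: in Craig Reynolds’s classic “boids” scheme, each agent follows a few purely local rules and coordinated group movement emerges without any central choreography. Whether or not any particular shooter uses exactly this recipe, the one-dimensional sketch below shows the idea, with toy constants chosen purely for illustration.

```python
import random

# A minimal "boids"-style flock: each bot obeys three local rules (drift
# toward the group, match its heading, avoid crowding) and cohesive group
# movement emerges. Positions are one-dimensional and the constants are
# toy values for illustration.

class Bot:
    def __init__(self):
        self.pos = random.uniform(0, 100)
        self.vel = random.uniform(-1, 1)

def step(bots):
    for b in bots:
        others = [o for o in bots if o is not b]
        center = sum(o.pos for o in others) / len(others)
        heading = sum(o.vel for o in others) / len(others)
        cohesion = 0.01 * (center - b.pos)      # drift toward the group
        alignment = 0.1 * (heading - b.vel)     # match the group's heading
        separation = 0.05 * sum(b.pos - o.pos
                                for o in others if abs(b.pos - o.pos) < 2.0)
        b.vel += cohesion + alignment + separation   # no bot sees the whole plan
    for b in bots:
        b.pos += b.vel

flock = [Bot() for _ in range(10)]
for _ in range(300):
    step(flock)
print(sorted(round(b.pos, 1) for b in flock))  # the scattered bots now move as one tight group
```

No line of that code describes a flock; the flock is simply what the local rules add up to.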
Will Wright’s games have historically led the way in embracing bottom-up routines, but even an advanced game like The Sims falls short of his own ambitions in this arena. The residents of Simsville may display remarkably lifelike personality and behavioral traits, but they are unlikely to spontaneously develop a new skill that Wright didn’t program into the game originally. You’ll see them fall in love or zone out in front of the television, but you won’t see one start yodeling or become a serial killer unless a human programmer has specifically added that behavior to the system. But Wright’s dream is to have Sims that do develop unique behavior on their own, Sims that exceed the imagination of their creators. “I’ve been fascinated with adaptive computing for some time now,” he says. “There are some rather hard problems to overcome, however. Some of the most promising technologies seem to also be the most parallel, like genetic algorithms, neural networks. These systems tend to learn by accumulating experience over a wide number of individual cases.” Think here of Danny Hillis’s number-sorting program. Hillis did manage to coax an ingenious and unplanned solution from the software, but it took thousands of iterations (not to mention a Connection Machine supercomputer). No game player wants to sit around waiting for his on-screen characters to finish their simulated evolution before they start acting naturally. “In a game like The Sims,” Wright says, “learning in ‘user time’ might best be accomplished by giving the characters a form of hypothetical modeling. In other words they might constantly be running ‘microsimulations’ in their little heads—simulating a subset of the main simulation—to find ways of improving their decision-making.”
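A rough sketch can make the “microsimulation” idea concrete; the world model, the actions, and the happiness score below are invented stand-ins, not anything from The Sims’ actual engine. The character clones a cheap model of its situation, plays each candidate action forward a few imagined steps, and commits to whichever future scores best.

```python
import copy
import random

# A hypothetical character that decides by hypothetical modeling: it simulates
# a subset of its world a few steps ahead for each candidate action, then picks
# the action whose imagined outcome scores highest. All numbers are toy values.

class WorldModel:
    def __init__(self, hunger, energy, fun):
        self.hunger, self.energy, self.fun = hunger, energy, fun

    def apply(self, action):
        if action == "eat":
            self.hunger -= 3
            self.energy += 1
        elif action == "sleep":
            self.energy += 3
            self.hunger += 1
        elif action == "play":
            self.fun += 3
            self.energy -= 2
            self.hunger += 1

    def happiness(self):
        return self.fun + self.energy - self.hunger

def choose_action(world, actions, horizon=3, trials=10):
    """Run short microsimulations of each action and pick the best on average."""
    best_action, best_score = None, float("-inf")
    for action in actions:
        score = 0.0
        for _ in range(trials):
            sim = copy.deepcopy(world)       # a private, disposable copy of the world
            sim.apply(action)
            for _ in range(horizon - 1):     # imagine the next few moves at random
                sim.apply(random.choice(actions))
            score += sim.happiness() / trials
        if score > best_score:
            best_action, best_score = action, score
    return best_action

state = WorldModel(hunger=8, energy=3, fun=2)
print(choose_action(state, ["eat", "sleep", "play"]))  # usually "eat": it scores best in the imagined futures
```

Nothing here evolves over thousands of generations; the character simply borrows the simulation machinery it already lives inside and runs a smaller copy of it, which is what makes the learning fast enough to happen in “user time.”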
But that learning need not be limited to the fictional universe of the game itself. “Another possibility would be to give the game some sense of how much the user is engaged and having fun,” Wright speculates. “If we could measure this in some way—perhaps by analyzing the input stream and comparing it to a historical user profile—then we could design the game to learn what you like and enjoy. Each copy of the game would learn and evolve to fit each individual player. Maybe you’re getting bored with the original gameplay; the game would detect this and try adding new elements to the game, getting more radical each time, until it hits on something you like. It would then take this and continue to evolve and refine it in directions that you find entertaining.” Introduce real feedback into the equation—beyond the simple input of joysticks and trackballs—and suddenly the genre grows more flexible, more other-minded. The game becomes much more like a live performer, adapting to its audience, punching up certain routines while toning others down. Wright’s vision is a significant step beyond the “choose your own path” vision of hypertext fiction championed in the early nineties. The “author” isn’t presenting the “reader” with a selection of prefab threads to follow; the reader’s interests and inclinations generate entirely novel threads, to the extent that the rules of the game vary from player to player. The first-generation interactive narratives were finally all about choosing one of several sanctioned links, picking one path over the others. The future that Wright envisions will be about creating a new path—or eliminating paths altogether.
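The engagement detector Wright imagines can be sketched just as roughly; the boredom threshold and the escalating rule changes below are invented for illustration. The loop compares the player’s recent input rate with a running profile of their own history, and when activity falls well below that personal norm, it introduces progressively more radical changes until something sticks.

```python
from collections import deque

# A toy feedback loop in the spirit of Wright's speculation: watch the input
# stream, compare it to the player's historical profile, and escalate rule
# changes when engagement sags. Thresholds and window sizes are arbitrary.

class AdaptiveGame:
    def __init__(self):
        self.history = deque(maxlen=100)   # long-run profile: actions per minute
        self.recent = deque(maxlen=10)     # the last few minutes of play
        self.radicalness = 0               # how far we have strayed from the stock rules

    def record_minute(self, actions_per_minute):
        self.history.append(actions_per_minute)
        self.recent.append(actions_per_minute)
        if self._bored():
            self.radicalness += 1          # get more radical each time boredom persists
            self.tweak_rules(self.radicalness)

    def _bored(self):
        if len(self.history) < 20:
            return False                   # not enough history to judge this player yet
        long_run = sum(self.history) / len(self.history)
        just_now = sum(self.recent) / len(self.recent)
        return just_now < 0.6 * long_run   # activity well below the player's own norm

    def tweak_rules(self, level):
        # placeholder: each level would introduce a more radical new game element
        print(f"introducing a level-{level} change to the gameplay")

game = AdaptiveGame()
for apm in [40] * 30 + [15] * 10:          # a player gradually losing interest
    game.record_minute(apm)                # prints escalating changes near the end
```

Each copy of the game, in other words, keeps its own running theory of its player’s mind.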
Could such a model be applied to television? Not in the sense of growing your own sitcom, or choosing the ending of ER—but rather in the sense of growing your own network of programming. In the summer of 2000, a national ad campaign began running on the major networks, starring, for probably the first time in TV history, an office full of television programmers. “Look at these guys,” the voice-over says contemptuously as the camera swoops through a workspace bustling with suits, casually canceling sitcoms and flirting with their personal assistants. “Network TV programmers. They decide what we watch and when we watch it.” The camera tracks through an office door and homes in on an executive leaning back at his desk, contemplating the view from his corner office. Suddenly, two burly guys in black T-shirts appear at the corners of the screen. They pull the head programmer out of his chair and unceremoniously toss him out the window while the voice-over notes, “Who needs ’em?”