If you leave “Free will” off, The Sims quickly disintegrates into a nightmare of round-the-clock maintenance, requiring the kind of constant attention you’d expect in a nursery or a home for Alzheimer’s patients. Without free will, your Sims simply sit around, waiting for you to tell them what to do. They may be starving, but unless you direct them to the fridge, they’ll just sit out their craving for food like a gang of suburban hunger artists. Even the neatest of the Sims will tolerate piles of rotting garbage until you specifically order them to take out the trash. Without a helpful push toward the toilet, they’ll even relieve themselves right in the middle of the living room.
Playing The Sims without free will selected is a great reminder that too much control can be a disastrous thing. But the opposite can be even worse. Early in the design of The Sims, Wright recognized that his virtual people would need a certain amount of autonomy for the game to be fun, and so he and his team began developing a set of artificial-intelligence routines that would allow the Sims to think for themselves. That AI became the basis for the characters’ “free will,” but after a year of work, the designers found that they’d been a little too successful in bringing the Sims to life.
“One of our biggest problems here was that our AI was too smart,” Wright says now. “The characters chose whichever action would maximize their happiness at any given moment. The problem is that they’re usually much better at this than the player.” The fun of The Sims comes from the incomplete information that you have about the overall system: you don’t know exactly what combination of actions will lead to a maximum amount of happiness for your characters—but the software behind the AI can easily make those calculations, because the happiness quota is built out of the game’s rules. In Wright’s early incarnations of the game, once you turned on free will, your characters would go about maximizing their happiness in perfectly rational ways. The effect was not unlike hiring Deep Blue to play a game of chess for you—the results were undeniably good ones, but where was the fun?
And so Wright had to dumb down his digital creations. “We did it in two ways,” he says. “First, we made them focus on immediate gratification rather than long-term goals—they’d rather sit in front of the TV and be couch potatoes than study for a job promotion. Second, we gave their personality a very heavy weight on their decisions, to an almost pathological degree. A very neat Sim will spend way too much time picking up—even after other Sims—while a sloppy Sim will never do this. These two things were enough to ensure that the player was a sorely needed component of their world.” In other words, Wright made their decisions local ones and made the rules that governed their behavior more intransigent. For the emergent system of the game to work, Wright had to make the Sims more like ants than people.
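Wright’s two adjustments can be pictured as a tiny utility-based action selector. The sketch below is purely illustrative and not Maxis’s actual code: the action names, payoff numbers, and weights are all invented. A “myopia” weight makes immediate payoff swamp long-term payoff, and one personality trait (here, neatness) gets an almost pathological weight on a single action.

```python
# Hypothetical sketch of the action selection Wright describes.
# All names and numbers are invented for illustration; this is
# not the actual code from The Sims.

ACTIONS = {
    # action: (immediate happiness payoff, long-term happiness payoff)
    "watch_tv": (8, 0),
    "study_for_promotion": (1, 9),
    "tidy_up": (2, 3),
}

def choose_action(neatness, myopia=0.9):
    """Pick the action with the highest weighted score.

    A myopia near 1.0 makes the Sim favor immediate gratification
    over long-term payoff; the personality trait (neatness, 0..1)
    carries a heavy, almost pathological weight on tidying.
    """
    def score(name):
        immediate, long_term = ACTIONS[name]
        base = myopia * immediate + (1 - myopia) * long_term
        if name == "tidy_up":
            # Personality dominates the decision: a very neat Sim
            # tidies constantly, a sloppy Sim essentially never does.
            base += 20 * (neatness - 0.5)
        return base

    return max(ACTIONS, key=score)
```

Under these made-up weights, a very neat Sim picks tidying almost regardless of payoff, a sloppy Sim defaults to the couch, and nobody studies for the promotion; the long-term planning is left to the player.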
I think there is something profound, and embryonic, in that “free will” button, and in Wright’s battle with the autonomy of his creations—something both like and unlike the traditional talents that we expect from our great storytellers. Narrative has always been about the mix of invention and repetition; stories seem like stories because they follow rules that we’ve learned to recognize, but the stories that we most love are ones that surprise us in some way, that break rules in the telling. They are a mix of the familiar and the strange: too much of the former, and they seem stale, formulaic; too much of the latter, and they cease to be stories. We love narrative genres—detective, romance, action-adventure—but the word generic is almost always used as a pejorative.
It misses the point to think of what Will Wright does as storytelling—it doesn’t do justice to the novelty of the form, and its own peculiar charms. But that battle over control that underlies any work of emergent software, particularly a work that aims to entertain us, runs parallel to the clash between repetition and invention in the art of the storyteller. A good yarn surprises us, but not too much. A game like The Sims gives its on-screen creatures some autonomy, but not too much. Emergent systems are not stories, and in many ways they live by very different rules, for both creator and consumer. (For one, emergent systems make that distinction a lot blurrier.) But the art of the storyteller can be enlightening in this context, because we already accept the premise that storytelling is an art, and we have a mature vocabulary to describe the gifts of its practitioners. We are only just now developing such a language to describe the art of emergence. But here’s a start: great designers like Wright or Resnick or Zimmerman are control artists—they have a feel for that middle ground between free will and the nursing home, for the thin line between too much order and too little. They have a feel for the edges.
PART THREE
Screenshot from SimCity 3000 (Courtesy of Maxis)
Can a selectional system be simulated? The answer must be split into two parts. If I take a particular animal that is the result of evolutionary and developmental selection, so that I already know its structure and the principles governing its selective processes, I can simulate the animal’s structure in a computer. But a system undergoing selection has two parts: the animal or organ, and the environment or world. No instructions come from events of the world to the system on which selection occurs. Moreover, events occurring in an environment or a world are unpredictable. How then do we simulate events and their effects on selection? One way is as follows:
1. Simulate the organ or the animal as described above, making provision for the fact that, as a selective system, it contains a generator of diversity—mutations, alterations in neural wiring, or synaptic changes that are unpredictable.
2. Independently simulate a world or environment constrained by known physical principles, but allow for the occurrence of unpredictable events.
3. Let the simulated organ or animal interact with the simulated world or the real world without prior information transfer, so that selection can take place.
4. See what happens.
—GERALD EDELMAN
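Edelman’s four-step recipe can be mocked up in a few lines. The toy below is a hypothetical illustration only, with every name and number invented: a population carries its own generator of diversity through random mutation (step 1), the world is simulated independently and produces unpredictable events (step 2), the two interact with no instructions flowing from world to creature (step 3), and we simply watch what survives (step 4).

```python
import random

def mutate(trait):
    # Step 1's generator of diversity: small, unpredictable changes.
    return trait + random.gauss(0, 0.1)

def world_event():
    # Step 2: an independently simulated world producing an
    # unpredictable condition each generation.
    return random.uniform(0, 1)

def run(generations=200, pop_size=30):
    """Steps 3 and 4: let creatures and world interact, then watch."""
    population = [random.uniform(0, 1) for _ in range(pop_size)]
    for _ in range(generations):
        target = world_event()
        # Selection happens through interaction alone: creatures whose
        # trait happens to fit today's conditions reproduce, but no
        # creature is ever "told" what the conditions were.
        population.sort(key=lambda t: abs(t - target))
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(t) for t in survivors]
    return population
```

Step 4, “see what happens,” is meant literally: the traits that remain after `run()` reflect the history of an unpredictable world that never transferred any instructions to the creatures being selected.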
6
The Mind Readers
What are you thinking about right now? Because my words are being communicated to you via the one-way medium of the printed page, this is a difficult question for me to answer. But if I were presenting this argument while sitting across a table from you, I’d already have an answer, or at least an educated guess—even if you’d been silent the entire time. Your facial gestures, eye movements, and body language would all be sending a steady stream of information about your internal state—signals that I would intuitively pick up and interpret. I’d see your eyelids droop during the more contorted arguments, note the chuckle at one of my attempts at humor, register the way you sit upright in the chair when my words get your attention. I could no more prohibit my mind from making those assessments than you could stop your mind from interpreting my spoken words as language. (Assuming you’re an English speaker, of course.) We are both locked in a communicational dance of extraordinary depth—and yet, amazingly, we’re barely aware of the process at all.
Human beings are innate mind readers. Our skill at imagining other people’s mental states ranks up there with our knack for language and our opposable thumbs. It comes so naturally to us and has engendered so many corollary effects that it’s hard for us to think of it as a special skill at all. And yet most animals lack the mind-reading skills of a four-year-old child. We come into the world with a genetic aptitude for building “theories of other minds,” and adjusting those theories on the fly, in response to various forms of social feedback.
In the mideighties, the UK psychologists Simon Baron-Cohen, Alan Leslie, and Uta Frith conducted a landmark experiment to test the mind-reading skills of young children. They concealed a set of pencils within a box of Smarties, the British candy. They asked a series of four-year-olds to open the box and make the unhappy discovery of the pencils within. The researchers then closed the box up and ushered a grown-up into the room. The children were then asked what the grown-up was expecting to find within the Smarties box—not what they would find, mind you, but what they were expecting to find. Across the board, the four-year-olds gave the right answer: the clueless grown-up was expecting to find Smarties, not pencils. The children were able to separate their own knowledge about the contents of the Smarties box from the knowledge of another person. They grasped the distinction between the external world as they perceived it, and the world as perceived by others. The psychologists then conducted the same experiment with three-year-olds, and the exact opposite result came back. The children consistently assumed that the grown-up would expect to find pencils in the box, not candy. They had not yet developed the faculty for building models of other people’s mental states—they were trapped in a kind of infantile omniscience, where the knowledge you possess is shared by the entire world. The idea of two radically distinct mental states, each containing different information about the world, exceeded the faculties of the three-year-old mind, but it came naturally to the four-year-olds.
Our closest evolutionary cousins, the chimpanzees, share our aptitude for mind reading. The Dutch primatologist Frans de Waal tells a story of calculating sexual intrigue in his engaging, novel-like study, Chimpanzee Politics. A young, low-ranking male (named, appropriately enough, Dandy) decides to make a play for one of the females in the group. Being a chimpanzee, he opts for the usual chimpanzee method of expressing sexual attraction, which is to sit with your legs apart within eyeshot of your objet de désir and reveal your erection. (Try that approach in human society, of course, and you’ll usually end up with a restraining order.) During this particular frisky display, Luit, one of the high-ranking males, happens upon the “courtship” scene. Dandy deftly uses his hands to conceal his erection so that Luit can’t see it, but the female chimp can. It’s the chimp equivalent of the adulterer saying, “This is just our little secret, right?”
De Waal’s story—one of many comparable instances of primate intrigue—showcases our close cousins’ ability to model the mental states of other chimps. As in the Smarties study, Dandy is performing a complicated social calculus in his concealment strategy: he wants the female chimp to know that he’s enamored of her, but wants to hide that information from Luit. That kind of thinking seems natural to us (because it is!), but to think like that you have to be capable of modeling the contents of other primate minds. If Dandy could speak, his summary of the situation might read something like this: she knows what I’m thinking; he doesn’t know what I’m thinking; she knows that I don’t want him to know what I’m thinking. In that crude act of concealment, Dandy demonstrates that he possesses a gift for social imagination missing in 99.99 percent of the world’s living creatures. To make that gesture, he must somewhere be aware that the world is full of imperfectly shared information, and that other individuals may have a perspective on the world that differs from his. Most important (and most conniving), he’s capable of exploiting that difference for his own benefit. That exploitation—a furtive pass concealed from the alpha male—is only possible because he is capable of building theories of other minds.
Is it conceivable that this skill simply derives from a general increase in intelligence? Could it be that humans and their close cousins are just smarter than all those other species who flunk the mind-reading test? In other words, is there something specific to our social intelligence, something akin to a module hardwired into the brain’s CPU—or is the theory of other minds just an idea that inevitably occurs to animals who reach a certain threshold of general intelligence? We are only now beginning to build useful maps of the brain’s functional topography, but already we see signs that “mind reading” is more than just a by-product of general intelligence. Several years ago, the Italian neuroscientist Giacomo Rizzolatti discovered a region of the brain that may well prove to be integral to the theory of other minds. Rizzolatti was studying a section of the ventral premotor area of the monkey brain, a region of the frontal lobe usually associated with muscular control. Certain neurons in this field fired when the monkey performed specific activities, like reaching for an object or putting food in its mouth. Different neurons would fire in response to different activities. At first, this level of coordination suggested that these neurons were commanding the appropriate muscles to perform certain tasks. But then Rizzolatti noticed a bizarre phenomenon. The same neurons would fire when the monkey observed another monkey performing the task. The pound-your-fist-on-the-floor neurons would fire every time the monkey saw his cellmate pounding his fist on the floor.
Rizzolatti called these unusual cells “mirror neurons,” and since his announcement of the discovery, the neuroscience community has been abuzz with speculation about the significance of the “monkey see, monkey do” phenomenon. It’s conceivable that mirror neurons exist for more subtle, introspective mental states—such as desire or rage or tedium—and that those neurons fire when we detect signs of those states in others. That synchronization may well be the neurological root of mind reading, which would mean that our skills were more than just an offshoot of general intelligence, but relied instead on our brains’ being wired a specific way. We know already that specific regions are devoted to visual processing, speech, and other cognitive skills. Rizzolatti’s discovery suggests that we may also have a module for mind reading.
The modular theory is also supported by evidence of what happens when that wiring is damaged. Many neuroscientists now believe that autistics suffer from a specific neurological disorder that inhibits their ability to build theories of other minds—a notion that will instantly ring true for anyone who has experienced the strange emotional distance, the radical introversion, that one finds in interacting with an autistic person. Autism, the argument goes, stems from an inability to project outside one’s own head and imagine the mental life of others. And yet autistics regularly fare well on many tests of general intelligence and often display exceptional talents at math and pattern recognition. Their disorder is not a disorder of lowered intellect. Rather, autistics lack a particular skill, the way others lack the faculty of sight or hearing. They are mind blind.
*
Still, it can be hard to appreciate how rare a gift our mind reading truly is. For most of us, being aware of other minds seems at first blush like a relatively simple achievement—certainly not something you’d need a special cognitive tool for. I know what it’s like inside my head, after all—it’s only logical that I should imagine what’s inside someone else’s. If we’re already self-aware, how big a leap is it to start keeping track of other selves?
This is a legitimate question, and like almost any important question that has to do with human consciousness, the jury is still out on it. (To put it bluntly, the jury hasn’t even been convened yet.) But some recent research suggests that the question has it exactly backward—at least as far as the evolution of the brain goes. We’re conscious of our own thoughts, the argument suggests, only because we first evolved the capacity to imagine the thoughts of others. A mind that can’t imagine external mental states is like that of a three-year-old who projects his or her own knowledge onto everyone in the room: it’s all pencils, no Smarties. But as philosophers have long noted, to be self-aware means recognizing the limits of selfhood. You can’t step back and reflect on your own thoughts without recognizing that your thoughts are finite, and that other combinations of thoughts are possible. We know both that the pencils are in the box, and that newcomers will still expect Smarties. Without those limits, we’d certainly be aware of the world in some basic sense—it’s just that we wouldn’t be aware of ourselves, because there’d be nothing to compare ourselves to. The self and the world would be indistinguishable.
The notion of being aware of the world and yet not somehow self-aware seems like a logical impossibility. It feels as if our own selfhood would scream out at us after a while, “Hey, look at me! Forget about those Smarties—I’m thinking here! Pay attention to me!” But without any recognition of other thoughts to measure our own thoughts against, our own mental state wouldn’t even register as something to think about. It may well be that self-awareness only jumps out to us because we’re naturally inclined to project into the minds of others. But in a mind incapable of imagining the contents of other minds, that self-reflection wouldn’t be missed. It would be like being raised on a planet without satellites, and missing the moon.
We all have a region at the back of each retina where the optic nerve, which carries signals on to the visual cortex, exits the eye. No rods or cones are within this area, so the corresponding area of our visual field is incapable of registering light. While this blind spot has a surprisingly large diameter (about six degrees across), its effects are minimal because we have two eyes: the blind spots in each eye don’t overlap, and so information from one eye fills in the information lacking in the other. But you can detect the existence of the blind spot by closing your left eye and focusing your right eye on a specific word in this sentence. Place your index finger over the word, and then slowly move your finger to the right, while keeping your gaze locked on the word. After a few inches, you’ll notice that the tip of your finger fades from view. It’s an uncanny feeling, but what’s even more uncanny is that your visual field suffers from this strange disappearing act anytime you close one eye. And yet you don’t notice the absence at all—there’s no sense of information being lost, no dark patch, no blurriness. You have to do an elaborate trick with your finger to notice that something’s missing. It’s not the lack of visual information that should startle us; it’s that we have such a hard time noticing the lack.
Emergence Page 18