But others argued that the Singularity was already well under way. In this view, computers across the Internet were already busy recording our movements and shopping preferences, suggesting music and diets, and replacing traditional brain functions such as information recall and memory storage. Gregory Stock, a biophysicist, echoed Butler as he placed technology in an evolutionary context. “Lines are blurring between the living and the not-living, between the born and the made,” he said. The way he described it, life leapt every billion years or so into greater levels of complexity. It started with simple algaelike cells, advanced to complex cells and later to multicellular organisms, and then to an explosion of life during the Cambrian period, some five hundred fifty million years ago. This engendered new materials within earth’s life forms, including bone. Stock argued that humans, using information technology, were continuing this process, creating a “planetary superorganism”—a joint venture between our cerebral cortex and silicon. He said that this global intelligence was already transforming and subjugating us, much the way our ancestors tamed the gray wolf to create dogs. He predicted that this next step of evolution would lead to the demise of “free-range humans,” and that those free of the support and control of the planetary superorganism would retreat to back eddies. “I hate to see them disappear,” he said.
The crowd at the Singularity Summit was by no means united in these visions. A biologist from Cambridge University, Dennis Bray, described the daunting complexity of a single cell and cautioned that the work of modeling the circuitry and behavior of even the most basic units of life remained formidable. “The number of distinguishable proteins that a human makes is essentially uncountable,” he said. So what chance was there to model the human brain, with its hundred billion neurons and quadrillions of connections?
In the near term, it was academic. No one was close to replicating the brain in form or function. Still, the scientists at the conference were busy studying it, hoping to glean from its workings individual capabilities that could be taught to computers. The brain, they held, would deliver its treasures bit by bit. Tenenbaum was of this school.
And so was Demis Hassabis. A diminutive thirty-four-year-old British neuroscientist, Hassabis told the crowd that technology wasn’t the only thing growing exponentially. Research papers on the brain were also doubling every year. Some fifty thousand academic papers on neuroscience had been published in 2008 alone. “If you looked at neuroscience in 2005, or before that, you’re way out of date now,” he said. But which areas of brain research would lead to the development of Artificial General Intelligence?
Hassabis had followed an unusual path toward AI research. At thirteen, he was the highest-ranked chess player of his age on earth. But computers were already making inroads in chess. So why dedicate his brain, which he had every reason to believe was exceptional, to a field that machines would soon conquer? (From the perspective of futurists, chess was an early sighting of the Singularity.) Even as he played chess, Hassabis said later, he was interested in what was going on in his head—and how to transmit those signals to machines. “I knew then what I wanted to do, and I had a plan for getting there.”
The first step was to drop chess and dive into an area that was attracting (and arguably, shaping) the brains of many in his generation: video games. By seventeen, he was the lead developer on the game “Theme Park.” It sold millions of copies and won industry awards. He went on to Cambridge for a degree in computer science and then founded a video game company, Elixir Studios, when he was twenty-two. While running the company, Hassabis participated in the British “Mind Sports Olympiad” every year. This was where brain games aficionados gathered to compete in all kinds of contests, including chess, poker, bridge, go, and backgammon. In six years, he won the championship five times.
The way Hassabis described it, this was all leading to his current research. The video game experience gave him a grounding in software and hardware, along with an understanding of how humans and computers interact (known in the industry as the man-machine interface). The computer science degree delivered the tools for AI. And in 2005 he went for the last missing piece, a doctorate in neuroscience.
In his current research at the Gatsby Computational Neuroscience Unit at University College London, Hassabis focuses on the hippocampus. This is the part of the brain that consolidates memories, sifting through the torrents of thoughts, dialogues, sounds, and images pouring into our minds and dispatching selected ones into long-term memory. Something singular occurs during that process, he believes. He thinks that it leads to the creation of concepts, a hallmark of human cognition.
“Knowledge in the brain can be separated into three levels,” he said: perceptual, conceptual, and symbolic. Computers can master two of the three: perceptions and symbols. A computer with vision and audio software can easily count the number of dogs in a kennel or measure the decibel level of their barks. That’s perception. And as Watson and others demonstrate, machines can associate symbols with definitions. That’s their forte. Formal ontologies no doubt place “dog” in the canine group and link it to dozens of subgroups, from Chihuahuas to schnauzers.
But between perceiving the dog and identifying its three-letter symbol, there’s a cognitive gap. Deep down, computers don’t know what dogs are. They cannot create “dog” concepts. A two-year-old girl, in that sense, is far smarter. She can walk into that same kennel, see a Great Dane standing next to a toy poodle, and say, “Doggies!” Between seeing them and naming them, she has already developed a concept of them. It’s remarkably subtle, and she might be hard-pressed, even as she grows older, to explain exactly how she figured out that other animals, like groundhogs or goats, didn’t fit in the dog family. She just knew. Philosophers as far back as Plato have understood that concepts are essential to human thought and human society. And concepts stretch far beyond the kennel. Time, friendship, fairness, work, play, love, cruelty, peace—these are more than words. Until computers can grasp them, they will remain stunted.
The concept generator in the brain, Hassabis believes, is the hippocampal-neocortical consolidation system. It has long been known that the hippocampus sifts through traces of memories, or episodes. But studies in recent years in Belgium and at MIT have probed the mechanisms involved. When rats follow a trail of food through a maze, it triggers a sequence of neurons firing in their hippocampus. Later, during the deep stage of sleep known as slow-wave sleep, that same sequence is replayed repeatedly, backward and forward—and at speeds twenty times faster than the original experience. Experiments on humans reveal similar patterns.
This additional speed, Hassabis believes, is critical to choosing memories and, perhaps, refining them into concepts. “This gives the high-level neocortex a tremendous number of samples to learn from,” he says, “even if you experience that one important thing only once. Salient memories are biased to be replayed more often.”
It isn’t only that brains focus on the important stuff during dreams—a tear-filled discussion about marriage, a tense showdown with the boss. It’s that they’re able to race through these scenes again and again and again. It’s as if TV news editors got hold of the seventeen hours of each person’s waking life, promptly threw out all the boring material, and repeated the highlights ad nauseam. This isn’t unlike what many experienced in the days after September 11, 2001, when the same footage of jets flying into skyscrapers was aired repeatedly. And if those images now bring to mind certain concepts, such as terrorism, perhaps it’s because the hippocampus, on those late summer nights of 2001, was carrying on additional screenings, racing through them at twenty times the speed, and searing them into our long-term memories.
Even if Hassabis is right about the storage of memories and the development of concepts, transferring this process to computers is sure to be a challenge. He and his fellow researchers in London have to distill the brain’s editing process into algorithms. They will instruct computers to select salient lessons, lead the machines to replay those lessons repeatedly, and—it’s hoped—learn from them and use
them to develop concepts. This is only one of many efforts to produce a cognitive leap in computing. None of them promises rapid results. The technical obstacles are daunting, and they require the very brand of magic—breakthrough ideas—that scientists are hoping to pass on to computers. Hassabis predicts that the process will take five years. “I think we’ll have something by then,” he said.
8. A Season of Jitters
FROM THE VERY FIRST meeting at Culver City, back in the spring of 2007, through all the discussions about a man-machine Jeopardy showdown, one technical issue weighed on Jeopardy executives above all others: Watson’s blazing speed to the buzzer. In a game featuring information hounds who knew most of the answers, the race to the signaling device was crucial. Ken Jennings had proven as much. He wouldn’t have had a chance to show off his lightning-fast mind without the support of his equally prodigious thumb. To Harry Friedman and his associate, the producer Rocky Schmidt, it didn’t seem fair that the machine could buzz without pressing a button. They looked at it, naturally enough, from a human perspective. Precious milliseconds ticked by as the command to buzz made its way from the player’s brain through the network of neurons down the arm and to the hand. At that point, if you watched the process in super slow motion, the button would appear to sink into the flesh of the thumb until—finally—the pressure triggered an electronic pulse, identical to Watson’s, asking for the chance to respond to the clue. In this aspect of the game, humans were dragged down by the physical world. It was as if they were fiddling with fax machines while Watson sent e-mails. So in a contentious conference call one morning in March 2010, the Jeopardy contingent laid down the law: To play against humans, Watson would also have to press the button. The computer would need a finger.
Later that day, a visibly perturbed David Ferrucci arrived late for lunch at an Italian restaurant, Il Tramonto, just down the hill from the Hawthorne labs. He joined Watson’s hardware chief, Eddie Epstein, and J. Michael Loughran, the press officer who had played a major role in negotiating the Jeopardy challenge. Ferrucci insisted that he understood the logic behind the demand for a new appendage. And he knew that if his machine benefited from what appeared to be an unfair advantage, any victory would be tainted. What bugged him was that the Jeopardy team could shift the terms of the match as they saw fit, and at such a late hour.
Where would it stop? If IBM’s engineers fashioned a mechanical finger that worked at ten times the speed of a human digit, would Jeopardy ask them to slow it down? Ferrucci didn’t think so. But it was a concern. “There are deep philosophical issues in all of this,” he said. “They’re getting in there and deciding to graft human limitations onto the machine in order to balance things.”
While the two companies shared the same broad goals, they addressed different constituencies and had different jewels to protect. If Harry Friedman and company focused first on great entertainment, Ferrucci worried, they might tinker with the rules through the rest of the year, making adjustments as one side or the other, either human or machine, appeared to gain a decisive edge. In that case, the scientific basis of the Jeopardy challenge would be out the window. Science demanded consistent, verifiable data, all of it produced under rigorous and unchanging conditions. For IBM researchers to publish academic papers on Watson as a specimen of Q-A, they would need such data. For Ferrucci’s team, building the machine was, by itself, a career-making opportunity. But creating the scientific record around it justified the effort among their peers. This was no small consideration for a team of Ph.D.s, especially on a project whose promotional pizzazz raised suspicion, and even resentment, in the computer science community.
In these early months of 2010, tension between the two companies, and between the dictates of entertainment and those of science, was ratcheting up. As the Jeopardy challenge started its stretch run, IBM and Jeopardy entered a period of fears and jitters, marked by sudden shifts in strategy, impasses, and a rising level of apprehension.
In this unusual marriage of convenience, such friction was to be expected, and it was only normal that it would be coming to the surface at this late juncture. For two years, both Jeopardy and IBM had put aside many of the most contentious issues. Why bother hammering out the hard stuff—the details and conditions of the match and the surrounding media storm—when it was no sure thing that an IBM machine would ever be ready to play the game?
That was then. Now the computer in question, the speedy version of Watson, was up the road in Yorktown thrashing humans on a weekly basis. The day before the finger conversation, it had won four of six matches and put up a good fight in the other two. Watson, while still laughably oblivious in entire categories, was emerging as a viable player. The match, which had long seemed speculative, was developing momentum. A long-gestating cover story on the machine in the New York Times Magazine would be out in the next month or so. Watson’s turn on television was going to take place unless someone called a halt. IBM certainly wasn’t about to. But Jeopardy was another matter. Jeopardy’s executives now had to consider how the game might play on TV. They had to envision worst-case scenarios and what impact they might have on their golden franchise. As they saw it, they had to take steps to protect themselves. Adding the finger was just one example. It wasn’t likely to be the last.
Ferrucci ordered chicken escarole soup and a salmon panini. He had the finger on his mind. “So, they come in and say, ‘You know, we don’t like how you’re buzzing. We’re going to give you a human hand,’” he said. “This is like going to Brad Rutter or Ken Jennings and saying, ‘We’re going to cut your hands off and give you someone else’s hands.’ That guy’s going to have to retrain. It’s a whole new game, because now you’re going to have to be a different player. We’ve got to retune everything. Everything changes. You want to give me another nine months? You give me nine months at this stage and … I don’t know if I have the stomach.”
From Ferrucci’s perspective, the match was intriguing precisely because the contestants were different. Each side had its own strengths. The computer could rearrange numbers and letters in certain puzzle clues with astonishing speed. The human understood jokes. The computer flipped through millions of possibilities in a second; the human, with a native grasp of language, didn’t need to. Trying to bring them into synch with each other would be impossible. What’s more, he suspected that any handicapping would target only one of the parties: his machine. Just imagine, he said, laughing, if they decided that the humans had an unfair advantage in language. “They could give them the clues in ones and zeros!”
Nonetheless, the Jeopardy crew seemed intent on balancing the two sides. Another buzzer issue had come up earlier in the month. To keep players from buzzing too quickly, before the light came on, Jeopardy had long imposed a quarter-second penalty on early buzzers. The player’s buzzer remained locked out during that period—a relative eternity in Jeopardy—giving other, more patient rivals first crack at the clue. But Watson, whose response was activated by the light, never fell into that trap. Its entire Jeopardy existence was engineered to be penalty-free. So shouldn’t Jeopardy remove the penalty for the human players as well?
For Ferrucci, this change spelled potential disaster. Humans could already beat Watson to the buzzer by anticipating the light, he said. Jennings was a master at it, and plenty of humans in sparring sessions had proven that Watson, while fast, was beatable. The electrical journey from brain to finger took humans two hundred milliseconds, about ten times as long as it took Watson. But by anticipating, many humans in the sparring sessions had buzzed within a few milliseconds of the light. Greg Lindsay had demonstrated the technique in the three consecutive sparring sessions he’d won. If Jeopardy lifted the quarter-second penalty, humans could buzz repeatedly as early as they wanted while Watson waited for the light to come on. Picture a street corner in Manhattan where one tourist waits obediently for the traffic light to change while masses of New Yorkers blithely jaywalk, barely looking left or right. In a Jeopardy game without a
penalty for early buzzing, Watson might similarly find itself waiting at the corner—and lose every buzz.
The IBM researchers could, of course, teach Watson to anticipate the light as well. But it would be a monumental task. It might require outfitting Watson with ears. Then they’d have to study the patterns of Alex Trebek’s voice, the time it took him to read clues of differing lengths, the average gap in milliseconds between his last syllable and the activation of the light. It would require the efforts of an entire team and exhaustive testing during the remaining sparring sessions, made more difficult because Trebek, raised in Canada, had different voice patterns than his IBM fill-in, Todd Crain, from Illinois. It would amount to an entire research project—which would likely be useless to IBM outside the narrow confines of a specific game show. Ferrucci wouldn’t even consider it.
Loughran thought Ferrucci and Friedman could iron out many of these points with a one-on-one conversation. “Why don’t you pick up the phone and call Harry?” he said. “You negotiate. If they get the finger, you get rid of the anticipatory buzzing.”
Ferrucci shrugged. His worries ran deeper than the finger and the buzzer. He was far more concerned about the clues Watson would face. Unlike chess, Jeopardy was a game created, week by week, by humans. A team of ten writers formulated the clues and the categories. If they knew that their clues would be used in the man-machine match, mightn’t they be tempted, perhaps unconsciously, to test the machine? “As soon as you create a situation in which the human writer, the person casting the questions, knows there’s a computer behind the curtain, it’s all over. It’s not Jeopardy anymore,” Ferrucci said. Instead of a game for humans in which a computer participates, it’s a test of the computer’s mastery of human skills. Would a pun trip up the computer? How about a phrase in French? “Then it’s a Turing test,” he said. “We’re not doing the Turing test!”