Machines of Loving Grace
Now, just a little more than a year later, Thrun was behind the wheel of a second-generation robot contestant. It felt like the future had arrived sooner than expected. It took only a dozen miles, however, to realize that techno-enthusiasm is frequently premature. Stanley crested a rise in the desert and plunged smartly into a swale. Then, as the car tilted upward, its laser guidance system swept across an overhanging tree limb. Without warning, the robot navigator spooked; the car wrenched violently left, then right, and shot off the road. It all happened faster than Thrun could reach over and pound the large red E-Stop button.
Luckily, the car found a relatively soft landing. The Touareg had been caught by an immense desert thornbush just off the road, which cushioned the impact; the car stopped gently enough that the air bags didn't deploy. When the occupants surveyed the road from the crash scene, it was obvious that the outcome could have been much worse: two imposing piles of boulders bracketed the bush, and the VW had missed them both.
The passengers stumbled out, and Thrun scrambled up on top of the vehicle to reposition the sensors knocked out of alignment by the crash. Then everyone piled back into Stanley, and Montemerlo removed the offending block of software code, which had been intended to make the ride more comfortable for human passengers. Thrun restarted the autopilot and the machine once again headed out into the Arizona desert. There were other mishaps that day, too. The AI controller had no notion of the consequences of mud puddles, and later in the day Stanley found itself ensnared in a small lake in the middle of the road. Fortunately, several human-driven support vehicles were nearby, and when the car's wheels began spinning helplessly, the team piled out and pushed the car out of the goo.
These were small setbacks for Thrun's team, a group of Stanford University professors, VW engineers, and student hackers, one of more than a dozen teams competing for a multimillion-dollar cash prize. The day was a low point after which things improved dramatically. Indeed, the DARPA contest would later prove to be a dividing line between a world in which robots were viewed as toys or research curiosities and one in which people began to accept that robots could move about freely.
Stanley's test drive was a harbinger of technology to come. The arrival of machine intelligence had been forecast for decades by science-fiction writers, so much so that when the technology actually began to appear, it seemed anticlimactic. In the late 1980s, anyone wandering through the cavernous Grand Central Station in Manhattan would have noticed that almost a third of the morning commuters were wearing Sony Walkman headsets. Today, of course, the Walkmans have been replaced by Apple's iconic bright white earbuds, and there are some who believe that technology haute couture will inevitably lead to a future version of Google Glass—the search engine maker's first effort to augment reality—or perhaps to more ambitious and immersive systems. Like the frog in the pot, we have been desensitized to the changes wrought by the rapid advance and proliferation of information technology.
The Walkman, the iPhone, and Google Glass all prefigure a world where the line between who is human and what is machine begins to blur. William Gibson's Neuromancer, the science-fiction novel that popularized the idea of cyberspace, drew a portrait of a new cybernetic territory composed of computers and networks. It also painted a future in which computers were no longer discrete boxes but were woven together into a dense fabric increasingly wrapped around human beings, "augmenting" their senses.
It is not such a big leap to move from the early-morning commuters wearing Sony Walkman headsets, past the iPhone users wrapped in their personal sound bubbles, directly to Google Glass–wearing urban hipsters watching tiny displays that annotate the world around them. They aren’t yet “jacked into the net,” as Gibson foresaw, but it is easy to assume that computing and communication technology is moving rapidly in that direction.
Gibson was early to offer a science-fiction vision of what has been called “intelligence augmentation.” He imagined computerized inserts he called “microsofts”—with a lowercase m—that could be snapped into the base of the human skull to instantly add a particular skill—like a new language. At the time—several decades ago—it was obviously an impossible bit of science fiction. Today his cyborg vision is something less of a wild leap.
In 2013 President Obama unveiled the BRAIN Initiative, an effort to simultaneously record the activity of one million neurons in the human brain. One of the major funders of the BRAIN Initiative is DARPA, however, and the agency is not interested in just reading from the brain. BRAIN scientists will patiently explain that one of the goals of the plan is to build a two-way interface between the human brain and computers. On its face, such an idea seems impossibly sinister, conjuring up images of the ultimate Big Brother and thought control. At the same time there is a utopian implication inherent in the technology. Such a future is perhaps the inevitable trajectory of human-computer interaction design, implicit in J. C. R. Licklider's 1960 manifesto, "Man-Computer Symbiosis," in which he foretold a more intimate collaboration between humans and machines.
While the world of Neuromancer was wonderful science fiction, actually entering the world that Gibson portrayed presents a puzzle. For one thing, the arrival of cyborgs poses the question of what it means to be human. By itself that isn't a new challenge. While technology may be evolving ever more rapidly today, humans have always been transformed by technology, as far back as the domestication of fire or the invention of the wheel (or its eventual application to luggage in the twentieth century). Since the beginning of the industrial era, machines have displaced human labor. Now, with the arrival of computing and computer networks, machines are for the first time displacing "intellectual" labor. The invention of the computer generated an earlier debate over the consequences of intelligent machines. The new wave of artificial intelligence technologies has revived that debate with a vengeance.
Mainstream economists have maintained that over time the size of the workforce has continued to grow despite the changing nature of work driven by technology and innovation. In the nineteenth century, more than half of all workers were engaged in agricultural labor; today that share has fallen to around 2 percent—and yet more people than ever are working in occupations outside of agriculture. Indeed, even with two recessions, the overall workforce in the United States increased by 21 percent between 1990 and 2010. If the mainstream economists are correct, no society-wide economic cataclysm from automation is in the offing.
Today, however, we are entering an era in which humans can, with growing ease, be designed in or out of "the loop," even in high-status, high-income, white-collar professions. On one end of the spectrum, smart robots can load and unload trucks. On the other end, software "robots" are replacing call center workers and office clerks, as well as transforming high-skill, high-status professions such as radiology. In the future, how will the line be drawn between man and machine, and who will draw it?
Despite the growing debate over the consequences of the next generation of automation, there has been very little discussion about the designers and their values. When pressed, computer scientists, roboticists, and technologists offer conflicting views. Some want to replace humans with machines; some are resigned to the inevitability—"I, for one, welcome our new insect overlords" (later "robot overlords") became a meme popularized by The Simpsons—and some just as passionately want to build machines that extend the reach of humans. The question of whether true artificial intelligence—the concept known as "Strong AI" or Artificial General Intelligence—will emerge, and whether machines can do more than mimic humans, has also been debated for decades. Today there is a growing chorus of scientists and technologists raising new alarms about the possible emergence of self-aware machines and its consequences. Discussions about the state of AI technology today often veer into the realm of science fiction, or perhaps religion. However, the reality of machine autonomy is no longer merely a philosophical or hypothetical question. We have reached the point where machines are capable of performing many human tasks that require intelligence as well as muscle: they can do factory work, drive vehicles, diagnose illnesses, and understand documents, and they can certainly control weapons and kill with deadly accuracy.
The AI versus IA dichotomy is nowhere clearer than in a new generation of weapons systems now on the horizon. Developers at DARPA are about to cross a new technological threshold with a replacement for today's cruise missiles, the Long Range Anti-Ship Missile, or LRASM. Developed for the Navy, it is scheduled to join the U.S. fleet in 2018. Unlike its predecessors, this weapon has the ability to make targeting decisions autonomously. The LRASM is designed to fly toward an enemy fleet while out of contact with its human controllers and then use artificial intelligence technologies to decide which target to kill.
The new ethical dilemma is whether humans will allow their weapons to pull the trigger on their own, without human oversight. Variations of that same challenge are inherent in the rapid computerization of the automobile, and indeed transportation in general is emblematic of the consequences of the new wave of smart machines. Artificial intelligence is poised to have an impact on society greater than the one personal computing and the Internet have had since the 1990s. Significantly, the transformation is being shepherded by a group of elite technologists.
Several years ago Jerry Kaplan, a Silicon Valley veteran who began his career as a Stanford artificial intelligence researcher and then walked away from the field during the 1980s, warned a group of Stanford computer scientists and graduate student researchers: "Your actions today, right here in the Artificial Intelligence Lab, as embodied in the systems you create, may determine how society deals with this issue." The imminent arrival of the next generation of AI is a crucial ethical challenge, he contended: "We're in danger of incubating robotic life at the expense of our own life."1 The dichotomy he sketched out for the researchers was the gap between intelligent machines that displace humans and human-centered computing systems that extend human capabilities.
Like many technologists in Silicon Valley, Kaplan believes we are on the brink of creating an entire economy that runs largely without human intervention. That may sound apocalyptic, but the future Kaplan describes will almost certainly arrive. His deeper point is that today's technology acceleration isn't arriving blindly. The engineers who are designing our future are each—individually—making choices.
On an abandoned military base in the California desert during the fall of 2007, a short, heavyset man holding a checkered flag stepped out onto a dusty makeshift racing track and waved it energetically as a Chevrolet Tahoe SUV glided past at a leisurely pace. The flag waver was Tony Tether, the director of DARPA.
There was no driver behind the wheel of the vehicle, which sported a large GM decal. Closer examination revealed no passengers in the car, and none of the other cars in the "race" had drivers or passengers either. To anyone watching the cars glide seemingly endlessly through a makeshift town previously used to train troops for urban combat, it didn't seem to be a race at all. It felt more like an afternoon of stop-and-go Sunday traffic in a science-fiction movie like Blade Runner.
Indeed, by almost any standard it was an odd event. The DARPA Urban Challenge pitted teams of roboticists, artificial intelligence researchers, students, automotive engineers, and software hackers against one another in an effort to design and build robot vehicles capable of driving autonomously in urban traffic. The event was the third in the series of contests that Tether organized. At the time, military technology largely amplified a soldier's killing power rather than replacing the soldier. Robotic military planes were still flown remotely by humans and, in some cases, by extraordinarily large teams of soldiers. A report by the Defense Science Board in 2012 noted that for many military operations it might take a team of several hundred personnel to fly a single drone mission.2
Unmanned ground vehicles were a more complicated challenge. The problem, as one DARPA manager would put it, was that "the ground was hard"—"hard" as in "hard to drive on," rather than as in "rock." Following a road is challenging enough, but robot car designers are confronted with an endless array of special cases: driving at night, driving into the sun, driving in rain, driving on ice—the list goes on indefinitely.
Consider the problem of designing a machine that knows how to react to something as simple as a plastic bag in a lane on the highway. Is the bag hard, or is it soft? Will it damage the vehicle? In a war zone, it might be an improvised explosive device. Humans can see and react to such challenges seemingly without effort, at least when driving at low speed with good visibility. For AI researchers, however, solving that problem remains a holy grail of computer vision. It became one of myriad similar challenges that DARPA set out to solve in creating the autonomous vehicle Grand Challenge events. In the 1980s roboticists in both Germany and the United States had made scattered progress toward autonomous driving, but the reality was that it was easier to build a robot to go to the moon than to build one that could drive itself through rush-hour traffic. And so Tony Tether took up the challenge. The endeavor was risky: if the contests failed to produce results, the Grand Challenge series would become known as Tether's Folly. Thus the checkered flag at the final race proved to be as much a victory lap for Tether as for the cars.
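To make the plastic-bag problem concrete, here is a minimal, purely illustrative sketch of the kind of reasoning such a machine must encode. It is not drawn from Stanley's actual software; the object classes, probabilities, and cost figures are invented. The point is that a perception system reports only a belief about what the object might be, and the planner must weigh the consequences of ignoring, braking, or swerving under that uncertainty.

```python
# Illustrative sketch only: a toy, cost-based decision about an unidentified
# object in the lane. All object classes, probabilities, and costs are
# hypothetical and chosen purely to show the shape of the problem.

ACTIONS = ["ignore", "brake", "swerve"]

# A perception system rarely delivers a certain label; it delivers a belief.
# Here the (invented) vision stack thinks the object is probably a plastic
# bag but cannot rule out a rock or a pothole.
belief = {"plastic_bag": 0.7, "rock": 0.2, "pothole": 0.1}

# Hypothetical cost of each action given what the object really is
# (higher = worse outcome for the vehicle and its passengers).
cost = {
    ("ignore", "plastic_bag"): 0,  ("ignore", "rock"): 100, ("ignore", "pothole"): 40,
    ("brake", "plastic_bag"): 5,   ("brake", "rock"): 10,   ("brake", "pothole"): 10,
    ("swerve", "plastic_bag"): 8,  ("swerve", "rock"): 8,   ("swerve", "pothole"): 8,
}

def expected_cost(action, belief):
    """Average the cost of an action over everything the object might be."""
    return sum(p * cost[(action, obj)] for obj, p in belief.items())

if __name__ == "__main__":
    for action in ACTIONS:
        print(f"{action:>6}: expected cost {expected_cost(action, belief):.1f}")
    best = min(ACTIONS, key=lambda a: expected_cost(a, belief))
    print("chosen action:", best)
```

The arithmetic here is trivial; the hard part, as the passage suggests, is producing a trustworthy belief about the object in the first place, which is why the perception problem is described as a holy grail of computer vision.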
There had been darker times. Under Tether's directorship the agency hired Admiral John Poindexter to build the system known as Total Information Awareness, a vast data-mining project intended to hunt terrorists online by collecting and connecting the dots in oceans of credit card, email, and phone records. The project set off a privacy firestorm and was soon canceled by Congress in 2003. Although Total Information Awareness vanished from public view, it in fact moved into the nation's intelligence bureaucracy, only to become visible again in 2013 when Edward Snowden leaked hundreds of thousands of documents revealing a deep and broad range of systems for surveilling virtually any activity that might be of interest. In the pantheon of DARPA directors, Tether was also something of an odd duck. He survived the Total Information Awareness scandal and pushed the agency ahead in other areas, maintaining a deep and controlling involvement in all of the agency's research projects. (Indeed, Tether's decision to wave the checkered flag himself was emblematic of his tenure at DARPA—Tony Tether was a micromanager.)
DARPA was founded in response to the Soviet launch of Sputnik, which came as a thunderbolt to an America that believed in its own technological supremacy. With the explicit mission of ensuring the United States was never again technologically superseded by another power, the directors of DARPA—at its birth more simply named the Advanced Research Projects Agency—had been scientists and engineers willing to place huge bets on blue-sky technologies, maintaining close relationships with, and a real affection for, the nation's best university researchers.
Not so with Tony Tether, who represented the George W. Bush era. He had worked for decades as a program manager for secretive military contractors and, like many in the president's circle, was wary of the nation's academic institutions, which he thought were too independent to be trusted with the new mission. Small wonder: Tether's worldview had been formed when he was an electrical engineering graduate student at Stanford University during the 1960s, where there was a sharp division between the antiwar students and the scientists and engineers helping the Vietnam War effort by designing advanced weapons.
After arriving as director, he went to work changing the culture of an agency that had gained a legendary reputation for helping to invent everything from the Internet to stealth fighter technology. He rapidly moved money away from the universities and toward classified work done by military contractors supporting the twin wars in Iraq and Afghanistan. The agency moved away from "blue sky" research toward "deliverables." Publicly, Tether made the case that it was still possible to innovate in secret, as long as you fostered the competitive culture of Silicon Valley, with its turmoil of new ideas and its rewards for good attempts even when they failed.
And Tether certainly took DARPA in new technological directions. His concern for the thousands of maimed veterans coming back without limbs, and his interest in increasing the power and effectiveness of military decision-makers, inspired him to push agency dollars into human augmentation projects as well as artificial intelligence. That meant robotic arms and legs for wounded soldiers, and an "admiral's advisor," a military version of what Doug Engelbart had set out to do in the 1960s with his vision of intelligence augmentation, or IA. The project was referred to as PAL, for Personalized Assistant that Learns, and much of the research would be done at SRI International, which dubbed its project CALO, or Cognitive Assistant that Learns and Organizes.
It was ironic that Tether returned to the research agenda originally promoted during the mid-1960s by two visionary DARPA program managers, Robert Taylor and J. C. R. Licklider. It was also bittersweet, although few mentioned it, that despite Doug Engelbart's tremendous early success, his project had faltered and fallen out of favor at SRI during the 1970s. He ended up being shuffled off to a time-sharing company for commercialization, where his project sat relatively unnoticed and underfunded for more than a decade. The renewed DARPA investment would touch off a wave of commercial innovation—CALO would lead most significantly to Apple's Siri personal assistant, a direct descendant of the augmentation approach originally pioneered by Engelbart.