It is undeniable that AI and machine-learning algorithms have already had world-transforming applications in areas as diverse as science, manufacturing, and entertainment. Examples range from the machine vision and pattern recognition essential to improving quality in semiconductor design, and the so-called rational drug discovery algorithms that systematize the creation of new pharmaceuticals, to government surveillance and social media companies whose business model is invading privacy for profit. The optimists hope that potential abuses will be minimized if the applications remain human-focused rather than algorithm-centric. The reality is that, until now, Silicon Valley has not had a track record that is morally superior to that of earlier industries. It will be truly remarkable if any Silicon Valley company actually rejects a profitable technology for ethical reasons.
Setting aside the philosophical discussion about self-aware machines, and in spite of Gordon’s pessimism about productivity increases, it is clearly becoming increasingly possible and “rational” to design humans out of systems for both performance and cost reasons. Google, which can alternatively be seen as either an IA or AI company, seems to be engaged in an internal tug-of-war over this dichotomy. The original PageRank algorithm that the company is based on can perhaps be construed as the most powerful example in the history of human augmentation. The algorithm systematically mined human decisions about the value of information and pooled and ranked those decisions to prioritize Web search results. While some have chosen to criticize this as a systematic way to siphon intellectual value from vast numbers of unwitting humans, there is clearly an unstated social contract between user and company. Google mines the wealth of human knowledge and returns it to society, albeit with a monetization “catch.” The Google search dialog box has become the world’s most powerful information monopoly.
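To make that mechanism concrete, the sketch below shows the core of the PageRank idea in simplified form: every hyperlink is treated as a human judgment about a page's value, and those judgments are pooled and iteratively re-weighted so that endorsements from highly ranked pages count for more. This is an illustrative reconstruction rather than Google's production algorithm; the damping factor, iteration count, and the tiny example graph are assumptions chosen only for demonstration.

```python
# Illustrative PageRank sketch (not Google's production code): each link is
# treated as a "vote" for the page it points to, and votes from pages that
# are themselves highly ranked carry more weight.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}                # start from a uniform guess
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                          # dangling page: share its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share         # pass weight along each outgoing vote
        rank = new_rank
    return rank

# Hypothetical three-page web: A and C both link to B, so B ends up ranked highest.
print(pagerank({"A": ["B"], "B": ["C"], "C": ["A", "B"]}))
```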
Since then, however, Google has yo-yoed back and forth in designing both IA and AI applications and services, whichever works best to solve the problem at hand. For example, for all of the controversy surrounding it, the Google Glass reality augmentation system clearly has the potential to be what the name promises—a human augmentation tool—while the Google car project represents the pros and cons of a pure AI system replacing human agency and intelligence with a machine. Indeed, Google as a company has become a de facto experiment in the societal consequences of AI-based technologies deployed on a massive scale. In a 2014 speech to a group of NASA scientists, Peter Norvig, Google's director of research, was clear that the only reasonable solution to AI advances would lie in designing systems in which humans partner with intelligent machines. It was a powerful declaration of intent about the need to converge the separate AI and IA communities.
Given the current rush to build automated factories, such a convergence seems unlikely on a broad societal basis. However, the dark fears that have surfaced recently about job-killing manufacturing robots are perhaps likely to soon be supplanted by a more balanced view of our relationship with machines beyond the workplace. Consider Terry Gou, the chief executive of Foxconn, one of the largest manufacturers in China and maker of the Apple iPhone. The company had already endured global controversy over labor conditions in its factories when, at the beginning of 2012, Gou declared that Foxconn was planning a significant commitment to robots to replace his workers. "As human beings are also animals, to manage one million animals gives me a headache," he said during a business meeting.52
Although the statement drew global attention, his vision of a factory without workers is only one of the ways in which robotics will transform society in the next decade. While job displacement is currently seen as a bleak outcome for humanity, other forces now at play will reshape our relations with robots in more positive ways. The specter of disruption driven by technological unemployment in China, for example, could conceivably be even more dramatic than that in the United States. As China has industrialized over the past two decades, significant parts of its rural population have urbanized. How will China adapt to lights-out consumer electronics manufacturing?
Probably with ease, as it turns out. The Chinese population is aging dramatically, fast enough that the country will soon be under significant pressure to automate its manufacturing industries. As a consequence of China's one-child policy, governmental decisions made in the late 1970s and early 1980s have now resulted in a rapidly growing elderly population. By 2050, China will have the largest number of people over 80 years old in the world: 90 million elderly Chinese, compared with 32 million in the United States.53
Europe is also aging quickly. According to European Commission data, in 2050 there will be only two people of working age in Europe for each person over 65 (down from four today), and an estimated 84 million people with age-related health problems.54 The European Union views the demographic shift as a significant one and projects the emergence of a $17.6 billion market for elder-care robots in Europe by as early as 2016. The United States faces an aging scenario that is in many ways similar to, although not as extreme as, those of Asian and European societies. Despite the fact that the United States is aging more slowly than some other countries—in part because of continuing significant immigration—the "dependency ratio" will continue to rise: the number of children and elderly per 100 working-age adults will climb from 59 in 2005 to 72 in 2050.55 Baby boomers in the United States are now turning 65 and retiring at the rate of roughly 10,000 each day, a pace that will continue for the next nineteen years.56
How will the world's industrial societies care for their aging populations? An aging world will dramatically transform the conversation about robotics during the next decade from fears about automation to new hope for augmentation. Robot & Frank, an amusing, thoughtful, and possibly prophetic 2012 film set in the near future, depicts the relationship between a retired ex-convict in the first stages of dementia and his robot caregiver. How ironic it would be if caregiving robots like Frank's were to arrive just in time to provide a technological safety net for the world's previously displaced, now elderly population.
4 | THE RISE, FALL, AND RESURRECTION OF AI
Sitting among musty boxes in an archive at Stanford University in the fall of 2010, David Brock felt his heart stop. A detail-oriented historian specializing in the semiconductor industry, Brock was painstakingly poring over the papers of William Shockley for his research project on the life of Intel Corp. cofounder Gordon Moore. After leading the team that coinvented the transistor at Bell Labs, Shockley had moved back to Santa Clara County in 1955, founding a start-up company to make a new type of more manufacturable transistor. What had been lost, until Brock found it hidden among Shockley’s papers, was a bold proposal the scientist had made in an effort to persuade Bell Labs, in 1951 the nation’s premier scientific research institution, to build an “automatic trainable robot.”
For decades there have been heated debates about what led to the creation of Silicon Valley, and one of the breezier explanations is that Shockley, who had grown up near downtown Palo Alto, decided to return to the region that was once the nation's fruit capital because his mother was then in ill health. He located Shockley Semiconductor Laboratory on San Antonio Road in Mountain View, just south of Palo Alto and across the freeway from where Google's sprawling corporate campus is today. Moore was one of the first employees at the fledgling transistor company and would later become a member of the "traitorous eight," the group of engineers who, because of Shockley's tyrannical management style, defected from his start-up to found a competing firm. The defection is part of the Valley's most sacred lore, an example of the intellectual and technical freedom that would make the region an entrepreneurial hotbed unlike anything the world had previously seen. Many have long believed that Shockley's decision to locate his transistor company in Mountain View was the spark that ignited Silicon Valley. However, it is more interesting to ask what Shockley was trying to accomplish. He has long been viewed as an early entrepreneur, fatally flawed as a manager, and his entrepreneurial passion has served as a model for generations of technologists. But that was only part of the explanation.
Brock sat in the Stanford archives staring at a yellowing single-page proposal titled the “A.T.R. Project.” Shockley, true to his temper, didn’t mince words: “The importance of the project described below is probably greater than any previously considered by the Bell System,” he began. “The foundation of the largest industry ever to exist may well be built upon its development. It is possible that the progress achieved by industry in the next two or three decades will be directly dependent upon the vigor with which projects of this class are developed.” The purpose of the project was, bluntly, “the substitution of machines for men in production.” Robots were necessary because generalized automation systems lacked both the dexterity and the perception of human workers. “Such mechanization will achieve the ultimate conceivable economy on very long runs but will be impractical on short runs,” he wrote. Moreover, his original vision was not just about creating an “automatic factory,” but a trainable robot that could be “readily modified to perform any one of a wide variety of operations.” His machine would be composed of “hands,” “sensory organs,” “memory,” and a “brain.”1
Shockley's inspiration for a humanlike factory robot was the observation that assembly work often consists of a myriad of constantly changing, unique motions performed by a skilled human worker, and that such a robot would be the breakthrough needed to completely replace human labor. His insight was striking because it came at the very dawn of the computer age, before the impact of the technology had been grasped by most of the pioneering engineers. At the time it was only half a decade since ENIAC, the first general-purpose digital computer, had been heralded in the popular press as a "giant brain," and just two years after Norbert Wiener had written his landmark Cybernetics, announcing the opening of the Information Age.
Shockley's initial insight presaged the course that automation would take decades later. For example, Kiva Systems, a warehouse automation company acquired by Amazon in 2012 for $775 million, recognized that the most difficult functions to automate in the modern warehouse were those that required human eyes and hands, like identifying and grasping objects. Without perception and dexterity, robotic systems are limited to the most repetitive jobs, and so Kiva took the obvious intermediate step and built mobile robots that carried items to stationary human workers. Once machine perception and robotic hands become better and cheaper, humans could disappear entirely.
Indeed, Amazon made an exception to its usual policy of secrecy and invited the press to tour one of its distribution facilities in Tracy, California, during the Christmas buying season in December of 2014. What those on the press tour did not see was an experimental station inside the facility where a robot arm performed the "piece pick" operations—the work now reserved for humans. Amazon is experimenting with a Danish robot arm designed to take over these remaining human tasks.
In the middle of the last century, while Shockley expressed no moral qualms about using trainable robots to displace humans, Wiener saw a potential calamity. Two years after writing Cybernetics he wrote The Human Use of Human Beings, an effort to assess the consequences of a world full of increasingly intelligent machines. Despite his reservations, Wiener had been instrumental in incubating what Brock describes as an “automation movement” during the 1950s.2 He traces the start of what would become a national obsession with automation to February 2, 1955, when Wiener and Gordon Brown, the chair of the MIT electrical engineering department, spoke to an evening panel in New York City attended by five hundred members of the MIT Alumni Association on the topic of “Automation: What is it?”
On the same night, on the other side of the country, electronics entrepreneur Arnold O. Beckman chaired a banquet honoring Shockley alongside Lee de Forest, inventor of the triode, a fundamental vacuum tube. At the event Beckman and Shockley discovered they were both “automation enthusiasts.”3 Beckman had already begun to refashion Beckman Instruments around automation in the chemical industries, and at the end of the evening Shockley agreed to send Beckman a copy of his newly issued patent for an electro-optical eye. That conversation led to Beckman funding Shockley Semiconductor Laboratory as a Beckman Instruments subsidiary, but passing on the opportunity to purchase Shockley’s robotic eye. Shockley had written his proposal to replace workers with robots amid the nation’s original debate over “automation,” a term popularized by John Diebold in his 1952 book Automation: The Advent of the Automatic Factory.
Shockley's prescience was so striking that when Rodney Brooks, himself a pioneering roboticist at the Stanford Artificial Intelligence Laboratory in the 1970s, read Brock's article in IEEE Spectrum in 2013, he passed Shockley's original 1951 memo around his company, Rethink Robotics, and asked his employees to guess when it had been written. No one came close. The memo predates Rethink's Baxter robot, introduced in the fall of 2012, by more than half a century. Yet Baxter is almost exactly what Shockley proposed in the 1950s—a trainable robot with an expressive "face" on an LCD screen, "hands," "sensory organs," "memory," and, of course, a "brain."
The philosophical difference between Shockley and Brooks is that Brooks’s intent has been for Baxter to cooperate with human workers rather than replace them, taking over dull, repetitive tasks in a factory and leaving more creative work for humans. Shockley’s original memo demonstrates that Silicon Valley had its roots in the fundamental paradox that technology both augments and dispenses with humans. Today the paradox remains sharper than ever. Those who design the systems that increasingly reshape and define the Information Age are making choices to build humans in or out of the future.
Silicon Valley’s hidden history presages Google’s more recent “moon shot” effort to build mobile robots. During 2013 Google quietly acquired many of the world’s best roboticists in an effort to build a business claiming leadership in the next wave of automation. Like the secretive Google car project, the outlines of Google’s mobile robot business have remained murky. It is still unclear whether Google as a company will end up mostly augmenting or replacing humans, but today the company is dramatically echoing Shockley’s six-decade-old trainable robot ambition.
The dichotomy between AI and IA had been clear for many years to Andy Rubin, a robotics engineer who had worked for a wide range of Silicon Valley companies before coming to Google to build the company's smartphone business in 2005. In 2013 Rubin left his post as head of the company's Android phone business and began quietly acquiring some of the best robotics companies and technologists in the world. He found a new home for the business on California Avenue, on the edge of Stanford Industrial Park just half a block from the original Xerox PARC laboratory where the Alto, the first modern personal computer, was designed. Rubin's building was unmarked, but an imposing statue of a robot in an upstairs atrium was visible from the street below. That is, until one night the stealthy roboticists received an unhappy call from a neighbor directly across the street: the eerie-looking robot was giving the family's young son nightmares. The robot was moved inside, where it was no longer visible to the outside world.
Years earlier, Rubin, who was also a devoted robot hobbyist, had helped fund Stanford AI researcher Sebastian Thrun’s effort to build Stanley, the autonomous Volkswagen that would eventually win a $2 million DARPA prize for navigating unaided through more than a hundred miles of California desert. “Personal computers are growing legs and beginning to move around in the environment,” Rubin said in 2005.4 Since then there has been a growing wave of interest in robotics in Silicon Valley. Andy Rubin was simply an early adopter of Shockley’s original insight.
However, during the half decade after Shockley's 1955 move to Palo Alto, the region became ground zero for social, political, and technological forces that would reshape American society along lines that to this day define the modern world. Palo Alto would be transformed from its roots as a sleepy college town into one of the world's wealthiest communities. During the 1960s and 1970s, the Vietnam War, the civil rights movement, and the rise of the counterculture all commingled with the arrival of microprocessors, personal computing, and computer networking.5 In a handful of insular computer laboratories, hackers and engineers found shelter from a fractious world. In 1969, Richard Nixon was inaugurated president, Seymour Hersh reported the My Lai massacre, and astronauts Neil Armstrong and Buzz Aldrin walked on the moon. Americans had for the first time traveled to another world, but the nation was at the same time mired in a disastrous foreign conflict.
The year 1968 had seen the premiere of the movie 2001: A Space Odyssey, which painted a stark view of both the potential and the pitfalls of artificial intelligence. HAL—the computer that felt impelled to violate Asimov's laws of robotics, the 1942 dictum forbidding machines from injuring humans, even to ensure their own survival—had defined the robot in popular culture. By the late 1960s, science-fiction writers were the nation's technology seers, and AI had become a promising new technology in the form of computing and robotics—playing out both in visions of technological paradise and in populist paranoia. The future seemed almost palpable in a nation that had literally gone from The Flintstones to The Jetsons between 1960 and 1963.