by John Markoff
This touched off a political kerfuffle about the impact of automation. The reality is that despite the rise of ATMs, bank tellers have not gone away. In 2004 Charles Fishman reported in Fast Company that in 1985, relatively early in the deployment of ATMs, there were about 60,000 ATMs and 485,000 bank tellers; by 2002 those numbers had increased to 352,000 ATMs and 527,000 bank tellers. In 2011 the Economist cited 600,500 bank tellers in 2008, while the Bureau of Labor Statistics was projecting that number would grow to 638,000 by 2018. Furthermore, the Economist pointed out that there were an additional 152,900 “computer, automated teller, and office machine repairers” in 2008.30 Focusing on ATMs in isolation doesn’t begin to touch the complexity of the way in which automated systems are weaving their way into the economy.
Bureau of Labor Statistics data reveal that the real transformation has been in the “back office,” which in 1972 made up 70 percent of the banking workforce: “First, the automation of a major customer service task reduced the number of employees per location to 75% of what it was. Second, the [ATM] machines did not replace the highly visible customer-facing bank tellers, but instead eliminated thousands of less-visible clerical jobs.”31 The impact of back-office automation in banking is difficult to estimate precisely, because the BLS changed the way it recorded clerk jobs in banking in 1982. However, it is indisputable that banking clerks’ jobs have continued to vanish.
Looking forward, the consequences of new computing technology for bank tellers might anticipate the impact of driverless delivery vehicles. Even if the technology can be perfected—and that is still to be determined, because delivery involves complex and diverse contact with human business and residential customers—the “last mile” delivery personnel will be hard to replace.
Despite the challenges of separating the impact of the recession from the implementation of new technologies, increasingly the connection between new automation technologies and rapid economic change has been used to imply that a collapse of the U.S. workforce—or at least a prolonged period of dislocation—might be in the offing. Brynjolfsson and McAfee argue for the possibility in a much expanded book-length version of “Race Against the Machine,” entitled The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. Similar sentiments are offered by Jaron Lanier, a well-known computer scientist now at Microsoft Research, in the book Who Owns the Future? Both books draw a direct link between the rise of Instagram, the Internet photo-sharing service acquired by Facebook for $1 billion in 2012, and the decline of Kodak, the iconic photographic firm that declared bankruptcy that year. “A team of just fifteen people at Instagram created a simple app that over 130 million customers use to share some sixteen billion photos (and counting),” wrote Brynjolfsson and McAfee. “But companies like Instagram and Facebook employ a tiny fraction of the people that were needed at Kodak. Nonetheless, Facebook has a market value several times greater than Kodak ever did and has created at least seven billionaires so far, each of whom has a net worth ten times greater than [Kodak founder] George Eastman did.”32
Lanier makes the same point about Kodak’s woes even more directly: “They even invented the first digital camera. But today Kodak is bankrupt, and the new face of digital photography has become Instagram. When Instagram was sold to Facebook for a billion dollars in 2012, it employed only thirteen people. Where did all those jobs disappear to? And what happened to the wealth that those middle-class jobs created?”33
The flaw in their arguments is that they mask the actual jobs equation and ignore the reality of Kodak’s financial turmoil. First, even if Instagram did actually kill Kodak—it didn’t—the jobs equation is much more complex than the cited 13 versus 145,000 disparity. Services like Instagram didn’t spring up in isolation, but were made possible after the Internet had reached a level of maturity that had by then created millions of mostly high-quality new jobs. That point was made clearly by Tim O’Reilly, the book publisher and conference organizer: “Think about it for a minute. Was it really Instagram that replaced Kodak? Wasn’t it actually Apple, Samsung, and the other smartphone makers who have replaced the camera? And aren’t there network providers, data centers, and equipment suppliers who provide the replacement for the film that Kodak once sold? Apple has 72,000 employees (up from 10,000 in 2002). Samsung has 270,000 employees. Comcast has 126,000. And so on.”34 And even O’Reilly’s point doesn’t begin to capture the positive economic impact of the Internet. A 2011 McKinsey study reported that globally the Internet created 2.6 new jobs for every job lost, and that it had been responsible for 21 percent of GDP growth in the five previous years in developed countries.35 The other challenge for the Kodak versus Instagram argument is that while Kodak suffered during the shift to digital technologies, its archrival FujiFilm somehow managed to prosper through the transition to digital.36
The reason for Kodak’s decline was more complex than “they missed digital” or “they failed to buy (or invent) Instagram.” The problems included scale, age, and abruptness. The company had a massive burden of retirees and an internal culture that lost talent and could not attract more. It proved to be a perfect storm. Kodak tried to get into pharmaceuticals in a big way but failed, and its effort to enter the medical imaging business failed as well.
The new anxiety about AI-based automation and the resulting job loss may eventually prove well founded, but it is just as likely that those who are alarmed have simply latched onto backward-facing snapshots. If the equation is framed in terms of artificial intelligence–oriented technologies versus those oriented toward augmenting humans, there is hope that humans still retain an unbounded ability to both entertain and employ themselves doing something marketable and useful.
If the humans are wrong, however, 2045 could be a tough year for the human race.
Or it could mark the arrival of a technological paradise.
Or both.
The year 2045 is when Ray Kurzweil predicts humans will transcend biology, and implicitly, one would presume, destiny.37
Kurzweil, the serial artificial intelligence entrepreneur and author who joined Google as a director of engineering in 2012 to develop some of his ideas for building an artificial “mind,” represents a community that includes many of Silicon Valley’s best and brightest technologists. They have been inspired by the ideas of computer scientist and science-fiction author Vernor Vinge about the inevitability of a “technological singularity,” the point in time at which machine intelligence will surpass human intelligence. When he first wrote about the idea of the singularity in 1993, Vinge framed a relatively wide span of years—between 2005 and 2030—during which computers might become “awake” and superhuman.38
The singularity movement depends on the inevitability of mutually reinforcing exponential improvements in a variety of information-based technologies ranging from processing power to storage. In one sense it is the ultimate religious belief in the power of technology-driven exponential curves, an idea that has been explored by Robert Geraci in Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality. There he finds fascinating sociological parallels between singularity thinking and a variety of messianic religious traditions.39
The singularity hypothesis also builds on the emergent AI research pioneered by Rodney Brooks, who first developed a robotics approach based on building complex systems out of collections of simpler parts. Both Kurzweil in How to Create a Mind: The Secret of Human Thought Revealed and Jeff Hawkins in his earlier On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines attempt to make the case that because the simple biological “algorithms” that are the basis for human intelligence have been discovered, it is largely a matter of “scaling up” to engineer intelligent machines. These ideas have been tremendously controversial and have been criticized by neuroscientists, but are worth mentioning here because they are an underlying argument in the new automation debate. What is most striking today is the extreme range of opinions about the future of the workforce emerging from different interpretations of the same data.
Moshe Vardi is a Rice University computer scientist who serves as editor-in-chief of the Communications of the Association for Computing Machinery. In 2012 he began to argue publicly that the rate of acceleration in AI was now so rapid that all human labor would become obsolete within just over three decades. In an October 2012 Atlantic essay, “The Consequences of Machine Intelligence,”40 Vardi took a position that is becoming increasingly representative of the AI research community: “The AI Revolution, however, is different, I believe, than the Industrial Revolution. In the 19th century machines competed with human brawn. Now machines are competing with human brain. Robots combine brain and brawn. We are facing the prospect of being completely out-competed by our own creations.”41
Vardi believes that the areas where new job growth is robust—for example, the Web search engine economy, which has created new categories of workers performing tasks like search engine optimization, or SEO—are inherently vulnerable in the very near term. “If I look at search engine optimization, yes, right now they are creating jobs in doing this,” he said. “But what is it? It is learning how search engines actually work and then applying this to the design of Web pages. You could say that is a machine-learning problem. Maybe right now we need humans, but these guys [software automation designers] are making progress.”42
The assumption of many, like Vardi, is that a market economy will not protect a human labor force from the effects of automation technologies. Like many of the “Singularitarians,” he points to a portfolio of social engineering options for softening the impact. Brynjolfsson and McAfee in The Second Machine Age sketch out a broad set of policy options that have the flavor of a new New Deal, with examples like “teach the children well,” “support our scientists,” and “upgrade infrastructure.” Others, like Harvard Business School professor Clayton Christensen, have argued for focusing on technologies that create rather than destroy jobs (a very clear IA versus AI position).
At the same time, while many who believe in accelerating change agonize about its potential impact, others have a more optimistic perspective. In a series of reports issued beginning in 2013, the International Federation of Robotics (IFR), established in 1987 with headquarters in Frankfurt, Germany, self-servingly argued that manufacturing robots actually increased economic activity and therefore, instead of causing unemployment, both directly and indirectly increased the total number of human jobs. One February 2013 study claimed the robotics industry would directly and indirectly create 1.9 million to 3.5 million jobs globally by 2020.43 A revised report the following year argued that for every robot deployed, 3.6 jobs were created.
But what if the Singularitarians are wrong? In the spring of 2012 Robert J. Gordon, a self-described “grumpy” Northwestern University economist, rained on the Silicon Valley “innovation creates jobs and progress” parade by noting that the claimed gains did not show up in conventional productivity figures. In a widely cited 2012 National Bureau of Economic Research working paper he made a series of points contending that the productivity boom of the twentieth century was a one-time event. He also noted that the automation technologies cited by those he would later describe as “techno-optimists” had not had the same kind of productivity impact as the industrial innovations of the nineteenth century. “The computer and Internet revolution (IR3) began around 1960 and reached its climax in the dot-com era of the late 1990s, but its main impact on productivity has withered away in the past eight years,” he wrote. “Many of the inventions that replaced tedious and repetitive clerical labour with computers happened a long time ago, in the 1970s and 1980s. Invention since 2000 has centered on entertainment and communication devices that are smaller, smarter, and more capable, but do not fundamentally change labour productivity or the standard of living in the way that electric light, motor cars, or indoor plumbing changed it.”44
In one sense it was a devastating critique of the Silicon Valley faith in “trickle down” from exponential advances in integrated circuits, for if the techno-optimists were correct, the impact of new information technology should have resulted in a dramatic explosion of new productivity, particularly after the deployment of the Internet. Gordon pointed out that unlike the earlier industrial revolutions, there has not been a comparable productivity advance tied to the computing revolution. “They remind us Moore’s Law predicts endless exponential growth of the performance capability of computer chips, without recognizing that the translation from Moore’s Law to the performance-price behavior of ICT equipment peaked in 1998 and has declined ever since,” he noted in a 2014 rejoinder to his initial paper.45
Gordon squared off with his critics, most notably MIT economist Erik Brynjolfsson, at the TED Conference in the spring of 2013. In a debate moderated by TED host Chris Anderson, the two jousted over the future impact of robotics and over whether the supposed exponentials would continue or whether the technology was instead nearing the top of an “S curve,” with a decline on the way.46 The techno-optimists believe that a lag between invention and adoption simply delays the impact of productivity gains, and that even though individual exponentials inevitably taper off, they spawn successor inventions—the vacuum tube, for example, was followed by the transistor, which in turn was followed by the integrated circuit.
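The mathematical distinction at issue can be sketched with a purely illustrative comparison; the formulas are generic textbook curves, not figures taken from either side of the debate. Exponential growth of the Moore's Law variety has the form $f(t) = A \cdot 2^{t/T}$, doubling every $T$ years without limit, whereas an "S curve" is typically modeled as a logistic function, $g(t) = L / (1 + e^{-k(t - t_0)})$. At early times the logistic curve grows almost exactly like an exponential, roughly $L e^{k(t - t_0)}$; only as $t$ approaches and passes the inflection point $t_0$ does it bend over and level off toward its ceiling $L$. Gordon's contention, in effect, is that the productivity data already show the bend; the techno-optimists reply that successor inventions start new curves before the old ones flatten.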
Gordon, however, has remained a consistent thorn in the side of the Singularitarians. In a Wall Street Journal column, he asserted that driverless cars actually offer relatively few productivity opportunities. Moreover, he argued, they will not have a dramatic impact on safety either—auto fatalities per mile traveled have already declined by a factor of ten since 1950, making future improvements less significant.47 He also cast a skeptical eye on the notion that a new generation of mobile robots would make inroads into both the manufacturing and service sectors of the economy: “This lack of multitasking ability is dismissed by the robot enthusiasts—just wait, it is coming. Soon our robots will not only be able to win at Jeopardy! but also will be able to check in your bags at the skycap station at the airport, thus displacing the skycaps. But the physical tasks that humans can do are unlikely to be replaced in the next several decades by robots. Surely multiple-function robots will be developed, but it will be a long and gradual process before robots outside of the manufacturing and wholesaling sectors become a significant factor in replacing human jobs in the service or construction sectors.”48
His skepticism unleashed a torrent of criticism, but he has refused to back down. His response to his critics is, in effect, “Be careful what you wish for!” Gordon has also pointed out that Norbert Wiener may have had the most prescient insight into the potential impact of the “Third Industrial Revolution” (IR3) of computing and the Internet that began around 1960: Wiener argued that automation for automation’s sake would have unpredictable and quite possibly negative consequences.
The productivity debate has continued unabated. It has recently become fashionable for technologists and economists to argue that the traditional productivity benchmarks are no longer appropriate for measuring an increasingly digitized economy in which information is freely shared. How, they ask, do you measure the economic value of a resource like Wikipedia? If the Singularitarians are right, however, the transformation should soon be obvious in the form of an unparalleled economic crisis as human labor becomes surplus. Indeed, the outcome might be quite gloomy: there would be fewer and fewer places for humans in the resulting economy.
That has certainly not happened yet in the industrialized world. However, one intriguing shift that suggests there are limits to automation was the recent decision by Toyota to systematically put working humans back into the manufacturing process. A global leader in quality and manufacturing on a mass scale, Toyota has built its automation technologies on the corporate philosophy of kaizen (Japanese for “good change”), or continuous improvement. After pushing its automation processes toward lights-out manufacturing, the company realized that automated factories do not improve themselves. Toyota once had extraordinary craftsmen known as Kami-sama, or “gods,” who had the ability to make anything, according to Toyota president Akio Toyoda.49 Those craftsmen also had the human ability to act creatively and thus improve the manufacturing process. Now, to add flexibility and creativity back into its factories, Toyota has chosen to restore a hundred “manual-intensive” workspaces.
The restoration of the Toyota gods is evocative of Stewart Brand’s opening line to the 1968 Whole Earth Catalog: “We are as gods and might as well get good at it.” Brand later acknowledged that he had borrowed the concept from British anthropologist Edmund Leach, who wrote, also in 1968: “Men have become like gods. Isn’t it about time that we understood our divinity? Science offers us total mastery over our environment and over our destiny, yet instead of rejoicing we feel deeply afraid. Why should this be? How might these fears be resolved?”50
Underlying both the acrimonious productivity debate and Toyota’s rebalancing of craft and automation is the deeper question about the nature of the relationship between humans and smart machines. The Toyota shift toward a more cooperative relationship between human and robot might alternatively suggest a new focus on technology for augmenting humans rather than displacing them. Singularitarians, however, argue that such human-machine partnerships are simply an interim stage during which human knowledge is transferred, and that at some point creativity will be transferred to, or will even arise on its own in, some future generation of brilliant machines. They point to small developments in the field of machine learning that suggest that computers will exhibit humanlike learning skills at some point in the not-too-distant future. In 2014, for example, Google paid $650 million to acquire DeepMind Technologies, a small start-up with no commercial products that had demonstrated machine-learning algorithms capable of playing video games, in some cases better than humans. When the acquisition was first reported, it was rumored that, because of the power and implications of the technology, Google would set up an “ethics board” to evaluate unspecified “advances.”51 It has remained unclear whether such oversight will be substantial or whether it was just a publicity stunt to hype the acquisition and justify its price.