Wiener went on to mention the emergence of factories that were “substantially without employees” and the rise of the importance of “taping.” He also presented more than a glimmer of the theoretical possibility and practical impact of machine learning: “The limitations of such a machine are simply those of an understanding of the objects to be attained, and of the potentialities of each stage of the processes by which they are to be attained, and of our power to make logically determinate combinations of those processes to achieve our ends. Roughly speaking, if we can do anything in a clear and intelligible way, we can do it by machine.”12
At the dawn of the computer age, Wiener could see and clearly articulate that automation had the potential to reduce the value of a “routine” factory employee to the point where “he is not worth hiring at any price,” and that as a result “we are in for an industrial revolution of unmitigated cruelty.”
Not only did he have early dark forebodings of the computer revolution, but he foresaw something else that was even more chilling: “If we move in the direction of making machines which learn and whose behavior is modified by experience, we must face the fact that every degree of independence we give the machine is a degree of possible defiance of our wishes. The genie in the bottle will not willingly go back in the bottle, nor have we any reason to expect them to be well disposed to us.”13
In the early 1950s Reuther and Wiener agreed on the idea of a “Labor-Science-Education Association,” but the partnership did not have an immediate impact, in part because of Wiener’s health issues and in part because Reuther represented a faction of the U.S. labor movement that viewed automation as unavoidable progress—the labor leader was intent on forging an economic bargain with management around the forces of technology: “In the final analysis, modern work processes had to be endured, offset by the reward of increased leisure and creative relaxation. In his embrace of automation and new technology, he often seemed to be wholly taken by the notion of efficiency as a desirable and essentially neutral condition.”14
Wiener’s warning would eventually light a spark—but not during the 1950s, a Republican decade when the labor movement did not have many friends in government. Only after Kennedy’s election in 1960 and his succession by Lyndon Johnson would the early partnership between Wiener and Reuther lead to one of the few serious efforts on the part of the U.S. government to grapple with automation, when in August of 1964 Johnson established a blue-ribbon panel to explore the impact of technology on the economy.
Pressure came in part from the Left in the form of an open letter to the president from a group that called itself the Ad Hoc Committee on the Triple Revolution, whose signers included socialist author Michael Harrington (later the founder of the Democratic Socialists of America), Students for a Democratic Society cofounder Tom Hayden, chemist Linus Pauling, Swedish economist Gunnar Myrdal, pacifist A. J. Muste, economic historian Robert Heilbroner, social critic Irving Howe, civil rights activist Bayard Rustin, and Socialist Party presidential candidate Norman Thomas, among many others.
The first revolution they noted was the emergence of the “Cybernation”: “A new era of production has begun. Its principles of organization are as different from those of the industrial era as those of the industrial era were different from the agricultural. The cybernation revolution has been brought about by the combination of the computer and the automated self-regulating machine. This results in a system of almost unlimited productive capacity which requires progressively less human labor.”15 The resulting National Commission on Technology, Automation, and Economic Progress would include a remarkable group ranging from Reuther, Thomas J. Watson Jr. of IBM, and Edwin Land of Polaroid, to Robert Solow, the MIT economist, and Daniel Bell, the Columbia sociologist.
When the 115-page report appeared at the end of 1966, it was accompanied by 1,787 pages of appendices, including special reports by outside experts. The 232-page analysis of the impact of computing by Paul Armer of the RAND Corporation did a remarkable job of predicting the trajectory of information technology. Indeed, the headings in the report have proven true over the years: “Computers Are Becoming Faster, Smaller, and Less Expensive”; “Computing Power Will Become Available Much the Same as Electricity and Telephone Service Are Today”; “Information Itself Will Become Inexpensive and Readily Available”; “Computers Will Become Easier to Use”; “Computers Will Be Used to Process Pictorial Images and Graphic Information”; and “Computers Will Be Used to Process Language,” among others. Yet the consensus that emerged from the report would be the traditional Keynesian view that “technology eliminated jobs, not work.” The report concluded that technological displacement would be a temporary but necessary stepping-stone to economic growth.
The debate over the future of technological unemployment dissipated as the economy heated up, in part as a consequence of the Vietnam War, and the civil strife of the late 1960s further sidelined the question. A decade and a half after he had issued his first warnings about the consequences of automated machines, Wiener turned his thoughts to religion and technology while remaining a committed humanist. In his final book, God & Golem, Inc., he explored the future human relationship with machines through the prism of religion. Invoking the parable of the golem, he pointed out that despite their best intentions, humans are incapable of understanding the ultimate consequences of their inventions.16
In his 1980 dual biography of John von Neumann and Wiener, Steven Heims notes that in the late 1960s he had asked a range of mathematicians and scientists about Wiener’s philosophy of technology. The general reaction of the scientists was as follows: “Wiener was a great mathematician, but he was also eccentric. When he began talking about society and the responsibility of scientists, a topic outside of his area of expertise, well, I just couldn’t take him seriously.”17
Heims concludes that Wiener’s social philosophy hit a nerve with the scientific community. If scientists acknowledged the significance of Wiener’s ideas, they would have to reexamine their deeply held notions about personal responsibility, something they were not eager to do. “Man makes man in his own image,” Wiener notes in God & Golem, Inc. “This seems to be the echo or the prototype of the act of creation, by which God is supposed to have made man in His image. Can something similar occur in the less complicated (and perhaps more understandable) case of the nonliving systems that we call machines?”18
Shortly before his death in 1964, Wiener was asked by U.S. News & World Report: “Dr. Wiener, is there any danger that machines—that is, computers—will someday get the upper hand over men?” His answer was: “There is, definitely, that danger if we don’t take a realistic attitude. The danger is essentially intellectual laziness. Some people have been so bamboozled by the word ‘machine’ that they don’t realize what can be done and what cannot be done with machines—and what can be left, and what cannot be left to the human beings.”19
Only now, six and a half decades after Wiener wrote Cybernetics in 1948, is the question of machine autonomy becoming more than hypothetical. The Pentagon has begun to struggle with the consequences of a new generation of “brilliant” weapons,20 while philosophers grapple with the “trolley problem” in trying to assign moral responsibility for the decisions of self-driving cars. Over the next decade the consequences of creating autonomous machines will become more apparent as manufacturing, logistics, transportation, education, health care, and communications are increasingly directed and controlled by learning algorithms rather than by humans.
Despite Wiener’s early efforts to play a technological Paul Revere, after the automation debates of the 1950s and 1960s tailed off, fears of unemployment caused by technology would vanish from the public consciousness until sometime around 2011. Mainstream economists generally agreed on what they described as the “Luddite fallacy.” As early as 1930, John Maynard Keynes had articulated the general view on the broad impact of new technology: “We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come—namely, technological unemployment. This means unemployment due to our discovery of means of economizing the use of labor outrunning the pace at which we can find new uses for labor. But this is only a temporary phase of maladjustment.”21
Keynes was early to point out that technology was a powerful generator of new categories of employment. Yet what he referred to as a “temporary phase” is certainly relative. After all, he also famously noted that in “the long run” we are all dead.
In 1995, social theorist Jeremy Rifkin wrote The End of Work: The Decline of the Global Labor Force and the Dawn of the Post-Market Era. The decline of the agricultural economy and the rapid growth of new industrial employment had been a stunning substantiation of Keynes’s substitution argument, but Rifkin argued that the impact of new information technologies would be qualitatively different from that of previous waves of industrial automation. He began by noting that in 1995 global unemployment had risen to its highest level since the depression of the 1930s and that eight hundred million people worldwide were unemployed or underemployed. “The restructuring of production practices and the permanent replacement of machines for human laborers has begun to take a tragic toll on the lives of millions of workers,” he wrote.22
The challenge to his thesis was that employment in the United States actually grew from 115 million to 137 million during the decade following the publication of his book. That meant that the size of the workforce grew by over 19 percent while the nation’s population grew by only 11 percent. Moreover, key economic indicators such as the labor force participation rate, the employment-to-population ratio, and the unemployment rate showed no evidence of technological unemployment. The situation, then, was more nuanced than the impending black-and-white labor calamity Rifkin had forecast. For example, from the 1970s onward, the international outsourcing of jobs, as multinational corporations fled to low-cost manufacturing regions and used telecommunications networks to relocate white-collar work, had a far more significant impact on domestic employment than the deployment of automation technologies. And so Rifkin’s work, substantially discredited, went largely unnoticed.
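As a quick check of the figures just cited, a short calculation reproduces the growth rates; the inputs are simply the numbers quoted in the text, not an independent data source.

```python
# Back-of-the-envelope check of the employment figures quoted above.
# The inputs are the numbers cited in the text, not independently sourced data.
def pct_growth(start_millions: float, end_millions: float) -> float:
    """Percentage growth from start to end."""
    return (end_millions - start_millions) / start_millions * 100

# U.S. employment, in millions, over the decade following Rifkin's 1995 book.
print(f"Workforce growth: {pct_growth(115, 137):.1f}%")  # ~19.1%, i.e. "over 19 percent"
# Population growth over the same decade, as cited in the text: about 11 percent.
```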
In the wake of the 2008 recession, there were indications of a new and broader technological transformation. White-collar employment had been the engine of growth for the U.S. economy since the end of World War II, but now cracks began to appear. What were once solid white-collar jobs began disappearing. Routinized white-collar work was now clearly at risk as the economy began to recover in 2009 in what was described as a “jobless recovery.” Indications were that knowledge workers’ jobs higher up in the economic pyramid were for the first time vulnerable. Economists such as MIT’s David Autor began to pick apart the specifics of the changing labor force and put forward the idea that the U.S. economy was being “hollowed out.” It might continue to grow at the bottom and the top, but middle-class jobs, essential to a modern democracy, were evaporating, he argued.
There was mounting evidence that the impact of technology was not just a hollowing out but a “dumbing down” of the workforce. In some cases specific high-prestige professions began to show the impact of automation, driven by the falling cost of information and communications technologies such as new global computer networks. Moreover, for the first time artificial intelligence software was beginning to have a meaningful impact on certain highly skilled jobs, such as those of $400-per-hour lawyers and $175-per-hour paralegals. As the field of AI once again gathered momentum beginning in 2000, new applications of artificial intelligence techniques based on natural language understanding emerged, such as “e-discovery,” the automated assessment of the relevance of legal documents that must be disclosed in litigation. The software would soon go beyond simply finding specific keywords in email. E-discovery software evolved quickly, making it possible to scan millions of documents electronically, recognize underlying concepts, and even find so-called smoking guns—that is, evidence of illegal or improper behavior.
In part, the software had become essential as litigation against corporations routinely involved the review of millions of documents for relevance. Comparative studies showed that the machines could do as well or better than humans in analyzing and classifying documents. “From a legal staffing viewpoint, it means that a lot of people who used to be allocated to conduct document review are no longer able to be billed out,” said Bill Herr, who as a lawyer at a major chemical company used to muster auditoriums of lawyers to read documents and correspondence for weeks on end. “People get bored, people get headaches. Computers don’t.”23
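To make the kind of concept-level document review described above concrete, here is a minimal sketch in Python using scikit-learn. The sample documents, labels, and model choices are invented for illustration; they are not drawn from any actual e-discovery product.

```python
# Illustrative sketch of concept-level document review ("predictive coding").
# The documents, labels, and pipeline choices here are hypothetical; real
# e-discovery systems are far larger and more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A handful of invented documents: 1 = responsive (relevant to the case), 0 = not.
documents = [
    "Please shred the Q3 audit files before the regulators arrive.",
    "Let's move the loss off the balance sheet until after the filing.",
    "Lunch at noon on Friday to celebrate Maria's promotion.",
    "The quarterly numbers look fine; publish the report as scheduled.",
    "Delete the emails about the side agreement with the vendor.",
    "Reminder: the parking garage will be closed this weekend.",
]
labels = [1, 1, 0, 0, 1, 0]

# TF-IDF turns text into term weights; truncated SVD (latent semantic analysis)
# compresses them into a few "concept" dimensions, so documents can match on
# meaning rather than only on exact keywords.
model = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    TruncatedSVD(n_components=3, random_state=0),
    LogisticRegression(),
)
model.fit(documents, labels)

# Score new, unseen documents for likely responsiveness.
new_docs = [
    "Can we hide these transactions until the audit is over?",
    "The team offsite is rescheduled to next Tuesday.",
]
for doc, prob in zip(new_docs, model.predict_proba(new_docs)[:, 1]):
    print(f"{prob:.2f}  {doc}")
```

Real systems apply the same basic idea at vastly larger scale, typically folding reviewers’ relevance judgments back into the model over iterative rounds, a practice often called predictive coding or technology-assisted review.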
Observing the impact of technologies such as e-discovery software, which is now dramatically eliminating the jobs of lawyers, led Martin Ford, an independent Silicon Valley engineer who owned a small software firm, to self-publish The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future at the end of 2009. Ford had come to believe that the impact of information technology on the job market was moving much more quickly than was generally understood. With a professional understanding of software technologies, he was also deeply pessimistic. For a while he stood alone, much in the tradition of Rifkin’s 1995 The End of Work, but as the recession dragged on and mainstream economists continued to have trouble explaining the absence of job growth, he was soon joined by an insurgency of technologists and economists warning that technological disruption was happening full force.
In 2011, two MIT Sloan School economists, Erik Brynjolfsson and Andrew McAfee, self-published an extended essay titled “Race Against the Machine: How the Digital Revolution Is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy.” Their basic theme was as follows: “Digital technologies change rapidly, but organizations and skills aren’t keeping pace. As a result, millions of people are being left behind. Their incomes and jobs are being destroyed, leaving them worse off . . . than before the digital revolution.”24 The “Race Against the Machine” essay was passed around samizdat-style over the Internet and was instrumental in reigniting the debate over automation. The discussion centered on the notion that this time—because of the acceleration of computing technologies in the workplace—there would be no Keynesian solution in which the economy created new job categories.
Like Martin Ford, Brynjolfsson and McAfee chronicled a growing array of technological applications that were redefining the workplace, or seemed poised to do so. Of the wave of new critiques, David Autor’s thesis was perhaps the most compelling. However, even he began to hedge in 2014, based on a report that indicated a growing “deskilling” of the U.S. workforce and a declining demand for jobs requiring cognitive skills. He worried that the effect was creating a downward ramp. The consequence, argued Paul Beaudry, David A. Green, and Ben Sand in a National Bureau of Economic Research (NBER) working paper, was that higher-skilled workers tended to push lower-skilled workers out of the workforce.25 Although they had no clear evidence tied directly to the deployment of particular types of technologies, their analysis of the consequences for the top of the workforce is chilling. They reported: “Many researchers have documented a strong, ongoing increase in the demand for skills in the decades leading up to 2000. In this paper, we document a decline in that demand in the years since 2000, even as the supply of high education workers continues to grow. We go on to show that, in response to this demand reversal, high-skilled workers have moved down the occupational ladder and have begun to perform jobs traditionally performed by lower-skilled workers.”26 Yet despite fears of a “job apocalypse” brought on by machines that can see, hear, speak, and touch, the workforce has once again not behaved as if a complete collapse precipitated by technological advance were imminent. Indeed, in the decade from 2003 to 2013, the size of the U.S. workforce increased by more than 5 percent, from 131.4 million to 138.3 million—although, to be sure, this was a period during which the population grew by more than 9 percent.
If not complete collapse, the slowing growth rate suggested a more turbulent and complex reality. One possibility is that rather than a pure deskilling, the changes observed may represent a broader “skill mismatch,” an interpretation that is more consistent with Keynesian expectations. For example, a recent McKinsey report on the future of work showed that between 2001 and 2009, jobs related to transactions and production both declined, but more than 4.8 million white-collar jobs relating to interactions and problem-solving were created.27 What is clear is that both blue-collar and white-collar jobs involving routinized tasks are at risk. The Financial Times reported in 2013 that between 2007 and 2012 the U.S. workforce gained 387,000 managers while losing almost two million clerical jobs.28 This is an artifact of what is popularly described as the Web 2.0 era of the Internet. The second generation of commercial Internet applications brought a series of software protocols and product suites that simplified the integration of business functions. Companies such as IBM, HP, SAP, PeopleSoft, and Oracle helped corporations automate repetitive business functions relatively quickly. The consequence has been a dramatic loss of clerical jobs.
However, even within the world of clerical labor there are subtleties suggesting that across-the-board predictions of automation and job destruction are unlikely to prove valid. The case of bank tellers and the advent of automated teller machines is a particularly good example of the complex relationship between automation technologies, computer networks, and workforce dynamics. In 2011, while discussing the economy, Barack Obama used this same example: “There are some structural issues with our economy where a lot of businesses have learned to become much more efficient with a lot fewer workers. You see it when you go to a bank and you use an ATM; you don’t go to a bank teller. Or you go to the airport, and you’re using a kiosk instead of checking in at the gate.”29