End Times: A Brief Guide to the End of the World


by Bryan Walsh


  Turing always knew his test was more a game than it was an exacting measurement of intelligence, but he did believe that the creation of authentic AI was possible. And he predicted that when that happened, it would forever alter our place on this planet. In 1951 Turing wrote: “Once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control.”18

  Turing wouldn’t live to see the first AI boom. He died in 1954, persecuted by British authorities because of his homosexuality.19 But Turing’s visions of ultraintelligent machines endured. In 1965 his former colleague I. J. Good furthered Turing’s line of thought, outlining a concept now called the “intelligence explosion.” Good wrote:

  Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.20

  This paragraph contains everything you need to know about AI as an existential risk—and as an existential hope. A machine is invented that is more intelligent than human beings, its creators. Just as human beings strive to improve, to make ourselves smarter and better, so will this machine. The difference is that the machine can do this by upgrading its software and hardware directly, by writing better code and building better versions of itself, while humans are stuck with 73-cubic-inch brains21 that can currently be upgraded only by the very slow process of evolution. The AI would be engaging in what is called recursive self-improvement—the process of improving one’s ability to make self-improvements—and it would lead to smarter and smarter machines, at a speed faster than human thought, with no foreseeable end point. That is an intelligence explosion. And as Good wrote, if we can keep the machine under control, we’ll never need to make anything else, because our ever-improving AI could do it far better than we mere humans ever could.
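
  The runaway logic of that loop can be captured in a few lines of arithmetic. The toy model below is purely illustrative, not something drawn from Good or from this book: it assumes a system whose capability grows each generation and whose rate of improvement also compounds, which is recursive self-improvement in miniature.

    # Toy illustration only (an assumption for this sketch, not a model from the
    # book): a system that improves itself, and also improves how well it improves.
    capability, rate = 1.0, 0.05
    for generation in range(1, 31):
        capability *= 1 + rate      # the machine builds a better machine...
        rate *= 1.15                # ...which is also better at building machines
        if generation % 10 == 0:
            print(f"after {generation} generations: capability x{capability:,.0f}")

  Growth is modest for the first handful of generations and then runs away, which is the shape of the curve the intelligence-explosion argument turns on.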

  Good began his original article with these words: “The survival of man depends on the early construction of an ultraintelligent machine” (emphasis added). Good was writing just a few months after the Cuban Missile Crisis, when the extinction of man by his own hand seemed not unlikely and handing off control to an ultraintelligent machine might have seemed prudent. There’s your existential hope. But it’s noteworthy that when Good wrote an unpublished memoir in 1998—as James Barrat discovered in his book Our Final Invention—he added that perhaps the line should now read: “The extinction of man depends on the early construction of an ultraintelligent machine” (emphasis added).22 And there’s your existential risk.

  Many AI researchers believed at the time that such an ultraintelligent machine might be just around the corner. But as the 1960s gave way to the ’70s, AI research hit dead end after dead end. The money dried up, leading to the first “AI winter”—a period of shriveled funding and interest. The field experienced a renaissance in the 1980s, only for the bubble to pop again—a second “AI winter.” Nearly all technological fields are subject to hype cycles followed by disappointment, as early research inevitably fails to fulfill its promise,23 but AI has been especially marked by extreme booms and busts.

  This is due in part to the difficulty of measuring just what AI actually is. Researchers ruefully note that as soon as an AI achieves something that had been considered a mark of real intelligence, that achievement is suddenly downgraded precisely because a machine can now do it.24 Take chess—no less than Turing thought that only a truly intelligent AI could defeat the world’s top chess players.25 Yet while IBM’s Deep Blue beat world champion Garry Kasparov more than twenty years ago, we’re still waiting for the robopocalypse. As the AI researcher Rodney Brooks said in 2002: “Every time we figure out a piece of it, it stops being magical. We say, ‘Oh, that’s just a computation.’”26

  That has changed somewhat in recent years, however, thanks to advances in computation that really do seem indistinguishable from magic. Take machine learning, in which algorithms (sets of rules followed by a computer) soak in data from the world, analyze the information, and learn from it. The recommendation engine that drives Netflix is one example. Netflix’s machine-learning algorithm compiles the data you generate by watching movies and TV shows, weighs it, and then spits out recommendations for, say, Goofy Dance Musicals or Latin American Forbidden-Love Movies. (Both are actual Netflix categories.27) The more often you watch—and the average American customer spends ten hours a week on Netflix28—the more data you generate, and the smarter the Netflix recommendation engine becomes for you.
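
  For readers who want a feel for the mechanics, here is a deliberately tiny collaborative-filtering sketch in Python, with made-up viewers, made-up ratings, and a simple similarity measure; Netflix's production system is vastly more sophisticated and proprietary. The point is that nothing in the code names a genre or a taste: the knowledge lives entirely in the accumulated viewing data.

    # Minimal collaborative-filtering sketch (illustrative; ratings and titles are made up).
    from math import sqrt

    ratings = {
        "alice": {"Goofy Dance Musical A": 5, "Space Thriller": 2, "Forbidden-Love Drama": 4},
        "bob":   {"Goofy Dance Musical A": 4, "Space Thriller": 1, "Goofy Dance Musical B": 5},
        "you":   {"Goofy Dance Musical A": 5, "Space Thriller": 1},
    }

    def similarity(a, b):
        """Cosine similarity over the titles two viewers have both rated."""
        shared = set(a) & set(b)
        if not shared:
            return 0.0
        dot = sum(a[t] * b[t] for t in shared)
        return dot / (sqrt(sum(a[t] ** 2 for t in shared)) * sqrt(sum(b[t] ** 2 for t in shared)))

    def recommend(user):
        """Score unseen titles by the ratings of viewers with similar taste."""
        scores = {}
        for other, their_ratings in ratings.items():
            if other == user:
                continue
            sim = similarity(ratings[user], their_ratings)
            for title, score in their_ratings.items():
                if title not in ratings[user]:
                    scores[title] = scores.get(title, 0.0) + sim * score
        return sorted(scores, key=scores.get, reverse=True)

    print(recommend("you"))   # ['Goofy Dance Musical B', 'Forbidden-Love Drama']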

  The key here is data. A machine-learning algorithm depends on data to become smarter—lots and lots and lots of data, terabytes and petabytes of data. Until recently, data was scarce. It was either locked away in media like books that couldn’t easily be scanned by a computer, or it simply went unrecorded. But the internet—and especially the mobile internet created by smartphones—has changed all that. As we now know, sometimes to our chagrin, very little that we do as individuals, as companies, and as countries goes unrecorded by the internet. As of 2018, 2.5 quintillion bytes of data were being produced every day, as much data as you could store on 3.6 billion old CD-ROM disks.29 And the amount of data being produced is constantly increasing—by some counts we generate more data in a single year now than we did over the cumulative history of human civilization.30 What oil was to the twentieth century, data is to the twenty-first century, the one resource that makes the world go—which is why tech companies like Facebook and Google are willing to go to any lengths to get their hands on it.
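
  That CD-ROM comparison is easy to check with a bit of arithmetic, assuming a standard disk holds about 700 million bytes (the figure behind most such comparisons; the book's source may have used a slightly different capacity):

    # Back-of-the-envelope check of the CD-ROM comparison.
    bytes_per_day = 2.5e18       # 2.5 quintillion bytes produced daily (2018)
    cd_capacity = 700e6          # assume ~700 million bytes per CD-ROM
    disks = bytes_per_day / cd_capacity
    print(f"about {disks / 1e9:.1f} billion CD-ROMs per day")   # about 3.6 billion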

  Raw machine learning has its limits, however. For one thing, the data an algorithm takes in often needs to be labeled by human beings. The Netflix algorithm isn’t able to micro-categorize all the movies and TV shows in the service’s catalog on its own—that required the labor of human beings Netflix paid to watch each and every one of its offerings, as Alexis Madrigal reported in a 2014 story for The Atlantic.31 If a machine-learning AI makes a mistake, it usually needs to be corrected by a human engineer. Machine learning can produce remarkable results, especially if the algorithm can draw from a well-stocked pool of properly labeled data. But it can’t be said to produce true intelligence.

  Deep learning—a subset of machine learning—is something else. Fully explaining deep learning would take more space than we have (and, probably, more IQ points than this author possesses), but it involves filtering data through webs of mathematics called artificial neural networks, which progressively detect features and eventually produce an output—the identification of an image, perhaps, or a chess move in a game-playing program. Over time—and with a wealth of data—the AI is able to learn and improve. The difference is that a deep-learning neural network is not shaped by human programmers so much as by the data itself. That autodidactic quality allows an AI to improve largely on its own, incredibly fast and in ways that can be startlingly unpredictable. Deep learning results in artificial intelligence that appears actually intelligent, and it powers some of the most remarkable results in the field.
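
  A stripped-down example makes the contrast with hand-written rules concrete. The sketch below, in plain NumPy, trains a tiny two-layer neural network on the classic XOR problem. Real deep-learning systems stack many more layers, train on vastly more data, and use specialized libraries, but the principle is the same: the programmer supplies examples and a learning rule, and the network's weights, not hand-coded logic, end up encoding the solution.

    # Minimal two-layer neural network on the XOR problem (illustrative sketch).
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR is not linearly separable

    W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))   # input -> hidden layer
    W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))   # hidden -> output layer
    sigmoid = lambda z: 1 / (1 + np.exp(-z))
    lr = 0.5

    for step in range(10000):
        hidden = sigmoid(X @ W1 + b1)            # forward pass
        output = sigmoid(hidden @ W2 + b2)
        error = output - y                       # how wrong the network currently is
        # backward pass: nudge every weight in the direction that shrinks the error
        grad_out = error * output * (1 - output)
        grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
        W2 -= lr * hidden.T @ grad_out
        b2 -= lr * grad_out.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ grad_hid
        b1 -= lr * grad_hid.sum(axis=0, keepdims=True)

    print(np.round(output.ravel(), 2))   # should approach [0, 1, 1, 0]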

  In 2016, the AI start-up DeepMind, now owned by Google, shocked the world when its AlphaGo program beat the South Korean master Lee Sedol in a five-game series of the ancient board game Go. Go has simpler rules than chess but is far more complex in practice, with more possible positions in a game than there are atoms in the universe. Unlike chess, Go can’t be brute-forced by a machine’s superior memory and computation speed, which is what made AlphaGo’s victory so stunning. (IBM’s Deep Blue was able to defeat Garry Kasparov because it could quickly simulate huge numbers of potential moves and choose the best one, something that not even a human chess grandmaster can do.) But AlphaGo—which had been trained using deep-learning methods—also demonstrated what appeared to be true creativity. In the second game against Lee, the program pulled off a move so unexpected that the human master had to leave the room for fifteen minutes to compose himself. AlphaGo went on to win four out of the five games in the series.
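
  The scale gap can be made concrete with the rough figures usually quoted for the two games: a typical branching factor raised to a typical game length, which counts the ways a game can unfold. These are crude order-of-magnitude estimates, not exact tallies, but the gulf they reveal is the point.

    # Rough order-of-magnitude comparison (commonly quoted estimates, not exact counts).
    import math

    chess_games = 80 * math.log10(35)    # ~35 options per turn, ~80 turns -> roughly 10**124
    go_games    = 150 * math.log10(250)  # ~250 options per turn, ~150 turns -> roughly 10**360
    atoms       = 80                     # observable universe holds roughly 10**80 atoms

    print(f"chess: ~10^{chess_games:.0f}, Go: ~10^{go_games:.0f}, atoms: ~10^{atoms}")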

  AlphaGo’s victory represented what the computer scientist Stuart Russell terms a “holy shit” moment for AI, but it’s far from the only one.32 While the original AlphaGo was programmed with millions of existing Go games—meaning its success was built on the foundation of human experience—in 2017 DeepMind produced a program that began with the rules of Go and nothing more. AlphaGo Zero improved by playing itself over and over and over again millions of times—a process called reinforcement learning—until after just three days of training it proved capable of beating the original, human-trained AlphaGo, 100 to 0.33 For good measure, DeepMind created AlphaZero, a generalized version of the program that taught itself the rules of chess and shogi (Japanese chess) in less than a day of real time and easily beat the best existing computer programs in both games.34 The accomplishments were a textbook example of the power and the speed of recursive self-improvement and deep learning. “AIs based on reinforcement learning can perform much better than those that rely on human expertise,” computer scientist Satinder Singh wrote in Nature.35
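
  To see what learning by playing yourself means in the simplest possible setting, here is a toy self-play loop in Python. The game is a miniature Nim (take one or two stones; whoever takes the last stone wins), the learner is a plain value table rather than a deep network, and there is no tree search, so this is only a cartoon of AlphaGo Zero's method. But the structure is the same: play yourself, score the outcome, update, and repeat.

    # Toy self-play reinforcement learning on miniature Nim (illustrative sketch;
    # AlphaGo Zero pairs a deep network with Monte Carlo tree search, not a table).
    import random

    PILE = 10
    Q = {s: {a: 0.0 for a in (1, 2) if a <= s} for s in range(1, PILE + 1)}
    alpha, epsilon = 0.2, 0.2

    def choose(state, greedy=False):
        """Pick a move: mostly the best known one, sometimes a random experiment."""
        if not greedy and random.random() < epsilon:
            return random.choice(list(Q[state]))
        return max(Q[state], key=Q[state].get)

    for episode in range(20000):          # the same table plays both sides
        state = PILE
        while state > 0:
            action = choose(state)
            remaining = state - action
            # A winning move is worth +1; otherwise a position is worth the negative
            # of its value to the opponent, who moves next.
            target = 1.0 if remaining == 0 else -max(Q[remaining].values())
            Q[state][action] += alpha * (target - Q[state][action])
            state = remaining

    print({s: choose(s, greedy=True) for s in range(1, PILE + 1)})

  After enough games, for every winnable pile size the learned move leaves the opponent a multiple of three stones, which is the known optimal strategy for this game, and nobody told the program that.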

  That same year a program called Libratus, designed by a team from Carnegie Mellon University in Pittsburgh, trounced four professional poker players.36 Poker was another game that was thought to be beyond the capabilities of current AIs, not necessarily because of the level of intelligence it demands but because unless they’re counting cards—meaning cheating—poker players have to compete with incomplete information about the state of play. That’s closer to the way the real world works, as is the occasional bluff, which should advantage team human. As recently as 2015 the world’s best human poker players beat Claudico, at the time the top AI player.37 Yet in 2017 Libratus cleaned house. And in 2019, another DeepMind AI, called AlphaStar, beat top-level humans in StarCraft II, a highly complex computer strategy game that requires players to make multiple decisions on multiple timescales at once, while dealing with imperfect information.38

  It’s not just fun and games. Medical algorithms can detect disease, and police algorithms can predict crime. Personal digital assistants like Siri and Alexa are integrating themselves into our daily lives, getting smarter the more we use them (and the more they listen to us). Autonomous cars—which demand precise image recognition and decision making from AIs—are edging closer to reality. Actual robots out in the real world are navigating terrain and mastering physical challenges with a fluidity that would have seemed preposterous a few years ago. In a 2017 survey, hundreds of AI experts predicted that machines would be better than humans at translation by 2024, writing high school essays by 2026, driving a truck by 2027, working in retail by 2031, writing a bestselling book by 2049 (uh-oh), and performing surgery by 2053. The experts gave a 50 percent chance that machines would outperform humans in all tasks within 45 years, and that all human jobs would be automated within the next 120 years. Sooner or later we all may be Lee Sedol.39

  Every great technological leap has been accompanied by social and economic disruption. The original Luddites were nineteenth-century British weavers and textile workers who revolted over the introduction of automated looms that threatened their artisanal livelihoods. Mechanization reduced the number of farmers, sending agricultural workers streaming into the cities. The first cars caused so much panic in the countryside that anti-automobile societies sprang up to resist them; one group in Pennsylvania even proposed a law requiring that cars traveling on country roads at night send up a rocket every mile and stop for ten minutes to let the road clear.40 Some people were initially wary of telephones, fearing they could transmit electric shocks through their wiring.41

  Sooner or later, however, we adjust to new technology, and then we take it for granted. The short-term job loss that causes so much fear is usually eased by the productivity growth enabled by labor-saving advances. We end up richer and better off—for the most part.

  AI, though, could be different. Existential risk experts may obsess over superintelligence, the possibility that AI will become immensely smarter than us, but AI doesn’t have to become superintelligent—assuming that’s even possible—to create more upheaval than any other technological advance that has preceded it. AI could exacerbate unemployment, worsen inequality, poison the electoral process, and even make it much easier for governments to kill their own citizens. “You’ve got job loss,” Andrew Maynard, the director of the Risk Innovation Lab at Arizona State University, told me. “You’ve got privacy. You’ve got loss of autonomy with AI systems. I think there are a lot of much clearer issues than superintelligence that are going to impact people quite seriously, and that we have to do something about.”

  Despite one of the longest economic expansions in U.S. history, real wages for the working class have largely remained stagnant—and the spread of AI may be playing a role. One 2018 survey found that wages slipped in job areas where automation and AI were taking hold,42 and a report by the McKinsey Global Institute estimated that 800 million jobs worldwide could be lost to automation by 2030.43 Past experience suggests that broad job loss should be temporary—and indeed the McKinsey report found that only 6 percent of all jobs were at risk of total automation—as investment shifts from declining sectors to rising ones. If AI is as revolutionary as its most ardent advocates claim, however, then the past is no longer a reliable guide to the future, and we may be in for flat wages followed by widespread job loss. AI may do work, but it isn’t labor—it’s owned and controlled by capital. As AI becomes a more integral part of our economy, its gains seem likely to come at the expense of workers and accrue to owners, intensifying inequality.

  If little is done to share the wealth generated by AI, the results could be socially explosive. The San Francisco Bay area, home to tech companies like Google and Facebook, can claim to be the world center of artificial intelligence research, a region that sets the terms of the future. It’s also a place of striking income inequality, where some of the richest people in the world live side by side with the most destitute. A 2018 report by the Brookings Institution ranked San Francisco as the sixth-most unequal city in the United States. The raw dollar gap between the poorest and the richest is so high there largely because the very rich are richer than almost anywhere else.44 And that’s because in the tech economy, rewards overwhelmingly flow to owners versus a broad class of workers. In 2016 Apple, Alphabet (the parent company of Google), and Facebook all made well above $1 million in revenue per worker. Their total combined revenues of $336 billion were actually much less than what the top three Detroit car companies of Chrysler, General Motors, and Ford made in 1996, adjusted for inflation. But the tech companies—in addition to producing much less physical stuff—employ far fewer workers. That means less of the value of these companies—some of it generated by AI—has to be paid out to labor and more can be kept for owners and investors.45

  It may not be machine overlords we need to fear, but the human overlords who own the machines.46 There’s a reason why some of Silicon Valley’s most voracious capitalists have begun putting their weight behind plans to give people universal basic income (UBI)—a way for the average person to share, at least a little bit, in an increasingly capital- and tech-driven economy. (There’s also a self-serving reason—without UBI, consumers may no longer have the money to keep businesses operating.) Christine Peterson of the Foresight Institute—a Silicon Valley–based think tank that examines emerging technologies—even has a novel idea for something called Inheritance Day, which would give every human alive an equal share of an unclaimed portion of the universe, with the assumption that these shares will eventually become valuable as we spread into space. Think of it as homesteading for the AI age.47 It’s extreme—but then so is the thought of an economy utterly hollowed out by machines.

  The danger from near-term AI isn’t only economic. The same machine-learning capabilities that can make AIs better at recommending movies or playing games can also enable them to become unparalleled killers. In 2017 Stuart Russell and the Future of Life Institute produced a striking video dramatizing a near-future scenario where swarms of tiny mechanical drones use AI and facial recognition technology to target and kill autonomously—so-called slaughterbots.48 The video is terrifying, and almost certainly a glimpse of the near future. We already have remote-piloted drone aircraft that are capable of delivering lethal payloads, but autonomous weapons would outsource the final decision to pull the trigger to an AI. In the future drones might be programmed to assassinate a particular target, like a politician, or even carry out ethnic cleansing by hunting a specific racial group. The development of such weapons would represent the third great revolution in warfare, after gunpowder and nuclear arms. They threaten to reduce the restraints on war and make it far easier to kill at will—for states at first, but eventually for criminal gangs, terrorists, and even lone individuals.

  So it should be worrying that it’s not just businesses pouring money into AI research—it’s militaries, too. In 2018 the Pentagon launched a $1.7 billion Joint Artificial Intelligence Center, while the Defense Advanced Research Projects Agency (DARPA)—the military research agency that helped develop the internet—announced its own $2 billion AI campaign the same year.49 One cutting-edge military program, Project Maven, was designed to help computer systems identify targets in aerial footage. This would be very useful if you wanted to, say, develop a drone that could pick out targets and fire on them autonomously. The Defense Department had partnered with Google on Project Maven, but when news of that collaboration became public, the company faced a backlash from its own workforce. Thousands of Google employees signed an open letter protesting the company’s involvement with the project, and in May 2018 Google announced that it would not renew the contract.50 A few days later Google CEO Sundar Pichai released an open letter promising that the company’s AI projects would not include weapons or technologies that cause or are likely to cause overall harm.51

 
