End Times: A Brief Guide to the End of the World


by Bryan Walsh


  That was a positive sign for the future of AI, and in keeping with the sensibilities of the majority of artificial intelligence researchers. So was an open letter sent in 2017 to the United Nations and signed by tech luminaries like Elon Musk and Demis Hassabis, the cofounder of DeepMind, that called for a ban on lethal autonomous weapons—killer robots, in other words.52 But with a $700 billion budget, the U.S. military has deep pockets, and if companies like Google eschew working on AI programs that could be used for defense—or offense—you can bet other firms will be more than willing. Both Microsoft and Amazon have made it clear that they will continue to work with the Department of Defense on tech and AI, with Amazon CEO Jeff Bezos saying in October 2018 that “if big tech companies are going to turn their back on the Department of Defense, then this country is going to be in trouble.”53

  Bezos has a point. Whatever American tech companies choose to do, China has made it clear that it is determined to close its artificial intelligence gap with the United States—and then surpass it. By one estimate, China spent $12 billion on AI in 2017 and is poised to spend at least $70 billion by 2020.54 In 2017 China attracted half of all AI capital in the world,55 and that same year the country produced more highly cited AI research papers than any other nation, including the United States.56

  Some of that money has been dedicated to building the world’s most powerful—and intrusive—facial recognition software. On a 2018 reporting trip to China, I was struck by how ubiquitous facial recognition systems had become there, whether in security lines at airports or at hotel front desks, where guests could use their faces to pay their bills.57 In cities across China, facial recognition cameras scan crowds, searching for both wanted criminals and those who “undermine stability.”58 China is also using AI to build an unprecedented social credit system that will rank the country’s 1.4 billion citizens not just according to their financial trustworthiness, but also their “social integrity.”59

  If Chinese officials use AI to turn China into a technologically Orwellian state—as seems likely—that’s a tragedy for China, and potentially a “mortal danger” for more open societies if those tools are exported, as the financier and philanthropist George Soros put it in a speech in 2019.60 But even more worrying for the world is the growing likelihood that the United States and China will embark on a twenty-first-century arms race, only one involving AI instead of nuclear weapons. Such a race would pose an existential risk in itself—not just because artificial intelligence might threaten humans, but also because of the possibility that sudden leaps in AI capability might upset the balance of power needed to keep peace between nuclear-armed states.

  Imagine, for instance, if the United States were to develop sophisticated military AI that could remotely disable China’s nuclear forces. Even without using such technology—even without threatening to use it—Beijing might find itself in a position where it feels it must launch a strike against the United States before Washington can mobilize its new AI weapon. Mutually assured destruction only works if both countries maintain rough technological parity. AI threatens to upend that parity. “Weaponized AI is a weapon of mass destruction,” said Roman Yampolskiy, a computer scientist at the University of Louisville who has closely studied the risks of AI. “An AI arms race is likely to lead to an existential catastrophe for humanity.”

  Weapons are just the most obvious way that AI can break bad, however—and it’s not always easy to tell the dangerous uses of AI from the beneficial ones. In February 2018, fourteen institutions—including the Future of Humanity Institute and the Centre for the Study of Existential Risk—collaborated on a hundred-page report outlining malicious uses of AI. The report singled out the way AI can accelerate the process of phishing through automation—instead of one person pretending to be a Nigerian prince dying to give you his money, an AI phishing program can send out endless emails and use reinforcement learning to refine them until someone bites. AI will soon be able to hack faster and better than humans can, and to spread propaganda by creating fake images and videos that will be indistinguishable from reality. And like biotechnology, AI is a dual-use technology—the same advances can be employed for good or ill—which makes regulation challenging. In the near future—perhaps as soon as 2020—we’ll look back on the Russian election hacking efforts in 2016 as a beta test, a Trinity before an AI Hiroshima.61

  The possibility of an AI-induced accidental nuclear war is nightmarish. But what should truly frighten us is that we don’t understand the AI systems that we’re using—not really. And I don’t mean ordinary people googling “artificial intelligence takeover”—I mean the experts. The very characteristics that make deep learning so powerful also make it opaque. An artificial neural network can have hundreds of layers, adding up to millions and even billions of parameters, each tuned on the fly by the algorithm as it tries to draw conclusions from the data it takes in. The process is so complex that even the creators of the algorithms often can’t say why a particular AI makes a particular decision. Recall AlphaGo’s match against Lee Sedol, when the program produced a move on the Go board that utterly shocked the human grandmaster. No one—including AlphaGo’s makers at DeepMind—could explain how and why the program decided to do what it did. And of course, neither could AlphaGo. AI doesn’t explain—it acts.
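  To give a sense of the scale involved, here is a minimal sketch—my own illustration, with hypothetical layer sizes, not anything drawn from DeepMind’s systems—of how quickly the tunable parameters in even a small fully connected network add up to numbers no person could inspect one by one.

```python
# A minimal sketch with hypothetical layer sizes: count the tunable parameters
# in a small stack of fully connected layers. Real systems such as AlphaGo use
# far larger and more elaborate architectures; the point is only the scale.

layer_sizes = [1024, 512, 512, 256, 128, 10]  # hypothetical layer widths

total_params = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    weights = n_in * n_out   # one weight per connection between the two layers
    biases = n_out           # one bias per unit in the receiving layer
    total_params += weights + biases

print(f"{total_params:,} parameters")  # roughly 953,000 for this toy network
```

  Even this toy network has nearly a million individually tuned numbers, none of which carries an interpretable meaning on its own—which is the heart of the opacity problem.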

  But when AI acts, it can make mistakes. A Google algorithm that captioned photographs regularly misidentified people of color as gorillas.62 An Amazon Alexa device offered porn when a child asked to play a song.63 A Microsoft chatbot called Tay began spouting white supremacist hate speech after less than twenty-four hours of training on Twitter.64 Worst of all, in March 2018 a self-driving Uber car on a test run in Arizona ran over and killed a woman, the first time an autonomous vehicle was involved in a pedestrian death. The car detected the victim six seconds before the crash, but its self-driving algorithm classified her as an unknown object until it was too late to brake.65

  Some of those working on developing autonomous cars, like Elon Musk at Tesla, argue that early problems with the technology should be balanced against the fact that human drivers cause deadly crashes all the time.66 Some 37,133 people in the United States died in motor vehicle accidents in 2017—about one victim every fifteen minutes.67 Musk has a point, but error in an AI algorithm is qualitatively different from human error. A single bad human driver might kill a few people at most when he or she makes a mistake, but an error in a single AI system could spread across an entire industry and cause far greater damage. IBM’s Watson for Oncology AI was used by hundreds of hospitals to recommend treatment for patients with cancer. But the algorithm had been trained on a small number of hypothetical cases with little input from human oncologists, and as STAT News reported in 2018, many of the treatments Watson suggested were shown to be flawed—including recommending, for a patient with severe bleeding, a drug so strongly contraindicated for that condition that it carried a black box warning.68

  No technology is foolproof, especially in its early days. But if a bridge collapses or an airplane crashes, experts can usually look over the evidence and draw a clear explanation of what went wrong. Not always so with AI. At a major industry conference in 2017, Google AI researcher Ali Rahimi received a forty-second standing ovation when he likened artificial intelligence not to electricity or to fire—as his boss Sundar Pichai had—but to medieval alchemy. Alchemy produced important scientific advances, including the development of metallurgy, but alchemists couldn’t explain why what they were doing worked, when it did work. AI researchers, Rahimi suggested, are currently closer to alchemists than to scientists.69

  While most AI researchers are against autonomous weapons, the fact remains that there is little regulation in the field. A comparison to biotechnology is illustrative. Biological weapons are banned by international law. In the life sciences there are ethics boards at universities to review experiments and reject ones they find wanting. There is the Department of Bioethics at the National Institutes of Health, to think deeply about the direction and methods of medical research. There is the National Institutes of Health, period. Biologists are trained in lab safety and ethics. That’s why when someone in the field goes rogue—like the Chinese scientist who gene-edited human embryos—outrage tends to follow.

  AI ethics, by contrast, is in its infancy. There is nothing like the National Institutes of Health for AI, and there are no significant independent boards to oversee experiments. While there has been an international push to ban autonomous weapons, no such law exists—and in fact, a proposed treaty was blocked by countries including the United States and Russia in 2018.70

  Some positive change is on the way. Brent Hecht is a young computer scientist at Northwestern University who chairs the Future of Computing Academy, a part of the world’s largest computer science society. Hecht advocates that peer reviewers in computer science and AI should evaluate the potential social impact of the research submitted to them, as well as its intellectual quality. That might seem like a small thing, but it’s unprecedented for the field. “Computer science has been sloppy about how it understands and communicates the impacts of its work, because we haven’t been trained to think about these kinds of things,” Hecht told Nature. “It’s like a medical study that says, ‘Look, we cured 1,000 people,’ but doesn’t mention that it caused a new disease in 500 of them.”

  Large tech companies are beginning to take notice as well. When DeepMind was purchased by Google in 2014, the AI start-up insisted on the creation of an AI ethics board to ensure that its work would “remain under meaningful human control and be used for socially beneficial purposes.”71 In 2017 the Future of Life Institute convened a major conference at the Asilomar Conference Grounds in California, where more than four decades earlier biologists had met to hash out the ethical issues around the new science of recombinant DNA. The result was the creation of the Asilomar Principles, a road map meant to guide the industry toward beneficial and friendly AI.72 In 2019 Facebook spent $7.5 million to endow its first AI ethics institute, at the Technical University of Munich in Germany.73

  At a moment when tens of billions of dollars are flowing to AI research and implementation, however, just a tiny amount is being spent on efforts to keep AI safe. According to figures compiled by Seb Farquhar at the Centre for Effective Altruism, more than fifty organizations had explicit AI safety-related programs in 2016, with combined spending of about $6.6 million. That’s a fourfold increase from just a couple of years before, but it hardly compares to the scale of the challenge.74 And while attention to the possible risks of AI has grown in recent years, the field as a whole is far more focused on breaking new ground than double-checking its work. “You are dealing with creative geniuses who pour their whole mental abilities into a difficult problem,” said Christine Peterson of the Foresight Institute. “To say we also want you to think about the social issues and the hardware insecurities is like asking Picasso to think about how his paint is made. It’s just not how they’re wired.”

  There may also be a competitive disadvantage to slowing down the pace of AI research. As impressive as AI projects like AlphaZero are, they still represent what is called “narrow AI.” Narrow AI programs can be very smart and effective at carrying out specific tasks, but they are unable to transfer what they have learned to another field. In other words, they can’t generalize what they learn, as even the youngest human being can. AlphaZero learned how to play chess in nine hours and could then beat any human who has ever lived,75 but if the room where it was playing caught on fire, it wouldn’t have a clue what to do. Narrow AI remains a tool—a very powerful tool, but a tool nonetheless.

  The ultimate goal of AI researchers is to create artificial general intelligence, or AGI. This would be a machine intelligence that could think and reason and generalize as humans do, if not necessarily in a humanlike manner. A true AGI wouldn’t need to be fed millions upon millions of bits of carefully labeled data to learn. It wouldn’t make basic mistakes in language comprehension—or at least not for long. It would be able to transfer what it learned in one subject to another, drawing connections that enable it to become smarter and smarter. And it would be able to do all this at the accelerated speed of a machine, with an infallible memory that could be backed up in the cloud. To us slow, carbon-based humans, an AGI would seem to have superpowers, and there may be very few limits, if any, to what it could do. In this way, Sundar Pichai would be right—the development of artificial general intelligence really would be more significant than electricity or fire.

  The first country or company to develop powerful AGI would be in a position to utterly dominate its competitors, even to the point of taking over the world if it so chose. This would be a winner-takes-all competition, since the smart first move for the owner of a working AGI would be to sabotage any rival efforts to develop artificial general intelligence. And that should worry us, because the closest analogue we have is the race to build a nuclear bomb during World War II. Safety concerns at the Manhattan Project took a backseat to the all-important goal of developing the bomb before the Nazis. That worked out well enough—the Allies won the war, and you’ll recall that the atmosphere did not ignite and end life on Earth after Trinity—but the same dynamics could prove disastrous if the groups racing toward AGI are tempted to cut ethical corners to cross the finish line first. And that’s because if human-level AGI is actually achievable, it’s not likely to stay at human level for long.

  Nor is there any guarantee that human beings will remain in control of something that will wildly exceed our own capabilities in, well, everything. As Nick Bostrom wrote in his 2014 book, Superintelligence, which introduced the existential threat of AI to the general public: “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb.… We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.”76

  There is a classic Hollywood film that perfectly captures the confusion, the betrayal, and the chaos that could follow the creation of a superpowerful artificial intelligence—and despite all the times you may have seen a still of Arnold Schwarzenegger above a story about an AI takeover, it is not The Terminator. It’s The Sorcerer’s Apprentice, the famous Mickey Mouse short from the 1940 animated Disney film Fantasia.

  It begins with Mickey’s motivations. Tired of fetching water to fill a cistern in the sorcerer’s cave, Mickey (the apprentice) wants the broom (the AI) to do his menial labor for him. He puts on the sorcerer’s magic hat, and then, using a spell he barely understands, Mickey brings the broom to life. He instructs it to carry buckets of water from the fountain outside to the cistern. And the broom does so—the program works. The broom is doing exactly what it’s been told to do—so well, in fact, that the apprentice can relax in his master’s chair, where he drifts off and dreams of the unlimited power that will be his once he becomes a full-fledged sorcerer.

  But Mickey is soon awoken when the chair is knocked over by the force of the rising water. Has the spell been broken? Quite the opposite. The broom is still doing exactly what it was programmed to do: fill the cistern with water. And while the cistern looks full and then some, to the broom, as it would to an AI, there’s always a tiny probability that the cistern is not 100 percent full. And because all the broom knows is its spell—its software code—it will keep fulfilling those instructions over and over again, in an effort to drive that tiny remaining probability ever lower. While a human—or a cartoon mouse—knows that instructions aren’t meant to be followed to the absolute letter, the broom doesn’t have that basic common sense, and neither would an AI.
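  To make the broom’s logic concrete, here is a minimal sketch—my own illustration, with invented numbers, not anything from the book—of an agent that keeps acting because its estimated chance that the job is unfinished shrinks with every step but never reaches zero.

```python
from fractions import Fraction

# A toy model of the broom: "doubt" is the agent's estimated probability that
# the cistern is still not full. Each extra bucket halves the doubt, but exact
# arithmetic shows it never reaches zero—so a literal-minded objective of
# "make sure the cistern is full" always justifies hauling one more bucket.

doubt = Fraction(1, 2)  # hypothetical starting belief that the cistern isn't full

for bucket in range(1, 11):
    doubt /= 2
    print(f"after bucket {bucket}: chance the cistern is not full = {doubt}")

# The doubt ends at 1/2048 and would stay strictly positive forever; an agent
# told only to maximize the chance of a full cistern never has a reason to stop.
```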

  When Mickey tries to physically stop the broom, it marches over him. It doesn’t matter that Mickey is the programmer who instructed the broom to fill the cistern. The instructions weren’t to listen to Mickey at all times—they were to ensure that the cistern is filled. Mickey is now not something to be obeyed, but rather an obstacle to be overcome, just as any human getting in the way of a superintelligent AI—even for the best reasons—would become an obstacle to be overcome. When a desperate Mickey chops up the broom, all he does is create an army of brooms, each bent on carrying out his original commands. The broom has adapted, because even though self-preservation isn’t part of the broom’s instructions, it can’t fill the cistern if it’s been destroyed. So it fights for its life—just as a superintelligent AI would if we tried to unplug it.

  Mickey, now close to drowning in the sea of water that fills the cave, grabs the book of spells and frantically looks for some magic words—some code—that can reprogram his broom. But he’s not smart enough, just as we wouldn’t be smart enough to reprogram a superintelligent AI that would surely see those attempts as obstacles to fulfilling its goals—and respond accordingly. It’s only when the far more knowledgeable sorcerer himself returns that the broom can be disenchanted—reprogrammed—and the apprentice is saved. But as we should know by now, when it comes to man-made existential risks, we’re all apprentices. (The Sorcerer’s Apprentice analogy is taken from the work of Nate Soares at the Machine Intelligence Research Institute.)77

 
