End Times: A Brief Guide to the End of the World

by Bryan Walsh


  Even the advanced biotechnology of the future will require more expertise than pushing a button on a cell phone—but perhaps not that much more, and certainly far less than is required to use current weapons of mass destruction. “What we now have [with biotechnology] is the mirror opposite of nuclear weapons, which require great infrastructure, access to controlled materials and knowledge that you can’t simply go and look up online,” Gabriella Blum, a professor at Harvard Law School and the coauthor of the book The Future of Violence, told me. “So we ask ourselves: what’s people’s propensity to inflict harm, and in what ways?”

  It doesn’t even have to be deliberate harm. Because viruses and bacteria can self-replicate, even an accident could be just as catastrophic as a deliberate attack. (After all, every natural disease outbreak is, in a sense, an accident, and those accidents have taken the lives of billions of humans.) This is why biotechnology ultimately poses the single greatest existential risk humans will face in the years to come, as the science continues to mature. Biotechnology takes our ingenuity, our thirst for discovery—and turns it against us. It leaves us only as strong as our weakest, maddest link. It gives us promise and it gives us power, the most dangerous gifts of all.

  ARTIFICIAL INTELLIGENCE

  Summoning the Demon

  I received my first computer when I was eight years old, a Christmas gift my family gave to itself in 1986. It was an Apple IIe, a plump beige desktop with a cathode-ray tube monitor and an external drive for floppy disks. I remember the way the screen would buzz when it was turned on, giving off the slightest electric charge if you put a fingertip to the glass. I remember how the letters and numbers would sink into the keyboard when pressed, back when computer keyboards were still meant to evoke typewriters. I remember the sounds the computer would make when it was working, which I learned to diagnose like a doctor with a stethoscope pressed to a patient’s chest. When everything was operating cleanly—not often—the computer would hum contentedly. But when something went wrong, it would issue an angry wheeze from inside its plastic chassis, as if protesting what it was being forced to do by its inferior human users.

  Our family had owned pieces of technology before the Apple: TVs and VCRs, telephones and microwaves, alarm clocks and remote controls, even an old Atari 2600 video game console. But what set the Apple IIe apart—though I didn’t know it at the time—was that it was a general-purpose machine. Every other electronic product we owned had a single function—the microwave heated up food; the VCR recorded television; the Atari played Pong. But the Apple was designed to do anything that you could program it to do: operate a payroll program, do graphic design, create spreadsheets, run a word processor, or, as was mostly the case at our house, play games. Computers like the Apple IIe were limited only by the quality of their programs and the power of their hardware.

  That power has grown exponentially in the years since. The iPhone 7 I carry around with me is already more than three years old, but it has over 30,000 times more RAM than my old Apple IIe.1 Computing power has increased by more than a trillionfold since the mid-century days of room-sized mainframes, and that dizzying rate of improvement—captured in Moore’s law, which I referenced in the previous chapter—is still continuing.2 And not only are computers more powerful, they’re now ubiquitous. Computers are found inside our desktops and our laptops and our smartphones, but also our watches and our TVs and our speakers and our clocks and our cars and our baby monitors and our lightbulbs and our scales and our appliances and our alarm systems and our vacuum cleaners. The development of the Web and mobile data transmission means that those computers can now talk to each other, and the creation of the cloud—the offloading of processing and storage to remote server farms accessed via the internet—means that computing is much less limited by the power of an individual device. The venture capitalist Marc Andreessen—founder of Netscape and an early investor in Facebook and Twitter—has an apt description for the universalization of computerization. “Software,” Andreessen wrote in 2011, “is eating the world.”3
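
  For what it’s worth, the arithmetic behind that RAM comparison roughly checks out, assuming the stock configurations of both machines (64 KB in an Apple IIe, 2 GB in an iPhone 7); the sketch below is only an illustration of that back-of-the-envelope figure:

    # Rough check of the RAM comparison, assuming a stock 64 KB Apple IIe
    # and a 2 GB iPhone 7; actual configurations varied.
    apple_iie_ram_kb = 64
    iphone_7_ram_kb = 2 * 1024 * 1024   # 2 GB expressed in kilobytes
    print(iphone_7_ram_kb // apple_iie_ram_kb)   # 32768, i.e. "over 30,000 times"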

  It’s nearly eaten this book as well. Without the computing revolution, it would be impossible for scientists at NASA to track and potentially deflect incoming asteroids, just as it would be impossible for volcanologists to create a global monitoring system for supereruptions. The first nuclear bomb may have been developed without the help of modern computers, but the Gadget’s far more powerful successors—and the global nuclear war they threatened—wouldn’t have happened without advanced computing. The electricity that feeds the software that is eating the world is contributing to climate change, but powerful computing also helps manage energy efficiency and speeds the development of cleaner and renewable sources of power. The computing revolution made cheap genetic sequencing and now genetic synthesis possible, which is a valuable tool for the battle against infectious disease—and an extinction-level threat thanks to the new tools of biotechnology.

  Until recently, computing was powerful because it made us powerful—for better and for worse. Computers were general-purpose tools, like my Apple IIe, and they were our tools, controlled by us, working for us. They amplified human intelligence, which is the same thing as amplifying human power. It may feel as if our computers make us dumber, and perhaps they do, the way that riding in a car instead of running can make you weaker over time. Yet just as a human driving an automobile can easily outpace the fastest runner who ever lived, I know I’m much, much smarter with an iPhone in my hand, and all it can do, than I would be if left to my own non-Apple devices.

  But what happens if the intelligence augmented by explosive computing power is not human, but artificial? What happens if we lose control of the machines that undergird every corner of the world as we know it? What happens if our tools develop minds of their own—minds that are incalculably superior to ours?

  What may happen to us is what happens when any piece of technology is rendered obsolete. We’ll be junked.

  There’s no easy definition for artificial intelligence, or AI. Scientists can’t agree on what constitutes “true AI” versus what might simply be a very effective and fast computer program. But here’s a shot: intelligence is the ability to perceive one’s environment accurately and take actions that maximize the probability of achieving given objectives. It doesn’t mean being smart, in the sense of having a great store of knowledge, or the ability to do complex mathematics. My toddler son doesn’t know that one plus one equals two, and as of this writing his vocabulary is largely limited to excitedly shouting “Dog!” every time he sees anything that is vaguely furry and walks on four legs. (I would not put money on him in the annual ImageNet Large Scale Visual Recognition Challenge, the World Cup of computer vision.) But when he toddles into our kitchen and figures out how to reach up to the counter and pull down a cookie, he’s perceiving and manipulating his environment to achieve his goal—even if his goal in this case boils down to sugar. That’s the spark of intelligence, a quality that only organic life—and humans most of all—has so far demonstrated.
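
  That definition, perceive the environment and act to raise the odds of reaching a goal, is essentially how researchers describe an artificial “agent,” and it can be made concrete in a few lines of code. The one-dimensional toy world and the cookie-seeking rule below are invented for illustration, a minimal sketch rather than anything a real AI system would run:

    # Minimal sketch of the perceive-and-act loop behind that definition of
    # intelligence. The tiny "world" and the cookie goal are toy inventions.
    def perceive(world):
        return world["toddler"], world["cookie"]

    def act(observation):
        toddler, cookie = observation
        # Choose the step most likely to achieve the goal: move toward the cookie.
        return 1 if cookie > toddler else -1

    world = {"toddler": 0, "cookie": 3}
    while world["toddler"] != world["cookie"]:
        world["toddler"] += act(perceive(world))
    print("Goal achieved: cookie obtained.")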

  Computers can already process information far faster than we can. They can remember much more, and they can remember it without decay or delay, without fatigue, without errors. That’s not new. But in recent years the computing revolution has become a revolution in artificial intelligence. AIs can trounce us in games like chess and Go that were long considered reliable markers of intelligence. They can instantly recognize images, with few errors. They can play the stock market better than your broker. They can carry on conversations via text almost as well as a person can. They can look at a human face and tell a lie from a truth. They can do much of what you can do—and they can do it better.

  Most of all, AIs are learning to learn. My old Apple IIe was a general-purpose machine in that it could run any variety of programs, but it could only do what the program directed it to do—and those programs were written by human beings. But AIs ultimately aim to be general-purpose learning machines, taking in data by the terabyte, analyzing it, and drawing conclusions that they can use to achieve their objectives, whether that means winning at StarCraft II or writing hit electronic dance tracks—both of which are possible today.4 This is what humans do, but because an AI can draw on far more data than a human brain could ever hold, and process that data far faster than a human brain could ever think, it has the potential to learn more quickly and more thoroughly than humans ever could. Right now that learning is largely limited to narrow subjects, but if that ability broadens, artificial intelligence may become worthy of the name. If AI can do that, it will cease to merely be a tool of the bipedal primates that currently rule this planet. It will become our equal, ever so briefly. And then quickly—because an AI is nothing if not quick—it will become our superior. We’re intelligent—Homo sapiens, after all, means “wise man.”5 But an AI could become superintelligent.
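
  The core of that learning loop is far simpler than the scale suggests. A toy sketch, with invented numbers and the most basic model imaginable (nudging a single weight until a line fits the data), gestures at what “taking in data and drawing conclusions” means in practice; real systems differ in scale and architecture far more than in kind:

    # Toy sketch of learning from data: fit y = w * x to a handful of invented
    # examples by repeatedly nudging w to shrink the error.
    data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]   # made-up (x, y) pairs; y is roughly 2x
    w = 0.0
    learning_rate = 0.01
    for _ in range(1000):
        for x, y in data:
            error = w * x - y
            w -= learning_rate * error * x   # adjust w in the direction that reduces the error
    print(round(w, 2))   # prints 1.99: the "conclusion" that y is roughly twice x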

  We did not rise to the top of the food chain because we’re stronger or faster than other animals. We made it here because we are smarter. Take that primacy away and we may find ourselves at the mercy of a superintelligent AI in the same way that endangered gorillas are at the mercy of us. And just as we’ve driven countless species to extinction not out of enmity or even intention, but because we decided we needed the space and the resources they were taking up, so a superintelligent AI might nudge us out of existence simply because our very presence gets in the way of the AI achieving its goals. We would be no more able to resist it than the far stronger gorilla has been able to resist us.

  You’re reading this book, so you’ve probably heard the warnings. Tesla and SpaceX founder Elon Musk has cited AI as “the biggest risk we face as a civilization,”6 and calls developing general AI “summoning the demon.”7 The late Stephen Hawking said that the “development of full artificial intelligence could spell the end of the human race.”8 Well before authentic AI was even a possibility, we entertained ourselves with scare stories about intelligent machines rising up and overthrowing their human creators: The Terminator, The Matrix, Battlestar Galactica, Westworld. Existential risk exists as an academic subject largely because of worries about artificial intelligence. All of the major centers on existential risk—the Future of Humanity Institute (FHI), the Future of Life Institute (FLI), the Centre for the Study of Existential Risk (CSER)—put AI at the center of their work. CSER, for example, was born during a shared cab ride when Skype co-creator Jaan Tallinn told the Cambridge philosopher Huw Price that he thought his chance of dying in an AI-related accident was as great as death from heart disease or cancer.9 Tallinn is far from the only one who believes this.

  AI is the ultimate existential risk, because our destruction would come at the hands of a creation that would represent the summation of human intelligence. But AI is also the ultimate source of what some call “existential hope,” the flip side of existential risk.10 Our vulnerability to existential threats, natural or man-made, largely comes down to a matter of intelligence. We may not be smart enough to figure out how to deflect a massive asteroid, and we don’t yet know how to stop a supereruption. We know how to prevent nuclear war, but we aren’t wise enough to ensure that those missiles will never be fired. We aren’t intelligent enough yet to develop clean and ultra-cheap sources of energy that could eliminate the threat of climate change while guaranteeing that every person on this planet could enjoy the life that they deserve. We’re not smart enough to eradicate the threat of infectious disease, or to design biological defenses that could neutralize any engineered pathogens. We’re not smart enough to outsmart death—of ourselves, or of our species.

  But if AI becomes what its most fervent evangelists believe it could be—not merely artificial intelligence, but superintelligence—then nothing is impossible. We could colonize the stars, live forever by uploading our consciousness into a virtual heaven, eliminate all the pain and ills that are part of being human. Instead of an existential catastrophe, we could create what is called existential “eucatastrophe”—a sudden explosion of value.11 The only obstacle is intelligence—an obstacle put in place by our own biology and evolution. But our silicon creations, which have no such limits, just might pull it off—and they could bring us along.

  No wonder that a Silicon Valley luminary as bright as Google CEO Sundar Pichai has said that AI will be more important than “electricity or fire.”12 AI experts are so in demand that they can earn salaries as high as $500,000 right out of school.13 Militaries—led by the United States and China—are spending billions on AI-driven autonomous weapons that could change the nature of warfare as fundamentally as nuclear bombs once did. Every tech company now thinks of itself as an AI company—Facebook and Uber have hoovered up some of the best AI talent from universities, and in 2018 Google rebranded its entire research division as simply Google AI.14 Whether you’re building a social network or creating drugs or designing an autonomous car, research in tech increasingly is research in AI—and everything else is mere engineering.

  Those companies know that the rewards of winning the race to true AI may well be infinite. And make no mistake—it is a race. The corporations or countries that develop the best AI will be in a position to dominate the rest of the world, which is why until recently little thought was given to research that could ensure that AI is developed safely, to minimize existential risk and maximize existential hope. It’s as if we find ourselves in the early 1940s and we’re racing toward a nuclear bomb. And like the scientists who gathered in the New Mexico desert in the predawn morning of July 16, 1945, we don’t know for sure what our invention might unleash, up to and including the end of the world.

  Oppenheimer, Fermi, and the rest could only wait to see what Trinity would bring. But we can try to actively shape how AI develops. This is why existential risk experts are so obsessed with AI—more than any other threat the human race faces, this is where we can make a difference. We can hope to turn catastrophe to eucatastrophe. This is a race as well, a race to develop the tools to control AI before AI spins out of control. The difference could be the difference between the life and the death of the future.

  This is AI—the cause of and solution to all existential risk. As Hawking wrote in his final book, published after his death: “The advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity.”15

  Unless, of course, superintelligent AI is a fantasy. That’s something else that sets AI apart from other existential risks. We know we are warming the climate. We know we can make pathogens more virulent. And we know a nuclear button exists. But superintelligent AI might not be possible for centuries. It might not be possible at all. Which should either give us comfort or cause concern. Because one constant over the multi-decade history of AI is that whatever prediction we make about artificial intelligence is almost certainly going to be wrong.

  In 1956 a group of researchers in the field of neural nets and the study of intelligence gathered at Dartmouth College. They convened for what must rank among the most ambitious summer projects ever undertaken, after proposing the following plan to their funders at the Rockefeller Foundation: “An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”16 (And what did you do on your summer vacation?)

  As it turned out, the Dartmouth Summer Project did not exactly find how to make machines use language or improve themselves—two challenges that still bedevil AI researchers. But it did help kick off the first boom in AI, as scientists rushed to create programs that pushed the boundaries of machine capability.

  There were programs that could prove logic theorems, programs that could solve college-level calculus problems, even programs that could crack terrible puns. One of the most famous was Eliza, a natural-language processing computer program that mimicked the response of a nondirective Rogerian psychologist. Created by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory between 1964 and 1966, Eliza allowed users to carry on a conversation via keyboard with a facsimile of the most facile kind of psychologist, the sort who turns every statement a patient makes back into a question. (“I’m having problems with my mother.” “How does that make you feel?”)
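
  The trick behind Eliza was less understanding than pattern-matching: scan the patient’s statement for a keyword, then reflect it back as a question. The handful of rules below is an invented stand-in for Weizenbaum’s original script, a bare sketch of the idea (the real program also swapped pronouns, turning “my” into “your”):

    # Heavily simplified, Eliza-style reflection. These rules are invented
    # placeholders, not Weizenbaum's original DOCTOR script, and pronoun
    # swapping ("my" -> "your") is omitted for brevity.
    import re

    rules = [
        (r"i'?m having problems with (.*)", "Why do you say you are having problems with {}?"),
        (r"i feel (.*)", "How long have you felt {}?"),
        (r"my (.*)", "Tell me more about your {}."),
    ]

    def respond(statement):
        text = statement.lower().strip(".!?")
        for pattern, template in rules:
            match = re.search(pattern, text)
            if match:
                return template.format(match.group(1))
        return "How does that make you feel?"   # the all-purpose Rogerian fallback

    print(respond("I'm having problems with my mother."))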

  Weizenbaum actually designed Eliza to demonstrate how superficial communication was between humans and machines at the time—though that didn’t stop users, including his own secretary, from sharing their secrets with the program—but the creation of an AI that could converse seamlessly with a human being has been the holy grail of AI since before the field properly existed.

  In 1950 the English mathematician and computer scientist Alan Turing invented what he called the “Imitation Game,” now known as the Turing Test. In the game a human judge carries on a conversation via text with two entities—one a fellow human, the other a computer program. The judge doesn’t know whether they are communicating with a machine or a person, and if they can’t tell the difference between the two from the responses, the machine is said to have passed the Turing Test. To this day the test remains a popular marker of the progress of AI research—there’s an annual contest, the Loebner Prize, that judges chatbots against the Turing Test—though it has been criticized for being limited and anthropocentric. If you’ve ever wanted to strangle a customer service chatbot, you can thank Alan Turing—though you can also thank him for breaking the Nazis’ Enigma code and helping to win World War II.

  Turing predicted that by the year 2000 a computer with 128 MB of memory—a colossal amount by the standards of 1950—would have a 70 percent chance of winning his Imitation Game. He was wrong on both counts. Computers had far larger memories by 2000 than 128 MB—which today would barely be enough to store a digital copy of Pink Floyd’s double album The Wall—but still weren’t passing his test. More recently chatbots have performed better on the Turing Test, although it’s impossible to know whether that’s because the bots are becoming more like humans, or because texting-addled, screen-addicted humans are becoming more like bots.17
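
  The aside about The Wall seems about right, for what it’s worth, under rough assumptions (an 81-minute running time and a 192 kbps MP3 encoding; both figures are approximations, and other encodings change the number):

    # Back-of-the-envelope check, assuming The Wall runs about 81 minutes and is
    # encoded as a 192 kbps MP3; both figures are rough assumptions.
    minutes = 81
    bitrate_kilobits_per_second = 192
    megabytes = minutes * 60 * bitrate_kilobits_per_second / 8 / 1000
    print(round(megabytes))   # about 117 MB, so 128 MB really would be barely enough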

 
