Falter: Has the Human Game Begun to Play Itself Out?

Home > Other > Falter: Has the Human Game Begun to Play Itself Out? > Page 17
Falter: Has the Human Game Begun to Play Itself Out? Page 17

by Bill McKibben


  In his foundational 2008 paper, “The Basic AI Drives,” researcher Stephen M. Omohundro pointed out that even an AI pointed in the most trivial direction might cause real problems. “Surely no harm could come from building a chess-playing robot,” Omohundro begins—except that, unless it’s very carefully programmed, “it will try to break into other machines and make copies of itself, and will try to acquire resources without regard for anyone else’s safety. These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal-driven systems.” It’s really, really smart, and it keeps at its task, which is to play chess at any cost. “So, you build a chess-playing robot thinking you can turn it off should something go wrong. But to your surprise, you find that it strenuously resists your efforts to turn it off.”24

  Consider what’s become the canonical formulation of the problem, an artificial intelligence that is assigned the task of manufacturing paper clips in a 3-D printer. (Why paper clips in an increasingly paperless world? It doesn’t matter.) At first, says another Oxford scientist, Anders Sandberg, nothing seems to happen, because the AI is simply searching the internet. It “zooms through various possibilities. It notices that smarter systems generally can make more paper-clips, so making itself smarter will likely increase the number of paper-clips that will eventually be made. It does so. It considers how it can make paper-clips using the 3D printer, estimating the number of possible paper-clips. It notes that if it could get more raw materials it could make more paper-clips. It hence figures out a plan to manufacture devices that will make it much smarter, prevent interference with its plan, and will turn all of Earth (and later the universe) into paper-clips. It does so.”25 Those who have seen the film The Sorcerer’s Apprentice will grasp the basic nature of the problem, examples of which can themselves be almost endlessly (and wittily) multiplied. “Let’s say you create a self-improving AI to pick strawberries,” Elon Musk once said. “It gets better and better at picking strawberries and picks more and more and it is self-improving, so all it really wants to do is pick strawberries. So then it would have all the world be strawberry fields. Strawberry fields forever.”26

  Remember, in the vision of all these people, computers in the next few years will have brainpower far surpassing that of any person, or any group of persons, and these machines will keep teaching themselves to get smarter, 24/7. As intelligence explodes, and the AI gains the ability to improve itself, it will soon outstrip our ability to control it. “It is hard to overestimate what it will be able to do, and impossible to know what it will think,” James Barrat writes in a book with the telling title Our Final Invention. “It does not have to hate us before choosing to use our molecules for a purpose other than keeping us alive.” As he points out, we don’t particularly hate field mice, but every hour of every day we plow under millions of their dens to make sure we have supper.27 This isn’t like, say, Y2K, where grizzled old programmers could emerge out of their retirement communities to save the day with some code. “If I tried to pull the plug on it, it’s smart enough that it’s figured out a way of stopping me,” Anders Sandberg said of his paper clip AI. “Because if I pull the plug, there will be fewer paper clips in the world and that’s bad.”28

  You’ll be pleased to know that not everyone is worried. Steven Pinker ridicules fears of “digital apocalypse,” insisting that “like any other technology,” artificial intelligence is “tested before it is implemented and constantly tweaked for safety and efficacy.”29 The always lucid virtual reality pioneer Jaron Lanier is dubious about the danger, too, but for precisely the opposite reason. AI, he says, is “a story we computer scientists made up to help us get funding once upon a time.”30 Imperfect software, Lanier says, not ever-faster hardware, puts an effective limit on our danger. “Software is brittle,” he says. “If every little thing isn’t perfect, it breaks.”31 For his part, Mark Zuckerberg has described Musk’s worries as “hysterical,” and indeed, a few weeks after the Tesla baron made public his fears, the Facebook baron announced that he was building a helpful AI to run his house. It would recognize his friends and let them in. It would monitor the nursery. It would make toast. Unlike Musk, Zuckerberg perkily explained, he chose “hope over fear.”32

  A few months later, though, it emerged that Facebook’s AI-based ad system had become so automated that it was happily (and automatically) offering mailing lists to people who said they wanted to reach “Jew-haters.” Facebook’s reliance on automation “has to do with Facebook’s scale,” one analyst explained. With a staff of 17,000, the company has but one employee for every 77,000 users, meaning it has “to run itself in part through a kind of ad hoc artificial intelligence: a collection of automated user and customer interfaces that shift and blend to meet Facebooker preference and advertiser demand.”33 This is why Zuckerberg is one of the richest men on earth, but it is also a little scary, one example being the Trump presidency. Another came in 2017, when Facebook had to shut down an artificial intelligence system it had built to negotiate with other AI agents: The system had “diverged from its training in English to develop its own language.” At first the new lingo seemed “nonsensical gibberish,” but when researchers analyzed the exchanges between two bots named Bob and Alice, they determined that, in fact, the bots had developed a highly efficient jargon for bartering, even if it was essentially incomprehensible to humans. “Modern AIs operate on a ‘reward’ principle where they expect following a course of action to give them a ‘benefit,’” one researcher explained. “In this instance there was no reward to continuing to use English, so they built a more efficient solution instead.”34 As Zuckerberg meekly explained when he was summoned to testify before Congress in 2018, “Right now a lot of our AI systems make decisions in ways that people don’t really understand.”35 It’s not just Facebook. In 2016, Microsoft had to shut down an AI chatbot it had named Tay after just a single day because Twitter users, who were supposed to make her smarter “through casual and playful conversation,” had instead turned her into a misogynistic racist. “Bush did 9/11, and Hitler would have done a better job than the monkey we have now,” Tay was soon happily tweeting. “Donald Trump is the only hope we’ve got.”36

  Scientists have even theorized that AIs following their own impulses might explain why we haven’t found other civilizations out in space. Forget asteroids and supervolcanoes, says Bostrom—“even if they destroyed a significant number of civilizations we would expect some to get lucky and escape disaster.” But what if there is some technology “that (a) virtually all sufficiently advanced civilizations eventually discover and (b) its discovery leads almost universally to existential disaster”?37 That is to say, perhaps the reason we don’t hear from other civilizations is because interstellar space is dotted not with sentient life but with orbiting piles of paper clips.

  No one, I think, has a particularly good answer for this set of practical challenges. Unlike global warming or germline engineering, they’re not even exactly real—not yet. They’re hard to imagine because we’ve never had to imagine them. Even the engineers building these technologies just work on their particular pieces without ever putting the whole puzzle together. But quite a few of the people who have made it their business to think about these possibilities are scared—of massive inequality written into our genes, of chess-mad AIs. It should be enough to scare us into slowing down, instead of forever speeding up. We should be searching diligently for workable regulations, not cursing government for getting in the way.

  Still, I don’t want to pursue this line of thinking any further. Practical problems are by definition theoretically soluble—that’s why we call them “problems.” For the moment, let’s assume that we won’t create Frankenstein’s monsters and that we will make sure that all people have equal access to the fertility lab. Let’s assume that, to the degree AI is real, careful programmers will manage to make it into a benign and helpful force that reliably does our bidding. Let’s assume
that everything goes absolutely right. Let’s assume the ads come true.

  And then let’s ask a more metaphysical, and maybe more important, question: What does that do to the human game? What it does, I think, is begin to rob it of meaning.

  16

  This “human game” I’ve been describing differs from most games we play in that there’s no obvious end. If you’re a biologist, you might contend that the goal is to ensure the widest possible spread of your genes; if you’re a theologian, the target might be heaven. Economists believe we keep score via what they call “maximizing utility”; poets and jazz musicians fix on the sublime. I’ve said before that I think there are better and worse ways to play this game—it’s most stylish and satisfying when more people find ways to live with more dignity—but I think the game’s only real goal is to continue itself. It’s the game that never ends, which is why its meaning is elusive.

  Still, let’s think about those other, more obvious, games: tennis, baseball, stock car racing. They divert a preposterous amount of our time and energy, both physical and mental. All feature some way of keeping score, some way of knowing who’s won: most points, most runs, fastest times. They have prizes, championships. But even with all that, their meaning is a little elusive, too. Once the final game of the season has receded a few days into the past, even the most die-hard fan doesn’t really care that her team won. (After all, it’s only a few months from the end of the World Series to the start of spring training, when the slate is wiped clean and it all begins again.) What we remember are the stories that went into that victory; what lingers on are particular episodes of courage, of sublime skill, of transcendent luck, of great emotion. “It’s how you play the game” is the truest of clichés. We assign great meaning to these dramas; they become totems we repeat to one another, and to ourselves, for years. Ask me about the 2004 Red Sox, but not unless you have some time to spare.

  For those of us who play sports, this is doubly true. The competitions that we train for, sometimes obsessively, need goals: you can’t really have a race unless there’s a finish line to try to cross ahead of other people. But most people who play sports don’t get paid to do it, and no one else is watching; there’s no external reward at all. You do it entirely for the meaning, for the exhilarating sense of teamwork that comes from a perfectly executed pick and roll, the lift of the boat when all eight oars are swinging in perfect unison, the sense of discovery that comes from pushing against your own ever-changing limits. I’m a distance athlete—a mediocre, aging one who doesn’t race much anymore, but a few times every winter, I’ll put on a bib and line up for the start of a cross-country ski race, and an hour or three later I’ll cross a finish line somewhere in the middle of the pack. Literally no one cares how well I did, not even my wife. But for me, these are always great dramas, asking the same set of questions: am I willing to make myself hurt, to push past the daily and the easy and the normal? And often the answer is no. I raced last weekend. I was tired, and my mind preoccupied, and half a mile into the race I was twenty yards behind another guy, and there I stayed for the entire race, unable to will myself to go hard enough, hurt enough, to close the gap. No one else could have known or noticed, but I was a little disappointed in myself, just as on other days I’ve been absurdly if quietly proud. Yes, I’d come in 32nd or 48th or 716th, finishing anonymously in a knot of racers stumbling past some electric eye. But in the race I’d been monitoring in my head, against the guy who came in 33rd or 49th or 717th, I’d managed some great burst of effort, shown myself something I wasn’t sure was still there.

  So, here’s what begins to worry me: with the new technologies we’re developing, it’s remarkably easy to wash that meaning right out of something even as peripheral as sports. In fact, we’re very close to doing it. Erythropoietin, or EPO, is a hormone that stimulates the production of red blood cells. Happily, we have learned to produce it artificially, so we can give it to people suffering from anemia and to those who must undergo chemotherapy. It is remarkable medicine for the repair of problems in our bodies. Apparently, it was given to the cyclist Lance Armstrong when he was being treated for the testicular cancer that almost took his life, and he of course survived, and thank heaven all around. The researchers who figured out what EPO was and how to make it and what dosage made sick people healthy—they were playing the human game with panache.

  But if you’re healthy and you take EPO, you get extra red blood cells and can run faster and farther than people who don’t. Lance Armstrong also took EPO (and testosterone and human growth hormone and probably some other stuff) en route to seven Tour de France victories after his recovery from cancer. It enabled him to climb the Alps with a dash and grit never seen before. People thrilled to watch, transfixed by his epic ascents, and when he launched a charity, Livestrong, they joined by the millions, strapping on yellow plastic bracelets to commemorate the power of the human will. And then it emerged that it wasn’t triumph of the human will at all. Sure, he’d worked hard, but he’d done it in concert with those drugs. And for almost all of us, that robbed his victories of any real meaning. He was stripped of his titles, and the charity he’d founded asked him to step aside. “What people connect with is Lance’s story,” an official of his foundation said. “Take charge of your life.” But it turned out that that wasn’t really his story; instead, it was “find an unscrupulous doctor who will give you an edge.” It wasn’t dash and grit; it was EPO. Barry Bonds’s home runs were towering, awesome—and then it became clear that they were the product only in part of diligence, application, skill, gift. They were also the product of drugs. We test athletes for those drugs now, in an effort to keep sports “real,” to prevent the erosion of their meaning—because otherwise, it is all utterly pointless.

  This is not an attempt to be pure, to meet some philosophical ideal. We mix people and machines, for instance, in all kinds of ways. I love Vermont’s local stock car track (“Thunder Road, the nation’s site of excitement!”) because the men and women at the wheel show skill and courage. But I don’t think I’d bother going if the races were run by driverless cars. They could doubtless go faster, just as runners genetically altered to have more red blood cells can doubtless go faster. But faster isn’t really the point. The story is the point.

  If something as marginal (though wonderful) as sports can see meaning leach away when we mess with people’s bodies or remove them from the picture, perhaps we should think long and hard about more important kinds of meaning. The human game, after all, requires us to be human.

  * * *

  For some people, none of this causes any worry because they perceive no distinction between “artificial” and “natural.” Indeed, they say that anything we do is “natural” because we are a product of nature. “The three hundred different breeds of dogs that are around today are all the result of genetic selection over ten thousand years,” observes the Oxford ethicist Julian Savulescu. “Some are smart, some are stupid, some are vicious, some are placid, some are hardworking, some are lazy, that’s all genetic.” He goes on to say, “[W]hat took us ten thousand years in the case of dogs could take us a single generation,” once we can engineer human embryos.1 So, why not?

  It’s true, obviously, that humans can and do try to engineer their offspring. The mating of two Ivy League grads in the hope of producing a surefire Harvard admit can be as carefully scheduled as the breeding of two Chow Chows to ensure deep-set eyes. Consciously or unconsciously, people reliably try to select mates who will produce the kind of children they want. Indeed, in most of the world’s cultures, parents make the matches for their own kids, with the grandchildren very much in mind.

  Genetics is not the only tool parents use to try to produce the kids they desire, of course. We also, many of us, invest a great deal of time and energy and money building the right environment. From the moment the embryo has settled in the womb, its genetic code already determined, people start talking to their kids, playing them music. (The smart money is now
predicting that “Rosetta Stone language tapes for babies may soon usurp Beethoven as the womb soundtrack of choice.”) 2 We try to choose our kids’ friends, and their meals, and their pastimes. Some of this is well-intentioned, and some of it is cruel and overbearing—everyone knows people whose lives were stunted by this kind of parenting.

  And so, those who want to allow germline engineering often argue by analogy: If it’s okay to try to get your kids into Princeton, then surely it’s also okay to turn certain genes off or on in order to try to make those kids more intelligent. If we don’t limit the ability of parents to push and harass and love their children in a particular direction, why would we limit their ability to accomplish the same thing more efficiently with genetic engineering? It would make Ayn Rand mad as hell to suggest that parents shouldn’t be able to do this if they want. Here’s James Watson, discoverer of the double helix, who describes himself as a libertarian: “I don’t believe we can let the government start dictating the decisions people make about what sort of families they’ll have.”3

 

‹ Prev