Falter: Has the Human Game Begun to Play Itself Out?


by Bill McKibben


  12

  There’s one other spot on Earth with game-changing leverage. And though it’s not that far as the drone flies from the Palm Springs resorts where the Kochs gather their cronies each year, it’s a very different world.

  The tech billionaires who inhabit Silicon Valley aren’t at all like the fossil fuel moguls and assorted other magnates who celebrated Trump’s rise to power. Instead of aging troglodytes, they’re mostly youthful social progressives. Don’t look for them on the golf course; they’re kite-surfing. No, they’re not. They used to kite-surf, but now they’re hydrofoiling. “It’s like flying,” Ariel Poler, a start-up investor, told a reporter—he was standing by the winged doors of his Tesla and pulling on body armor and a helmet before heading into the ocean. “The board doesn’t touch the water. It’s like an airplane wing.”1

  Anyway, these tech masters would laugh, and not politely, at the thought of trying to resurrect an eighteenth-century technology like coal. They’re all about the future: Tesla is installing the world’s largest rooftop solar array on the top of its Gigafactory, which produces more lithium-ion batteries than any facility on Earth. Google spelled out its corporate logo in mirrors at the giant solar station in the Mojave Desert on the day it announced that it would power every last watt of its global business with renewable energy; it’s the world’s biggest corporate purchaser of green power.2

  But there is exactly one human being who bridges that cultural gulf between these different species of plutocrat. Vanity Fair, in 2016, declared that Ayn Rand was “perhaps the most influential figure in the tech industry.” Steve Wozniak (cofounder of Apple) said that Steve Jobs (deity) considered Atlas Shrugged one of his guides in life.3 Elon Musk (also a deity, and straight out of a Rand novel, with his rockets and hyperloops and wild cars) says Rand “has a fairly extreme set of views, but she has some good points in there.”4 That’s as faint as the praise gets. Travis Kalanick, who founded Uber, used the cover of The Fountainhead as his Twitter avatar. Peter Thiel, a cofounder of PayPal and an early investor in Facebook, once launched a mission to develop a floating city, a “sea-stead” that would be a politically autonomous city-state where national governments would have no sway.5

  Some of Silicon Valley’s antigovernment sentiment is old, or at least as old as anything can be in Silicon Valley. As early as 2001—before the iPhone and Facebook, back in the days when you just checked email—a writer named Paulina Borsook published Cyberselfish, a book she called a “critical romp through the terribly libertarian culture of high-tech.” Even then, she said, it was unsurprising to open the local newspaper—this was right before Craigslist decimated local newspapers—and see a personal ad that read, “Ayn Rand enthusiast is seeking libertarian-oriented female for great conversation and romance. I am a very bright and attractive high-tech entrepreneur.” Every industry has a flavor, and tech’s was the hatred of regulation, a “pervasive weltanschauung” that “manifests itself in everything from a rebel-outsider posture” to “an embarrassing lack of philanthropy.”6 Suspicion of government, she said, was “the techie equivalent to the Judeo-Christian heritage of the West. Just as, if you live in the West, you are shaped by this Judeo-Christian heritage regardless of how you were brought up,” so Randian hubris flowed through the water in Cupertino and Menlo Park.7 Borsook credited it to many things: for one, annoyance at the government’s clueless early attempts to regulate tech by, say, banning strong cryptographic protection. And then there was the simple fact that coders live, by necessity, in a logical, rule-based universe that “can put you in a continual state of exasperation verging on rage at how messy and imperfect humans and their societies are.”8 It’s all a little silly, as it was government investment that got the internet up and running in the first place, but there’s no denying that anyone put behind a keyboard for the first time comes away with a sense of autonomy: You can explore anywhere you want to go. It feels free.

  In any event, the leaders of this community are deeply attached to the idea that they should be left alone to do their thing: create value, build apps, change the world. For them, the key Rand quote is not about the immorality of community—most new tech is theoretically focused on building community, after all—or even about the horror of taxes. Instead, it’s from early in The Fountainhead, when Howard Roark is explaining to his architecture professors that he’s going to design buildings the way he wants to. The school’s dean, who has accused him of going “contrary to every principle we have tried to teach you, contrary to all established precedents and traditions of Art,” then asks, “Do you mean to tell me that you’re thinking seriously of building that way, when and if you are an architect?”

  “Yes.”

  “My dear fellow, who will let you?”

  “That’s not the point. The point is, who will stop me?”9

  For reasons that will soon become clear, that may turn out to be the crucial question of the human future.

  PART THREE

  The Name of the Game

  13

  I was talking to this guy I know named Ray, and he asked me what I’d been up to that day. I said I’d been out cross-country skiing with the dog.

  “Cross-country skiing is fine,” he said. “But I don’t like downhill. I also don’t like being on the side of cliffs. I don’t drive anymore on roads that go around the side of mountains. I avoid that, because we don’t have backups yet for our version-one biological bodies.”

  How was he feeling? I asked.

  “So far, so good,” he said. “I’ve fine-tuned my regimen. I’ve gotten it down to about a hundred pills a day. It used to be more.”

  “A hundred pills?”

  “A good example is metformin. It appears to kill cancer cells when they try to reproduce.… Nominally it’s for diabetes. I’ve been saying for twenty-five years it’s a calorie-restriction mimetic.”

  “Uh-huh,” I said.

  “The reason that people who are taking it don’t have zero cancer cells is that they don’t take it quite right. They take a big dose in the morning. You need to take a five-hundred-milligram extended-release pill every four hours. It’s more than the maximum dose, nominally.”

  So, this guy Ray, Ray Kurzweil, is the “director of engineering” at Google, which is arguably the most important company on the planet. He leads a team charged with developing artificial intelligence. And the reason he is so careful in his daily life is that he firmly believes that if he can just live to 2030 or so, he will never die, that we’re accelerating with such great speed toward technological power so immense that it will reshape everything about us. Again, he’s not a crank—or, if he is, he’s a crank who’s directing engineering efforts at one of the most valuable companies on Earth.

  “In 1955, when I was seven, I recall my grandfather describing a trip to Europe,” Kurzweil told me one day.1 “He was given the opportunity to handle Leonardo da Vinci’s notebooks. He described this experience in reverential terms. These were not documents written by God, but by a human. This was the religion I grew up with: the power of human ideas to change the world. And the notion that you, Ray, could find those ideas. To this day, I continue to believe in this basic philosophy. Whether it’s relationship difficulties or great social and political questions, there is an idea that will allow us to prevail.”

  Of all Kurzweil’s many ideas, acceleration is his most profound, “a key basis for my futurism,” he says. Essentially: our machines are getting smarter, and they’re getting smarter faster. “The number of calculations per second, per constant dollar, has been on a smooth trajectory right back to the 1890 census,” he says, a trajectory that he emphasizes is accelerating exponentially, not linearly. His critics, he says, “apply their linear brains. It’s like when we were sequencing the genome. People said it would take seven hundred years. But when you finished one percent after seven years, you were almost done; you’re only seven doublings from one hundred percent. So, our ability to sequence, understand, and reprogram those genetics is also growing exponentially. That’s biotechnology. We’re already getting significant progress in things like immunotherapy. We can reprogram your system to consider cancer cells a pathogen and go after it. It’s a trickle now, but it will be a flood over the next decade.”

  Kurzweil’s maxim, he insists, applies not just to biotechnology. The basic idea (that the power of a computer keeps doubling and doubling and then doubling again) governs a wide variety of fields, all of which show signs that they’re coming into the steep slope of the growth curve. For Kurzweil, it’s much like what happened two million years ago, when humans added to their brains the big bundle of cells we call the neocortex. “That was the enabling factor for us to invent language, art, music, tools, technology, science. No other species does these things,” he says. But that great leap forward came with intrinsic limits: if our brains had kept expanding, adding neo-neocortexes, our skulls would have grown so large we could never have slid out the birth canal. This time that’s not a problem, given that the big new brain is external: “My thesis is we’re going to do it again, by the 2030s. We’ll have a synthetic neocortex in the cloud. We’ll connect our brains to the cloud just the way your smartphone is connected now. We’ll become funnier and smarter and able to more effectively express ourselves. We’ll create forms of expression we can’t imagine today, just as the other primates can’t really understand music.”

  Once again, this is Google’s director of engineering speaking. And speaking not just for himself. His boss, Sergey Brin, says the same thing, quite plainly: “You should presume that someday we will be able to make machines that can reason, think, and do things better than we can.”2 To a remarkable extent, we already have. In 2016, the world’s best Go player was beaten by a computer program, which went on the next year to beat all sixty of the world’s top players, even though Go is supposed to be much harder, subtler, more human than chess. In 2017 an artificial intelligence program crushed the world’s top players at Texas Hold ’Em—that is to say, it knew how to bluff. Given enough examples, AI programs can now learn almost anything: Facebook’s DeepFace algorithm recognizes specific human faces in photos 97 percent of the time, “even when those faces are partly hidden or poorly lit,” which is on a par with what people can manage.3 (Microsoft boasts that its software can reliably distinguish between pictures of the two varieties of Welsh corgi.)4 An AI bot spent two weeks learning a video game called Defense of the Ancients, and then defeated the world’s top players. “It feels a little like a human but a little like something else,” said one of the players who was vanquished.5

  Sure, it all seems a little trivial—games, after all. The most visible product so far from Kurzweil’s team at Google is Smart Reply, those three suggested ripostes at the bottom of your Gmail. (“That sounds great.” “Can’t make it then.” “Let me check!”) But Kurzweil’s not really out to help you answer your email; he’s out to collect more data, to help the cloud learn. Wired magazine reported in 2017 that it’s “just the first visible part of the group’s main project: a system for understanding the meaning of language. Codenamed Kona, the effort is aiming for nothing less than creating software as linguistically fluent as you or me.”6 Sound unrealistic? If so, it won’t be for the lack of computing power. Kurzweil has estimated that by 2020, a thousand-dollar PC will have the computing power of a human brain: twenty million billion calculations a second. By 2029, it should be a thousand times more powerful than the human brain, at least by these brute measures. By 2055, “$1,000 worth of computing power will equal the processing power of all the humans on the planet,” he says.7 By 2099, should we get there, “a penny’s worth of computing power will be a billion times as powerful as all the human brains now on the planet.”

  * * *

  For the moment, let’s not try to figure out whether this is a good thing or a bad thing. For now, let’s just operate on the assumption that it’s a big thing, that it represents an unmatched degree of leverage. If the unchecked and accelerating combustion of fossil fuel was powerful enough to fundamentally change nature, then the unchecked and accelerating technological power observable in Silicon Valley and its global outposts may well be enough to fundamentally challenge human nature. It took a couple of hundred years to do it with coal and gas and oil, though that was an example of acceleration, too—half the emissions, and the ones that seem to have shattered various physical thresholds, came in the last three decades. It probably won’t take that long with artificial intelligence, or so the scientists who study the field tell us.

  To be clear, we already have achieved what the writer Tim Urban calls artificial narrow intelligence, sometimes referred to as “weak AI.” “There’s AI that can beat the world chess champion in chess, but that’s the only thing it does. Ask it to figure out a better way to store data on a hard drive and it’ll look at you blankly,” he says.8 This weak AI is all around us. It’s why Amazon knows the thing you want to buy next, and it’s how Siri sort of responds to your queries, and it’s why your new car knows to slow down if another car pulls in front of you. When the fully self-driving car finally arrives in your driveway, that will be weak AI to the max: thousands of sensors deployed to perform a specific task better than you can do it. You’ll be able to drink IPAs for hours at your local tavern, and the self-driving car will take you home—and it may well be able to recommend precisely which IPAs you’d like best. But it won’t be able to carry on an interesting discussion about whether this is the best course for your life.

  That next step up is artificial general intelligence, sometimes referred to as “strong AI.” That’s a computer “as smart as a human across the board, a machine that can perform any intellectual task a human being can,” in Urban’s description. This kind of intelligence would require “the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.”9 Five years ago a pair of researchers asked hundreds of AI experts at a series of conferences when we’d reach this milestone—more precisely, they asked them to name a “median optimistic year,” when there was a 10 percent chance we’d get there; a “median realistic year,” a 50 percent chance; and a “median pessimistic year,” a 90 percent chance. The optimistic year: 2022. The realistic year (the year when they thought there was a 50 percent likelihood): 2040. The pessimistic year: 2075. That is, the people working in the field were convinced that there was a 90 percent chance we’d have strong artificial intelligence by the time a child born this year was middle-aged (middle-aged by our current reckoning—stay tuned). A similar survey, conducted more recently, simply asked experts when they thought we’d get there. Forty-two percent said 2030 or before; only 2 percent said “never.”10 As one Carnegie Mellon professor put it, “I no longer have the feeling, which I had twenty-five years ago, that there are gaping holes. I know we don’t have a good architecture to assemble the ideas, but it’s not obvious to me that we are missing components.”11

  What happens then? What happens once a computer is as smart as a person? Probably, say some of these AI experts, it just keeps going. If it’s been programmed to keep increasing its intelligence, perhaps it takes it an hour to go from the understanding of an average four-year-old to “pumping out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do,” says Urban. “Ninety minutes after that, the AI has become an artificial super intelligence, 170,000 times more intelligent than a human.” As he points out, we have a hard time imagining that, “any more than a bumblebee can wrap its head around Keynesian economics. In our world smart means a 130 IQ and stupid means an 85 IQ—we don’t have a word for an IQ of 12,952.”12 You can see how what I’ve been calling “the human game” might be somewhat altered by such a development, or any development remotely like it. It’s leverage on a different scale.

  But before we figure out how likely all this is, and before we figure out if it’s a good idea, let’s look at one particular real-world example of these fast-growing new powers. It will give us a better sense of how far we can go and still stay ourselves.

  14

  In 1953, Francis Crick and James Watson discovered the double-helix nature of DNA, which was a remarkable achievement, but it didn’t change the world overnight. Some highlights on the genetic time line since:

  1974: The first genetically modified animal is produced (a mouse).

  1996: Some Scottish blokes clone a sheep and name it Dolly.

  1999: An artist named Eduardo Kac sticks some jellyfish DNA in a rabbit and makes her glow a fluorescent green when exposed to black light. “It is a new era and we need a new kind of art,” he explains. “It makes no sense to paint as we painted in caves.”

  Also 1999: Scientists at Princeton, MIT, and Washington University find that they can boost a mouse’s memory by changing a single gene—these “Doogie mice,” named after a precociously smart TV character now lost to the mists of time, can locate a hidden underwater platform faster than unimproved mice.

  2009: Asian scientists produce an even smarter rodent, a rat they call Hobbie-J, after a character in a Chinese cartoon. “When these rats were given a choice to take a left or a right turn to get a chocolate reward, Hobbie-J was able to remember the correct path for much longer than the normal rats, but after five minutes he, too, forgot. ‘We can never turn it into a mathematician,’ the researcher explains. ‘They are rats, after all.’”1

 
