by Bryan Walsh
In March 2018, I visited my friend Ed Finn in Tempe, Arizona, where he was putting on a conference about human space settlement, which included a mockup of a city on the moon set almost 150 years in the future. Ed is the founding director of the Center for Science and the Imagination at Arizona State University. His job is to explore the intersection between art and science, and how they both might be used to predict and shape the future we want. He is someone who welcomes what’s to come—but when I asked him about Musk’s and Bezos’s plans, even he was skeptical that space could serve as our species’s escape pod.
“When we think about the climate crisis and other threats over the next hundred years, this space stuff is a luxury,” Ed told me. “It is never going to solve our problems on Earth and it will never be a safety valve. Maybe in a few hundred years we’ll be ready. But in the short term, space is an experiment, not a survival plan.”
We still need survival plans.
THE END
Why We Fight
Some five hundred generations have passed since the first stirrings of what we now call human civilization.1 Five hundred times a generation of fathers and mothers has ushered children into the world, who in time would succeed them, giving birth to children of their own who take their turn on the wheel. In that span of time—barely more than a blink against the life span of the cosmos—whole peoples have risen, fallen, and vanished from the face of the Earth. As mortality is to individuals, so collapse seems to be to human societies—simply part of our nature. Just as our children and our children’s children continue our story, though, so those lost civilizations of the past leave something behind to be found by us, writing themselves into our time, as we hope to write ourselves into tomorrow.
But what happens when there’s no one left to read the past, or write the future?
Bill Kitchen has one idea. Kitchen is an electrical engineer who made a fortune designing and selling amusement rides—you might know his iFly indoor skydiving attraction—but growing up as a young boy in Florida, his passion was cosmology and astronomy. He is a lifetime member of the SETI League, a nonprofit dedicated to privatizing the search for extraterrestrial intelligence, and like many of the other people we’ve met in this book, he is deeply worried about the fate of his species. “We humans are destroying the planet,” he told me in his gentle southern accent. “It could be an asteroid or a nuclear war or rogue genetics or an AI that decides the Earth is better off without us. But humanity is in danger.”
Kitchen wants to build a digital time capsule that would be a compendium of Earth’s natural and anthropological history, a full suite of the best the human species has to offer, as well as a new Rosetta Stone that would explain our language and how to decode the information the time capsule contains. It would include the sequenced genomes of as many animals and plants on Earth as possible, a Noah’s Ark for the ultimate flood, as well as the genomes of actual human beings. And all of this information would be beamed into space through a high-powered laser that Kitchen calls the Interstellar Beacon.2 Kitchen plans for the beacon—which is still in the earliest of early stages—to be updated and broadcast continuously, right up to the moment of our eventual demise. “This will be our legacy,” he said.
There have been other plans to create similar planetary archives. The Alliance to Rescue Civilization (ARC) advocated for the establishment of a database of humanity, complete with DNA samples of Earth life, to be stored on the moon and staffed by astronaut archivists. Lunar Mission One is a planned project to send a robotic probe to the moon’s Shackleton Crater, where it will bore a hole and store public and private data about humanity—including the genetic data of people who will pay to fund the project. (A launch had been planned for 2024, but funding is fuzzy.) Memory of Mankind is an archival venture that will inscribe human knowledge on ceramic disks inspired by the Sumerian clay tablets that contain some of the oldest recorded information in human history. The disks will be buried deep in a salt mine in Austria, there to be found by whoever or whatever might one day come across them. The Arch Mission Foundation is encoding human knowledge on 5-D optical data storage disks and plans to seed them around the solar system. The group’s first disk is currently somewhere in space—it was launched on Elon Musk’s SpaceX Falcon Heavy test flight in February 2018, tucked inside a racer-red Tesla Roadster. Fittingly, the disk contained digital copies of the sci-fi author Isaac Asimov’s Foundation trilogy, which tells the story of a scientist who foresees the collapse of civilization and creates a compendium of all knowledge to give humans of the future a chance to rebuild.
These projects differ in their aims and methods. Some, like Memory of Mankind, are meant to be analog time capsules at a moment when information has become increasingly digitized, and therefore vulnerable to a technological collapse. Others, like Kitchen’s Interstellar Beacon or Lunar Mission One, hold out the unlikely promise that individuals might actually be resurrected by some power in the future, human or otherwise. What they have in common is an awareness that our time may be coming to a close, and a hope that it will be possible to leave something of ourselves for a future that may yet be.
These beacons and depositories would differ from the pyramids and other monuments left by past civilizations. Like the “two vast and trunkless legs of stone” in Percy Shelley’s “Ozymandias,” those ancient shrines proclaimed the greatness of the kings and queens of their age, in full expectation that their people and their culture would live on indefinitely. But we, who have unearthed the colossal wrecks of the deep past and suspect the tenuousness of our shared future, should know better.
The field of existential risk dwells on the darkest of topics, but a strain of hope runs through it, hope that some of the same technologies that could doom us could also give us the power to cheat not just death but extinction itself, that we could be the generation that breaks the cycle of collapse altogether. No monuments would we need, for our ever-growing presence, stretching to every corner of space, would be all the proclamation that we require, until the universe itself goes dark. Perhaps. Myself, I’d settle for something humbler. Just a record of who we were. A permanent mark on the cosmos.
Back at the start of this book, I recounted the 2018 unveiling of the Doomsday Clock’s new setting. The experts behind the clock—which has kept humanity’s time since 1947—set the hands at 11:58. That was closer to midnight than any year since 1953, when both the United States and the Soviet Union tested their first thermonuclear warheads within nine months of each other. Announcing the new setting of the Doomsday Clock in the Bulletin of the Atomic Scientists in August 1953, Eugene Rabinowitch, the Russian-born American biophysicist and Manhattan Project alumnus who had cofounded the Bulletin, wrote that “only a few more swings of the pendulum, and, from Moscow to Chicago, atomic explosions will strike midnight for Western civilization.”3
The Bulletin now updates the setting on the Doomsday Clock each year, and on January 24, 2019, the journal’s Science and Security Board gathered again at the National Press Club in Washington, D.C., to reveal the new time. William Perry, the former U.S. secretary of defense whom we met in chapter 3, and ex–California governor Jerry Brown stood side by side on the stage as they unveiled the clock. The time was 11:58, two minutes to midnight—the same as the previous year. If the apocalypse wasn’t any closer, it was no further away. The end of the world appeared to be in a holding pattern.
“Two minutes to midnight invokes memories of 1953,” said Perry, who had lived and worked through every close call of the Cold War. “I know it because I was there.” For his part, Brown, who had just finished his second stint as the governor of America’s most populous state and had recently joined the Bulletin as its executive chair, called out the world leaders ignoring our growing existential peril. “The blindness and stupidity of the politicians and their consultants is truly shocking in the face of nuclear catastrophe and danger,” Brown said. “It is two minutes to midnight.… It’s hard to even feel or sense the peril and the danger we’re in, but these scientists know what they’re talking about. It’s late and it’s getting later, and we got to wake people up.”4
If I had one objective in writing this book, it’s that: wake people up. Wake them up to the reality of existential threats, whether from nature or the hand of man—and wake them up to the fact that we’re not helpless in the face of those threats. The first part is easier. We may not fear thermonuclear war the way we once did, but the events of the past few years, the tensions between the United States and Russia, the way wild cards like North Korea have crashed the nuclear club, have at least snapped us out of the atomic amnesia of the post–Cold War years. With each passing month, climate change ever more indelibly imprints itself on the global consciousness, and with it, the deepening sense that we have irredeemably damaged our planet. You don’t need to know the meaning of a climate tipping point to fear that we have passed it. And while we may not fully understand emerging technologies like artificial intelligence or synthetic biology, we know enough to worry about where they might take us. It’s not an accident that our films and novels and TV shows mine the end of the world for material. We’re afraid. But fear isn’t sustainable—and fear isn’t a strategy for survival.
In keeping the Doomsday Clock set at two minutes to midnight, the Bulletin made the only decision it could. The world hadn’t gotten perceptibly worse over the course of 2018; there were even some improvements, like the beginnings of negotiation between the United States and North Korea on nuclear weapons. But on balance it hadn’t gotten any better, either—arms control treaties between Washington and Moscow broke down, and the globe kept warming, even as carbon emissions reached a historic high. The Bulletin had a term for the state we now find ourselves in, circling the drain of Armageddon: a “new abnormal,”5 as the physicist Robert Rosner put it the day of the clock’s unveiling, “a disturbing reality in which things are not getting better and are not being effectively dealt with.” It’s not just that we find ourselves in a state of existential fear—we’ve had reason for such fear since the morning of July 16, 1945. But that fear has bred not passion but paralysis. Though we can imagine the end of the world all too easily, we can’t imagine coming together to save it. And that creeping futility is what we must overcome.
The Doomsday Clock is a brilliant symbol, but a symbol is all it is. There is no countdown to the end—at least not one we can hear. But if our current existential risks worsen with each passing year, and if we continue to add new ones, the odds of our long-term survival will be slim.
The hopeful view is that what appears to be ever-increasing existential risk is actually a temporary bottleneck created by new technologies we can’t yet control and by environmental challenges that are a function of our accelerating growth, like climate change. If we can make it through that bottleneck, we’ll find safety on the other side. We just need a breakthrough. Maybe it will be artificial intelligence, ethical and controlled. Maybe it will be some mix of clean energy and carbon engineering that defuses climate change and gives us centuries more to grow on this planet. Maybe Elon Musk and Jeff Bezos are right, and the move to space will keep us safe. In this vision we survive—and thrive—not by slowing down, but by speeding up.
The age of existential risk has sharpened the stakes, but this has been the central human challenge for as long as there have been human beings. Faced with limits, we invent new technologies and new practices that allow us to grow, which then use up more resources and create new risks, forcing us to innovate again to keep one step ahead of our growing capacity for both success and destruction. This is how we put seven and a half billion people on this planet. This is how we reached a point where the life of the average human being is longer and healthier and richer and just plain better than it has ever been before, no matter how fed up and pessimistic we may feel on a day-to-day basis. But with each passing year the race becomes faster and harder to run. Sooner or later we may stumble, and be overtaken.
There is another option. We could deliberately choose to slow down, to select a more sustainable speed, to eschew both the potential risks and potential benefits of emerging technology. This, at its heart, is what environmentalists and conservationists have long called for us to do, and it applies not just to our energy use but to our mind-set, as individuals and as a species. What if we choose to live within our means?
It might work. But we would be surrendering much as well. In seeking to avoid existential risk, we would be giving up existential hope, a lottery ticket to a technological heaven where limits no longer exist. And we would be asking ourselves to alter what seems to be a basic drive of humanity: growth. Great religions claiming billions of adherents counsel humility and abstention, yet look around. The drive to grow and compete is so hardwired into human beings that it can seem as if the only way we could change it would be to change our very DNA. Not by political activism, or moral suasion, but by rewriting our own source code.
The most startling conversation I had while researching this book wasn’t with a nuclear warrior like William Perry or a gene-editing scientist like George Church. It was with a philosopher of ethics at the University of Oxford named Julian Savulescu. Savulescu believes that the new technologies I’ve highlighted and the thirst for growth have put humanity at risk of what he calls “Ultimate Harm”—meaning the end of the world. It might be the global threat of climate change, the nation-state threat of nuclear war, or the individual threat of bioengineered viruses. Savulescu’s point is that as the power to inflict Ultimate Harm spreads, human ethics become the only emergency brake. If the world will blow up if just one of us pushes the self-destruct button, then we will survive only if human beings—all human beings—can be trusted not to push that button. The problem, Savulescu told me, is that we don’t have the ethics to handle the dangerous world that we ourselves have created. “There’s this growing mismatch between our cognitive and technological powers and our moral capacity to use them,” he said.
Savulescu has a radical solution. He suggests that the same cutting-edge biotechnology that now poses an existential risk itself—the greatest looming existential risk, in my view—could one day be used to engineer more ethical and more moral human beings. As we learn to identify the genes associated with altruism and a sense of justice, we could upregulate them in the next generation, creating children who would innately possess the wisdom not to use that terrifying bioweapon, who would see the prudence in curbing their present-day consumption to ensure that future generations have a future at all. The options for self-destruction would still exist, but our morally bioenhanced offspring would be too good to choose them.
If that sounds like a desperate measure, well, so are the times. “I think we’re at this point where we need to look at every avenue,” Savulescu said. “And one of those avenues is not just looking to political reform—which we should be doing—but also to be looking at ourselves. We’re the ones who cause the problems. We’re the ones who make the choices. We’re the ones who create these political systems. No one wants to acknowledge the elephant in the room, and that is that human beings may be the problem, not the political system.”6 It’s not just ethical AI that we would need to create. It’s ethical human beings.
This is a moral philosopher’s thought experiment, not a concrete plan to begin editing genes for altruism into our babies—which, it should be noted, we’re not close to knowing how to do, even if we wanted to. But as we spoke—Savulescu in Oxford, me in Brooklyn—I realized he was trying to answer a question that had nagged me since I began reporting on environmental issues years ago, and one that followed me throughout the time I spent on this book: is it easier to change people or technology? If you hold out hope for people, then you believe that we can be persuaded to behave in a way that is more sustainable, even if it demands sacrifice. If you believe technology is more responsive, then you’re in favor of running the race against risk faster, putting faith in our ability to innovate ahead of doom.
From what I have observed, most of us speak as if we believe it is people who can be changed, but behave as if technology will keep us ahead. We embrace a rhetoric of political change and personal responsibility, but the lives we actually live depend on technological and economic growth, whatever the consequences. Savulescu was trying to split the difference: use technology to change people. That should tell you just how difficult it is to fundamentally change ourselves.
People have changed, of course. We’ve largely abandoned hideous practices like slavery, expanded the circle of human rights, and fought for the power to rule ourselves. But those changes mostly fed the engine of growth, and put more power in the hands of individuals, to be used for good or ill. Short of a fundamental political or even spiritual revolution, what I can’t see changing is that primal human drive to expand.
Perhaps I’m suffering from a failure of imagination. The Marxist political theorist and literary critic Fredric Jameson, after all, once wrote that it is “easier to imagine the end of the world than to imagine the end of capitalism.”7 But everywhere I’ve traveled on this planet, I’ve seen people who want more. More for themselves, and more for their children. Who will tell them they can’t have it, even if it may cost the world?
So we must run faster, as if we’re running for our lives.
I began this book by describing a photograph of my wife, my father, my mother, myself, and my newborn son, taken just hours after his birth. It was the future coming into being, a single image that held three generations out of the hundreds that have walked this planet since our species launched, without ever realizing it, the ongoing project we call civilization.