End Times: A Brief Guide to the End of the World
Yellowstone is one of the most closely watched volcanic systems in the world, so if it begins to show signs of building toward a supereruption, USGS volcanologists like Michael Poland will almost certainly notice, hopefully with years to spare. But there are many other potential supervolcanoes around the world, and not all have their own observatory. That’s why volcanologists in a 2005 report advocated for the creation of a kind of Intergovernmental Panel on Climate Change for volcanoes—a global scientific body that would meet regularly to report on the latest research in volcanic risk.68 This would hopefully place the threat of supervolcanoes squarely on the public radar, while putting more pressure on governments to create a global volcano monitoring system—something that very much does not exist today. Given that the world already spends more than $50 million to track incoming asteroids, it makes sense to spend at least that much on what is a greater threat. (The 2005 report came about in part because volcanologists were envious of the resources being made available to asteroid defense in the post-Armageddon 1990s. Scientific competition can have its upsides.)
Hans-Peter Plag calculated that such a monitoring system would cost perhaps $370 million per year. For comparison, the United States currently spends $22.3 million on its volcano hazards program.69 Plag believes that sufficient pre-eruption warning could cut potential fatalities in half by giving humanity time to evacuate anyone near a supervolcano and adapt to the deprivations of a volcanic winter. Employing the same kind of existential risk math we used for asteroid defense, Plag estimates that the cost from expected fatalities from a rare supervolcano eruption still comes to between $1.1 billion and $7 billion per year, which would make the benefits of a global volcano monitoring system as much as 18 times greater than the expense,70 simply because of the protection it would provide against the worst of the worst eruptions.
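The benefit-to-cost comparison above can be sketched as a quick calculation, using only the dollar figures quoted in the text (the fatality-cost range is Plag's estimate; the arithmetic here is just an illustration of the expected-value reasoning):

```python
# Expected-value comparison for a global volcano monitoring system,
# using the annual figures quoted in the text (US dollars per year).
monitoring_cost = 370e6        # Plag's estimated cost of the system
expected_loss_low = 1.1e9      # low end of expected annual fatality cost
expected_loss_high = 7e9       # high end of expected annual fatality cost

# If early warning could avert much of that expected loss, the
# benefit-to-cost ratio spans roughly:
ratio_low = expected_loss_low / monitoring_cost    # about 3x
ratio_high = expected_loss_high / monitoring_cost  # about 19x (the text rounds to 18)

print(f"benefit/cost ratio: {ratio_low:.0f}x to {ratio_high:.0f}x")
```

The same structure underlies the asteroid-defense math referenced earlier: a small, certain annual expense weighed against a large loss discounted by its small annual probability.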
“I believe we are very naive to think that because we can handle small disasters, it means we can handle big ones,” Plag told me. “There’s no reason to think that because you can handle a cold, you can handle pneumonia. But that’s where we are on volcanoes.”
We can’t yet prevent a volcanic eruption the way we can theoretically deflect an incoming asteroid. But I wanted to know if that could change—which is how I ended up connecting with Brian Wilcox. Wilcox is a longtime NASA engineer who served on an agency task force focusing on planetary defense. For NASA, planetary defense means asteroids and other things from outer space. When Wilcox ran the numbers, however, he became convinced that volcanoes were a much greater existential threat than asteroids—and a more likely one. “The probability of a two-kilometer-sized [1.2-mile] asteroid hitting the Earth, a big enough collision to cause major global cooling, is 2 to 10 times less likely than a supervolcano that could have a similar effect,” he told me.
But while the mechanics of asteroids are simple—at least for a NASA engineer—the mechanics of volcanoes are not. “With volcanoes there’s a lot of guesswork about what is actually going on,” Wilcox said. “We don’t know what fraction of the magma chamber in Yellowstone is melted, and we don’t know the percentage of volatile gases in it that could trigger cooling.” (The amount of cooling that might occur after a supervolcano blows depends not just on the sheer size of the eruption, but also on the amount of sulfur that would be released in the debris cloud. More sulfur means more cooling.)
That’s an excellent argument for spending more money on volcanic science and monitoring, which Wilcox supports. But NASA simply wouldn’t be NASA if it couldn’t come up with an ambitious (and expensive) engineering project that might just save the planet. So in 2015 Wilcox and a group of colleagues from NASA’s Jet Propulsion Laboratory published a paper with the modest title of “Defending Human Civilization from Supervolcanic Eruptions.”71
As Wilcox sees it, volcanoes are essentially heat generators. Yellowstone’s geysers are what happens when water seeping into the churning magma reservoir beneath the park is superheated and explodes through the surface, a process that Wilcox compares to a coffee percolator. But not all of the heat is released, and what remains builds up in the reservoir, melting more rock and sulfur, adding more magma, until an eruption becomes inevitable.
Wilcox and his colleagues suggest that the most direct way to defuse a supervolcano would be to simply cool its magma reservoir down so that it never reaches the eruption stage. That could be done directly by increasing the amount of water that reaches the reservoir. It would likely head off a supereruption—and would make the geysers of Yellowstone even more spectacular—but you would need a Great Lake’s worth of liquid to make a dent in the park’s vast volcanic system.
Wilcox instead suggests drilling some six miles down into the supervolcano and constructing tubes that can carry water around the magma reservoir. That water would leach heat from the magma, gradually cooling down the system and pulling it back from the eruption point. The entire project would cost about three and a half billion dollars, but it would have the side benefit of generating carbon-free geothermal electricity at a competitive rate. (Geothermal power plants use steam produced by hot water found underground to drive electrical turbines.) “The thing that makes Yellowstone a force of nature is that it stores up heat for hundreds of thousands of years before it goes kablooey all at once,” said Wilcox. “It would be good if we drained away that heat before it could do a lot of damage.”
There are, it’s safe to say, a few caveats. Current drilling technology is barely sufficient to reach the necessary depths of six miles, and even then, the holes made are less than a foot across, which would limit the amount of water that could be delivered to the magma reservoir.72 The extreme heat and corrosive gases beneath Yellowstone would eat away at drilling equipment. Oh—and this one might be important—the whole process is so slow that it would take tens of thousands of years to properly cool off the supervolcano. As USGS’s Jake Lowenstern told me about Wilcox’s plan, “It all seems a bit fanciful.”
To be clear, Wilcox’s concept is a thought experiment, not a NASA-approved doomsday strategy. There’s little chance of it ever being put into practice, but it’s worth pondering, because doing so demands that we step outside our brief human time frame and think like a geologist. As Wilcox told me, “If you want to defend human civilization on that time scale, you need to act on that time scale.”
Time is what makes supervolcanoes such a confounding existential threat—time and uncertainty. While astrophysicists can calculate the orbital paths of potentially hazardous asteroids decades into the future, we know far less about what’s happening beneath our feet than we do about outer space. But what we’re learning should worry us. Simply based on geologic evidence, scientists had estimated that supereruptions occur on average about every 100,000 years.73 But in a 2017 paper, Jonathan Rougier, a professor of statistical science at the University of Bristol, provided new calculations predicting that supereruptions occur every 5,200 to 48,000 years, with a best-guess value of about 17,000 years.74 Given that the last VEI 8 eruption was more than 26,000 years ago, it might seem as if we’re overdue. We’re not necessarily—the Earth doesn’t follow a regular timetable. But it does mean, as Rougier put it in his paper, that human civilization has been “slightly lucky not to experience any supereruptions” so far.75
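Rougier's recurrence estimate can be turned into rough per-century odds under a simple memoryless (Poisson) model. This is a sketch for intuition, not Rougier's actual statistical method, and the memoryless assumption is exactly why "overdue" is the wrong word: the probability in any given century does not grow just because the last eruption was long ago.

```python
import math

# Treat supereruptions as a Poisson process with a mean recurrence
# interval of ~17,000 years (Rougier's best-guess value from the text).
mean_interval_years = 17_000
rate_per_year = 1 / mean_interval_years

def prob_within(years):
    """Probability of at least one supereruption in the next `years`,
    assuming a memoryless Poisson process (illustrative only)."""
    return 1 - math.exp(-rate_per_year * years)

print(f"next century:    {prob_within(100):.2%}")   # about 0.6%
print(f"next millennium: {prob_within(1_000):.1%}")  # about 5.7%
```

Small per-century odds, but over civilizational time scales they compound toward certainty, which is the point of the passage above.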
We may have been luckier than we can possibly know. Scientists estimate the risk of natural catastrophes like supervolcanoes or asteroid impacts largely by looking back to the past and observing how many times such events have occurred. But it is by definition impossible for us to observe a catastrophe so great that it would have destroyed the human race while we existed, or, if it happened before we came on the scene, have interrupted the chain of life that led to the rise of Homo sapiens. That’s not because such megacatastrophes are impossible; rather, it’s because if they had occurred, we humans would not be here to do the observing. Our own existence now casts what Nick Bostrom, Anders Sandberg, and Milan Ćirković called in a 2010 paper an “anthropic shadow” over our planet’s past.76
This matters to us because while our current existence guarantees that a supervolcano—or any other existential threat—could never have wiped out humanity in the past, we’re offered no such protection in the future. The anthropic shadow only falls backward. We may look at our planet’s geologic record and feel confident that supervolcanoes and major asteroids are so rare that we need not worry about them. But that confidence could be fatally misplaced.
Sooner or later—and on a geologic time scale at least, much sooner—we will face a supereruption. It is in the nature of the planet we live on, the most geologically active in the solar system. Yet of all the existential risks we’ll explore in this book, natural and man-made, it is the one for which we’ve done the least to prepare. Volcanoes have caused mass extinction on this planet before; in fact, they are the serial killers of life. We have been lucky—our existence after 3.5 billion years of evolution is nothing short of a cosmic miracle, even if it was a miracle that had to happen. But that luck may not hold.
NUCLEAR
The Final Curtain on Mankind
The chief asset of a missile range is its emptiness. At maximum size—its dimensions shrink and grow like a shadow depending on what its owner, the U.S. Army, is testing on a given day—the White Sands Missile Range in central New Mexico covers about as much land as the state of Connecticut.1 Nearly all of it is vacant, devoid of buildings or roads or even the few animals that live in the flat, dry scrub the Spanish called the Jornada del Muerto, the Journey of Death. It’s a name that seems almost too perfect, like that of the Sierra Oscura, the Dark Mountain, which loomed to the east as I drove up on an April morning to Stallion Gate, the northern entrance to the missile range and the one closest to Trinity Site.
It was to see Trinity that I had come to New Mexico and the White Sands Missile Range. We think of Hiroshima as the dawn of nuclear weapons, but it was here at Trinity Site that the very first nuclear bomb was detonated, the work of more than 100,000 people—from ditch diggers to dozens of past and future Nobel Prize–winning scientists—employed by the U.S. government’s Manhattan Project.2 Hiroshima, Nagasaki, the Cuban Missile Crisis, the North Korean standoff, and whatever might come next—it can all be traced back to what happened here in New Mexico on July 16, 1945, at 5:30 a.m., when a successful atomic test resulted in an explosion more destructive than anything humans had caused before.
Trinity, however, inaugurated more than just the nuclear age. It ushered us into the era of man-made existential risk, an era that we live in still. Before Trinity, the world could end, and nearly did, as mass extinction events repeatedly erased most of life on Earth. But the cause each time was natural: supervolcanic eruptions, a collision with an asteroid, sudden and drastic climate change. After Trinity, though, human beings could be the authors of our own annihilation. All the existential risks that would follow—anthropogenic climate change, biotechnology, artificial intelligence—flow from what happened at Trinity, a hinge point in human history. So you can bet I wanted to see the place where it all began.
Nearly seventy-five years after the test, though, there was little evidence of the sheer physical power of what happened that morning in 1945, nor of its significance. Places where history was made are memorialized and meant to be visited—think Gettysburg National Military Park, Independence Hall in Philadelphia, even sites of scientific achievement like the Thomas Edison National Historical Park in New Jersey, where the lightbulb was invented. After World War II the Interior Department tried to create a national monument at Trinity, but its efforts were continually frustrated by the military, which wanted to retain White Sands to test its growing inventory of missiles. There was also the problem of Trinity’s mixed legacy—the greatest of the technical and scientific accomplishments that helped win the war for the Allies, and the birthplace of the first weapon of mass destruction. It wasn’t until 1975 that Trinity was finally declared a national historic landmark—a few ranks down from a national historical park—but even now it remains closed to most visitors, save for two Saturdays a year in October and April, when the army runs an open house and cars line up for miles in the desert sunlight, waiting to enter through Stallion Gate.
I was fortunate enough to schedule a private tour of Trinity Site the day before the April open house in 2018. My guide was Jim Eckles, a welcoming and talkative Nebraska native in his sixties who had worked for years as the missile range’s public affairs officer and wrote a history of the Trinity test not long after retiring. Eckles knew how to dress for the New Mexico desert, even in the spring—a broad hat that shaded his entire face and white hair, and sunglasses with narrow oval lenses. He also knew how to conduct a tour of a historical site that didn’t necessarily have much to see. “The starkness you see out here is important,” he told me. “It’s not a house where George Washington stayed. It’s a bomb test site, and that emptiness is the right way to memorialize what happened here.”
By sheer area Trinity is mostly a parking lot, albeit one that usually sits empty, waiting for those two days when it will fill with RVs dusty from the seventeen-mile drive from Stallion Gate. That lot holds a mangled metal carcass called Jumbo, a 200-ton steel tube originally designed to contain the priceless plutonium in the Trinity bomb—the Gadget, as the scientists called it—in the event of a malfunction during detonation. In the end Jumbo wasn’t used, and it was eventually ordered blown up by Lieutenant General Leslie Groves, the military commander of the Manhattan Project—in part, Eckles told me, because Groves was worried that someone would ask why the army had spent $12 million on a custom-built steel tube no one actually needed. But eight 500-pound bombs only managed to blow off both ends of the tube—which perhaps shouldn’t have been surprising, given that Jumbo had survived the actual Trinity nuclear blast. “It turns out that it’s really hard to destroy a big thing made of steel,” Eckles said. Now Jumbo sits rusting in the Trinity parking lot, a footnote to human fallibility of a different sort.
In fairness, though, everything done at Trinity was being done for the first time, which meant some guesswork. Before the test, some of the senior Manhattan Project scientists organized a betting pool with a one-dollar entry fee, guessing the size of the explosion. Official estimates before the test suggested that the expected strength of the bomb would fall between 500 and 7,000 tons of TNT.3 This was at a time when the most powerful explosives in the U.S. arsenal had about 10 tons of TNT equivalent power.4 Then there was the question of whether the Gadget—the product of more than three years of effort—would even work. Groves himself thought there was only a 40–60 percent chance of success.5
The conditions the night before the test were atrocious—heavy rain and lightning, which is not ideal when there are miles of electrical cable snaking through the open desert and about thirteen pounds of highly radioactive plutonium, wrapped in wires and screws and silver and gold and high explosives, all mounted on top of a hundred-foot steel tower.6 But there was no time to wait for better weather. Harry Truman—who had been president for barely three months and had only been told about the Manhattan Project when he assumed the office after Franklin Roosevelt’s death in April—was due to meet the day after the scheduled test with Soviet premier Joseph Stalin. A successful test of the atomic bomb would be a powerful card for the new president to play as he charted the endgame of the war with his current ally and soon-to-be adversary. Trinity may have heralded a new era of man-made existential risk, but as ever it was immediate politics that drove the decisions on the ground, not any reckoning with truly long-term consequences. With one exception.
In 1942, the Hungarian-American physicist Edward Teller—who would go on to develop the more powerful hydrogen bomb and become one of the inspirations for Stanley Kubrick’s film Dr. Strangelove7—ran some calculations and concluded that an atomic bomb might just possibly create enough heat to ignite the atmosphere and the oceans, causing a global inferno and the end of the world. When Robert Oppenheimer, the scientific leader of the Manhattan Project, told the physicist Arthur Compton about Teller’s figures, the older man reportedly responded in horror. “This would be the ultimate catastrophe!” Compton recalled in an interview after the war with the author Pearl Buck. “Better to accept the slavery of the Nazis than run a chance of drawing the final curtain on mankind!”8
According to Buck, Compton told Oppenheimer that if additional calculations showed that the odds of igniting the atmosphere with a nuclear explosion were more than approximately three in one million, all work on the bomb should stop.9 Fortunately, about six months before the Trinity test, Teller and the Polish-American nuclear scientist Emil Konopinski produced a report on the subject, titled “LA-602: Ignition of the Atmosphere with Nuclear Bombs.”10 They concluded that it would be virtually impossible for even a much larger bomb than what would be tested at Trinity to create such a runaway reaction and end the world.
“Virtually impossible” was good enough for wartime, so the work on the bomb continued. After the war, Oppenheimer told a Senate panel that “before we made our first test of the atomic bomb, we were sure on theoretical grounds that we would not set the atmosphere on fire.”11 At the time, though, not everyone may have been fully convinced. The night before the test, the Italian-American physicist Enrico Fermi offered to take wagers on whether the bomb would indeed ignite the atmosphere—and if so, whether it would merely destroy New Mexico or the entire world.12