
Cascadia's Fault


by Jerry Thompson


  Looking closely at where the break occurred, how strong it was, and the aftershock pattern that followed, he argued that a key part of their original prediction had come true. What happened in 2004 was physically “a near-perfect repeat” of the 1966 event, according to Lindh. The same earthquake happened again—rupturing the same fifteen-mile-long (25 km) segment of the San Andreas between the same two little bends or “discontinuities” in the rock and with the same overall magnitude—what some had called Parkfield’s signature or “characteristic” earthquake. One might have expected the magnitude to be greater than 6 because the jolt came twelve to fifteen years later than expected, giving the rocks more time to accumulate strain. But the magnitude was 6, just like its predecessors. Hence, Lindh argued, it was a repeat of the same event.

  One curious twist was that the 1966 event had ripped the fault from north to south while this time it unzipped from south to north. And according to Lindh, there may have been a “small premonitory signal” at three or four Parkfield strainmeters. Holes had been drilled hundreds of feet down into the fracture zone in the late 1990s, and extremely sensitive instruments capable of detecting very small increments of stress had indeed recorded signals “of the order of 10 nanostrain,” assuming all the devices were working properly.

  While this sounded like an infinitesimally small thing to measure, Lindh pointed out that if the coming quake had been a magnitude 7 instead of a 6, then the amount of strain—and the creep along the fault—would probably have scaled up by a factor of ten, which “would be easily observable with current downhole instrumentation.” His point was that the new state-of-the-art strainmeters could detect things even those magical GPS rigs could not see from above.

  When a fault creeps way down deep, not all the horizontal motion is transferred to the surface because rocks bend and deform under stress. Therefore, if the rocks started to move hundreds of feet below ground, and if this turned out to be a reliable symptom of a coming rupture, the GPS stations up at the surface might not detect the signal even at the supposed magnitude 7 level. But the much more sensitive strainmeters could—or might.

  Lindh took the opportunity to bite back at critics of prediction and those in government committees, labs, and universities who had unofficially given up trying to solve the problem. He ridiculed the fashion in California of making “probability forecasts” so vague that the results were “no longer accurate enough” to be of use to society. For example, the official Working Group on California Earthquake Probabilities has estimated the odds of at least one magnitude 6.7 or larger event in the San Francisco region sometime in the next thirty years to be 62 percent. The odds are 67 percent for the same size jolt to hit the Los Angeles area. But sometime in the next thirty years? How should people respond to a prediction like that? Building codes and insurance rates might be adjusted, but how do the rest of us make sense of it?

  The numbers could mean anything or nothing to the majority of citizens, who don’t really comprehend statistical probabilities. Instead, Lindh argued that the data collected at Parkfield and elsewhere in recent years had vastly improved our understanding of fault behavior, of strain accumulation, and of seismic patterns over longer periods, so that it should now be possible to target the three zones most likely to rupture and to design more focused “prediction experiments” that could save lives.

  “While I understand that some in the field of seismology are afraid of the ‘P word,’ the public is not,” wrote Lindh. “They think it’s what seismologists are working on. It is my opinion that the public would respond very positively to our highlighting some of the most serious threats to their lives and welfare, particularly if it were accompanied by a serious commitment to do everything we could to further our understanding of those segments, and maybe in the process even reduce the risk they represent.”

  Prediction fell from grace with a disappointing thud just as Kazushige Obara in Japan and Herb Dragert and Garry Rogers at the Geological Survey of Canada were learning about ETS (episodic tremor and slip), the bizarrely regular twitching deep down on the lower reaches of the Cascadia Subduction Zone that promised a new way to track the behavior of a major fault. At roughly the same time in California, all that high-tech equipment buried in the ground or stretched across the San Andreas at Parkfield—the creepmeters, the tiltmeters, the seismographs, and the lasers—was being reconfigured for a new and bigger experiment. The desire to know more about what happens along the rocky surfaces of a fracture zone just before an earthquake was actually gaining momentum despite the alleged failure of the Parkfield prediction.

  Parkfield was reborn as SAFOD—the San Andreas Fault Observatory at Depth, a deep-earth research project funded by the National Science Foundation in partnership with the USGS. In June 2004 an oil rig crew started drilling a hole into the hilly brown rangeland not far from the initiating point of the 1966 Parkfield temblor. With a rotary bit nearly ten inches (25 cm) in diameter, they sank a shaft almost two miles (3 km) into the earth on the western side of the fault and installed another package of instruments designed to monitor the initiation of small, repetitious earthquakes at close range.

  At the end of September, when the long-awaited Parkfield event finally happened, there was no immediate payoff because, as even Bakun and Lindh admitted, the new equipment did not detect any obvious precursors. But the drilling continued, enthusiasm undimmed. The next phase of the SAFOD project would deploy the oil industry’s newest directional-drilling technology to turn the bit almost sideways and then drill through the fault—from the Pacific plate eastward—until it penetrated the gap and reached relatively undisturbed rock in the North America plate on the eastern side of the San Andreas. The plan was to bring up core samples of rocks and fluids to find any secret ingredient that might cause ruptures to begin.

  The project would also implant more instruments inside the active zone to make long-term measurements of small to moderate tremors and continuous measurements of rock deformation as it built up during the next cycle. Nothing on this scale had ever been tried. These were ambitious goals: dig down to the very heart of an active fault and watch it rupture from the inside.

  SAFOD was only one of three components of an even grander science project called EarthScope, which set out to monitor plate tectonic movement along the entire U.S. west coast and create a 3D seismic image of the basement of North America. The Plate Boundary Observatory would do on a continental scale what SAFOD was doing close up on the San Andreas. The USArray—a spiderweb of new seismometers spun across the lower 48—would probe thousands of miles down to study the forces that create and shape the earth’s crust from the bottom up.

  To me it sounded like NASA gone underground. In fact, when I rang geophysicist and project director Greg van der Vink in Washington for some background, he volunteered his own analogy that EarthScope was geology’s equivalent of a lunar landing, “the biggest thing we’ve ever done.” But with a $200 million construction and installation budget for the first five years, EarthScope was really more like NASA on a crash diet, although it was still an impressive undertaking. And it would eventually become the Plate Boundary Observatory’s job to focus a sharp new lens on Cascadia’s fault.

  With satellite technology that could measure plate movements down to half a centimeter, the system was intended to cover the western edge of North America from Mexico to Alaska with receivers spaced roughly 125 miles (200 km) apart. If the funding held out there would eventually be 875 permanent GPS stations working in concert with 175 deep borehole strainmeters 650 feet (200 m) underground to measure “at the proton level” what satellites cannot see from space. On standby would be another pool of a hundred portable GPS receivers for temporary deployment and rapid response to volcanic and tectonic emergencies.

  Although EarthScope’s budget looked flush by Canadian standards, it had taken van der Vink and others a long time to convince Washington politicians to spend money on something as optically unsexy as geology. Officials at the National Science Foundation insisted the money be spent in the United States. If Mexico and Canada wanted to join the project they would have to pay for their own equipment.

  After the Rogers and Dragert findings were published in Science, however, news about episodic tremor and slip spread quickly. Greg van der Vink told me that ETS was “the poster child” for EarthScope science, exactly the kind of thing they were meant to study and “one of the most exciting new discoveries in a long time.” As the implications sank in—here was a major fault sending some kind of mysterious signal every fourteen months like a giant metronome—the Cascadia Subduction Zone suddenly became a higher priority.

  When Herb Dragert heard about the Plate Boundary Observatory project he sensed an opportunity. Having a few of those deep borehole strainmeters installed in Canada would be a great way to double-check his own findings. The Americans realized that if Canada could not afford strainmeters of its own, there would be a huge gap in the data flow at the most critical point along the locked zone, right where Cascadia’s next rupture was most likely to happen. So they lobbied for an exception to the rule, and Canada got the borehole strainmeters.

  “Initially they were going to put six in the entire Pacific Northwest. From northern California to Vancouver Island—six strainmeters.” Dragert sounded more than a little incredulous. “There’s thirty-five now,” he chuckled, “because they want to find out about ETS.”

  Like Dragert, Garry Rogers was especially keen to have another, independent set of instruments measure Cascadia’s ground motions during the slip events just to make sure it was really happening. “When you start seeing phenomena with several kinds of instruments seeing the same thing, it becomes very convincing to a lot more people in the science world,” Rogers explained. “In fact now three different kinds of measuring techniques—GPS, strainmeters, and seismometers—they’re all telling us the same thing. And they’re all telling us that stress build-up has a time element to it.”

  The old notion that stress build-up along the fault was a slow, steady, constant process caused by tectonic plates always moving against each other at roughly the same speed was apparently not accurate. Or the concept was more complex than early thinkers realized. The plates may be moving at a steady rate, but with an earth made up of all kinds of hard and soft rocks, mud, sand, and messy fluids, the build-up of stress between two plates is jerky.

  “ETS events could be essentially like the clicks of a ratchet wrench,” said Chris Goldfinger at Oregon State, continuing the thought. “As you crank it tighter and tighter, you’re adding more and more load—as the Juan de Fuca plate tries to dive into the mantle. But the locking point between the two plates won’t let it go, at this point, so it’s giving—in small, squishy motions that may be cranking up the load for the big earthquake.”

  “You actually find that the probability of a megathrust earthquake is larger during one of these slips—or immediately after one of these slip events—than the rest of the time,” ventured Dragert. According to calculations made by his GSC colleagues Stephane Mazzotti and John Adams, the risk jumps by a factor of about thirty. “Right now, roughly three hundred years into the cycle [Cascadia’s last big quake was 311 years ago], the probability of a megathrust earthquake next week is roughly one in 200,000. So it’s a very low probability. During the slip event, or immediately after a slip event, it’s maybe twenty or thirty times that. It’s still only one in four or five thousand, so it’s still a low probability. But the difference is a factor of twenty or thirty.”

  To me the numbers or percentages seemed less significant than the idea that Cascadia’s level of risk goes up for about ten days every fourteen months. And if it’s true that a new load of stress gets shifted from lower down in the zone to the higher-up, locked part where the earthquake will eventually be generated, then it probably makes sense that one of these ETS events could eventually trigger the main event. Ruptures on Cascadia’s fault (and the other subduction zones) may not be completely random after all. A new glimmer of hope for prediction optimists.

  “That’s what we’ve been looking for,” said Dragert emphatically. “We won’t call it a prediction yet, but I think once we know what the heck is going on here, we might be able to say, ‘One of these slip events has started . . . The probability of a triggered event is higher than in the previous fourteen months.’ What the emergency response people do with that—it’s up to them.” Dragert and Rogers have already suggested that emergency responders conduct their annual earthquake training exercises during ETS events just to raise awareness and have everybody thinking seismically during that zone of higher probability. Just in case.

  “The ETS intervals are different in other places in the world,” said Rogers, adding another complication. In northern California the cycle is much shorter than in British Columbia. In Oregon it’s longer. “So that’s something we don’t understand,” he said. “Why is it different from one subduction zone to another?”

  With the setbacks at Parkfield still fresh in everyone’s mind, it seemed to me unlikely that many of the skeptics would change their opinions and agree that quakes do come in time-dependent cycles. The discovery of ETS, though, did inject new energy into the prediction quest. Just when chaos theory seemed to have won the day, there was a fresh reason to think that at least some seismic shocks might not be a statistical crapshoot.

  Garry Rogers was cautiously optimistic when I asked whether earthquake prediction had a future. He hesitated a moment, choosing his words carefully. “Under certain circumstances—yes,” he said. “I think maybe Cascadia, maybe specific faults like the San Andreas that are very seriously studied, or specific faults in China—yes. I think I’m optimistic that that will happen. I’m not optimistic that we’re going to be able to predict earthquakes everywhere. And I’m not optimistic that any of the predictions are going to happen soon.” He smiled as I winced at all of the qualifications to his optimism. “So that’s a qualified yes, if you like. We just don’t know enough. It’s a really tough problem.”

  Chris Goldfinger made a point of putting the Parkfield setback into a larger context. “Forecasting and prediction were words that were in great disfavor in the past couple of decades, partly based on the great Parkfield experiment,” he said. “It wasn’t really a failure—I don’t think—at all. People expected too much from a one-shot experiment like that. And so for decades, prediction became the ‘P-word’ and nobody used that word at all. But now, science is marching on. And we’re seeing things like ETS events, we’re seeing things like turbidite evidence for clustering [of Cascadia’s earthquakes over time], and the [possible triggering] relationship to the San Andreas. People around the world are seeing other, similar kinds of relationships. And while it’s far from prediction, it’s progress.”

  “It may be that we never get to the ‘You’re gonna have an earthquake next Thursday’ sort of scenario,” added Goldfinger. “But I think it’s entirely possible that we’ll get to a point where we can say, ‘Sometime in the next decade we’re highly likely to have something happen.’ And I think that sort of thing is on the horizon, in the not-too-distant future.”

  Lori Dengler at Humboldt State, who began her career as a prediction optimist working with William Bakun and Allan Lindh on the Parkfield experiment, eventually lost her enthusiasm for trying to read seismic tea leaves. In her opinion, constructing stronger buildings and making communities more resilient should be the higher priorities. As for the current ability to forecast Cascadia’s inevitable failure, she said, “Well, I’ll tell you something I’m absolutely sure about. The next Cascadia earthquake is one day closer today than it was yesterday.”

  Put the question a different way and you face another quandary. What if we did achieve a breakthrough in the science? What if the experts had another success like Haicheng and then became courageous, or foolhardy, enough to issue a prediction for Cascadia or the San Andreas? Would politicians and public officials know what to do with the information? Imagine you are the mayor, the police chief, the premier of British Columbia or the governor of Washington, Oregon, or California and the scientists come to your office early one morning and say, pretty much as Cao Xianqing did, “We think a major earthquake will happen today or this evening.” How would you respond? What would you do?

  “Some people would question that if you have a prediction—if it’s not accurate enough—that you may cause more disturbance to life than to save life,” Kelin Wang said. “If you predicted something that didn’t happen—if you shut the factories and people moved away—and nothing happened? It’s a complicated issue. It’s both scientific and social.” He shook his head and smiled. “It’s very complicated.”

  PART 3

  SHOCKWAVES

  CHAPTER 21

  Facing Reality: Cascadia Equals Sumatra

  Vasily Titov’s flight to Chicago was canceled at the last minute, so he was destined to spend Christmas alone in Seattle without his wife. She had taken an earlier flight and was already back east visiting relatives when unexplained airline woes at Sea-Tac Airport ruined Titov’s holiday in December 2004. “It was a sad moment for me that I had to spend Christmas day and Christmas night by myself,” Titov said, quietly mocking himself. “I had nothing better to do than go to the office and play with my model,” his computer model of a large tsunami. Late that afternoon he took the scenic route along Sand Point Way to NOAA’s Pacific Marine Environmental Laboratory on Seattle’s Lake Washington and fired up the hard drive.

 
