The point seemed to be that even though some things, such as why snakes crawled out of their dens or why some (but not all) mice and rats were dazed and disoriented, may not have clear scientific explanations, there was no reason to doubt that they really did happen. Wang and company confirmed that “it was the foreshocks alone that triggered the final decisions” to warn and evacuate. The good news was Haicheng proved that “at least some earthquakes do have precursors that may lead to some prediction.” To me the bottom line was this conclusion: “Although the prediction of the Haicheng earthquake was a blend of confusion, empirical analysis, intuitive judgment, and good luck, it was an attempt to predict a major earthquake that for the first time did not end up with practical failure.”
Seventeen months and three weeks after Haicheng, everything China thought it knew about seismic prediction came down like a house of cards. At 3:42 a.m. on July 28, 1976, lightning flashed across the sky and the earth rumbled ominously. Seconds later another major earthquake struck northern China—this time with no prediction and no evacuation. The rupture happened directly underneath the industrial city of Tangshan, roughly 90 miles (140 km) east of Beijing.
The magnitude 7.5 rupture shifted the ground about five feet (1.5 m) horizontally and three feet (1 m) vertically, destroying nearly 100 percent of the living quarters and 80 percent of the industrial buildings in the city. People were jolted from their sleep in total darkness, screaming and choking on thick dust from unreinforced brick buildings that collapsed in piles of rubble. Official reports estimated the death toll at 240,000 with 164,000 more seriously injured. That’s roughly the same number who died from the Sumatra quake and the Indian Ocean tsunami in 2004. Critics claimed the official estimate was conservative, however; foreign observers put the real number of people killed as high as 600,000. Whatever the count, Tangshan was far and away the deadliest single quake of the twentieth century and one of the great tragedies of all time.
Tangshan was a major center for coal mining, iron and steel production, and the manufacture of cement. Nearly all of it was wrecked. Bridges and highways collapsed, pipelines broke, dams cracked, more than ten thousand large industrial chimneys fell, and twenty-eight trains passing through the city overturned or were derailed. The key question, though, was why nobody saw it coming. Did the lessons of Haicheng not apply here? Apparently not.
In the final two months before Tangshan, not a single foreshock was detected by a regional seismic network capable of measuring tremors as small as magnitude 1.7. Some of the other little twitches and anomalies that had preceded the Haicheng event also preceded Tangshan, but apparently the signals were not strong enough to trigger a prediction or evacuation. How different could the geological structures be only three hundred miles (480 km) away from Haicheng? If Haicheng had foreshocks, why not Tangshan? Good and important questions that still need to be answered, said Kelin Wang.
The preliminary answer suggested by Wang and company was that fault failures may simply be unique, so different from each other that whatever anomaly or precursor helps to predict one event probably won’t work for a different kind of fault in a different physical setting. And once a rupture has happened, it modifies the geological structures and rock properties enough that the precursors then change as well. The symptom that tipped us off to the last quake may not precede the next, even if it happens on the same fault. And then sometimes you get the symptoms but the big temblor never comes.
If that’s the real lesson of Haicheng and Tangshan, why bother with prediction? When I finally got a chance to interview Kelin Wang, he still seemed hopeful about the prospects—a persistent, low-key optimist—although he was wary of putting prediction into practice too soon. “If the Haicheng story was true,” he began, “then earthquake prediction is not impossible.” It did show us “how early in the stage we are still in terms of earthquake prediction. And the difference—the major difference—between these two earthquakes is the Haicheng earthquake had a foreshock sequence and the Tangshan earthquake had no foreshocks at all.” Strange as it may sound, this did little to dampen enthusiasm for the dark art of prediction.
The Tangshan disaster gave prediction optimists a sharp reality check. So did the Parkfield experiment in California. On September 28, 2004, a magnitude 6 temblor finally rattled the farming town of Parkfield—at least twelve years after scientists predicted it would happen. To say that William Bakun and Allan Lindh of the USGS, who along with Tom McEvilly of the University of California at Berkeley had offered the forecast in 1985, were disappointed would probably be an understatement. Six moderate (magnitude 6) events with similar “characteristics” had occurred on the Parkfield segment of the San Andreas between 1857 and 1966. By their calculations the seventh in what looked like a repeating series of nearly identical ruptures should have happened some time in 1988, but surely by the end of 1992, with a 95 percent probability.
Clearly the “time-predictable” part of their hypothesis was wrong. The idea that fault failures tend to repeat themselves like clockwork had been kicking around since 1910, when geologist Harry F. Reid of Johns Hopkins University suggested it ought to be possible to figure out when and where quakes would happen by keeping close tabs on the build-up of stress. Looking at how unevenly land had shifted along the San Andreas during the great San Francisco earthquake of 1906, Reid developed the elastic rebound hypothesis—a cornerstone of modern geology long before the advent of plate tectonics—which Bakun, Lindh, and McEvilly set out to test in Parkfield eight decades later.
Reid’s idea was that stress built up unevenly along the fault and it took a massive rupture—at the point where the strain was great enough to cause the rocks to fail—in order to relieve or recover that strain. The longer the strain built up, the bigger the shock would be. If you knew how often the fault had ruptured in the past, you could in theory estimate how long before the next one was due.
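Reid’s recurrence logic can be reduced to very simple arithmetic. The sketch below is only an illustration of that reasoning, using the commonly cited Parkfield rupture years; it is not a reconstruction of Bakun and Lindh’s actual calculation. Averaging the intervals between past events and adding that average to the date of the last one lands close to their 1988 target.

```python
# A minimal sketch of the "repeating earthquake" arithmetic described above.
# The rupture years are the commonly cited Parkfield events; this illustrates
# the reasoning only, not the forecast method actually used in 1985.

rupture_years = [1857, 1881, 1901, 1922, 1934, 1966]

# Average time between successive ruptures
intervals = [later - earlier for earlier, later in zip(rupture_years, rupture_years[1:])]
mean_interval = sum(intervals) / len(intervals)

# Naive forecast: last rupture plus the average repeat time
forecast = rupture_years[-1] + mean_interval
print(f"Mean repeat interval: {mean_interval:.1f} years")    # about 22 years
print(f"Naive forecast for the next event: {forecast:.0f}")  # about 1988
```

Roughly speaking, the 95 percent window quoted in the forecast came from attaching statistical uncertainty to that average interval.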
But what if the last rip did not release all of the accumulated strain? Wouldn’t that alter the timeline for the next one? When Bakun and Lindh published their forecast in the August 16, 1985, edition of Science, they noted that the 1983 magnitude 6.5 at Coalinga, eighteen miles (30 km) off the San Andreas to the northeast of Parkfield, might have done exactly that. It had just possibly relieved enough of the “tension in the spring” of Parkfield’s clock to delay the next rumble in the series. A delay of more than a dozen years, however, was way more than merely late.
Critics within the science community didn’t wait until the 2004 jolt to pounce on Parkfield. Even though the expected magnitude 6 event did happen eventually, in more or less the same location as last time and where Bakun and his colleagues said it would be, the Parkfield prediction experiment was branded a failure shortly after the original time window closed in January 1993. A long-standing and rancorous philosophical debate intensified as some seismologists turned away from divining the future and deleted “the P-word” from their vocabularies.
By the mid-1990s Robert J. Geller at the University of Tokyo had become the most persistent and outspoken critic of everything predictive, especially Japan’s massive and well-funded multiyear effort to anticipate the next big temblor near Tokyo. Geller had been scathing in his view of American efforts as well, his central thesis being that prediction studies had been going on for more than a century and yet seemed to have brought us no nearer a solution to the problem than we were at the beginning.
Geller was fond of quoting Charles Richter, developer of the earliest and best-known earthquake magnitude scale and one of the most respected seismologists in the world, who in 1977 commented that he had “a horror of predictions and of predictors. Journalists and the general public rush to any suggestion of earthquake prediction like hogs toward a full trough.” Vitriol and aspersions aside, Geller’s central argument against prediction was and is based on the idea that “individual earthquakes are inherently unpredictable because of the chaotic, highly nonlinear nature of the source process.” Basically there are so many things going on deep underground that we can never know when a rock surface is going to fail. He dismissed the idea that “the Earth telegraphs its punches,” using auto accidents as an analogy.
The rate of car crashes may be estimated, but the time and location of an individual accident “cannot be predicted.” As for precursors, even though speeding frequently precedes accidents, only a small fraction of speeding violations are followed by serious accidents. Therefore speeding is not a reliable precursor. Similarly, he argued, there are no reliable precursors to seismic shocks. Even after a car crash has begun to happen, its final extent and severity depend on other equally unpredictable, quickly changing interactions between drivers, cars, and other objects. Put simply, car wrecks and quakes are too chaotic to foretell, according to Geller.
In October 1996 he joined forces with David Jackson and Yan Kagan at UCLA and Francesco Mulargia at the University of Bologna to write a critique for Science that appeared under the provocative headline “Earthquakes Cannot Be Predicted.” In the article they cast doubt on the Haicheng prediction story, suggesting that political pressures had led to exaggerated claims. They wrote that there are “strong reasons to doubt” that any observable, identifiable, or reliable precursors exist. They pointed out that long-term predictions both for the Tokai region in Japan and for Parkfield had failed while other damaging jolts (Loma Prieta, Landers, and Northridge in California, plus Okushiri Island and Kobe in Japan) had not been predicted. They cautioned that false hopes about the effectiveness of prediction efforts had already created negative side effects.
After the frightening and damaging Northridge temblor in southern California, for example, stories began to spread that an even larger quake was about to happen but that scientists were keeping quiet to avoid causing panic. The gossip became so widespread that Caltech seismologists felt compelled to issue a denial: “Aftershocks will continue. However, the rumor of the prediction of a major earthquake is false. Caltech cannot release predictions since it is impossible to predict earthquakes.”
Not surprisingly the article spawned a series of energetic replies from those who felt the baby of prediction science should not be thrown out with the bathwater of uncertainty. Max Wyss at the University of Alaska took issue with almost every point made by Geller and company. He countered that in 10 to 30 percent of large quakes foreshocks do occur and are precursors, that strain is released in earthquakes only after it has been accumulated for centuries, and that measuring the build-up of stresses within the crust is therefore not a waste of time and money.
Wyss concluded that most experts living at the time of Columbus would have said it was impossible to reach India by sailing west from Europe and that “funds should not be wasted on such a folly.” And while Geller and his colleagues seemed to be making a similar mistake, Wyss doubted that “human curiosity and ingenuity can be prevented in the long run.” The secrets of quake prediction would be unlocked sooner or later. Richard Aceves and Stephen Park at the University of California, Riverside, suggested it was premature to give up on prediction. “The length of an experiment,” they wrote, “should not be an argument against the potential value of the eventual results.”
In a later article Geller repeated his contention that “people would be far better off living and working in buildings that were designed to withstand earthquakes when they did occur.” He insisted that the “incorrect impression” quakes can be foretold leads to “wasting funds on pointless prediction research,” diverting resources from more “practical precautions that could save lives and reduce property damage when a quake comes.”
In the spring of 1997 someone with inside knowledge leaked a government document that slammed Japan’s vaunted $147 million a year prediction research program. The confidential review, published in the Yomiuri Shimbun, quoted Masayuki Kikuchi, a seismologist at the University of Tokyo’s Earthquake Research Institute, as saying that “trying to predict earthquakes is unreasonable.” After thirty-two years of trying, all those scientists and all that high-tech equipment had failed to meet the stated goal of warning the public of impending earthquakes.
The report said the government should admit that seismic forecasting was not currently possible and shift the program’s focus. It was the sharpest criticism ever, and it did eventually lead to a change in direction. With so much invested and so much more at stake, though, there was no way the whole campaign would be ditched. People in Japan are intimately aware of earthquakes and the public desire for some kind of warning—whether unreasonable or not—is a political reality that cannot be ignored.
Faults in the Tokai region off the coast of Japan—where three tectonic plates come together—have rattled the earth repeatedly and people worry about the next one. The subduction zone there tore apart in 1854, the great Tokyo earthquake of 1923 killed more than 140,000 people, two more big fault breaks occurred in the 1940s, and another magnitude 8 is expected any day now.
Japan’s first five-year prediction research plan was launched in 1965. In 1978, with still no sign of an impending quake, the program was ramped up with passage of the Large-Scale Earthquake Countermeasures Act, which concentrated most of the nation’s seismic brain power and technical resources on the so-called Tokai Gap. Whenever some anomaly is observed by the monitoring network, a special evaluation committee of technical experts—known locally as “the six wise men”—must be paged and rushed by police cars to a command center in Tokyo, where they will gather around a conference table and focus on the data stream. Then very quickly they must decide whether or not to call the prime minister.
If the anomaly is identified as a reliable precursor, only the prime minister has the authority to issue a warning to Tokyo’s thirteen million residents. If and when that day comes, a large-scale emergency operation will be initiated almost immediately. Bullet trains and factory production lines will be stopped, gas pipelines will shut down, highway traffic will be diverted, schools will be evacuated, and businesses will close. According to one study a shutdown like that would cost the Japanese economy as much as $7 billion per day, so the six wise men can’t afford to get it wrong. False alarms would be exceedingly unwelcome.
Even though the people of Japan still tell their leaders they want some kind of warning system, they were not at all impressed with what happened in Kobe in 1995. With all those smart people and so much equipment focused on the Tokai Gap, apparently nobody saw the Kobe quake coming. It was an ugly surprise from a fault that was not considered a threat. It killed more than six thousand people.
In spite of the embarrassing setback, Kiyoo Mogi, a professor at Nihon University and then chair of the wise men’s committee, defended the prediction program, calling it Japan’s moral obligation not only to its own citizens but to people in poorer, quake-prone countries around the world as well. “Can we give up efforts at prediction and just passively wait for a big one?” he asked. “I don’t think so.” What Mogi did was try to change the rules.
He argued that a definite “yes or no” prediction—as the six wise men are required by law to make—was beyond Japan’s technical capability with the knowledge and equipment available. Instead, he suggested the warnings be graded with some level of probability, expressed like weather forecasts. The government could say, for example, that there’s a 40 percent chance of an earthquake this week. People would be made aware that it might happen and that they ought to prepare themselves.
Mogi’s idea was rejected, so he resigned from the committee in 1996. The program carried on but it gradually changed direction. In the aftermath of Kobe, prediction research spending actually increased again with the installation of a dense web of GPS stations to monitor crustal movement and strain build-up. But by September 2003, with all the new equipment up and running, an array of 1,224 GPS stations and about 1,000 seismometers failed to spot any symptoms of the magnitude 8.3 Tokachi-Oki earthquake. It came as another rude shock.
The prediction team had also started work on what they called a “real-time seismic warning system.” Japanese scientists were hoping to use super-fast technology to reduce the extent and severity of damage once a fault had begun to slip. They loaded a supercomputer with 100,000 preprogrammed scenarios based on the magnitude and exact location of the coming temblor. As soon as the ground began to shake, instruments would feed data to the computer and the computer would spit out the most likely scenario—one minute later.
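The matching step described above can be pictured as a nearest-neighbor lookup in a table of precomputed cases. The Python sketch below is a hypothetical toy, not the Japanese system’s actual code or data; every name and number in it is invented for illustration.

```python
# Toy sketch of the scenario-lookup idea: a library of precomputed outcomes
# keyed by source location and magnitude, matched against the network's first
# estimates. Purely illustrative; the real system's inputs, scenario count,
# and matching rules are far more elaborate.

from dataclasses import dataclass
import math

@dataclass
class Scenario:
    lat: float        # source latitude used to precompute this case
    lon: float        # source longitude
    magnitude: float
    summary: str      # precomputed picture of likely shaking and damage

def pick_scenario(library: list[Scenario], lat: float, lon: float, mag: float) -> Scenario:
    """Return the stored scenario whose source parameters best match the estimate."""
    def mismatch(s: Scenario) -> float:
        # crude score: distance in degrees plus the difference in magnitude
        return math.hypot(s.lat - lat, s.lon - lon) + abs(s.magnitude - mag)
    return min(library, key=mismatch)

# Hypothetical example: two canned scenarios and a rough first estimate
library = [
    Scenario(35.7, 139.7, 7.0, "strong shaking in the capital region"),
    Scenario(39.0, 141.0, 6.9, "strong shaking in the northern prefectures"),
]
print(pick_scenario(library, 39.1, 140.9, 6.8).summary)
```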
But on June 13, 2008, a magnitude 6.9 shockwave hit northern Japan, killing at least thirteen people and destroying homes and factories throughout the region. The real-time system did signal that a powerful jolt was happening—roughly three and a half seconds after it started—but the source of the quake was too close for the warning to be of much use to places like Oshu, which was only eighteen miles (30 km) from the epicenter. People there received 0.3 seconds of warning. The unfortunate reality is that those closest to the strongest shaking will always be the ones who receive the shortest notice. Even if it works exactly as it should, a real-time warning system will benefit primarily those farther away. On the other hand, it could stop or slow the spread of fires and speed the arrival of emergency crews. So in Japan—at least for the foreseeable future—the supercomputer and the six wise men still have a job to do.
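The arithmetic behind that trade-off is easy to sketch. In the illustration below, the wave speed and alert latency are round-number assumptions, not figures from the Japanese system; in practice the depth of the rupture and the time needed to distribute an alert cut further into the margin, which is how a town only thirty kilometers away can end up with a fraction of a second.

```python
# Rough illustration of why early warning favors distant sites. The wave speed
# and alert latency below are generic assumptions, not the Japanese system's
# actual performance, and real alerts lose further time reaching users.

S_WAVE_SPEED_KM_S = 3.5   # typical speed of the strong, damaging S waves
ALERT_LATENCY_S = 3.5     # assumed delay from rupture start to issued alert

def warning_seconds(distance_km: float) -> float:
    """Seconds between the alert going out and strong shaking arriving."""
    s_wave_arrival = distance_km / S_WAVE_SPEED_KM_S
    return max(0.0, s_wave_arrival - ALERT_LATENCY_S)

for d in (10, 30, 60, 120, 200):
    print(f"{d:>3} km from the source: ~{warning_seconds(d):.1f} s of warning")
```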
Not only was the Parkfield earthquake a dozen years late, but the densely woven grid of seismographs, strainmeters, lasers, and other equipment that made the area one of the most closely watched rupture patches in the world had apparently failed to spot any obvious symptoms or definite precursors. In 1934 and 1966 the Parkfield main shocks had been preceded by apparently identical magnitude 5 foreshocks, each about seventeen minutes prior to the magnitude 6 main event. But not this time.
In 1966 as well, the fault had seemed to creep a bit more than normal in the weeks before the failure. There were reports of new cracks in the ground and a water pipe crossing the zone broke the night before the rupture. Nothing like that happened before the 2004 event. No obvious foreshocks or slip before the main event. Seven “creepmeters” were deployed along the rupture zone with nothing to show for the effort. But all was not lost according to Allan Lindh, who in early 2005 wrote an opinion piece for Seismological Research Letters defending the work at Parkfield. His paper sounded a new rallying cry for prediction science.