On July 12, 1993, little more than a year after Cascadia’s fault started unzipping in Petrolia, California, another powerful Ring of Fire earthquake tore the ocean floor west of Hokkaido in the Sea of Japan, hoisting a mountain of seawater that quickly broke under the force of gravity into a series of tsunamis. On nearby Okushiri Island seismic damage was only one of several disasters. Toppled fuel tanks and broken gas pipes fed fires that spread rapidly through the rubble. Cape Aonae, a peninsula on the south end of the island, was completely overtopped by thirty-foot (10 m) waves. The highest tsunami to hit Okushiri was almost thirty meters—a wall of water nearly a hundred feet high.
The scariest part was that all of this happened in the middle of the night, so people living there never saw the tsunamis coming—yet they clearly knew what to expect. The Japanese had learned enough from painful experience with earthquakes and tsunamis that most of the island’s residents instinctively moved to higher ground as soon as the earth started to rumble. Almost two hundred died and many thousands were injured. Homes, businesses, and the main harbor were badly damaged. The toll would have been far worse if more people had lingered in their wrecked villages only to be drowned by the train of towering waves that hissed and roared from the darkness and slammed ashore a short time after the jolt.
There was little that Vasily Titov could do personally for the people of Okushiri Island. By the time he moved to PMEL in Seattle, however, his tsunami model was advanced enough to be ready for a real-world test that might help others in the future. He and his research partners gathered a wealth of new details from the Japanese about where the water went and how high it reached along the beaches. In the tragedy of Okushiri Island there might be just enough new “data points” to fine-tune his and several other models that were being developed so that lives could be saved the next time.
“We cannot say when the next big earthquake is going to happen,” said Titov. “However, from the moment a tsunami is generated, if you know some data about the tsunami, our model can actually tell you pretty well what happens next. How high the tsunami wave is going to be at the coastline, how big the impact is going to be at a particular location. The only thing we have to know for that is the measurement of the wave.” Not surprisingly, Japanese scientists had made very precise observations of what happened on Okushiri and along the Hokkaido coast.
One of the many tricks to making a computer simulate a tsunami was learning how to create numerical codes that could reproduce the nonlinear movement of water as a tsunami got bigger and bigger. Before the 1993 wave, Titov and others had created several digital simulations that accurately mimicked the behavior of water in laboratory tests. Titov’s software even performed well in terms of predicting the outcome of a real tsunami generated in the Aleutian Islands.
“It was not a forecast in the operational sense of the word,” Titov conceded, “but I had all the components in my computer. And when the tsunami came, the comparison was so good,” he paused, searching for the words, “I could not believe my eyes. In a nutshell it performed much better than expected.” The Aleutian tsunami that served as his earliest test case was another of those relatively small waves that caused little damage. He knew that bigger waves were not just more of the same. At a certain point they morphed into something else entirely. Two plus two could add up to five or even ten in the nonlinear world of killer waves.
“Tsunamis are such beasts that they change their attitude, if you will, when they grow bigger,” Titov explained. “It’s sort of a trivial thing to say, but in terms of a mathematical model, it means that it goes from the linear stage to be a nonlinear phenomenon. And nonlinear is much more difficult to predict, much more difficult to model.”
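Titov's point about waves "changing their attitude" can be sketched numerically. For long waves in shallow water, the linear speed is roughly the square root of gravity times depth; for a wave of finite height, the crest effectively rides in deeper water and so outruns the trough, which is why big waves steepen into something qualitatively different. The short Python sketch below is illustrative only: the depth and amplitudes are invented for the example, and this is a textbook approximation, not the numerics of Titov's actual model.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def linear_speed(depth_m):
    """Linear shallow-water wave speed: c = sqrt(g*h)."""
    return math.sqrt(G * depth_m)

def nonlinear_speed(depth_m, amplitude_m):
    """Crude finite-amplitude approximation: c = sqrt(g*(h + a)).
    The crest rides in effectively deeper water than the trough,
    so a large wave steepens as it grows."""
    return math.sqrt(G * (depth_m + amplitude_m))

depth = 10.0  # hypothetical nearshore depth, metres
for amp in (0.1, 1.0, 5.0):
    c_lin = linear_speed(depth)
    c_non = nonlinear_speed(depth, amp)
    print(f"amplitude {amp:4.1f} m: linear {c_lin:.2f} m/s, "
          f"nonlinear {c_non:.2f} m/s ({100*(c_non/c_lin-1):.1f}% faster)")
```

For a ripple a few centimetres high the correction is negligible, which is why small waves behave linearly; for a wave half the water's depth it is large, and the simple formula itself starts to break down. That breakdown is exactly the regime that made the model hard to build.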
The NOAA team needed data from a larger wave to plug in to the computer if they were to see how well the model mimicked what happened when nature went on a rampage. “What was missing was the big event,” said Titov, meaning a wave that could “test the system from the beginning to the end.” That event—the wave that became the benchmark for his model—was the one that hit Okushiri Island in July 1993. He and colleague Chris Moore took the camera crew and me to an editing room where they showed us the results on a large, high-definition flat-panel screen.
The images mesmerized everyone in the room. There in full 3D relief stood Okushiri Island as the leading wave approached the beach. Instead of black night we could see it all in perfect daylight detail, a view from space that could zoom right down to sea level and hover at any angle to see what the wave would do from every conceivable perspective. Titov and Moore had taken data points from the Japanese scientists, entered those numbers into the computer and rolled the timeline backward to the beginning.
Knowing how the wave ended—how high it pegged the needles on tide gauges, how far up the various beaches it ran—they reverse-engineered the event all the way back to flat water the moment before the Hokkaido quake and could play it again and again by clicking a mouse. This was more than just high-quality 3D animation—they had converted raw data into computer code to recreate the wave, then converted it again to graphical animation files that let them “fly” through the air above Okushiri and look at every hair on this monster’s head.
When they hit playback, they ran the event in slow motion to examine exactly how the waves changed shape, size, and direction as they rolled uphill from deep water, scraping across the rough terrain of the foreshore, the fronts of the waves slowing down because of friction and a heavy load of silt and the trailing edges still moving fast, rising high and crashing hard at last against dry land. It was amazing to see, especially when I reminded myself that this was based on real data from the real wave, not the fantasy of some Hollywood special-effects studio.
“What I like about it,” said Chris Moore, his hand on the mouse, “is the aerial photograph pasted over so that you can actually see exactly where the town is situated with respect to the wave.” It looked like Google Earth come to life in 3D. “This is an airstrip.” Moore nodded at the screen. “Each of these little dots is a rooftop—in reds, whites, and blues. So you can sort of see approximately how large the wave is.”
Moore pointed to the small peninsula that was about to be overtopped by the tsunami. In the animation a train of three waves approached the beach. “Here it’s shallow,” he said, hovering the cursor near the southernmost tip of land. “It’s deeper water off of here.” He moved the cursor farther off the beach. “And this wave front, as it animates through, tends to bend around the headland because the wave is slower in shallower water and faster in deeper water. So it bends right around there.”
The computer made it perfectly obvious why the waves would slow and turn as they did. “And then this group of waves here . . .” Moore zoomed closer to a second point of land, the graphics revealing a steep cliff overlooking a small bay. “It also shows reflection off of that headland.” The incoming waves bounced off the wall of rocks and ricocheted back across the bay to hit what had been a sheltered cove on the lee side of the incoming tsunami’s path.
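The bending Moore traced with the cursor is ordinary wave refraction. Because a long wave's speed depends only on water depth, the part of the wavefront over shallow water lags the part over deep water, and the crest swings around toward the headland, much as light bends when entering glass. A minimal sketch of that effect, using the Snell's-law analogue for water waves with invented depths (not the surveyed bathymetry of Okushiri):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def wave_speed(depth_m):
    # Long-wave (shallow-water) speed depends only on depth.
    return math.sqrt(G * depth_m)

def refracted_angle(theta_deep_deg, deep_m, shallow_m):
    """Snell's-law analogue for water waves:
    sin(theta_shallow)/c_shallow = sin(theta_deep)/c_deep."""
    c_deep, c_shallow = wave_speed(deep_m), wave_speed(shallow_m)
    s = math.sin(math.radians(theta_deep_deg)) * c_shallow / c_deep
    return math.degrees(math.asin(s))

# A front approaching at 40 degrees in 100 m of water swings sharply
# toward shore-normal as it crosses into 5 m of water -- which is why
# the crest appears to wrap around a shallow headland.
print(f"{refracted_angle(40.0, 100.0, 5.0):.1f} degrees in the shallows")
```

The steeper the depth contrast, the harder the turn, so a point of land surrounded by shoals focuses wave energy on itself.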
Suddenly Titov tapped the space bar to pause the wave. “See this kind of fissure when the wave withdrew from the coast and formed a hydraulic shock?” He pointed to a frozen wall of water standing just beyond a beach that had been completely drained of its surf right down to bare sand. “That’s the first time—this animation—is the first time I saw anything like that. And if there was no animation, we probably wouldn’t have picked it up.”
“Yeah,” Moore enthused, “let’s just single-step through it and see how it goes.” He rewound the wave ever so slightly and played it back frame by frame. “So right about here is where it’s forming,” he mumbled as the leading wave fell back down the beach, taking all the water with it. “So this water is receding just as the next wave is coming in. It’s almost forming a standing wave or hydraulic shock.”
Because the Okushiri waves struck in darkness there were few eyewitnesses as the tsunami approached. This animation was apparently the only way to know how the shape of the local sea floor had affected the incoming water. Case histories elsewhere explained and verified what the computer was showing.

“There are eyewitness accounts from the Chile event that sounded really weird,” Titov offered. “They were saying the second wave was sitting outside—offshore—waiting and gaining force . . . And that’s what it was,” he pointed again at the screen. “That was this hydraulic trough—not propagating any more, just sitting there . . . The second wave competing with the retreat from the first wave, creating a standing wave pattern.”
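The "wave sitting offshore" that the Chilean witnesses described has a simple linear caricature: when an incoming wave superposes on an equal flow retreating off the beach, the two traveling waves sum to a pattern that oscillates in place, with nodes that never move at all. A toy illustration with made-up numbers follows; a real tsunami bore is nonlinear and far messier than this sketch, but the standing-wave mechanism is the same.

```python
import math

def surface(x, t, amplitude, k, omega):
    """Two equal waves travelling in opposite directions -- an incoming
    crest meeting water retreating off the beach -- sum to a standing
    wave: 2*A*sin(k*x)*cos(omega*t)."""
    incoming = amplitude * math.sin(k * x - omega * t)
    outgoing = amplitude * math.sin(k * x + omega * t)
    return incoming + outgoing

A, k, w = 1.0, 0.01, 0.05      # illustrative values, not Okushiri data
antinode = (math.pi / 2) / k   # where sin(k*x) = 1: rises and falls in place
node = math.pi / k             # where sin(k*x) = 0: the surface never moves
for t in (0.0, 30.0, 60.0):
    print(f"t={t:5.1f} s  antinode {surface(antinode, t, A, k, w):+.2f} m  "
          f"node {surface(node, t, A, k, w):+.2f} m")
```

At the antinode the water heaves up and down without going anywhere, "not propagating any more, just sitting there," exactly as the witnesses reported.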
The point of showing us the playback of the Okushiri tsunami was to illustrate how far wave modeling had come by 1993. “This was in fact the first wave that we’ve tested our model against in terms of real event simulations,” said Titov. “It was the most studied and the largest event before Sumatra, really.” The model was still a prototype, however, and much work remained to be done. New research programs were launched to figure out how to simulate the small-scale details of coastal terrain and more complex problems such as the way waves break, how they transport sediment and debris, and how a wall of moving water interacts with solid objects on land.
The lessons of Okushiri were encouraging for Vasily Titov and his colleagues. “You really wonder what’s behind it,” he mused, “what kind of wonders mathematics can do. Writing equations, putting it in the computer, plotting it. And then you see the wave evolving just like you saw on TV . . . Things that I saw when I did the animation of the tsunami in Japan—we would never see it just looking at the formulas . . . It’s really the power of mathematics working for you.”
As the research began to accelerate, so did the ramifications of ignoring or neglecting Cascadia as a major public-safety issue. Within months of the Okushiri disaster in Japan, another scare in the Pacific Northwest and sobering new science from Canada would force politicians to take the initial concrete steps toward a viable coastal warning system. Festering debates and old controversies would be put to rest as the scale and potential of Cascadia’s fault became glaringly undeniable.
In San Francisco, on Market Street near the Ferry Building, the ground beneath the street collapsed. This photo was taken on April 20, 1906, two days after the quake. (U.S. Geological Survey)
A scene of near-total destruction in downtown San Francisco on May 7, 1906. The outer framework of the California Hotel (center) survived the quake, while all around it was rubble and ruin. (U.S. Geological Survey)
A scene from the Alaska earthquake of March 27, 1964. Fourth Avenue in Anchorage collapsed when the ground subsided eleven feet (3.3 m) vertically and lurched fourteen feet (4.3 m) horizontally at the same time. (U.S. Geological Survey)
A man examines a ten-ply tire through which a plank of wood has been driven by a wave—an indication of the violent force of the tsunami surge that struck Whittier, Alaska, on March 27, 1964. (U.S. Geological Survey)
This photograph, taken in Port Alberni, British Columbia, on March 28, 1964, shows the aftermath of the Alaska earthquake and tsunami. Two of six waves that travelled more than 1,800 miles (3,000 km) from the Gulf of Alaska roared up the Alberni Inlet, flipping cars, smashing fifty-eight homes, and damaging 375 others. (Alberni Valley Museum Photograph Collection, PN13805/Charles Tebby)
The Alaska tsunami of 1964 also hit Crescent City, California, where more than a dozen people died when they ventured back into the danger zone after the first wave, thinking the worst was over. Four of the six huge waves generated in Alaska struck the California coast with deadly effect. (Del Norte County Historical Society)
An aerial view of the eruption of Mount St. Helens, Washington, near the Oregon border, on May 18, 1980. This photo shows physical evidence of a tectonic plate being shoved beneath North America along the Cascadia Subduction Zone. (U.S. Geological Survey)
Earthquake energy traveled a long distance from Mexico’s Pacific coast to Mexico City on September 19, 1985, with devastating effects on high-rise buildings, which vibrated in harmonic resonance with the low-frequency shockwaves. (U.S. Geological Survey)
Paleoseismologist Brian Atwater discovered this ghost forest on the Copalis River in Washington State in March 1986. The trees and other freshwater plants were killed by salt water when coastal lowlands dropped below high-tide level, probably during Cascadia’s last megathrust earthquake on January 26, 1700. (U.S. Geological Survey/Brian Atwater)
The Loma Prieta (or World Series) earthquake of October 17, 1989, caused deadly destruction in the San Francisco Bay Area. This photograph shows the support columns that failed, causing the collapse of the Cypress Viaduct (Interstate 880) in Oakland. (U.S. Geological Survey)
This photograph from March 2007 shows signs of a new public awareness of tsunami dangers on West Coast beaches. Residents of Seaside, Oregon, have learned the tsunami evacuation route and know they will have as little as fifteen minutes to leave the downtown core once a large quake on the nearby Cascadia Subduction Zone has begun. (Doug Trent)
Hauling a turbidite core sample aboard the research ship Roger Revelle off the coast of Sumatra in May 2007. Offshore landslides triggered by massive earthquakes have left a series of “tectonic fingerprints” in the deep-sea mud. (Chris Aikenhead)
A tsunami simulation at Oregon State University in August 2007. Using a scale model of the resort community of Seaside, the simulation shows how a thirty-foot (10 m) tsunami expected from a Cascadia Subduction Zone earthquake would sweep across the community, inundating nearly all the downtown business district and many residential areas. (Omni Film Productions Ltd.)
In August 2007 cinematographer Ian Kerr (left) prepares to record a tsunami simulation with a high-speed camera (protected by a plastic bag). The snorkel lens will provide a graphic, street-level view as the wave rips across this scale model of Seaside, Oregon. (Omni Film Productions Ltd./Scott Spiker)
A tsunami simulation in Seaside, Oregon, in August 2007. The wooden blocks represent residential dwellings, single-story commercial buildings, and multi-story condominiums or hotels. (Omni Film Productions Ltd./Scott Spiker)
Cascadia’s wave makes landfall on the west coast of Vancouver Island in a computer-generated tsunami simulation created in 2008 for the documentary Shockwave. The first of several tsunami waves can be expected to hit areas from British Columbia to California fifteen to twenty minutes after the earthquake. (Omni Film Productions Ltd./Artifex Studios)
A computer-generated illustration from the documentary Shockwave (2008) shows the leading edge of a tsunami generated by a megathrust earthquake along Cascadia’s fault. The tsunami arrives in Ucluelet harbor on the west coast of Vancouver Island fifteen to twenty minutes after the shaking stops. (Omni Film Productions Ltd./Artifex Studios)
CHAPTER 15
Defining the Zone: Hot Rocks and High Water
Decades of terror, or the magnitude 9 scenario—which will it be for Cascadia’s fault? Apparently Gary Carver and Brian Atwater took enough flak from some of their colleagues for using such terms to describe the possible fate of the Pacific Northwest that they decided to tone down the language in subsequent talks. They started referring somewhat in jest to the biggest, full-margin rupture—the magnitude 9 scenario—as a “dinner sausage” earthquake and the series of slightly smaller magnitude 8s as being “breakfast links.” The question of which scenario was more likely to happen remained unanswered and vigorously debated.
Early in the new year of 1994, in the AGU’s Journal of Geophysical Research, a team of scientists at the Geological Survey of Canada took a stab at defining how much of the subduction zone was locked together and which parts might be moving along smoothly. Presumably, if you knew how much of the zone was locked—if there were some way to measure and define it—you might be able to estimate the size of the rupture that would be generated when the thing finally came unstuck. Drawing a line around the “seismogenic zone” would tell emergency planners how close the quake’s epicenter was going to be to major urban areas like Vancouver, Victoria, Seattle, and Portland.
The Canadian team gathered and distilled all the latest reports from both sides of the international border that showed how much the outer coast was being lifted up, dropped down, or squeezed together. Then they plotted the boundaries of each type of data to show exactly where the western edge of the continent was being deformed and in what direction. Herb Dragert’s laser and GPS surveys were coming to a sharp new focus. Garry Rogers plotted decades of seismic events that showed where all the coastal earthquakes had been and how deep their epicenters were. These overlapping maps of strain and epicentral data points formed the rough outline of the locked subduction zone.
The first thing that jumped out from this updated, multilayered mass of evidence was the idea that the built-up strain and deformation along the western edge of the continent had to be “transient.” The strain was obviously getting released from time to time. The rates at which the coastline was being lifted and the mountains tilted—but especially the speed at which the ground was being squeezed horizontally—were considered “geologically unreasonable.” If the hoisting and crunching had continued at this rate with no interruptions for a million years, the mountains along the west coast would be several miles higher than they are now and would look much like the Himalayas.
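The arithmetic behind "geologically unreasonable" is crude but persuasive. The rate below is invented for illustration, not the GSC's measured value, but any rate in the millimetres-per-year range leads to the same absurdity when extrapolated over geologic time:

```python
# Back-of-the-envelope: why steady, never-released strain accumulation
# is "geologically unreasonable." The uplift rate here is hypothetical.
uplift_rate_mm_per_yr = 4.0    # assumed coastal uplift rate
years = 1_000_000              # a geologically short interval

total_uplift_km = uplift_rate_mm_per_yr * years / 1e6  # mm -> km
print(f"{total_uplift_km:.1f} km of uplift in a million years")
```

Four kilometres of extra elevation would put Himalayan-scale peaks along the coast. Since the mountains are nowhere near that high, the strain must be shed periodically, and great earthquakes are the obvious release valve.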
Cascadia's Fault Page 19