Saturday, March 12, 2011

Tsunami's Top Model: Science of Predicting Monster Waves

A map of estimated tsunami travel times.
CREDIT: NOAA

The 8.9-magnitude earthquake that struck Japan earlier today (Mar. 11) sent a deadly wall of water roaring ashore on the country's main island of Honshu, killing hundreds and washing away cars and buildings in a churning tide of debris.

The quake, which ruptured about 80 miles (130 kilometers) from Japan's northeastern coastline, occurred when one tectonic plate dove violently beneath another, causing a nearly 300-mile (480-km) swath of the seafloor to lurch upward, generating a tsunami.

The devastation in Japan was swift. The monster wave arrived less than two hours after the quake — the world's fifth-largest on record. However, an ocean away, calculations were under way to see what the tsunami would do over the coming hours.

Shortly after the temblor, the U.S. National Oceanic and Atmospheric Administration (NOAA) released a comprehensive list of estimated tsunami heights and arrival times for the North American coast, and watches and warnings were issued from Alaska to California.

Formulating those predictions can be a tricky business.

Damage details
Uri ten Brink, a research geophysicist at the U.S. Geological Survey, said figuring out how fast a tsunami will move is fairly straightforward.

"What is hard to predict is the level of the tsunami generation — the amplitude of the wave," ten Brink told OurAmazingPlanet.
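Ten Brink's point can be made concrete: in the open ocean a tsunami behaves as a shallow-water wave, so its speed depends only on water depth (v = √(g·d)) — which is why speed is the easy part. A minimal sketch of the arithmetic (the depth and distance figures below are illustrative assumptions, not values from the article):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed(depth_m: float) -> float:
    """Shallow-water wave speed in m/s: v = sqrt(g * d)."""
    return math.sqrt(G * depth_m)

def travel_time_hours(distance_km: float, mean_depth_m: float) -> float:
    """Crude ocean-crossing time assuming a constant mean depth."""
    speed = tsunami_speed(mean_depth_m)        # m/s
    return (distance_km * 1000) / speed / 3600

# At a typical Pacific depth of ~4,000 m, a tsunami moves
# at roughly jetliner speed:
print(f"{tsunami_speed(4000) * 3.6:.0f} km/h")       # ~713 km/h

# A rough ~8,000 km crossing (on the order of Japan to the
# U.S. West Coast) then takes about half a day:
print(f"{travel_time_hours(8000, 4000):.1f} hours")  # ~11.2 hours
```

Real forecasts like NOAA's integrate the speed over actual seafloor bathymetry rather than assuming one mean depth, but the depth-only dependence is what makes arrival times far easier to predict than wave heights.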

A tsunami has two key ingredients that are important for scientists trying to model how a given wave will behave: amplitude and wavelength.

Amplitude is essentially how tall a wave is — the height of its crest above the undisturbed sea surface. Wavelength is the distance between successive peaks.

Ten Brink said these qualities can be illustrated by simply turning on a radio. Turn up the volume, and you've just adjusted the sound waves' amplitude. But changing the volume on your radio doesn't alter the sound's pitch — its wavelength.
If a tsunami's amplitude is very large (loud), it will produce a taller wave. If a tsunami's wavelength is very long (that would be the same as a low, deep sound), it will travel far before it loses energy.
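The amplitude half of the analogy also explains why a tsunami that is barely noticeable in deep water becomes a wall of water at the coast: as the sea shallows, the wave slows and its energy piles up, and to first order the amplitude grows as depth^(-1/4). This is Green's law — a standard shoaling result, not named in the article — and the numbers below are illustrative assumptions:

```python
def shoaled_amplitude(amp_m: float, depth_from_m: float, depth_to_m: float) -> float:
    """Green's law: amplitude scales as depth**(-1/4) as a wave shoals."""
    return amp_m * (depth_from_m / depth_to_m) ** 0.25

# A half-meter bump over 4,000 m of open ocean...
open_ocean_amp = 0.5
# ...grows several-fold by the time it reaches 10 m of water:
print(f"{shoaled_amplitude(open_ocean_amp, 4000, 10):.1f} m")  # ~2.2 m
```

Full models like MOST go well beyond this first-order rule, which is one reason predicting amplitude is so much harder than predicting speed.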

New models

Scientists at federal agencies are using a newly developed tool — a modeling system — called MOST (Method of Splitting Tsunami) to help predict how tsunamis will develop.

The system, widely adopted just last year, has vastly improved predictions of a tsunami's behavior and effects — wavelength and amplitude among them — and as a result, tsunami warnings have become far more detailed and accurate, according to Costas Synolakis, a professor and director of the Tsunami Research Center at the University of Southern California who, along with NOAA's Vasily Titov, developed MOST.

MOST models a tsunami from generation to target, Synolakis said, "from the moment it's generated underwater to the maximum penetration point inland."

Even MOST's initial forecasts of tsunami effects, formulated before all the data from buoys scattered around the Pacific Ocean were incorporated, were very accurate, Synolakis said. In addition, the model adjusts itself along the way, changing predictions as conditions and data change.

Although MOST is a huge step forward, Synolakis said, there's always room for improvement, and researchers need better seismic information, more data-collecting buoys bobbing across the world's oceans, and the ability to incorporate GPS data into the model.
"The ultimate goal is to improve the forecast and make it faster when the earthquake happens very close to a coast," Synolakis told OurAmazingPlanet, adding that the vast amount of data collected in the aftermath of the Japan earthquake will prove invaluable to scientists trying to improve tsunami models, and perhaps save lives in the future.

"I guess that's a silver lining," he said.

Reach Andrea Mustain at amustain@techmedianetwork.com. Follow her on Twitter @AndreaMustain.
This article was provided by OurAmazingPlanet, a sister site to LiveScience.

Will Next Time Be Different?

by Raghuram Rajan

CHICAGO – Carmen Reinhart and Kenneth Rogoff, in their excellent, eponymous book on debt crises, argue that the most dangerous words in any language are “This time is different.” Perhaps the next most dangerous words are “Next time will be different.”

These words are often uttered when politicians and central banks want to bail out some troubled segment of the economy. “Yes,” one can almost hear them saying, “we understand that bailing out banks will subvert market discipline. But you cannot expect us to stand by and watch the system collapse, causing millions of innocent people to suffer. We have to live with the hand we are dealt. But next time will be different.” They then use every tool they have to prevent economic losses on their watch.

Governments’ incentives are clear. The public rewards them for dealing with the problem at hand – whether building levees to protect houses built on a flood plain or rescuing banks that have dodgy securities on their balance sheets. Politicians and central bankers gain little by letting the greedy or careless face the full consequences of their actions, for many innocent people would suffer as well.

A sympathetic press would amplify their heart-rending stories of lost jobs and homes, making those counseling against intervention appear callous. Democracies are necessarily soft-hearted, whereas markets and nature are not; government inevitably expands to fill the gap.

To the extent that the rough justice meted out by markets or nature teaches anyone to behave better, it has consequences far beyond the horizon of anyone in power today. When asked to choose between the risk of being known to posterity as the central banker who let the system collapse and the intangible future benefits of teaching risk takers a lesson, it does not take genius to predict the central banker’s decision.

Democracy tends to institutionalize moral hazard in sectors that are economically or politically important, such as finance or real estate, allowing them to privatize gains and socialize losses.

Even though the authorities insist that the next time will be different, everyone knows that they will make the same decision when confronted with the same choice again.

So, knowing that next time will not be different, the authorities try their best to prevent a “next time.” But risk takers have every incentive to try their luck again, knowing that, at worst, they will be bailed out. In this cat-and-mouse game, risk takers have the upper hand.

For one thing, risk takers are typically small, cohesive interest groups that, once rescued, have a powerful incentive, as well as the resources, to buy the political influence needed to ensure a return to the status quo ante. If risk takers were allowed to face more serious losses, they would have fewer resources to fight political attempts to constrain their risky activities.

Moreover, the public does not have a long memory, a long time horizon, or an appetite for detail. Even as the United States’ voluminous Dodd-Frank bill tried to ensure that bankers never subjected American taxpayers to undue risk again, public attention had moved on to the state of the real economy and unemployment.

Why focus on financial regulation when the risks of an immediate collapse are small, and when the details are so tedious? As technical experts and lobbyists took over, and the public lost interest, Dodd-Frank became friendlier and friendlier to the banks.

So how can this one-way betting be stopped? The scary answer may be that it does not end until governments run out of money (as in Ireland) or the public runs out of sympathy (as in Germany vis-à-vis the rest of Europe).

To avoid that fate, governments should start by recognizing that the system is programmed to respond to deep distress, and that they can do nothing about it. But they must try to ensure that they do not destroy incentives by doing too much. And they must offset the distortions created by intervention in other ways.

For example, the US Federal Reserve has essentially guaranteed the financial sector that if it gets into trouble, ultra-low interest rates will be maintained (at the expense of savers) until the sector recovers.

In the early to mid-1990s, rates were kept low because of banks’ real-estate problems. They were slashed again in 2001 and kept ultra-low after the dot-com bust. And they have been ultra-low since 2008. Senior Fed policymakers deny that their interest-rate policy bears any responsibility for risk taking, but there is much evidence to the contrary.

It would be difficult for the Fed to respond differently if the financial sector gets into trouble again. But it does not have to maintain ultra-low interest rates after the crisis has passed, especially if those rates have little impact on generating sustainable economic activity. Doing so merely rewards banks for their past excesses – and taxes savers.

More importantly, if the Fed wants to restore incentives for risk takers and savers, it should offset the effects of staying “low for long” in bad times by increasing interest rates more rapidly than is strictly necessary as the economy recovers.

This will, of course, be politically difficult, because the public has been programmed to think that ultra-low rates are good, and higher rates bad, for growth, without any consideration for the long-term sustainability of growth.

Finally, the pressure on governments to intervene would be lower if individuals had access to a minimum safety net.

Official US policy is so activist in downturns (regardless of its effectiveness) partly because unemployment is so costly to workers – who have little savings, unemployment benefits that run out quickly, and health care that is often tied to a job.

A stronger safety net for individuals might allow politicians to accept more corporate or financial-sector distress, and help bolster their claim that next time really will be different.

Raghuram Rajan, a former chief economist of the IMF, is Professor of Finance at the University of Chicago and the author of Fault Lines: How Hidden Fractures Still Threaten the World Economy.