
The Future of Everything: The Science of Prediction


by David Orrell


  In any case, the existence of chaos did not imply that forecasts should not be made—only that there should be more of them. Weather forecasting centres such as the European Centre for Medium-Range Weather Forecasts (ECMWF) in Reading, England, and the National Center for Atmospheric Research in Boulder, Colorado, began developing elaborate ensemble forecasting techniques to deal with the effects of chaos. If nearly all errors were due to the initial condition, then a number of forecasts from perturbed initial conditions could be used to form a probabilistic forecast. The ensemble teams were even comfortable enough with the models to adopt a perfect model assumption: “We will assume that our numerical model is essentially perfect,” wrote one.25 A strange thing happened, though. Instead of diverging rapidly, as expected, the perturbed forecasts tended to cluster closely together. It was decided that these were the wrong kind of perturbations, and sophisticated algorithms were developed to find specialized perturbations that diverged more rapidly.
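
  To see how this works in miniature, here is a toy sketch in Python (illustrative only, not any weather centre's actual code), using the three-variable Lorenz system that appears later in this chapter. Twenty forecasts are started from slightly perturbed initial conditions, and the spread among them stands in for forecast confidence.

```python
# A toy ensemble forecast (not ECMWF's actual scheme): run the
# three-variable Lorenz system from twenty slightly perturbed
# initial conditions and look at how far the members spread apart.
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz equations."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step_rk4(s, dt):
    """Advance the state by one fourth-order Runge-Kutta step."""
    k1 = lorenz(s)
    k2 = lorenz(s + 0.5 * dt * k1)
    k3 = lorenz(s + 0.5 * dt * k2)
    k4 = lorenz(s + dt * k3)
    return s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

rng = np.random.default_rng(0)
base = np.array([1.0, 1.0, 1.0])          # the "best guess" initial condition
members = base + 1e-3 * rng.standard_normal((20, 3))  # perturbed copies

dt, n_steps = 0.01, 1000                  # forecast horizon of 10 time units
for _ in range(n_steps):
    members = np.array([step_rk4(m, dt) for m in members])

# A tight cluster means high confidence; a wide spread means low.
print("ensemble spread in x, y, z:", members.std(axis=0))
```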

  The butterfly therefore remained pinned down as the source of practically all forecast error. But is the weather really so delicate and finely poised a system that an insect can stir up a hurricane—or knock it off track—with a beat of its tiny wings?

  OUR DAILY FORECAST

  The fundamental causes of atmospheric motion are relatively simple and relate not so much to butterflies as to that other dynamical system that Pythagoras called the cosmos. If the earth were stationary relative to the sun, so that one side always faced the sun and the other side faced away (as the moon does with the earth), then the weather would be extremely dull. One half of the planet would have perpetual daylight, while the other would have perpetual night. One side would be warm, the other cold. Viewed as a dynamical system, the weather would be almost like a pendulum at rest.

  Instead, the earth is constantly spinning like a top, which means that each side alternates between day and night, warming and cooling. Also, the planet revolves around the sun, with a period of one year. Because the earth’s axis is at an angle to the sun (as in figure 1.3), first one hemisphere, then the other gets more exposure to light. This results in the seasons. The atmosphere reacts to this differential heating by attempting to equalize the temperature. Warm air at the equator rises, and cold air from the poles moves in to replace it. The atmosphere is constantly being churned around and can never reach equilibrium.

  The flow of air is affected by the fact that the planet is round and is spinning. Land at the equator is travelling fast, but land closer to the poles moves more slowly (because the radius of rotation is smaller). As soon as air begins moving—say, from west to east—it interacts with the spinning motion and curls to the right in the northern hemisphere and to the left in the southern hemisphere. This is known as the Coriolis effect, and it leads to the swirling patterns of winds seen on weather reports. Also, air is subject to the non-linear effects of friction and turbulence, especially at lower altitudes, where it interacts with the land and the oceans. And then there are the effects of moisture to consider: water picked up from the oceans forms clouds, which affect the local heating, which affects the wind, and so on.
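
  The deflection can be made concrete with a line of arithmetic. A brief sketch, using standard textbook values, of the Coriolis parameter, which sets the strength and direction of the effect:

```python
# The strength and sign of the Coriolis deflection are set by the
# Coriolis parameter f = 2 * Omega * sin(latitude).
import math

OMEGA = 7.2921e-5  # earth's rotation rate, in radians per second

def coriolis_parameter(lat_degrees):
    return 2 * OMEGA * math.sin(math.radians(lat_degrees))

for lat in (-45, 0, 45, 90):
    print(f"latitude {lat:>4}: f = {coriolis_parameter(lat):+.2e} per second")
# f is zero at the equator and flips sign between hemispheres,
# which is why large-scale winds swirl in opposite directions
# north and south of it.
```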

  The first step in weather prediction, as Bjerknes wrote, is to specify the initial condition: in other words, today’s weather, as measured by the basic variables of temperature, pressure, wind speed and direction, and humidity. Because it isn’t possible to know these variables at every point, the atmosphere is divided into a giant three-dimensional grid, the spherical counterpart to Cartesian coordinates, which surrounds the world like a cage. The resolution—the coarseness of the division—is determined only by the available computer power. In a modern GCM (general circulation model), the resolution might be about forty kilometres horizontally and one kilometre vertically, though this depends on the exact model. (Local models that simulate the weather in particular geographical regions often use a finer grid but rely on global models for inputs.)

  Because the atmospheric variables must be specified at each cell in the grid, the total number of variables in such a model is of the order of 10 million. In other words, you wouldn’t want to run this on your desktop. ECMWF has some of the world’s fastest computers tucked away in its basement in Reading, a half-hour train ride from London. The Japanese Earth Simulator, with its 5,000 processors, is even zippier and can perform trillions of calculations per second—making it about a million times faster than the ENIAC computer of 1950.26
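
  That figure can be checked on the back of an envelope. In the sketch below, the thirty model levels and the five variables per cell are assumptions chosen for illustration:

```python
# A back-of-envelope version of the variable count, using the
# illustrative resolution quoted above. The 30 model levels and
# the 5 variables per cell are assumptions made for this sketch.
import math

R = 6371.0                            # earth's radius, in km
surface_area = 4 * math.pi * R ** 2   # about 5.1e8 km^2

columns = surface_area / (40 * 40)    # horizontal cells at 40 km resolution
levels = 30                           # 1 km spacing up to about 30 km
cells = columns * levels              # roughly 1e7 grid cells

per_cell = 5  # e.g. pressure, temperature, humidity, two wind components
print(f"grid cells:      {cells:.1e}")
print(f"total variables: {cells * per_cell:.1e}")  # tens of millions
```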

  Observations are carried out on board a range of platforms—weather balloons, ground stations, commercial airliners, boats, satellites. Part of the joy of meteorology is in the hobby-shop nature of much of the equipment. I once attended a meeting in California about the latest observation techniques. In the first talk, a meteorologist from a private company demonstrated his device, called a radiosonde, for measuring pressure and temperature at high altitudes. A tube a couple of feet long, it was packed full of electronics, including a transmitter to beam the information to a weather station. Such devices, which are in common use by weather centres, are dropped from airplanes. A little parachute opens, and they float gently to the ground, recording the weather as they go.

  The next offering was a balloon that did much the same thing, except that it was launched from the ground and floated around with the wind. But both the balloon and the parachute were totally outclassed by a company that had developed a radio-controlled plane that had flown all the way from the U.S. to Europe—a world record for a pilotless device. They were working on a version that could float around in the atmosphere indefinitely, taking its energy from the sun while it continuously recorded the weather.

  This was hard to beat, but there was one challenger: a group of Canadians from Vancouver who had come up with a way to do atmospheric measurements over the ocean. Less data collection takes place over the oceans, which for Western Canada (among other places) presents a problem, since the weather there tends to come from the west (i.e., over the ocean). So the idea was to float a buoy that was armed to the teeth with clusters of rockets, like a hedgehog. Every half day or so, one of the rockets would shoot up into the sky. At the top of its trajectory, a little parachute would again open and it would float down, taking measurements as it dropped.

  The Canadians were working on some technical problems. One was how to stop seagulls from sitting on the rockets. Another was to make sure that the rockets were aimed straight up, even in a storm, to prevent the possibility of accidentally firing at passing ships and starting a war.

  The ultimate piece of kit, of course, is the weather satellite, like the Geostationary Operational Environmental Satellites (GOES), which hover 35,800 kilometres above the earth in geosynchronous orbits.27 The planet is now constantly monitored from above by cameras and sensors, like a dodgy customer at a security-conscious bank.

  Once measurements are obtained, by whichever means, they are transmitted to weather centres. Since the observations are made at a mix of locations around the planet, they must first be interpolated in some way to get the values at the grid points, a procedure known as data assimilation. Every measurement is subject to error, so the assimilation scheme attempts to smooth or filter spurious results. In modern schemes, this is done by adjusting the observations so they are roughly compatible with model predictions from several hours before. The measured state of the atmosphere is called the analysis, and that forms the initial condition for the model forecast. Every few hours, this is passed to the GCM, which cranks out the latest forecast. If there is a modern equivalent of the oracles of ancient Greece, they reside somewhere in the circuits of these computers. Every day they are fed data, and every day they are consulted for a prediction of the future.
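
  The essence of the blending can be shown with a toy, one-variable example (real operational schemes are far more elaborate, and the numbers here are invented):

```python
# A toy, one-variable version of the blending step in data
# assimilation (operational schemes are vastly more elaborate).
def assimilate(background, obs, var_background, var_obs):
    """Blend the model's prior value with an observation,
    weighting each by the inverse of its error variance."""
    gain = var_background / (var_background + var_obs)
    return background + gain * (obs - background)

# Hypothetical numbers: the model carries 15.0 C forward, a noisier
# station reports 17.0 C; the analysis lands closer to the model.
analysis = assimilate(background=15.0, obs=17.0,
                      var_background=1.0, var_obs=4.0)
print(f"analysis temperature: {analysis:.1f} C")  # 15.4 C
```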

  But the predictions still need to be interpreted, just as at Delphi. This is done by human forecasters, often working for private companies, who apply their knowledge of local conditions to improve the result.28 Weather prediction has grown into a multi-billion-dollar industry; forecasts are routinely supplied to all manner of businesses affected by the weather, such as agriculture, energy, transport, and retail. Consumption of soft drinks, ice cream, movies, medicines, and many other goods changes with the weather. Power companies use forecasts to estimate demand and make the expensive decision of whether or not to bring on additional generators. Finance companies offer contracts known as weather derivatives, which insure against anything from a wet winter to the number of frost-free days at an airport, with a worldwide market estimated at $5 billion.29 When the oracle speaks, they listen.

  MEASURING ERROR

  So how reliable are these oracles? To measure the error, we must first choose a metric, a measuring stick. In the Lorenz system, errors were expressed as distance in three-dimensional space. In this Euclidean metric, as it is known, each variable carries the same weight. A GCM, however, contains variables of different types (such as pressure, temperature, and so on) at different locations. Since pressure is not directly comparable to temperature or to windspeed—for one thing, they are in different units—we either have to translate between them or come up with some partial measure. A popular choice in meteorology has been the 500 mb (millibar) height, which is not a pressure but the height at which the atmosphere has a pressure equal to 500 mb. Pressure decreases with height, as Blaise Pascal showed, and 500 mb is about half the pressure at ground level. This metric is often plotted as a kind of contour map: the hilltops represent areas where pressure is higher than usual and the valleys represent low pressure. Meteorologists can read such maps and pick out features that indicate large-scale weather systems, though the relationship with weather on the ground requires much interpretation.
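
  The height of that surface, roughly 5.5 kilometres as noted below, can be recovered from the simple isothermal barometric formula, assuming a typical scale height of about eight kilometres:

```python
# Why the 500 mb surface sits near 5.5 km: with the isothermal
# barometric formula p = p0 * exp(-h / H), pressure halves at a
# height of H * ln(2). The scale height H of about 8 km and the
# 1,000 mb surface pressure are rough, assumed values.
import math

H = 8.0      # scale height, in km
p0 = 1000.0  # surface pressure, in mb
h_500 = H * math.log(p0 / 500.0)
print(f"height of the 500 mb surface: about {h_500:.1f} km")
```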

  To obtain a single number—a kind of distance between the forecast and the observations—we can take the root-mean-square (RMS) difference between the two over a selection of grid points. This gives a sense of the average error over that area. The principle is the same as that for the Lorenz system, but the errors are summed over some thousands of grid points, instead of just three variables. Different metrics, in different atmospheric variables, give different results. A more complete, but harder to compute, metric is known as total energy.30 It translates the different quantities, such as windspeed, temperature, and pressure, into compatible units of metres per second and measures them globally at all grid points. The advantage of total energy is that it accounts for all sources of error.
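
  As a minimal illustration, with random stand-in numbers rather than real forecast data:

```python
# A minimal sketch of the RMS metric, using random stand-in
# numbers rather than real forecast data.
import numpy as np

def rms_error(forecast, observed):
    """Root-mean-square difference over a set of grid points."""
    return np.sqrt(np.mean((forecast - observed) ** 2))

rng = np.random.default_rng(1)
observed = rng.normal(5500.0, 100.0, 10_000)         # e.g. 500 mb heights, in m
forecast = observed + rng.normal(0.0, 30.0, 10_000)  # an imperfect forecast
print(f"RMS error: {rms_error(forecast, observed):.1f} m")
```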

  Finally, we need some benchmark for weather models to beat. Meteorologists typically use one of the so-called naïve forecasts. The first of these is to say that the weather will be the same as the climatology—that is, the average for that day of the year. In the shift map of Chapter 3, this was the “middle of the dough” forecast. The second naïve forecast is persistence, which says that the weather tomorrow or next week will be the same as today’s. If the forecast beats one or both of these, on average, by even a negligible amount, then it is said to have “skill.” Of course, the word here does not have its usual meaning, because neither climatology nor persistence is very good, but skill does imply that the model is on the right track.
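
  Putting a number on skill is straightforward: a standard score compares the forecast's mean-square error with that of the naive reference, with positive values meaning the model beats it. In the sketch below, the data are made up for illustration.

```python
# A common skill score: 1 minus the ratio of the forecast's
# mean-square error to that of a naive reference. Positive means
# the forecast beats climatology or persistence. Data are made up.
import numpy as np

def skill_score(forecast, reference, observed):
    mse_f = np.mean((forecast - observed) ** 2)
    mse_r = np.mean((reference - observed) ** 2)
    return 1.0 - mse_f / mse_r

rng = np.random.default_rng(2)
observed = rng.normal(15.0, 5.0, 365)            # a year of daily temperatures
climatology = np.full(365, 15.0)                 # the "middle of the dough"
persistence = np.roll(observed, 1)               # yesterday's weather
forecast = observed + rng.normal(0.0, 3.0, 365)  # a model with some error

print(f"skill vs climatology: {skill_score(forecast, climatology, observed):+.2f}")
print(f"skill vs persistence: {skill_score(forecast, persistence, observed):+.2f}")
```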

  Given these provisos, we can estimate model accuracy. GCMs have now been around for about fifty years, and over that time, they have improved in a slow, incremental fashion. The greatest improvements, and the most commonly cited statistics, have been in the 500 mb metric, which now shows skill, compared with persistence or climatology, for up to about a week.31 Some days the models will do better, other days worse, but over many separate trials, they will have a slight edge. However, the 500 mb height is not the same as the weather—you can’t feel it on your face. It corresponds to an average altitude of 5.5 kilometres, which is around where pilots like to fly, precisely because it is “above the weather.”32 It is also an intrinsically low-error metric, since it is measured away from most turbulence and storms and tends to average out errors at lower levels.

  Predictions of temperature and windspeed closer to the ground are more difficult. For specialized sectors, such as wind farms, where a small statistical edge in windspeed prediction can translate into economic value, it can be worth using forecasts out to five days.33 For most situations, though, forecasts are noticeably useful for only two or three days, with errors increasing rapidly over that period and then growing more slowly.34 Beyond that, the climatological forecast works about as well—but it doesn’t require a GCM or a supercomputer, or even any interpretation. The most difficult feature to predict (and one of the most important) is precipitation, which has been more resistant to improvement and shows little skill past twenty-four hours.35 In fact, forecasters interpreting the models do not, as a matter of principle, take the numerical output literally, but instead use their own subjective knowledge to significantly improve the result (which is why skilled human forecasters have yet to be replaced by machines).36 Improvements in numerical weather prediction have therefore lagged far behind advances in computer speed and observation technology, despite the great economic value of accurate forecasts.

  Of course, certain features of the atmosphere have a time scale of days to evolve and dissipate, so it is possible to detect them in advance and anticipate their direction. An example is hurricanes (or typhoons, depending on the part of the world), which can be the size of Texas. These systems are like massive heat engines that take energy from warm oceans and release it through condensation at high altitudes. The direction of the circulation is set by the Coriolis effect, so it depends on the hemisphere (counter-clockwise in the northern hemisphere, clockwise in the southern hemisphere). Forecasters can now predict the track of hurricanes with an average three-day error of about 240 kilometres, which is useful for people trying to get out of the way.37 However, estimates of storm intensity are much more difficult and have made little improvement over the past fifteen years.38 Hurricanes draw their energy from the heat in the top few metres of ocean, and as the winds stir up the water, they may bring cooler water up from the depths. A hurricane’s intensity depends on the details of this ocean/atmosphere interaction, which is extremely hard to model or predict. The average intensity may increase if oceans warm because of climate change, which is a concern since the damage caused varies roughly with the cube of windspeed: a 10 percent rise in windspeed translates into roughly a third more damage.39

  Longer-term models, which attempt to predict weather phenomena weeks, months, or even years in advance, are naturally even more prone to error. A number of models have been developed to predict the occurrence of El Niño events, which are caused by upwellings of cool water in the Pacific and are second only to the change of seasons in their effect on global weather. The ocean sloshes around on slow time scales, and El Niño events recur every two to seven years. Severe events can significantly disrupt the global economy: the one in 1997–98 destroyed property in California, caused fires in the Amazonian rainforest, hammered the Colombian coffee harvest, and rang the world climate system like a giant bell. The total damage was estimated at $25 billion (U.S.), though this was offset in certain regions by other benefits, like a warmer winter.40 The phenomenon is clearly worth predicting—accurate forecasts of, say, temperature trends would be worth great sums to power companies alone. Because El Niño is driven by slowly varying ocean effects, there appears to be a degree of predictability. However, according to one report, comparisons of model results with observations have shown that there “wasn’t much skill, if any” over simple estimates. Furthermore, “the use of more complex, physically realistic dynamical models does not automatically provide more reliable forecasts.”41

  Even if it is not possible to make accurate “point predictions” about the weather, it might still be possible to make a probabilistic forecast. Many local forecasters now say that there will be an 80 percent chance of rain tomorrow, instead of coming right out and forecasting rain. One way to go about this numerically is to make an ensemble of forecasts from perturbed initial conditions. If they all call for rain, then confidence is high that rain will occur, while if some call for rain and some for sun, the outlook is mixed. There are two difficulties with this approach, however. First, the model initial condition consists of 10 million or so variables, and there are many different ways to perturb them. Ensemble forecasters typically choose the perturbations that give the maximum effect, which may be highly atypical and potentially says more about model peculiarities than about the weather itself. Second, this only works if you believe that most error is due to the initial condition, which is not very likely.42
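
  The arithmetic of converting an ensemble into a probability is the easy part, as the hypothetical sketch below shows; the hard part is generating an honest ensemble in the first place.

```python
# Turning an ensemble into a probability: the chance of rain is
# just the fraction of members that call for it. These ten
# members are hypothetical.
members_call_rain = [True, True, False, True, True,
                     True, False, True, True, True]
p_rain = sum(members_call_rain) / len(members_call_rain)
print(f"chance of rain tomorrow: {p_rain:.0%}")  # 80%
```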

  Some modern ensemble schemes also perturb the model by varying parameters in a random way, adding random terms, or simply using a number of different models. Here it is far harder to choose appropriate perturbations because they occur not just at time zero, as with the initial condition, but must change with time. And since the model equations do not capture every aspect of the system, there is no reason to believe that any settings of the parameters will reproduce the actual weather. Also, to make a true probabilistic forecast, we need to know the probability that a parameter will vary in a certain way, which is awkward if a correct value does not exist.43 The results from ensemble experiments are hard to interpret—the area has evolved a rich and forbidding array of statistical tools—but these schemes appear to offer only slight improvement over simply running a single high-resolution forecast.44
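
  A perturbed-parameter ensemble can be sketched in the same toy setting as before, with an arbitrarily assumed spread of parameter values:

```python
# A sketch of a perturbed-parameter ensemble in the same toy
# setting as before: each member gets its own value of the Lorenz
# parameter rho. The 2 percent spread is an arbitrary assumption,
# which is precisely the difficulty described above.
import numpy as np

rng = np.random.default_rng(3)
member_rhos = 28.0 * (1 + 0.02 * rng.standard_normal(20))
# Each member would then be integrated with its own rho, e.g.
# using the step_rk4 routine from the earlier sketch.
print("rho across the ensemble:", np.round(member_rhos, 2))
```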

  If weather prediction is so hard, you may wonder how publications such as the Farmer’s Almanac can claim to make reliable forecasts months in advance, based on things like sunspots, the planets, and the tidal action of the moon. The answer is that they don’t do any better than climatology.45 That hasn’t stopped almanacs from being consistent best-sellers for about the past 3,000 years. At least they are an amusing read. In the 1700s, Benjamin Franklin wrote his own very popular almanac under the name Richard Saunders:

  Courteous Reader,

  This is the 15th Time I have entertain’d thee with my annual Productions; I hope to thy Profit as well as mine. For besides the astronomical Calculations, and other Things usually contain’d in Almanacks, which have their daily Use indeed while the Year continues, but then become of no Value, I have constantly interspers’d moral Sentences, prudent Maxims, and wise Sayings. . . . If I now and then insert a Joke or two, that seem to have little in them, my Apology is, that such may have their Use, since perhaps for their Sake light airy Minds peruse the rest, and so are struck by somewhat of more Weight and Moment.46

 
