The Future of Everything: The Science of Prediction


by David Orrell


  It sounds like something out of a Gabriel García Márquez novel—a magical flowering of the desert, a purely local event. It was only relatively recently that the phenomenon was linked with other, often disruptive, happenings around the world. In fact, El Niño is the ocean part of a global weather pattern that also includes the Southern Oscillation, a see-saw fluctuation in atmospheric pressure between South America and India/Australia that was first reported in 1923 by Sir Gilbert Walker when he was director general of the observatory in India. The coupled ocean/atmosphere phenomenon, known as ENSO, causes everything from droughts in Amazon rainforests to monsoons in India.

  ENSO provides an example of how seemingly unrelated events can be part of a larger pattern. Such teleconnections, as meteorologists have termed them, blur the line between cause and effect, and mean that the ocean and the atmosphere have to be treated as a single system.

  In 1920, the Met. Office became part of the Air Ministry, so Richardson would have become a military employee. He wasn’t interested in modelling the flow of poison gas or being involved with the military in any way, so he resigned. He took a teaching position and signed up as an external student at University College London to study the new science of psychology. His research from then on focused not on meteorology but on something he considered far more important: the dynamics of war. Richardson believed that science was subordinate to morals, and he was motivated not just by intellectual satisfaction but by his pacifism and what he had seen during his ambulance-driving days. He modelled the buildup of conflicts the same way he had once modelled the buildup of a storm, by using differential equations.12 These models were almost a kind of fable. They showed how, like a hurricane developing from a small disturbance, an arms race between two countries can rapidly spiral out of control, as had happened in the First World War and would happen again in 1939.

  THE GCM

  Richardson’s dream of numerical weather prediction was eventually realized in the 1950s, albeit with newly invented electronic computers, not human ones. He would have been less pleased by the fact that most of the new technology was developed as a result of military efforts during and after the Second World War. The meteorological observation network had been expanded to include thousands of balloons radioing back information about temperature, humidity, wind speed, and pressure. High-speed computers were developed to crack enemy codes and model the explosions of atomic bombs. The brilliant Princeton mathematician John von Neumann, who cut his teeth on quantum mechanics and worked during the war on the non-linear dynamics of thermonuclear explosions, realized that the fluid dynamics of atom bombs could be applied to model the atmospheric flow.

  Most meteorologists were skeptical. The punch-card computers of the time had nowhere near the speed of a modern desktop machine, and it seemed unlikely that they could perform the complex calculations required. Henry G. Houghton, the president of the American Meteorological Society, said in 1946, “There appears to be no immediate prospect of an objective method of forecasting based entirely on sound physical principles.”13 Forecasters, it was believed, would always have to rely on what Carl-Gustaf Rossby, Houghton’s predecessor, had called “the horrible subjectivity.”14

  But the skeptics didn’t count on two things: huge increases in computing speed, and the simplification of the equations by mathematicians such as von Neumann’s new hire, Jule Charney. Charney’s quasi-geostrophic approximation replaced Bjerknes’s seven equations with a single one. This still had to be evaluated at each three-dimensional grid point, but it made the atmospheric problem computationally tractable, especially since it filtered out the high-frequency oscillations that tended to make models unstable.

  In 1950, numerical weather prediction was demonstrated on a computer at the U.S. Army’s Ballistic Research Laboratory in Maryland. The regional model divided North America into 270 two-dimensional cells. Running on a punch-card machine known as ENIAC—for Electronic Numerical Integrator and Computer— the twenty-four-hour forecast took about twenty-four hours to complete, but the result at least resembled the actual weather. The first big success came in November 1955, when a weather model beat human forecasters in accurately predicting a storm in Washington, D.C.

  Inspired by this, Joseph Smagorinsky and Syukuro Manabe at the U.S. Weather Bureau set about building a three-dimensional model of the global atmosphere. Based on Bjerknes’s primitive equations, it included such details as how the atmosphere exchanges water and heat with the planet’s surface, and how the hydrological cycle (in which rain falls to the ground and re-evaporates) works.15 The GCM, which stood for general circulation model (and later for global climate model, or global coupled model—but not for Greek Circle Model), was born.

  Other teams soon set to work on their own GCMs. Finally, it seemed that Richardson’s dream, and even the deterministic vision of Laplace, was within reach. If we could accurately measure the current state of the atmosphere—Bjerknes’s initial condition—and apply the physical laws—the GCM—then we could predict the future weather as surely as we could the trajectory of the moon around the earth. And we could also control it. Hurricanes could be tamed or steered away, clouds seeded to produce rain, floods avoided.

  The military potential of forecasting was also obvious. The outcome of battles has often depended on the weather. In 1588, the Spanish Armada lost more ships to terrible September storms than it did to the British navy. The Spanish should have heeded the astrologers, who had been predicting bad things for that year: “Total catastrophe may not occur, but the storms will cause havoc by land and sea and the whole world will suffer astonishing upheavals, followed by widespread sorrow.”16 The Allied invasion on D-Day was delayed for a day when forecasters accurately predicted a storm that would have disrupted the landing. Like Apollo’s arrow, forecasts have always been used as an instrument of war.

  Von Neumann even believed that the weather itself could be manipulated and turned into a weapon. To this end, the RAND Corporation, a defence think-tank in California, investigated methods to alter local climates.17 The weathermen would be rainmakers. They might not have the bomb, but at least they could inflict a drought on their enemies or depress the hell out of them with extended periods of cold.

  In a United States intent on winning the real Cold War, resources continued to pour into meteorology and computer science. With the establishment of academic programs focused on the study of atmospheric dynamics, such as the meteorology department at the Massachusetts Institute of Technology (MIT), meteorology realized its aim of being viewed as a serious, objective, “hard” science. While GCMs were going up in the world, though, they still couldn’t predict the weather over longer time scales. As a panel of the U.S. National Academy of Sciences dryly reported in 1965, the best models could simulate the atmosphere with features “that have some resemblance to observation.” They concluded that more computer power was necessary.18

  However, there was another fly in the ointment—or rather, a butterfly. Just as Poincaré had done in the previous century, when he realized that the three-body problem was sensitive to initial conditions, the MIT meteorologist Ed Lorenz discovered chaos. The difference was that unlike Poincaré, Lorenz could visualize the solutions to his equations on his computer screen—and they looked like a butterfly.

  STRANGE ATTRACTION

  In the early 1960s, Lorenz had been working on a highly simplified model of atmospheric flow, involving only a handful of equations. On one occasion, he interrupted a simulation before it had finished and wanted to restart it. The computer program was working to six-digit accuracy but outputting only three digits, so he restarted from the three-digit numbers and went for coffee while the machine, which was built from vacuum tubes, cranked away. When he returned, he found that the new “weather” output, or trajectory, was completely different from the previous version. After checking for mistakes, he realized that the difference was the result of the round-off in the initial condition. This sensitivity to initial condition was highly disconcerting. It was as if Newton, on seeing the apple fall from the tree, had repeated the experiment by dropping the fruit from almost the same position, and found that it shot off in a different direction.

  FIGURE 4.2. Plot of the variable x versus time t for the Lorenz system. The solid line shows a trajectory (i.e., a simulation of the system) initiated at a point on the attractor; the dashed line shows a trajectory initiated at the same point but rounded to three digits. The small error in initial condition results in a trajectory that clearly separates from the first after about ten time units.

  He later produced similar behaviour using an even simpler model, which simulated convective flow. When a container full of air is heated from underneath, the warm air at the bottom rises to the top, where it cools, causing it to sink down, and so on in a loop.

  Similar convective flow occurs in the atmosphere, when air warms at the equator, for example. Lorenz’s model, though, was highly abstract and was not intended to accurately resemble the real physical dynamics (see Appendix II for details). In figure 4.2, the solid line is a plot of the variable x as a function of time. It oscillates for a while at a high level (0 to 2 on the horizontal time scale), then switches to a low level (2 to 6), and continues back and forth at apparently random intervals. If a second trajectory (the dashed line) is initiated at the same initial condition but rounded off to three decimal places, then the small error grows until the two trajectories become clearly distinct after about ten time units, just as Lorenz found.
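
  For readers who want to recreate this behaviour, the short sketch below integrates the Lorenz equations and repeats the rounding experiment. It uses the standard textbook parameter values and an assumed spin-up procedure rather than the exact settings of Appendix II, so it is an illustration of the idea, not a reproduction of the book’s figure.

```python
# A minimal sketch of Lorenz's rounding experiment (assumed standard parameters:
# sigma = 10, rho = 28, beta = 8/3; the book's own settings are in Appendix II).
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Integrate from an arbitrary starting point first, so the state settles onto the attractor.
spinup = solve_ivp(lorenz, (0, 30), [1.0, 1.0, 1.0], rtol=1e-8, atol=1e-8)
start = spinup.y[:, -1]
start_rounded = np.round(start, 3)   # the restart from rounded, "three-digit" numbers

t = np.linspace(0, 15, 1500)
original = solve_ivp(lorenz, (0, 15), start, t_eval=t, rtol=1e-8, atol=1e-8)
restarted = solve_ivp(lorenz, (0, 15), start_rounded, t_eval=t, rtol=1e-8, atol=1e-8)

# The two x(t) curves track each other at first, then visibly separate after
# roughly ten time units, as in figure 4.2.
print(np.abs(original.y[0] - restarted.y[0])[::150])
```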

  The order in this chaos becomes more clear when two variables are plotted against each other, as in figure 4.3. The trajectory then forms a shape with two lobes that resemble the wings of a butterfly. In dynamical systems theory, this is referred to as an attractor, since no matter what initial condition is used, the trajectory will end up on it. The system is therefore bound and limited by the attractor. The butterfly is not free but pinned.

  FIGURE 4.3. A plot of the variable z versus x reveals the butterfly-shaped attractor of the Lorenz system.

  There are three basic types of attractor. In a point attractor, trajectories are drawn to a single fixed point. An example is a pendulum. There, the attractor is the state where the pendulum points straight down. If it is perturbed slightly, it will swing back and forth, with the amplitude of each swing decreasing because of air resistance and friction until it comes to a halt. In a periodic attractor, trajectories are drawn into a repeating cycle, like the lightly forced pendulum in an old-fashioned clock. The third class, to which the Lorenz system belongs, is the so-called strange attractor, which is characteristic of chaotic systems and has a more complex appearance. (The word “strange” does not imply that these attractors are unusual, only that they were discovered after the other two types.)
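
  To make the idea of a point attractor concrete, here is a minimal numerical sketch of the damped pendulum; the damping coefficient and starting angle are illustrative assumptions, not values from the text.

```python
# A damped pendulum spiralling in to its point attractor: hanging straight down,
# at rest. Parameter values are chosen only for illustration.
import numpy as np
from scipy.integrate import solve_ivp

def damped_pendulum(t, state, g=9.81, length=1.0, damping=0.3):
    theta, omega = state   # angle from vertical and angular velocity
    return [omega, -(g / length) * np.sin(theta) - damping * omega]

sol = solve_ivp(damped_pendulum, (0, 30), [0.5, 0.0], t_eval=np.linspace(0, 30, 300))
print(sol.y[:, -1])   # both numbers approach zero: the fixed-point attractor
```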

  In keeping with our baking analogies, we can look at the Lorenz system as a kind of Mixmaster with twin beaters, one for each lobe. A particle in the mix circulates around one of the two lobes, and occasionally switches to the other. The mix is being stretched by the beaters, so two particles that start off close to each other are quickly pulled apart (as one would hope, since it is a mixer). The spread of the mix is limited by the bowl. The effect is very much like the shift map of Chapter 3: the stretching of the dough means that the distance between particles initially increases by a factor of two at each step, but it’s limited because the dough is constantly folded back onto itself.19 The main difference is that the Lorenz system is continuous in time, while the shift map is specified only at discrete times.
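
  The doubling can be checked directly on the shift map itself. In the sketch below, two points that start a millionth apart (the starting values are arbitrary) are iterated together; their separation doubles at each step until the folding of the interval limits it.

```python
# The shift map from Chapter 3: stretch by a factor of two, then fold back into [0, 1).
def shift_map(x):
    return (2.0 * x) % 1.0

a, b = 0.3141592, 0.3141602   # two particles starting one millionth apart
for step in range(25):
    print(step, abs(a - b))   # the separation doubles each step, then saturates below 1
    a, b = shift_map(a), shift_map(b)
```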

  Like the shift map, the Lorenz system also shows exponential growth of errors. This can be seen by plotting the distance between one trajectory and a second from a perturbed initial condition. We first need a measure of distance in three dimensions. In a two-dimensional Cartesian grid, the distance between a point with the co-ordinates x, y and a second point xp, yp can be obtained by using Pythagoras’s theorem (as shown in figure 4.4).

  FIGURE 4.4. Calculating distance in a Cartesian grid. The horizontal distance is xp − x, while the vertical distance is yp − y. From Pythagoras’s theorem, the total distance d is therefore given by d = √((xp − x)² + (yp − y)²).

  Similarly, the distance, or error, between a point on one three-dimensional trajectory and the corresponding point on the perturbed trajectory can be found by squaring the error in each co-ordinate, summing the squares, and taking the square root of the result. The evolution of the error will depend on the initial conditions of the trajectories; however, we can get the expected error growth by performing a large number of experiments and taking the average. Actually, it is more common in meteorology to use the root mean square (RMS) error, the square root of the average of the squared errors, rather than the simple average, though the two have similar properties.20 As seen in figure 4.5 (on page 142), the RMS error increases in a quasi-exponential fashion over the first time unit. As in the shift map (see also figure A.1 in Appendix I on page 352), the error growth eventually saturates; all trajectories must stay on the attractor, so the distance between them is limited.
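
  The sketch below carries out this calculation for the Lorenz system in the spirit of figure 4.5: an ensemble of paired trajectories is run, each pair separated by a small initial perturbation, and the RMS separation is tracked in time. The ensemble size, perturbation, and solver settings are assumptions made for illustration, not the book’s exact setup.

```python
# RMS error growth for the Lorenz system, averaged over an ensemble of perturbed runs.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

rng = np.random.default_rng(0)
t_eval = np.linspace(0, 5, 200)
squared_errors = []

for _ in range(50):
    # Pick a starting point on the attractor by integrating past the initial transient.
    start = solve_ivp(lorenz, (0, 30), rng.normal(size=3) + [0.0, 0.0, 25.0],
                      rtol=1e-8, atol=1e-8).y[:, -1]
    control = solve_ivp(lorenz, (0, 5), start, t_eval=t_eval, rtol=1e-8, atol=1e-8)
    perturbed = solve_ivp(lorenz, (0, 5), start + 0.001 * rng.normal(size=3),
                          t_eval=t_eval, rtol=1e-8, atol=1e-8)
    # Squared three-dimensional distance between the paired trajectories at each time.
    squared_errors.append(np.sum((control.y - perturbed.y) ** 2, axis=0))

rms_error = np.sqrt(np.mean(squared_errors, axis=0))
print(rms_error[::20])   # grows roughly exponentially at first, then saturates
```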

  The Lorenz system was not intended to be an accurate representation of convection—it was a truncation to three equations of a larger system, which itself was only an approximation and did not fully account for viscosity, which would tend to damp out small perturbations.21 However, it did show in principle that atmospheric flow-type systems could behave chaotically. Since the initial condition—the exact current state of the atmosphere—could never be perfectly known, it followed that we could never perfectly predict the future weather, even if the model was perfect. Errors grew exponentially, so it was only a matter of time before they became huge.

  FIGURE 4.5. Plot of RMS error as a function of time for the Lorenz system, for an initial perturbation of magnitude 0.001 in each variable.

  BLAME THE BUTTERFLY

  One would think that such a Malthusian statement would be poorly received, and indeed the initial reaction was muted. However, Jule Charney, who worked down the hall at MIT, realized the importance of sensitivity to initial condition: perhaps it could account for the poor performance of weather models. Charney therefore decided to determine the rate of error growth in real GCMs, and he asked meteorologists from ten countries to repeat Lorenz’s experiment by initiating trajectories from slightly different points and seeing how quickly they diverged.

  The fact that errors grow in an exponential fashion does not necessarily mean they grow quickly. The money in my checking account grows exponentially, but since the interest rate is less than a percent, I would have to wait about a hundred years for it to double, and rather longer to become a millionaire. A useful measure is the doubling time. If weather errors double in magnitude once every day, then after one week, they would have increased by a factor of 2⁷ = 128. On the other hand, if the doubling time was one week, they would increase by only a factor of two in that time. In fact, if errors grow to the same final size exponentially, rather than linearly, over the first week, they are actually smaller at all times before the week is up.

  It turned out that the average doubling time from Charney’s experiments was about five days.22 This wasn’t particularly fast, but it did effectively put a lid of about seventeen days on accurate forecasts, since any errors would by then blow up by about a factor of ten. Also, the growth of initial errors depended to an extent on the type of perturbation that was made to the initial condition, and it was hard to know what would be realistic. As we’ll see, later reports gave much faster doubling times, down to about one day. As models increased in resolution, dividing the atmosphere into a finer and finer grid, employing as many as 10 million variables, it seemed their behaviour grew more chaotic.
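
  The arithmetic behind these numbers is easy to check, as in the sketch below: the growth factor over any period is simply two raised to the number of doubling times that the period contains.

```python
# Error growth factor after a given number of days, for a given doubling time.
def growth_factor(days, doubling_time_days):
    return 2.0 ** (days / doubling_time_days)

print(growth_factor(7, 1))    # one-day doubling: a factor of 128 in a week
print(growth_factor(7, 7))    # one-week doubling: only a factor of 2 in a week
print(growth_factor(17, 5))   # five-day doubling: roughly a factor of 10 after 17 days
```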

  But if anything in science was growing exponentially, it was interest in chaos theory. Chaos (which is a fascinating area of mathematics) was running amok not just in meteorology, but in many other branches of applied science, such as biology and economics. Variations in populations of species or the beat of a heart or the stock market were being modelled as chaotic systems. Interest was fuelled by advances in computing, which meant that for the first time, scientists could rapidly solve differential equations and visualize the solutions on desktop machines. Many non-linear systems, such as the Mandelbrot set, turned out to have beautiful fractal properties with amazingly rich detail, no matter how closely one zoomed in. The phenomenon of chaos got even more attention in 1972, when Lorenz gave a talk at the American Association for the Advancement of Science and introduced the catchy term “butterfly effect” to describe the sensitivity to initial condition. By the early 1990s, the eggs this butterfly had laid in scientists’ minds were fully hatched: “It is the errors that arise due to instabilities in the atmosphere (even in case of small initial errors) that dominate forecast errors,” concluded one paper.23

  While it might have been depressing to some that chaos had put perfect weather forecasting out of reach, the story it told—the subtext of the equations—was not altogether negative. It helped answer the question of why forecasts always went wrong, despite the massive amounts of time, resources, and brainpower that were expended on them.24 The problem was not with the deterministic approach, the physical laws the GCMs were based on, or the quality of the science, but instead was a natural consequence of the equations themselves. Furthermore, while the butterfly effect meant that exact predictions of the future weather were impossible, this had no effect whatsoever on calculations of the long-term climate. Just as any trajectory in the Lorenz system will eventually settle onto the attractor, so calculations of a climate model’s “attractor” are not dependent on the initial condition.

 
