The Resilient Earth: Science, Global Warming and the Fate of Humanity


by Doug L. Hoffman and Allen Simmons


  For those who demand that climate science be held to more rigorous standards, this is the ultimate betrayal of scientific integrity. As Wegman stated in testimony before the Energy and Commerce Committee: “I am baffled by the claim that the incorrect method doesn't matter because the answer is correct anyway. Method Wrong + Answer Correct = Bad Science.” For some climate scientists, the end seems to justify the means.

  The Second Pillar of Climate Science

  Aside from the uncertainties in current and historical data, there are many gaps in scientists' current data-collection capabilities. Many factors affecting climate change are poorly understood and not captured well by current proxy or data-gathering techniques. Here are comments regarding climate data from a number of experts in the field. These quotes are taken from the Geotimes web site.422

  “Some major gaps in our understanding of past and future climate are left by existing proxies. For example, cloud properties and atmospheric composition are poorly characterized by proxies, but that may change in the future. Recently, techniques targeted on understanding the role of sulfur in the climate system have begun to make exciting progress on these issues.” — Matt Huber, Danish Center for Earth System Science, Niels Bohr Institute in Copenhagen.

  This reinforces the often-voiced criticism that cloud cover is not being properly accounted for in GCM climate predictions. In particular, the mention of new work on understanding sulfur's role in cloud formation is central to the mechanism through which cosmic rays affect climate, presented in Chapter .

  “Measurements of water vapor and clouds and precipitation and temperature on time scales of hours and a spatial resolution of 5 kilometers in the horizontal and 1 kilometer in the vertical would be great, so would radiosonde measurements of wind throughout the tropics. Unfortunately, we don’t know how to get the water vapor data. Satellite orbits rarely meet the resolution requirements. As a result, any currently envisaged global observing system would be inadequate for at least some essential components.” — Dick Lindzen, Massachusetts Institute of Technology.

  Again, the subject of clouds and water vapor is mentioned, along with the spatial resolution available for such measurements. Interestingly, the 5 kilometer resolution mentioned here echoes the statement regarding the importance of sub-5 kilometer resolution in accurately modeling hurricanes presented in Chapter 14. The question of resolution is critical to accurately modeling cloud effects in GCMs. Also note that water vapor, a greenhouse gas whose contribution dwarfs that of CO2, is inadequately monitored and, according to Lindzen, there is no prospect of achieving adequate monitoring in the foreseeable future. Another related factor, also on the IPCC's list of poorly understood phenomena, is the effect of aerosols, tiny particles suspended in the atmosphere.

  “As for climate forcings, the big uncertainties are with aerosols, both their direct forcing and their indirect effects via clouds. These depend sensitively upon aerosol characteristics, particularly the composition and size distribution. We must have detailed monitoring of aerosol microphysics including composition specific information. It is not enough to measure the optical depth or back-scattering coefficient of the aerosols. I strongly advocate making global satellite measurements that use the full information potential in observable radiance.” — James Hansen, NASA Goddard Institute for Space Studies.

  Here, Hansen reinforces the IPCC's admission of ignorance regarding aerosols and their impact. Furthermore, he points out how incomplete modern satellite coverage actually is. Finally, Jeff Kiehl points out what this paucity of data means for climate modeling.

  “There is very little data on oceans, things like a long time series of global ocean temperature. There are limited data sets that people are using, but that’s an area that we certainly need a lot more data on. There are some observations on surface energy exchanges between the land surface and the atmosphere, but again that is just at certain points; it is not a global data set, which is what we would need for doing the best job evaluating the models. The models produce a lot more information than we have observations for, and this is not a satisfactory situation. You’d like to have more observations than things you are modeling, but unfortunately, it is just not the case for global modeling.” — Jeff Kiehl, National Center for Atmospheric Research.

  As we have seen, all climate data are inexact; modern measurements and historical proxies alike come with margins of error. Historical records from any period but the recent past are inherently incomplete and unreliable, and even more recent data are subject to multiple interpretations. This is not to say that such data are bad or erroneous, just that uncertainty must always be taken into account when analyzing them. The data on which predictions are based must always come under the closest scrutiny.

  All experimental data contain some uncertainty, but the uncertainties in climate data are often larger than the predictions published by the IPCC. This is due to the extensive use of historical proxy data to try to predict future climate trends. IPCC experts have testified that “temperatures inferred using such methods have greater uncertainty than direct measurements.”423 If the first pillar of climate science, the theory, is incomplete, then the second pillar, the experimental data, must be called uncertain. Starting from this unsteady foundation, climate modelers have proceeded to construct imposing computational edifices—global climate models. These GCM computer programs are the basis of the third and final pillar, computation.

  The Limits of Climate Science

  “There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.”

  — Mark Twain

  There are three fundamental problems that limit the effectiveness of climate science. These are lack of understanding of Earth's climate system, inherent uncertainty in baseline data, and reliance on conceptual computer models for prediction of future climate. These three problem areas correspond to the three pillars of climate science: theory, experimentation, and computation. The previous two chapters addressed the incompleteness of climate theory and the inherent uncertainty in climate data. In this chapter, we will address the third and most misunderstood pillar, computation. In the context of climate science, computation is primarily represented by climate models—GCMs, complex computer programs that have been under continual development for at least a quarter of a century.

  To the layperson, and even many scientists, the pitfalls and problems with modeling of any kind are unknown and unappreciated. To computer scientists who make a study of such things, modeling is fraught with danger for the uninitiated and the unwary. Nevertheless, computer modeling presents a seductive trap for many otherwise skeptical scientists. The appeal of running experiments in a clean, mathematically antiseptic world from the comfort of an office can be overwhelming.

  Much of the IPCC's case for rapid and accelerating temperature rise in the future is based on the predictions of computer models. Most people unquestioningly accept that these results are accurate—after all, they sound very scientific and run on big supercomputers. What is not discussed in the public announcements, but is well known by scientists who do computer modeling, is that models are not very accurate, particularly when asked to make long-term projections based on limited, short-term data. To understand why this is so, we need to look at how computer modeling is done.

  Why Models Aren't Reliable

  A model is a simplified stand-in for some real system: a computer network, a protein molecule, the atmosphere, or Earth's entire climate. A modeler tries to capture the most important aspects of the system being modeled. For a computer network, this could be the amount of network traffic, the speed of the communication links, and the way the various computers are connected to each other. For a protein molecule, it could be the types and number of different atoms, the bonds between them, the kinetic energy of the atoms, etc.

  During the 1990s, Hoffman was a research professor of Computer Science at the University of North Carolina at Chapel Hill, working on biosequence and protein structure analysis funded by the Human Genome Project. A number of his colleagues were working on the related problem of molecular dynamics (MD) simulation, modeling virtual protein molecules with computers. The goal of MD is to calculate the time-dependent behavior of a molecular system. To run these models, some of the fastest supercomputers of the time were used. Even so, running the model programs could take weeks and yield only a few milliseconds of simulated time.

  Even relatively small protein molecules consist of several hundred atoms. Molecular dynamics simulation is a type of problem called an N-body problem. This is because every atom in the molecule affects all the other atoms. Computer scientists have a concept called computational complexity, a way of formally stating how hard it is to solve a computational problem. More specifically, how the amount of time required to perform the computation changes with increasing problem size. N-body problems have a computational complexity of O(N²), pronounced “big O, N squared.” In practical terms, this means if you double the number of atoms in your molecule, the time needed to run the simulation will be two squared (2²) or four times the previous value. Four times the number of atoms would require 16 times (4²) the computer time.
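
  To make the N-squared scaling concrete, here is a minimal Python sketch. It is not taken from any real MD package; it simply counts the atom pairs that a simulation would have to evaluate at every time step, using a placeholder atom list.

```python
import itertools

def pairwise_interactions(atoms):
    """Count one interaction per unique pair of atoms.

    With N atoms there are N*(N-1)/2 pairs, so the work grows roughly as
    N squared: doubling N roughly quadruples the number of pairwise force
    evaluations a simulation must perform at every time step.
    """
    count = 0
    for _pair in itertools.combinations(range(len(atoms)), 2):
        # A real MD code would evaluate a force law for this pair of atoms;
        # the sketch only counts the pair to show the scaling.
        count += 1
    return count

for n in (100, 200, 400):
    print(n, pairwise_interactions([None] * n))
# 100 atoms -> 4,950 pairs; 200 -> 19,900 (about 4x); 400 -> 79,800 (about 16x)
```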

  Because the number of atoms in a specific protein cannot be reduced, computer scientists look for other ways to make their models run faster. One parameter that can be adjusted is called the time step. All computer models execute in a number of discrete steps. In an MD program, starting with the relative positions of all the atoms, all of the forces acting on each atom are calculated. The effects of these forces acting on the atoms over a short period of time, the time step, are calculated. This results in new positions for each atom in the protein molecule. The process then repeats for the next time step. So, if you double the length of the time step, you can cut the required calculations in half.
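
  The loop just described can be sketched in a few lines of Python. This is a schematic of a generic fixed-step integrator, not the researchers' actual MD code; the one-dimensional coordinates, unit mass, and force function are simplifying assumptions.

```python
def simulate(positions, velocities, compute_forces, total_time, dt, mass=1.0):
    """Advance a toy one-dimensional particle system with a fixed time step.

    Each pass through the loop is one time step: forces are computed from the
    current positions, then velocities and positions are nudged forward by dt.
    Doubling dt halves the number of iterations needed to cover total_time,
    which is exactly the trade-off described in the text.
    """
    steps = int(round(total_time / dt))
    for _ in range(steps):
        forces = compute_forces(positions)
        velocities = [v + f / mass * dt for v, f in zip(velocities, forces)]
        positions = [x + v * dt for x, v in zip(positions, velocities)]
    return positions

def no_force(positions):
    """A trivial force function: the particles feel nothing."""
    return [0.0] * len(positions)

# A force-free particle moving at 1 unit per second ends up (very nearly) at
# position 1.0 after one second, whether we take 10 coarse steps or 1,000 fine ones.
print(simulate([0.0], [1.0], no_force, total_time=1.0, dt=0.1))
print(simulate([0.0], [1.0], no_force, total_time=1.0, dt=0.001))
```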

  Of course, nothing is free, and the cost of lengthening the time step is a loss of accuracy. This introduces error into the calculations. What is worse, computer programs also suffer from error propagation: any calculation that uses values containing errors will produce answers that contain errors as well. Models that simulate systems over time, like MD and GCM programs, use the output of one time step as input to the next time step's calculations. The result can be ever-increasing error that eventually renders the model output totally useless.
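
  A toy calculation, unrelated to any real climate or MD model, shows how a small error in a stored value compounds when each step's output becomes the next step's input:

```python
exact_factor = 1.05      # the "true" per-step growth factor
stored_factor = 1.0501   # the same factor carrying a tiny error of about 0.01%

exact = approx = 1.0
for step in range(1, 501):
    exact *= exact_factor
    approx *= stored_factor   # the already-wrong output feeds the next step
    if step % 100 == 0:
        relative_error = abs(approx - exact) / exact
        print(f"step {step}: relative error {relative_error:.2%}")
# The 0.01% error in a single input grows to roughly 1% after 100 steps and
# to almost 5% after 500 -- error propagation in miniature.
```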

  Hoffman witnessed an example of this at a conference held at the North Carolina Supercomputing Center in Research Triangle Park. Several of his colleagues presented the results of their efforts to simulate a simple protein molecule surrounded by water at body temperature, showing the model output as a cartoon movie of the molecule pulsing and vibrating, interacting with the surrounding water molecules. Their first example did something spectacular—the protein molecule exploded into several pieces.

  This was obviously not correct, since the protein in question was known to be stable at body temperature. The second example was much better; at least the protein didn't tear itself apart. What was the difference between the two simulations? The time step used. As it turned out, if a time step greater than two femtoseconds was used, propagated error built up until the molecule self-destructed. A femtosecond is an extremely short period of time. For a computer with a clock rate of 1 GHz, every tick of its clock takes one nanosecond, or one billionth of a second. A femtosecond is one millionth of a nanosecond. If a femtosecond took one second, a second would last about 32 million years.
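
  The same failure mode can be reproduced with a far simpler system than a solvated protein. The sketch below is an illustration under assumed conditions, not the simulation described above: a single unit mass on a unit spring is integrated with the explicit Euler method, and the numerical energy, which should stay fixed at 0.5, drifts slightly for small time steps and blows up for large ones.

```python
def spring_energy_after(dt, steps=1000, k=1.0, m=1.0):
    """Integrate a unit mass on a unit spring with the explicit Euler method.

    The true motion conserves energy (0.5 for these starting conditions), but
    this integration scheme adds a little energy on every step. For small dt
    the drift is negligible; for large dt the simulated system tears itself
    apart, just as the virtual protein did when its time step was too long.
    """
    x, v = 1.0, 0.0                      # start stretched, at rest
    for _ in range(steps):
        a = -k / m * x                   # acceleration from the spring force
        x, v = x + v * dt, v + a * dt    # update both from the old values
    return 0.5 * m * v**2 + 0.5 * k * x**2

for dt in (0.001, 0.01, 0.1, 0.5):
    print(f"dt = {dt:<5}  energy after 1000 steps: {spring_energy_after(dt):.3g}")
```

  Exactly where the blow-up sets in depends on the system and on the integration scheme, which is why the MD researchers had to find their two-femtosecond limit by trial and error.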

  This story illustrates some of the problems inherent in computer modeling, regardless of the physical system being studied. Even very small changes to a model's parameters can cause the output to change from realistic to catastrophically wrong. Identifying the source of the problem in a model is often a matter of trial and error—this was the case with the MD simulation. The MD researchers were lucky: their model was obviously giving the wrong answers because the real molecule didn't act like the simulated one. They had volumes of reliable baseline data to work from—this is often not the case for other models. There is an old truism in computer science: “garbage in, garbage out.” The trick is being able to recognize garbage.

  Modeling the atmosphere is even more complex, requiring knowledge of incoming solar radiation, the movement of air currents over the land and sea, heat convection, the amount of water vapor, the effects of clouds, and on and on. People have been trying to model Earth's atmosphere for decades, primarily to predict the weather. The weather forecasts you hear on your evening news are all based on computer models. How accurate are these models? In the near term, a few days from today, local weather forecasts are about 60% accurate when predicting high temperatures.424

  Hurricane models suffer from the same problems as GCM programs. Because hurricanes often intensify or lose strength quickly, models have trouble accurately predicting their strength. According to Hugh Willoughby, an atmospheric scientist at Florida International University, if a model's data points are not closer than 5 km apart, the simulated storms end up “larger, weaker cartoons of their counterparts in nature.”425

  Storm track prediction is an example of quantitative modeling, where the expected results of a model are hard numbers. In the case of hurricanes, a storm's track and changes in strength over time are the desired results. Climate change modeling is usually an example of qualitative modeling. These types of models result in general trends and overall effects of parameter modification. They are used to provide insight into processes where scientists' intuition fails. A qualitative model can tell us that adding more CO2 to the atmosphere will cause warming. But qualitative models should not be used to make concrete predictions of future conditions, such as the global average temperature for the next 100 years. Or, as Richard W. Hamming426 put it, “The purpose of computing is insight, not numbers.”

  Climate modelers will protest that their models are not the same as short-term weather forecasting models or hurricane path models. They are correct; longer-term models are more complicated. There are a number of services, both governmental and commercial, that make longer-term predictions, where long-term means the next season or the next year. These claim to be 80-85% accurate, but they usually concentrate on trend predictions: how many days will be dry or rainy, how many days will have above-average temperatures. An 85% accuracy sounds pretty good—but this is only for a year or so into the future. What do the professional climate predictors say about looking farther into the future? According to the Weather 2000 web site:

  “Trends can be misleading. Examining 30, 40 or even 50 years worth of historical data might only encompass 10 - 20% of the full potential of climate variability. Since quality data only goes back 50 years at best, standard deviations and extreme records based on that data can be gross underestimations, and trends can overestimate the true climate state.”427

  The same source goes on to say, “It is very dangerous to draw conclusions based on the most recent 5 or 10 years worth of historical data.” Remember that this is with regard to one-year predictions; the IPCC models are trying to predict 100 years or more into the future.

  Sources of Modeling Error

  There are numerous sources of error that can impact the accuracy of a model. Peter Haff, Professor of Geology and Civil and Environmental Engineering at Duke University, lists seven sources:

  Model imperfection.

  Omission of important processes.

  Lack of knowledge of initial conditions.

  Sensitivity to initial conditions.

  Unresolved heterogeneity.

  Occurrence of external forcing.

  Inapplicability of the factor of safety concept.

  Several of these points seem obvious; if the model is flawed, if important parts have been left out, or if the initial conditions are incorrect, no model can provide trustworthy answers. Pilkey states, “it is an axiom of mathematical modeling of natural processes that only a fraction of the various events, large and small, that constitute the process are actually expressed in the equations.”428 The other points may not be so self-evident.

  Illustration 129: Lorenz's experiment: the difference between the starting points of the two curves is 0.000127. Source: Ian Stewart.

  Sensitivity to initial conditions is a result of non-linear responses in the system being modeled. In engineered systems, these types of unexpected and unplanned-for responses are called emergent behavior. In natural systems, this type of behavior is often called chaotic. The discovery of chaotic behavior in 1960, by meteorologist Edward Lorenz while developing a weather model, led to the establishment of a new scientific discipline—chaos theory. Lorenz had constructed a computer model, with a set of twelve equations, to study weather prediction. One day, he attempted to restart a prediction run from an intermediate point using values from a printout. To his surprise, the second curve deviated wildly from the initial run. The difference was eventually traced to the number of digits in the parameters used to restart the simulation—by dropping the last few digits from the parameter values, Lorenz had inadvertently discovered the non-linearity lurking in his model.429 Since Lorenz's experiment, chaotic behavior has been found in many natural systems.
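
  Lorenz's weather model used twelve equations; the behavior is usually demonstrated today with the simpler three-equation system he published in 1963. The Python sketch below is such a demonstration, not a reconstruction of his original program: two runs start with x-values that differ by 0.000127, the same size of perturbation noted in Illustration 129, and within a few thousand steps the trajectories no longer resemble one another.

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the three-variable Lorenz system by one forward-Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

# Two starting points differing by 0.000127 in x alone -- the kind of rounding
# difference introduced by retyping values from a printout.
run_a = (1.0, 1.0, 1.0)
run_b = (1.000127, 1.0, 1.0)

for step in range(1, 3001):
    run_a = lorenz_step(run_a)
    run_b = lorenz_step(run_b)
    if step % 1000 == 0:
        print(f"step {step}: x-values differ by {abs(run_a[0] - run_b[0]):.4f}")
# The difference, initially in the fourth decimal place, grows until the two
# runs are effectively unrelated -- sensitivity to initial conditions.
```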

  Heterogeneity, the quality of being diverse or made up of many different components, tends to increase with the size of the system being modeled. A model that works well for some geographic regions may fail when simulating others. Desert environments are different from woodlands, open ocean different from coastal areas, mountains different from grassy plains. Heterogeneity tends to make large systems hard to model, and Earth's climate system is very large.

 
