
Modeling of Atmospheric Chemistry


by Guy P Brasseur


  In his 1846 book Kosmos, German scientist Alexander von Humboldt (1769–1859, see Figure 1.1) states that the structure of the universe can be reduced to a problem of mechanics, and reinforces the view presented in 1825 by Pierre-Simon Laplace (1749–1827, see Figure 1.1). In the introduction of his Essai Philosophique sur les Probabilités (Philosophical Essay on Probabilities), Laplace explains that the present state of the Universe should be viewed as the consequence of its past state and the cause of the state that will follow. Once the state of a system is known and the dynamical laws affecting this system are established, all past and future states of the system can be rigorously determined. This concept, which applies to many aspects of the natural sciences, is extremely powerful because it gives humanity the tools to monitor, understand, and predict the evolution of the Universe.

  Figure 1.1 Prussian naturalist and explorer Alexander von Humboldt (a) and French mathematician and astronomer Pierre-Simon, Marquis de Laplace (b).

  Although von Humboldt does not refer explicitly to the concept of model, he attempts to describe the functioning of the world by isolating different causes, combining them in known ways, and asking whether they reinforce or neutralize each other. He states that, “by suppressing details that distract, and by considering only large masses, one rationalizes what cannot be understood through our senses.” This effectively defines models as idealizations of complex systems designed to achieve understanding. Models isolate the system from its environment, simplify the relationships between variables, and make assumptions to neglect certain internal variables and external influences (Walliser, 2002). They are not fully objective tools because they emphasize the essential or focal aspects of a system as conceived by their authors. They are not universal because they include assumptions and simplifications that may be acceptable for some specific applications but not others. Indeed, the success of a model is largely the product of the skills and imagination of the authors.

  During the twentieth century, models started to become central tools for addressing scientific questions and predicting the evolution of phenomena such as economic cycles, population growth, and climate change. They are extensively used today in many disciplines and for many practical applications of societal benefit, weather forecasting being a classic example. As computing power increases and knowledge grows, models are becoming increasingly elaborate and can unify different elements of a complex system to describe their interactions. In the case of Earth science, this is symbolized by the vision of a “virtual Earth” model to describe the evolution of the planet, accounting for the interactions among the atmosphere, ocean, land, biosphere, cryosphere, and lithosphere, and coupling this natural system to human influences. Humans in this “virtual Earth” would not be regarded as external factors but as actors through whom environmental feedbacks operate.

  1.3 Mathematical Models

  Mathematical models strip the complexity of a system by identifying the essential driving variables and describing the evolution of these variables with equations based on physical laws or empirical knowledge. They provide a quantitative statement of our knowledge of the system that can be compared to observations. Models of natural systems are often expressed as mathematical applications of the known laws that govern these systems. As stated by Gershenfeld (1999), mathematical models can be rather general or more specific, they can be guided by first principles (physical laws) or by empirical information, they can be analytic or numerical, deterministic or stochastic, continuous or discrete, quantitative or qualitative. Choosing the best model for a particular problem is part of a modeler’s skill.

  Digital computers in the 1950s ushered in the modern era for mathematical models by enabling rapid numerical computation. Computing power has since been doubling steadily every two years (“Moore’s law”) and the scope and complexity of models have grown in concert. This has required in turn a strong effort to continuously improve the physical underpinnings and input information for the models. Otherwise we have “garbage in, garbage out.” Sophisticated models enabled by high-performance computing can extract information from a system that is too complex to be fully understood or quantified by human examination. By combining a large amount of information, these models point to system behavior that may not have been anticipated from simple considerations. From this point of view, models generate knowledge. In several fields of science and technology, computer simulations have become a leading knowledge producer. In fact, this approach, which belongs to neither the theoretical nor the observational domain, is regarded as a new form of scientific practice, a “third way” in scientific methodology complementing theoretical reasoning and experimental methods (Kaufmann and Smarr, 1993).

  For a model to be useful it must show some success at reproducing past observations and predicting future observations. By definition, a model will always have some error that reflects the assumptions and approximations involved in its development. The question is not whether a model has error, but whether the error is small enough for the model to be useful. As the saying goes, “all models are wrong, but some are useful.” A crucial task is to quantify the error statistics of the model, which can be done through error propagation analyses and/or comparison with observations. The choice of observational data sets and statistics to compare to the model is an important part of the modeler’s skill, as is the interpretation of the resulting comparisons. Discrepancies with observations may be deemed acceptable, and used to compile model error statistics, but they may also point to important flaws in the founding assumptions or implementation of the model. The modeler must be able to recognize the latter as it holds the key to advancing knowledge. Some dose of humility is needed because the observations cannot sample all the possible realizations of a complex system. As a result, the error statistics of the model can never be characterized fully.
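
  As a concrete illustration of such comparisons, the short Python sketch below computes three common error statistics (mean bias, root-mean-square error, and correlation) for paired model values and observations. The function name and the numbers are purely illustrative and are not taken from this book.

```python
import numpy as np

def error_statistics(model, obs):
    """Compare model output to paired observations (illustrative sketch).

    model, obs : 1-D arrays of collocated values (hypothetical data).
    Returns mean bias, root-mean-square error (RMSE), and Pearson correlation.
    """
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    bias = np.mean(model - obs)                  # systematic offset
    rmse = np.sqrt(np.mean((model - obs) ** 2))  # overall error magnitude
    corr = np.corrcoef(model, obs)[0, 1]         # agreement in variability
    return bias, rmse, corr

# Made-up numbers for illustration only
print(error_statistics([1.2, 0.9, 1.5, 2.0], [1.0, 1.1, 1.4, 1.8]))
```

  Statistics of this kind quantify model error, but, as noted above, they are only as representative as the observations used to compute them.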

  Many mathematical models are based on differential equations that describe the evolution in space and time of the variables of interest. These are often conservation equations, generalizing Newton’s second law that the acceleration of an object is proportional to the force applied to that object. Atmospheric chemistry models are based on the continuity equation that describes mass conservation for chemical species. Consider an ensemble of chemical species (i = 1, …, n) with mole fractions (commonly called mixing ratios) assembled in a vector C = (C1, …, Cn)^T. The continuity equation for species i in a fixed (Eulerian) frame of reference is given by

  ∂Ci/∂t = –v•∇Ci + Pi – Li   (1.1)

  Here, v is the 3-D wind vector, and Pi and Li are total production and loss rates for species i that may include contributions from chemical reactions (coupling to other species), emissions, and deposition. The local change in mixing ratio with time (∂Ci/∂t) is expressed as the sum of transport in minus transport out (flux divergence term v•∇Ci) and net local production (Pi – Li). Similar conservation equations are found in other branches of science. For example, replacing Ci with momentum yields the Navier–Stokes equation that forms the basis for models of fluid dynamics.
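
  To make equation (1.1) concrete, the Python sketch below integrates a one-dimensional version of the continuity equation for a single species, with a constant wind, a uniform production rate, and a first-order loss, using a simple explicit upwind finite-difference scheme on a periodic domain. The grid, time step, and rate values are illustrative assumptions, not a scheme advocated by this book.

```python
import numpy as np

# 1-D illustration of Eq. (1.1): dC/dt = -u dC/dx + P - k*C
# Explicit upwind differencing on a periodic domain; all values are
# illustrative assumptions.
nx, dx = 100, 1.0e3        # grid cells and spacing [m] (100 km domain)
u, dt = 5.0, 100.0         # wind [m/s] and time step [s]; u*dt/dx = 0.5
P, k = 1.0e-11, 1.0e-5     # production [mixing ratio/s] and loss frequency [1/s]

x = dx * np.arange(nx)
C = 1.0e-6 * np.exp(-((x - 20.0e3) / 5.0e3) ** 2)   # initial pollution puff

for step in range(100):    # integrate for 1e4 s
    advection = -u * (C - np.roll(C, 1)) / dx       # upwind estimate of -u dC/dx
    C += dt * (advection + P - k * C)               # advance Eq. (1.1) one step

print(x[np.argmax(C)] / 1.0e3)   # peak is now near 70 km, ~50 km downwind of its start
```

  The condition u·Δt/Δx ≤ 1 used here is the standard stability requirement for an explicit upwind scheme; operational models use far more accurate advection algorithms.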

  A system is said to be deterministic if it is uniquely and entirely predictable once initial conditions are specified. It is stochastic if randomness is present so that only probabilities can be predicted. Systems obeying the laws of classical mechanics are generally deterministic. The two-body problem (e.g., a satellite orbiting a planet or a planet orbiting the Sun), described by Newton’s laws and universal gravitation, is a simple example of a deterministic system. An analytic solution of the associated differential equations can be derived with no random element. All trajectories derived with different initial conditions converge toward the same subspace called an attractor. By contrast, when trajectories starting from slightly different initial conditions diverge from each other at a sufficiently fast rate, the system is said to be chaotic. Meteorological models are a classic example. They are deterministic but exhibit chaotic behavior due to nonlinearity of the Navier–Stokes equation. This chaotic behavior is called turbulence. Chaotic systems evolve in a manner that is exceedingly dependent on the precise choice of initial conditions. Since initial conditions in a complex system such as the weather can never be exactly defined, the model results are effectively stochastic and multiple simulations (ensembles) need to be conducted to obtain model output statistics.
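
  This sensitivity can be illustrated with the Lorenz (1963) system, a three-variable model derived from atmospheric convection that has become the standard example of deterministic chaos. In the Python sketch below, two trajectories whose initial conditions differ by one part in 10^8 are integrated with a fourth-order Runge–Kutta scheme and end up bearing no resemblance to each other; the parameter values are the classical ones, and the code is only an illustration.

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz (1963) equations (classical parameters)."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(s, dt=0.01, nsteps=3000):
    """Advance the state with a basic fourth-order Runge-Kutta scheme."""
    for _ in range(nsteps):
        k1 = lorenz_rhs(s)
        k2 = lorenz_rhs(s + 0.5 * dt * k1)
        k3 = lorenz_rhs(s + 0.5 * dt * k2)
        k4 = lorenz_rhs(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return s

a = integrate(np.array([1.0, 1.0, 1.0]))
b = integrate(np.array([1.0, 1.0, 1.0 + 1.0e-8]))   # tiny initial perturbation
print(np.abs(a - b))   # after t = 30 the two states differ completely
```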

  1.4 Meteorological Models

  The basic ideas that led to the development of meteorological forecast models were formulated about a century ago. American meteorologist Cleveland Abbe (1838–1916) first proposed a mathematical approach in a 1901 paper entitled “The physical basis of long-range weather forecasting.” A few years later, in 1904, in a paper entitled “Das Problem von der Wettervorhersage betrachtet vom Standpunkte der Mechanik und der Physik” (The problem of weather prediction from the standpoint of mechanics and physics), Norwegian meteorologist Vilhelm Bjerknes (1862–1951) argued that weather forecasting should be based on the well-established laws of physics and should therefore be regarded as a deterministic problem (see Figure 1.2). He wrote:

  If it is true, as every scientist believes, that subsequent atmospheric states develop from the preceding ones according to physical law, then it is apparent that the necessary and sufficient conditions for the rational solution of forecasting problems are the following:

  1. A sufficiently accurate knowledge of the state of the atmosphere at the initial time;

  2. A sufficiently accurate knowledge of the laws according to which one state of the atmosphere develops from another.

  Figure 1.2 Norwegian meteorologist Vilhelm Bjerknes (a), and American meteorologist Cleveland Abbe (b).

  Source: Wikimedia Commons.

  Bjerknes reiterated his concept in a 1914 paper entitled “Die Meteorologie als exakte Wissenschaft” (Meteorology as an exact science). He used the medical terms “diagnostics” and “prognostics” to describe the two steps shown. He suggested that the evolution of seven meteorological variables (pressure, temperature, the three wind components, air density, and water vapor content) could be predicted from the seven equations expressing the conservation of air mass and water vapor mass (continuity equations), the conservation of energy (thermodynamic equation, which relates the temperature of air to heating and cooling processes), as well as Newton’s law of motion (three components of the Navier–Stokes equation), and the ideal gas law (which relates pressure to air density and temperature). Bjerknes realized that these equations could not be solved analytically, and instead introduced graphical methods to be used for operational weather forecasts.
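
  A common modern statement of this closed set of seven scalar equations, for the seven unknowns u, v, w (wind components), pressure p, temperature T, air density ρ, and water vapor mixing ratio q, is sketched below; the notation is illustrative and differs from Bjerknes’ original formulation.

```latex
% Illustrative modern form of Bjerknes' seven equations (not his 1904 notation).
% D/Dt = \partial/\partial t + \mathbf{v}\cdot\nabla is the material derivative;
% \mathbf{F} is friction, S_q the net water vapor source, Q the diabatic heating rate.
\begin{align*}
  \frac{D\mathbf{v}}{Dt} &= -\frac{1}{\rho}\nabla p
      - 2\,\boldsymbol{\Omega}\times\mathbf{v} + \mathbf{g} + \mathbf{F}
      && \text{momentum (3 scalar equations)}\\
  \frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{v}) &= 0
      && \text{conservation of air mass}\\
  \frac{Dq}{Dt} &= S_q
      && \text{conservation of water vapor mass}\\
  c_p\,\frac{DT}{Dt} - \frac{1}{\rho}\,\frac{Dp}{Dt} &= Q
      && \text{conservation of energy}\\
  p &= \rho R T
      && \text{ideal gas law}
\end{align*}
```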

  During World War I, Lewis Fry Richardson (1881–1951; see Figure 1.3), who was attached to the French Army as an ambulance driver, attempted during his free time to create a numerical weather forecast model using Bjerknes’ principles. He used a numerical algorithm to integrate by hand a simplified form of the meteorological equations, but the results were not satisfactory. The failure of his method was later attributed to insufficient knowledge of the initial weather conditions, and to instabilities in the numerical algorithm resulting from an excessively long time step of six hours. Richardson noted that the number of arithmetic operations needed to solve the meteorological equations numerically was so high that it would be impossible for a single operator to advance the computation faster than the weather advances. He then proposed to divide the geographic area for which prediction was to be performed into several spatial domains, and to assemble for each of these domains a team of people who would perform computations in parallel with the other teams and, when needed, communicate their information between teams. His fantasy led him to propose the construction of a “forecast factory” in a large theater hall (Figure 1.3), where a large number of teams would perform coordinated computations. This construction was a precursor vision of modern massively parallel supercomputers. The methodology used by Richardson to solve numerically the meteorological equations was published in 1922 in the landmark book Weather Prediction by Numerical Process.

  Figure 1.3 British meteorologist Lewis Fry Richardson (b), the map grid he used to make his numerical weather forecast (c), and an artist’s view of a theater hall (a) imagined by Richardson to become a “forecast factory.” Panel (a) reproduced with permission from “Le guide des cités” by François Schuiten and Benoît Peeters,

  © Copyright Casterman.

  The first computer model of the atmosphere was developed in the early 1950s by John von Neumann (1903–1957) and Jule Charney (1917–1981), using the Electronic Numerical Integrator and Computer (ENIAC). The computation took place at about the same pace as the real evolution of the weather, and so results were not useful for weather forecasting. However, the model showed success in reproducing the large-scale features of atmospheric flow. Another major success of early models was the first simulation of cyclogenesis (cyclone formation) in 1956 by Norman Phillips at the Massachusetts Institute of Technology (MIT). Today, with powerful computers, meteorological models provide weather predictions with a high degree of success over a few days and some success up to ten days. Beyond this limit, chaos takes over and the accuracy of the prediction decreases drastically (Figure 1.4). As shown by Edward Lorenz (1917–2008), the lack of forecasting predictability beyond two weeks is an unavoidable consequence of imperfect knowledge of the initial state and exponential growth of model instabilities with time (Lorenz, 1963, 1982). Increasing computer power will not relax this limitation. Lorenz’s finding clouded the optimistic view of forecasting presented earlier by Bjerknes. Predictions on longer timescales are still of great value but must be viewed as stochastic, simulating (with a proper ensemble) the statistics of weather rather than predicting any specific realization at a given time. The statistics of weather define the climate, and such long-range statistical weather prediction is called climate modeling.

  Figure 1.4 Qualitative representation of the predictability of weather, seasonal-to-interannual variability (El Niño–Southern Oscillation), and climate (natural variations and anthropogenic influences).

  Adapted from US Dept. of Energy, 2008.

  Meteorological models include a so-called dynamical core that solves Bjerknes’ seven equations at a spatial and temporal resolution often determined by available computing power. Smaller-scale turbulent features are represented through somewhat empirical parameterizations. Progress in meteorological models over the past decades has resulted from better characterization of the initial state, improvements in the formulation of physical processes, more effective numerical algorithms, and higher resolution enabled by increases in computer power. Today, atmospheric models may be used as assimilation tools, to help integrate observational data into a coherent theoretical framework; as diagnostic tools, to assist in the interpretation of observations and in the identification of important atmospheric processes; and as prognostic tools, to project the future evolution of the atmosphere on timescales of weather or climate.

  Data assimilation plays a central role in weather forecasting because it helps to better define the initial state for the forecasts. Observations alone cannot define that state because they are not continuous and are affected by measurement errors. The meteorological model provides a continuous description of the initial state, but with model errors. Data assimilation blends the information from the model state with the information from the observations, weighted by their respective errors, to achieve an improved definition of the state. Early approaches simply nudged the model toward the observations by adding a non-physical term to the meteorological equations, relaxing the difference between model and observations. Optimal estimation algorithms based on Bayes’ theorem were developed in the 1960s and provide a sounder foundation for data assimilation. They define a most likely state through minimization of an error-weighted least-squares cost function including information from the model state and from observations. Current operational forecast models use advanced methods to assimilate observations of a range of meteorological variables collected from diverse platforms and at different times. Four-dimensional variational data assimilation (4DVAR) methods ingest all observations within a time window to numerically optimize the 3-D state at the initial time of that window.
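
  The optimal-estimation step can be sketched with the best linear unbiased (Kalman-type) analysis update xa = xb + K(y – Hxb), with gain K = BHᵀ(HBHᵀ + R)⁻¹, which minimizes the error-weighted least-squares cost function described above. The small Python example below applies this update to a hypothetical two-element state with a single observation; the state, the observation operator, and the covariances are made-up numbers used only for illustration.

```python
import numpy as np

# Hypothetical two-element state (e.g., temperatures at two locations) with
# one observation of the first element. All numbers are illustrative.
xb = np.array([280.0, 285.0])          # background (model) state [K]
B = np.array([[4.0, 2.0],
              [2.0, 4.0]])             # background error covariance [K^2]
y = np.array([282.5])                  # observation [K]
H = np.array([[1.0, 0.0]])             # observation operator
R = np.array([[1.0]])                  # observation error covariance [K^2]

# Gain matrix weights model and observation by their respective errors
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + K @ (y - H @ xb)             # analysis state
print(xa)                              # [282. 286.]: both elements move toward the data
```

  Operational 4DVAR systems minimize an equivalent cost function iteratively over the assimilation window rather than forming the gain matrix explicitly, but the weighting principle is the same.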

  1.5 Climate Models

  The climate represents the long-term statistics of weather, involving not only the atmosphere but also the surface compartments of the Earth system (atmosphere, oceans, land, cryosphere). It is a particularly complex system to investigate and to model. The evolution of key variables in the different compartments can be described by partial differential equations that represent fundamental physical laws. Solution of the equations involves spatial scales from millimeters (below which turbulence dissipates) to global, and temporal scales from milliseconds to centuries or longer. The finer scales need to be parameterized in order to focus on the evolution of the larger scales. Because of the previously described chaos in the solution to the equations of motion, climate model simulations are effectively stochastic. Ensembles of climate simulations conducted over the same time horizon but with slightly modified initial conditions provide statistics of model results that attempt to reproduce observed climate statistics.

 
