Modeling of Atmospheric Chemistry
The first climate models can be traced back to the French mathematician Joseph Fourier (1768–1830; see Figure 1.5), who investigated the processes that have maintained the Earth’s mean temperature at a relatively constant value during its history. In 1896, the Swedish scientist Svante Arrhenius (1859–1927; see Figure 1.5) made the first estimate of the changes in surface temperature to be expected from an increase in the atmospheric concentration of CO2. He did so by using measurements of infrared radiation emitted by the full Moon at different viewing angles to deduce the sensitivity of absorption to the CO2 amount along the optical path, and then using the result in an energy balance equation for the Earth.
Figure 1.5 French mathematician and physicist Jean Baptiste Joseph Fourier (a), Swedish chemist Svante August Arrhenius (b), and British scientist Guy Stewart Callendar (c). Source of panel (c): G. S. Callendar Archive, University of East Anglia.
In 1938, Guy S. Callendar (1898–1964; see Figure 1.5) used a simple radiative balance model to conclude that a doubling in atmospheric CO2 would warm the Earth’s surface by 2 °C on average, with considerably more warming at the poles. In the following decades, more detailed calculations were performed with 1-D (vertical) radiative–convective models allowing for vertical transport of heat as well as absorption and emission of radiation. Increasing computing power in the 1950s and 1960s paved the way for 3-D atmospheric climate models, called general circulation models (GCMs) for their focus on describing the general circulation of the atmosphere. Early GCMs were developed by Norman Phillips at MIT, Joseph Smagorinsky and Syukuro Manabe at the Geophysical Fluid Dynamics Laboratory (GFDL) in Princeton, Yale Mintz and Akio Arakawa at the University of California at Los Angeles (UCLA), and Warren Washington and Akira Kasahara at the National Center for Atmospheric Research (NCAR).
Climate models today have become extremely complex and account for coupling between the atmosphere, the ocean, the land, and the cryosphere. The Intergovernmental Panel on Climate Change (IPCC) uses these models to inform decision-makers about the climate implications of different scenarios of future economic development. Several state-of-science climate models worldwide contribute to the IPCC assessments, and yield a range of climate responses to a given perturbation. Attempts to identify a “best” model tend to be futile because each model has its strengths and weaknesses, and ability to reproduce present-day climate is not necessarily a gauge of how well the model can predict future climate. The IPCC uses instead the range of climate responses from the different models for a given scenario as a “wisdom of crowds” statistical ensemble to assess confidence in predictions of climate change.
1.6 Atmospheric Chemistry Models
Interest in developing chemical models for the atmosphere can be traced to the early twentieth century with the first observational inference by Fabry and Buisson (1913) of an ozone layer at high altitude. Subsequent ground-based measurements of the near-horizon solar spectrum in the 1920s established that this ozone layer was present a few tens of kilometers above the surface. Its origin was first explained in 1929 by British geophysicist Sydney Chapman (1888–1970; see Figure 1.6) as a natural consequence of the exposure of molecular oxygen (O2) to ultraviolet (UV) radiation, producing oxygen atoms (O) that go on to combine with O2 to produce ozone (O3). Chapman’s model produced an ozone maximum a few tens of kilometers above the surface, consistent with observations. It introduced several important new concepts: the interaction of radiation with chemistry (photochemistry), the chemical cycling of short-lived species (the oxygen atom and ozone), the usefulness of steady-state assumptions applied to short-lived species, and the negative feedback of ozone on itself through absorption of UV radiation.
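Chapman’s steady-state reasoning can be illustrated with a short calculation. The sketch below is not from the text: the reaction set is the standard four-reaction Chapman mechanism, and the rate values are representative, illustrative numbers for roughly 30 km altitude. Balancing odd-oxygen production against loss, with fast cycling between O and O3, yields the classic closed-form expression for the ozone concentration:

```python
# Steady-state ozone from the Chapman mechanism (illustrative sketch).
# Reactions: (R1) O2 + hv -> 2O        rate j1*[O2]
#            (R2) O + O2 + M -> O3     rate k2*[O][O2][M]
#            (R3) O3 + hv -> O2 + O    rate j3*[O3]
#            (R4) O + O3 -> 2O2        rate k4*[O][O3]
# Odd oxygen (O + O3) is produced at 2*j1*[O2] and lost at 2*k4*[O][O3].
# Fast cycling (R2 ~ R3) gives [O]/[O3] = j3/(k2*[O2]*[M]); substituting
# into the odd-oxygen balance yields
#   [O3] = [O2] * sqrt(j1*k2*[M] / (j3*k4))
import math

def chapman_o3(j1, j3, k2, k4, M, fO2=0.21):
    """Steady-state O3 number density [molecules cm^-3]."""
    O2 = fO2 * M
    return O2 * math.sqrt(j1 * k2 * M / (j3 * k4))

# Representative values near 30 km (illustrative only):
M = 3.7e17     # air number density [cm^-3]
j1 = 1.0e-11   # O2 photolysis frequency [s^-1]
j3 = 1.0e-3    # O3 photolysis frequency [s^-1]
k2 = 6.0e-34   # O + O2 + M rate constant [cm^6 s^-1]
k4 = 1.5e-15   # O + O3 rate constant [cm^3 s^-1]

print(f"[O3] ~ {chapman_o3(j1, j3, k2, k4, M):.1e} cm^-3")
```

With these numbers the formula gives an ozone density of order 10^12 molecules cm^-3, consistent with a mid-stratospheric ozone maximum; as the text notes later, the pure Chapman mechanism in fact overestimates ozone because it omits the catalytic loss cycles discovered afterward.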
Figure 1.6 From top left to bottom right, (a)–(g): Sydney Chapman (Courtesy of the University Corporation for Atmospheric Research), Sir David Bates (Courtesy of Queen’s University Belfast), Baron Marcel Nicolet, Paul Crutzen (Courtesy of the Tyler Prize for Environmental Achievement), Mario Molina (Tyler Prize for Environmental Achievement), Frank Sherwood (Sherry) Rowland (Tyler Prize for Environmental Achievement), and Susan Solomon.
By the 1940s and 1950s, attention had turned to the ionized upper atmosphere due to interest in the propagation of radio waves and the origin of the aurora. Models were developed to simulate the chemical composition of this region, and some were 1-D (vertical) to address conceptual issues of coupling between chemistry and transport. In 1950, British and Belgian scientists Sir David Bates (1916–1994) and Baron Marcel Nicolet (1912–1996) (Figure 1.6), who were studying radiative emissions (airglow) in the upper atmosphere, deduced from their photochemical model that hydrogen species produced by the photolysis of water vapor could destroy large amounts of ozone in the mesosphere (50–80 km). Such catalysis by hydrogen oxide radicals was found to also represent a significant sink for ozone in the stratosphere, adding to the ozone loss in the Chapman mechanism. The late 1960s and early 1970s saw the discoveries of additional catalytic cycles for ozone loss involving nitrogen oxide radicals (NOx ≡ NO + NO2) and chlorine radicals (ClOx ≡ Cl + ClO) originating from biogenic nitrous oxide (N2O) and industrial chlorofluorocarbons (CFCs), respectively. The NOx-catalyzed cycle was found to be the dominant ozone-loss process in the natural stratosphere and this finally enabled a successful quantitative simulation of stratospheric ozone. The discovery of a CFC-driven ozone-loss cycle triggered environmental concern over depletion of the ozone layer. This work led to the awarding of the 1995 Nobel Prize in Chemistry to Dutch scientist Paul Crutzen, Mexican scientist Mario Molina, and American scientist Sherwood Rowland (Figure 1.6).
By the 1970s it was thought that our understanding of stratospheric ozone was mature, and global models coupling chemistry and transport began to be developed. These models were mostly two-dimensional (latitude–altitude), assuming uniformity in the longitudinal direction. Early three-dimensional models were also developed by Derek Cunnold at MIT and Michael Schlesinger and Yale Mintz at UCLA. A shock to the research community came in 1985 with the observational discovery of the Antarctic ozone hole, which had not been predicted by any of the models. This prompted intense research in the late 1980s and early 1990s to understand its origin. American scientist Susan Solomon (Figure 1.6) discovered that formation of polar stratospheric clouds (PSCs) under the very cold conditions of the wintertime Antarctic stratosphere enabled surface reactions regenerating chlorine radicals from their reservoirs, thus driving very fast ozone loss. The Antarctic ozone hole was a spectacular lesson in the failure of apparently well-established models when exposed to previously untested environments. Since then there have been no fundamental challenges to our understanding of stratospheric ozone, but continual improvement of models has led to a better understanding of ozone trends.
Rising interest in climate change in the 1980s and 1990s led the global atmospheric chemistry community to turn its attention to the troposphere, where most of the greenhouse gases and aerosol particles reside. In 1971, Hiram (Chip) Levy of the Harvard–Smithsonian Center for Astrophysics (Figure 1.7) used a radiative transfer model to show that sufficient UV-B radiation penetrates into the troposphere to produce the hydroxyl radical OH, a strong radical oxidant that drives the removal of methane, carbon monoxide (CO), and many other important atmospheric gases. This upended the view of the global troposphere as chemically inert with respect to oxidation. As recently as 1970, a review of atmospheric chemistry in Science magazine had stated that “The chemistry of the troposphere is mainly that of a large number of atmospheric constituents and of their reactions with molecular oxygen … Methane and CO are chemically quite inert in the troposphere” (Cadle and Allen, 1970). Levy showed not only that fast oxidation by the OH radical takes place in the troposphere, but also that it drives intricate radical-propagated reaction chains. These chains provide the foundation for much of the current understanding of tropospheric oxidant chemistry.
Figure 1.7 (a) Hiram (Chip) Levy, (b) Arie Haagen-Smit, and (c) John Seinfeld.
Early global 3-D models of tropospheric chemistry were developed in the 1980s by Hiram Levy (by then at GFDL), Michael Prather (Harvard), and Peter Zimmermann (Max-Planck Institute for Chemistry in Mainz). Simulating the troposphere presented modelers with a new range of challenges. Transport is far more complex in the troposphere than in the stratosphere, and is closely coupled to the hydrological cycle through wet convection, scavenging, and clouds. Natural and anthropogenic emissions release a wide range of reactive chemicals that interact with transport on all scales and lead to a variety of chemical regimes. The surface also provides a sink through wet and dry deposition. The environmental issues in the troposphere are diverse and require versatility in models to simulate greenhouse gases, aerosols, oxidants, various pollutants, and deposition. Present-day global models of tropospheric chemistry typically include over 100 coupled species and a horizontal resolution on the order of tens to hundreds of kilometers. A number of issues remain today at the frontier of model capabilities, including aerosol microphysics, hydrocarbon oxidation mechanisms, formation of organic aerosols, coupling with the hydrological cycle, and boundary layer turbulent processes.
As the global atmospheric chemistry community gradually worked its way down from the upper atmosphere to the troposphere, a completely independent community with roots in engineering was working on the development of urban and regional air pollution models. Attention to air pollution modeling began in the 1950s. Prior to that, the sources of pollution were considered obvious (smokestacks and chimneys, industry, sewage, etc.) and their impacts immediate. Emergence of the Los Angeles smog in the 1940s shook this concept. The smog was characterized by decreased visibility and harmful effects on health and vegetation, but neither the causes nor the actual agents could be readily identified. The breakthrough came in the 1950s when Caltech chemist Arie Haagen-Smit (1900–1977; see Figure 1.7) showed that NOx and volatile organic compounds (VOCs) emitted by vehicles could react in the atmosphere in the presence of solar radiation to produce ozone, a strong oxidant and toxic agent in surface air. This ozone production in surface air involved a totally different mechanism than in the stratosphere. Ozone was promptly demonstrated to be the principal toxic agent in Los Angeles smog. This introduced a new concept in air pollution: the pollution was worst not at the point of emission, but after atmospheric reaction some distance downwind. Additional toxicity and suppression of visibility were attributed to fine aerosol particles, also produced photochemically during transport in the atmosphere downwind from pollution sources. Similar mechanisms were found subsequently to be responsible for smog in other major cities of the world.
The discovery of photochemically generated ozone and aerosol pollutants in urban air spurred the development of air pollution models to describe the coupling of transport and chemistry. Initial efforts in the 1950s and 1960s focused on tracking the chemical evolution in transported air parcels (simple Lagrangian models) and describing the diffusion of chemically reactive plumes (Gaussian plume models). Three-dimensional air pollution models of the urban environment began to be developed in the 1970s. John Seinfeld of Caltech (Figure 1.7) was a pioneer with his development of airshed models for the Los Angeles Basin and of the underlying algorithms to simulate ozone and aerosols. By the 1970s, it also became apparent that long-range transport of ozone and aerosols caused significant pollution on the regional scale, and this together with concern over acid rain led in the 1980s and 1990s to development of 3-D regional models extending over domains of the order of 1000 km.
A major development over the past decade has been the convergence of the global atmospheric chemistry and air pollution modeling communities. This convergence has been spurred by issues of common interest: intercontinental transport of air pollution, climate forcing by aerosols and tropospheric ozone, and the application of satellite observations to the understanding of air pollution. Addressing these issues requires global models with fine resolution over the regions of interest. A new scientific front has emerged in bridging the scales of atmospheric chemistry models from urban to global.
Atmospheric chemistry modeling today is a vibrant field, with many challenges facing the research community when it comes to addressing issues of pressing environmental concern. We have discussed some of those challenges involving the representations of processes and the bridging across scales. There are a number of others. One is the development of whole-atmosphere models (from the surface to outer space) to study the response of climate to solar forcing and the response of the upper atmosphere to climate change. Another is the coupling of atmospheric chemistry to biogeochemical processes in surface reservoirs, which is emerging as a critical issue for modeling the nitrogen cycle and the fate of persistent pollutants such as mercury. Yet another challenge is the development of powerful chemical data assimilation tools to successfully manage the massive flow of atmospheric composition data from satellites. These tools are necessary for exploiting the data to test and improve current understanding of atmospheric processes, constrain surface fluxes through inverse modeling, and increase the capability of forecasts for both weather and air quality. Finally, a grand challenge is to integrate atmospheric chemistry into Earth System Models (ESMs) that attempt to fully couple the physics, chemistry, and biology in the different reservoirs of the Earth in order to diagnose interactions and feedbacks. Inclusion of atmospheric chemistry into ESMs has been lagging, largely because of the computational costs associated with the numerical integration of large chemical and aerosol mechanisms. Developing efficient and reliable algorithms is an important task for the future.
1.7 Types of Atmospheric Chemistry Models
The general objective of atmospheric chemistry models is to simulate the evolution of n interacting chemicals in the atmosphere. This is done by solving a coupled system of continuity equations, which in a fixed frame of reference can be written in the general form of equation (1.1). The solution of (1.1) depends on meteorological variables through the 3-D wind vector v, generally including parameterizations to account for fine-scale turbulent contributions to the flux divergence term ∇ • (Ci v). The local production and loss terms Pi and Li may also depend on meteorological variables.
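Equation (1.1) is introduced earlier in the book and is not reproduced in this section; for the discussion here it presumably takes the standard Eulerian form, with number concentration Ci of species i, wind vector v, and local production and loss terms Pi and Li:

```latex
\frac{\partial C_i}{\partial t} \;=\; -\nabla \cdot (C_i \mathbf{v}) \;+\; P_i \;-\; L_i,
\qquad i = 1, \ldots, n
```

The flux divergence term is what couples the chemistry to the meteorology; the production and loss terms couple the n equations to one another, since Pi and Li generally depend on the concentrations of the other species.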
Many atmospheric chemistry models do not generate their own meteorological environment and instead use 3-D time-dependent data (including winds, humidity, temperature, etc.) generated by an external meteorological model. These are called “offline” models. The meteorological input data must define a mass-conserving airflow with consistent values for the different variables affecting transport, Pi, and Li. By contrast, “online” atmospheric chemistry models are integrated into the parent meteorological model so that the chemical continuity equations are solved together with the meteorological equations for conservation of air mass, momentum, heat, and water. Online models have the advantage that they fully couple chemical transport with dynamics and with the hydrological cycle. They avoid the need for high-resolution meteorological archives, and they are not subject to time-averaging errors associated with the use of offline meteorological fields. They are not necessarily much more computer-intensive, since the cost of simulating many coupled chemical variables is often larger than the cost of the meteorological simulation. But they are far more complex to operate and interpret than offline models. The term chemical transport model (CTM) usually refers to offline 3-D models in the jargon of the atmospheric chemistry community. Here we will use the CTM terminology to refer to atmospheric chemistry models in general, since the methods are usually common to all models.
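The offline approach can be sketched as a toy operator-split time step: transport the chemical field with winds supplied by an external meteorological model, then apply local production and loss. This is an illustrative sketch, not any particular model’s code; the 1-D periodic domain, first-order upwind scheme, and all grid sizes, winds, and rate values below are invented for the example.

```python
# Toy offline-CTM time step on a 1-D periodic domain, using operator
# splitting: advection by a prescribed ("offline") wind, then local
# chemistry P - L*C. All numerical values are illustrative.
import numpy as np

def advect_upwind(C, u, dx, dt):
    """First-order upwind advection of concentrations C by a uniform wind u > 0."""
    return C - u * dt / dx * (C - np.roll(C, 1))

def chemistry(C, P, L, dt):
    """Forward-Euler update for local production P and first-order loss L*C."""
    return C + dt * (P - L * C)

nx, dx, dt = 50, 1.0e3, 50.0   # grid cells, grid spacing [m], time step [s]
u = 5.0                        # prescribed wind [m/s] from the external met model
P, L = 1.0e-3, 1.0e-4          # production [ppb/s], loss frequency [1/s]
C = np.zeros(nx)
C[10:15] = 10.0                # initial pollutant pulse [ppb]

for _ in range(100):           # operator-split integration
    C = advect_upwind(C, u, dx, dt)   # transport (meteorology-driven)
    C = chemistry(C, P, L, dt)        # local production and loss

print(f"max C = {C.max():.2f} ppb")
```

Note the stability constraint implicit in the transport step: the Courant number u·dt/dx (here 0.25) must stay below 1, which is one reason the time-averaging of offline meteorological fields mentioned above matters in practice.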
The meteorological model used to drive an atmospheric chemistry model can either be “free-running” or include assimilation of meteorological data. Data assimilation allows a meteorological model to simulate a specific observed meteorological year. A free-running model without data assimilation generates an ensemble of possible meteorological years, but not an actual observed year. Use of assimilated meteorological data is necessary to compare an atmospheric chemistry model to observations for a particular year; with a free-running meteorological model, only climatological statistics can be compared. However, one advantage of using a free-running meteorological model is that winds and other meteorological variables are physically consistent. Data assimilation applies a non-physical correction to the model meteorology that can cause unrealistic behavior of non-assimilated variables called “data shock.” For example, stratospheric models using assimilated meteorological data tend to suffer from excessive vertical transport because the assimilation of horizontal wind observations generates spurious vertical flow to enforce mass conservation. Advanced data assimilation schemes attempt to minimize these data shocks.
Another distinction can be made between Eulerian and Lagrangian models (Figure 1.8). A Eulerian model solves the continuity equations in a geographically fixed frame of reference, while a Lagrangian model uses a frame of reference that moves with the atmospheric flow. The continuity equation as written in (1.1), including partial differentiation with respect to time and space, describes the evolution of concentrations in a fixed frame of reference and represents the Eulerian approach. Finite-difference approximations of the partial derivatives produce solutions on a fixed grid of points representing the model domain. By contrast, the Lagrangian approach solves the continuity equations for points moving with the flow; for these points we can rewrite (1.1) as