
The Resilient Earth: Science, Global Warming and the Fate of Humanity


by Allen Simmons


  Scientists argue—that's the way science works. Attempting to shut down debate by claiming that consensus has been reached is a sure sign that something other than science is at work. Michael Crichton, best known for his novels but also a graduate of Harvard Medical School and a former postdoctoral fellow at the Salk Institute for Biological Studies, warned of the dangers of “consensus science” in a 2003 speech:

  “Historically, the claim of consensus has been the first refuge of scoundrels; it is a way to avoid debate by claiming that the matter is already settled. Whenever you hear the consensus of scientists agrees on something or other, reach for your wallet, because you're being had. Let's be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. In science consensus is irrelevant. What is relevant is reproducible results. The greatest scientists in history are great precisely because they broke with the consensus.”

  As Anatole France said, “If fifty million people say a foolish thing, it is still a foolish thing.” If any number of scientists believe an erroneous theory to be correct, it is still an erroneous theory. Scientific consensus is what people fall back on when there is no clear-cut evidence or compelling theoretical explanation. To borrow a phrase from that great Texan, John “Cactus Jack” Garner,382 consensus is “not worth a bucket of warm spit.”

  The First Pillar of Climate Science

  As we stated in Chapter 1, scholars often refer to the three pillars of science: theory, experimentation, and computation.383 Now that we have completed our survey of the science behind Earth's climate and the natural causes of climate change we can return to analyzing the IPCC's theory of human-caused global warming. Having spent the last six chapters discussing the theories that try to explain climate change, we will begin here with the first pillar—theory.

  It should be obvious, from the number of times we have quoted scientists declaring the causes of one aspect of climate change or another to be “unknown” or “poorly understood,” that the theoretical understanding of Earth's climate is suspect. The detailed and tortuously defined levels of uncertainty presented in the IPCC reports are themselves an admission of fact: the theoretical understanding of Earth's climate is incomplete in fundamental ways.

  In Chapter , we discussed the “missing sink” of carbon that has been under study for thirty years without being found. We have cited the recent realization by the European Parliament that animal emissions are more potent than human CO2 and that, for large portions of Asia, it is particulate pollution in the “brown clouds” causing most of the atmospheric warming. We discussed statistical links between climate and the sunspot cycle that are not explained by conventional climate theory.

  In Chapter , we discussed the astrophysically based theories linking climate change to solar cycles and even the solar system's path through the Milky Way. These theories, yet to find wide acceptance among climatologists, have been strengthened by new findings regarding ion-initiated nucleation (IIN) in the troposphere and lower stratosphere.384 Recent research has also found a link between the ozone layer and global cooling. A report in the Proceedings of the National Academy of Sciences (PNAS) states that current global warming would be substantially worse if not for the cooling effect of stratospheric ozone.385 Every day, science uncovers new relationships, new factors influencing Earth's climate. Theoretical understanding of how Earth's climate functions can only be called incomplete. Theory—the first pillar of climate science—is weak at best.

  In the context of climate science, experimentation involves taking measurements of the oceans and atmosphere and collecting historical climate data in the form of proxies. In the next chapter, we will examine the second pillar, experimentation.

  Experimental Data and Error

  “Doubt is not a pleasant condition, but certainty is absurd.”

  — Voltaire

  Papers discussing global warming often include historical temperature records going back hundreds or thousands of years. At first glance, this seems a bit puzzling, since the thermometer is a relatively recent invention. The first thermometers were called thermoscopes and several scientists invented versions around the same time. Galileo invented a water-filled thermometer in 1593. Italian inventor Santorio Santorio was the first to put a numerical scale on the instrument, allowing precise measurements. Gabriel Fahrenheit386 invented the alcohol thermometer in 1709, and the first modern mercury thermometer in 1714.

  Illustration 120: Anders Celsius (1701-1744).

  Fahrenheit also devised a temperature scale to use with his thermometers. He set zero (0°F) at the coldest temperature he could attain under laboratory conditions, achieved using a mixture of water, ice, and ammonium chloride, and adjusted the scale so its high end fell at the boiling point of water. His final temperature scale had 180 degrees between the freezing and boiling points of water, which he put at 32°F and 212°F, respectively. But Fahrenheit's scale was not the only one devised.

  The Celsius scale was invented by Swedish astronomer Anders Celsius387 in 1742. The scale was originally called the centigrade scale because it has 100 degrees between the freezing point (0°C) and the boiling point (100°C) of water; the name “Celsius” was officially adopted in 1948 by an international conference on weights and measures. For scientific measurements, Fahrenheit's scale has been replaced by the Celsius scale.
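
  For readers who want to move between the two scales, the relationship is a simple linear one: the 180 Fahrenheit degrees between freezing and boiling cover the same interval as 100 Celsius degrees, offset so that 32°F lines up with 0°C. Here is a minimal sketch in Python; the function names are our own, chosen only for illustration:

    def fahrenheit_to_celsius(temp_f):
        # 180 F degrees span the same interval as 100 C degrees,
        # and the scales meet at the freezing point: 32 F = 0 C.
        return (temp_f - 32.0) * 100.0 / 180.0

    def celsius_to_fahrenheit(temp_c):
        return temp_c * 180.0 / 100.0 + 32.0

    print(fahrenheit_to_celsius(212.0))   # 100.0, the boiling point of water
    print(celsius_to_fahrenheit(37.0))    # 98.6, normal body temperature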

  As scientists and explorers spread across the world, they began collecting temperature readings from other lands, from the tropics to the frozen poles. Even so, we only have reliable readings from Europe and eastern North America going back a little over 200 years. Worldwide records have only been available for the past half century. These records are mostly surface temperature readings. Nowadays, satellites and weather balloons report temperatures at various levels in the atmosphere.

  In addition to temperature, many other parameters are of interest to climatologists: insolation levels, cosmic ray flux, atmospheric CO2 levels, and the amount of dust and particulates in the atmosphere, to name a few. Since modern instruments have only been available for the last quarter century or so, this poses a problem for scientists wanting to examine Earth's climate in the past. Paleoclimatologists have to rely on data-gathering techniques that involve stand-ins for direct instrument readings—so-called proxy data.

  In order to understand the second pillar of climate science—experimentation—we need to examine data collection using modern instruments as well as historical data collection using proxies. We will start with modern data collection techniques.

  Satellites and Radiosondes

  How reliable are modern climate data? For temperature measurements there are three main sources: remote sensing data from orbiting satellites, direct temperature readings from radiosondes attached to balloons, and surface temperature readings taken at local weather stations and aboard ships at sea. With the launch of TIROS I (Television and InfraRed Observation Satellite) on April 1, 1960, climate science entered the space age. From that day on, a large number of spacecraft have observed Earth's weather conditions on a regular basis. Today, most of the world is monitored from the vantage point of outer space.

  In 1963, following the experimental TIROS series, NASA formed an in-house satellite design team at the Goddard Space Flight Center (GSFC). This team was led by William Stroud, Rudolf Stampfl, John Licht, Rudolf Hanel, William Bandeen, and William Nordberg. Together, these men developed the satellite series called NIMBUS (Latin for rain cloud). These spacecraft subsequently led to the swarm of satellites that currently observe Earth's climate and weather conditions.

  Today, most of the world is monitored by orbiting spacecraft, and it was the Nimbus pioneers who helped make this possible. The 1960s were a golden age at NASA—the race with the Soviets to put a man on the moon and the competition in Earth-orbiting satellites spurred rapid development. A total of seven Nimbus satellites were launched between 1964 and 1978. It was during this period that Simmons worked with Dr. Rudy Hanel, principal scientist for Nimbus 3 and 4. Hanel developed a modified Michelson infrared interferometer spectrometer, dubbed IRIS. IRIS was designed to produce vertical profiles of temperature, water vapor, ozone, and other chemical species, along with interferograms and spectral measurements. The prototype device was assembled in a small, closed room where the instrument looked at a “black body”—a three-foot circle of plywood painted black—which Simmons called the “black gong.”

  One day, while Hanel and Simmons worked on the prototype in the closet, something happened. After thirty minutes of adjusting wires and calibrating the beam-splitter with a laser beam, Simmons said, “There's something wrong. The CO2 level is climbing.” Excitedly, Dr. Hanel replied, “Yes! Yes! The instrument sees our CO2 breaths.” It was an “aha!” moment when IRIS first saw its creators' breath. Simmons realized that the device had measured the CO2 from his body and would soon measure the CO2 in Earth's atmosphere. After Hanel's first IRIS successfully flew on Nimbus, another IRIS traveled to Mars on Mariner 9. After the Mars mission, an IRIS flew on Voyager 2 and took measurements as the spacecraft navigated the rings of Saturn. The valuable lessons learned exploring other planets were soon reapplied to monitoring Earth's climate.

  Illustration 121: The TIROS 1 weather satellite from 1960. Source NASA/NOAA.

  In 1978, the first NOAA polar-orbiting satellite, TIROS-N, was launched. It was followed by the spacecraft of the NOAA series, the latest of which is NOAA-N, the 15th to be launched. NOAA uses two satellites, a morning and afternoon satellite, to ensure every part of the Earth is observed at least twice every 12 hours. Each satellite has a lifetime of around six years, so constant replacement is required to maintain coverage.

  These satellites monitor severe weather, which is reported to the National Weather Service, and even assist international search and rescue efforts. Among the instruments carried by these orbiting weather stations is a passive microwave radiometer known as the Microwave Sounding Unit (MSU). The MSU monitors microwave emissions from atmospheric oxygen at several frequencies. By watching different frequency bands, or channels, it constantly monitors temperatures in three overlapping zones of the atmosphere. These atmospheric zones are known as the lower troposphere (LT), from the surface to 5 miles (8 km); the mid-troposphere (MT), from the surface to 11 miles (18 km); and the lower stratosphere (LS), from 9 miles to 14 miles (15 km to 23 km). With the launch of the NOAA-15 spacecraft in 1998, the MSU was replaced by the Advanced MSU (AMSU), which provides expanded monitoring capability.
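
  The layer definitions above can be summarized in a small lookup table. The Python sketch below is our own simplification for illustration; in reality each MSU/AMSU channel senses a smooth weighting function over altitude rather than a layer with hard cutoffs:

    # Approximate altitude ranges (km) for the MSU-derived temperature products,
    # using the figures quoted above. Real channels have smooth weighting
    # functions, not sharp boundaries -- this is a simplification.
    MSU_LAYERS = {
        "LT": (0.0, 8.0),    # lower troposphere: surface to about 8 km
        "MT": (0.0, 18.0),   # mid-troposphere: surface to about 18 km
        "LS": (15.0, 23.0),  # lower stratosphere: about 15 km to 23 km
    }

    def layers_covering(altitude_km):
        """Return the layer products whose nominal range includes an altitude."""
        return [name for name, (lo, hi) in MSU_LAYERS.items()
                if lo <= altitude_km <= hi]

    print(layers_covering(16.0))   # ['MT', 'LS'] -- the zones overlap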

  There are a number of factors that impact the accuracy of satellite temperature readings, including differences between MSU instruments, the varying temperature of the spacecraft themselves, and slow changes in the spacecraft's orbit. One scientist who has been intimately involved with gathering and correcting satellite temperature data is John Christy, Professor of Atmospheric Science and director of the Earth System Science Center at the University of Alabama in Huntsville. Over the years, Christy has published numerous papers regarding the accuracy of satellite data and is responsible for generating unified satellite temperature histories at UAH. When the temperature trends were analyzed, he found decadal error rates of ±0.05°C, ±0.05°C, and ±0.10°C for the LT, MT, and LS, respectively.388

  These may seem like low error rates but, as Christy has pointed out, “below the stratosphere the anticipated rate of human-induced warming is on the order of 0.1°C to 0.3°C decade⁻¹ so that errors or interannual impacts of 0.01°C decade⁻¹ approach the magnitude of the signal being sought.”389 In other words, even modern satellite data is too uncertain to serve as the basis for decade-long temperature predictions. Another interesting point made by Christy is that, during the 30 years of “good” satellite data collected by the NOAA spacecraft, there have been two major volcanic eruptions (Mount St. Helens and Mount Pinatubo) and two exceptionally strong El Niño events (1982-83 and 1997-98). The impact of these events on global temperatures skewed the data collected over the past three decades in ways that cannot be fully accounted for. In short, the only good recent data we have is not typical, so any long-term projections based on that data will be biased.
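
  The arithmetic behind this caution is easy to check. Here is a back-of-the-envelope comparison in Python, using the decadal error estimates given above and the 0.1°C to 0.3°C per decade warming rates Christy cites:

    # Decadal trend uncertainties for the satellite products (deg C per decade)
    trend_error = {"LT": 0.05, "MT": 0.05, "LS": 0.10}

    # Anticipated human-induced warming below the stratosphere (deg C per decade)
    signal_low, signal_high = 0.1, 0.3

    for layer, err in trend_error.items():
        # How large the measurement uncertainty is relative to the expected signal
        print(f"{layer}: error is {err / signal_low:.0%} of a {signal_low} C/decade "
              f"signal and {err / signal_high:.0%} of a {signal_high} C/decade signal")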

  Much of Earth's surface is covered by water, and there are few weather stations scattered about the ocean's surface. To provide more complete coverage, spacecraft are used to monitor the temperatures of surface water around the globe. Satellite data are calibrated using ship observations of surface temperature from the same time and place. In a study of sea surface temperatures (SST), Reynolds et al. summed up the situation, saying, “The globally averaged guess error was 0.3°C; the globally averaged data error was 1.3°C for ship data, 0.5°C for buoy and daytime satellite data, and 0.3°C for nighttime satellite data and SST data generated from sea ice concentrations. Clearly SST analyses have improved over the last 20 years and differences among analyses have tended to become smaller. However, as we have shown, differences remain.”390 If satellite data alone is not good enough to predict the future, what about other data collection methods? Next, we examine the other major method of gathering temperature data from different levels in the atmosphere—radiosondes.
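
  One way to appreciate these error figures is to see how analysts commonly blend observations of differing quality: each source is weighted by the inverse of its error variance, so noisier sources count for less. The Python sketch below is a generic illustration of that idea using only the error values quoted above; it is not the actual Reynolds et al. analysis scheme:

    # Data error estimates (deg C) quoted from Reynolds et al.
    sst_errors = {
        "ship": 1.3,
        "buoy": 0.5,
        "satellite_day": 0.5,
        "satellite_night": 0.3,
    }

    def inverse_variance_weights(errors):
        """Weight each source by 1/error^2 and normalize. A standard textbook
        approach, not necessarily the exact scheme of any given SST analysis."""
        raw = {name: 1.0 / (err * err) for name, err in errors.items()}
        total = sum(raw.values())
        return {name: w / total for name, w in raw.items()}

    for source, weight in inverse_variance_weights(sst_errors).items():
        print(f"{source}: weight {weight:.2f}")
    # Ship reports, with their 1.3 C error, end up carrying only a few percent
    # of the weight given to the 0.3 C nighttime satellite retrievals.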

  Illustration 122: Launching a radiosonde c. 1936. Photo NOAA.

  A radiosonde, from the French word sonde meaning “probe,” is an expendable package of instruments sent aloft attached to a weather balloon. As the balloon ascends, the radiosonde measures various atmospheric parameters and transmits them to receivers on the ground. The first radiosonde was launched by Soviet meteorologist Pavel Molchanov on January 30, 1930.391

  Modern radiosondes measure a number of parameters: altitude, location, wind speed, atmospheric pressure, relative humidity and temperature. Worldwide there are more than 800 radiosonde launch sites and most countries share their data. In the United States, the National Weather Service is tasked with providing upper-air observations for use in weather forecasting, severe weather watches and warnings, and atmospheric research. There are 92 launch sites in North America and the Pacific Islands and 10 more in the Caribbean. Each site launches two radiosondes daily. Radiosonde data is freely available from NOAA on the web.

  Measuring temperature using radiosondes involves a number of complications that must be compensated for. As the balloon rises, the instrument that measures temperature, usually a form of thermistor, experiences a time lag. This lag makes accurately matching temperatures with correct altitudes difficult. Another complication is the Sun heating the instrument package as it rises, which biases the temperature readings. Worse still, different instrument packages from different manufacturers respond in different ways to these factors. Changing instrument brands can cause a shift in recorded temperatures by as much as 5.5°F (3°C).392
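
  The time-lag problem can be pictured with a simple first-order sensor model: the thermistor's reading relaxes toward the true air temperature with some characteristic time constant, so a rising balloon always reports a slightly stale value. The Python sketch below illustrates the generic correction; the time constant and readings are invented for the example and do not describe any particular radiosonde:

    # First-order lag model: dT_sensor/dt = (T_air - T_sensor) / tau,
    # which gives the estimate  T_air ~ T_sensor + tau * dT_sensor/dt.
    TAU = 5.0   # sensor time constant in seconds (illustrative value only)

    def lag_corrected(readings, dt=1.0, tau=TAU):
        """Crude lag correction for temperatures sampled every dt seconds."""
        corrected = []
        for i in range(1, len(readings)):
            rate = (readings[i] - readings[i - 1]) / dt   # deg C per second
            corrected.append(readings[i] + tau * rate)
        return corrected

    # Simulated lagged readings from a balloon rising through cooling air
    sensor = [15.0 - 0.03 * second for second in range(10)]
    print(lag_corrected(sensor)[:3])   # each estimate is ~0.15 C cooler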

  Illustration 123: Radiosonde temperature data. Source JunkScience.com.

  Over time, changes in instrumentation and a lack of documented comparison data among sites have made constructing global temperature histories very difficult. Regardless, radiosonde data is quite useful, particularly when readings are averaged over time. Variation caused by the El Niño-Southern Oscillation (ENSO) can be seen in Illustration 123. But when scientists are trying to measure decadal variations on the order of 0.05°C, these data inaccuracies overwhelm any detectable trend. Given the uncertainties in both methods of data collection, it is not surprising that there have been a number of controversies regarding disagreements between satellite and radiosonde data.
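
  Averaging helps because random reading errors tend to cancel, shrinking roughly with the square root of the number of observations, while a systematic offset, such as that introduced by a change of instrument brand, does not cancel at all. Here is a toy demonstration in Python, with entirely made-up numbers:

    import random

    random.seed(1)
    TRUE_TEMP = -50.0        # hypothetical upper-air temperature (deg C)
    RANDOM_ERROR = 1.0       # random scatter per sounding (deg C)
    INSTRUMENT_BIAS = 0.5    # constant offset from an instrument change (deg C)

    # Two launches a day for a year, each with random noise plus the fixed bias
    soundings = [TRUE_TEMP + INSTRUMENT_BIAS + random.gauss(0.0, RANDOM_ERROR)
                 for _ in range(2 * 365)]

    mean = sum(soundings) / len(soundings)
    print(f"mean of {len(soundings)} soundings: {mean:.2f} C (truth is {TRUE_TEMP} C)")
    # The random scatter averages away, but the 0.5 C bias remains -- far larger
    # than the ~0.05 C per decade trends climatologists hope to resolve.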

  If both satellite data and radiosonde measurements have had problems in the past, what about temperature readings from ground sites? It would be natural to assume that thermometer readings from weather stations would be reliable, considering that thermometers have been around for more than 200 years. But quickly checking the temperature to see if an extra jacket is needed is one thing—reliably recording temperatures accurate enough for climate studies turns out to be much more difficult.

  Recently, NASA became aware of a glitch in its historical temperature data. It seems that a volunteer team, investigating problems with US temperature data used for climate modeling, noticed a suspicious anomaly in NASA's historical temperature graphs. A strange discontinuity, or jump, in temperature readings from many locations occurred around January 2000.

  The original graphs and data available on the NASA/GISS website were created by Reto Ruedy and James Hansen. Hansen has been a vocal supporter of the IPCC claims and gained notoriety by accusing the Bush administration of trying to censor his views on climate change. When contacted, Hansen refused to provide the algorithms used to generate the graph data, a position reminiscent of Mann's refusal to release the data and algorithms behind the hockey stick graph. Faced with Hansen's refusal, one of the volunteers reverse-engineered the data-processing algorithm and, after analyzing the results, found what appeared to be a Y2K bug in the handling of the raw data.
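
  Spotting a jump like this does not require the original algorithms; comparing the average values on either side of a suspected break point is often enough to flag it. The Python sketch below is our own generic illustration with fabricated numbers, not the volunteers' reverse-engineered code:

    def step_change(series, break_index):
        """Mean after minus mean before a candidate break point -- a crude
        screen for discontinuities in a station's temperature record."""
        before = series[:break_index]
        after = series[break_index:]
        return sum(after) / len(after) - sum(before) / len(before)

    # Fabricated monthly anomalies (deg C) with an artificial jump at index 6,
    # standing in for the kind of step seen around January 2000.
    anomalies = [0.20, 0.15, 0.25, 0.18, 0.22, 0.19,
                 0.55, 0.60, 0.52, 0.58, 0.61, 0.57]
    print(f"apparent step: {step_change(anomalies, 6):+.2f} C")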

  Illustration 124: Revised US Temperatures since 1880. Source NASA.

  For those too young to remember, Y2K refers to the “year 2000 computer crisis.” At the end of the 20th century, computer programmers realized that some software might fail when the year rolled over from 1999 to 2000. This was anticipated, and steps were taken to correct the problems before they occurred. Even so, the news media were filled with stories of impending disaster: people trapped in elevators, airplanes falling from the skies, the power grid failing, and bank accounts emptying overnight. None of these things happened, of course, but there was a run on the banks as a panicked public withdrew money in case the financial system collapsed.
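
  The underlying programming problem was mundane: many older systems stored years with only two digits, so “00” sorts before “99” and any arithmetic spanning the rollover goes wrong. Here is a minimal illustration in Python, not drawn from any actual affected system:

    def years_elapsed(start_yy, end_yy):
        """Naive subtraction on two-digit years, as much legacy software did."""
        return end_yy - start_yy

    print(years_elapsed(99, 0))     # -99, instead of the correct 1 (1999 to 2000)
    print(sorted(["99", "00"]))     # ['00', '99'] -- the year 2000 sorts first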

 
