The Collapse of Western Civilization


by Naomi Oreskes and Erik M. Conway


  In the early Penumbral Period, physical scientists who spoke out about the potentially catastrophic effects of climate change were accused of being “alarmist” and of acting out of self-interest—to increase financial support for their enterprise, gain attention, or improve their social standing. At first, these accusations took the form of public denunciations; later they included threats, thefts, and the subpoena of private correspondence.1 A crucial but under-studied incident was the legal seizure of notes from scientists who had documented the damage caused by a famous oil spill of the period, the 2010 BP Deepwater Horizon disaster. Though leaders of the scientific community protested, scientists yielded to the demands, thus helping set the stage for further pressure on scientists from both governments and the industrial enterprises that governments subsidized and protected.2 Then legislation was passed (particularly in the United States) that placed limits on what scientists could study and how they could study it, beginning with the notorious House Bill 819, better known as the “Sea Level Rise Denial Bill,” passed in 2012 by the government of what was then the U.S. state of North Carolina (now part of the Atlantic Continental Shelf).3 Meanwhile the Government Spending Accountability Act of 2012 restricted the ability of government scientists to attend conferences to share and analyze the results of their research.4

  Though ridiculed when first introduced, the Sea Level Rise Denial Bill would become the model for the U.S. National Stability Protection Act of 2025, which led to the conviction and imprisonment of more than three hundred scientists for “endangering the safety and well-being of the general public with unduly alarming threats.” By exaggerating the threat, it was argued, scientists were preventing the economic development essential for coping with climate change. When the scientists appealed, their convictions were upheld by the U.S. Supreme Court under the Clear and Present Danger doctrine, which permitted the government to limit speech deemed to represent an imminent threat.

  Had scientists exaggerated the threat, inadvertently undermining the evidence that would later vindicate them? Certainly, narcissistic fulfillment played a role in the public positions that some scientists took, and in the early part of the Penumbral Period, funds flowed into climate research at the expense of other branches of science, not to mention other forms of intellectual and creative activity. Indeed, it is remarkable how little these extraordinarily wealthy nations spent to support artistic production; one explanation may be that artists were among the first to truly grasp the significance of the changes that were occurring. The most enduring literary work of this time is the celebrated science “fiction” trilogy by the American writer Kim Stanley Robinson—Forty Signs of Rain, Fifty Degrees Below, and Sixty Days and Counting.5 Sculptor Dario Robleto also “spoke” to the issue, particularly species loss; his material productions have been lost, but responses to his work are recorded in contemporary accounts.6 Some environmentalists also anticipated what was to come, notably the Australians Clive Hamilton and Paul Gilding. (Perhaps because Australia’s population was highly educated and living on a continent at the edge of habitability, it was particularly sensitive to the changes under way.)7 These “alarmists”—scientists and artists alike—were correct in their forecasts of an imminent shift in climate; in fact, by 2010 or so, it was clear that scientists had been underestimating the threat, as new developments outpaced early predictions of warming, sea level rise, and Arctic ice loss, among other parameters.8

  It is difficult to understand why humans did not respond appropriately in the early Penumbral Period, when preventive measures were still possible. Many have sought an answer in the general phenomenon of human adaptive optimism, which later proved crucial for survivors. Even more elusive to scholars is why scientists, whose job it was to understand the threat and warn their societies—and who thought that they did understand the threat and that they were warning their societies—failed to appreciate the full magnitude of climate change.

  To shed light on this question, some scholars have pointed to the epistemic structure of Western science, particularly in the late nineteenth and twentieth centuries, which was organized both intellectually and institutionally around “disciplines” in which specialists developed a high level of expertise in a small area of inquiry. This “reductionist” approach, sometimes credited to the seventeenth-century French philosopher René Descartes but not fully developed until the late nineteenth century, was believed to give intellectual power and vigor to investigations by focusing on singular elements of complex problems. “Tractability” was a guiding ideal of the time: problems that were too large or complex to be solved in their totality were divided into smaller, more manageable elements. While reductionism proved powerful in many domains, particularly quantum physics and medical diagnostics, it impeded investigations of complex systems. Reductionism also made it difficult for scientists to articulate the threat posed by climatic change, since many experts did not actually know very much about aspects of the problem beyond their expertise. (Other environmental problems faced similar challenges. For example, for years scientists did not understand the role of polar stratospheric clouds in severe ozone depletion in the still-glaciated Antarctic region because “chemists” working on the chemical reactions did not even know that there were clouds in the polar stratosphere!) Even scientists who had a broad view of climate change often felt it would be inappropriate for them to articulate it, because that would require them to speak beyond their expertise and would seem to claim credit for other people’s work.

  Responding to this, scientists and political leaders created the Intergovernmental Panel on Climate Change (IPCC) to bring together the diverse specialists needed to speak to the whole problem. Yet, perhaps because of the diversity of specialist views represented, perhaps because of pressures from governmental sponsors, or perhaps because of the constraints of scientific culture already mentioned, the IPCC had trouble speaking in a clear voice. Other scientists promoted the ideas of systems science, complexity science, and, most pertinent to our purposes here, earth systems science, but these so-called holistic approaches still focused almost entirely on natural systems, omitting from consideration the social components. Yet in many cases, the social components were the dominant system drivers. It was often said, for example, that climate change was caused by increased atmospheric concentrations of greenhouse gases. Scientists understood that those greenhouse gases were accumulating because of the activities of human beings—deforestation and fossil fuel combustion—yet they rarely said that the cause was people, and their patterns of conspicuous consumption.

  Other scholars have looked to the roots of Western natural science in religious institutions. Just as religious orders of prior centuries had demonstrated moral rigor through extreme practices of asceticism in dress, lodging, behavior, and food—in essence, practices of physical self-denial—so, too, did physical scientists of the twentieth and twenty-first centuries attempt to demonstrate their intellectual rigor through practices of intellectual self-denial.9 These practices led scientists to demand an excessively stringent standard for accepting claims of any kind, even those involving imminent threats. In an almost childlike attempt to demarcate their practices from those of older explanatory traditions, scientists felt it necessary to prove to themselves and the world how strict they were in their intellectual standards. Thus, they placed the burden of proof on novel claims—even empirical claims about phenomena that their theories predicted. This included claims about changes in the climate.

  Some scientists in the early twenty-first century, for example, had recognized that hurricanes were intensifying. This was consistent with the expectation—based on physical theory—that warmer sea surface temperatures in regions of cyclogenesis could, and likely would, drive either more hurricanes or more intense ones. However, they backed away from this conclusion under pressure from their scientific colleagues. Much of the argument centered on the concept of statistical significance. Given what we now know about the dominance of nonlinear systems and the distribution of stochastic processes, the then-dominant notion of a 95 percent confidence limit is hard to fathom. Yet overwhelming evidence suggests that twentieth-century scientists believed that a claim could be accepted only if, by the standards of Fisherian statistics, the probability that an observed event could have happened by chance was less than 1 in 20. Many phenomena whose causal mechanisms were physically, chemically, or biologically linked to warmer temperatures were dismissed as “unproven” because they did not adhere to this standard of demonstration. Historians have long argued about why this standard was accepted, given that it had neither epistemological nor substantive mathematical basis. We have come to understand the 95 percent confidence limit as a social convention rooted in scientists’ desire to demonstrate their disciplinary severity.
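
  The “1 in 20” convention can be made concrete with a minimal sketch; the temperature anomalies below are hypothetical numbers chosen only for illustration, not observations from the period.

```python
# Minimal sketch of the "1 in 20" Fisherian convention described above.
# The anomaly values are hypothetical, chosen only for illustration.
from scipy import stats

# Hypothetical annual temperature anomalies (degrees C) relative to a baseline
anomalies = [0.12, 0.18, 0.25, 0.31, 0.29, 0.41, 0.38, 0.47, 0.52, 0.55]

# Null hypothesis: no change (true mean anomaly is zero)
t_stat, p_value = stats.ttest_1samp(anomalies, popmean=0.0)

# Accept the claim only if the chance of the data arising under the null
# hypothesis is less than 1 in 20 (p < 0.05)
print(f"p-value = {p_value:.4g}")
print("claim accepted" if p_value < 0.05 else "claim dismissed as 'unproven'")
```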

  Western scientists built an intellectual culture based on the premise that it was worse to fool oneself into believing in something that did not exist than not to believe in something that did. Scientists referred to these positions, respectively, as “type I” and “type II” errors, and established protocols designed to avoid type I errors at almost all costs. One scientist wrote, “A type I error is often considered to be more serious, and therefore more important to avoid, than a type II error.” Another claimed that type II errors were not errors at all, just “missed opportunities.”10 So while the pattern of weather events was clearly changing, many scientists insisted that these events could not yet be attributed with certainty to anthropogenic climate change. Even as lay citizens began to accept this link, the scientists who studied it did not.11 More important, political leaders came to believe that they had more time to act than they really did. The irony of these beliefs need not be dwelt on; scientists missed the most important opportunity in human history, and the costs that ensued were indeed nearly “all costs.”
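
  The asymmetry between the two error types can likewise be sketched; the simulation below uses assumed, illustrative numbers to show how a stricter significance threshold trades fewer false alarms (type I errors) for more missed warnings (type II errors).

```python
# Illustrative simulation (assumed numbers): the trade-off between type I
# errors (false alarms) and type II errors (missed warnings).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N_TRIALS, N_OBS, TRUE_SHIFT = 5_000, 20, 0.3  # hypothetical study design

def rejection_rate(true_mean, alpha):
    """Fraction of simulated studies that reject the 'no change' null."""
    rejections = 0
    for _ in range(N_TRIALS):
        sample = rng.normal(loc=true_mean, scale=1.0, size=N_OBS)
        _, p = stats.ttest_1samp(sample, popmean=0.0)
        rejections += p < alpha
    return rejections / N_TRIALS

for alpha in (0.05, 0.20):
    type1 = rejection_rate(0.0, alpha)             # no real change: false alarms
    type2 = 1 - rejection_rate(TRUE_SHIFT, alpha)  # real change: missed warnings
    print(f"alpha={alpha:.2f}: type I rate ~{type1:.2f}, type II rate ~{type2:.2f}")
```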

  By 2012, more than 365 billion tons of carbon had been emitted into the atmosphere from fossil fuel combustion and cement production. Another 180 billion tons came from deforestation and other land use changes. Remarkably, more than half of these emissions occurred after the mid-1970s—that is, after scientists had built computer models demonstrating that greenhouse gases would cause warming. Emissions continued to accelerate even after the United Nations Framework Convention on Climate Change (UNFCCC) was established: between 1992 and 2012, total CO2 emissions increased by 38 percent.12 Some of this increase was understandable, as energy use grew in poor nations seeking to raise their standard of living. Less explicable is why, at the very moment when disruptive climate change was becoming apparent, wealthy nations dramatically increased their production of fossil fuels. The countries most involved in this enigma were two of the world’s richest: the United States and Canada.

  A turning point was 2005, when the U.S. Energy Policy Act exempted shale gas drilling from regulatory oversight under the Safe Drinking Water Act. This statute opened the floodgates (or, more precisely, the wellheads) to massive increases in shale gas production.13 U.S. shale gas production at that time was less than 5 trillion cubic feet (Tcf, with “feet” an archaic imperial unit roughly equal to a third of a meter) per annum. By 2035, it had increased to 13.6 Tcf. As the United States expanded shale gas production and exported the relevant technology, other nations followed. By 2035, total gas production had exceeded 250 Tcf per annum.14
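
  For readers unfamiliar with the archaic unit, a minimal conversion sketch (assuming only the standard definition of the foot, 0.3048 meters) puts the production figures quoted above into cubic meters.

```python
# Conversion of the quoted production figures from trillion cubic feet (Tcf)
# to cubic meters, using the standard definition 1 foot = 0.3048 m.
FOOT_IN_METERS = 0.3048
CUBIC_FOOT_IN_M3 = FOOT_IN_METERS ** 3      # about 0.0283 cubic meters
TCF_IN_M3 = 1e12 * CUBIC_FOOT_IN_M3         # 1 Tcf is roughly 2.83e10 cubic meters

for tcf in (5, 13.6, 250):                  # figures quoted in the text
    print(f"{tcf:>6} Tcf is about {tcf * TCF_IN_M3:.2e} cubic meters per annum")
```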

  This bullish approach to shale gas production penetrated Canada as well, as investor-owned companies raced to develop additional fossil fuel resources; “frenzy” is not too strong a word to describe the surge of activity that occurred. In the late twentieth century, Canada was considered an advanced nation with a high level of environmental sensitivity, but this changed around the year 2000 when Canada’s government began to push for development of huge tar sand deposits in the province of Alberta, as well as shale gas in various parts of the country. The tar sand deposits (which the government preferred to call oil sands, because liquid oil had a better popular image than sticky tar) had been mined intermittently since the 1960s, but the rising cost of conventional oil now made sustained exploitation economically feasible. The fact that 70 percent of the world’s known tar sand reserves were in Canada helps explain the government’s reversal of its position on climate change: in 2011, Canada withdrew from the Kyoto Protocol to the UNFCCC.15 Under the protocol, Canada had committed to cut its emissions by 6 percent, but its actual emissions instead increased more than 30 percent.16

  Meanwhile, following the lead of the United States, the government began aggressively to promote the extraction of shale gas, deposits of which occurred throughout Canada. Besides driving up direct emissions of both CO2 and CH4 to the atmosphere (since many shale gas fields also contained CO2, and virtually all wells leaked), the resulting massive increase in supply of natural gas led to a collapse in the market price, driving out nascent renewable energy industries everywhere except China, where government subsidies and protection for fledgling industries enabled the renewable sector to flourish.

  Cheap natural gas also further undermined the already ailing nuclear power industry, particularly in the United States. To make matters worse, the United States implemented laws forbidding the use of biodiesel fuels—first by the military, and then by the general public—undercutting that emerging market as well.17 Bills were passed at both the state and federal levels to restrict the development and use of other forms of renewable energy—particularly in the highly regulated electricity generation industry—and to inhibit the sale of electric cars, maintaining the lock that fossil fuel companies had on energy production and use.18

  Meanwhile, Arctic sea ice melted, and seaways opened that permitted further exploitation of oil and gas reserves in the north polar region. Again, scientists noted what was happening. By the mid-2010s, the Arctic summer sea ice had lost about 30 percent of its areal extent compared to 1979, when high-precision satellite measurements were first made; the average loss was rather precisely measured at 13.7 percent per decade from 1979 to 2013.19 When the areal extent of summer sea ice was compared to earlier periods using additional data from ships, buoys, and airplanes, the total summer loss was nearly 50 percent. The year 2007 was particularly worrisome, as the famous Northwest Passage—long sought by Arctic explorers—opened, and the polar seas became fully navigable for the first time in recorded history. Scientists understood that it was only a matter of time before the Arctic summer would be ice-free, and that this was a matter of grave concern. But in business and economic circles it was viewed as creating opportunities for further oil and gas exploitation.20 One might have thought that governments would have stepped in to prevent this ominous development—which could only exacerbate climate change—but governments proved complicit. One example: in 2012 the Russian government signed an agreement with American oil giant ExxonMobil, allowing the latter to explore for oil in the Russian Arctic in exchange for Russian access to American shale oil drilling technology.21

  How did these wealthy nations—rich in the resources that would have enabled an orderly transition to a zero-net-carbon infrastructure—justify the deadly expansion of fossil fuel production? Certainly, they fostered the growing denial that obscured the link between climate change and fossil fuel production and consumption. They also entertained a second delusion: that natural gas from shale could offer a “bridge to renewables.” Believing that conventional oil and gas resources were running out (which they were, but at a rate insufficient to avoid disruptive climate change), and stressing that natural gas produced only half as much CO2 as coal, political and economic leaders—and even many climate scientists and “environmentalists”—persuaded themselves and their constituents that promoting shale gas was an environmentally and ethically sound approach.

  This line of reasoning, however, neglected several factors. First, fugitive emissions—CO2 and CH4 that escaped from wellheads into the atmosphere—greatly accelerated warming. (As with so many climate-related phenomena, scientists had foreseen this, but their predictions were buried in specialized journals.) Second, most analyses of the greenhouse gas benefits of gas were based on the assumption that it would replace coal in electricity generation, where the benefits, if variable, were nevertheless fairly clear. However, as gas became cheap, it came to be used increasingly in transportation and home heating, where losses in the distribution system negated many of the gains achieved in electricity generation. Third, the calculated benefits were based on the assumption that gas would replace coal, which it did in some regions (particularly in the United States and some parts of Europe), but elsewhere (for example, Canada) it mostly replaced nuclear and hydropower. In many regions cheap gas simply became an additional energy source, satisfying expanding demand without replacing other forms of fossil fuel energy production. As new gas-generating power plants were built, infrastructures based on fossil fuels were further locked in, and total global emissions continued to rise. The argument for the climatic benefits of natural gas presupposed that net CO2 emissions would fall, which would have required strict restrictions on coal and petroleum use in the short run and the phase-out of gas as well in the long run.22 Fourth, the analyses mostly omitted the cooling effects of aerosols from coal, which, although bad for human health, had played a significant role in keeping warming below the level it would otherwise have already reached. Fifth, and perhaps most important, the sustained low prices of fossil fuels, supported by continued subsidies and a lack of external cost accounting, undercut efficiency efforts and weakened emerging markets for solar, wind, and biofuels (including crucial liquid biofuels for aviation).23 Thus, the bridge to a zero-carbon future collapsed before the world had crossed it.
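
  The first of these factors can be illustrated with a back-of-envelope sketch. The emission factors and the 20-year warming potential of methane used below are assumed, typical values rather than figures from the text; under those assumptions, even a few percent of fugitive methane erases the nominal “half the CO2 of coal” advantage.

```python
# Back-of-envelope sketch with assumed, typical values (not from the text):
# how fugitive CH4 erodes natural gas's CO2 advantage over coal.
GWP20_CH4 = 84            # assumed 20-year global warming potential of CH4
CO2_GAS_PER_GJ = 56.0     # kg CO2 per GJ of gas burned (typical emission factor)
CO2_COAL_PER_GJ = 95.0    # kg CO2 per GJ of coal burned (typical emission factor)
CH4_KG_PER_GJ = 18.0      # roughly the mass of CH4 holding 1 GJ of energy

def gas_co2e_per_gj(leak_fraction):
    """Approximate CO2-equivalent per GJ of delivered gas over a 20-year horizon."""
    leaked_ch4 = leak_fraction * CH4_KG_PER_GJ   # small-leak approximation
    return CO2_GAS_PER_GJ + leaked_ch4 * GWP20_CH4

for leak in (0.00, 0.01, 0.03, 0.05):
    ratio = gas_co2e_per_gj(leak) / CO2_COAL_PER_GJ
    print(f"leakage {leak:.0%}: gas ~{ratio:.2f} x coal (CO2e, 20-year horizon)")
# With these assumptions the crossover sits near 2-3 percent leakage:
# beyond that, shale gas is no better than coal on a 20-year horizon.
```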

 
