Trust Us, We're Experts

by Sheldon Rampton


  Rolling the Dice

  For most people, “risk” and “hazard” are virtual synonyms, although conventional risk analysts assign them somewhat different meanings. A sharp knife blade, they will tell you, is an example of a hazard, while risk is the probability that the knife will actually hurt someone. Sandman’s formula, however, is concerned with a different kind of risk—namely, the probability that a given hazard will hurt a company’s bottom line. His formula recognizes that beyond the direct liabilities associated with a hazard, a company’s reputation and profitability are affected by the way the public reacts to it.

  Businesses are accustomed to thinking of risk as an economic reality. They take a serious approach to dealing with it and have evolved rigorous and elaborate systems for managing it, with their own specialized vocabulary: country risk, currency exchange risk, inflation and price risk, credit risk, insurance, cost of residual uncertainty, risk pooling, probability, variation, standard deviation, diversification. “Every financial firm of any substance has a formal risk management department,” says Daniel Geer, an e-commerce security expert. “The financial world in its entirety is about packaging risk so that it can be bought and sold, i.e., so that risk can be securitized and finely enough graded to be managed at a profit. Everything from the lowly car loan to the most exotic derivative security is a risk-reward trade-off. Don’t for a minute underestimate the amount of money to be made on Wall Street, London and/or Tokyo when you can invent a new way to package risk. . . . You don’t have to understand forward swaptions, collateralized mortgage obligations, yield burning, or anything else to understand that risk management is where the money is. In a capitalist world, if something is where the money is, that something rules. Risk is that something.”8

  Businesspeople gamble with money, and a bad gamble simply means that someone loses some cash. “Risk analysis” of chemicals and other potential environmental and health risks is derived from “cost-benefit analysis,” which in turn derives from simple profit-and-loss accounting used by private companies. Arbitrary and indefensible assumptions enter the equation, however, when this methodology is used to gamble on things as important as human lives or the natural environment in which people live. What is the dollar value, after all, of a human life? What is the value of the air we breathe, the fertility of our soil, or our continued health and ability to have children? A price can be put on the cost of hospital care for cancer patients, but what price can we put on the suffering that the patients and their families endure? These questions have been asked by government regulators and in product-liability lawsuits, with widely varying answers.
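
  To see why those assumptions matter, consider a deliberately crude sketch of the profit-and-loss logic described above, written here in Python with invented figures. The "value of a statistical life" used below is an arbitrary placeholder for illustration, not a number from the text, and shifting it flips the supposedly rational answer.

      # Hypothetical cost-benefit sketch: every figure below is an invented assumption.
      def net_benefit(control_cost, deaths_prevented, value_of_statistical_life):
          # "Benefit" of the control is lives saved, priced at an assumed dollar value.
          return deaths_prevented * value_of_statistical_life - control_cost

      control_cost = 50_000_000      # assumed cost of installing pollution controls ($)
      deaths_prevented = 10          # assumed number of deaths avoided

      # The same project is "worth it" or "not worth it" depending solely on the
      # dollar value assigned to a human life.
      for vsl in (2_000_000, 10_000_000):
          print(vsl, net_benefit(control_cost, deaths_prevented, vsl))
      # With a $2 million value the net benefit is -$30 million (reject the control);
      # with a $10 million value it is +$50 million (require the control).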

  A growing number of hard decisions facing modern societies involve the question: “How safe is safe enough?” Nuclear waste, recombinant DNA, food additives, and chemical plant explosions are just a few of the effects of technological progress that raise this question. The answers are difficult, because they involve multiple uncertainties: uncertainty about the magnitude of the risk at hand, contradictory data and theories, business trade secrets, conflicting social values, disagreements between technical experts and the public at large. What makes these problems even more intractable is that politics and sophistry are frequently used to shift the blame away from those who cause the harm to those who suffer the consequences. “Risk analysis is a subtle discipline,” observes Ian Stewart, a mathematics professor at Warwick University in England. “It is an elaborate and rather naive procedure that can be abused in several ways. One abuse is to exaggerate benefits and tone down risks. A particularly nasty kind occurs when one group takes the risk but a different group reaps the benefit.”9 Risk management is not merely a technical discipline. Psychology, economics, politics, and the power of vested interests all lurk beneath the seemingly objective language of “balancing risks against benefits.”

  The question of which risks are acceptable depends ultimately on where the person passing judgment stands in relation to those risks. Under our current regulatory system, the risk of chemical exposures is usually passed on to the people who suffer those exposures. If 10 or 20 years later they come down with cancer or their children suffer health problems, identifying the cause—let alone proving it in a court of law—is virtually impossible. Companies find this arrangement profitable, and it certainly encourages technological innovation, but the cost to others can be considerable, as the tobacco industry and the makers of leaded gasoline have tragically proven.

  “Risk assessment is a decision-making technique that first came into use during the presidency of Jimmy Carter, who was trained as a nuclear engineer,” says Peter Montague, the editor of Rachel’s Environment and Health Weekly, a newsletter that offers weekly investigative reporting and opinion on issues of ecology and public health. “At its best, risk assessment is an honest attempt to find a rational basis for decisions, by analyzing the available scientific evidence. In theory it is still an attractive ideal,” Montague says. “However, 20 years of actual practice have badly tarnished the ideal of risk assessment and have sullied the reputation of many a risk assessor.” It arose, he says, in response to the growing realization that “many modern technologies had far surpassed human understanding, giving rise to by-products that were dangerous, long-lived, and completely unanticipated.” The same technologies that have created unparalleled wealth have also created unparalleled problems with municipal and industrial wastes, agricultural chemicals, auto exhausts, smokestack emissions, and greenhouse gases.

  As government regulators and pollution-producing industries came under pressure in the 1970s to address these problems, they began devising quantitative measurements to assess impacts, to weigh risks against benefits, and to establish numerical thresholds that would distinguish between dangerous and safe exposure levels. The effort to develop these quantitative standards, however, is fraught with difficulties. The natural environment is quite different from a laboratory, and laboratory studies cannot hope to duplicate the myriad conditions and environments into which chemical compounds are being released. Financial realities also limit the quality of the information that can be generated through laboratory research. To determine whether a chemical causes cancer, for example, researchers typically take a relatively small number of mice and pump them with large quantities of the chemical in question, because the alternative approach—using tens of thousands of mice and subjecting them to lower exposures—would cost a fortune. The effect of low-dose exposures is estimated by statistical extrapolation from the high-dose exposures. When one set of researchers set out to assess the accuracy of high-dose to low-dose extrapolation models, however, they found that the predicted low-dose results vary by a factor of a million. This, they note, “is like not knowing whether you have enough money to buy a cup of coffee or pay off the national debt.”10

  In 1995, three well-known and respected risk assessors—Anna Fan, Robert Howd, and Brian Davis—published a detailed summary of the status of risk assessment, in which they pointed out that there is no scientific agreement on which tests to use to determine whether someone has suffered immune system, nervous system, or genetic damage. In other words, the best available science lacks the tools with which to provide definite, quantitative answers to the questions that are at the heart of risk assessment. “There are other problems with risk assessments,” Montague observes. “Science has no way to analyze the effects of multiple exposures, and almost all modern humans are routinely subjected to multiple exposures: pesticides, automobile exhaust, dioxins in meat, fish and dairy products; prescription drugs; tobacco smoke; food additives; ultraviolet sunlight passing through the earth’s damaged ozone shield; and so on. Determining the cumulative effect of these insults is a scientific impossibility, so most risk assessors simply exclude these inconvenient realities. But the resulting risk assessment is bogus. . . . Risk assessment, it is now clear, promises what it cannot deliver, and so is misleading at best and fraudulent at worst. It pretends to provide a rational assessment of ‘risk’ or ‘safety,’ but it can do no such thing because the required data are simply not available, nor are standardized methods of interpretation.”11

  Publicly, industry and government remain committed to risk assessment, but defectors are increasingly willing to admit that it is an art rather than a science. Different risk assessors, using the same evidence, can easily come up with radically opposed conclusions as to the costs and benefits of a course of action. Where uncertainty reigns, spin doctors rush in to fill the information vacuum. Notwithstanding its limitations, the methodology of risk assessment offers important advantages to the corporate spin doctor. “These methods are especially valuable politically in that their use tends to obscure the basic policy questions of government regulation of business in a technocratic haze of numbers (numbers readily manipulated), focusing attention upon the statistics rather than the issues,” observes science historian David Noble. “The methods offer other advantages as well, not least of which is the seeming monopoly on rationality itself. All qualitative or subjective decision-making is relegated to the realm of irrationality and dismissed without a hearing. By invalidating experience and intuition, they thereby disqualify all but the technically initiated from taking part in the debate, which becomes enshrouded in an impenetrable cloak of mystery. People are encouraged to suspend their own judgment and abandon responsibility to the experts (who have already surrendered their responsibility to their paymasters).”12

  Risk analysis comes in a variety of flavors. One approach seeks to quantify everything in the analysis, assigning dollar values to such unquantifiable, qualitative things as human lives and environmental beauty, along with genuinely quantifiable factors such as corporate profits and wealth created. The analyst then totals up the sum of various alternatives, and whichever one costs the least is deemed the most “acceptable” risk. Another approach relies heavily on comparisons between different types of risks. If the risk to health posed by the use of a technology or chemical is questioned, the analyst calculates the likelihood of someone dying from exposure to that chemical and shows that it is less likely than the risk of dying from other events such as a car crash or drowning in a flood. Since people choose to drive cars and live downstream from dams, those risks must be acceptable to the public, the analyst concludes, and therefore this chemical must be acceptable too.
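
  As a purely illustrative sketch of the first approach, the following Python fragment tallies expected costs for three hypothetical alternatives. Every probability and dollar figure is an assumption invented for the example; the point is only that the "acceptable" option is whichever row the assumed prices make cheapest.

      # Illustrative expected-cost tally for three hypothetical alternatives.
      # Each entry: (assumed probability of harm, assumed cost if harm occurs ($),
      #              fixed cost of choosing the alternative ($)).
      alternatives = {
          "keep using chemical X": (1e-4, 5_000_000_000, 0),
          "switch to substitute Y": (1e-6, 5_000_000_000, 20_000_000),
          "ban the process": (0.0, 0, 80_000_000),
      }

      for name, (p_harm, harm_cost, fixed_cost) in alternatives.items():
          expected_total = p_harm * harm_cost + fixed_cost
          print(f"{name}: expected cost ${expected_total:,.0f}")

      # Whichever row comes out cheapest is declared the "acceptable" risk, a
      # ranking that depends entirely on the dollar values assigned to harms the
      # text argues cannot honestly be priced.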

  “If a person is horrified by the consequences of a carcinogenic pollutant, he is reminded that every day he takes greater risks driving to work, so what’s all the fuss: Be consistent,” Noble observes. “The appealing thing about such methods for the analyst aside from the fact that they reinforce his prerogatives is that they so often yield counter-intuitive results; the answers come out in ways one would not have anticipated (unless, of course, one were the analyst). The happy consequence of this, for the promoters of the techniques, is that the naïveté of the non-specialist is forever being revealed; the public is thus further cautioned about relying upon their experience and intuition and encouraged instead to rely upon the wisdom of the expert who alone can put things in perspective.”13

  H. W. Lewis, a professor of physics at the University of California-Santa Barbara who has chaired numerous government risk-assessment committees on defense, nuclear power, and other matters, exemplifies the attitudes of the modern risk assessor. He has written a book, Technological Risk, which promises to reveal the real dangers, “if any, of toxic chemicals, the greenhouse effect, microwave radiation, nuclear power, air travel, automobile travel, carcinogens of all kinds, and other threats to our peace of mind.” It offers mortality tables and a lesson in the statistical techniques used to measure risk and is in many ways a useful and thoughtful guide. Lewis believes that the problem of overpopulation is more serious and pressing than technological risk, a judgment with which many reasonable people would certainly agree. He points out that some of the largest risks confronting individuals today stem from activities such as smoking and automobile use, facts that are indisputable. He notes furthermore that it is impossible to eliminate all risk from life, which is also indisputable. Why, then, he asks, do people worry about little things like nuclear waste and pesticides, which he regards as trivial risks? The answer, he concludes, is that the public is irrational and poorly educated. “The fraction of our population that believes in UFOs and reincarnation is mind-boggling, less than half of us know that the earth goes around the sun once a year, and it is an unending struggle to keep the teaching of evolution legal in the schools,” he writes. “Our very literacy as a nation is in danger.”14

  The ignorance of the masses is such a serious problem, Lewis believes, that democracy itself is a dangerous proposition. “We are a participatory democracy and it is everyone’s country, not just the educated,” he writes. “The common good is ill served by the democratic process. The problem is exacerbated by the emergence of groups of persuasive people who specialize in technology-bashing and exploitation of fear, make their livings thereby, and have been embraced by large segments of the media as experts.”15

  Paradoxically, however, Lewis also believes that “the core of the anti-technology movement today” is composed not of society’s least-educated members, but of the wealthiest and therefore the best-educated. “It seems to be an upper-middle-class phenomenon,” he writes. “We in the affluent societies are preoccupied with safety, while risk is recognized as a normal condition of existence by the less affluent. . . . Such people are genuinely concerned that technology may be destroying the environment, and have presumably never seen the environment in other, less technically advanced, countries.”16

  Following this logic to its conclusion would seem to suggest that we should be taking our cues on matters pertaining to risk from impoverished sweatshop laborers in Central America, but since many of them are indeed genuinely illiterate and in any case rarely receive invitations to write books or serve on risk-assessment committees, the burden falls upon Lewis himself—a member of the educated upper middle class—to speak on their behalf.

  When Risk Turns to Crisis

  One problem with efforts to assess risk is that many factors—notoriously, the human factor—can never be quantified. Take, for example, the case of the 1984 poison leak in Bhopal, India, which is widely recognized as the world’s worst industrial accident. The Bhopal disaster killed more than 2,000 people and seriously injured an estimated 200,000, many of whom suffered permanent blindness and damage to their respiratory systems. The disaster occurred when a pesticide plant owned by Union Carbide released methyl isocyanate gas, creating what Time magazine called “a vast, dense fog of death” that wiped out whole neighborhoods. “Even more horrifying than the number of dead,” wrote Fortune magazine, “was the appalling nature of their dying—crowds of men, women and children scurrying madly in the dark, twitching and writhing like the insects for whom the poison was intended.”17

  Peter Sandman, who helped advise Union Carbide in the aftermath of the disaster, believes that the accident was triggered by deliberate employee sabotage. “Union Carbide has persuasive evidence,” he claims. “The guilty party probably didn’t intend to kill and maim thousands of people; he just wanted to get even for some real or imagined mistreatment by ruining a batch of methyl isocyanate.”18 In making this claim, he is repeating a theory that Union Carbide has repeatedly floated over the years. However, the company has never provided enough specifics to enable independent verification of whether this was indeed what happened.19 Even if this version of events is true, of course, it in no way mitigates the company’s responsibility for the disaster. A whole cascade of failed safety measures went into the Bhopal tragedy. At the time of its occurrence, a refrigeration unit designed to prevent just such a catastrophe was shut down and had been inoperative for five months. Other fail-safe devices were also out of commission. The plant was understaffed, and employees were inadequately trained due to budget cutbacks. The plant lacked a computerized monitoring system for detecting toxic releases. Instead, workers were in the habit of recognizing leaks when their noses would burn and their eyes would water. No alarm system existed for warning the surrounding community, and no effort had been made to develop evacuation procedures and other emergency plans that could have saved many lives. As the New York Times concluded in its report, Bhopal was “the result of operating errors, design flaws, maintenance failures, and training deficiencies,” all of which reflected corporate management decisions—human factors, in other words, not technical ones.20

  “There are two kinds of uncertainty,” Montague notes. “First, there is risk, which is an event with a known probability (such as the risk of losing your life in a car this year—the accident and death rates are known). Then there is true uncertainty, which is an event with unknown probability.” The human factor, and many of the risks associated with environmental problems, involve true uncertainty. Since these risks cannot be quantified, they tend to be treated as ghosts within the machine of risk assessment—minimized, or subjected to arbitrary estimates based on guess-work rather than hard knowledge.
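
  A minimal sketch of the distinction, using placeholder numbers rather than real statistics: the first quantity can be computed because a probability is available from observed frequencies; the second cannot, and any number plugged in for it is a guess.

      # Known probability ("risk"): the expected number of deaths can be computed
      # from observed frequencies.  The rate below is a placeholder, not a statistic.
      p_fatal_crash = 1.5e-4             # assumed annual probability per driver
      drivers = 1_000_000
      print(p_fatal_crash * drivers)     # 150.0: a calculable, insurable quantity

      # Unknown probability ("true uncertainty"): no frequency data exist for, say,
      # a novel chemical interaction or sabotage at a particular plant.
      p_novel_harm = None                # there is no defensible number to put here
      # Any expected-value calculation built on p_novel_harm would be guesswork
      # dressed up as quantification.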

  In the wake of most major accidents it is usually easy to find embarrassing examples of experts who predicted beforehand that such an event could never, ever occur. “I cannot imagine any condition which would cause a ship to founder. . . . Modern shipbuilding has gone beyond that,” said Edward J. Smith, captain of the Titanic.21 A year before the nuclear meltdown at Chernobyl, a Soviet deputy minister of the power industry announced that Soviet engineers were confident that you’d have to wait 100,000 years before the Chernobyl reactor had a serious accident.22 Shortly before the explosion of the Challenger space shuttle, Bryan O’Connor, NASA’s Washington-based director of the shuttle program, recalls that he “asked someone what the probability risk assessment was for the loss of a shuttle. I was told it was one in ten thousand.”23
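
  For context, a back-of-the-envelope calculation (the flight count below is an arbitrary round number chosen for illustration, not a NASA figure) shows what a genuine one-in-ten-thousand loss rate would have implied:

      # If the quoted 1-in-10,000 figure had been accurate, losing an orbiter over a
      # hypothetical 100-flight program would have been quite unlikely.
      p_loss_per_flight = 1 / 10_000
      flights = 100                              # illustrative program length
      p_at_least_one_loss = 1 - (1 - p_loss_per_flight) ** flights
      print(round(p_at_least_one_loss, 4))       # about 0.01, roughly a 1 percent chance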

 
