Trust Us, We're Experts


by Sheldon Rampton


  Berke was identified alongside the review as “Jerry H. Berke, MD, MPH.” The NEJM failed to disclose, however, that Berke was director of toxicology for W. R. Grace, one of the world’s largest chemical manufacturers and a notorious polluter. A leading manufacturer of asbestos-containing building products, W. R. Grace has been a defendant in several thousand asbestos-related cancer lawsuits and has paid millions of dollars in related court judgments. It is probably best known as the company that polluted the drinking water of the town of Woburn, Massachusetts, and later paid an $8 million out-of-court settlement to the families of seven Woburn children and one adult who contracted leukemia after drinking contaminated water. During the Woburn investigation, Grace was caught in two felony lies to the U.S. Environmental Protection Agency.

  When questioned about its failure to identify Berke’s affiliation, the New England Journal of Medicine offered contradictory and implausible explanations. First it attributed the omission to an “administrative oversight” and claimed that it didn’t know about Berke’s affiliation with W. R. Grace. Later, a journal representative admitted that they did know but said they thought Grace was a “hospital or research institute.” If so, this ignorance would itself be remarkable, since the NEJM is located in Boston, and Grace had been the subject of more than a hundred news stories in the Boston Globe between 1994 and 1997. Moreover, NEJM editor Marcia Angell lives in Cambridge, Massachusetts, the world headquarters of W. R. Grace. Her home is only eight miles away from Woburn, whose leukemia lawsuit is also the central subject of A Civil Action, Jonathan Harr’s best-selling book that was made into a movie starring John Travolta. During the months immediately preceding the publication of Berke’s review, in fact, the film crew for A Civil Action was working in the Boston area and was itself the subject of numerous prominent news stories.16

  In response to criticism of these lapses, NEJM editor Jerome P. Kassirer insisted that his journal’s conflict-of-interest policy was “the tightest in the business.”17 The sad fact is that this boast is probably correct. In 1996, Sheldon Krimsky of Tufts University did a study of journal disclosures that dug into the industry connections of the authors of 789 scientific papers published by 1,105 researchers in 14 leading life science and biomedical journals. In 34 percent of the papers, at least one of the chief authors had an identifiable financial interest connected to the research, and Krimsky observed that this 34 percent figure probably understated the true level of financial conflict of interest, since he was unable to check whether the researchers owned stock or had received consulting fees from the companies involved in commercial applications of their research. None of these financial interests were disclosed in the journals, where readers could have seen them.18 In 1999, a larger study by Krimsky examined 62,000 articles published in 210 different scientific journals and found that only one half of one percent of the articles included information about the authors’ research-related financial ties. Although all of the journals had a formal requirement for disclosure of conflicts of interest, 142 of the journals had not published a single disclosure during 1997, the year under study.19

  Corporate-sponsored scientific symposiums provide another means for manipulating the content of medical journals. In 1992, the New England Journal of Medicine itself published a survey of 625 such symposiums, which found that 42 percent of them were sponsored by a single pharmaceutical company. There was a correlation, moreover, between single-company sponsorship and practices that commercialize or corrupt the scientific review process, including symposiums with misleading titles designed to promote a specific brand-name product. “Industry-sponsored symposia are promotional in nature and . . . journals often abandon the peer-review process when they publish symposiums,” the survey concluded.20 Drummond Rennie, a deputy editor of the Journal of the American Medical Association, describes how the process works in plainer language:

  I’m the advertising guy for the drug. I tell a journal I will give them $100,000 to have a special issue on that drug. Plus I’ll give the journal so much per reprint, and I’ll order a lot of reprints. I’ll select the editor and all the authors. I phone everyone who has written good things about that drug. I say, “I’ll fly you and your wife first class to New Orleans for a symposium. I’ll put your paper in the special issue of the journal, and you’ll have an extra publication for your c.v.” Then I’ll put a reprint of that symposium on some doctor’s desk and say, “Look at this marvelous drug.”21

  Does Money Matter?

  As these examples illustrate, many of the factors that bias scientific results are considerably more subtle than outright bribery or fraud. “There is distortion that causes publication bias in little ways, and scientists just don’t understand that they have been influenced,” Rennie says. “There’s influence everywhere, on people who would steadfastly deny it.”22 Scientists can be naive about politics and other external factors shaping their work and become indignant at the suggestion that their results are shaped by their funding. But science does not occur in a vacuum. In studying animal populations, biologists use the term “selection pressure” to describe the influence that environmental conditions exert upon the survival of certain genetic traits over others. Within the population of scientists, a similar type of selection pressure occurs as industry and government support, combined with the vicissitudes of political fashion, determine which careers flourish and which languish. As David Ozonoff of the Boston University School of Medicine has observed, “One can think of an idea almost as one thinks of a living organism. It has to be continually nourished with the resources that permit it to grow and reproduce. In a hostile environment that denies it the material necessities, scientific ideas tend to languish and die.”23

  Like other human institutions, the development of the scientific enterprise has seen both advances and reversals and is exquisitely sensitive to the larger social environment in which it exists. Germany, for example, was a world leader in science in the nineteenth and early twentieth centuries but went into scientific decline with the rise of fascism. Under the Nazis, scientists were seen as too “cosmopolitan,” and the idea of a culturally rooted “German science” transformed applied scientists into “folk practitioners,” elevated astrology at the expense of astronomy, and impoverished the country’s previously renowned institutions for the study of theoretical physics. Something similar happened in Soviet Russia when previously accepted theories in astronomy, chemistry, medicine, psychology, and anthropology were criticized on the grounds that they conflicted with the principles of Marxist materialism. The most notorious example in the Soviet case was the rise of Lysenkoism, which rejected the theories of Mendelian genetics with catastrophic results for Russian agriculture. In the United States, political and social movements have also given rise to a number of dubious scientific trends, including the “creation science” of Christian fundamentalists as well as such movements as parapsychology and Scientology.

  The most dramatic trend influencing the direction of science during the past century, however, has been its increasing dependence on funding from government and industry. Unlike the “gentleman scientists” of the nineteenth century who enjoyed financial independence that allowed them to explore their personal scientific interests with considerable freedom, today’s mainstream scientists are engaged in expensive research that requires the support of wealthy funders. A number of factors have contributed to this reality, from the rise of big government to the militarization of scientific research to the emergence of transnational corporations as important patrons of research.

  The Second World War marked a watershed in the development of these trends, with the demands of wartime production, military intelligence, and political mobilization serving as precursors to the “military-industrial complex” that emerged during the Cold War in the 1950s. World War II also inaugurated the era of what has become known as “big science.” Previously, scientists for the most part had been people who worked alone or with a handful of assistants, pursuing the inquiries that fit their interests and curiosity. It was a less rigorous approach to science than we expect today, but it also allowed more creativity and independence. Physicist Percy Bridgman, whose major work was done before the advent of “big science,” recalled that in those days he “felt free to pursue other lines of interest, whether experiment, or theory, or fundamental criticism. . . . Another great advantage of working on a small scale is that one gives no hostage to one’s own past. If I wake up in the morning with a new idea, the utilization of which involves scrapping elaborate preparations already made, I am free to scrap what I have done and start off on the new and better line. This would not be possible without crippling loss of morale if one were working on a large scale with a complex organization.” When World War II made large-scale, applied research a priority, Bridgman said, “the older men, who had previously worked on their own problems in their own laboratories, put up with this as a patriotic necessity, to be tolerated only while they must, and to be escaped from as soon as decent. But the younger men . . . had never experienced independent work and did not know what it was like.”24

  The Manhattan Project took “big science” to unprecedented new levels. In the process it also radically transformed the assumptions and social practices of science itself, as military considerations forced scientists to work under conditions of strict censorship. “The Manhattan Project was secret,” observe Stephen Hilgartner, Richard Bell, and Rory O’Connor in Nukespeak, their study of atomic-age thinking and rhetoric. “Its cities were built in secret, its research was done in secret, its scientists traveled under assumed names, its funds were concealed from Congress, and its existence was systematically kept out of the media. . . . Compartmentalization, or the restriction of knowledge about various aspects of the project to the ‘compartments’ in which the knowledge was being developed, was central to this strategy. . . . Press censorship complemented compartmentalization.”25 President Truman described the development of the atom bomb as “the greatest achievement of organized science in history.” It was also the greatest regimentation of science in history, and it spawned the need for further regimentation and more secrecy.

  Prior to the development of the atomic bomb, the scientific community believed with few exceptions that its work was beneficial to humanity. “Earlier uses of science for the development of new and deadlier weapons had, upon occasion, brought forth critical comments by individual scientists; here and there, uncommonly reflective scientists had raised some doubts about the generalized philosophy of progress shared by most of the scientific community, but it was only in the aftermath of Hiroshima that large numbers of scientists were moved to reflect in sustained ways on the moral issues raised by their own activities,” notes historian Lewis Coser.26

  Even before the bombing of Japan, a group of atomic scientists had tried unsuccessfully to dissuade the U.S. government from using the new weapon. In its aftermath, they began to publish the Bulletin of the Atomic Scientists, which campaigned for civilian control of atomic energy. Some of its members called for scientists to abstain from military work altogether. In the 1950s, however, the Red Scare and McCarthyism were brought to bear against scientists who raised these sorts of questions. “Furthermore, as more and more scientific research began to be sponsored by the government, many scientists considered it ‘dangerous’ to take stands on public issues,” Coser notes. By 1961, some 80 percent of all U.S. funds for research and development were being provided directly or indirectly by the military or by two U.S. agencies with strong military connections, the Atomic Energy Commission and the National Aeronautics and Space Administration.27

  The terrifying potential of the new weaponry became a pretext for permanently institutionalizing the policy of secrecy and “need-to-know” classification of scientific information that had begun with the Manhattan Project. In 1947, the Atomic Energy Commission expanded its policy of secrecy beyond matters of direct military significance by imposing secrecy in regard to public relations or “embarrassment” issues as well as issues of legal liability. When a deputy medical director at the Manhattan Project tried to declassify reports describing World War II experiments that involved injecting plutonium into human beings, AEC officials turned down the request, noting that “the coldly scientific manner in which the results are tabulated and discussed would have a very poor effect on the public.”28

  Alvin Weinberg, director of the Oak Ridge National Laboratory from 1955 to 1973, bluntly laid out the assumptions of atomic-age science. In order to avert catastrophe, he argued, society needed “a military priesthood which guards against inadvertent use of nuclear weapons, which maintains what a priori seems to be a precarious balance between readiness to go to war and vigilance against human errors that would precipitate war.”29 He did not mean the word “priesthood” lightly or loosely. “No government has lasted continuously for 1,000 years: only the Catholic Church has survived more or less continuously for 2,000 years or so,” he said. “Our commitment to nuclear energy is assumed to last in perpetuity—can we think of a national entity that possesses the resiliency to remain alive for even a single half-life of plutonium-239? A permanent cadre of experts that will retain its continuity over immensely long times hardly seems feasible if the cadre is a national body. . . . The Catholic Church is the best example of what I have in mind: a central authority that proclaims and to a degree enforces doctrine, maintains its own long-term social stability, and has connections to every country’s own Catholic Church.”30

  The idea of a “central authority” that “proclaims and enforces doctrine” runs contrary, of course, to the spirit of intellectual freedom and scientific inquiry that led Galileo to defy the Catholic Church in his defense of Copernican astronomy. Weinberg’s comments show how much the practice and philosophy of science had changed under the pressures of government bureaucracy and military secrecy. Instead of a process for asking questions, it had become a dogma, a set of answers imposed by what was becoming a de facto state religion.

  Nuts About Nukes

  Just as Edward Bernays had used the theories of Sigmund Freud to develop a theory of public relations based on the belief that the public was irrational and pliable, the Atomic Energy Commission also turned to mental health experts in an effort to consign the public to the psychiatric couch. In 1948, AEC commissioner Sumner T. Pike appealed to the American Psychiatric Association to “cool off anyone who seems hysterical about atomic energy.”31 In 1957, the World Health Organization convened a Study Group on Mental Health Aspects of the Peaceful Uses of Atomic Energy, in the hope that “the behavioural sciences can make a valuable and concrete contribution to the adaptation of mankind to the advent of atomic power” by using expert knowledge of “personality dynamics” to build “positive morale.”32 The study group, composed of psychiatrists, professors, and representatives of the AEC and the European nuclear industry, began from the premise that the public’s “irrational fears, irrational hopes, or irrational tendencies” were an “abnormal emotional response to atomic energy” which was “quite unjustified. . . . Even if all the objective evidence were interpreted in the most pessimistic way possible, the weight of evidence would not justify anxiety in the present, and only vaguely and remotely in the future. Yet anxiety exists and persists to a quite extraordinary degree. This can only be accounted for by looking into the psychological nature of man himself.”33

  What was it about our human nature that made us so irrational about nuclear power? The study group concluded that its very power made adults “regress to more infantile forms of behavior,” so that they acted like “the very young child first experiencing the world.” The split atom, they said, somehow evoked primal fears related to such “everyday childhood situations . . . as feeding and excretion.” Thus, “of all the fears rising from radiation, whether it be from atomic bomb fall-out or from nuclear plant mishap, it is the danger to food which is generally the most disquieting.” The same principle also applied to nuclear waste: “As with feeding, so with excretion. Public concern with atomic waste disposal is quite out of proportion to its importance, from which there must be a strong inference that some of the fear of ‘fall-out’ derives from a symbolic association between atomic waste and body waste.”34

  “This explanation is the most ludicrous kind of dime-store Freudianism; it trivializes people’s concern about fallout and nuclear war,” observe Hilgartner et al. “But the study group was deadly serious about the richness of insight which this crude, narrow-minded analysis provided.”35 Indeed, after an accidental radiation release at the Windscale nuclear reactor in England, the government was forced to confiscate and dump milk contaminated with radioiodine. A psychiatrist on the study group explained the negative newspaper headlines that accompanied the dumping by commenting, “Obviously all the editors were breast fed.” It was, to him, a perfect example of “regression.”36

  These analyses share a retreat from the empiricist notion that experts should begin first with evidence and reason from it to their conclusions. For the experts in charge of nuclear planning, the political goals came first, the evidence second. Anyone who thought otherwise could simply be diagnosed as neurotic.

  From Military Secrets to Trade Secrets

  “The expansion of university research in the 1950s was largely the result of support from the military,” wrote Dorothy Nelkin in her 1984 book Science as Intellectual Property. “In the context of the times, most university scientists supported collaboration with military objectives, a collaboration they deemed crucial to the development of the nation’s scientific abilities. However, even during this period, university-military relations were a source of nagging concern. Doubts turned to disenchantment during the Vietnam War.”37

 
