Trust Us, We're Experts


by Sheldon Rampton


  —Robert Proctor, Cancer Wars1

  According to historian Stephen Mason, science has its historical roots in two primary sources: “Firstly, the technical tradition, in which practical experiences and skills were handed on and developed from one generation to another; and secondly, the spiritual tradition, in which human aspirations and ideas were passed on and augmented.” The technical tradition is the basis for the claim that science provides useful ways of manipulating the material world. The spiritual tradition is the basis for the claim that science can explain the world in “objective,” unbiased terms. Sometimes, however, these two traditions are at odds.

  Modern science considers itself “scientific” because it adheres to a certain methodology. It uses quantitative methods and measurable phenomena; its data is empirically derived and verifiable by others through experiments that can be reproduced; and, finally, its practitioners are impartial. Whereas ideological thinkers promulgate dogmas and defend them in the face of evidence to the contrary, scientists work with “hypotheses” that they modify whenever the evidence dictates.

  The standard description of the scientific method makes it sound like an almost machinelike process for sifting and separating truth from error. The method is typically described as involving the following steps (rendered as a literal loop in the sketch after the list):

  1. Observe and describe some phenomenon.

  2. Form a hypothesis to explain the phenomenon and its relationship to other known facts, usually through some kind of mathematical formula.

  3. Use the hypothesis to make predictions.

  4. Test those predictions by experiments or further observations to see if they are correct.

  5. If not, reject or revise the hypothesis.
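
  Taken at face value, the five steps really do read like an algorithm, and they can be transcribed almost line for line into code. The Python sketch below is only an illustration of that tidiness: the "phenomenon" (noisy measurements of a quantity following y = 2.5x), the starting guess, and the acceptance threshold are all invented for the example and appear nowhere in this chapter.

```python
import random

random.seed(0)

# Step 1: Observe and describe some phenomenon -- here, noisy
# measurements of a quantity that happens to follow y = 2.5 * x.
def observe(x):
    return 2.5 * x + random.gauss(0, 0.1)

observations = [(x, observe(x)) for x in range(1, 11)]

# Step 2: Form a hypothesis relating the facts: y = k * x, with an
# initial guess for the constant k.
k = 1.0

for _ in range(100):
    # Step 3: Use the hypothesis to make predictions.
    predictions = [k * x for x, _y in observations]

    # Step 4: Test the predictions by comparing them with the observations.
    mean_error = sum(abs(p - y) for p, (_x, y) in zip(predictions, observations)) / len(observations)
    if mean_error < 0.2:
        print(f"Hypothesis accepted: y = {k:.2f} * x (mean error {mean_error:.3f})")
        break

    # Step 5: The predictions failed, so revise the hypothesis -- here by
    # re-estimating k with a least-squares fit to the observations.
    k = sum(x * y for x, y in observations) / sum(x * x for x, _y in observations)
else:
    print("No acceptable hypothesis survived testing.")
```

  Each numbered step maps onto one block of the loop. It is exactly this machinelike tidiness that the rest of the chapter calls into question.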

  “Recognizing that personal and cultural beliefs influence both our perceptions and our interpretations of natural phenomena, we aim through the use of standard procedures and criteria to minimize those influences when developing a theory,” explains University of Rochester physics professor Frank Wolfs. “The scientific method attempts to minimize the influence of bias or prejudice in the experimenter when testing a hypothesis or a theory.” One way to minimize the influence of bias is to have several independent experimenters test the hypothesis. If it survives the hurdle of multiple experiments, it may rise to the level of an accepted theory, but the scientific method requires that the hypothesis be ruled out or modified if its predictions are incompatible with experimental tests. In science, Wolfs says, “experiment is supreme.”2

  Experience shows, however, that this commonly accepted description of the scientific method is often a myth. Not only is it a myth, it is a fairly recent myth, first elaborated in the late 1800s by statistician Karl Pearson.3 Copernicus did not use the scientific method described above, nor did Sir Isaac Newton or Charles Darwin. The French philosopher and mathematician René Descartes is often credited with ushering in the age of scientific inquiry with his “Discourse on the Method of Rightly Conducting the Reason and Seeking the Truth in the Sciences,” but the method of Descartes bears little relation to the steps described above. The molecular structure of benzene was first hypothesized not in a laboratory but in a dream. Many theories do not originate through some laborious process of formulating and modifying a hypothesis, but through sudden moments of inspiration. The actual thought processes of scientists are richer, more complex, and less machinelike in their inevitability than the standard model suggests. Science is a human endeavor, and real-world scientists approach their work with a combination of imagination, creativity, speculation, prior knowledge, library research, perseverance, and, in some cases, blind luck—the same combination of intellectual resources, in short, that scientists and nonscientists alike use in trying to solve problems.

  The myth of a universal scientific method glosses over many far-from-pristine realities about the way scientists work in the real world. There is no mention, for example, of the time that a modern researcher spends writing grant proposals; coddling department heads, corporate donors, and government bureaucrats; or engaging in any of the other activities that are necessary to obtain research funding. Although the scientific method acknowledges the possibility of bias on the part of an individual scientist, it does not provide a way of countering the effects of systemwide bias. “In a field where there is active experimentation and open communication among members of the scientific community, the biases of individuals or groups may cancel out, because experimental tests are repeated by different scientists who may have different biases,” Wolfs states. But what if different scientists share a common bias? Rather than canceling it out, they may actually reinforce it.
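
  A toy simulation makes Wolfs's canceling-out argument, and its failure under a shared bias, concrete. Everything in the Python sketch below is invented for illustration (the "true effect," the bias sizes, the number of labs); the point is only the contrast between the two cases: independently drawn biases average away as experiments are repeated, while a bias common to every lab survives any amount of replication.

```python
import random

random.seed(1)

TRUE_EFFECT = 10.0   # the quantity every lab is trying to measure
N_LABS = 50

def run_lab(bias):
    # One lab's published result: true effect + systematic bias + random noise.
    return TRUE_EFFECT + bias + random.gauss(0, 1.0)

# Case 1: each lab has its own bias, drawn independently of the others.
independent = [run_lab(random.gauss(0, 2.0)) for _ in range(N_LABS)]

# Case 2: every lab shares the same bias (say, a funder-friendly tilt of +2).
shared = [run_lab(2.0) for _ in range(N_LABS)]

print(f"True effect:                     {TRUE_EFFECT:.2f}")
print(f"Pooled mean, independent biases: {sum(independent) / N_LABS:.2f}")  # near 10
print(f"Pooled mean, shared bias:        {sum(shared) / N_LABS:.2f}")       # near 12
```

  Adding more labs shrinks the random scatter in both cases, but in the second case the extra replication only makes the wrong answer look more precise.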

  The standard description of the scientific method also tends to idealize the degree to which scientists are even capable of accurately observing and measuring the phenomena they study. “Anyone who has done much research knows only too well that he never seems to be able himself to reproduce the beautiful curves and straight lines that appear in published texts and papers,” admits British biologist Gordon D. Hunter. “In fact, scientists who would be most insulted if I accused them of cheating usually select their best results only, not the typical ones, for publication; and some slightly less rigorous in their approach will find reasons for rejecting an inconvenient result. I well remember when my colleague David Vaird and I were working with a famous Nobel Prize winner (Sir Hans Krebs himself) on bovine ketosis. The results from four cows were perfect, but the fifth wretched cow behaved quite differently. Sir Hans shocked David by stating that there were clearly additional factors of which we were ignorant affecting the fifth cow, and it should be removed from the analysis. . . . Such subterfuges rarely do much harm, but it is an easy step to rejecting whole experiments or parts of experiments by convincing oneself that there were reasons that we can identify or guess at for it giving ‘the wrong result.’ ”4
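
  The arithmetic behind Hunter's fifth cow is worth spelling out, because so small a subterfuge can move a result a long way. The numbers in the sketch below are invented; they are not data from the experiment Hunter describes, and they simply show how discarding one inconvenient animal changes the apparent effect.

```python
# Hypothetical treatment responses for five cows. The values are invented
# for illustration and are not data from the study Hunter describes.
responses = [4.1, 3.8, 4.4, 4.0, -2.5]   # the fifth cow "behaved quite differently"

mean_all = sum(responses) / len(responses)
mean_trimmed = sum(responses[:4]) / 4     # "remove it from the analysis"

print(f"Mean response, all five cows:      {mean_all:.2f}")      # about 2.8
print(f"Mean response, fifth cow dropped:  {mean_trimmed:.2f}")  # about 4.1
```

  A reader of the published figure would see a clean, consistent effect, with no hint that one animal in five told a different story.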

  The idea that all scientific experiments are replicated to keep the process honest is also something of a myth. In reality, the number of findings from one scientist that get checked by others is quite small. Most scientists are too busy, research funds are too limited, and the pressure to produce new work is too great for this type of review to occur very often. What occurs instead is a system of “peer review,” in which panels of experts are convened to pass judgment on the work of other researchers. Peer review is used mainly in two situations: during the grant approval process to decide which research should get funding, and after the research has been completed to determine whether the results should be accepted for publication in a scientific journal.

  Like the myth of the scientific method, peer review is also a fairly new phenomenon. It began as an occasional, ad hoc practice during the middle of the nineteenth century but did not really become established until World War I, when the federal government began supporting scientists through the National Research Council. As government support for science increased, it became necessary to develop a formal system for deciding which projects should receive funding.

  In some ways, the system of peer review functions like the antithesis of the scientific method described above. Whereas the scientific method assumes that “experiment is supreme” and purports to eliminate bias, peer review deliberately imposes the bias of peer reviewers on the scientific process, both before and after experiments are conducted. This does not necessarily mean that peer review is a bad thing. In some ways, it is a necessary response to the empiricist limitations of the scientific method as it is commonly defined. However, peer review can also institutionalize conflicts of interest and a certain amount of dogmatism. In 1994, the General Accounting Office of the U.S. Congress studied the use of peer review in government scientific grants and found that reviewers often know applicants and tend to give preferential treatment to the ones they know.5 Women and minorities have charged that the system constitutes an “old boys’ network” in science. The system also stacks the deck in favor of older, established scientists and against younger, more independent researchers. The process itself creates multiple opportunities for conflict of interest. Peer reviewers are often anonymous, which means that they do not have to face the researchers whose work they judge. Moreover, the realities of science in today’s specialized world mean that peer reviewers are often either colleagues or competitors of the scientist whose work they review. In fact, observes science historian Horace Freeland Judson, “the persons most qualified to judge the worth of a scientist’s grant proposal or the merit of a submitted research paper are precisely those who are the scientist’s closest competitors.”6

  “The problem with peer review is that we have good evidence on its deficiencies and poor evidence on its benefits,” the British Medical Journal observed in 1997. “We know that it is expensive, slow, prone to bias, open to abuse, possibly anti-innovatory, and unable to detect fraud. We also know that the published papers that emerge from the process are often grossly deficient.”7

  In theory, the process of peer review offers protection against scientific errors and bias. In reality, it has proven incapable of filtering out the influence of government and corporate funders, whose biases often affect research outcomes.

  Publication Bias

  If you want to know just how craven some scientists can be, the archives of the tobacco industry offer a treasure trove of examples. Thanks to whistle-blowers and lawsuits, millions of pages of once-secret industry documents have become public and are freely available over the Internet. In 1998, for example, documents came to light regarding an industry-sponsored campaign in the early 1990s to plant sympathetic letters and articles in influential medical journals. Tobacco companies had secretly paid 13 scientists a total of $156,000 simply to write a few letters to influential medical journals. One biostatistician, Nathan Mantel of American University in Washington, received $10,000 for writing a single, eight-paragraph letter that was published in the Journal of the American Medical Association. Cancer researcher Gio Batta Gori received $20,137 for writing four letters and an opinion piece to the Lancet, the Journal of the National Cancer Institute, and the Wall Street Journal—nice work if you can get it, especially since the scientists didn’t even have to write the letters themselves. Two tobacco-industry law firms were available to do the actual drafting and editing. All the scientists really had to do was sign their names at the bottom. “It’s a systematic effort to pollute the scientific literature. It’s not a legitimate scientific debate,” observed Dr. Stanton Glantz, a professor of medicine at the University of California-San Francisco and longtime tobacco industry critic. “Basically, the drill is that they hired people to write these letters, then they cited the letters as if they were independent, disinterested scientists writing.”8

  In some cases, scientists were paid to write not just letters but entire scientific articles. In at least one case, the going rate for this service was $25,000, which was paid to one scientist for writing an article for the publication Risk Analysis. The same fee went to former EPA official John Todhunter and tobacco consultant W. Gary Flamm for an article titled “EPA Process, Risk Assessment-Risk Management Issues,” which they published in Regulatory Toxicology and Pharmacology, where Flamm served as a member of the journal’s editorial board. Not only did they fail to disclose that their article had been commissioned by the tobacco industry, but journal editor C. Jelleff Carr says he “never asked that question, ‘Were you paid to write that?’ I think it would be almost improper for me to do it.”9

  The tobacco industry is hardly alone in attempting to influence the scientific publishing process. A similar example of industry influence came to light in 1999 regarding the diet-drug combination fen-phen (fenfluramine, or its chemical cousin dexfenfluramine, taken together with phentermine), developed by Wyeth-Ayerst Laboratories. Wyeth-Ayerst had commissioned ghostwriters to write ten articles promoting fen-phen as a treatment for obesity. Two of the ten articles were actually published in peer-reviewed medical journals before studies linked fen-phen to heart valve damage and an often-fatal lung disease, forcing the company to pull the drugs from the market in September 1997. In lawsuits filed by injured fen-phen users, internal company documents were subpoenaed showing that Wyeth-Ayerst had also edited the draft articles to play down and occasionally delete descriptions of side effects associated with the drugs. The final articles were published under the names of prominent researchers, one of whom claimed later that he had no idea that Wyeth had commissioned the article on which his name appeared. “It’s really deceptive,” said Dr. Albert J. Stunkard of the University of Pennsylvania, whose article was published in the American Journal of Medicine in February 1996. “It sort of makes you uneasy.”10

  How did Stunkard’s name end up on an article without his knowing who sponsored it? The process involved an intermediary hired by Wyeth-Ayerst called Excerpta Medica, Inc., which received $20,000 for each article. Excerpta’s ghostwriters produced first-draft versions of the articles and then lined up well-known university researchers like Stunkard and paid them honoraria of $1,000 to $1,500 to edit the drafts and lend their names to the final work. Stunkard says Excerpta did not tell him that the honorarium originally came from Wyeth. One of the name-brand researchers even sent a letter back praising Excerpta’s ghostwriting skills. “Let me congratulate you and your writer on an excellent and thorough review of the literature, clearly written,” wrote Dr. Richard L. Atkinson, professor of medicine and nutritional science at the University of Wisconsin Medical School. “Perhaps I can get you to write all my papers for me! My only general comment is that this piece may make dexfenfluramine sound better than it really is.”11

  “The whole process strikes me as egregious,” said Jerome P. Kassirer, then-editor of the New England Journal of Medicine—“the fact that Wyeth commissioned someone to write pieces that are favorable to them, the fact that they paid people to put their names on these things, the fact that people were willing to put their names on it, the fact that the journals published them without asking questions.” Yet it would be a mistake to imagine that these failures of the scientific publishing system reflect greed or laziness on the part of the individuals involved. Naïveté might be a better word to describe the mind-set of the researchers who participate in this sort of arrangement. In any case, the Wyeth-Ayerst practice is not an isolated incident. “This is a common practice in the industry. It’s not particular to us,” said Wyeth spokesman Doug Petkus.

  Medical editor Jenny Speicher agrees that the Wyeth-Ayerst case is not an aberration. “I used to work at Medical Tribune, a news publication for physicians,” she said. “We had all these pharmaceutical and PR companies calling, asking what are the writing guidelines for articles, because they wanted to have their flack doctors write articles, or assign a freelance writer to write under a doctor’s name. I’ve even been offered these writing jobs myself. We always told them that all of our articles had to have comments from independent researchers, so of course they weren’t interested. But they kept on trying.”

  “Pharmaceutical companies hire PR firms to promote drugs,” agrees science writer Norman Bauman. “Those promotions include hiring freelance writers to write articles for peer-reviewed journals, under the byline of doctors whom they also hire. This has been discussed extensively in the medical journals and also in the Wall Street Journal, and I personally know people who write these journal articles. The pay is OK—about $3,000 for a six- to ten-page journal article.”

  Even the New England Journal of Medicine—often described as the world’s most prestigious medical journal—has been involved in controversies regarding hidden economic interests that shape its content and conclusions. In 1986, for example, NEJM published one study and rejected another that reached opposite conclusions about the antibiotic amoxicillin, even though both studies were based on the same data. Scientists involved with the first, favorable study had received $1.6 million in grants from the drug manufacturer, while the author of the critical study had refused corporate funding. NEJM proclaimed the pro-amoxicillin study the “authorized” version, and the author of the critical study underwent years of discipline and demotions from the academic bureaucracy at his university, which also took the side of the industry-funded scientist. Five years later, the dissenting scientist’s critical study finally found publication in the Journal of the American Medical Association, and other large-scale testing of children showed that those who took amoxicillin actually experienced lower recovery rates than children who took no medicine at all.12 In 1989, NEJM came under fire again when it published an article downplaying the dangers of exposure to asbestos while failing to disclose that the author had ties to the asbestos industry.13 In 1996, a similar controversy emerged when the journal ran an editorial touting the benefits of diet drugs, again failing to note that the editorial’s authors were paid consultants for companies that sell the drugs.14

  In November 1997, questions of conflict of interest arose again when the NEJM published a scathing review of Sandra Steingraber’s book Living Downstream: An Ecologist Looks at Cancer and the Environment. Authored by Jerry H. Berke, the review described Steingraber as “obsessed . . . with environmental pollution as the cause of cancer” and accused her of “oversights and simplifications . . . biased work . . . notoriously poor scholarship. . . . The focus on environmental pollution and agricultural chemicals to explain human cancer has simply not been fruitful nor given rise to useful preventive strategies. . . . Living Downstream frightens, at times misinforms, and then scorns genuine efforts at cancer prevention through lifestyle change. The objective of Living Downstream appears ultimately to be controversy.”15

 
