
The Panic Virus


by Seth Mnookin


  There was also the indiscriminate list of potential outcomes Verstraeten had to consider: In an effort to provide as definitive an analysis as possible, he looked at all of the “neurologic and renal disorders” he could “not exclude” as being linked to mercury, regardless of whether it was from direct poisoning, secondary exposure, or external contact. He also considered conditions like autism that had not been linked to mercury poisoning but that had captured the public’s imagination. The final scope of his study encompassed everything from barely noticeable irritations to potentially deadly afflictions.

  Not surprisingly, Verstraeten’s results were all over the map. Rates of kidney disorders, night terrors, and cerebral palsy declined as the total amount of thimerosal went up, rates for developmental disorders in premature babies stayed the same, and rates for speech delays, autism spectrum disorders, and attention deficit disorders went up. In every case, however, the figures were within a statistical margin of error. Still, Verstraeten said, at the very least he thought the hypothesis that thimerosal was contributing to a range of childhood disorders was “plausible.”
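To make concrete what "within a statistical margin of error" means for a cohort comparison like Verstraeten's, here is a minimal sketch, in Python, of a relative-risk confidence interval that straddles 1.0. The counts are entirely hypothetical illustrations, not the actual VSD figures, and the Katz log method shown is a standard textbook approach, not necessarily the one Verstraeten used:

```python
from math import exp, log, sqrt

def relative_risk_ci(cases_exp, n_exp, cases_unexp, n_unexp, z=1.96):
    """Relative risk and its 95% confidence interval (Katz log method)."""
    rr = (cases_exp / n_exp) / (cases_unexp / n_unexp)
    se = sqrt(1 / cases_exp - 1 / n_exp + 1 / cases_unexp - 1 / n_unexp)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

# Hypothetical counts: 30 diagnoses among 10,000 high-exposure children
# versus 25 among 10,000 low-exposure children.
rr, low, high = relative_risk_ci(30, 10_000, 25, 10_000)
print(f"RR = {rr:.2f}, 95% CI ({low:.2f}, {high:.2f})")
```

With these made-up numbers the point estimate is elevated (RR = 1.20), but the interval runs from well below 1.0 to well above it, so the data are equally consistent with the exposure raising the rate, lowering it, or doing nothing at all.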

  That alone was the most persuasive argument for further research to be done, and by the end of the conference’s first day there was general agreement that any concerns about fearmongering were superseded by the need for a comprehensive round of new experiments. Robert Brent, a developmental biologist and practicing pediatrician at the Nemours/Alfred I. duPont Hospital for Children in Delaware, and one of the conference’s most outspoken attendees, encapsulated this view:

  Even if we put the vaccine in single vials and put no preservatives tomorrow, we still want the answer to this question, because remember, epidemiological studies sometimes give us answers to problems that we didn’t even know in the first place. Maybe from all this research we will come up with an answer for what causes learning disabilities, attention deficit disorders and other information. So I am very enthusiastic about pursuing the data and the research for solving this problem.

  Brent’s opinion in favor of moving forward carried special weight: As someone who’d testified as an expert witness in three vaccine-related lawsuits, he had personal experience with the dangers of “junk science.” “It is amazing who you can find to come and testify that such and such is due to a measles vaccine,” he said. “They are horrendous. But the fact is those scientists are out here in the United States.”

  It didn’t take long for the Simpsonwood participants’ premonitions about the misinterpretation of the VSD data to prove to be prescient. When, just two weeks after the conference, Verstraeten gave a boiled-down version of his report to an open meeting of the CDC’s Advisory Committee on Immunization Practices, Lyn Redwood was among those in attendance. Despite Verstraeten’s warnings that “it is difficult to interpret the crude results,” Redwood drove away convinced the issue had been settled once and for all. “My God,” she said to herself, “this really did happen. Our worst fears were true. I guess we’re not so crazy after all.”

  Of all the issues that arise after a vaccine is put into widespread use, one of the trickiest is figuring out how to investigate newly raised safety concerns. In order to test whether a vaccine carries a specific risk, researchers need a representative control group to forgo the vaccine in question—an ethically dicey proposition when you’re talking about a drug with potentially lifesaving benefits. The difficulty of putting together a useful post-licensure study increases with the rarity of the condition to be investigated. Take, for example, a hypothetical mosquito bite vaccine suspected of causing blindness in 50 percent of recipients. In that scenario, an analysis of a few dozen subjects might be sufficient to indicate whether there was actual cause for alarm.

  Now consider the situation as it relates to autism. With both MMR and thimerosal, the hypothesis was that some component or combination of vaccines served as a trigger in a small subset of children who were genetically predisposed to a specific variant of a disorder with inconsistently applied diagnostic criteria and a broad range of date of onset. In order to reach any reliable conclusions, you’d need to study hundreds of thousands of children over a period of five or more years. Typically, such a challenge would be tackled with retrospective studies, but in this case, even that was tricky: In the United States, mandatory vaccination laws limited the pool of unvaccinated controls, inconsistent record keeping compromised whatever data was available, and VAERS’s passive design meant that some injuries would be underreported, while others—especially those that received a lot of attention in the media—would likely be overrepresented. (An extreme example of underreporting is post-vaccine measles rashes, which some researchers estimate go undocumented 99 percent of the time.) Even if every single “adverse event” were registered, there was no way to get a reliable count of the number of vaccines that had actually been administered, which made it impossible to calculate with a high degree of accuracy the frequency with which the theorized injuries would have occurred.
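The sample-size arithmetic behind that "hundreds of thousands of children" claim can be sketched with the standard two-proportion formula for study power. The baseline prevalence and effect sizes below are illustrative assumptions, not figures from any of the actual studies:

```python
from math import ceil

def n_per_group(p0, rr, z_alpha=1.96, z_beta=0.84):
    """Approximate subjects needed per group to detect a relative risk
    `rr` against baseline prevalence `p0` (two-sided alpha = 0.05,
    power = 0.80, normal approximation to the binomial)."""
    p1 = p0 * rr
    num = (z_alpha + z_beta) ** 2 * (p0 * (1 - p0) + p1 * (1 - p1))
    return ceil(num / (p1 - p0) ** 2)

# A massive effect, like the hypothetical vaccine that blinds half its
# recipients, needs only a handful of subjects per group:
huge_effect = n_per_group(p0=0.01, rr=50)

# A 10 percent relative increase in a condition affecting roughly 1 in
# 150 children needs a cohort in the hundreds of thousands:
subtle_effect = n_per_group(p0=1 / 150, rr=1.1)
```

The required sample grows explosively as the suspected effect shrinks and the condition becomes rarer, which is exactly why a few dozen subjects suffice for the mosquito-bite scenario while the autism question demanded population-scale data.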

  There was, however, one area of the world without those limitations: Scandinavia, where governments’ dedication to social welfare was matched only by their fervor for record keeping. Denmark was an especially attractive option for researchers. Its Civil Registration System kept detailed records of every child born in the country; immunizations were administered through its National Board of Health; a government-run agency called the Statens Serum Institut (SSI) produced its vaccine supplies; and it did not have compulsory vaccination laws. There was even a ready-made way of comparing children who’d received thimerosal-containing vaccines with those who hadn’t: Until March 1992, the three-part, whole cell pertussis vaccine produced by the SSI contained a total of 250 μg of thimerosal; from April 1992 on, the pertussis vaccine distributed in the country was thimerosal-free.

  By late 2000, epidemiologists from around the world had begun sifting through millions of pages of medical records produced by several decades’ worth of Danish children. Even before this work had been completed, activists were looking for ways to spin the results in their favor, regardless of the studies’ outcomes. One tactic was to promote themselves as insurrectionist truth tellers doing battle with an establishment more concerned with defending its turf than with improving people’s lives. The success of such a gambit depended on the public’s misunderstanding of the approach scientists rely on to understand the world.

  The steps of the scientific method are the same whether you’re a sixth grader prepping for a science fair or a physicist proposing a new framework for the universe: Observations lead to hypotheses, hypotheses are tested by experimentation, results are analyzed, conclusions are submitted for publication, and the whole process undergoes peer review. (To be fair, for a typical twelve-year-old, “publication and peer review” usually means “write it up on a piece of poster board and appeal to your teacher for a good grade.”) It’s a formula so central to the very definition of science that it’s easy to assume it has been accepted as gospel throughout history.

  That, however, is not the case. The scientific method is actually a relatively new construct, the product of several millennia’s worth of arguments about the merits of purely hypothetical analysis versus the observation of the world outside ourselves. Aristotle, who believed the only way to truly understand the universe was through a set of abstract “first principles,” was a proponent of the first camp; the tenth-century Persian polymath Ibn al-Haytham, whose evidence-based experiments disproved Aristotle’s speculative theory of light and vision, showed the advantages of the second. During the Renaissance, Descartes’ philosophy of rationalism was based on the primacy of deductive reasoning, while Galileo and Francis Bacon came down on the side of experience-based investigation. By the twentieth century, the empiricists had pretty much carried the day. Their victory was due, in large part, to the fact that unlike every metaphysical creed the world has ever known, scientific proofs rely on evidence and not on faith.

  One way to understand the distinction between science and the ideologies
it superseded is through the theory of falsifiability, which states that in order for a hypothesis to be a legitimate subject of inquiry, it has to have a single, corresponding null hypothesis—that is, it needs to be disprovable. (That’s why “God exists” is not a legitimate scientific hypothesis: The null hypothesis—“God does not exist”—can’t be proven.) There’s no way to overstate how fundamental this concept is to the scientific process. Since it’s impossible to prove a negative, the closest one can come to absolute proof for any theory is through an exhaustive, and unsuccessful, effort to prove the null hypothesis.

  A good illustration of how this works is found in the Austrian philosopher Karl Popper’s anecdote about seventeenth-century Europeans, who assumed that all swans were white. Since there’s no way for anyone to know for certain that he’s identified every swan in the universe, and since there’s no way to know what the swans of the future will look like, the best anyone attempting to prove the white swan hypothesis could do was fail to prove the null hypothesis: that not all swans are white. In 1697, a Dutch sea captain chanced upon a black swan in Australia, at which point the null hypothesis was proven true and the white swan theory had to be discarded. This story also illustrates the necessity of framing a null hypothesis in the broadest way possible: “At least one swan is black” would not have been a valid null hypothesis, since it would have left room for the discovery of a red swan or a green swan or any other color of swan that would have invalidated the original hypothesis without satisfying the null hypothesis.

  One of the most famous examples of the null hypothesis at work involves two men often referred to as the greatest scientists the world has ever known: Albert Einstein and Isaac Newton. While still in his mid-twenties, Einstein became obsessed with an apparent contradiction between two widely accepted theories explaining the workings of the physical universe: Newton’s laws of motion and a series of equations formulated by a nineteenth-century Scottish physicist named James Clerk Maxwell. On the one hand, Newton claimed that the velocity of a body in motion remains constant in the absence of any other forces.37 A corollary to the theory that the force exerted on a body is equal to its mass times its acceleration, or F=ma, is that the speed of a body in motion increases in direct proportion to the amount of force exerted upon it—which, in turn, means there should not be a limit to how fast a given object can travel. On the other hand, Maxwell’s equations showed that electric and magnetic waves traveled at a constant speed—the speed of light. In 1905, Einstein, who at the time was working as a patent clerk, began to focus on the incompatibility of these two theories: If everything in motion has a measurable mass, and if the speed of light is constant, then no amount of force can make light waves travel any faster (or slower) than they already do. In order to reconcile this contradiction, Einstein proposed that energy (E) and mass (m) were analogous concepts, and they related to each other through the speed of light in a vacuum (c), squared—a hypothesis that, if true, would mean that force does not equal mass times acceleration.38

  This hypothesis raised some perplexing questions. If Einstein was correct, why had the laws of motion appeared to be accurate for the past 220-odd years? For that matter, were science teachers around the world lying to their students when they taught them that F=ma? The answer, as you’ve probably guessed, is yes—and no. Objects do stay at rest unless some external force is applied, and the more force you apply, the faster that object will travel—as long as you’re not talking about really, really precise calculations relating to things that are really, really small or traveling really, really fast. Newton’s laws applied to everything he could measure in his known universe, and they apply to everything that those of us not playing around with photons or particle accelerators can measure in ours. Instead of thinking of F=ma as being wrong, think of E=mc² as being more right.39
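One standard textbook way to see why Newton's law survives as a low-speed limit (this is ordinary special relativity, not an argument from the book itself) is through relativistic momentum. For a force applied along the direction of motion:

```latex
p = \gamma m v, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},
\qquad F = \frac{dp}{dt} = \gamma^{3} m a
\;\;\Longrightarrow\;\; a = \frac{F}{\gamma^{3} m}.
```

When v is tiny compared with c, the factor γ is essentially 1 and the expression collapses back to a = F/m, which is why two centuries of measurements never caught the discrepancy. As v approaches c, γ grows without bound and the acceleration produced by any finite force shrinks toward zero, so no massive object can be pushed past the speed of light.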

  The story of Einstein and Newton is another example of why scientists are so inflexible in their insistence that there are no absolute certainties in the world, and why concepts we accept as true—like quantum mechanics and evolution and the big bang—are referred to as theories: There’s always the chance that someone, somewhere will discover a scenario in which they no longer apply.40 This is actually a good thing. Scientific progress, as Carl Sagan wrote in his book The Demon-Haunted World, thrives on errors.

  False conclusions are drawn all the time, but they are drawn tentatively. Hypotheses are framed so they are capable of being disproved. A succession of alternative hypotheses is confronted by experiment and observation. Science gropes and staggers toward improved understanding. Proprietary feelings are of course offended when a scientific hypothesis is disproved, but such disproofs are recognized as central to the scientific enterprise.

  The result of all this groping and staggering is that scientists with widely accepted theories spend their careers in a state of cautiously optimistic limbo: Regardless of how many times their work is corroborated, a single contrary result will cause it all to come crashing down. The centrality of “disproofs” also highlights the crucial importance of those final two steps in the scientific process: publication and peer review. Until the authors of a given theory have provided a detailed explanation of exactly how they got their results, they’re essentially telling the rest of the world to accept their conclusions on faith—which puts them back on the side of the ideologues who define “truth” as whatever they happen to believe at the moment.

  This emphasis on disproving what your colleagues had previously believed to be accurate can make listening in on scientific debates feel a little like eavesdropping on a newly divorced couple arguing over child visitation rights. The realities of the scientific method also present an uncomfortable challenge for anyone tasked with explaining to the public why this inherent open-endedness doesn’t negate the high degree of certainty that accompanies widely accepted conclusions. The combination of ambiguity and authority implicit in science is hard enough to understand if you are sitting across the table from a scientist; it is an exponentially more challenging point to convey when filtered through media outlets that eschew nuance and depth in favor of attention-grabbing declarations.

  35 A Special Master is someone who has been granted the authority to carry out a course of action designated by a court. They do not need to be judges: Kenneth Feinberg, an attorney specializing in mediation, acted as a Special Master when he oversaw the September 11th Victim Compensation Fund. In the case of the Vaccine Court, the Special Masters are appointed by the United States Court of Federal Claims.

  36 As it turns out, ethylmercury’s half-life ranges from ten to twenty days in children and is as short as seven days in infants, while methylmercury’s half-life is around seventy days.
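The practical consequence of those half-lives can be worked out with simple exponential decay. This is the textbook first-order elimination model, offered as an illustration rather than a claim about how the cited studies modeled clearance:

```python
def fraction_remaining(days, half_life_days):
    """Fraction of an initial dose still present after `days`,
    assuming first-order (exponential) elimination."""
    return 0.5 ** (days / half_life_days)

# Ten weeks after an exposure, using the half-lives cited above:
ethyl = fraction_remaining(70, 10)    # ethylmercury, ~10-day half-life
methyl = fraction_remaining(70, 70)   # methylmercury, ~70-day half-life
```

After seventy days, ethylmercury at a ten-day half-life has gone through seven halvings and less than one percent of the original dose remains, while methylmercury has only been cut in half.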

  37 This might seem obviously incorrect: If you throw a ball, it won’t continue moving at the same rate of speed forever. That is due to various forces working on objects within the earth’s atmosphere, including gravity (which pulls objects downward) and friction (which slows objects in motion). The precise workings of this defy easy explanation; suffice it to say that one omnipresent source of friction is air resistance. This also helps explain why objects outside the earth’s atmosphere can travel forever without losing speed—at least in theory.

  38 Proof of this theory, which Einstein developed between 1905 and 1907, didn’t come until 1919, when a British astrophysicist’s photograph of a solar eclipse documented the sun’s gravitational pull “bending” starlight; this led to a banner headline in The Times (London) that proclaimed “Revolution in Science—New Theory of the Universe—Newtonian Ideas Overthrown.”

  39 Don’t worry if you’re having a hard time following this oversimplified explanation of physics’ most challenging problem. For most of us, understanding special relativity is a little like true love: We should consider ourselves lucky if we can grasp hold of it for even one fleeting moment.

  40 This also illustrates the uni-directional nature of scientific discoveries: If someone does find evidence that special relativity isn’t always applicable—that Einstein’s theory about how the universe works isn’t true all of the time—we wouldn’t revert to accepting Newton’s theories as being correct just because the hypothesis that first superseded the laws of motion was shown to have its own shortcomings. An illogical bastardization of this misapprehension is often used by creationists as “proof” that evolution is incorrect: Since Darwin was wrong about some things, they argue, he must be wrong about everything, and if he’s wrong about everything, then a biblical understanding of the world must be correct.

  CHAPTER 13

  THE MEDIA AND ITS MESSAGES

  On Sunday, February 3, 2002, the BBC-TV news magazine show Panorama aired a special report titled “MMR: Every Parent’s Choice.” The program’s news peg was an as yet unreleased study in Molecular Pathology that included as one of its authors a “controversial doctor” who was, once again, loudly proclaiming that “MMR should not be used until researchers rule out the possibility the triple jab could cause autism”: Andrew Wakefield. In the four years since Wakefield’s Lancet paper had been published, only one research team had claimed to have identified the measles virus in the stomachs of children after they’d been vaccinated—and Wakefield had worked on that paper as well.

  This new study was Wakefield’s latest effort to prove to the world that measles—and by extension the measles component of the MMR vaccine—was responsible for a “new variant [of] inflammatory bowel disease.” As part of its work, the research team used a highly sensitive technique that relied on something called a polymerase chain reaction (PCR). Because working with PCRs is so complicated, and because Wakefield and his collaborators were claiming to have detected tiny quantities of something other labs had been unable to find, it was especially important for the researchers to provide a precise explanation of how they had obtained their results. Instead, the paper left out information so basic it would have been expected to be included in a lab report written by an undergraduate. There were no details about how tissue samples were stored, nothing about the amount of time the samples remained in a freezer before being tested, no description of the quality of the material being tested, and no explanation of how results were interpreted. It appeared as if the research team hadn’t employed positive controls of tissue samples known to include the measles virus or negative controls of samples without the virus, which meant there was no way for them, or anyone else, to establish a baseline for the accuracy of their results. Any other scientists interested in checking the paper’s results were essentially left to guess how to go about it.

 
