Here was another case similar to Fegan’s and an important clue to the real diagnosis of asystole: the “seizures” stopped as soon as the VNS was turned off. Nonetheless, Cyberonics concluded that it was “exceedingly unlikely that the VNS device contributed to the episode of asystole.” Another case report, dated January 1, 1999, described a patient who developed drop seizures after being implanted with the VNS. Once again, the device was turned off and the “seizures” stopped.
And there were more such cases. Many more.
The trail of clues that the device was indeed causing asystole was there for anybody to see in the MAUDE reports. Over time, Fegan became increasingly savvy about the FDA’s MAUDE database. He soon learned how to sort “adverse events” according to the type of event. When he filtered the results for deaths, he was stunned by what he got back. There were hundreds of deaths associated with the VNS device. And many cases were similar to his experience.
By 2010, Fegan had plowed through approximately eight hundred VNS-related death reports on MAUDE. Because only fifty thousand or so patients had been implanted worldwide at the time, and because the device had only been on the market for eight years, the number of deaths was disturbing, especially since the average age of people implanted with the device during clinical trials was thirty-three. Yet although hundreds of the death reports contained too little information to shed any light on the cause of death, the company never attributed a single death to its device. In almost every instance, Cyberonics blamed sudden unexpected death in epilepsy (SUDEP) and concluded each report with the same sentence: “There is no evidence at this time that the [VNS] caused or contributed to the reported event.”
But Fegan wondered, how could they know that?
For example, report 1246223:
Patient was found unresponsive and later passed away. Per reporter, patient had a lot of neurological problems, but was doing well with VNS with respect to seizures. An autopsy was performed and the autopsy report was obtained by the manufacturer. X-rays were taken and no anomalies were noted. The autopsy report noted the patient died of natural causes ascribed to a seizure disorder due to cerebral palsy. Due to the autopsy being an external examination versus complete autopsy and the circumstances surrounding the patient death, the manufacturer has classified this death as a probable SUDEP event.
No other information was provided. Fegan was baffled—an “autopsy” that consisted of an “external examination” and X-rays? They looked at someone’s dead body externally and decided the patient died of SUDEP? When I later asked Marcia Angell, a pathologist and former editor in chief of the New England Journal of Medicine, to comment on this report, she said wryly that she would love to have been able to conduct an autopsy by just looking at a dead body. “It would have made my work so much easier.”
Neither sex nor age is indicated on MAUDE reports, but if report 1246223 was accurate, and the patient was “doing well with VNS,” as stated, did that mean the patient didn’t have a seizure—the very thing most associated with SUDEP? And if the patient didn’t have a seizure,* how could anyone conclude that the death was “probable SUDEP”? Wasn’t it equally plausible that the sudden and unexpected death was the fault of the VNS device?
Other deaths that Fegan found on MAUDE were equally concerning. For example:
[Patient] was under 24-hour video monitoring…died in 2009, due to [unknown] reasons…It is currently believed he died of a “terminal seizure,” but the video…did not record a seizure.
And this:
Patient was found dead in their bed by caregiver. Cause of death is unknown…Manufacturer has determined that SUDEP is probable. The reporter has stated the [VNS device] was unrelated to the death.…patient was walking into a room and simply dropped dead. Treating neurologist indicted [sic] that the death may be cardiac-related.
As in so many other cases, Cyberonics reached its standard conclusion, stating, “There is no evidence at this time that the [VNS device] caused or contributed to the reported event.”
Some reports consist of just a single sentence—“Patient found dead in bed” (or “in bathtub” or “on floor”)—followed by the claim that the death was attributable to “probable SUDEP,” which in turn would be followed by Cyberonics’ standard conclusion that there was no evidence that the device “caused or contributed to” the patient’s demise.
Fegan knew that if his parents hadn’t shown up on that morning of July 2, 2006, and if he had died at home, there would be no EKG evidence to tell the real story, no observation by three doctors who all saw the same thing. “If I had died that day,” he says, “they’d say I died of SUDEP.” And Cyberonics would have reached the same conclusion as it did for everyone else: the VNS device never killed anyone.
There was an odd contradiction in the way Cyberonics interpreted bad versus good outcomes, one that may have contributed to underreporting of complications. When a patient developed problems months or years after implantation, Cyberonics would suggest that the VNS device was an unlikely cause, since the patient had, according to Cyberonics, “tolerated the VNS well” until that time. On the other hand, if a patient experienced a reduction in seizures months or even years after implantation, the company would attribute the reduction to the therapeutic effect of the device. Just why an electrical impulse, with its instantaneous effects, should fail to do the job for a year or longer was never explained. In the world of Cyberonics, delayed benefits could occur, but delayed harm could not.
In 2005, a year before Fegan was hospitalized with asystole, he told his neurologist that the device wasn’t helping him and asked to have it removed. According to Fegan, Bahamon encouraged him to hang on, saying it could take years for some patients to achieve benefit (by then, Fegan had already had the device implanted for more than four years). The following year he was taken to the hospital with asystole. After that he wasn’t about to fool around; he insisted on having the device removed.
Bahamon referred him to a surgeon. But the surgeon told Fegan he wouldn’t attempt to remove the wire leads from his vagus nerve; that was too risky. The surgeon explained that the leads often trigger inflammation and fibrosis (or scarring) that can progress for years after implantation. It was also possible that, as the neck moved, the leads would tug on and irritate the vagus, progressively enmeshing the nerve in scar tissue and impairing its function. He could take out the generator under Fegan’s collarbone, but he’d leave the wires in place.
Fegan’s case was not unusual. Some surgeons decline to remove the wire leads after disastrous experiences with previous patients—either their own or their colleagues’.
Fegan continued to explore the FDA’s website. Eventually he came across a warning letter from the FDA to Cyberonics concerning sixty unreported deaths. The letter, dated March 23, 2001, was addressed to then president and CEO Skip Cummins, and it summarized the findings of a site visit to Cyberonics headquarters in Houston conducted from January to early February of 2001. During the visit, the FDA uncovered files on sixty deaths that Cyberonics had failed to report, and the agency determined that the VNS device “may have caused or contributed” to the deaths. The FDA reviewer also found many other unreported instances of serious infections and adverse events.
Within weeks of the FDA’s site visit, the company came up with records of twenty-three additional deaths, bringing the total number of previously unreported deaths to eighty-three. This was in early 2001, less than four years after the FDA first approved the VNS device to treat epilepsy, in mid-July of 1997. Eighty-three unreported deaths had to represent a substantial portion of all deaths among patients with a VNS device at the time. And it wouldn’t be the last time the company would fail to report deaths and injuries among patients using the VNS device.
* * *
Most healthcare consumers assume that their physicians are so knowledgeable, caring, and dedicated that they would never recommend a course of treatment that could be risky or unlikely to provide significant benefit to them—and of course in some cases this assumption is perfectly valid. But in a world where new drugs, therapies, and devices are continually being devised and marketed—often by companies with aggressive growth ambitions—it’s not always possible for physicians to be experts about every new product or service that becomes available.
Under the circumstances, ordinary citizens and some healthcare professionals might assume that government agencies such as the FDA serve as a protective shield, scrutinizing and evaluating new treatments before they go to market and permitting only those backed by solid scientific evidence to be sold.
This would be a wonderful thing if it were true. But in the real world, the medical research establishment and government regulatory agencies play this protective role only to a very limited extent.
Despite the aura of science that surrounds modern medicine, its practice has traditionally had only a modest relationship to the most rigorous, evidence-based scientific disciplines, such as chemistry and physics. Medicine is orders of magnitude more complex in many ways than chemistry or physics. Give a drug to—or implant a device in—a human being, and numerous physicochemical, neurological, and endocrine effects are triggered, which in turn set off feedback loops as the body attempts to regain homeostasis, the state of equilibrium or balance necessary for normal functioning. And then there are human feelings and perceptions and environmental input, all of which means that, unlike a chemistry experiment, in which combining the right chemicals in the right manner will reliably produce the same desired result every time, a medical experiment in which you combine a drug or medical device with a human being might have any number of outcomes.
But that doesn’t mean that science can or should be ignored when drugs and devices are developed. Medicine is all based on likelihoods: Is a person more likely to die if he or she has a pacemaker implanted? Will a person with epilepsy be more—or less—likely to die suddenly after implantation with a VNS device? Scientific evidence can help to answer such questions when well-designed studies are implemented. Yet history shows that hard evidence has played, and continues to play, only a modest role in the process by which medical treatments become widely popular. In fact, the search for truth based on experiment, or clinical trials, has shown up like the light of a firefly throughout medical history—flashing on for a moment, then going dark for long periods.
During the early modern period—the eighteenth and nineteenth centuries—experienced doctors simply taught younger doctors what they believed to be true from their own experience. In general, there was very little, if any, science involved in their practices. In this way, opinion and anecdote guided the choices doctors made about what tests and surgeries to perform and what drugs to administer. But experience can be misleading, paving the way for false conclusions and conflicting claims.
During this same period, the role of government in testing and regulating medical treatments was minimal. The Division of Chemistry, later the Bureau of Chemistry, the forerunner of today’s FDA, was focused simply on ensuring that drugs weren’t adulterated or “misbranded,” meaning that a product contained the actual ingredients stated on the label. In 1906, the passage of the Federal Food and Drugs Act gave the bureau its first modest regulatory responsibilities.147 But it had no role in ensuring that products were either safe or effective. Indeed, the US Supreme Court ruled in 1911, in United States v. Johnson, that the 1906 act did not prohibit false therapeutic claims.
The mandate of the Food and Drug Administration (as it was named in 1930) did not include safety requirements until the passage of the 1938 Food, Drug, and Cosmetic Act, after a disaster with a drug known as Elixir Sulfanilamide, a preparation used to treat strep throat. The drug contained diethylene glycol, a substance related to antifreeze, which killed more than one hundred children and adults in 1937. The safety standards set by the FDA in 1938 were far from rigorous, however. Manufacturers could, and often did, submit statements of expert opinion rather than well-controlled experiments as evidence of safety.
This began to change with the thalidomide disaster of 1961. Thalidomide was widely prescribed in Europe at the time to treat nausea. Manufacturers gave samples of the drug to thousands of doctors in the US, who in turn gave them to their patients (the drug had not been approved by the FDA at the time). Women who were pregnant when they took the drug gave birth to babies with deformities known as phocomelia, in which the arms and legs fail to develop. Many were born with what looked like small flippers or webbed fingers growing out of their shoulders and hips.
The FDA determined that mothers had been given the drug without being informed that it was “experimental,” because it hadn’t been approved in the US. This led to the 1962 Kefauver-Harris Amendments to the Food, Drug, and Cosmetic Act, which required, for the first time, that drugs be “efficacious” in order to win FDA approval. As with safety requirements, however, the bar was not set very high.
The first truly scientific test of a proposed new drug, a randomized controlled trial of streptomycin in the treatment of pulmonary tuberculosis, was conducted in 1946. Randomization means that researchers assign volunteers to either an experimental treatment group or a control group through a process intended to ensure that both groups are likely to be the same in terms of disease severity, age, and other factors that could affect outcomes. Use of control groups means that researchers can observe whether the experimental treatment actually makes a difference.
Too often, however, devices and drugs are tested either with no control group at all or against a “straw-man comparator,” in which a new treatment is compared to an older drug given in a dose or manner (such as a very low dose) that is less likely to be effective, making the new treatment look superior. Joe Lex, an emergency physician, once described the problem of straw-man comparators after he reviewed all studies of a class of non-narcotic painkillers. “Guess what?” Lex said. “Whoever sponsored the study always got a better result than [the competitor]. In forty-eight percent of trials, the reason for that was because the dose of the sponsored drug was appropriate, but the dose of the drug that it was being compared to was less than appropriate. The straw-man study.”148, 149 Richard Lehman, a renowned British physician, said, “Straw-man comparators are a breach of ethics. Or, to put it another way, standard practice.”
Patients and doctors routinely overestimate the dangers of various diseases and underestimate the harmful effects of treatments, making it easier for manufacturers to promote their products even when those products are never tested against a control group.150, 151 A cancer drug might be reported as having an excellent 75 percent survival rate (which generally means that five years after treatment, 75 percent of patients are still alive). But the question is, how many people would have died without treatment? Many of the most deadly pandemics in history have had mortality rates of 25 percent—meaning that 75 percent survive anyway. Proper control groups, composed of either untreated patients (if no effective treatment is available for comparison) or patients treated with a known cure or therapy, are critical to determining whether a drug is making a difference.
Cyberonics was right to include the “sham” low-stimulation group, which was intended to reduce the possibility of the placebo effect by keeping patients in the dark about which group they were in. But the failure to include a third group, treated with medicines only, leaves the most basic questions unanswered: Does the device improve outcomes, and is it safe when compared to optimal drug treatment? It would be many years before an answer to those questions would be published, and it would come from a surprising source. (More on that later.)
The failure to use control or comparison groups continues to be a problem. For example, in 2015, the Centers for Disease Control and Prevention launched a campaign urging the public to take the “lifesaving” flu drug oseltamivir (Tamiflu), which is manufactured by Roche.152 However, years earlier the FDA had issued a warning to Roche that it could not claim the drug saves lives or reduces pneumonia from flu, because the studies the company provided to the FDA failed to demonstrate these outcomes.152 Subsequent high-quality independent analyses of both published and unpublished Tamiflu data have failed to find any lifesaving benefit to the drug.153 Despite this, the CDC went ahead with its Tamiflu campaign—basing its claim on case reports and observational studies. But without a control group for comparison, it’s impossible to say whether the drug has actually saved a single life.153 Unbeknownst to doctors and the public, the highly visible CDC campaign promoting Tamiflu was quietly paid for by Roche.152
To this day, drug and device manufacturers commonly promote their wares using studies that are inherently biased. John P. A. Ioannidis, professor of health research and policy at Stanford University School of Medicine, has studied the claims of medical researchers published in the most prestigious medical journals. He found that most published research findings are false or exaggerated and that many clinical trials fail to yield the same results when replicated.154, 155
With industry bias wielding increasing influence in the halls of academe, the hope of genuinely independent inquiry has gradually faded. Many practicing doctors now feel they don’t know which research claims they can trust. In 2006, the American Journal of Psychiatry published an amusing take on the problem of research bias. The authors noted that when manufacturers compare their drugs in head-to-head clinical trials, virtually all drug makers claim their drug is superior to all other drugs in the same class. The article was playfully entitled “Why Olanzapine Beats Risperidone, Risperidone Beats Quetiapine, and Quetiapine Beats Olanzapine.”156 The result is what Shannon Brownlee, vice president of the Lown Institute, calls the Lake Wobegon effect, in which all drugs are above average.