by Sonia Shah
It is possible to conduct RCTs successfully by providing an alternative treatment for subjects in control groups, but according to Temple and Kelly, using placebos as a control renders the most unequivocal data. And yet, deciding what to use as a control has not always been driven primarily by scientific considerations. Politics often grabs the front seat.
Take, for example, the 1954 trials of Jonas Salk’s experimental polio vaccine, the RCT’s big national debut. Polio wasn’t a huge killer at the time, but as a crippler of the young, particularly those of the upper and middle classes, it was a terror-inducing scourge, forcing communities to shut down their swimming pools and movie theaters at the height of the summer’s polio season. Salk’s sponsor, the March of Dimes, aware that leading virologists were skeptical of the experimental vaccine—it consisted of the entire, virulent polio virus itself rather than a similar virus that could train the body to fend off more dangerous foes—was eager to produce the most convincing data possible.51 A double-blind RCT would be ideal, a “beautiful . . . experiment over which the epidemiologist could become quite ecstatic,” as Salk put it.52 The plan was to give the control group placebos. Nobody really knew if Salk’s vaccine worked anyway, so it wasn’t as if they’d be depriving anyone of some known effective medicine. For all they knew the vaccine might even hurt the children: those randomized to placebo might have better outcomes than those in the vaccine group.
But just as Romark and the NIH discovered years later when they attempted to run a placebo trial for nitazoxanide among U.S. AIDS patients, the majority of the state health departments approached about running the trial in their public school systems objected to the placebo control. So great was their faith in Salk and the March of Dimes that they wanted all of their enrolled children to get the vaccine, experimental or not.53
Salk, publicly at least, agreed with the underlying sentiment: denying the experimental vaccine to any child would be a travesty, indeed. The placebo-controlled design, he opined, “would make the humanitarian shudder.”54 The compromise the foundation settled on was less rigorous but more politically palatable, involving an awkward mix of two concurrent trials: a large-scale trial in which all participants would be vaccinated, and a smaller one that compared vaccinated children to those injected with placebos.55
The same year, Louis Lasagna, MD, called by some the father of modern pharmacology, described the “placebo effect,” the phenomenon by which patients are healed by inert compounds, and became a forceful advocate for more rigorous standards for drug approvals. For Lasagna, as for Temple later, trials that pitted an experimental drug against an alternative treatment too often told scientists nothing. “In the absence of placebo controls, one does not know if the ‘inferior’ new medicine has any efficacy at all,” Lasagna wrote in a 1979 editorial. “‘Equivalent’ performance may reflect simply a patient population that cannot distinguish between two active treatments that differ considerably from each other, or between active drug and placebo.” In his myriad appearances at congressional hearings, Lasagna urged the FDA to require placebo-controlled trials for all new drugs.56
Today, the FDA’s position is that it prefers placebo-controlled trials, if they are ethical and feasible.57 Temple took the baton as placebo-control advocate from Lasagna, who died in 2003. And now, the placebo-control orthodoxy is firmly entrenched, with a number of novel arguments advanced in its favor. Kelly, for example, was certain that using placebos in Lusaka was the right thing to do. “There is no other way of being absolutely sure that the stuff actually works,” he says. “It is very very important to do this in third world countries, for two reasons. One, because if you misguide people into thinking that your drug works when it doesn’t, you’ll be responsible for diverting precious resources away from something else which may also be important. Two, we cannot assume that something which works in other countries will work here. . . . There are geographical differences and we have to be sure that it works where we’re planning to use it.”58
There are other rationales that are somewhat less lofty. Most new drugs are not miraculous cures like penicillin, or a shot of insulin to a comatose diabetic. For most, the margin of effectiveness is narrow, colored in shades of gray. “I’m not used to finding black and white,” agrees Rosemary Soave. “You usually have to struggle to find the difference” between patients who got the drug and those who didn’t.59 Anecdotal evidence might be sufficient in the case of a wonder drug, but discerning how a weakly acting drug works requires the precision, and relatively low expectations, of a placebo-controlled trial.
For drugmakers, the choice is obvious. “Why risk trying to be better than something,” an FDA medical officer noted, “when all you need to show is that you are better than nothing?” Sure, patients randomly selected to get placebo rather than an active drug might suffer a bit—in trials for new diabetes drugs, for example, investigators withheld active drugs from their subjects in order to worsen their hyperglycemia before testing a new drug on them—but “in the absence of permanent harm, why should a federal agency restrict the right of a patient to participate in a clinical trial?”60 So long as the patients are adequately informed that they might get a placebo, Temple insists, there is nothing ethically troublesome about a placebo-controlled trial. “I think it is usually good for people to be in clinical trials,” Temple says optimistically.
Romark brought its results from the nitazoxanide trial in Zambia, along with data from its trials in Egypt and Peru, to the FDA in May 2002, hoping to prove to the agency that the drug was worthy of approval. The regulators agreed. The drug, now dubbed Alinia, was launched in the United States in December of that year as a treatment for children infected with Cryptosporidium and another parasite called Giardia. For those children splashing in pools alongside toddlers wearing leaky diapers, and the parents who had to look after them, the short, three-day course of treatment would mean “less discomfort and less time away from work, school, and other activities,” Romark’s Web site announced. The company expected sales of $20 million in the first year, $50 million the following year, and sometime in the not-so-distant future, $100 million a year.
Alinia’s value for children suffering from infectious diarrhea in Zambia and other developing countries is less clear. For scientists at the forefront of researching drugs and vaccines to treat infectious diarrhea in developing countries, such as those at the nonprofit Institute for OneWorld Health in San Francisco, nitazoxanide is an irrelevancy.61 That’s because, in most developing countries, Crypto causes only about 5 percent of diarrhea cases in children under five years old, according to Johns Hopkins pediatric infectious diseases specialist Robert Black, MD. “And these are not particularly severe cases of diarrhea,” he says.62 Many of those who are harboring Crypto or Giardia are infected with a host of other intestinal parasites as well. A significant proportion is also infected with HIV. A drug whose effective use requires not only that patients harbor only one parasite but also that clinicians actually know which one it is would understandably be of questionable value in places with limited diagnostic capabilities. In India, for example, where a vigorous local drug industry quickly made the drug available, the med is “nearly useless,” says medical analyst Chandra Gulhati, MD.63 Not to mention the fact that, according to independent researchers in Mexico, nitazoxanide had worse side effects with no greater efficacy than cheaper, older medicines.64
And yet it is true that unlike the scores of drugmakers seeking to muscle into drug markets aimed at the aging rich of the developed world, Romark had developed a drug for a rare, parasitic disease, however imperfect. Few drug companies spent any time or money making drugs to neutralize parasites, aid workers struggling with onslaughts of parasitic diseases in tropical countries complain. But if Romark’s hunt for experimental bodies had ended in Zambia, their market clearly began elsewhere. The children of Zambia shouldered the burden for nitazoxanide’s development, but they are hardly beneficiaries of the drug’s advantages, however fleeting. In Zambia, save at the University Teaching Hospital, clinicians don’t even bother trying to diagnose cryptosporidiosis in children with diarrhea. Nitazoxanide is not licensed for use in the country. Five years after the hospital had run the trial for Romark, they still had no supply of the drug.65
3
Growing the Pharma Monolith
Jill Weschler is a gray-haired, slightly hunched woman with a jokey manner. As an editor at Pharmaceutical Executive magazine she’s well aware of the clinical research industry’s reputation. Bioethicists think CROs are the “incarnation of evil,” she says conspiratorially, in a nasal voice. “I mean, the gang at the New England Journal of Medicine think no trial should be done unless [health activist] Sid Wolfe does it!” She laughs. “I mean, who is clean enough?”
Weschler’s point echoed a consensus that quickly emerged among the CRO investigators and executives chatting informally after Wurzlemann’s talk in Washington, DC. The bad reputation is unfair, because subjects in industry trials are lucky. If patients are poor and medicine deprived, running a drug experiment on them is positively an act of charity. Isn’t it more ethical, one demanded, “if patients are not getting any treatment that they are in clinical trials, if this is the only way they can get treatment?” “I was criticized for doing a Shigella trial,” a former researcher for Schering commiserated. Shigella is a diarrhea-inducing bacterium that kills one million people around the world every year.1 “They said you are taking advantage! But without that trial, those children would be dead!”2
The notion that clinical trials are not a burden for subjects but rather a fortunate opportunity to access new drugs pervades the clinical research industry. It stems, in part, from an underlying faith in the system of drug development to reliably churn out new drugs that are safe, effective, and useful. And yet, our patchwork regulations do little to ensure that this is the case. Rather than shaping the industry to reliably produce socially beneficial medicines, regulations have generally been applied in fits and starts in the wake of drug-induced disasters. The industry has never been reined in coherently by law or incentive to produce the medicines we most need, at prices we can afford, and in recent years, many of our most stringent regulations and oversight mechanisms ensuring safety and efficacy have deteriorated in the shadow of an ever-growing pharma monolith.
That isn’t to say that all new drugs are dangerous and ineffective. But it may not be feasible to rely on the drug-development system to ensure that the benefits of experimentation outweigh the risks. What’s more, trends in the industry suggest that the margin of benefit for new drugs is rapidly shrinking, while the risks of experimentation remain constant or are even growing. And when there is a gap between risks and benefits, the global poor who are the subjects of today’s body hunt pay the price.
Today, even though they aren’t regulated as such, pharmaceutical products are valued in Western society as life-saving necessities like electricity and clean water, underpinning society’s unspoken toleration of the industry’s growing hunt for bodies. That wasn’t always the case.
For much of their first century of existence, drug companies were considered vaguely contemptible snake-oil peddlers. It was a fair enough assessment. Since opening their doors in the mid to late 1800s, drugmakers like Eli Lilly and Merck flogged mysterious “secret formulas,” their actions and ingredients known only by catchy slogans and advertising jingles.3 According to an 1885 survey, the main ingredients in these medicines were quinine and morphine. Merck sold cocaine; Bayer sold heroin.4 Alcohol diluted with water was sold as a cure for colds, congestion, and tuberculosis. Only after these unregulated medicines had killed thousands of Americans, including countless infants who were given opiates, did Congress pass the 1906 Food and Drug Act, requiring drugmakers to list ingredients on their product labels.5
True “magic bullet” drugs, medicines that were selectively toxic rather than just diluted poisons, didn’t emerge until 1932.6 Sulfanilamide, a compound found in red textile dye, was the first drug that prevented bacterial cells from multiplying, allowing the host’s immune system to destroy them. Laying waste to streptococcus, pneumonia, meningitis, and gonorrhea, sulfanilamide was dubbed by the New York Times “the drug which has astounded the medical profession.”7
When one hundred children died from a sulfa concoction—Massengill had dissolved the drug in a sweet but poisonous solvent—Congress passed the Food, Drug, and Cosmetic Act of 1938, requiring toxicity tests for new drugs.8 Not long afterward sulfa drugs were rendered ineffective by resistant strains of bacteria, and were replaced by penicillin, a drug with profound bacteria-killing properties. Penicillin’s debut marked what Hilts aptly dubbed “the beginning of the faith.”9 Tuberculosis, already on the decline, was decimated. Syphilis, it appeared, might be stamped out as well. So effective was the drug against syphilis that the city of New York distributed the drug for free in its venereal diseases clinics starting in 1943. Within less than a decade the rate of syphilis infection in the United States had been quartered.10
Penicillin and the more potent antibiotics that followed it elevated public perception of drugmakers’ products from snake oils to social goods.11 Emboldened and now beloved, medical research—and the drug industry that relied upon it—stepped out of the shadows to become society’s darling. The budget of the National Institutes of Health swelled from $180,000 in 1945 to $874 million by 1970.12, 13 NIH research provided new ideas and approaches for drug companies, and the resulting breakthroughs in industry labs garnered drug company scientists the Nobel Prize in medicine in both 1950 and 1952.14 There appeared to be no disease that drug companies in partnership with medical science could not surmount, if nourished with enough time and funding.15
When it came to pricing, the drugs that emerged from this frenzy of medical research were not considered ordinary commodities. New laws banned the advertisement of drug prices and reserved control over the use of the most potent drugs to physicians, rather than the patients and their insurers who would be handed the bill. Thus liberated from the yoke of sticker shock, all concerned consumed the novel meds—the new American birthright—in ever greater quantities.
By 1957, the drug industry was the most profitable industry in the country, with profit margins double the national average: 19 percent of investment after taxes. Drug sales “were unlike anything seen in the history of sales,” wrote Hilts.16
The thalidomide disaster marked a turning point for the growing drug industry. While the scandal certainly revealed the folly of relying on lightly regulated, profit-seeking drug companies to protect public health, the legislation it sparked didn’t require the industry to re-orient itself toward society’s good health. Instead, the new rules required the industry to vastly step up the experimental activities held in such high esteem. Now the hunt for experimental bodies would begin in earnest.
The German company Chemie Grunenthal first started selling the sedative thalidomide under the trade name Contergan in 1957, claiming it was as powerful as a barbiturate but with no noticeable side effects.17 In Europe and Africa Grunenthal promoted Contergan as being “as safe as mother’s milk.”18 Thousands acquired the drug over the counter. Soon a small company called Vick Chemical and its subsidiary Richardson-Merrell decided to market the drug to pregnant women as a nausea treatment. The regulatory hurdles were not particularly taxing. No drugmaker had to prove in any way that their drug actually worked.
Richardson-Merrell started testing the drug’s toxicity in animals. The results could not have been comforting: six of the eleven mice died; twenty-two out of thirty rats died; the dog died.19 Reports also started trickling in from Europe that thalidomide was poisoning patients’ nerves, resulting in tingling, numbed limbs, suggesting that the drug was penetrating the blood-brain barrier and could likewise cross into the placenta in pregnant women.20 Nevertheless, the company launched a major clinical trial in early 1960, shipping the drug to over one thousand American doctors to administer to about twenty thousand of their patients. The company arranged for an obstetrician to sign off on a paper they had written about the drug, which would be published in the medical literature. The FDA, not convinced that the drug was truly safe, held up Richardson-Merrell’s application, but American docs involved in the clinical trial continued to receive their thalidomide samples in the mail.21
Meanwhile, doctors outside the United States had grown increasingly alarmed at the rash of babies born with a once rare condition called phocomelia, in which hands and feet sprout directly from the body, like seal flippers. Many of the babies had no openings for ears, deformed intestines, and no bowel openings.22 In 1961, an Australian obstetrician connected the outbreak to the use of thalidomide. It turned out that if taken even in a single dose during the first trimester of pregnancy, thalidomide could radically deform fetuses. The German news media splashed the story on its front pages.23
The American press didn’t pick up the story until eight months later. By then, about forty babies had been born with phocomelia in the United States, including a handful under the care of the obstetrician who had signed off on the pro-thalidomide company medical paper. The FDA sent out investigators to recall thalidomide doses from doctors’ offices, but the recall was a disaster. Most of the doctors hadn’t even kept records of how much thalidomide they had received or doled out, and few managed to track down the doses from their patients.
Now lawmakers were forced to act. The bungled recall helped sweep a bill on drug regulation through Congress, heralded by President Kennedy for allowing “the immediate removal from the market of a new drug where there is an immediate hazard to public health.”24 But the 1962 amendments to the Food, Drug, and Cosmetic Act required much more than quicker recalls of dangerous drugs. New drugs would not only have to prove themselves safe before being allowed on the market, they’d have to prove themselves effective as well. Anecdotal evidence or expert opinion wouldn’t suffice, either; only randomized controlled trials in humans that proved a drug statistically better than a placebo would do. Companies would have to test their experimental drugs on animals first, then inform the FDA and secure consent from patients before embarking on human trials. Makers of drugs approved between 1938 and 1962 would have to submit evidence retroactively showing that their products worked or risk having them forcibly banned from the market.25