Miracle Cure

by William Rosen


  And so it went, until November 29, 1961, when Chemie Grünenthal sent Richardson-Merrell the first reports of phocomelia—“seal limb,” a birth defect that caused stunted arms and legs, fused fingers and thumbs, and death; mortality rates for the condition approached 50 percent. Phocomelia wasn’t, in the 1960s, unknown. But it had been an extremely rare genetic disorder, with fewer than a thousand reported cases worldwide. No longer. Eight West German pediatric clinics had reported no cases of phocomelia from 1954 through 1958. In 1959, they reported 12. In 1960, there were 83. In 1961, 302. No one needed a sensitive statistical test to tease out the cause.* The mothers of the malformed infants had all taken thalidomide.

  By the time the drug was removed from sale at the end of 1961, hundreds of thalidomide babies were struggling for life. Just as horrifying: Tens of thousands of expectant mothers who had taken the sedative spent the last months of their pregnancies consumed by a completely rational fear of how they would end. By the time the last exposed mothers gave birth, the total number of phocomelic infants exceeded ten thousand. Thanks entirely to Frances Kelsey’s stubbornness, fewer than thirty of them had been born in the United States.

  The reason for even that small number was that Richardson-Merrell had recruited physicians for “investigational use” of the drug prior to FDA approval, a practice not merely permissible but expressly sanctioned by the existing 1938 Food, Drug, and Cosmetic Act. As a result, when the company withdrew its application at the end of 1961, the long tail of thalidomide risk hadn’t yet run its course. Kelsey, very much aware of this, sent the company a letter asking whether any quantity of Kevadon/thalidomide was still in the hands of physicians. The company was unable to provide anything but an embarrassingly incomplete answer; it had distributed more than 2.5 million thalidomide pills to more than a thousand doctors in the United States and had utterly failed to maintain adequate records of who, when, and how much. Most of the expectant mothers in the United States who had been given the sedative by their physicians hadn’t even been told that the drug was experimental.

  Despite the tragic stories of victims, and the embarrassing revelations about the holes in the approval process—in 1960 alone, the FDA had received thousands of applications, and it was only by great good luck that Frances Kelsey was the one to whom the Kevadon application had been assigned—thalidomide didn’t really become a scandal until July 15, 1962, when Morton Mintz of the Washington Post published a front-page story, with the headline: “‘HEROINE’ OF FDA KEEPS BAD DRUG OFF MARKET.” Its first sentence read:

  This is the story of how the skepticism and stubbornness of a Government physician prevented what could have been an appalling American tragedy, the birth of hundreds or indeed thousands of armless and legless children.

  The Post story generated hundreds of comments and opinion pieces throughout the country. On August 8, 1962, Frances Kelsey was honored with the President’s Award for Distinguished Federal Civilian Service; in the words of Senator Kefauver, she had exhibited “a rare combination of factors: a knowledge of medicine, a knowledge of pharmacology, a keen intellect and inquiring mind, the imagination to connect apparently isolated bits of information, and the strength of character to resist strong pressures.” Within weeks, S. 1552 was taken off life support, and on August 23 the House and Senate passed the Kefauver-Harris Amendments (the bill had been introduced in the House of Representatives by Oren Harris of Arkansas). On October 10, 1962, Public Law 87-781, an “Act to protect the public health by amending the Federal Food, Drug, and Cosmetic Act to assure the safety, effectiveness, and reliability of drugs,” was signed into law by President John F. Kennedy. Standing behind him for the traditional signing photo was Frances Oldham Kelsey.

  —

  Kefauver-Harris wasn’t the first major piece of federal legislation to recognize that the world of medicine had been utterly transformed since 1938. In 1951, Senator Hubert Humphrey of Minnesota and Representative Carl Durham of North Carolina—both of whom, not at all coincidentally, had been pharmacists before entering political life—cosponsored another amendment that drew, for the first time, a clear distinction between prescription drugs and those sold directly to patients.

  Frances Oldham Kelsey (1914–2015) receiving the President’s Award for Distinguished Federal Civilian Service from President John F. Kennedy. Credit: National Institutes of Health/National Library of Medicine

  Until the 1950s, the decision to classify a drug either as a prescription drug, requiring physician authorization, or as what is now known as an over-the-counter medication was entirely at the discretion of the drug’s manufacturer. This was one of the longer-lasting corollaries of the nineteenth-century principle that, because of the sanctity of consumer choice, people had an inalienable right to self-medicate. As a result, the decision to classify a drug as prescription only was just as likely to be made for marketing advantage as for safety: An American drug company could, and did, decide that prices could be higher on compounds that were sanctioned by physicians. Predictably, therefore, the same compound that Squibb made available by prescription only could be sold over-the-counter by Parke-Davis.

  After Humphrey-Durham, any drug that was believed by the FDA to be dangerous enough to require supervision or likely to be habit forming, or any new drug approved under the safety provision of the 1938 act, would be available only by prescription; further, the drug and any refills were required to carry the statement “Federal law prohibits dispensing without prescription.” All drugs that could be sold directly to consumers, on the other hand, had to include adequate directions for use and appropriate warnings, which is why even a bottle of ibuprofen tells users to be on the lookout for the symptoms of stomach bleeding.

  Humphrey-Durham was intended to protect pharmacists from prosecution for violating the many conflicting and ambiguous laws about dispensing drugs. By the end of the 1940s, American pharmaceutical companies were selling more than 1,500 barbiturates, all basically the same, but the regulations governing them barely deserved to be called a patchwork. Thirty-six states required prescriptions; twelve didn’t. Fifteen either prohibited refills or allowed them only with a prescription. And while some pharmacists viewed this as a loophole through which carloads of pills could be driven—one drugstore in Waco, Texas, dispensed more than 45,000 doses of Nembutal, none of them by prescription—others were arrested for what amounted to little more than poor record keeping. Even where pharmacies weren’t attempting to narcotize entire cities, risks didn’t vanish. A Kansas City woman refilled her original prescription (for ten barbiturate pills) forty-three times at a dozen different pharmacies before she was discovered in her home, dead, partially eaten by rats.

  In the bill’s original draft, the sponsors of Humphrey-Durham had tried to provide more than just a “clear-cut method of distinguishing between ‘prescription drugs’ . . . and ‘over-the-counter drugs.’” In the second draft, the FDA administrator, “on the basis of opinions generally held among experts qualified by scientific training and experience to evaluate the safety and efficacy of such drug,” was charged with deciding whether a drug was unsafe or ineffective without professional supervision.

  By the time the amendment was signed, however, any language about effectiveness had been negotiated away. In 1951, there was no constituency among patients, physicians, or pharmaceutical companies urging the FDA to evaluate effectiveness. Yet what had been untenable in 1951 became the law of the land in 1962. New drugs, finally, would have to be certified not merely as safe, but as effective.

  The new law didn’t restrict itself to new compounds. The 1962 amendment also required the FDA to review every drug that had been introduced between 1938 and 1962 and assign it to one of six categories: effective; probably effective; possibly effective; effective but not for all recommended uses; ineffective as a fixed combination; and ineffective. Chloramphenicol, for example, was designated as “probably effective” for meningeal infections, “possibly effective” for treatment of staph infections, and, because of the risk of aplastic anemia, “effective but . . .” for rickettsial diseases like typhus, and for plague. The review process, known as DESI (for Drug Efficacy Study Implementation), began in 1966, when the FDA contracted with the National Research Council to evaluate four thousand of the sixteen thousand drugs that the agency had certified as safe between 1938 and 1962.* Nearly three hundred were removed from the market.

  In 1963, Frances Kelsey was named to run one of five new branches in the FDA’s Division of New Drugs, the Investigational Drug Branch (now known as the Office of Scientific Investigations). She was tasked with turning the vague language of the Kefauver-Harris Amendments into a rule book. The explicit requirements of the law weren’t actually all that explicit. It required “substantial evidence” of effectiveness that relied on “adequate and well-controlled studies” without actually defining either term. Like the 1938 act, which called for only “adequate tests by all methods reasonably applicable,” the amendments didn’t specify any particular criterion for evaluating either safety or efficacy. It was a statement of goals, not strategies.

  Determining which strategies would be most effective was the next step. Though Bradford Hill’s streptomycin trials of 1946 had demonstrated the immense hypothesis-testing value of properly designed randomized experiments, ten years later nearly half of the so-called clinical trials being performed in the United States and Britain still didn’t even have control groups. Though one pharmaceutical company executive after another had appeared before the Kefauver investigators to claim that the huge sums invested in clinical research justified high drug prices, they were spending virtually all of their research dollars on the front end of the process: finding likely sources for antibiotics, for example, then extracting, purifying, synthesizing, and manufacturing them. The resources devoted to discovering whether they actually worked outside the lab were minuscule by comparison: essentially giving away free samples to physicians and collecting reports of their experience. As Dr. Louis Lasagna, head of the Department of Clinical Pharmacology at Johns Hopkins, had told the Kefauver committee, controlled comparisons of drugs were “almost impossible to find.”

  Frances Kelsey wasn’t any more inclined to accept the status quo than she was to believe the “meaningless pseudoscientific jargon” that Richardson-Merrell had offered in support of its thalidomide application. In January 1963, even before she was named to head the Investigational Drug Branch, Kelsey presented a protocol for reviewing what was now termed an “Investigational New Drug.” The new system would require applicants for FDA approval to present a substantial dossier on any new drug along with their initial application. Each IND, in Kelsey’s proposed system, would need to provide information on animal testing, for example—not just toxicity, but effectiveness. Pharmaceutical companies would be obliged to share information about the proposed manufacturing process, and about the chemical mechanism by which they believed the new drug offered a therapeutic benefit. And, before any human tests could begin, applicants would have to guarantee that an independent committee at each institution where the drug was to be studied would certify that the study was likely to have more benefits than risks; that any distress for experimental subjects would be minimized; and that all participants gave what was just starting to be known as “informed consent.”*

  The truly radical transformation, however, was what the FDA would demand of the studies themselves. Kelsey’s new system specified three sequential investigative stages for any new drug. The first, phase 1 clinical trials, would be used to determine human toxicity by providing escalating doses to a few dozen subjects in order to establish a safe dosage range. Compounds that survived phase 1 would then be tested on a few hundred subjects in a phase 2 clinical trial, intended to discover whether the drug’s therapeutic effect—if any—could be shown, statistically, to be more than pure chance. The final hurdle set out by the 1963 regulation, a phase 3 trial, would establish the new drug’s value in clinical practice: its safety, effectiveness, and optimum dosage schedules. Phase 3 trials would, therefore, require larger groups, generally a few thousand subjects, tested at multiple locations. At the latter two stages, but especially the third, the FDA gave priority to studies that featured randomization, along with experimental and control “arms.” If the new drug was intended to treat a condition for which no standard treatment yet existed, it could be compared ethically against a placebo. If, as was already the case for most infections and an increasing number of other diseases, a treatment already existed, studies would be obliged to test for “non-inferiority,” which is just what it sounds like: whether the effectiveness of the new treatment isn’t demonstrably inferior to an existing one. In either case, the reviewers at the FDA would be far more likely to grant approval if the two arms in an approved study were double-blinded, with neither the investigators nor subjects aware of who was in the experimental or control groups.
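
  The “non-inferiority” comparison described above is, at bottom, a one-sided statistical test on two cure rates. What follows is a minimal sketch of that logic in Python; the cure rates, sample sizes, and ten-point margin are hypothetical, invented for illustration rather than drawn from any actual trial.

```python
# Illustrative non-inferiority comparison: is the new drug's cure rate no
# more than `margin` worse than the standard treatment's? (One-sided z-test;
# all figures below are hypothetical.)
import math

def noninferiority_z_test(cures_new, n_new, cures_std, n_std, margin):
    """Test H0: p_new <= p_std - margin  versus  H1: p_new > p_std - margin."""
    p_new, p_std = cures_new / n_new, cures_std / n_std
    # Standard error of the difference between the two observed cure rates
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_std * (1 - p_std) / n_std)
    z = (p_new - p_std + margin) / se
    p_value = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided P(Z > z)
    return z, p_value

# Hypothetical trial: the new drug cures 328 of 400 patients (82 percent),
# the standard treatment 340 of 400 (85 percent), with a 10-point margin.
z, p = noninferiority_z_test(328, 400, 340, 400, margin=0.10)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```

  A small p-value here rules out the new drug being worse than the standard by more than the margin, which is all a non-inferiority trial claims to show.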

  In February 1963, the commissioner of Food and Drugs approved Kelsey’s three-tiered structure for clinical trials. The process of pharmaceutical development would never be the same. It marked an immediate, though temporary, shift of power from pharmaceutical companies to federal regulators. Within weeks of the announcement of the new regulations, virtually every drug trial in the country, from the Mayo Clinic to the smallest pharmaceutical company, was reclassified into one of the three allowable phases. It allowed Frances Kelsey a remarkably free hand in exercising her authority to grant or withhold IND status; to her critics, this led to any number of cases in which she withheld classification based on nothing but a lack of faith in a particular investigator, or her judgment that the proposed drug was either ineffective or dangerous.*

  The new requirements, which would remain largely unchanged for at least the next fifty years, permanently altered the character of medical innovation.

  The method of validating medical innovation using randomized controlled trials had given the world of medicine a way of identifying the sort of treatments whose curative powers weren’t immediately obvious to clinicians (and, just as important, identifying those that seemed spectacular, but weren’t). Until 1963, however, RCTs had been a choice. The three phases of the newly empowered FDA made them a de facto requirement. Frances Kelsey’s intention was to use the objectivity of clinical trials to simultaneously protect the public and promote innovative therapies. It is unclear whether she understood the price.

  One of the underrated aspects of the wave of technological innovation that began with the first steam engines in the eighteenth century—the period known as the Industrial Revolution—was a newfound ability to measure the costs and benefits of even tiny improvements, and so make invention sustainable. Just as improvements in the first fossil-fueled machines could be evaluated by balancing the amount of work they did with the amount of fuel they burned, even small benefits of new drugs and other therapies could be judged using the techniques of double-blinding and randomization. Since almost all potential improvements are by definition small, medicine generally, and the pharmaceutical industry in particular, now had a method for sustaining innovation. No longer would progress wait on uncertain bursts of genius; discovery could now be systematized and even industrialized.

  However, there was a giant difference between the methods used to compare mechanical inventions and medical or pharmaceutical treatments. Engineers don’t need to try a new valve on a hundred thousand different pumps to see whether it improves on an existing design. But so long as the RCT was the gold standard for measuring improvement in a drug (or any health technology), small improvements in efficacy would require larger, more time-consuming, and costlier trials. By the arithmetic alone, the value of a treatment that is so superior to its predecessor that it saves ten times more people is apparent after only a few dozen tests. One that saves 5 percent more can require thousands. The smaller the improvement, the more expensive the testing would become.
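
  The arithmetic can be made concrete with the standard sample-size formula for comparing two proportions: the number of subjects needed in each arm grows with the inverse square of the difference between the two cure rates. Here is a rough sketch, assuming a conventional 5 percent significance level and 80 percent power, with cure rates invented to mirror the two cases above:

```python
# Back-of-the-envelope trial sizing via the standard two-proportion formula:
# n per arm = (z_alpha + z_beta)^2 * [p1(1-p1) + p2(1-p2)] / (p1 - p2)^2.
# All cure rates below are hypothetical, chosen only for illustration.
import math

Z_ALPHA, Z_BETA = 1.96, 0.8416  # normal quantiles: two-sided alpha = 0.05, power = 0.80

def patients_per_arm(p_old, p_new):
    """Approximate subjects needed per arm to detect a shift from p_old to p_new."""
    variance = p_old * (1 - p_old) + p_new * (1 - p_new)
    return math.ceil((Z_ALPHA + Z_BETA) ** 2 * variance / (p_old - p_new) ** 2)

# A tenfold jump in survival (5 percent -> 50 percent) shows up in a few dozen
# subjects; a 5 percent relative improvement (60 -> 63 percent) needs thousands.
print(patients_per_arm(0.05, 0.50))  # ~12 per arm
print(patients_per_arm(0.60, 0.63))  # ~4,126 per arm
```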

 
  This changed the calculus of discovery dramatically. Selman Waksman’s technique for finding new drugs—sifting through thousands of potential candidates in order to find a single winner—had already virtually destroyed the belief that a brilliant or lucky scientist, working alone (or, more likely, in a relatively small laboratory in a university or hospital), might find a promising new molecule. But demonstrating that a drug worked would, thanks to Frances Kelsey and Bradford Hill, make the process exponentially more expensive, and riskier. The same economies of scale that had been necessary for the manufacture of the first antibiotics were now required for finding and testing all the ones that would follow. Perversely, the Kefauver hearings, initiated and stage-managed by liberal politicians with no love for big business, had led inexorably to the creation of one of the largest and most profitable industries on the planet.

  Engineers calculate failure rates—sometimes they’re known as “failure densities”—to describe phenomena like the increasing probability over time that one of the components of an engine drivetrain will crack up. Pension companies use similar-looking equations to calculate life spans. Medical researchers use them to derive the survival probabilities of patients given different treatment regimens.
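
  The shared mathematics is simple enough to sketch. Under the simplest such model, a constant failure rate (hazard) λ implies a survival probability S(t) = e^(−λt) at time t, whether the thing “surviving” is a drivetrain component, a pensioner, or a patient. A minimal illustration, with hypothetical hazard rates:

```python
# Constant-hazard survival model: S(t) = exp(-hazard * t). The annual
# mortality hazards below are hypothetical, chosen only for illustration.
import math

def survival_probability(hazard_per_year, years):
    """Probability of surviving to `years` under a constant failure rate."""
    return math.exp(-hazard_per_year * years)

# Two hypothetical treatment regimens with different annual mortality hazards
for name, hazard in [("standard therapy", 0.20), ("new therapy", 0.12)]:
    print(f"{name}: five-year survival = {survival_probability(hazard, 5):.1%}")
```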

 
