While the Walter Reed doctors were self-testing, Chloromycetin was also getting a field test, one that, given Parke-Davis’s history, was taking place in South America. In late November 1947, one of the company’s clinical investigators, Dr. Eugene Payne, had arrived in Bolivia, which was then suffering through a typhus epidemic that was killing between 30 and 60 percent of its victims. Payne brought virtually all the chloramphenicol then available in the world (about 200 grams, enough to treat about two dozen patients) and set up a field hospital in Puerto Acosta. Twenty-two patients, all Aymara Indians, were selected for treatment, with another fifty as controls. The results were very nearly miraculous. Within hours, patients who had started the day with fevers higher than 105° were sitting up and asking for water. Not a single treated victim died. Of the fifty members of the control group, only thirty-six survived, a mortality rate of nearly 30 percent.
It was the first of many such field tests. In January 1948, Smadel and a team from Walter Reed recorded similar success in Mexico City. Two months later, they did the same in Kuala Lumpur. Along the way, they discovered that Chloromycetin was effective against the North American rickettsial disease known as Rocky Mountain spotted fever (which can kill more than 20 percent of untreated victims) and the chlamydial disease known variously as parrot fever or psittacosis. They also found, more or less accidentally*—a patient with typhuslike symptoms turned out to have typhoid instead—that Chloromycetin cured it as well.
The model that had been pioneered by penicillin, and refined for streptomycin and the tetracyclines, was now a well-oiled industrial machine: Microbiologists make a discovery, chemists refine it, and physicians demonstrate its effectiveness in animals and humans. It was time to gear up for industrial production of the new miracle drug. Though the Rebstock experiments had shown how to synthesize the drug (and an improved method had been patented by other Parke-Davis chemists), the drug was still being produced through 1949 both by fermentation and synthesis, the former in a 350,000-square-foot building containing vats originally built to cultivate streptomycin and penicillin.
The process had come a considerable way since those early experiments at the Dunn, and even Pfizer’s converted Brooklyn ice factory. A rail line was built directly to the plant, and raw materials arrived on a siding, just as if the railroad were delivering steel for an automobile factory. Every week Parke-Davis’s workforce unloaded tanker cars full of nutrients like wheat gluten, glycerin, and large quantities of salt, along with sulfuric acid, sodium bicarbonate, amyl acetate, and deionized water. The S. venezuelae cultures they were intended to feed were produced in separate laboratories, where they were stored in sterilized earth until needed, cultured on demand, suspended in a solution of castile soap, and held in refrigerators.
The feeding process was just as industrial. Nutrient solution was poured into seven 50-gallon steel tanks plated with nickel and chromium as anticorrosives, and then sterilized by heating to 252°. Streptomyces venezuelae was then injected, the stew agitated using the same washing-machine technique pioneered at the Northern Lab only a few years before, and held at a controlled temperature of 86° for twenty-four hours. The whole mix was then transferred to 500-gallon tanks, and then to 5,000-gallon tanks—each one seventeen feet high and nearly eight feet in diameter, there to ferment.
Following fermentation, the broth was filtered to remove the no-longer-needed S. venezuelae bacteria, reducing 5,000 gallons of fermentation broth to 900 gallons of amyl acetate extract; that was evaporated down to 40 gallons, then separated and condensed into 2 gallons of solution, from which, after more than three weeks and the labor of hundreds of Parke-Davis chemists, engineers, and technicians, the antibiotic crystals could finally be extracted.
On December 20, 1948, Parke-Davis submitted New Drug Application number 6655 to the Food and Drug Administration, asking that the agency approve chloramphenicol and allow the company to bring it to market. On January 12, 1949, the FDA granted the request, authorizing it as “safe and effective when used as indicated.” On March 5, 1949, Collier’s magazine hailed it as “The Greatest Drug Since Penicillin.” By 1951, chloramphenicol represented more than 36 percent of the total broad-spectrum business, and Parke-Davis had it all to itself. The Detroit-based company had become the largest pharmaceutical company in the world, with more than $55 million in annual sales from Chloromycetin alone.
This was an enviable position. But also a vulnerable one.
—
“Blood dyscrasia” is an umbrella term for diseases that attack the complex system by which stem cells in the human bone marrow produce the blood’s cellular components: red cells (erythrocytes), white cells (leukocytes, including granulocytes), and platelets. Dyscrasias can be specific to one sort—anemia is a deficiency in red blood cells, leukopenia in white—or more than one. Aplastic anemia, a blood dyscrasia first recognized by Paul Ehrlich in 1888, refers to depletion of all of them: of every cellular blood component. The result is not just fatigue from a lack of oxygen distribution to cells, or rapid bruising, but a complete lack of any response to infection. Aplastic anemia effectively shuts down the human immune system.
During the first week of April 1951, Dr. Albe Watkins, a family doctor practicing in the Southern California suburb of Verdugo Hills, submitted a report to the Los Angeles office of the FDA. The subject was the aplastic anemia that he believed Chloromycetin had caused in his nine-year-old son James, who had received the antibiotic while undergoing kidney surgery, and several times thereafter. On April 7, the LA office kicked it up to Washington, and the agency took notice.
Meanwhile Dr. Watkins, a veteran of the Coast Guard and the U.S. Public Health Service, was making Chloromycetin his life’s work: writing to the Journal of the American Medical Association and to the president and board of directors of Parke-Davis. His passion was understandable; in May 1952, James Watkins died. Dr. Watkins closed his practice and headed east on a crusade to bring the truth to the FDA and AMA. In every small town and medium-sized city in which he stopped, he called internists, family physicians, and any other MD likely to have prescribed Chloromycetin, carefully documenting their stories.
Albe Watkins was the leading edge of a tidal wave, but he wasn’t alone. In January 1952, Dr. Earl Loyd, an internist then working in Jefferson City, Missouri, had published an article in Antibiotics and Chemotherapy entitled “Aplastic Anemia Due to Chloramphenicol,” a title that fairly gave away its conclusion. Through the first half of 1952, dozens of clinical reports and even more newspaper articles appeared, almost every one documenting a problem with Chloromycetin. Many of them all but accused Parke-Davis of murdering children.
To say this was received with surprise at Parke-Davis’s Detroit headquarters is to badly understate the case. During the three years that Chloromycetin had been licensed for sale, it had been administered to more than four million people, with virtually no side effects.
In the fall of 1952, Albe Watkins made it to Washington, DC, and a meeting with Henry Welch, the director of the FDA’s Division of Antibiotics. Dr. Watkins demanded action. He was trying to kick down a door that had already been opened; Welch had already initiated the first FDA-run survey of blood dyscrasias.
The survey’s findings were confusing. Detailed information on 410 cases of blood dyscrasia had been collected, but it wasn’t clear that chloramphenicol was the cause of any of them. In 233 cases, more than half, the disease had appeared in patients who had never taken the drug. In another 116, additional drugs, sometimes five or more, had been prescribed. Only 61 of the victims had taken chloramphenicol alone, and all of them were, by definition, already sick. The researchers had a numbers problem: Aplastic anemia is a rare enough disease that it barely shows up in populations of fewer than a few hundred thousand people. As a result, the causes of the disease were very difficult to identify in the 1950s (and remain so today).
Finally, just to further complicate cause and effect, chloramphenicol-caused aplastic anemia, if it existed at all, wasn’t dose dependent. This was, to put it mildly, rare; ever since Paracelsus, medicine had recognized that “the dose makes the poison.” It not only means that almost everything is toxic in sufficient quantities; it also means that virtually all toxic substances do more damage in higher concentrations. This dose-response relationship was as reliable for most causes of aplastic anemia as for any other ailment. Benzene, for example, which is known to attack the bone marrow, where all blood cells are manufactured, is a reliable dose-related cause of aplastic anemia; when a thousand people breathe air containing benzene in proportions greater than 100 parts per million, aplastic anemia will appear in about ten of them. When the ratio of benzene to air drops below 20 parts per million, though, the incidence of the disease falls off dramatically: only one person in ten thousand will contract it.
Not chloramphenicol, though. A patient who was given five times more of the drug than another was no more likely to get aplastic anemia. Nor was the drug, like many of the pathogens it was intended to combat, hormetic—that is, it wasn’t beneficial in small doses and dangerous only in higher ones. The effect was almost frustratingly random. Some people who took chloramphenicol got aplastic anemia. Most didn’t. No one knew why.
Even so, Chester Keefer of Boston University, the chairman of the Committee on Chemotherapeutics and Other Agents of the National Research Council during the Second World War (and the man who had been responsible for penicillin allocation), “felt that the evidence was reasonably convincing that chloramphenicol caused blood dyscrasias [and that] it was the responsibility of each practicing physician to familiarize himself with the toxic effects of the drug.” In July, after recruiting the NRC, a branch of the National Academy of Sciences, to review the findings, FDA Deputy Commissioner George Larrick phoned Homer Fritsch, an executive vice president at Parke-Davis, to tell him, “We can’t go on certifying that the drug is safe.”
Fritsch might have been concerned that the FDA was preparing to ban Chloromycetin. He needn’t have worried, at least not about that. At the FDA’s Ad Hoc Conference on Chloramphenicol, virtually every attendee believed the drug’s benefits more than outweighed its risks. Even Maxwell Finland, who had found the early reports on Aureomycin to be overly enthusiastic, endorsed chloramphenicol’s continued use. The Division of Antibiotics recommended new labeling for the drug, but no restrictions on its distribution. Nor did it recommend any restriction on the ability of doctors to prescribe it as often, and as promiscuously, as they wished. The sacrosanct principle of noninterference with physician decisions remained.*
If this sounds like a regulatory agency punting on its responsibility, there’s a reason. Even with the reforms of 1938, which empowered the FDA to remove a product from sale, the authority to do so had rarely been used. Instead, the agency response, even to a life-threatening or health-threatening risk, was informational: to change the labeling of the drug. In 1953, the FDA issued a warning about the risk of aplastic anemia in the use of chloramphenicol, but offered no guidelines on prescribing.
The result was confusion. A 1954 survey by the American Medical Association, in which 1,448 instances of anemia were collected and analyzed, found “no statistical inferences can be drawn from the data collected.” And, in case the message wasn’t clear enough, the AMA concluded that restricting chloramphenicol use “would, in fact, be an attempt to regulate the professional activities of physicians.”
Except the “professional activities of physicians” were changing so fast as to be unrecognizable. The antibiotic revolution had given medicine a tool kit that—for the first time in history—actually had some impact on infectious disease. Physicians were no longer customizing treatments for their patients. Instead, they had become providers of remedies made by others. Before penicillin, three-quarters of all prescriptions were still compounded by pharmacists using physician-supplied recipes and instructions, with only a quarter ordered directly from a drug catalog. Twelve years later, nine-tenths of all prescribed medicines were for branded products. At the same moment that their ability to treat patients had improved immeasurably, doctors had become completely dependent on others for clinical information about those treatments. Virtually all of the time, the others were pharmaceutical companies.
This isn’t to say that the information coming from Parke-Davis was inaccurate, or that clinicians didn’t see the drug’s effectiveness in their daily practice. Chloromycetin really was more widely effective than any other antibiotic on offer: It worked on many more pathogens than penicillin and had far fewer onerous side effects than either streptomycin, which frequently damaged hearing, or tetracycline, which was hard on the digestion.* Chloramphenicol, by comparison, was extremely easy on the patient, with all the benefits and virtually none of the costs of any of its competitors.
Nonetheless, because of the National Research Council report, and the consequent labeling agreement, the company’s market position took a serious tumble. Sales of Chloromycetin, which accounted for 40 percent of the company’s revenues and nearly three-quarters of its profits, fell off the table. Parke-Davis had spent $3.5 million on a new plant in Holland, Michigan, built exclusively to make the drug; in the aftermath of the report, the plant was idled. The company had to borrow money in order to pay its 1952 tax bill. In September 1953, Fortune magazine published an article that described the formerly dignified company as “sprawled on the public curb with an inelegant rip in its striped pants.”
Parke-Davis attempted to take the high road, publishing dozens of laudatory studies and estimating the risk of contracting aplastic anemia after taking Chloromycetin at anywhere from 1 in 200,000 to 1 in 400,000.* But the company was fighting with the wrong weapons. In any battle between clinical reports of actual suffering and statistical analyses, the stories were always going to win, especially when the disease in question tends to strike otherwise healthy children and adolescents. The papers, articles, and newspaper reports of the day reveal how scattered and anecdotal the reports of aplastic anemia were: Dr. Louis Weinstein of Massachusetts gave a speech before his state medical society in which he revealed he’d heard of—heard of—forty cases. The Los Angeles Medical Association reported on two cases, one fatal. Albe Watkins, when he made his famous visit to Henry Welch at the FDA, had collected only twelve documented cases.
Even the objective statistics were problematic. In 1949, the year chloramphenicol was approved for sale, the most reliable number of reported cases of aplastic anemia was 638. Two years later—after millions of patients had received the antibiotic, but before Albe Watkins began his crusade—the number was 671. That increased to 828 in 1952, but most of the 23 percent increase in a single year was almost certainly due to heightened awareness of a disease most physicians—including Albe Watkins—had never encountered before. Even more telling: The increase in blood dyscrasias where chloramphenicol was involved was no greater than where it wasn’t. That is, aplastic anemia was on the increase with or without chloramphenicol.
Chloromycetin’s cause-and-effect relationship with aplastic anemia may have been tenuous, but its competitors had no such relationship at all. Parke-Davis was thus compelled to face the uncomfortable fact that Terramycin and Aureomycin had a similar spectrum of effectiveness, were produced by equally respected companies, and, rightly or wrongly, weren’t being mentioned in dozens of newspaper articles and radio stories as killers of small children.
What really put Parke-Davis in the FDA’s crosshairs weren’t the gory newspaper headlines, or, for that matter, the NRC study. It was the company’s detail men.
The etymology of “detail man” as a synonym for “pharmaceutical sales rep” can’t be reliably traced back much earlier than the 1920s. Though both patent medicine manufacturers and ethical pharmaceutical companies like Abbott, Squibb, and Parke-Davis employed salesmen from the 1850s on, their job was unambiguous: to generate direct sales. As such, they weren’t always what you might call welcome; in 1902, William Osler, one of the founders of Johns Hopkins and one of America’s most famous and honored physicians, described “the ‘drummer’ of the drug house” as a “dangerous enemy to the mental virility of the general practitioner.”
When Osler wrote that, however, he was describing a model that was already on its way out. Though doctors in the latter half of the nineteenth century frequently dispensed drugs from their offices (and so needed to order them from “drummers”), by the beginning of the twentieth they were far more likely to supply their patients through local pharmacies. Pharmaceutical companies, in response, directed their sales representatives to “detail” them—that is, to provide doctors with detailed information about the company’s compounds. By 1929, the term was already in wide circulation; an article in the Journal of the American Medical Association observed, “in the past, when medical schools taught much about drugs and little that was scientific about the actions of drugs [that is, doctors never learned why to choose this drug rather than that one], physicians were inclined to look to the pharmaceutic [sic] ‘detail man’ for instruction in the use of medicines.”