
Miracle Cure


by William Rosen


  At the time, there were probably only about two thousand detail men in the United States. By the end of the 1930s, there were more of them, but the job itself hadn’t changed all that much. In 1940, Fortune magazine wrote an article (about Abbott Laboratories) that described the basic bargain behind detailing. In return for forgoing what was, in 1940, still a very lucrative trade in patent medicines, ethical drug companies were allowed a privileged position in their relationships with physicians. They didn’t advertise to consumers; their detail men didn’t take orders. They weren’t anything as low-rent as “salesmen.”

  Or so they presented themselves to the physicians whose prescription pads were the critical first stop on the way to an actual sale. In the same year as the Fortune article, Tom Jones (a detail man for an unnamed company) wrote a book of instructions for his colleagues, in which he cheerfully admitted, “Detailing is, in reality, sales promotion, and every detail man should keep that fact constantly in mind.”

  With the antibiotic revolution of the 1940s, the process of detailing, and the importance of the detail man, changed dramatically. A 1949 manual for detail men (in which they were described as “Professional Service Pharmacists”) argued, “The well-informed ‘detail-man’ is one of the most influential and highly respected individuals in the public-health professions. His niche is an extremely important one in the dissemination of scientific information to the medical, pharmaceutical, and allied professions. . . . He serves humanity well.”

  He certainly provided a service to doctors. In 1950, about 230,000 physicians were practicing in the United States, and the overwhelming majority had left medical school well before the first antibiotics appeared. This didn’t mean they hadn’t completed a rigorous course of study. The 1910 Flexner Report—a Carnegie Foundation–funded, American Medical Association–endorsed review of the 155 medical schools then operating in the United States—had turned medical education into a highly professional endeavor.* But while doctors, ever since Flexner, had been taught a huge number of scientific facts (one of the less than revolutionary recommendations of the report was that medical education be grounded in science), few had really been taught how those facts had been discovered. Doctors, then and now, aren’t required to perform scientific research or evaluate scientific results.

  Before the first antibiotics appeared, this wasn’t an insuperable problem, at least as it affected treating disease. Since so few drugs worked, the successful practice of medicine didn’t depend on picking the best ones. After penicillin, streptomycin, and chloramphenicol, though, the information gap separating pharmaceutical companies from clinicians became not only huge, but hugely significant. Detail men were supplied with the most up-to-date information on the effectiveness of their products—not only company research, but also reprints of journal articles, testimonials from respected institutions and practitioners, and even FDA reports. Doctors, except for those in academic or research settings, weren’t. In 1955, William Bean, the head of internal medicine at the University of Iowa College of Medicine, wrote, “A generation of physicians whose orientation fell between therapeutic nihilism and the uncritical employment of ever-changing placebos was ill prepared to handle a baffling array of really powerful compounds [such as the] advent of sulfa drugs, [and] the emergence of effective antibiotics. . . .”

  The detail man was there to remove any possibility of confusion. And if, along the way, he could improve his employer’s bottom line, all the better. As the same 1949 manual put it, “The Professional Service Pharmacist’s job is one of scientific selling in every sense of the word. . . . He must be a salesman first, last, and always.”

  In general, doctors in clinical practice thought the bargain a fair one. Detail men were typically welcomed as pleasant and well-educated information providers, who, incidentally, also provided free pens, lunches, and office calendars in quantity.* Parke-Davis, in particular, hired only certified pharmacists for its detailing force, and it was said that a visit from one of them was the equivalent of a seminar in pharmacology.

  In 1953, when the Chloromycetin story blew up, Harry Loynd was fifty-five years old, and had spent most of his adult life selling drugs, from his first part-time job at a local drugstore to a position as pharmacist and store manager in the Owl Drug Company chain. He joined Parke-Davis as a detail man in 1931, eventually rising to replace Alexander Lescohier as the company’s president in 1951. He was aggressive, disciplined, autocratic, impatient with mistakes, and possessed of enormous energy.

  However, unlike his predecessor or most of his fellow industry leaders, Loynd had little use for the medical profession. At one sales meeting, surrounded by his beloved detail men, he told them, "If we put horse manure in a capsule, we could sell it to 95 percent of these doctors." And when he said "sell," he didn't mean "advertise." Ads in magazines like JAMA were fine, in their place; they were an efficient way of reaching large numbers of physicians and other decision makers. But advertising wasn't able to build relationships, or counter objections, or identify needs. For that, there was nothing like old-fashioned, face-to-face selling. Parke-Davis wouldn't use the clever folks at the William Douglas McAdams ad agency. Loynd was a salesman through and through, and he believed that Parke-Davis's sales force wasn't just a source of its credibility with doctors, but its biggest competitive advantage.

  Even before the FDA announced its labeling decision, Loynd was spinning it as a victory, issuing a press release that said—accurately, if not exhaustively so—"Chloromycetin has been officially cleared by the FDA and the National Research Council with no restrictions [italics in original] on the number or the range of diseases for which Chloromycetin can be administered. . . ." Doctors all over the country received a letter using similar language, plus the implication that other drugs were just as complicit in cases of aplastic anemia as Chloromycetin. Most important: The Parke-Davis sales force was informed, apparently with a straight face, that the National Research Council report was "undoubtedly the highest compliment ever tendered the medical staff of our Company." Parke-Davis would use its detail men to retake the ground lost by its most important product.

  Loynd’s instinct for solving every problem with more and better sales calls was itself a problem. When management informs its sales representatives that they are the most important people in the entire company—Loynd regularly told his detail men that the only jobs worth having at Parke-Davis were theirs . . . and his—they tend to take it to heart. Though the company did all the expected things to get its detail men to tell doctors about the risks of Chloromycetin, even requiring every sales call to end with the drug’s brochure open to the page that advised physicians that the drug could cause aplastic anemia, there was only so much that could be done to control every word that came from every sales rep’s mouth. Detail men were salesmen “first, last, and always,” and more than 40 percent of their income came from a single product. Expecting them to emphasize risks over benefits was almost certainly asking too much.

  The FDA, which was asking precisely that, was infuriated. The agency's primary tool for protecting public safety was controlling the way information was communicated to doctors and pharmacists. It could review advertising and insist on specific kinds of labeling. It could do little, though, about what the industry—and especially Parke-Davis—regarded as its most effective communication channel: detail men. It's difficult to tell whether the FDA singled out Parke-Davis for special oversight. In one telling example, at a meeting set up at the FDA's regional office, a San Francisco physician accused two of the company's detail men of using deceptive statements to promote Chloromycetin. But there's no doubt that Parke-Davis believed it to be true.

  For the next five years, the company walked the narrow line between promoting its most important product and being the primary source of information about its dangers. By most measures, it did so extraordinarily well. Sales recovered—production of the drug peaked at more than 84,000 pounds in 1956—even as the company had to survive a second public relations nightmare. In 1959, doctors in half a dozen hospitals started noting an alarming rise in neonatal deaths among infants who had been given a prophylactic regimen of chloramphenicol because they were perceived to be at higher than normal risk of infection, usually because they were born prematurely. Those given chloramphenicol, either alone or in combination with other antibiotics such as penicillin or streptomycin, were dying at a rate five times higher than expected. The cause was the inability of some infants to metabolize and excrete the antibiotic. It's still not well understood why some infants had this inability, but in a perverse combination, the infants receiving chloramphenicol not only were the ones at most risk, but once they developed symptoms of what has come to be known as "gray baby syndrome"—low blood pressure, cyanosis, ashy skin color—they were given larger and larger doses of the drug. Gray babies frequently showed chloramphenicol blood levels five times higher than the accepted therapeutic level.*

  Gray baby syndrome was bad enough. Aplastic anemia was worse, and it was the risk of that disease that returned Chloromycetin to the news in the early 1960s. The new aplastic anemia scare was fueled in large part by the efforts of a Southern California newspaper publisher named Edgar Elfstrom, whose daughter had died of the disease after being treated—overtreated, really; a series of doctors prescribed more than twenty doses of Chloromycetin, one of them intravenously—for a sore throat. Elfstrom, like Albe Watkins before him, made opposition to chloramphenicol a crusade, and he had a much bigger trumpet with which to rally his troops. Watkins had been a well-respected but little-known doctor. Elfstrom was a media-savvy writer, editor, and newspaper publisher. He sued Parke-Davis and his daughter’s physicians; he wrote dozens of open letters to FDA officials, to members of Congress, to Attorney General Robert Kennedy, and to Abraham Ribicoff, the secretary of the Department of Health, Education, and Welfare. He even met with the president. As someone with easy access to the world of print journalism—Elfstrom wasn’t just a publisher himself, but a veteran of both UPI and the Scripps Howard chain of newspapers, with hundreds of friends at publications all over the country—he was able to give the issue enormous prominence. For months, stories appeared in both Elfstrom’s paper and those of his longtime colleagues, including a major series in the Los Angeles Times. They make heartbreaking reading even today: A teenager who died after six months of chloramphenicol treatment for acne. An eight-year-old who contracted aplastic anemia after treatment for an ear infection. Four-year-olds. Five-year-olds. A seventeen-year-old with asthma. The stories have a chilling consistency to them: a minor ailment, treatment with a drug thought harmless, followed by subcutaneous bleeding—visible and painful bruising—skin lesions, hemorrhages, hospitalization, a brief respite brought about by transfusions, followed by an agonizing death.

  The tragic conclusion to each of these stories is one reason that the chloramphenicol episode is largely remembered today as either a fable of lost innocence—the realization that the miracle of antibacterial therapy came at a profound cost—or as a morality tale of greedy pharmaceutical companies, negligent physicians, and impotent regulators. The real lessons are subtler, and more important.

  The first takeaway isn’t, despite aplastic anemia and gray babies, that antibiotics were unsafe; it’s that after sulfa, penicillin, streptomycin, and the broad-spectrum antibiotics, it wasn’t clear what “unsafe” even meant.

  For any individual patient, antibiotics were—and are—so safe that a busy physician could prescribe them every day for a decade without ever encountering a reaction worse than a skin rash. It’s worth recalling that, only fifteen years before the aplastic anemia scare, the arsenal for treating disease had consisted almost entirely of a list of compounds that were simultaneously ineffective and dangerous. The drugs available at the turn of the twentieth century frequently featured toxic concentrations of belladonna, ergot, a frightening array of opiates, and cocaine. Strychnine, the active ingredient in Parke-Davis’s Damiana et Phosphorus cum Nux, is such a powerful stimulant that Thomas Hicks won the 1904 Olympic marathon while taking doses of strychnine and egg whites during the race (and nearly died as a result). The revolutionary discoveries of Paul Ehrlich and others replaced these old-fashioned ways of poisoning patients with scarcely less dangerous mixtures based on mercury and arsenic. No doctor wanted to return to the days before the antibiotic revolution.

  But what was almost certainly safe for a single patient, or even all the patients in a single clinical practice, was just as certainly dangerous to someone. If a thousand patients annually were treated with a particular compound that had a 1 in 10,000 chance of killing them, no one was likely to notice the danger for a good long while. Certainly not most physicians. Eight years after Parke-Davis started affixing the first FDA-required warning labels to Chloromycetin, and even after the first accounts of gray baby syndrome, the Council on Drugs of the AMA found that physicians continued to prescribe it for “such conditions as . . . the common cold, bronchial infections, asthma, sore throat, tonsillitis, miscellaneous urinary tract infections . . . gout, eczema, malaise, and iron deficiency anemia.” The FDA had insisted on labeling Parke-Davis’s flagship product with a warning that advised physicians to use the drug only when utterly necessary, and that hadn’t even worked.
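
  To see just how invisible such a risk could be to any single practitioner, here is a minimal back-of-the-envelope sketch. The 1-in-10,000 fatality rate and the thousand-patients-a-year figure come from the paragraph above; the ten-year horizon and the rest of the framing are purely illustrative assumptions, not figures from the book.

```python
# Back-of-the-envelope sketch: how easily a rare, fatal side effect
# hides from any single observer. Numbers follow the paragraph above;
# the ten-year horizon is an illustrative assumption.

risk_per_patient = 1 / 10_000     # 1-in-10,000 chance of a fatal reaction
patients_per_year = 1_000         # patients treated annually in one practice

# Probability that the practice sees *no* fatal reaction in a given year.
p_none_in_a_year = (1 - risk_per_patient) ** patients_per_year

# Probability of seeing none over an entire decade of prescribing.
p_none_in_a_decade = p_none_in_a_year ** 10

print(f"Chance of no fatal reaction in one year:  {p_none_in_a_year:.1%}")   # ~90.5%
print(f"Chance of no fatal reaction in ten years: {p_none_in_a_decade:.1%}") # ~36.8%
```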

  Most clinicians simply weren’t suited by temperament or training to think about effects that appear only when surveying large populations. They treat individuals, one at a time. The Hippocratic Oath, in both its ancient and modern versions, enjoins physicians to care for patients as individuals, and not for the benefit of society at large. Expecting doctors to think about risk the same way as actuaries was doomed to failure, even as the first antibiotics changed the denominator of the equation—the size of the exposed population—dramatically. Tens of millions of infections were treated with penicillin in 1948 alone; four million people took Chloromycetin, almost all of them safely, from 1948 to 1950.

  But if doctors couldn't be expected to make rational decisions about risk, then who? If the chloramphenicol story revealed anything, it was just how poorly society at large performed the same task. As a case in point, while no more than 1 in 40,000 chloramphenicol-taking patients could be expected to contract aplastic anemia, a comparable percentage of patients who took penicillin—1 in 50,000—died from anaphylaxis due to an allergic reaction; and, in 1953, a lot more penicillin prescriptions were being written, every one of them without the skull-and-crossbones warning that the FDA had required on Parke-Davis's flagship product.
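
  The same rough arithmetic makes the comparison concrete. The per-patient risks and the four million Chloromycetin patients are the text's own figures; the 30 million penicillin exposures is an assumed stand-in for the "tens of millions" of treatments mentioned earlier, so the result is only a sketch of relative scale.

```python
# Rough comparison of expected rare-event counts (illustrative only).
# Per-patient risks and the Chloromycetin exposure are the text's figures;
# the penicillin exposure of 30 million is an assumption standing in for
# "tens of millions" of treatments.

aplastic_anemia_risk = 1 / 40_000     # chloramphenicol, per treated patient
fatal_anaphylaxis_risk = 1 / 50_000   # penicillin, per treated patient

chloromycetin_patients = 4_000_000    # "four million people took Chloromycetin"
penicillin_patients = 30_000_000      # assumed stand-in for "tens of millions"

expected_aplastic_cases = chloromycetin_patients * aplastic_anemia_risk
expected_penicillin_deaths = penicillin_patients * fatal_anaphylaxis_risk

print(f"Expected aplastic anemia cases (chloramphenicol): {expected_aplastic_cases:.0f}")   # ~100
print(f"Expected fatal penicillin reactions:              {expected_penicillin_deaths:.0f}") # ~600
```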

  Chloramphenicol also demonstrated why pharmaceutical companies were severely compromised in judging the safety of their products. As with physicians, this wasn't a moral failing, but an intrinsic aspect of the system: a feature, not a bug. The enormous advances of the antibiotic revolution were a direct consequence of pharmaceutical companies' investment in producing new antibiotics. The same institutions that had declined to invest hundreds of pounds in the Dunn School's penicillin research were, less than a decade later, spending millions on their own. That success, in turn, demanded even greater resources for improving on, and so replacing, the drugs already available: collecting more and more samples of soil-dwelling bacteria, testing newer and newer methods of chemical synthesis, building larger and larger factories.

  This, as much as anything else, is the second lesson of chloramphenicol. Producing the first version of a miracle drug doesn't have to be an expensive proposition. But the second and third inevitably will be, since they have to be more miraculous than the ones already available. This basic fact guarantees that virtually every medical advance is at risk of rapidly diminishing returns. The first great innovations—the sulfanilamides, penicillin—offer far greater relative benefits than the ones that follow. But the institutions that develop them, whether university laboratories or pharmaceutical companies, don't spend less on the incremental improvements. Precisely because demonstrating an incremental improvement is so difficult, they spend more. The process of drug innovation demands large and risky investments of both money and time, and the organizations that make them have a powerful incentive to calculate risks and benefits in the way that maximizes the drug's use. Despite the public-spiritedness of George Merck or Eli Lilly, drug companies—and, for that matter, academic researchers—were always going to be enthusiasts, not critics, about innovative drugs. It's hard to see how the antibiotic revolution could have occurred otherwise.

  This left the job of evaluating antibiotics to institutions that, in theory at least, should have been able to adopt the widest and most disinterested perspective on the value of any new therapy. This was why the Food, Drug, and Cosmetic Act of 1938 empowered the FDA to oversee drug safety—which sounds clear, but really isn't. The decades since the Elixir Sulfanilamide disaster had demonstrated that any drug powerful enough to be useful was, for some patients, also unsafe. Few people, even at the FDA, really understood how to compare risks and benefits in a way that the public could grasp.

  The third lesson of the chloramphenicol episode should have been that risks and benefits in drug use aren't measured solely by the probability of a bad outcome, or even its magnitude. They can only be established by comparing the risk of using a compound against the risk of not using it. For this reason, the association of chloramphenicol with blood dyscrasias, while tragic and notorious, was actually beside the point. Chloramphenicol, like penicillin, streptomycin, erythromycin, and the tetracyclines, was an almost unimaginably valuable medicine when used appropriately. The very different incentives of pharmaceutical companies and physicians—the former to maximize the revenue from their investments; the latter to choose the most powerful treatments for their patients—practically guaranteed a high level of inappropriate use. Chloramphenicol was critical for treating typhus; not so much for strep throat.*

 
