If money had been at the heart of the dispute, the settlement should have, well, settled things. Possibly, had the judges who awarded the Nobel Prize in Physiology or Medicine in 1952 recognized Schatz as well as their honoree, Waksman, it might have done so. Or if Waksman had, in his Nobel Lecture, done more than mention Schatz, in a single sentence, as one of twenty different lab assistants. But probably not. To his dying day, Schatz refused to acknowledge that Waksman was a far more accomplished scientist, one who had made a dozen different landmark discoveries both before and after Schatz’s tenure in the lab. In 1949, Waksman even discovered another actinomycete-derived antibiotic, neomycin. Moreover, though Schatz spent decades insisting he was academically marginalized for pursuing his just rights, it’s not hard to see why other biology departments were wary of hiring someone who had attacked his thesis advisor in print, in court, and even, astonishingly, by sending an open letter to the king of Sweden in an attempt to sabotage the Nobel Prize ceremony. In Selman Waksman’s mind, on the other hand, the great achievement was entirely the result of an experimental machine he had designed and built decades before Schatz arrived in New Brunswick—and Albert Schatz was, therefore, an easily replaceable component.
And there lies the real misunderstanding. In a letter written on February 8, 1949, Waksman—again turning up the dial on his indignation—wrote to Schatz, “How dare you now present yourself as innocent of what transpired when you know full well that you had nothing to do with the practical development of streptomycin” (emphasis added). Unwittingly, Waksman had revealed the underlying truth of the scientific discovery; and it didn’t serve either Schatz or himself. He was correct that Schatz had little to do with the “practical development” of the new antibiotic. But neither had he. William Feldman and Corwin Hinshaw at Mayo had more to do with demonstrating the therapeutic value of streptomycin than anyone at Rutgers. So had Karl Folkers and Max Tishler and the more than fifty researchers assigned by George Merck to supervise the streptomycin project. The discovery of streptomycin was a collective accomplishment dependent on dozens, if not hundreds, of chemists, biologists, soil scientists, and pathologists.
And one statistician.
—
Streptomycin was a miracle for those suffering from tuberculosis and other bacterial diseases unaffected by penicillin. It was iconic—as much, in its way, as penicillin—in demonstrating the way industrial support could turbocharge university research. But it achieved its most consequential result as the subject of the most important clinical trial in medical history: the one that, for the first time, and forever after, quantified the art of healing.
Though it is frequently described as such, the streptomycin trial of 1948 to 1950 wasn’t anything like medicine’s first clinical trial. The Bible’s Book of Daniel records King Nebuchadnezzar’s more or less accidental test of two different diets—one, by royal commandment, only meat; the other vegetarian—over ten days, after which the perceived health of the vegetarians persuaded the king to alter his required diet. The sixteenth-century French surgeon Ambroise Paré tested two different treatments for wounds—one, a noxious-sounding compound made of egg yolks, rose oil, and turpentine; the other, boiling oil, which was even worse. Two centuries later, the Scottish physician James Lind “selected twelve patients [with] the putrid gums, the spots and lassitude, with weakness of the knees” typical of the deficiency disease scurvy, and gave two of them a daily dose of a “quart of cyder,” two of them sulfuric acid, two vinegar, two seawater, two a paste of honey. The final two, given oranges and lemons, were marked fit for duty after six days, thus clearly demonstrating how to prevent the disease, a scourge of long maritime voyages.* Even fictional characters got into the act. The climax of Sinclair Lewis’s 1925 novel, Arrowsmith, takes place on the fictional Caribbean island of St. Hubert’s, where the eponymous hero gives half the population of the parish of St. Swithin’s a vaguely described antiplague serum—a “phage”—and the other half nothing at all.
Nor was the streptomycin trial the first time medicine had recognized the importance of sampling. The idea of sampling—choosing a test population that reflects the characteristics of the entire universe of similar people, and is large enough that conclusions won’t be confounded by a tiny number of outliers—was, in some sense, centuries old; the seventeenth-century Flemish physician and chemist Jan Baptist van Helmont had dared his fellow doctors to compare their techniques with his, proposing:
Let us take out of the hospitals . . . 200 or 500 poor People that have Fevers, Pleurisies, etc. Let us divide them into halfes, let us cast lots, that one of them may fall to my share, and the other to yours [and] we shall see how many funerals both of us shall have. . . .
William Farr, compiler of abstracts in Britain’s General Register Office, first identified the huge difference in childhood mortality between rich and poor, leading to thought experiments on sanitation. And the pioneers of mathematical statistics, Karl Pearson and Ronald Fisher, had applied techniques like analysis of variance and regression to a variety of health-related subjects, such as height and blood pressure. In 1943, Britain’s Medical Research Council actually funded a comparative trial to see if an extract of Penicillium patulum could cure the common cold (it couldn’t). During the first four decades of the twentieth century, more than one hundred papers cited so-called alternate allocation studies, in which every other patient admitted to a hospital or clinic was given a treatment, and the others were used as a control. But before streptomycin, the perceived efficacy of any medical treatment remained largely anecdotal: the accumulated experience of clinicians.
This is the sort of thing that works just fine to identify really dramatic innovations, like the smallpox vaccine, or the sulfanilamides, and certainly penicillin. But most advances are incremental; this is true not just for medical innovations, but all technological progress. And identifying small increments of improvement isn’t a job for clinicians. Physicians, no matter how well trained in treating disease, are just as vulnerable as anyone else to cognitive biases: those hiccups of irrationality that give an excessive level of significance to the first bit of information we acquire, or the most recent, or, most common of all, the one we hope will be true.* Medicine needs statisticians.
It was a lucky coincidence that the most influential medical statistician of the twentieth century had a special interest in tuberculosis. Austin Bradford Hill, professor of medical statistics at the London School of Hygiene and Tropical Medicine—like Selman Waksman, a nonphysician—had spent the First World War as a Royal Navy pilot in the Mediterranean, where, in 1918, he acquired pulmonary tuberculosis. Treated with bed rest and the dangerous, unproven, but nonetheless popular technique of deliberately collapsing his lung, he somehow recovered, and was granted a full disability pension, which he would collect for the next seventy-four years, until his death in 1992.
Hill’s personal connection with tuberculosis wasn’t the only reason that streptomycin was the ideal candidate for the then-revolutionary idea of randomizing a population of patients and carefully comparing outcomes between those who received a particular treatment and those who didn’t. Because tuberculosis was so variable in its symptomology—most people who have M. tuberculosis are asymptomatic; some will have a chronic ailment for years; and others die within weeks—it was, even more than most diseases, subject to the “most people get well” problem. And, as all those people who flocked to sanatoria proved, tuberculosis was highly susceptible to environmental differences: People really did do better at Saranac Lake than in cities, where clean air and water were still rare. Streptomycin didn’t cure tuberculosis the way penicillin cured staph infections; before Patricia Thomas left Mineral Springs, she received five courses of treatment over more than four months. Teasing out the best practices for a treatment that worked that slowly—slower than some diseases resolve in the absence of any treatment at all—really demanded sophisticated mathematics.
Despite the obvious need for statistical analysis of tuberculosis treatments, Bradford Hill had another obstacle to overcome in persuading Britain’s Tuberculosis Trials Committee, which he had joined in 1946, to fund a randomized trial to evaluate streptomycin: ethics.
Such a trial would mean denying what was widely (and correctly) believed to be a lifesaving drug to the members of a control group. But without a control group, it would be impossible to arrive at a definitive conclusion about the treatment group. This was an especially thorny issue in 1946, as the Nuremberg Military Tribunals were, at that very moment, revealing the horrific results of human experimentation in Nazi Germany, where patients were frequently denied treatment in the name of science.* Letting chance decide which tuberculosis patients got the miracle drug, and which would serve as a control group, would probably have been an insuperable problem, but for one thing: There just wasn’t enough of the drug to go around. A significant number of tuberculosis patients would be denied treatment in any case. As Hill subsequently wrote:
We had exhausted our supply of dollars in the war and the Treasury was adamant we could only have a very small amount of streptomycin . . . in this situation it would not be immoral to do a trial—it would be immoral not to, since the opportunity would never come again. . . .
The Medical Research Council’s Trial of Streptomycin for Pulmonary Tuberculosis began early in 1947 with 107 patients in three London hospitals. All of them were young, and had recently acquired severe tuberculosis in both lungs. Fifty-five of them were assigned to the treatment group, which would be given streptomycin and bed rest, while the fifty-two members of the control group would receive bed rest and a placebo: a neutral pill or injection indistinguishable to the patient from the compound being tested. The two groups were selected entirely at random, based on numbers chosen blindly by the investigator and placed in sealed envelopes, thus assuring that no selection bias would occur unconsciously. Nor were the patients themselves told whether they were receiving streptomycin or a placebo.
Hill insisted that results of the test wouldn’t depend on clinical evaluation alone, but on changes in chest X-rays, which would be examined by radiologists unaware of whether the subject had been in the treatment or the control group. The exercise was therefore a real beauty: a randomized, triple-blind (neither patients, nor clinicians, nor evaluators were told who had received the treatment) clinical trial, the first in the history of medicine.
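The mechanics deserve a moment’s attention, since the sealed envelopes carried the entire logical weight of the design. What follows is a minimal sketch, in Python, of what such pre-committed random allocation amounts to. The arm sizes match the trial described above, but everything else here (the seed, the labels, the printout) is illustrative, and assumes nothing about the MRC’s actual procedure.

```python
# Illustrative sketch of sealed-envelope randomization, not the MRC's
# actual method: assignments are fixed in advance from random draws,
# so neither clinician nor patient can influence, or predict, who
# receives the drug.
import random

random.seed(1947)  # arbitrary seed, so the sketch is reproducible

# Prepare the "envelopes" before any patient is enrolled:
# 55 streptomycin slots and 52 control slots, in random order.
envelopes = ["streptomycin + bed rest"] * 55 + ["placebo + bed rest"] * 52
random.shuffle(envelopes)

# As each patient is admitted, the next envelope is opened.
for patient, arm in zip(range(1, 6), envelopes):
    print(f"patient {patient} -> {arm}")
```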
The results were just as compelling as the methodology, though in an unexpected way. After six months, twenty-eight patients on the streptomycin regimen had improved, and only four had died. The control group, meanwhile, had lost fourteen of its fifty-two patients.
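The gap in those mortality figures is wide enough that modern software confirms at a glance what Hill had to establish with the tools of his day. The recalculation below is illustrative only: it uses a Fisher’s exact test from Python’s SciPy library, a convenience that postdates the trial and merely stands in for the significance testing of Hill’s era.

```python
# Recomputing the six-month mortality comparison: 4 deaths among 55
# streptomycin patients versus 14 among 52 controls. Illustrative only.
from scipy.stats import fisher_exact

#          died  survived
table = [[ 4,   51],   # streptomycin arm (n = 55)
         [14,   38]]   # control arm      (n = 52)

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}")  # well below 1: fewer deaths on the drug
print(f"p-value    = {p_value:.4f}")     # comfortably below the usual 0.05 threshold
```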
But while streptomycin “worked,” the rigor that Hill had built into the experiment’s design revealed streptomycin’s weaknesses just as clearly as its strengths, even for treating tuberculosis. Because it took months for the treatment to show a statistically demonstrable effect, some of the bacteria were certain to develop resistance to the therapy while it was still going on. And, indeed, when the subjects were revisited three years later, thirty-five of the members of the control group had died . . . but so had thirty-two who had received the treatment.
The experiment threw a massive dose of reality-based cold water on the enthusiasm of the first clinical reports. Something more than streptomycin was clearly required to redeem the drug’s initial promise. Luckily, Hill had a good idea what the “something more” should be. In 1940, a Danish physician, Jörgen Lehmann, had reasoned that, since acetylsalicylic acid, the compound better known as aspirin, increased the oxygen consumption of M. tuberculosis, its chemical cousin—para-aminosalicylic acid, or PAS—might act to inhibit oxygen consumption, thus killing (or at least disabling) the aerobic bacterium. It was a decent enough theory, and in 1946 Lehmann had published an article in the Lancet with his results, which were modest: by itself, PAS was only a marginally effective treatment for tuberculosis. But because its mechanism worked to inhibit oxygen consumption by the bacterium, the logic went, it strengthened the action of streptomycin, which needs oxygen to enter the bacterial cells.*
In his second trial, beginning in December 1948, Hill duplicated the experimental structure—same randomization, same X-ray evaluation—of his first. This time, however, he added Lehmann’s oxygen inhibitor to the treatment regimen. Less than a year later, the power of what would come to be known as the RCT, the randomized controlled trial, was vindicated. The Medical Research Council announced that they had shown “unequivocally that the combination of PAS with streptomycin considerably reduces the risk of the development of streptomycin-resistant strains of tubercle bacilli.” The three-year survival rate using the combination of the two drugs was an almost unbelievable 80 percent.*
In less than three years, penicillin and streptomycin had achieved more victories in the battle against infectious disease than anything in the entire history of medicine since Galen. Both were unprecedentedly powerful weapons against pathogens; but it was streptomycin that revealed a method for finding more of the same: the combination of Selman Waksman’s protocol for finding antibacterial needles in haystacks made of soil, and Bradford Hill’s arithmetic for revealing their clinical value.
SEVEN
“Satans into Seraphs”
The perennial known as timothy grass, which grows from two to five feet tall, covers thousands of acres of the American Midwest. It is famously hardy, resistant to both cold and drought, and prospers in almost any kind of soil, from the heaviest bottomland to the poorest sands. Like many plants now common in the New World, it is a relatively recent invader, introduced to colonial America by European settlers—one popular theory suggests that the name comes from an eighteenth-century New England farmer named Timothy Hanson—and it is widely grown as animal feed for everything from domestic rabbits to cattle and horses.
Timothy grass was and is important enough as a commercial commodity that agronomists at the University of Missouri started planting it in Sanborn Field, the university’s agricultural test station, as soon as it opened in 1888. They were still experimenting with it—testing varieties for improved yields, or greater weather hardiness—in 1945, when William Albrecht, a soil microbiologist, received a letter from a former colleague then working in New York. The letter included a request that Albrecht obtain soil samples from a dozen different Missouri locations, including Sanborn Field’s Plot 23. Its author was a botanist and mycologist named Benjamin Minge Duggar.
Duggar was then seventy-three years old and an accomplished and respected plant pathologist. Ever since receiving his PhD in 1898, he had studied fungi and disease, more or less nonstop, at the Department of Agriculture and at a number of prominent universities, including Cornell, the University of Wisconsin, Washington University in St. Louis, and, of course, Missouri.* In 1944, he left his last academic post and joined Lederle Laboratories to work under its remarkable head of research, Dr. Yellapragada Subbarao.
Lederle Antitoxin Laboratories, as it was originally known, had been founded in 1904 by Dr. Ernst Lederle, a former New York City health commissioner, to produce an American version of the diphtheria antitoxin developed by Emil Behring, Paul Ehrlich, and Robert Koch at the end of the nineteenth century, one it could sell to American physicians and hospitals royalty free. Vaccines and antitoxins, for tetanus, typhoid, anthrax, and smallpox, remained the company’s primary business for the next forty years, through the death of its founder in 1921, its subsequent acquisition by the agricultural chemicals manufacturer American Cyanamid in 1930, and the hiring of Subbarao in 1940.
Subbarao, an Indian-born physician and physiologist, arrived in the United States as a penniless immigrant in 1923, but with an admissions letter to Harvard University’s School of Tropical Medicine, a division of the university’s medical school. His tuition expenses were paid by his father-in-law, but in order to pay for his room and board he was given a job at Harvard Medical School, where he spent the next seventeen years. His achievements were nothing short of stellar; among other endeavors, he isolated the components of adenosine triphosphate, or ATP, the fuel for all cellular respiration. In fact, Yellapragada Subbarao’s accomplishments are almost literally too many to list—not only fundamental discoveries about ATP, creatine, and vitamin B12, but half a dozen chemical breakthroughs still in use today, including discovering how a mimic of folic acid known as an antifolate could be used to combat leukemia. Despite that, the baroqueries of U.S. immigration law (among other things, immigrants from British India were allowed to stay only if they fell into professional categories that the State Department deemed valuable . . . a list that changed regularly) required him to register as an alien for his entire professional career.
The most consequential result of Subbarao’s problematic immigration status was that one of the university’s most brilliant investigators was denied tenure. Academia’s loss was industry’s gain; in 1940, he left to join Lederle as its director of research. A year after that, he represented Lederle at the first meetings of the Committee on Medical Research called by A. N. Richards to discuss what would become the penicillin project. Three years after that, he hired Benjamin Duggar.
By this time, Selman Waksman’s researches at Rutgers were making him the most famous soil scientist in the world; more, they were inspiring everyone in the discipline to emulate his approach: testing literally thousands of actinomycetes for antibacterial properties. His example certainly inspired Subbarao and Duggar, who initiated a global program of soil collection. Remarkably enough, in the middle of the world’s largest war, they successfully recruited dozens of soldiers and sailors to seek out soil samples everywhere from the Caucasus to North Africa to South America.