The Coming Plague

by Laurie Garrett

  Later that year, America’s James Watson and Britain’s Francis Crick, working at Cambridge University, figured it all out. One of the chemicals—a sort of carbon chain linked by powerful phosphate chemical bonds—created parallel curved structures similar to the poles of a long, winding ladder. Forming the rungs of the ladder were four other chemicals, the nucleotide bases. The order of those bases along the carbon/phosphate poles represented a code which, when deciphered properly, revealed the genetic secrets of life.

  DNA, then, was the universal code used by one meningococcal bacterium as the basis for making another meningococcal bacterium. It was the material wrapped up inside the chromosomes of higher organisms. Sections of DNA equaled genes; genes created traits. When the chromosomes of one parent combined with those of the other, which traits appeared in the children (blue versus brown eyes) was a function of the dominant or recessive genes encoded in the parents’ DNA.11

  While government officials were bragging that everything from malaria to influenza would soon disappear from the planet, scientists were just beginning to use their newfound knowledge to study disease-causing viruses, bacteria, and parasites. Scientists like Johnson were of the first generation of public health researchers to know the significance of DNA. Understanding how DNA played a direct role in the emergence of disease would take still another generation.

  Starting at nature’s most basic level, scientists at Cold Spring Harbor Laboratory on Long Island, New York, showed in 1952 that viruses were essentially capsules jam-packed with DNA. Much later, researchers discovered that some other viruses, such as polio, were filled not with DNA but with its sister compound, RNA (ribonucleic acid), which also carries the genetic code hidden in sequences of nucleotides.

  When Karl Johnson was virus hunting in Bolivia, scientists had a limited understanding of the vast variety of viruses in the world, the ways these tiniest of organisms mutate and evolve, or how the microbes interact with the human immune system. The state of the art in 1963 was best summarized in Frank Fenner’s animal virus textbook, the bible for budding microbiologists of the day:

  Suppose that we have isolated a new virus and have managed to produce a suspension of purified particles. How can we classify the virus, and how do we find out about its chemical composition? A lead may be provided by its past history—the species of animal from which it was isolated and whether or not it was related to a disease. This information, in conjunction with that obtained by electron microscope examination of … particles, might be enough for us to make a preliminary identification.12

  Scientists could “see” viruses with the aid of microscopes powerful enough to bring into view objects that were nearly a million times smaller than a dime. With that power of magnification they could detect clear differences in the appearance of various species of viruses, from the chaotic-looking mumps virus, which resembled a bowl full of spaghetti, to the absolutely symmetrical polio virus, which looked as if it were a Buckminster Fuller-designed sphere composed of alternating triangles.

  Researchers also understood that viruses had a variety of different types of proteins protruding from their capsules, most of which were used by the tiny microbes to lock on to cells and gain entry for invasion. Some of the most sophisticated viruses, such as influenza, sugarcoated those proteins so that the human immune system might fail to notice the disguised invaders.

  In 1963 laboratory scientists knew they could also distinguish one virus species from another by testing immune responses to those proteins protruding from the viral capsules. Humans and higher animals made antibodies against most such viral proteins, and the antibodies—which were themselves large proteins—were very specific. Usually an antibody against parts of the polio virus, for example, would not react against the smallpox virus. Indeed, some antibodies were so picky that they might react against a 1958 Chicago strain of the flu, but not the strain that hit the Windy City the following winter.

  Jonas Salk used this response against outer capsule proteins of the polio virus as the basis of his revolutionary vaccine, and by 1963 medical and veterinary pioneers all over the world were finding the pieces of various viruses that could be used most effectively to raise human and animal antibody responses.

  Back in the lab, they could also use antibody responses to find out what might be ailing a mysteriously ill person. Blood samples containing the victim’s attacking microbe would be dotted across a petri dish full of human or animal cells. Antibodies would also be dotted across the dish, and scientists would wait to see which antibody samples successfully prevented viral kill of the cells in the petri dish.

  Of course, if the virus was something never before studied, all the scientists would be able to get was a negative answer: “It’s not anything that we know about, none of our antibodies work.” So in the face of something new, like Machupo, scientists could only say after a tedious process of antibody elimination, “We don’t know what it is.”

  With bacteria the process of identification was far easier because the organisms were orders of magnitude larger than viruses: whereas a virus might be about one ten-millionth of an inch in size, a bacterium would be a thousandth of an inch long. To see a virus, scientists needed powerful, expensive electron microscopes, but since the days of Dutch lens hobbyist Anton van Leeuwenhoek, who built his own powerful single-lens microscopes in the 1670s, it has been possible for people to see what he called “wee animalcules” with little more than a well-crafted glass lens and candlelight.

  The relationship between those “animalcules” and disease was first figured out by France’s Louis Pasteur in 1864, and during the following hundred years bacteriologists learned so much about the organisms that young scientists in 1964 considered classic bacteriology a dead field.

  In 1928 British scientist Alexander Fleming had discovered that Penicillium mold could kill Staphylococcus bacteria in petri dishes, and dubbed the lethal antibacterial chemical secreted by the mold “penicillin.”13 In 1944 penicillin was introduced to general clinical practice, causing a worldwide sensation that would be impossible to overstate. The term “miracle drug” entered the common vernacular as parents all over the industrialized world watched their children bounce back immediately from ailments that just months before had been considered serious, even deadly. Strep throat, once a dreaded childhood disease, instantly became trivial, as did skin boils and infected wounds; with the quick discovery of streptomycin and other classes of antibiotics, so did tuberculosis. By 1965 more than 25,000 different antibiotic products had been developed; physicians and scientists felt that bacterial diseases, and the microbes responsible, were no longer of great concern or of research interest.

  Amid the near-fanatic enthusiasm for antibiotics there were reports, from the first days of their clinical use, of the existence of bacteria that were resistant to the chemicals. Doctors soon saw patients who couldn’t be healed, and laboratory scientists were able to fill petri dishes to the brim with vast colonies of Staphylococcus or Streptococcus that thrived in solutions rich in penicillin, tetracycline, or any other antibiotic they chose to study.

  In 1952 a young University of Wisconsin microbiologist named Joshua Lederberg and his wife, Esther, proved that these bacteria’s ability to outwit antibiotics was due to special characteristics found in their DNA. Some bacteria, they concluded, were genetically resistant to penicillin or other drugs, and had possessed that trait for aeons; certainly well before Homo sapiens discovered antibiotics.14 In years to come, the Lederbergs’ hypothesis that resistance to antibiotics was inherent in some bacterial species would prove to be true.

  The Lederbergs had stumbled into the world of bacterial evolution. If millions of bacteria must compete among one another in endless turf battles, jockeying for position inside the human gut or on the warm, moist skin of an armpit, it made sense that they would have evolved chemical weapons with which to wipe out competitors. Furthermore, the yeasts, molds, and soil organisms that were the natural sources of the world’s then burgeoning antibiotic pharmaceuticals had evolved the ability to manufacture the same chemicals for similar ecological reasons.

  It stood to reason that populations of organisms could survive only if some individual members of the colony possessed genetically coded R (resistance) Factors, conferring the ability to withstand such chemical assaults.

  The Lederbergs devised tests that could identify streptomycin-resistant Escherichia coli intestinal bacteria before the organisms were exposed to antibiotics. They also showed that the use of antibiotics in colonies of bacteria in which even less than 1 percent of the organisms were genetically resistant could have tragic results. The antibiotics would kill off the 99 percent of the bacteria that were susceptible, leaving a vast nutrient-filled petri dish free of competitors for the surviving resistant bacteria. Like weeds invading an untended open field, the resistant bacteria rapidly multiplied and spread out, filling the petri dish within a matter of days with a uniformly antibiotic-resistant population of bacteria.

  Clinically this meant that the wise physician should hit an infected patient hard, with very high doses of antibiotics that would almost immediately kill off the entire susceptible population, leaving the immune system with the relatively minor task of wiping out the remaining resistant bacteria. For particularly dangerous infections, it seemed advisable to initially use two or three different types of antibiotics, on the theory that even if some bacteria had R Factors for one type of antibiotic, it was unlikely a bacterium would have R Factors for several widely divergent antibiotics.

  If many young scientists of the mid-1960s considered bacteriology passé—a field commonly referred to as “a science in which all the big questions have been answered”—the study of parasitology was thought to be positively prehistoric.

  A parasite, properly defined, is “one who eats beside or at the table of another, a toady; in biology, a plant or animal that lives on or within another organism, from which it derives sustenance or protection without making compensation.”15 Strictly speaking, then, all infectious microbes could be labeled parasites, from viruses to large tapeworms.

  But historically, the sciences of virology, bacteriology, and parasitology have evolved quite separately, with few scientists—other than “disease cowboys” like Johnson and MacKenzie—trained or even interested in bridging the disciplines. By the time hemorrhagic fever broke out in Bolivia, a very artificial set of distinctions had developed between the fields. Plainly put, larger microbes were considered parasites: protozoa, amoebae, worms. These were the domain of parasitologists.

  Their scientific realm had been absorbed by another, equally artificially designated field dubbed tropical medicine, which often had nothing to do with either geographically tropical areas or medicine.

  Both distinctions—parasitology and tropical medicine—set off the study of diseases that largely plagued the poorer, less developed countries of the world from those that continued to trouble the industrialized world. The field of tropical medicine did so most blatantly, encompassing not only classically defined parasitic diseases but also viruses (e.g., yellow fever and the various hemorrhagic fever viruses) and bacteria (e.g., plague, yaws, and typhus) that were by the mid-twentieth century extremely rare in developed countries.

  In the eighteenth century the only organisms big enough to be studied easily without the aid of powerful microscopes were larger parasites that infected human beings at some stage of their life cycles. Doctors could, without magnification, see adult worms, or the egg-filled segments of some parasites, in patients’ stools. Without much magnification (on the order of hundreds-fold, versus the thousands-fold necessary to study bacteria) scientists could see the dangerous fungal colonies of Candida albicans growing in a woman’s vagina, scabies mites burrowed into an unfortunate victim’s skin, or pork tapeworms in the stools of individuals fed undercooked pork.

  As British and French imperial designs turned increasingly, in the late nineteenth century, to the colonization of areas such as the Indian subcontinent, Africa, and Southeast Asia, tropical medicine became a distinct and powerful science that separated itself from what was then considered a more primitive field, bacteriology. Science historian John Farley concluded that what began as a separation designed to lend parasitology greater resources and esteem—and did so in the early twentieth century—ended up leaving it science’s stepchild.16

  Ironically, parasites, classically defined, were far more complex than bacteria and their study required a broader range of expertise than was exacted by typical E. coli biology. Top parasitologists—or tropical medicine specialists, if you will—were expected in the mid-1960s to have vast knowledge of tropical insects, disease-carrying animals, the complicated life cycles of over a hundred different recognized parasites, human clinical responses to the diseases, and the ways in which all these factors interacted in particular settings to produce epidemics or long periods of endemic, or permanent, disease.

  Consider the example of one of the world’s most ubiquitous and complicated diseases: malaria. To truly understand and control the disease, scientists in the mid-twentieth century were supposed to have detailed knowledge of the complex life cycle of the malarial parasite, the insect that carried it, the ecology of that insect’s highly diverse environment, other animals that could be infected with the parasite, and how all these factors were affected by such things as heavy rainfall, human migrations, changes in monkey populations, and the like.

  It was known that several different species of Anopheles mosquitoes could carry the tiny parasites. The female Anopheles would suck parasites out of the blood of infected humans or animals when she injected her syringe-like proboscis into a surface capillary to feed. The microscopic male and female sexual stages of the parasites, called gametocytes, would make their way up the proboscis and into the female mosquito’s gut, where they would unite sexually and form a tiny sac in the lining of the insect’s stomach.

  Over a period of one to three weeks the sac would grow as thousands of sporozoite-stage parasites were manufactured inside it. Eventually the sac would explode, flooding the insect’s body cavity with microscopic one-celled parasites that caused no harm to the cold-blooded insect; their target was a warm-blooded creature, one full of red blood cells.

  Some of the sporozoites would make their way into the insect’s salivary glands, from which they would be drawn up into the “syringe” when the mosquito went on her nightly sundown feeding frenzy, and be injected into the bloodstream of an unfortunate human host.

  At that point the speed and severity of events (from the human host’s perspective) would depend on which of four key malarial parasite species had been injected by the mosquito. A good parasitologist in the 1950s knew a great deal about the differences between the four species, two of which were particularly dangerous: Plasmodium vivax and P. falciparum.

  If a human host was most unlucky, the parasites coursing through her bloodstream would be P. falciparum and she would have only twelve days to realize she’d been infected and get treatment of some kind before the disease would strike, in the form of either acute blood anemia or searing infection of the brain. In either case, for an individual whose immune system had never before seen P. falciparum, the outcome would likely be death.

  Scientists knew that injected sporozoites made their way to the liver, where they underwent another transformation, becoming so-called schizonts, which released merozoites capable of infecting red blood cells. By the millions the tiny merozoites multiplied and grew inside the red blood cells, eventually becoming so numerous that the cells exploded. Soon the human body would be severely anemic, its every tissue crying out for oxygen. If the immune system managed to keep the merozoite population down to manageable levels, the results would be prolonged—perhaps chronic lifetime—fatigue and weakness. Unchecked, however, the merozoites would so overwhelm the red blood cell population that the host’s brain, heart, and vital organs would fail and death would result.

  During the merozoite invasion of the blood supply, a smaller number of male and female gametocyte-stage P. falciparum would also be made, and the entire cycle of events would repeat itself when another female Anopheles mosquito fed on the blood of the ailing human, sucking those gametocytes up into her proboscis.

  Understanding that disease process was relatively easy; more difficult was predicting when and why humans and Anopheles mosquitoes were likely to come into fatal contact, and how the spread of malaria could be stopped.

  Several types of monkeys were known to serve as parasite reservoirs, meaning that for long periods of time the disease could lurk in monkey habitats. The Anopheles mosquito would happily feed on both the monkeys and the humans that entered such ecospheres, spreading P. falciparum between the species.17

  The size of Anopheles mosquito populations could vary drastically in a given area, depending on rainfall, agricultural practices, the nature of human housing and communities, altitude, proximity to forests or jungles, economic development, the nutritional status of the local people, and numerous other factors that could affect mosquito breeding sites and the susceptibility of local human populations.18

  Almost entirely absent in the mid-twentieth century was an intellectual perspective that wedded the ecological outlook of the classical parasitologist with the burgeoning new science of molecular biology then dominating the study of nontropical bacteria and viruses. Money was shifting away from research on diseases like malaria and schistosomiasis. Young scientists were encouraged to think at the molecular level, concentrating on DNA and the many ways it affected cells.

 
