Pandemic

by Sonia Shah


  A particularly pernicious pathogen called New Delhi metallo-beta-lactamase 1 (NDM-1) has been present in New Delhi since at least 2006. It’s actually a fragment of DNA called a plasmid, which can spread between bacterial species. What makes it dangerous is that it endows bacteria with the ability to resist fourteen classes of antibiotics, including the powerful intravenous antibiotics administered solely in hospitals as a last resort in patients who have failed to respond to all other treatment options. When NDM-1 inserts itself into a bacterial pathogen, in other words, it makes that strain nearly untreatable. Only two imperfect drugs can contain NDM-1 infections: an older antibiotic called colistin, which fell into disuse in the 1980s because of its toxicity, and an expensive IV antibiotic called tigecycline, which is currently approved only for soft-tissue infections.50

  Thanks to the power, speed, and relative comfort of air travel, even the most obscure pathogens can leap over continents and oceans. NDM-1 escaped Indian operating rooms in the bodies of medical tourists. In 2008, during a routine test to measure bacteria levels, NDM-1 bacteria were isolated from the urine of a fifty-nine-year-old man hospitalized outside Stockholm. The man had acquired the bacteria in New Delhi. Other cases appeared in Sweden and also in the U.K., all connected to patients who’d traveled to India or Pakistan for procedures such as cosmetic surgeries or organ transplants. In 2010, three patients in the United States were found to be infected with NDM-1 isolates; all three had received medical care in India. By 2011, NDM-1 bacteria had been isolated from patients in Turkey, Spain, Ireland, and the Czech Republic. By 2012, medical tourists had helped NDM-1 radiate into twenty-nine countries around the world.51

  So far, NDM-1 has mostly been found in bacterial species that can live harmlessly inside the body, like the Klebsiella pneumoniae that resides in healthy people’s mouths, skin, and intestines, and Escherichia coli, which can be found in their guts. But the medical tourism industry that has helped disseminate the plasmid remains lucrative and robust. Health-care costs continue to skyrocket in industrialized countries, forcing patients out of their homes and into the air in search of cheaper, faster treatments. Despite the emergence of NDM-1, they show no signs of wanting to change their tickets. The farther they and other carriers take NDM-1—and the more bacterial species it encounters in its peregrinations—the more likely the plasmid’s transfer into a dangerous bacterial pathogen becomes.

  Such a pathogen, endowed with NDM-1, would place a ruinous burden on the practice of medicine, causing nearly unstoppable infections. Few medical procedures would be worth the risk. “All medical feats will come to a stop,” predicts the medical microbiologist Chand Wattal of Sir Ganga Ram Hospital in New Delhi. “Bone marrow transplants, or this or that replacement—all that will vanish,” he says.52

  * * *

  The ease with which our transportation networks dispatch pandemic-worthy pathogens like NDM-1 is discomfiting, and since cholera’s time they’ve done it with increasing speed and efficiency.

  But we are not passive victims of our mobility, doomed to be followed by a cloud of malevolent microbial hangers-on. Global distribution is a prerequisite of pandemics, but it’s not a sufficient condition on its own. Pathogens, even if ubiquitous, can cause pandemics only if they encounter the right transmission opportunities wherever they land. A widely distributed pathogen, deprived of such opportunities, is as harmless as a defanged snake.

  And pathogens’ reliance on specific modes of transmission is not particularly flexible. Once adapted to a certain mode of transmission, pathogens cannot easily alter the complex machinery they’ve evolved in order to jump from victim to victim. That’s why, historically, mosquitoborne pathogens don’t evolve to become waterborne ones, and waterborne ones don’t evolve to become airborne ones. But while their modes of transmission are relatively fixed, the transmission opportunities they exploit are fluid: they’re shaped almost entirely by our behavior.

  It’s true that some pathogens spread by taking advantage of forms of human intimacy that are integral to our societies, such as sexual relations or the proximity that results in people breathing on each other, but many others spread via more obscure, convoluted practices that are comparatively rare, or easily altered. The pathogen Toxoplasma gondii spreads into humans when rodents ingest its eggs, cats consume the rodents, and humans then expose themselves to the infected cats’ litter boxes. Transmission of the pathogen Dicrocoelium dendriticum requires that snails incubate its eggs, ants drink the snails’ slime, and then grazing animals feed on the ants.

  A pathogen such as Vibrio cholerae requires that humans regularly consume their own excreta. That’s good news because it means we can easily deprive it of transmission opportunities, for consuming each other’s waste is required for neither human survival nor the stability of our societies. The bad news is that sometimes historical conditions conspire to make even the most unnecessary and risky behaviors nearly inevitable.

  THREE

  FILTH

  For pathogens, excreta is a perfect vehicle for spreading from one person to another. Human feces, freshly emerged from the body, teems with bacteria and viruses. By weight, nearly 10 percent is composed of bacteria, and in each gram there might be up to one billion viral particles. Every year, a typical human produces 13 gallons of the stuff (plus 130 gallons of sterile urine), creating a microbe-rich river of waste that, unless contained and isolated, can easily stick to the bottoms of feet, cling to hands, pollute food, and seep into the drinking water, allowing pathogens to creep from one victim to the next.1

  Fortunately, people have known for centuries that healthful living requires separating ourselves from our waste. Ancient civilizations in Rome, the Indus valley, and the Nile valley knew how to manage their waste so that it didn’t contaminate their food and water.2

  Ancient Romans used water to flush waste far from their settlements, where it could rot undisturbed. The Romans controlled a supply of fresh water from distant, unpopulated highlands via a network of wood and lead pipes, which brought the typical resident three hundred gallons of fresh water every day, three times more water than the average water-guzzling American uses today, according to the Environmental Protection Agency. The Romans mostly used this flow of water to run bathhouses and public fountains, but they also used it in communal latrines, where they sat over keyhole-shaped openings on benches situated over large drains, a gutter of fresh running water flowing at their feet.3

  One of the principal virtues of using water to flush away excreta, from a public-health perspective, is that during that critical period between production and decomposition, no human need handle the microbe-rich dung. The water simply carries it away. The drawback is that it also makes the excreta mobile, creating a large volume of flowing, contaminated water that can pollute drinking-water supplies (among other things). But for precisely the same reasons that drove the ancients to construct their water distribution networks in the first place—their love of fresh, cleansing waters—the Romans understood the importance of clean drinking water. They turned their noses up at people foolish enough to bathe in, let alone drink, water that wasn’t filtered, and heeded the advice of the ancient Greek physician Hippocrates that water be boiled before drinking.4

  By all rights, these healthful practices should have persisted throughout the ages. But they did not. By the nineteenth century, the European descendants of the ancient Romans who came to populate the city of New York had forsaken the practices of their ancestors. They immersed themselves so completely in each other’s waste that each likely ingested two teaspoons of fecal matter every day with their food and drink.5

  Partly, this about-face had to do with the rise of Christianity in the fourth century A.D. The Greeks and Romans, not to mention the Hindus, the Buddhists, and the Muslims, all prescribed ritualized hygiene practices. Hindus must wash after any number of acts considered “unclean,” as well as before prayer. Muslims must perform ablutions at least three times before their five-times-daily prayers, as
well as on numerous other occasions. Jews were enjoined to wash before and after each meal, before praying, and after relieving themselves. In contrast, Christianity prescribed no elaborate water-based hygiene rituals. Good Christians had only to sprinkle some holy water to consecrate their bread and wine. Jesus himself, after all, had sat down to eat without washing first. Prominent Christians openly repudiated water’s cleansing effect as superficial, vain, and decadent. “A clean body and a clean dress,” opined one, “means an unclean soul.” The most holy Christians, with their lice-infested hair shirts, were among the least washed people on Earth. Not surprisingly, after the Goths disabled the Roman aqueducts in 537, the unwashed leaders of Christian Europe didn’t bother rebuilding them, or any other elaborate water-delivery system.6

  Then, in the mid-fourteenth century, bubonic plague arrived in Europe. Leaders of Christian Europe, like political leaders everywhere facing an incomprehensible threat, blamed their favorite bugbear, water-based hygiene. In 1348, physicians from the University of Paris condemned hot baths in particular, asserting that bathing with water opened the skin’s pores and allowed disease to enter the body. “Steam-baths and bath-houses should be forbidden,” King Henry III’s surgeon Ambroise Paré agreed. “When one emerges, the flesh and the whole disposition of the body are softened and the pores open, and as a result, pestiferous vapour can rapidly enter the body and cause sudden death,” he wrote in 1568. Across the Continent, the remaining bathhouses from the Roman era were shuttered.7

  Given their suspicions about the moral and mortal dangers posed by water, medieval Europeans used as little as possible to manage their waste and quench their thirst. They drank directly from shallow wells, muddy springs, and stagnant rivers. If it tasted bad, they’d turn their sparse water supplies into beer.8 Those who could afford it practiced “dry” hygiene. Seventeenth-century aristocratic Europeans masked the ripe odors of their grimy bodies with perfumes and by wrapping themselves in velvets, silks, and linen. “Our usage of linen,” asserted a seventeenth-century Parisian architect, “serves to keep the body clean more conveniently than the baths and vapour baths of the ancients could do.” They used golden ear picks encrusted with rubies to extract wax from their ears and rubbed their teeth with black silk edged in lace: anything other than wash themselves with water. “Water was the enemy,” writes the hygiene historian Katherine Ashenburg, “to be avoided at all costs.”9

  The result was centuries of close congress with human and animal waste, inuring preindustrial people to its presence and even leading them to see it as salubrious. Medieval Europeans commonly lived with the smelly presence of all manner of dung underfoot, their own being the least of it. They shared their homes with the animals that fed and transported them, and the cows, horses, and hogs produced far more prodigious quantities of manure than the local humans did, and were even less fastidious about where to deposit it.10 To dispose of their own waste, some perched on simple buckets, in their homes or in outhouses, which they called “privies.” Slightly more elaborate setups included hand-dug pits either outdoors or in cellars, sometimes loosely lined with stones or bricks (as in cesspools and privy vaults), fitted perhaps with a bottomless seat or a squatting plate. The precise method of collection and disposal depended upon the whim of individual domiciles; political authorities imposed few if any rules.11 The act of excretion itself didn’t require privacy or provoke shame back then as it does now. Sixteenth- and seventeenth-century monarchs such as England’s Elizabeth I and France’s Louis XIV openly relieved themselves while holding court.12

  Far from reviling human feces, medieval Europeans even began to think of it as medicinal. According to a history of sanitation by the journalist Rose George, the sixteenth-century German monk Martin Luther ate a spoonful of his own feces every day. Eighteenth-century French courtiers took a different route, ingesting their “poudrette,” dried and powdered human feces, by sniffing it up their noses.13 (Was this dangerous? Quite possibly. But in contrast to more immediate threats like, say, bubonic plague, the sporadic cases of diarrhea these practices may have caused would have paled.)

  When Dutch colonists established the little town of New Amsterdam on the southern tip of the island of Manhattan in 1625, they brought these medieval ideas about and methods of sanitation with them. The Dutch built their privies to open at ground level and pour their contents into the streets, so that “hogs may consume the filth and wallow in it,” as a New Amsterdam official put it in 1658. The English, who took control of the colony in 1664 and renamed it “New York,” similarly contained their excreta in what they called “ordure tubs,” which they, too, emptied into the streets.14

  These medieval practices persisted through the nineteenth century, even as the town of a few thousand inhabitants mushroomed into a small city of several hundred thousand residents. By 1820, privies and cesspools covered one-twelfth of the city, and tens of thousands of hogs, cows, horses, and stray dogs and cats roamed the streets, defecating at will.15 New York’s outhouses and privies “were in a most filthy and disgusting condition,” complained one official in 1859, with “accumulations of stagnant fluid, full of all sorts of putrefying matter, the effluvia from which is intolerable.” Raw sewage rotted in the back lots and sidewalks of tenement buildings for weeks and months at a time. To cover up the filth, landlords laid wooden boards over the ground. When pressed, the boards exuded a “thick greenish fluid,” the city inspector reported.16

  Occasionally the city government hired private outfits to collect the manure and human waste that built up on the streets. It was sold as fertilizer, a trade that turned Brooklyn and Queens into two of the most productive agricultural counties in mid-nineteenth-century America. But “sewage farming,” as it was called, never gained momentum. There was nowhere sufficiently isolated to store the excreta as it awaited transport. The reeking piles left at the wharves provoked complaints from nearby residents. Plus, city authorities tended to dole out the job as political patronage to private contractors, many of whom didn’t bother actually doing the job.17

  As a result, most of the city’s excreta simply seeped along the streets and soaked into the ground. The filth compacted into “long ridges forming embankments along the outer edge of the sidewalks,” as the newspaper editor Asa Greene put it in the late 1840s.18 Horses and pedestrians trod upon it, slowly flattening it into a dense mat. The paving stones underneath the thick sludge carpeting the streets “rarely had shown themselves again to mortal eyes,” Greene noted in his diary. On the rare occasion that the city scraped the streets clear, locals professed shock. Greene quotes an elderly woman, who’d lived her whole life in the city, remarking on the state of recently cleaned streets: “Where in the world did all these stones come from? I never knew that the streets were covered with stones before. How very droll!”19

  The use of medieval sanitation methods in early industrial cities created conditions ripe for a cholera epidemic. These places were nothing like the European countryside in which their waste-management habits had formed. In the mostly rural communities that medieval Europeans had lived in, soils were thick and population density low. When their pits of excreta reached capacity, people had the space to simply seal them and dig new pits nearby. The streets that they emptied their chamber pots into were not heavily traveled. The excreta could disperse into the ground, where the soil’s disparate particles of minerals, organic matter, and microbes would trap and filter it, allowing it to decompose well before it reached the groundwater.20

  The island of Manhattan, in contrast, had limited capacity to hold and filter the waste. Manhattan was the largest of a series of islands, now known as Staten, Governors, Liberty, Ellis, Roosevelt, Wards, and Randalls, that dotted the Hudson Estuary. Two brackish rivers flanked the island, pulsing with the Atlantic’s tides: the Hudson on its west side and the East River on its east side. The two flows collided just off the island’s southern tip, stirring up bottom sediments and sending up plumes of nutrients into the water column. Oysters
grew so large they had to be cut into three to eat. (Today, if you dig deep enough almost anywhere in lower Manhattan, you’ll hit pure shell, the remains of earlier oyster feasts.) But while local waters were rich with marine life, the island’s soil was only three feet deep, as disappointed Dutch farmers had found. It couldn’t hold much for long. And that thin layer rested atop thick, fractured bedrock of schist and Fordham gneiss. That bedrock later proved useful for carrying the weight of skyscrapers, but it made underground water supplies dangerously vulnerable to the excreta that was dumped on top of it. Fresh human waste that sank through the thin soil into the bedrock entered an underground highway of cracks and fissures, through which it could travel hundreds of yards.21

  These geographical features made drinking-water supplies in the city especially vulnerable to contamination. And supplies were limited to begin with. The Hudson and East Rivers that surrounded the island were too salty to drink. Collecting rainwater proved treacherous. By the time the rain passed over residents’ filthy roofs, collecting ashes and soot, “its appearance is nearly as dark as ink,” one local noted, “and its smell any thing but agreeable.”22 (This paucity of drinking-water sources had been noted as a serious drawback to settling the island as early as 1664. The Dutch fort, its last governor, Peter Stuyvesant, complained, subsisted “without either well or cistern.”) The island’s sole source of easily accessible drinking water was the seventy-foot-deep Collect Pond, a small kettle pond that had been gouged out by a retreating glacier. But as the city’s population expanded northward, noxious industries such as tanneries and slaughterhouses were pushed out to the Collect’s shores. Soon the pond had become a “very sink and common sewer,” as one resident complained in an open letter to the city published in the New York Journal. In 1791, the city purchased all claims to the pond, and health commissioners called for it to be drained entirely. Workers cut canals and ditches to drain the springs that fed the pond. In 1803, the city ordered the drained pond filled, paying New Yorkers 5¢ for every cartload of fill—that is, garbage—they dumped into it.23
