Pale Rider: The Spanish Flu of 1918 and How It Changed the World

by Laura Spinney


  First, let’s sketch out that geographical unevenness with a whistle-stop tour of the world in figures–excess mortality rates, to be precise. These varied to an astonishing degree. If you lived in certain parts of Asia, you were thirty times more likely to die from flu than if you lived in certain parts of Europe. Asia and Africa suffered the highest death rates, in general, and Europe, North America and Australia the lowest. But there was great variation within continents too. Denmark lost approximately 0.4 per cent of its population, Hungary and Spain around three times that. African countries south of the Sahara experienced death rates two or even three times higher than those north of the desert, while in Asia the rates varied from roughly 2 per cent in the Philippines, to between 8 and 22 per cent in Persia (the large range reflects the fact that Persia was in crisis, and gathering statistics was hardly anybody’s priority). India, which included Pakistan and Bangladesh at the time, and which lost around 6 per cent of its population, suffered the greatest loss in absolute numbers of any country in the world. Between 13 million and 18 million Indians died, meaning that more Indians may have died of Spanish flu than human beings were killed in the First World War.

  Cities tended to suffer worse than rural areas, but within a country, some cities fared worse than others. Thus Chicago got off lightly compared to Washington DC, which got off lightly compared to San Francisco. Within cities, there was variation. In the Norwegian capital Kristiania (Oslo), for example, death rates rose as apartment sizes shrank.1 In Rio, it was the mushrooming subúrbios–sprawling shanty towns at the edge of the city–that suffered the heaviest losses. Newly arrived immigrants tended to die more frequently than older, better-established groups–though the pattern is sometimes hard to discern, because there are fewer data for the immigrants. A 1920 study of what happened in the US state of Connecticut nevertheless reported that ‘The Italian race stock contributed nearly double its normal proportion to the state death roll during the epidemic period.’ The Italians, as we know, were the newest immigrant group to arrive in America. In fact, residents of Connecticut who were of Italian origin were more likely to die than those of Irish, English, Canadian, German, Russian, Austrian or Polish background.2

  What caused these inequalities? Some of the variation comes down to disparities in wealth and caste, and–as far as it reflected these–skin colour.3 Eugenicists pointed to the constitutional inferiority of the ‘degenerate’ races, whose lack of drive caused them to gravitate to squalid tenements and favelas, where the diseases to which they were prone naturally followed them (in other words, they argued that Italians were more susceptible because they were Italian). In fact, it was bad diet, crowded living conditions and poor access to healthcare that weakened the constitution, rendering the poor, immigrants and ethnic minorities more susceptible to disease. This was why, in Korea, ethnic Koreans and Japanese people fell ill at roughly similar rates, but Koreans were twice as likely to die.4 And why, in India, the remote, forested region of the Dangs in Gujarat lost a higher proportion of its population than most Indian cities (16.5 per cent between 1911 and 1921, mainly due to the Spanish flu). The Dangs bucked the ‘rural advantage’ trend, probably because they were home to adivasis–the so-called original inhabitants of the area–who were looked down upon by both the British and other Indians as backward jungle tribes.5

  Statisticians were foxed by their observation that the highest death rates in the French capital were recorded in the wealthiest neighbourhoods, until they realised who was dying there. The ones coughing behind the grand Haussmannian facades weren’t the owners on the étage noble, but the servants in the chambres de bonne. As Theresa McBride explained in her book, The Domestic Revolution, ‘Close enough to their employers’ apartments on the floors below, the servants were segregated into a society of their own where they need not be seen but could be easily summoned.’ They worked fifteen-to-eighteen-hour days and often had to share their sleeping spaces with other servants. ‘The servant’s room was generally small, with sloping ceilings, dark, poorly ventilated, unheated, dirty, lacking privacy or even safety,’ wrote McBride. The flu may have been democratic, as one French historian pointed out, but the society it struck was not: a quarter of all the women who died in Paris were maids.6

  There were other paradoxes. African Americans, though severely discriminated against in the United States, seem to have had a light dose–and they noticed it at the time. ‘As far as the “Flu” is concerned the whites have the whole big show to themselves,’ wrote one J. Franklin Johnson to the Baltimore Afro-American, adding that, had it been otherwise, ‘we would have never heard the last of it, and health talks to colored people would have been printed by the wholesale in seventy-two-point type in the daily papers’.7 The case of the African Americans remains puzzling today (were they disproportionately exposed to the mild spring wave, and so protected, to some extent, against the autumn one?), but another mystery has been solved: the discrepancy in the death rates at the Rand gold mines and the Kimberley diamond mines in South Africa. This, it seems, came down to those black tentacles of rail tracks.8

  The gold mines were by far the bigger of the two operations, employing almost twenty times as many men as the diamond mines, and Johannesburg–the city built to serve them–was consequently the larger rail hub. The rail network connected Johannesburg to the country’s east coast, and in particular to Durban, the major port of Natal Province. Though South Africa didn’t record a ‘herald’ wave of Spanish flu, as such, an epidemiologist named Dennis Shanks has found reports buried in the literature of cases of a mild, flu-like illness that arrived on ships in Durban in July 1918. From there, the infection travelled northwards, along the rail tracks, to the Rand. When the flu returned to the Rand a few months later, therefore, the gold miners may have been partially protected. Kimberley, on the other hand, was relatively poorly served by the rail network. Lying 500 kilometres to the south-west of Johannesburg, it was, however, connected to Cape Town, and so, having had no prior exposure, it received its first dose of flu from that city after the arrival of the infected troopships Jaroslav and Veronej. When the flu left the industrial centres with the panicked miners, Natal Province was again protected compared to those parts of the country that were not served by the Durban–Johannesburg branch of the rail network–notably the Transkei and Ciskei regions of the Cape, where death rates were three times higher than in Natal.

  Isolation was also the reason that some of the most remote places on earth were vulnerable. Lack of historical exposure to the virus translated into higher death rates, which were often amplified by problems associated with poverty and exclusion. After the steamer SS Talune left Auckland, New Zealand, with infection on board, it carried that infection to a string of Pacific Islands in turn. Fiji subsequently lost around 5 per cent of its population, Tonga twice that, and Western Samoa a staggering 22 per cent.

  Cities were more vulnerable to infection than rural areas mainly because of the density of their populations, but what about that puzzling variation between cities? Exposure to the mild spring wave may have buffered those that received it, but an effective disease-containment strategy also had an impact. One 2007 study showed that public health measures such as banning mass gatherings and imposing the wearing of masks collectively cut the death toll in some American cities by up to 50 per cent (the US was much better at imposing such measures than Europe). The timing of the measures was critical, however. They had to be introduced early, and kept in place until after the danger had passed. If they were lifted too soon, the virus was presented with a fresh supply of immunologically naive hosts, and the city experienced a second peak of death.9

  In Zamora, mass gatherings were positively encouraged–and at 3 per cent, or more than twice the national average, Zamora had the highest death rate of any city in Spain. In fact, religious rituals–or secular rituals masquerading as religious ones–contributed to the shape and possibly the duration of the pandemic everywhere. Some have argued, for example, that there were really only two waves of sickness–in the (northern hemisphere) spring and autumn of 1918–and that what appeared to be a third wave in early 1919 was simply the tail end of the second after a brief hiatus due to end-of-year festivities. Around the time of Christmas and Hanukkah, for example, Christian and Jewish children tended to stay away from school, depriving the virus of a valuable pool of potential hosts–until, that is, they returned to class in the New Year.

  An underlying disease made you more susceptible to the Spanish flu. Medical historian Amir Afkhami has suggested that Persians fighting in the British Army were more severely affected by the flu than native British soldiers, because they were more likely to be suffering from malaria and its complication anaemia (a reduction in the number of red blood cells or haemoglobin–the oxygen-carrying molecule–in blood), which impairs the immune response.10 The pandemic also purged the world of a disproportionately large number of TB patients who would otherwise have died more slowly over the following decade. In fact, it is possible that TB–once described as ‘the captain of all these men of death’, because of the misery it inflicted throughout the nineteenth century and into the twentieth–was one of the main reasons why the flu killed more men than women globally. In that vulnerable twenty-to-forty-year age group, more men than women had TB, in part because they were more likely to be exposed to it in the workplace.11

  Thus culture shaped biology: men were more likely to go out to work, in many countries, and women were more likely to stay at home. But although more men than women died overall, in some countries that trend was reversed in certain age groups. In India, strikingly, it was reversed in every age group. Why did more women die than men in India, when Indian women were also traditionally home-makers? One argument goes like this: in times of crisis, Indian girls and women–who were already more likely to be neglected and underfed than their male counterparts–were also expected to care for the sick. They therefore had both greater exposure and less resistance to the disease, and dietary taboos may have exacerbated their susceptibility.

  The main religion in India is, and was, Hinduism. Hindus are not necessarily vegetarian, but a vegetarian diet is associated with spiritual serenity, women are more likely to be vegetarian than men, and traditionally, vegetarianism is obligatory for widows. In her detailed analysis of life in a northern Indian village in the 1920s, American anthropologist and missionary Charlotte Viall Wiser noted that the villagers’ diet consisted predominantly of what their fields could furnish–cereals, pulses and vegetables. She was astonished to find that most of them did not lack iron (iron deficiency is a common cause of anaemia), but she described how they extracted every last atom of it from their food. Grains, for example, were not milled but eaten whole, with the iron-rich outer layers intact. She felt that they lived at the margin of deficiency, and that any slight disruption could push them over the edge.12 The drought that followed the failure of the south-west monsoon in the summer of 1918 certainly qualified as such a disruption.

  When all other things were equal, when neither wealth nor diet nor festival season nor travelling habits could differentiate two groups of human beings, there was still that distressing residual disparity that meant that one might be decimated while the other survived more or less intact–as if a god had thrown his thunderbolts carelessly. Death struck very unevenly across the territory of Alaska, for example. Bristol Bay, the worst-affected region, lost close to 40 per cent of its population, but other parts lost less than 1 per cent–on a par with some of the great American metropolises–and a relatively high number of Alaskans, one in five, escaped the disease entirely. This was the recalcitrant core of the variation, and for a long time it defied explanation. Many wondered if the answer lay in human genes–in the way they shaped the host-virus encounter–but how to prove it? People who share genes often share an environment too, which is another way of saying that families tend to live together, so they are exposed to the same germs. Disentangling the two effects would not be easy.

  The Mormons inadvertently provided a way to cut the knot. Mormons, members of the Church of Jesus Christ of Latter-day Saints, believe that the family unit can survive death if all of its members have been baptised, and those who weren’t baptised in life may be baptised after death. They are therefore conscientious genealogists who keep detailed records of their family trees, which they store on millions of rolls of microfilm in a vault under Granite Mountain–a peak in the Wasatch range close to Salt Lake City. The vault, which was built in 1965, is protected by a thirteen-tonne steel door designed to withstand a nuclear explosion, but these days you can access the archives via the Internet. Even more helpfully, the records have been digitally linked to the relevant death certificates, meaning that it is possible to learn at the tap of a key what an individual Mormon died of. In 2008, Frederick Albright and colleagues at the University of Utah identified close to 5,000 Mormons who had died of flu in the previous hundred years. Having reconstructed their family trees, they discovered that a blood relative of one of these index cases was more likely to have died of flu than an unrelated person, even if the relative and the index case had never shared an environment.13

  It was a fascinating hint that flu might have a heritable component, but other studies failed to replicate the finding. Then in January 2011, in the midst of the annual flu season in France, a two-year-old girl was admitted to the intensive care unit of the Necker Hospital for Sick Children in Paris, suffering from ARDS (acute respiratory distress syndrome). Doctors saved her life, and one of them, Jean-Laurent Casanova, sequenced her genome. He wanted to know if it held the key to why an otherwise healthy child had nearly died of a disease that most children shrug off. It turned out that the girl had inherited a genetic defect that meant she was unable to produce interferon, that all-important first-line defence against viruses. As a result, her besieged immune system went straight to plan B: a massive inflammatory response similar to the one pathologists saw in 1918. The child’s genetic defect was rare, but Casanova went on to identify a cluster of similar defects that also result in an inability to make interferon. Collectively, he calculated, these might affect one in 10,000 people–roughly the incidence of ARDS during an annual flu outbreak.14

  What Casanova’s finding meant was that, regardless of their culture, diet, social status or income, one in 10,000 people is particularly vulnerable to flu–a vulnerability inherited from their parents. In the 1918 pandemic, these people were probably among the first to die, but a hundred years on, we have it in our power to level the genetic playing field and give them a fighting chance. The reason is that the genetic defect that prevents an individual from making interferon does not affect his or her ability to produce antibodies. In theory, therefore, such a person can be protected against flu just by being vaccinated with the standard annual flu vaccine. Every year since 2011, the girl whom Casanova first met in the ICU of the Necker children’s hospital has received a shot of flu vaccine and sailed through the subsequent flu season as easily as her peers.

  Casanova had discovered a genetic component to flu and, perhaps, the last piece in the puzzle of why the Spanish flu struck so unevenly. His finding fell on fertile ground, because at the time, scientists were beginning to think about infectious diseases in a new way–that is, as partly genetic. The idea they were working towards was this: all infectious diseases have a genetic component, but in some, one or a few genes control susceptibility to that disease, while in others the genetic component consists of the small, cumulative effects of many genes. In the first case, a defect in one of those genes causes a large increase in susceptibility; in the second, only a small one. If this idea turns out to be correct, we will have to recalibrate the way we think about disease yet again: not only might infectious diseases be partly genetic, but diseases that we have long thought of as genetic or ‘environmental’ in origin might turn out to be partly infectious, too. One theory about Alzheimer’s disease holds, for example, that it is caused by ‘prions’–infectious agents that, until recently, were as shrouded in mystery as viruses were in 1918.

  A hundred and fifty years ago, George Sand was affronted when the residents of Palma, Majorca, asked her to leave on the grounds that her lover’s disease was infectious rather than heritable. Today, we know that TB is caused by the bacterium Mycobacterium tuberculosis, but that susceptibility to that bacterium is inherited. Something similar applies to influenza–a disease that, a hundred years ago, was thought to be bacterial. To the best of our knowledge in 2017, flu is caused by a virus, but it is also partly under the control of human genes. Understanding that helps us to make sense of the extraordinary variability in its manifestation that people found so baffling in 1918. They couldn’t see beyond the surface phenomena; now we’re able to look ‘beneath the bonnet’. (One day, science might help us to explain diseases that mystify us today for the same reason, such as autism spectrum disorder.)

  The revision in how we think about flu seems radical, but perhaps it isn’t as radical as all that. While observing sick silkworms in the nineteenth century, Louis Pasteur made two observations: first, that la flacherie, as the worms’ disease was called (literally, ‘flaccidity’–caused by eating contaminated mulberry leaves, it gave them debilitating diarrhoea), was infectious; and second, that offspring could inherit it from their parents. In all the furore over the first observation, the second was overlooked. Perhaps the time for Pasteur’s second insight has finally come.

  PART SEVEN: The Post-Flu World

  Linus H. French with some of the flu orphans rescued from Bristol Bay, Alaska, 1919

 
