Pale Rider: The Spanish Flu of 1918 and How It Changed the World
Flagging up infection
The devastating plagues of the Middle Ages gave birth to the concept of disease surveillance–that is, the gathering of data on disease outbreaks so as to enable an appropriate and timely response, if not to the epidemic in progress then at least to the next one. To begin with, disease reporting was crude: diagnoses were vague, numbers approximate. Gradually, however, the data grew in volume and accuracy. Doctors started recording not only the numbers of sick and dead, but also who they were, where they lived, and when they first reported symptoms. They realised that by pooling and analysing these data, they could learn a great deal about where epidemics came from and how they spread. By the twentieth century, a number of countries had made disease reporting compulsory, and there was also recognition of the fact that infectious diseases don’t respect borders. In 1907, European states set up the International Office of Public Hygiene in Paris, as a centralised repository of disease data, and to oversee international rules concerning the quarantining of ships.
In 1918, if a doctor diagnosed a reportable disease, he was obliged to notify local, state or national health authorities. The penalties for not doing so, though rarely enforced, included fines and revocation of his licence. Only diseases that were considered to pose a serious risk to public health were reportable, so that in the US, for example, smallpox, TB and cholera were reportable at the beginning of 1918, but influenza was not. Very few of the countries that boasted well-organised disease-reporting systems required doctors to report flu at that time, which means, quite simply, that the Spanish flu took the world by surprise.
There were local reports of outbreaks, thanks mainly to the newspapers and to conscientious doctors who realised that this one was worse than most, but almost no central authority had an overview of the situation. Unable to connect the dots, they were ignorant of its date of arrival, point of entry, and speed and direction of travel. There was, in other words, no alarm system in place. The disease was made reportable, belatedly, but by the time the ancient instinct had been roused to batten down the hatches, it was too late: the disease was on the inside.
There were exceptions, but these owed their luck mainly to the happenstance of being islands, and remote ones at that. Iceland had a population of fewer than 100,000 at the time, and when the flu arrived in its midst, word quickly spread. Icelanders set up a roadblock on the main road leading to the north of the island, and posted a sentry at a place where an unbridged glacial river crossed a road, forming a natural barrier to the eastern part of the island. Eventually, the authorities imposed a quarantine on incoming ships, and the combination of these measures helped keep more than a third of the Icelandic population flu-free.
Australia saw the epidemic coming from a long way off, both in time and space. Its authorities first heard about a flu epidemic in Europe in the northern hemisphere summer of 1918, and in September they became aware of the horrifying reports of the lethal second wave. Having watched it advance through Africa and Asia, they finally introduced quarantine procedures at all Australian ports on 18 October (New Zealand did not follow suit). When jubilant crowds gathered in Sydney’s Martin Place to celebrate the armistice in November, therefore, they enjoyed the privilege–almost unique in the world–of having nothing to fear from the virus. Though the country did receive the third wave in early 1919, its losses would have been far greater had it let the autumn wave in.
The Philippines were not protected by their island status. When flu broke out there, it didn’t occur to the occupying Americans that it might have come from outside, even though the first casualties were longshoremen toiling in the ports of Manila. They assumed its origins were indigenous–they called it by the local name for flu, trancazo–and made no attempt to protect the local population, which numbered 10 million. The only exception was the camp on the outskirts of Manila where Filipinos were being trained to join the US war effort, around which they created a quarantine zone. In some remote parts of the archipelago, 95 per cent of communities fell ill during the epidemic, and 80,000 Filipinos died.3
The starkly contrasting fates of American and Western Samoa–two neighbouring groups of islands in the South Pacific–show what happened when the authorities got the direction of travel right, and when they got it wrong. The American authorities who occupied American Samoa realised not only that the threat came from outside the territory, but also that indigenous Samoans were more vulnerable to the disease than white-skinned settlers, due to their history of isolation, and they deployed strict quarantine measures to keep it out. American Samoa got off scot-free, but Western Samoa, under the control of New Zealand, was not so lucky. After infection reached the islands via a steamer out of Auckland, local authorities made the same error as the occupiers of the Philippines, and assumed that it was of indigenous origin. One in four Western Samoans died in the ensuing tragedy which, as we’ll see, would dramatically shape the islands’ future.
Of course, for the most flagrant illustration of the global failure to report the Spanish flu, we need look no further than its name. The world thought it had come from Spain, when in fact only one country could legitimately accuse the Spanish of sending them the angel of death: Portugal. Injustice breeds injustice, and piqued at being made the world’s scapegoat, the Spanish pointed the finger back at the Portuguese. Thousands of Spanish and Portuguese people provided temporary labour in France during the war, replacing French workers who had gone to fight, and though these labourers undoubtedly ferried the virus across borders, the Spanish singled out the Portuguese for blame. They set up sanitary cordons at railway stations, and sealed train wagons carrying Portuguese passengers so that they could have no contact with supposedly infection-free Spaniards in other wagons. At Medina del Campo, an important railway junction 150 kilometres north-west of Madrid, Portuguese travellers were sprayed with foul-smelling disinfectants and detained for up to eight hours. Those who protested were fined or even imprisoned. On 24 September 1918, much to the indignation of its neighbours, Spain closed both borders–a pointless move, since by then illness was already spreading through the castle barracks in Zamora. The Naples Soldier was back in the country.
Blocking the spread
An epidemic, like a forest fire, depends on ‘fuel’–that is, individuals susceptible to infection. It grows exponentially from a few initial cases–the ‘spark’–because those cases are surrounded by a vast pool of susceptible individuals. Over time, however, that pool shrinks as people either die or recover and acquire immunity. If you were to draw a graph of an epidemic, therefore, with ‘number of new cases’ on the vertical axis, and ‘time’ on the horizontal axis, you’d be looking at a normal distribution, or bell curve.
This is the classical form of an epidemic, though endless variations are possible–the curve’s height or width may vary, for example, or it may have more than one peak. The basic form remains recognisable, which means that it can be described in mathematical terms. In the twenty-first century, the mathematical modelling of epidemics is highly sophisticated, but scientists had already begun to think that way in 1918. Two years earlier, in his ‘theory of happenings’, the British malaria expert and Nobel laureate Ronald Ross had come up with a set of differential equations that could help determine, at any given time, the proportion of a population that was infected, the proportion that was susceptible, and the rate of conversion between the two (with some diseases, infected individuals could return to the susceptible group on recovery). A happening, according to Ross’s definition, was anything that spread through a population, be it a germ, a rumour or a fashion.
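To see how such bookkeeping works in practice, here is a minimal modern sketch of a compartmental model in the spirit of Ross’s equations–not his original notation, and with every name and rate constant below an illustrative assumption rather than a figure from 1918. It tracks the susceptible, the infected and the recovered, and the rate of conversion between them.

```python
# A minimal SIR-style sketch (susceptible -> infected -> recovered), integrated
# with simple Euler steps. This is a modern illustration of the approach, not
# Ross's own equations; the function name and the rate constants beta
# (transmission) and gamma (recovery) are assumptions made for this example.

def simulate_epidemic(population, initial_infected, beta, gamma, days, dt=0.1):
    """Return the number of new infections recorded on each day."""
    s = float(population - initial_infected)  # susceptible
    i = float(initial_infected)               # currently infected
    r = 0.0                                   # recovered (now immune) or dead
    new_cases_per_day = []
    steps_per_day = int(round(1 / dt))
    for _ in range(days):
        daily_new = 0.0
        for _ in range(steps_per_day):
            infections = beta * s * i / population * dt  # S -> I conversions
            recoveries = gamma * i * dt                  # I -> R conversions
            s -= infections
            i += infections - recoveries
            r += recoveries
            daily_new += infections
        new_cases_per_day.append(daily_new)
    return new_cases_per_day
```

Plotting new_cases_per_day against time yields the familiar bell-shaped epidemic curve: a steep, near-exponential rise from the initial ‘spark’, a peak, and a decline as the pool of susceptibles shrinks.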
Ross’s work, along with that of others, illustrated in hard numbers something that people had long understood instinctively–that a happening will begin to recede when the density of susceptible individuals falls below a certain threshold. An epidemic will run its course and vanish on its own, without intervention, but measures that reduce that density–collectively called ‘social distancing’–can both bring it to an end sooner and reduce the number of casualties. You can think of the area under the epidemic curve as reflecting the total amount of misery the epidemic causes. Now, picture the difference in size of that area when the curve is high and broad–that is, without intervention–and when it is low and narrow, with intervention. That is potentially the difference between an overwhelmed public health infrastructure, where patients can’t get treated, doctors and nurses are pushed beyond exhaustion and dead bodies accumulate in morgues, and a functioning system that, though stretched to its limit, is still managing the flux of the sick.
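Running the sketch above under two hypothetical scenarios–one left to burn through the population unchecked, one with the transmission rate lowered to stand in for social distancing–makes the point about the area under the curve concrete. The parameter values are invented purely for illustration.

```python
# Hypothetical parameters: reduced contact is represented by lowering beta.
unmitigated = simulate_epidemic(population=100_000, initial_infected=10,
                                beta=0.5, gamma=0.2, days=200)
mitigated = simulate_epidemic(population=100_000, initial_infected=10,
                              beta=0.3, gamma=0.2, days=200)

print("peak daily cases:", round(max(unmitigated)), "vs", round(max(mitigated)))
print("total infected:  ", round(sum(unmitigated)), "vs", round(sum(mitigated)))
```

The mitigated curve is both lower and broader, and the area beneath it–the total number of people who fall ill–is smaller.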
In 1918, as soon as the flu had become reportable and the fact of the pandemic had been acknowledged, a raft of social distancing measures was put in place–at least in countries that had the resources to do so. Schools, theatres and places of worship were closed, the use of public transport systems was restricted and mass gatherings were banned. Quarantines were imposed at ports and railway stations, and patients were removed to hospitals, which set up isolation wards in order to separate them from non-infected patients. Public information campaigns advised people to use handkerchiefs when they sneezed and to wash their hands regularly; to avoid crowds, but to keep their windows open (because germs were believed to breed in warm, humid conditions).
These were tried and tested measures, but others were more experimental. The Spanish flu was, to all intents and purposes, the first post-Pasteurian flu pandemic, since it was only well into the previous pandemic–the Russian flu of the 1890s–that Richard Pfeiffer had announced that he had identified the microbial cause of the disease. His model still prevailed in 1918, but it was, of course, wrong. With no diagnostic test available, and with experts disagreeing as to the agent of contagion–even, in some cases, as to the disease’s very identity–health authorities found themselves caught on the horns of a dilemma.
In some places, for example, the wearing of a layered gauze mask over the mouth was recommended–and in Japan this probably marked the beginning of the practice of mask-wearing to protect others from one’s own germs–but health officials disagreed as to whether masks actually reduced transmission. They were divided over the use of disinfectant too. In late October 1918, well into the autumn wave–when metro stations and theatres across Paris were being doused in bleach–a journalist asked Émile Roux, director of the Pasteur Institute, no less, if disinfection was effective. The question took Roux by surprise. ‘Absolutely useless,’ he replied. ‘Put twenty people in a disinfected room, insert one flu patient. If he sneezes, if a fleck of his nasal mucus or saliva reaches his neighbours, they will be contaminated despite the disinfected room.’4
It had long been assumed that school-age children represented ideal vectors of infection, because they are among the preferred victims of seasonal flu, they meet and mingle on a daily basis, and their snot control has a tendency to be suboptimal. Closing the schools was therefore a knee-jerk reaction to any flu epidemic, and so it was in 1918. A couple of more thoughtful voices spoke up against the clamour, however–and occasionally, as we’ll see, even won the day. They belonged to observant individuals who had noticed two things: that school-age children were not the primary targets of this particular flu, and that even when they did fall sick, it wasn’t clear where they had caught the disease–at home, at school, or somewhere in between. If it wasn’t school, then closing the schools would neither protect the children nor stop the spread.
The most heated discussions of all, however, revolved around vaccination. Vaccination was older than germ theory–Edward Jenner had successfully used cowpox to vaccinate a boy against smallpox in 1796–so it was undeniably possible to create an effective vaccine without knowing the identity of the microbe against which you were eliciting an immune response. Pasteur had, after all, created a vaccine against rabies without knowing that rabies was caused by a virus. In 1918, government laboratories produced large quantities of vaccines against Pfeiffer’s bacillus and other bacteria thought to cause respiratory disease, and some of them actually seemed to save lives. Mostly, though, they had no effect: those who were vaccinated continued to fall ill and die.
We now know that the reason some of the vaccines worked was that they blocked the secondary bacterial infections that caused the pneumonia that killed so many patients. At the time, however, doctors interpreted the results according to their own pet theory of flu. Some pointed to the effective vaccines as proof that Pfeiffer’s bacillus was the culprit. Others instinctively understood that the vaccines were dealing with the complication, not the underlying disease, the nature of which still eluded them. There were slanging matches, public disavowals. The American Medical Association advised its members not to put their faith in vaccines, and the press reported it all. The controversy was counter-productive, because the older measures–the ones that kept the sick and the healthy apart–worked, as long as people complied.
Getting people to comply
Quarantine and other disease-containment strategies place the interests of the collective over those of the individual. When the collective is very large, as we’ve said, those strategies have to be imposed in a top-down fashion. But mandating a central authority to act in the interests of the collective potentially creates two kinds of problems. First, the collective may have competing priorities–the need to make money, say, or the need to raise an army–and deny or water down the authority’s powers of enforcement. And second, the rights of individuals risk getting trampled on, especially if the authority abuses the measures placed at its disposal.
The competing interests of the collective are the reason that historian Alfred Crosby, who told the story of the flu in America, argued that democracy was unhelpful in a pandemic. The demands of national security, a thriving economy and public health are rarely aligned, and elected representatives defending the first two undermine the third, simply by doing their job. In France, for example, powerful bodies including the Ministry of the Interior and the Academy of Medicine ordered the closure of theatres, cinemas, churches and markets, but this rarely happened, because prefects in the French departments didn’t enforce the measures ‘for fear of annoying the public’.5 But a concentration of power at the top didn’t guarantee effective containment either. In Japan, which was undergoing a transition from rule by a small group of oligarchs to a nascent democracy at the time, the authorities did not even consider closing public meeting places. A police officer in Tokyo observed that the authorities in Korea–then a Japanese colony–had banned all mass gatherings, even for worship. ‘But we can’t do this in Japan,’ he sighed, without giving a reason.
Individuals also had cause to be wary in 1918. Throughout the last decades of the nineteenth century–that is, in very recent memory–public health campaigns had targeted marginalised groups, as eugenics and germ theory came together in a toxic mix. India is a case in point. The British colonial authorities had long taken a laissez-faire attitude to indigenous health in that country, believing it to be incorrigibly unhygienic, but when bubonic plague broke out in 1896, they realised the threat that deadly disease posed to their interests and went to the other extreme, imposing a brutal campaign to rout the infection. In the city of Pune, for example, the sick were isolated in hospitals, from which most never returned, while their relatives were segregated in ‘health camps’. The floors of their houses were dug up, their personal effects were fumigated or burnt, and fire engines pumped such enormous quantities of carbolic acid into the buildings that one bacteriologist reported having to put up an umbrella before entering.6
Blinkered by their negative perception of the ‘barefoot poor’, the British authorities refused to believe–at least in the early days of the plague epidemic–that the disease was spread by rat fleas. If they had, they might have realised that a better strategy would have been to inspect imported merchandise rather than people, and to de-rat buildings rather than disinfect them. As for the Indians on the receiving end of these measures, they came to see hospitals as ‘places of torture and places intended to provide material for experiments’.7 Indeed, in 1897, the head of the Pune Plague Committee, Walter Charles Rand, was murdered by three local brothers, the Chapekars, who were hanged for their crime (today, a monument in the city honours them as freedom fighters).
Similar violations had taken place in other parts of the world. In Australia, a policy had been implemented to remove mixed-race Aboriginal children from their parents and place them with white families. The thinking was that ‘pure’ Aboriginals were doomed to extinction, but those whose blood was diluted with that of the ‘superior’ white races could potentially be saved by being assimilated into white society (this at a time when Aboriginals were dying in large numbers due to infectious diseases brought into their midst by white people). In Argentina, meanwhile, a programme had been launched to rid cities of people of African origin entirely, on the grounds that they posed a risk to the health of other citizens–a measure the Brazilian government considered but ultimately deemed unworkable because the vast majority of Brazilians were of African descent.
It was against this backdrop that, in 1918, health authorities once again announced the imposition of disease-containment measures. The pattern varied from country to country, but in general they were a mixture of mandatory and voluntary requirements. You were urged to use a handkerchief and to open your window at night, but nothing bad happened to you if you didn’t. Vigilant police officers might stop you spitting in the street, and fine or even imprison you for a repeat offence, but if you violated the ban on mass gatherings by attending a political meeting or sporting event, you risked a band of them bursting in with batons and rudely breaking up the party. For breaching quarantine regulations, or a sanitary cordon, you could expect a very severe punishment indeed.