Although several studies have looked at the spread of animal behaviour in captivity, it has been difficult to do the same in wild populations. Given great tits’ reputation for innovation, zoologist Lucy Aplin and her colleagues set out to see how these ideas propagated. First they needed a new innovation. The team headed out into Wytham Woods, near Oxford, and set up a puzzle box containing mealworms. If the birds wanted to get the food inside, they’d need to move a sliding door in a certain direction. To see how the birds interacted, the researchers tagged almost all the tits in the area with automated tracking devices. ‘We could get real-time information about how and when individuals acquired knowledge,’ Aplin said. ‘The automated data-collection also meant we could let the process run without disturbance.’[32]
The birds grouped together into several different sub-populations; in five of these populations, the researchers taught a couple of birds how to solve the puzzle. The technique spread quickly: within twenty days, three in every four birds had picked up the idea. The team also studied a control group of birds, which hadn’t been trained. A few eventually worked out how to get into the box, but it took much longer for the idea to emerge and spread.
In the trained populations, the idea was also highly resilient. Many of the birds died from one season to the next, but the knowledge didn’t. ‘The behaviour re-emerged very quickly each winter,’ Aplin said, ‘even if there were only a small number of individuals that were alive from the previous year and had knowledge of the behaviour.’ She also noticed that transmission of information between birds had some familiar features. ‘Some general principles are similar to how disease spreads through populations, for instance more social individuals being more likely to encounter and adopt new behaviours, and socially central individuals can act as “keystones” or “super-spreaders” in the diffusion of information.’
The study also demonstrated that social norms could emerge in wild animals. There were actually a couple of ways to get into the puzzle box, but it was the solution the researchers had introduced that became the accepted method. Such conformity is even more common when we look at humans. ‘We’re social learning specialists,’ Aplin said. ‘The social learning and culture we observe in human societies is of a magnitude greater than anything we observe in the rest of the animal kingdom.’
We often share characteristics with people we know, from health and lifestyle choices to political views and wealth. In general, there are three possible explanations for such similarities. One is social contagion: perhaps you behave in a certain way because your friends have influenced you over time. Alternatively, it may be the other way around: you may have chosen to become friends because you already shared certain characteristics. This is known as ‘homophily’, the idea that ‘birds of a feather flock together’. Of course, your behaviour might be nothing to do with social connections at all. You may just happen to share the same environment, which influences your behaviour. Sociologist Max Weber used the example of a crowd of people opening umbrellas when it starts to rain. They aren’t necessarily reacting to each other; they’re reacting to the clouds above.[33]
It can be tough to work out which of the three explanations – social contagion, homophily or a shared environment – is the correct one. Do you like a certain activity because your friend does, or are you friends because you both like that activity? Did you skip your running session because your friend did, or did you both abandon the idea because it was raining? Sociologists call it ‘the reflection problem’, because one explanation can mirror another.[34] Our friendships and behaviour will often be correlated, but it can be very difficult to show that contagion is responsible.
What we need is a way to separate social contagion from the other possible explanations. The most definitive way to do this would be to spark an outbreak and watch what happens. This would mean introducing a specific behaviour, like Aplin and her colleagues did with birds, and measuring how it spreads. Ideally we would compare results with a randomly selected ‘control’ group of individuals – who aren’t exposed to the spark – to see how much effect the outbreak has. This type of experiment is common in medicine, where it’s known as a ‘randomised controlled trial’.
How might such an approach work in humans? Say we wanted to run an experiment to study the spread of cigarette smoking between friends. One option would be to introduce the behaviour we’re interested in: pick some people at random, get them to take up smoking, and then see whether the behaviour spreads through their friendship groups. Although this experiment might tell us whether social contagion occurs, it doesn’t take much to spot that there are some big ethical problems with this approach. We can’t ask people to adopt a harmful activity like smoking on the off chance it will help us understand social behaviour.
Rather than randomly introducing smoking, we could instead look at how existing smoking behaviour spreads through new social connections. But this would mean rearranging people’s friendships and locations at random and tracking whether people adopt their new friends’ behaviour. Again, this is generally not feasible: who wants to reshuffle their entire friendship network for a research project?
When it comes to designing social experiments, Aplin’s work on birds had some big advantages over studies of humans. Whereas humans may keep similar social links for years or decades, birds have a relatively short lifespan, which meant new networks of interactions would form each year. The team could also tag most of the birds in the area, making it possible to track the network in real-time. This meant the researchers could introduce a new idea – the puzzle solution – and watch how it spread through the newly formed networks.
There are some circumstances in which new human friendships randomly form all at once, for example when recruits are assigned to military squadrons or students are allocated to university halls.[35] Unfortunately for researchers, these are rare examples. In most real-life situations, scientists can’t meddle with behaviour or friendship dynamics to see what might happen. Instead, they must try and gain insights from what they can observe naturally. ‘Though a lot of the best strategies involve randomisation or some plausible source of randomness, for many things we really care about as social scientists and citizens, we’re not going to be able to randomise,’ said Dean Eckles, a social scientist at MIT.[36] ‘So we should do the best job we can with purely observational research.’
Much of epidemiology relies on observational analysis: in general, researchers can’t deliberately start outbreaks or give people severe illnesses to understand how they work. This has led to some suggestions that epidemiology is closer to journalism than science, because it just reports on the situation as it happens, instead of running experiments.[37] But such claims ignore the huge improvements in health that have come from observational studies.
Take smoking. In the 1950s, researchers started to investigate the massive rise in lung cancer deaths that had occurred during the preceding decades.[38] There seemed to be a clear link with the popularity of cigarettes: people who smoked were nine times more likely to die of the disease than non-smokers. The problem was how to show that smoking was actually causing cancer. Ronald Fisher, a prominent statistician (and heavy pipe smoker), argued that just because the two things were correlated, it didn’t mean one was causing the other. Perhaps smokers had very different lifestyles to non-smokers, and it was one of these differences, rather than smoking, that was causing the deaths? Or maybe there was some genetic trait – as yet unidentified – that happened to make people both more likely to develop lung cancer and more likely to smoke? The issue divided the scientific community. Some, like Fisher, argued that the patterns linking smoking and cancer were just a coincidence. Others, like epidemiologist Austin Bradford Hill, thought that smoking was to blame for the rising deaths.
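That figure of ‘nine times more likely’ is what epidemiologists call a relative risk: the rate of disease among the exposed divided by the rate among the unexposed. As a minimal sketch of the arithmetic – using made-up illustrative numbers, not the actual figures from the 1950s studies:

```python
# Toy relative-risk calculation. The cohort numbers below are invented
# for illustration; they are not the historical data.
def relative_risk(cases_exposed, total_exposed, cases_unexposed, total_unexposed):
    """Risk of disease in the exposed group divided by risk in the unexposed group."""
    risk_exposed = cases_exposed / total_exposed
    risk_unexposed = cases_unexposed / total_unexposed
    return risk_exposed / risk_unexposed

# Hypothetical cohort: 90 lung cancer deaths per 100,000 smokers versus
# 10 per 100,000 non-smokers gives a relative risk of 9.
print(relative_risk(90, 100_000, 10, 100_000))
```

A relative risk of 9 is a strikingly strong association – which is exactly why the question of whether it reflected causation mattered so much.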
Of course, there was an experiment that would have given a definitive answer, but as we’ve already seen, it wouldn’t have been ethical to run it. Just as modern social scientists can’t make people take up smoking to see if the habit spreads, researchers in the 1950s couldn’t ask people to smoke to find out if it caused cancer. To solve the puzzle, epidemiologists had to find a way to work out whether one thing causes another without running an experiment.
Ronald Ross spent August 1898 waiting to announce his discovery that mosquitoes transmitted malaria. While he battled to get government permission to publish the work in a scientific journal, he feared others would pounce on his research and take the credit. ‘Pirates lay in the offing ready to board me,’ as he put it.[39]
The pirate he feared most was a German biologist named Robert Koch. Stories were circulating that Koch had travelled to Italy to study malaria. If he managed to infect a person with the parasite, it could overshadow Ross’s work, which had used only birds. Relief came a few weeks later, in the form of a letter from Patrick Manson. ‘I hear Koch has failed with the mosquito in Italy,’ Manson wrote, ‘so you have time to grab the discovery for England.’
Eventually Koch did publish a series of malaria studies, which fully credited Ross’s work. In particular, Koch suggested that children in malarial areas acted as reservoirs of infection, because older adults had often developed immunity to the parasite. Malaria was the latest in a line of new pathogens for Koch. During the 1870s and 1880s, he had shown that bacteria were behind diseases like anthrax in cattle and tuberculosis in humans. In the process, he’d come up with a set of rules – or ‘postulates’ – to identify whether a particular germ is responsible for a disease. To start with, he thought that it should always be possible to find the germ inside someone who has the disease. Then, if a healthy host – like a laboratory animal – was exposed to this germ, it should develop the disease too. Finally, it should be possible to extract a sample of the germ from the new host once they fall ill; this germ should be the same as the one they were originally exposed to.[40]
Koch’s postulates were useful for the emerging science of ‘germ theory’, but he soon realised they had limitations. The biggest problem was that some pathogens don’t always cause disease. Sometimes people would get infected but not have noticeable symptoms. Researchers therefore needed a more general set of principles to work out what might be behind a disease.
For Austin Bradford Hill, the disease of interest was lung cancer. To show that smoking was responsible, he and his collaborators would eventually compile several types of evidence. He’d later summarise these as a set of ‘viewpoints’, which he hoped would help researchers decide whether one thing causes another. First on his list was the strength of correlation between the proposed cause and effect. For example, smokers were much more likely to get lung cancer than non-smokers. Bradford Hill said this pattern should be consistent, cropping up in different places across multiple studies. Then there was timing: did the cause come before the effect? Another indicator was whether the disease was specific to a certain type of behaviour (although this isn’t always helpful because non-smokers can get lung cancer too). Ideally there would also be evidence from an experiment: if people stopped smoking, it should reduce their chances of cancer.
In some cases, Bradford Hill said it’s possible to relate the level of exposure to the risk of disease. For instance, the more cigarettes a person smokes, the more likely they are to die from them. What’s more, it may be possible to draw an analogy with a similar cause and effect, such as another chemical that causes cancer. Finally, Bradford Hill suggested it’s worth checking to see whether the cause is biologically plausible and fits with what’s already known to scientists.
Bradford Hill emphasised that these viewpoints were not a checklist to ‘prove’ something beyond dispute. Rather, the aim was to help answer a crucial question: is there any better explanation for what we are seeing than simple cause and effect? As well as providing evidence that smoking caused cancer, these kinds of methods have helped researchers uncover the source of other diseases. During the 1950s and 1960s, epidemiologist Alice Stewart gathered evidence that low-dose radiation could cause leukaemia.[41] At the time, new X-ray technology was regularly being used on pregnant women; there were even X-rays in shoe shops, so people could see their feet inside the shoes. After a long battle by Stewart, these hazards were removed. More recently, researchers at the US CDC used the Bradford Hill viewpoints to argue that infections with Zika were causing birth defects.[42]
Establishing such causes and effects is inherently difficult. Often there will be an intense debate about what is responsible and what should be done. Still, Stewart believed that, faced with troubling evidence, people should act despite the inevitable uncertainty involved. ‘The trick is to get the best guess of the thickness of the ice when crossing a lake,’ she once said. ‘The art of the game is to get the correct judgment of the weight of the evidence, knowing that your judgment is subject to change under the pressure of new observations.’[43]
When Christakis and Fowler originally set out to study social contagion, they’d planned to do it from scratch. The idea was to recruit 1,000 people, get each of them to name five contacts, and then get each of their contacts to name five more contacts. In total, they would have had to track the behaviour of 31,000 people in detail for multiple years. A study that large would have cost around $30m.[44]
While exploring options, the pair got in touch with the team running the Framingham Heart Study, because it would be easier to recruit those initial 1,000 people from an existing project. When Christakis visited Marian Bellwood, the project co-ordinator, she mentioned they kept forms in the basement with details of each participant. To avoid losing contact with participants, they’d got people to list their relatives, friends and co-workers on the forms. It turned out that many of these contacts were also in the study, which meant their health information was being recorded too.
Christakis was astonished. Rather than recruiting a completely new set of social contacts, they could instead piece together the social network among Framingham participants. ‘I called James from the parking lot and said, “you won’t believe this!”,’ he recalled. There was just one catch: they’d have to go through twelve thousand names and fifty thousand addresses to identify the existing links. ‘We had to decipher everyone’s handwriting,’ Christakis said. ‘It took two years to computerise it.’
The pair had initially thought about analysing the spread of smoking, but decided obesity was a better starting point. Smoking depended on what participants reported, whereas obesity could be observed directly. ‘Because we were doing something so novel, we wanted to start with something that could be objectively measured,’ Christakis said.
The next step was to estimate whether obesity was being transmitted through the network. This meant tackling the reflection problem, separating potential contagion from homophily or environmental factors. To try and rule out the birds-of-a-feather effect of homophily, the pair included a time lag in the analysis; if obesity really spread from one person to their friend, the friend couldn’t have become obese first. Environmental factors were trickier to exclude, but Christakis and Fowler tried to tackle the issue by looking at the direction of friendship. Suppose I list you as a friend in a survey, but you don’t list me. This suggests I am more influenced by you than you are by me. However, if in reality we’re actually both influenced by some shared environmental factor – like a new fast food restaurant – our friendship direction shouldn’t affect who becomes obese. Christakis and Fowler found evidence that it did matter, suggesting that obesity could be contagious.
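The logic of that directionality test can be sketched as a toy simulation. This is my own illustration with invented probabilities, not Christakis and Fowler’s actual statistical method: in a ‘shared environment’ world, who named whom makes no difference, whereas in a ‘contagion’ world, influence flows from the named friend to the person who named them.

```python
import random

def simulate(model, n_pairs=100_000, base=0.1, boost=0.3, seed=42):
    """Toy model of one-way friendships: in each pair, the 'namer' lists the
    'named' person as a friend, but not the other way round. Returns
    P(namer obese | named obese) and P(named obese | namer obese).
    All probabilities here are invented for illustration."""
    rng = random.Random(seed)
    named_obese = both_a = 0   # for P(namer obese | named obese)
    namer_obese = both_b = 0   # for P(named obese | namer obese)
    for _ in range(n_pairs):
        if model == "environment":
            # A shared factor (say, a new fast-food outlet) raises risk
            # for both people equally, regardless of who named whom.
            shared = rng.random() < 0.5
            p = base + (boost if shared else 0.0)
            namer = rng.random() < p
            named = rng.random() < p
        else:  # "contagion": influence flows from the named friend to the namer
            named = rng.random() < base
            p = base + (boost if named else 0.0)
            namer = rng.random() < p
        if named:
            named_obese += 1
            both_a += namer
        if namer:
            namer_obese += 1
            both_b += named
    return both_a / named_obese, both_b / namer_obese

# Under a shared environment the two conditional probabilities come out equal;
# under contagion the namer is more affected by the named friend than vice versa.
print(simulate("environment"))
print(simulate("contagion"))
```

The asymmetry between the two conditional probabilities is the signature the directionality argument looks for: a purely environmental cause can’t produce it, because the environment doesn’t know which friend named which.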
When the analysis was published, it received sharp criticism from some researchers. Much of the debate came down to two main points. The first was that the statistical evidence could have been stronger: the result showing that obesity was contagious was not as definitive as it would need to be for, say, a clinical trial showing whether a new drug worked. The second criticism was that, given the methods and data Christakis and Fowler had used, they could not conclusively rule out other explanations. In theory, it was possible to imagine a situation involving homophily and environment that could have produced the same pattern.
In my view, these are both reasonable criticisms of the research. But it doesn’t mean that the studies weren’t useful. Commenting on the debate about Christakis and Fowler’s early papers, statistician Tom Snijders suggested that the studies had limitations, but were still important because they’d found an innovative way to put social contagion on scientists’ agenda. ‘Bravo for the imagination and braveness of Nick Christakis and James Fowler.’[45]
In the decade since Christakis and Fowler published their initial analysis of the Framingham data, evidence for social contagion has accumulated. Several other research groups have also shown that things like obesity, smoking, and happiness can be contagious. As we’ve seen, it is notoriously difficult to study social contagion, but we now have a much better understanding of what can spread.
The next step will be to move beyond simply saying that contagion exists. Showing that behaviour can catch on is equivalent to knowing that the reproduction number is above zero: on average, there will be some transmission, but we don’t know how much. Of course, this is still useful information, because it shows contagion is a factor we need to think about. It tells us the behaviour is capable of spreading, even if we can’t predict how big the outbreak might be. However, if governments and other organisations want to address health issues that are contagious, they’ll need to know more about the actual extent of social contagion, and what impact different policies might have. If one person in a friendship group becomes overweight, exactly how much influence will it have on others? If you become happier, how much will your community’s happiness increase? Christakis and Fowler have acknowledged that it’s tricky to estimate the precise extent of social contagion. What’s more, addressing such questions often means using imperfect data and methods. But as new datasets become available, they point out others will be able to build on their analysis, moving towards an accurate measurement of contagion.
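The point about the reproduction number can be illustrated with a toy branching process, in which each adopter passes the behaviour on to a random number of others. This is my own sketch, not a model from the book, and the parameters are purely illustrative; but it shows why knowing only that transmission happens, without knowing how much, leaves the eventual outbreak size wide open.

```python
import math
import random

def outbreak_size(R, rng, max_size=5_000):
    """Total adopters when each adopter passes the behaviour on to a
    Poisson(R) number of others; capped so supercritical runs stay finite."""
    def poisson(lam):
        # Knuth's method for Poisson sampling; fine for small lam.
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                return k
            k += 1
    size = active = 1
    while active and size < max_size:
        active = sum(poisson(R) for _ in range(active))
        size += active
    return size

# Small changes in R produce very different typical outbreak sizes:
# subcritical behaviours fizzle out, supercritical ones can take off.
rng = random.Random(1)
for R in (0.5, 0.9, 1.5):
    sizes = [outbreak_size(R, rng) for _ in range(100)]
    print(R, sum(sizes) / len(sizes))
```

In this toy model, knowing that R is above zero tells you almost nothing about the average outbreak: the same qualitative fact – ‘it spreads’ – is consistent with outbreaks of a handful of people or thousands, which is exactly why measuring the extent of social contagion matters for policy.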