The Rules of Contagion
Given that our attention is so valuable, tech companies have a big incentive to keep us online. The more time we spend using their products, the more information they can collect, and the better they can tailor their content and adverts. Sean Parker, the founding president of Facebook, has previously spoken about the mindset of those who’d built early social media applications. ‘That thought process was all about: “How do we consume as much of your time and conscious attention as possible?”’ he said in 2016.[85] Other companies have since followed suit. ‘We’re competing with sleep,’ joked Netflix CEO Reed Hastings in 2017.[86]
One way to keep us hooked on an app is through design. Tristan Harris, who specialises in the ethics of design, has compared the process to a magic trick. He notes that businesses will often try and guide our choices towards a specific outcome. ‘Magicians do the same thing,’ he once wrote. ‘You make it easier for a spectator to pick the thing you want them to pick, and harder to pick the thing you don’t.’[87] Magic tricks work by controlling our perception of the world; user interfaces can do the same.
Notifications are a particularly powerful way of keeping us engaged. The average iPhone user unlocks their phone over eighty times a day.[88] According to Harris, this behaviour taps into the same psychology as gambling addiction: ‘When we pull our phone out of our pocket, we’re playing a slot machine to see what notifications we got,’ he suggested. Casinos capture players’ attention by including payoffs that are infrequent and highly variable. Sometimes people get a reward; sometimes they get nothing. In many apps, the sender can also see if we’ve read their message, which encourages us to respond more quickly. The more we interact with the app, the more we need to keep interacting. ‘It’s a social-validation feedback loop,’ as Sean Parker put it. ‘It’s exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.’[89]
There are several other design features that keep us viewing and sharing content. In 2010, Facebook introduced ‘infinite scrolling’, removing the distraction of having to change page. Unlimited content is now common on most social media feeds; since 2015, YouTube has automatically played another video after the current one ends. Social media design is also centred on sharing; it’s difficult for us to post content without seeing what others are up to.
Although not all features were originally intended to be so addictive, people are increasingly aware of how apps can influence their behaviour.[90] Even developers have grown wary of their own inventions. Justin Rosenstein and Leah Pearlman were part of the team that introduced Facebook’s ‘like’ button. In recent years, both have reportedly tried to escape the allure of notifications. Rosenstein had his assistant put parental controls on his phone; Pearlman, who later became an illustrator, hired a social media manager to look after her Facebook page.[91]
As well as encouraging interactions, design can also hinder them. WeChat, China’s vastly popular social media app, had over a billion active users in 2019. The app brings together a wide range of services: users can shop, pay bills and book travel, as well as sending messages to each other. People can also share ‘Moments’ (i.e. images or media) with their friends, much like the Facebook News Feed. Unlike Facebook, however, WeChat users can only ever see their friends’ comments on posts.[92] This means that if you have two friends who aren’t friends with each other, they can’t see everything that’s been said. This changes the nature of interactions. ‘It prevents what I would describe as conversation from emerging,’ Dean Eckles said. ‘Anybody who posts anything as a comment knows that it’s possible that it will be taken totally out of context, because others may see only their comment and not what happened previously in that thread.’ Facebook and Twitter have widely shared posts with thousands of public comments below. In contrast, attempts at WeChat discussions inevitably look fragmented or confused, which deters users from trying.
Chinese social media discourages collective action in several ways, including deliberate barriers created by government censorship. A few years ago, political scientist Margaret Roberts and her colleagues tried to reconstruct the process of Chinese censorship. They created new accounts, posted different types of content and tracked what got removed. As they pieced together the censorship mechanisms, they discovered that criticism of leaders or policies wasn’t blocked, but discussions of protests or rallies were. Roberts would later divide online censorship strategies into what she calls the ‘three Fs’: flooding, fear, and friction. By flooding online platforms with opposing views, censors can drown out other messages. The threat of repercussions for rule-breaking leads to fear. And removing or blocking content creates friction by slowing down access to information.[93]
On my first trip to mainland China, I remember trying to connect to WiFi when I arrived at my hotel. It took me a while to work out whether I was actually online. All the apps I’d usually load to check my connection – Google, WhatsApp, Instagram, Twitter, Facebook, Gmail – were blocked. As well as demonstrating the power of the Chinese firewall, it made me realise how much influence US technology firms have. The bulk of my online activity is in the hands of just three companies.
We share a huge amount of information with such platforms. Perhaps the best illustration of just how much data tech companies can collect comes from a 2013 Facebook study.[94] The researchers looked at people who had typed comments on the platform but never posted them. The team noted that the contents of these unposted comments weren’t sent back to Facebook’s servers, just a record of whether someone had started typing. Maybe that was the case for this study. But regardless, it shows the level of detail with which companies can track our online behaviour and interactions. Or even, in this case, a lack of interactions.
Given the power of our social media data, organisations can have a lot to gain by accessing it. According to Carol Davidsen, who worked on the Obama campaign in the 2012 US presidential election, Facebook’s privacy settings at the time made it possible to download the friendship network of everyone who’d agreed to support the campaign on the platform. These friendship connections gave the campaign a huge amount of information. ‘We were actually able to ingest the entire social network of the US that’s on Facebook,’ she later said.[95] Facebook eventually removed this ability to gather friendship data. Davidsen claimed that, because the Republicans had been slow off the mark, the Democrats had information that their opponents didn’t have. Such data analysis didn’t break any rules, but the experience raised questions about how information is collected and who has control of it. ‘Who owns the fact that you and I are friends?’ as Davidsen put it.
At the time, many hailed the Obama campaign’s use of data as innovative.[96] It was a modern method for a new political era. Just as the finance industry had got excited about new mortgage products in the 1990s, social media was seen as something that would change politics for the better. But much like those financial products, it wasn’t an attitude that would last.
‘Hey lovely you gonna vote in the election? & for who?’ In the run-up to the 2017 UK general election, thousands of people looking for a date on the Tinder app got a political chat-up line instead. Londoners Charlotte Goodman and Yara Rodrigues Fowler had wanted to encourage their fellow twenty-somethings to vote for Labour, so they designed a chatbot to reach a wide audience.
Once a volunteer installed the bot, it automatically set their Tinder location to somewhere in a marginal constituency, swiped ‘yes’ to every person, and started chatting to any matches. If the initial message was well received, volunteers could take over and start talking for real. The bot sent over 30,000 messages in total, reaching people who canvassers might not usually talk to. ‘The occasional match was disappointed to be talking to a bot instead of a human, but there was very little negative feedback,’ Goodman and Rodrigues Fowler later wrote. ‘Tinder is too casual a platform for users to feel hoodwinked by some political conversation.’[97]
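Based on that description, here is a minimal sketch of the kind of loop such a canvassing bot might run. It is a runnable stand-in, not the campaigners’ actual code: Tinder has no official public API, so the constituency names and helper functions below are placeholders that simply print the steps described above.

```python
import random

# Everything here is a hypothetical stand-in for whatever interface the
# volunteers' bot used; the helper functions just print each step.

MARGINAL_SEATS = ['Marginal seat A', 'Marginal seat B']  # placeholder names
OPENER = 'Hey lovely you gonna vote in the election? & for who?'

def set_location(seat):
    print(f'Setting profile location to {seat}')

def swipe_yes_on_everyone():
    print('Swiping yes on every nearby profile')

def get_new_matches():
    return ['match_1', 'match_2']  # stand-in for whoever matches back

def send_opener(match):
    # A human volunteer takes over the chat if the reply is friendly.
    print(f'Sending opener to {match}: {OPENER}')

def run_bot():
    set_location(random.choice(MARGINAL_SEATS))  # target a marginal constituency
    swipe_yes_on_everyone()
    for match in get_new_matches():
        send_opener(match)

run_bot()
```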
Bots make it possible to have a vast number of interactions at the same time. With a linked network of bots, people can perform actions at a scale that simply wouldn’t be feasible if a human had to do it all manually. These botnets can consist of thousands, if not millions, of accounts. Like human users, these bots can post content, start conversations, and promote ideas. However, the role of such accounts has come under scrutiny in recent years. In 2016, two votes shook the Western world: in June, Britain voted to leave the EU; in November, Donald Trump won the US presidency. What had caused these events? In the aftermath, speculation grew that false information – much of it created by Russia and far-right groups – had been spread widely during these elections. Vast numbers of people in the UK, and then vast numbers in the US, had been duped by fake stories posted by bots and other questionable accounts.
At first glance, the data seem to support this story. There’s evidence that over 100 million Americans may have seen Facebook posts backed by Russia during the 2016 election. And on Twitter, almost 700,000 people in the US were exposed to Russian-linked propaganda, spread by 50,000 bot accounts.[98] The idea that many voters fell for propaganda posted by fake websites and foreign spies is an appealing narrative, especially for those of us who were politically opposed to Brexit and Trump. But if we look more closely at the evidence, this simple story starts to fall apart.
Russia-linked propaganda did circulate during the 2016 US election, but as Duncan Watts and David Rothschild have pointed out, so did a lot of other content. Facebook users may have been exposed to Russian content, but during that period American users saw over 11 trillion posts on the platform. For every Russian post people were exposed to, on average there were almost 90,000 other pieces of content. Meanwhile on Twitter, less than 0.75 per cent of election-related tweets came from accounts linked with Russia. ‘In sheer numerical terms, the information to which voters were exposed during the election campaign was overwhelmingly produced not by fake news sites or even by alt-right media sources, but by household names,’ noted Watts and Rothschild.[99] Indeed, it’s been estimated that in the first year of his campaign, Trump gained almost $2bn worth of free mainstream media coverage.[100] The pair highlighted the media focus on the Hillary Clinton email controversy as one example of what outlets chose to inform their readers about. ‘In just six days, the New York Times ran as many cover stories about Hillary Clinton’s emails as they did about all policy issues combined in the 69 days leading up to the election.’
Other researchers have reached a similar conclusion about the scale of false news sources in 2016. Brendan Nyhan and his colleagues found that although some US voters consumed a lot of news from dubious websites, these people were in the minority. On average, only 3 per cent of the articles that people viewed were published by websites peddling false stories. They later published a follow-up analysis of the 2018 midterms; the results suggested that dodgy news had an even smaller reach during this election. In the UK, there was also little evidence of Russian content dominating conversations on Twitter or YouTube in the run-up to the EU referendum.[101]
This might seem to suggest that we shouldn’t be concerned about bots and questionable websites, but again it’s not quite that simple. When it comes to online manipulation, it turns out that something much subtler – and far more troubling – has been happening.
Benito Mussolini once said ‘it is better to live one day as a lion than 100 years as a sheep’. According to the Twitter user @ilduce2016, though, the quote actually comes from Donald Trump. Originally created by a pair of journalists at Gawker, this Twitter bot has sent thousands of tweets misattributing Mussolini lines to Trump. Eventually one of the posts caught Trump’s attention: on 28 February 2016, just after the fourth Republican primary, he retweeted the lion quote.[102]
Whereas some social media bots target a mass audience, others have a much narrower range. Known as ‘honey pot bots’, they aim to attract the attention of specific users and lure them into responding.[103] Remember how Twitter cascades often rely on a single ‘broadcast’ event? If you want to get a message to spread, it helps if someone high profile can amplify it for you. Because many outbreaks won’t spark, it also helps to have a bot that can repeatedly try: @ilduce2016 posted over two thousand times before Trump finally retweeted a quote. Bot creators seem to be aware of how powerful this approach can be. When Twitter bots posted dubious content during 2016–17, they disproportionately targeted popular users.[104]
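To see why persistence matters, it helps to run a rough back-of-the-envelope calculation. The per-post probability below is an assumption chosen purely for illustration, not a measured figure; the point is how quickly even small chances accumulate over thousands of attempts.

```python
# Chance that at least one of n posts gets picked up by a high-profile account,
# assuming each post independently has a small probability p of 'sparking'.
# p is an illustrative assumption, not a measured value.
p = 0.001
for n in [100, 500, 2000]:
    at_least_one = 1 - (1 - p) ** n
    print(f'{n} posts: {at_least_one:.0%} chance of at least one pick-up')
# With p = 0.001, a bot that posts two thousand times has roughly an 86% chance
# of being amplified at least once, even though any single post will almost
# certainly be ignored.
```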
It’s not just bots that use this targeting strategy. Following the 2018 shooting at Marjory Stoneman Douglas High School in Parkland, Florida, there were reports that the shooter had been a member of a small white supremacist group based in the state capital Tallahassee. However, the story was a hoax. It had started with trolls on online forums, who’d managed to persuade curious reporters that it was a genuine claim. ‘All it takes is a single article,’ noted one user. ‘And everyone else picks up the story.’[105]
Although researchers like Watts and Nyhan have suggested that people didn’t get much of their information from dubious online sources in 2016, that doesn’t mean such content isn’t a problem. ‘I think it really matters, but it doesn’t quite matter in the way that people think it does,’ said Watts. When fringe groups post false ideas or stories on Twitter, they aren’t necessarily trying to reach mass audiences. Not initially, at least. Instead, they are often targeting those journalists or politicians who spend a lot of time on social media. The hope is that these people will pick up on the idea and spread it to a wider audience. During 2017, for instance, journalists regularly quoted messages from a Twitter user named @wokeluisa, who appeared to be a young political science graduate from New York. In reality, though, the account was run by a Russian troll group, who were apparently targeting media outlets to build credibility and get messages amplified.[106] This is a common tactic among groups who want ideas to spread. ‘Journalists aren’t just part of the game of media manipulation,’ suggested Whitney Phillips, who researches online media at Syracuse University. ‘They’re the trophy.’[107]
Once a media outlet picks up on a story, it can trigger a feedback effect, with others covering it too. A few years ago, I inadvertently experienced this media feedback first hand. It started when I tipped off a journalist at The Times about a mathematical quirk in the new National Lottery (at the time, I’d just written a book about the science of betting). Two days later the story appeared in print. The morning it was published, I got an 8.30am message from a producer at ITV’s This Morning, who’d seen the story. By 10.30am, I was live on national television. Soon after, I received a message from BBC Radio 4; they’d also read the article, and wanted to get me on their flagship lunchtime show. More coverage would follow. I’d end up reaching an audience of millions, all from that one initial story.
My experience was a harmless, if surreal, accident. But others have made a strategic effort to exploit media feedback effects. This is how false information can spread widely, despite the fact that most of the public avoid fringe websites. In essence, it’s a form of information laundering. Just as drug cartels might funnel their money through legitimate businesses to hide its origins, online manipulators will get credible sources to amplify and spread their message, so the wider population will hear the idea from a familiar personality or outlet rather than an anonymous account.
Such laundering makes it possible to influence debate and coverage surrounding an issue. With careful targeting and amplification, manipulators can create the illusion of widespread popularity for specific policies or political candidates. In marketing, this strategy is known as ‘astroturfing’, because it artificially mimics grassroots support. The apparent groundswell makes it harder for journalists and politicians to ignore the story, so eventually it becomes real news.
Of course, media influence isn’t a recent development; it’s long been known that journalists can shape the news cycle. When Evelyn Waugh wrote his 1938 satirical novel Scoop, he included a tale about a star reporter named Wenlock Jakes, who is sent to cover a revolution. Unfortunately, Jakes oversleeps on his train and wakes up in the wrong country. Not realising his mistake, he makes up a story about ‘barricades in the streets, flaming churches, machine guns answering the rattle of his typewriter’. Other journalists, not wanting to be left out, arrive and concoct similar stories. Before long, stocks plummet and the country suffers an economic crash, leading to a state of emergency and finally a revolution.
Waugh’s tale was fictional, but the underlying news feedback he describes still occurs. However, there are some major differences with modern information. One is the speed with which it can spread. Within hours, something can grow from a fringe meme into a mainstream talking point.[108] Another difference is the cost of producing contagion. Bots and fake accounts are fairly cheap to create, and mass amplification by politicians or news sources is essentially free. In some cases, popular false articles can even make money by bringing in advertising revenue. Then there’s the potential for ‘algorithmic manipulation’: if a group can use fake accounts to manufacture the sort of reactions that are valued by social media algorithms – such as comments and likes – they may be able to get a topic trending even if few people are actually talking about it.
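As a rough illustration of that last idea, consider a toy trending score. The weighting below is invented for this sketch; real platforms’ ranking algorithms are far more complex and not public. The point is simply that if an algorithm rewards comments and shares, a coordinated cluster of fake accounts can supply exactly those signals and lift a fringe topic above genuinely popular ones.

```python
# Toy model of 'algorithmic manipulation'. The scoring formula and numbers are
# invented for illustration; no real platform's algorithm is being described.
def trending_score(likes, comments, shares):
    # Assume the (hypothetical) algorithm values comments and shares more than likes.
    return likes + 3 * comments + 5 * shares

organic = trending_score(likes=2000, comments=150, shares=100)  # 2950
fringe = trending_score(likes=300, comments=50, shares=20)      # 550

# A few hundred coordinated fake accounts add comments and shares, the signals
# the toy algorithm weights most heavily.
boosted = trending_score(likes=300, comments=50 + 400, shares=20 + 300)  # 3250

print(organic, fringe, boosted)  # the manipulated topic now outranks the organic one
```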
Given these new tools, what sort of things have people tried to make popular? Since 2016, ‘fake news’ has become a common term to describe manipulative online information. However, it’s not a particularly helpful phrase. Technology researcher Renée DiResta has pointed out that ‘fake news’ can actually refer to several different types of content, including clickbait, conspiracy theories, misinformation, and disinformation. As we’ve seen, clickbait simply tries to entice people to visit a page; the links will often lead to real news articles. In contrast, conspiracy theories tweak real-life stories to include a ‘secret truth’, which may become more exaggerated or elaborate as the theory grows. Then we have misinformation, which DiResta defines as false content that is generally shared by accident. This can include hoaxes and practical jokes, which are created to be deliberately false but are then inadvertently spread by people who believe them to be true.