The Best Australian Essays 2017


by Anna Goldsworthy


  In 2012, Facebook supported an experiment in which researchers manipulated the News Feed of almost 700,000 users to find out if they could alter people’s emotional states. By hiding certain words, the researchers discovered, unsurprisingly, that they could. ‘We show,’ they announced, ‘via a massive experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness.’ The experiment was done without user knowledge or consent. When it became the subject of controversy in 2014, the researchers first claimed that they did have people’s consent, because the experiment was ‘consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research’. The Facebook data scientist who led the research claimed it was carried out ‘because we care about the emotional impact of Facebook and the people that use our product’. Finally the company’s chief technology officer apologised, adding that the company had been ‘unprepared’ for the anger it stirred up, which suggests that perhaps it was the backlash rather than the experiment itself that caused remorse.

  In 2015, it was revealed that Facebook tracks the web browsing of everyone who visits a page on its site, even if the user doesn’t have an account or has explicitly opted out of tracking, and even after a user has logged out.

  The Guardian reported on research commissioned by a Belgian data protection agency, which argued that Facebook’s data collection processes were unlawful.

  ‘European legislation is really quite clear on this point. To be legally valid, an individual’s consent towards online behavioural advertising must be opt-in,’ explained Brendan Van Alsenoy, one of the report’s authors. ‘Facebook cannot rely on users’ inaction to infer consent. As far as non-users are concerned, Facebook really has no legal basis whatsoever to justify its current tracking practices.’

  In May this year, European regulators announced that Facebook was breaking data privacy laws in France, Belgium and the Netherlands, and faced investigations in Spain and Germany. French regulator CNIL announced that it was applying the maximum fine allowed under French privacy law when its investigation began: a grand total of €150,000. CNIL had last year issued an order that Facebook stop tracking non-users’ web activity without their consent, and stop some transfers of personal data to the US.

  ‘We take note of the CNIL’s decision with which we respectfully disagree,’ replied Facebook. It has argued that it should only be subject to rulings from the Irish data protection authority because its European headquarters are in Dublin. In Europe, though, new personal data protection regulations will come into force mid next year, potentially allowing regulators to impose fines of up to 4 per cent of Facebook’s revenues.

  Also in May, the Australian uncovered a document outlining how the social network can pinpoint ‘moments when young people need a confidence boost’. By monitoring posts, pictures, interactions and internet activity in real time, Facebook can determine when people as young as fourteen feel ‘stressed’, ‘defeated’, ‘overwhelmed’, ‘anxious’, ‘nervous’, ‘stupid’, ‘silly’, ‘useless’ and a ‘failure’. The confidential presentation was intended to show how well Facebook knows its users, and by implication how willing it is to use this knowledge on behalf of advertisers. Privacy laws in Australia are nowhere near as stringent as in Europe, and not enforced with any great vigour.

  In the US, the Trump administration recently repealed data protection rules, meaning browser histories could be sold to advertisers without user consent. According to research from Princeton University published last year, Google and Facebook together own all of the top ten third-party data collectors.

  Not that any of this has so far caused any great public outcry, either here or in the States, perhaps because it all appears to be in the service of giving people exactly what they want. Nothing seems to interest the public less than debates about privacy laws and metadata collection. Until recently, it didn’t seem to be a major issue.

  *

  In June 2007, David Stillwell, a PhD student at the University of Cambridge, created a new Facebook app called myPersonality. Volunteer users filled out different psychometric questionnaires, including a handful of psychological questions, and in return received a ‘personality profile’. They could also opt to share their Facebook profile data with the researchers. Stillwell was soon joined by another researcher, Michal Kosinski, and their project took off. People were happy to share intimate details, their likes and dislikes (both online and off), their age, marital status and place of residence. Before long, the two doctoral candidates owned the largest ever dataset combining Facebook profiles and psychometric scores.

  In 2012, wrote Hannes Grassegger and Mikael Krogerus in an article for Das Magazin and Motherboard, Kosinski proved that, on the basis of an average of sixty-eight Facebook ‘likes’ by a user, it was possible to predict their skin colour (with 95 per cent accuracy), their sexual orientation (88 per cent accuracy), and their affiliation to the Democratic or Republican party (85 per cent) … Seventy ‘likes’ were enough to outdo what a person’s friends knew, 150 what their parents knew, and 300 ‘likes’ what their partner knew. More ‘likes’ could even surpass what a person thought they knew about themselves.
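
  To see how unremarkable the machinery is, consider a minimal sketch in Python: a plain logistic regression over a binary user-by-page matrix of ‘likes’. Everything in it is synthetic and illustrative – it is not Kosinski’s code, data or method – but it shows that ‘likes’ are simply an ordinary feature matrix for an ordinary classifier.

    # Toy sketch only: synthetic 'likes' and a plain logistic regression,
    # standing in for the more sophisticated models Kosinski used.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_users, n_pages = 2000, 500

    # Binary user-by-page matrix: likes[u, p] == 1 if user u liked page p.
    likes = rng.integers(0, 2, size=(n_users, n_pages))

    # Hypothetical binary trait (say, party affiliation), loosely driven
    # by a subset of pages so the model has something to find.
    signal = likes[:, :40].sum(axis=1)
    trait = (signal + rng.normal(0, 2, n_users) > signal.mean()).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        likes, trait, test_size=0.2, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f'held-out accuracy: {model.score(X_test, y_test):.2f}')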

  On the day Kosinski published these findings, ‘he received two phone calls’, reported Grassegger and Krogerus. ‘The threat of a lawsuit and a job offer. Both from Facebook.’

  Shortly afterwards, Facebook made ‘likes’ private by default. The personal information users put on Facebook had always been owned by the company, to analyse or sell, but what it was worth was only just becoming clear. (It has a long history of changing privacy settings without much notice or explanation.)

  Facebook wasn’t the only one to register the potential of this tool. A young assistant professor from the Cambridge psychology department, Aleksandr Kogan, soon approached Kosinski on behalf of another company that was interested in the myPersonality database. Kogan initially refused to divulge the name of this company, or why and how it planned to use the information, but eventually revealed it was Strategic Communication Laboratories. SCL is a communications group whose ‘election management agency’ does marketing based on psychological modelling; its offshoots, one of which was named Cambridge Analytica, had been involved in dozens of election campaigns around the world. The company’s ownership structure was opaque, and Kosinski, who by this stage had become deeply suspicious of its motives, eventually broke off contact.

  Kosinski was therefore dismayed, if not altogether surprised, to learn of Cambridge Analytica’s role in last year’s election of Donald Trump.

  ‘We are thrilled that our revolutionary approach to data-driven communication has played such an integral part in President-elect Trump’s extraordinary win,’ said Cambridge Analytica’s forty-one-year-old CEO, Alexander Nix, in a press release.

  In September 2016, speaking at the Concordia Summit in New York, Nix had explained how Cambridge Analytica (legally) acquires massive amounts of personal information – from shopping data to bonus cards, club memberships to land registries, along with Facebook information and other online data – and combines it, via identifiers such as phone numbers and addresses, with party electoral rolls to build personality profiles.
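
  The mechanics of such combination are mundane record linkage. The sketch below uses invented toy data to illustrate the kind of join Nix described – commercial and online data matched onto an electoral roll via a shared identifier – and is not any real Cambridge Analytica pipeline; every name and column in it is made up.

    # Invented toy data standing in for an electoral roll, a loyalty-card
    # extract and harvested social profiles. Illustration only.
    import pandas as pd

    voters = pd.DataFrame({'phone': ['0401', '0402', '0403'],
                           'party_lean': ['R', 'D', None]})
    loyalty = pd.DataFrame({'phone': ['0401', '0403'],
                            'spend_category': ['outdoors', 'organic_food']})
    social = pd.DataFrame({'phone': ['0402', '0403'],
                           'top_like': ['news_page', 'yoga']})

    # Left-join everything onto the voter file: one enriched row per voter.
    profiles = (voters
                .merge(loyalty, on='phone', how='left')
                .merge(social, on='phone', how='left'))
    print(profiles)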

  ‘We have profiled the personality of every adult in the United States of America – 220 million people,’ Nix boasted.

  According to Mattathias Schwartz, writing in the Intercept, Kogan and another SCL affiliate paid 100,000 people a dollar or two to fill out an online survey and download an app that gave the researchers access to the profiles of the participants’ unwitting Facebook friends, including their ‘likes’ and contact lists. Data was also obtained from a further 185,000 survey participants via a different unnamed company, yielding thirty million usable profiles. No-one in this larger group of thirty million knew that their Facebook profile was being harvested.

  It doesn’t take a great deal of imagination to see how useful this could be to the Trump campaign. As Grassegger and Krogerus reported:

  On the day of the third presidential debate between Trump and Clinton, Trump’s team tested 175,000 different ad variations for his arguments, in order to find the right versions above all via Facebook. The messages differed for the most part only in microscopic details, in order to target the recipients in the optimal psychological way: different headings, colours, captions, with a photo or video.

  The Trump campaign, heavily outspent by the Clinton campaign in television, radio and print, relied almost entirely on a digital marketing strategy.

  ‘We can address villages or apartment blocks in a targeted way,’ Nix claimed. ‘Even individuals.’ Advertising messages could be tailored, for instance, to poor and angry white people with racist tendencies, living in rust-belt districts. These would be invisible to anyone but the end user, leaving the process open to abuse.

  In February, the communications director of Brexit’s Leave.EU campaign team revealed the role Cambridge Analytica had played. The company, reported the Guardian, ‘had taught [the campaign] how to build profiles, how to target people and how to scoop up masses of data from people’s Facebook profiles’. The official Vote Leave campaign, Leave.EU’s rival, reportedly spent 98 per cent of its £6.8 million budget on digital media (and most of that on Facebook).

  Trump’s chief strategist, Stephen Bannon, was once a board member of Cambridge Analytica. The company is owned in large part by Robert Mercer (up to 90 per cent, according to the Guardian), whose money enabled Bannon to fund the right-wing news site Breitbart, and who funds climate-change denial think tank the Heartland Institute.

  The critical point is not that wealthy conservatives may be manipulating politics – this is hardly new – but that politics has become so vulnerable to covert manipulation, on a scale never before experienced.

  There is good reason for the strict regulations around the world on the use and abuse of the media in election campaigns, yet governments have almost completely abdicated responsibility when it comes to social media.

  According to Labor senator Sam Dastyari, chair of the future of public interest journalism inquiry, Australia’s security agencies ‘are very clear that [deliberately misleading news and information] is a real and serious threat … We would be very naive to believe it’s not going to happen here.’

  *

  A BuzzFeed News analysis found that in the three months before the US election the top twenty fake-news stories on Facebook generated more engagement (shares, reactions and comments) than the top twenty real-news stories. The Pope endorsed Donald Trump. An FBI agent suspected of leaking Hillary Clinton’s email was FOUND DEAD IN APPARENT MURDER-SUICIDE. In other news, WikiLeaks confirmed that Clinton had sold weapons to ISIS, and Donald Trump dispatched his personal plane to save 200 starving marines.

  Facebook’s algorithm, designed to engage people, had simply given Americans what they wanted to read.

  The criticism was heated and widespread, prompting Mark Zuckerberg’s ‘Building Global Community’ essay.

  Sure, there are ‘areas where technology and social media can contribute to divisiveness and isolation’, Zuckerberg wrote, and there are ‘people left behind by globalization, and movements for withdrawing from global connection’, but his answer to these problems was consistent and uniform: people need to be more connected (on Facebook). His promises to build a better network – to counter misinformation, for example, in a veiled reference to the US election campaign, or filter out abuse – rely to some extent on our goodwill and credulity. We’re denied access to Facebook’s internal workings, and that’s as Zuckerberg intends it. Which is within his right as chairman and CEO of a business. But a network this large, this influential, this secretive is more than a business. In many ways, it’s a test of our belief in the market.

  Promises to fix problems ranged from introducing a different system for flagging false content to working with outside fact-checking outfits. Perhaps changes were also made to the News Feed algorithm, but if so they remained confidential. How would the new community standards be applied? Would Facebook ever make changes that crimp its business prospects? Does it accept that social obligations come with such editorial decisions? All this remained obscure, notwithstanding a mess of corporate nonsense posted by both Zuckerberg and company PR figures.

  The News Feed algorithm works like this: in order to engage you, it chooses the ‘best’ content out of several thousand potential stories that could appear in your feed each day. The stories are ranked in order of perceived importance to you (Your best friend’s having a party! Trump has bombed North Korea again!), and the News Feed prioritises stories you’ll like, comment on, share, click on, and spend time reading. It recognises who posted things and their proximity to you, how other people responded to the post, what type of post it is and when it was posted.
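
  Stripped of scale and secrecy, that ranking step is easy to caricature. In the Python sketch below, every signal, weight and field name is an assumption of mine rather than anything Facebook has disclosed; the real model is proprietary and vastly more complex.

    # Illustrative weighted-sum feed ranker; the signals and weights
    # are invented, not Facebook's.
    from dataclasses import dataclass

    @dataclass
    class Story:
        author_affinity: float  # how close the poster is to you (0-1)
        p_click: float          # predicted probability you click
        p_comment: float        # predicted probability you comment
        p_share: float          # predicted probability you share
        hours_old: float        # age of the post
        is_video: bool          # richer media tends to be boosted

    def score(s: Story) -> float:
        engagement = 1.0 * s.p_click + 1.5 * s.p_comment + 2.0 * s.p_share
        recency = 1.0 / (1.0 + s.hours_old / 24.0)  # decays over a day
        media_boost = 1.2 if s.is_video else 1.0
        return s.author_affinity * engagement * recency * media_boost

    def rank_feed(candidates: list[Story], top_k: int = 20) -> list[Story]:
        """Pick the top_k 'best' stories from thousands of candidates."""
        return sorted(candidates, key=score, reverse=True)[:top_k]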

  As TechCrunch writer Josh Constine puts it, ‘The more engaging the content … the better it can accomplish its mission of connecting people while also earning revenue from ads shown in News Feed.’

  Over time, as millions have joined Facebook, the number of potential posts that might populate a feed has multiplied, so the algorithm has become not only increasingly necessary to prevent users from drowning in ‘content’ but also increasingly subject to human design.

  *

  Despite this, Facebook insists it’s not a media organisation. It’s a technology company and a neutral platform for other people’s content. It is certainly true that it piggybacks on other companies’ content. But it is also constantly testing, surveying and altering its algorithms, and the changes have vast effects. Kurt Gessler, deputy editor for digital news at the Chicago Tribune, started noticing significant changes in January, and three months later wrote a post about them, titled ‘Facebook’s algorithm isn’t surfacing one-third of our posts. And it’s getting worse’, on Medium. The Tribune’s number of posts hadn’t changed over time, nor had the type of posts. The newspaper had a steadily rising number of Facebook fans but the average post reach had fallen precipitously. ‘So,’ he asked, ‘is anyone else experiencing this situation, and if so, does anyone know why and how to compensate? Because if 1 of 3 Facebook posts isn’t going to be surfaced by the algorithm to a significant degree, that would change how we play the game.’

  Facebook has made it clear that it has been increasingly giving priority to videos in its News Feed. Videos and mobile ads are, not coincidentally, the very things driving Facebook’s revenue growth. It has also been rewarding publishers that post directly to Facebook instead of posting links back to their own sites. None of which bodes well for the Chicago Tribune.

  There is one way to guarantee your articles will be surfaced by Facebook: by paying Facebook. As every social media editor knows, ‘boosting’ a post with dollars is the surest way to push it up the News Feed. Fairfax Media chief executive Greg Hywood and HuffPost Australia editor-in-chief Tory Maguire pointed out to the Senate’s future of public interest journalism inquiry that even the ABC pays Google and Facebook to promote its content. ‘Traffic is dollars,’ said Hywood, ‘and if the ABC takes traffic from us by using taxpayers’ money to drive that traffic, it’s using taxpayers’ money to disadvantage commercial media organisations.’

  ‘This is normal marketing behaviour in the digital space,’ replied the ABC, ‘and is critical to ensuring audiences find relevant content. It is [also] used by other public broadcasters like the BBC and CBC.’ As well as, needless to say, thousands of other media organisations, including Fairfax and Schwartz Media.

  This is what passes for normal marketing behaviour in 2017: news organisations, haemorrhaging under the costs of producing news while losing advertising, are paying the very outfits that are killing them. Could there be a more direct expression of the twisted relationship between them? Could the power balance be any more skewed?

  Mark Thompson, CEO of the New York Times Company, recently put it like this: ‘Advertising revenue goes principally to those who control platforms.’ Over time, this will mean they also control the fate of most news organisations. So it is somewhat troubling that one of the few ways to keep a check on the power of Facebook is by maintaining a robust fourth estate.

  *

  It was on social media that I stumbled across Rachel Baxendale’s Australian article about the anti-Semitic abuse directed at Ariella on social media. It was also from social media that I acquired Baxendale’s contact details, to ask her about the article.

  Baxendale had heard of the story through a contact, Dr Dvir Abramovich, chairman of the B’nai B’rith Anti-Defamation Commission. Abramovich had verified the story himself, and Baxendale then went back and forth with the schoolgirl over several days, checking details, looking at the screenshots of the abuse, discussing whether to use a pseudonym and so forth. Baxendale had explained the story to her bureau chief, and it then went to the Australian’s main editorial team in Sydney for approval. It was run past the legal team and then subeditors, and Facebook was approached for comment. All of this is routine at a newspaper. If anyone has a complaint, it can be taken to the Australian Press Council, which will study it impartially before making a public ruling. Or readers can, of course, get in contact with the journalist in question or her editors.

  By contrast, Facebook has a single email address for all global media enquiries, and its moderators had only moments to deal with the matter of Ariella’s abuse. There were fifty pages of screenshots.

  The Guardian reported in May it had seen more than 100 internal training manuals, spreadsheets and flowcharts used by Facebook in moderating controversial user posts. The Guardian also revealed that for almost two billion users and more than 100 million pieces of content to review per month (according to Zuckerberg) there were just 4500 moderators. That is one for every 440,000 users or more. Most work for subcontractors around the world; Facebook won’t divulge where. They are trained for two weeks, paid little, and often have ‘just 10 seconds’ to cast judgement on issues involving child abuse, suicide, animal cruelty, racial discrimination, revenge porn and terrorist threats, and must balance these against the desire to respect freedom of speech. Fake news? Fact-checking would be impossible. To help them, the moderators are provided with instruction manuals, which contain guidelines for dealing with matters from threats and specific issues to live broadcasts and image censorship. Facebook formulates country-specific materials to comply with national laws, and Zuckerberg often refers to the company’s attempts to follow community standards, but he really means Facebook’s Community Standards, which it determines. (‘You can host neo-Nazi content on Facebook but you can’t show a nipple’ is Scott Ludlam’s shorthand characterisation of these standards.)

 
