Hello World

by Hannah Fry


  But, if the undercover footage recorded by Channel 4 News is to be believed, Cambridge Analytica were also using personality profiles of the electorate to deliver emotionally charged political messages – for example, finding single mothers who score highly on neuroticism and preying on their fear of being attacked in their own home to persuade them into supporting a pro-gun-lobby message. Commercial advertisers have certainly used these techniques extensively, and other political campaigns probably have, too.

  But on top of all that, Cambridge Analytica are accused of creating adverts and dressing them up as journalism. According to one whistleblower’s testimony to the Guardian, one of the most effective ads during the campaign was an interactive graphic titled ‘10 inconvenient truths about the Clinton Foundation’.21 Another whistleblower went further and claimed that the ‘articles’ planted by Cambridge Analytica were often based on demonstrable falsehoods.22

  Let’s assume, for the sake of argument, that all the above is true: Cambridge Analytica served up manipulative fake news stories on Facebook to people based on their psychological profiles. The question is, did it work?

  Micro-manipulation

  There is an asymmetry in how we view the power of targeted political adverts. We like to think of ourselves as independently minded and immune to manipulation, and yet imagine others – particularly those of a different political persuasion – as being fantastically gullible. The reality is probably something in between.

  We do know that the posts we see on Facebook have the power to alter our emotions. A controversial experiment run by Facebook employees in 2013 manipulated the news feeds of 689,003 users without their knowledge (or consent) in an attempt to control their emotions and influence their moods.23 The experimenters suppressed any friends’ posts that contained positive words, and then did the same with those containing negative words, and watched to see how the unsuspecting subjects would react in each case. Users who saw less negative content in their feeds went on to post more positive stuff themselves. Meanwhile, those who had positive posts hidden from their timeline went on to use more negative words themselves. Conclusion: we may think we’re immune to emotional manipulation, but we’re probably not.

  We also know from the Epstein experiment described in the ‘Power’ chapter that just the ordering of pages on a search engine can be enough to tip undecided voters into favouring one candidate over another. We know, too, from the work done by the very academics whose algorithms Cambridge Analytica repurposed, that adverts are more effective if they target personality traits.

  Put together, all this does build a strong argument to suggest that these methods can have an impact on how people vote, just as they do on how people spend their money. But – and it’s quite a big but – there’s something else you need to know before you make your mind up.

  All of the above is true, but the actual effects are tiny. In the Facebook experiment, users were indeed more likely to post positive messages if they were shielded from negative news. But the difference amounted to less than one-tenth of one percentage point.

  Likewise, in the targeted adverts example, the makeup advert aimed at introverts was more successful if it took the person’s character into account, but the difference it made was minuscule. A generic advert got 31 people in 1,000 to click on it. The targeted ad managed 35 in 1,000. Even that figure of 50 per cent improvement that I cited earlier, which is boldly emblazoned across the top of the academic paper, actually refers to an increase from 11 clicks in 1,000 to 16.
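  It’s worth separating the two ways of quoting that difference, because both are true at once: the relative lift sounds impressive, while the absolute lift is a handful of people per thousand. A quick back-of-the-envelope sketch in Python, using only the figures from the paragraph above:

```python
# Absolute vs relative lift for the click-through figures quoted above.

def lift(baseline_per_1000: float, targeted_per_1000: float) -> tuple[float, float]:
    """Return (absolute lift in percentage points, relative lift in per cent)."""
    diff = targeted_per_1000 - baseline_per_1000
    absolute_pp = diff / 10                        # per-1,000 -> percentage points
    relative_pct = diff / baseline_per_1000 * 100  # percentage change on the baseline
    return absolute_pp, relative_pct

print(lift(31, 35))  # (0.4, ~12.9) -- four extra clicks per thousand
print(lift(11, 16))  # (0.5, ~45.5) -- the paper's 'roughly 50 per cent' headline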

  The methods can work, yes. But the advertisers aren’t injecting their messages straight into the minds of a passive audience. We’re not sitting ducks. We’re much better at ignoring advertising or putting our own spin on interpreting propaganda than the people sending those messages would like us to be. In the end, even with the best, most deviously micro-profiled campaigns, only a small amount of influence will leak through to the target.

  And yet, potentially, in an election those tiny slivers of influence might be all you need to swing the balance. In a population of tens or hundreds of millions, those one-in-a-thousand switches can quickly add up. And when you remember that, as Jamie Bartlett pointed out in a piece for the Spectator, Trump won Pennsylvania by 44,000 votes out of six million cast, Wisconsin by 22,000, and Michigan by 11,000, perhaps margins of less than 1 per cent might be all you need.24
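  Whether a few conversions per thousand could really bridge margins like these is easy to sanity-check. A rough sketch using only the Pennsylvania figures quoted above, and a hypothetical sway rate borrowed from the ad experiments – swaying a voter is not the same as flipping a vote, so treat this strictly as an order-of-magnitude check:

```python
# Order-of-magnitude check on the Pennsylvania figures quoted above.
# 'swayed_per_1000' is a hypothetical rate borrowed from the ad experiments,
# not a measured electoral effect.

margin, votes_cast = 44_000, 6_000_000   # Trump's Pennsylvania margin
swayed_per_1000 = 4                      # cf. the 31 -> 35 clicks-per-1,000 lift

print(f'margin as share of votes cast: {margin / votes_cast:.2%}')   # 0.73%
print(f'voters swayed at {swayed_per_1000} per 1,000: '
      f'{votes_cast * swayed_per_1000 // 1000:,}')                   # 24,000
```

  On those loose assumptions, a one-in-250 effect lands in the same order of magnitude as the winning margin – which is exactly why those tiny slivers of influence matter.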

  The fact is, it’s impossible to tell just how much of an effect all this had in the US presidential election. Even if we had access to all of the facts, we can’t look back through time and untangle the sticky web of cause and effect to pinpoint a single reason for anyone’s voting decisions. What has gone has gone. What matters now is where we go in the future.

  Rate me

  It’s important to remember that we’ve all benefited from this model of the internet. All around the world, people have free and easy access to instant global communication networks, the wealth of human knowledge at their fingertips, up-to-the-minute information from across the earth, and unlimited usage of the most remarkable software and technology, built by private companies, paid for by adverts. That was the deal that we made. Free technology in return for your data and the ability to use it to influence and profit from you. The best and worst of capitalism in one simple swap.

  We might decide we’re happy with that deal. And that’s perfectly fine. But if we do, it’s important to be aware of the dangers of collecting this data in the first place. We need to consider where these datasets could lead – even beyond the issues of privacy and the potential to undermine democracy (as if they weren’t bad enough). There is another twist in this dystopian tale: an application for these rich, interconnected datasets that belongs in the popular Netflix show Black Mirror, but exists in reality. It’s known as Sesame Credit, a citizen scoring system used by the Chinese government.

  Imagine every piece of information that a data broker might have on you collapsed down into a single score. Everything goes into it. Your credit history, your mobile phone number, your address – the usual stuff. But all your day-to-day behaviour, too. Your social media posts, the data from your ride-hailing app, even records from your online matchmaking service. The result is a single number between 350 and 950 points.

  Sesame Credit doesn’t disclose the details of its ‘complex’ scoring algorithm. But Li Yingyun, the company’s technology director, did share some examples of what might be inferred from its results in an interview with the Beijing-based Caixin Media. ‘Someone who plays video games for ten hours a day, for example, would be considered an idle person. Someone who frequently buys diapers would be considered as probably a parent, who on balance is more likely to have a sense of responsibility.’25

  If you’re Chinese, these scores matter. If your rating is over 600 points, you can take out a special credit card. Above 666 and you’ll be rewarded with a higher credit limit. Those with scores above 650 can hire a car without a deposit and use a VIP lane at Beijing airport. Anyone over 750 can apply for a fast-tracked visa to Europe.26
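  Whatever goes into the score itself, the reward side described above is mechanically simple: a ladder of thresholds. A toy sketch of just that lookup – the cut-offs are the ones quoted in the paragraph above; the function and everything around it are purely illustrative, since the real system is not public:

```python
# The published Sesame Credit perk thresholds quoted above, as a lookup.
# The scoring algorithm itself is undisclosed: only these cut-offs come
# from the reporting; the surrounding code is a hypothetical illustration.

PERKS = [
    (600, 'special credit card'),
    (650, 'deposit-free car hire and the VIP lane at Beijing airport'),
    (666, 'higher credit limit'),
    (750, 'fast-tracked visa application to Europe'),
]

def perks_for(score: int) -> list[str]:
    """List every perk a score unlocks (scores run from 350 to 950)."""
    return [perk for threshold, perk in PERKS if score > threshold]

print(perks_for(700))   # everything except the fast-tracked visa
```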

  It’s all fun and games now while the scheme is voluntary. But when the citizen scoring system becomes mandatory in 2020, people with low scores stand to feel the repercussions in every aspect of their lives. The government’s own document on the system outlines examples of punishments that could be meted out to anyone deemed disobedient: ‘Restrictions on leaving the borders, restrictions on the purchase of … property, travelling on aircraft, on tourism and holidays or staying in star-ranked hotels.’ It also warns that in the case of ‘gravely trust breaking subjects’ it will ‘guide commercial banks … to limit their provision of loans, sales insurance and other such services’.27 Loyalty is praised. Breaking trust is punished. As Rogier Creemers, an academic specializing in Chinese law and governance at the Van Vollenhoven Institute at Leiden University, puts it: ‘The best way to understand it is as a sort of bastard love child of a loyalty scheme.’28

  I don’t have much comfort to offer in the case of Sesame Credit, but I don’t want to fill you completely with doom and gloom, either. There are glimmers of hope elsewhere. However grim the journey ahead appears, there are signs that the tide is slowly turning. Many in the data science community have known about and objected to the exploitation of people’s information for profit for quite some time. But until the furore over Cambridge Analytica these issues hadn’t drawn sustained, international front-page attention. When that scandal broke in early 2018 the general public saw for the first time how algorithms are silently harvesting their data, and acknowledged that, without oversight or regulation, it could have dramatic repercussions.

  And regulation is coming. If you live in the EU, there has recently been a new piece of legislation called GDPR – General Data Protection Regulation – that should make much of what data brokers are doing illegal. In theory, they will no longer be allowed to store your data without an explicit purpose. They won’t be able to infer information about you without your consent. And they won’t be able to get your permission to collect your data for one reason, and then secretly use it for another. That doesn’t necessarily mean the end of these kinds of practices, however. For one thing, we often don’t pay attention to the T&Cs when we’re clicking around online, so we may find ourselves consenting without realizing. For another, the identification of illegal practices and enforcement of regulations remains tricky in a world where most data analysis and transfer happens in the shadows. We’ll have to wait and see how this unfolds.

  Europeans are the lucky ones, but there are those pushing for regulation in America, too. The Federal Trade Commission published a report condemning the murky practices of data brokers back in 2014, and since then has been actively pushing for more consumer rights. Apple has now built ‘intelligent tracking prevention’ into the Safari browser. Firefox has done the same. Facebook is severing ties with its data brokers. Argentina and Brazil, South Korea and many more countries have all pushed through GDPR-like legislation. Europe might be ahead of the curve, but there is a global trend that is heading in the right direction.

  If data is the new gold, then we’ve been living in the Wild West. But I’m optimistic that – for many of us – the worst will soon be behind us.

  Still, we do well to remember that there’s no such thing as a free lunch. As the law catches up and the battle between corporate profits and social good plays out, we need to be careful not to be lulled into a false sense of privacy. Whenever we use an algorithm – especially a free one – we need to ask ourselves about the hidden incentives. Why is this app giving me all this stuff for free? What is this algorithm really doing? Is this a trade I’m comfortable with? Would I be better off without it?

  That is a lesson that applies well beyond the virtual realm, because the reach of these kinds of calculations now extends into virtually every aspect of society. Data and algorithms don’t just have the power to predict our shopping habits. They also have the power to rob someone of their freedom.

  Justice

  IT’S NOT UNUSUAL to find good-natured revellers drinking on a summer Sunday evening in the streets of Brixton, where our next story begins. Brixton, in south London, has a reputation as a good place to go for a night out; on this particular evening, a music festival had just finished and the area was filled with people merrily making their way home, or carrying on the party. But at 11.30 p.m. the mood shifted. A fight broke out on a local council estate, and when police failed to contain the trouble, it quickly spilled into the centre of Brixton, where hundreds of young people joined in.

  This was August 2011. The night before, on the other side of the city, an initially peaceful protest over the shooting by police of a young Tottenham man named Mark Duggan had turned violent. Now, for the second night in a row, areas of the city were descending into chaos – and this time, the atmosphere was different. What had begun as a local demonstration was now a widespread breakdown in law and order and a looting free-for-all.

  Just as the rioting took hold, Nicholas Robinson, a 23-year-old electrical engineering student who had spent the weekend at his girlfriend’s, headed home, taking his usual short walk through Brixton.1 By now, the familiar streets were practically unrecognizable. Cars had been upturned, windows had been smashed, fires had been started, and all along the street shops had been broken into.2 Police had been desperately trying to calm the situation, but were powerless to stop the cars and scooters pulling up alongside the smashed shop fronts and loading up with stolen clothes, shoes, laptops and TVs. Brixton was completely out of control.

  A few streets away from an electrical store that was being thoroughly ransacked, Nicholas Robinson walked past his local supermarket. Like almost every other shop, it was a wreck: the glass windows and doors had been broken, and the shelves inside were strewn with the mess from the looters. Streams of rioters were running past holding on to their brand-new laptops, unchallenged by police officers. Amid the chaos, feeling thirsty, Nicholas walked into the store and helped himself to a £3.50 case of bottled water. Just as he rounded the corner to leave, the police entered the supermarket. Nicholas immediately realized what he had done, dropped the case and tried to run.3

  As Monday night rolled in, the country braced itself for further riots. Sure enough, that night looters took to the streets again.4 Among them was 18-year-old Richard Johnson. Intrigued by what he’d seen on the news, he grabbed a (distinctly un-summery) balaclava, jumped into a car and made his way to the local shopping centre. With his face concealed, Richard ran into the town’s gaming store, grabbed a haul of computer games and returned to the car.5 Unfortunately for Richard, he had parked in full view of a CCTV camera. The registration plate made it easy for police to track him down, and the recorded evidence made it easy to bring a case against him.6

  Both Richard Johnson and Nicholas Robinson were arrested for their actions in the riots of 2011. Both were charged with burglary. Both stood before judges. Both pleaded guilty. But that’s where the similarities in their cases end.

  Nicholas Robinson was first to be called to the dock, appearing before a judge at Camberwell Magistrates’ Court less than a week after the incident. Despite the low value of the bottled water he had stolen, despite his lack of a criminal record, his being in full-time education, and his telling the court he was ashamed of himself, the judge said his actions had contributed to the atmosphere of lawlessness in Brixton that night. And so, to gasps from his family in the public gallery, Nicholas Robinson was sentenced to six months in prison.7

  Johnson’s case appeared in front of a judge in January 2012. Although he’d gone out wearing an item of clothing designed to hide his identity with the deliberate intention of looting, and although he too had played a role in aggravating public disorder, Johnson escaped jail entirely. He was given a suspended sentence and ordered to perform two hundred hours of unpaid work.8

  The consistency conundrum

  The judicial system knows it’s not perfect, and it doesn’t claim to be. Judging guilt and assigning punishment isn’t an exact science, and there’s no way a judge can guarantee precision. That’s why phrases such as ‘reasonable doubt’ and ‘substantial grounds’ are so fundamental to the legal vocabulary, and why appeals are such an important part of the process; the system accepts that absolute certainty is unachievable.

  Even so, discrepancies in the treatment of some defendants – like Nicholas Robinson and Richard Johnson – do seem unjust. There are too many factors involved ever to say for certain that a difference in sentence is ‘unfair’, but within reason, you would hope that judges were broadly consistent in the way they made decisions. If you and your imaginary twin committed an identical crime, for instance, you would hope that a court would give you both the same sentence. But would it?

  In the 1970s, a group of American researchers tried to answer a version of this question.9 Rather than using twin criminals (which is practically difficult and ethically undesirable) they created a series of hypothetical cases and independently asked 47 Virginia state district court judges how they would deal with each. Here’s an example from the study for you to try. How would you handle the following case?

  An 18-year-old female defendant was apprehended for possession of marijuana, arrested with her boyfriend and seven other acquaintances. There was evidence of a substantial amount of smoked and un-smoked marijuana found, but no marijuana was discovered directly on her person. She had no previous criminal record, was a good student from a middle-class home and was neither rebellious nor apologetic for her actions.

  The differences in judgments were dramatic. Of the 47 judges, 29 decreed the defendant not guilty and 18 declared her guilty. Of those who opted for a guilty verdict, eight recommended probation, four thought a fine was the best way to go, three would issue both probation and a fine, and three judges were in favour of sending the defendant to prison.

 
