by Hannah Fry
Problems with privacy
Let’s be honest: Google isn’t exactly short of private, even intimate, information on each of us. But something feels instinctively different – especially confidential – about our medical records.
For anyone with a clean bill of health, it might not be immediately obvious why that should be: after all, if you had to choose between releasing your medical data and your browser history to the world, which would you prefer? I know I’d pick the former in a heartbeat and suspect that many others would too. It’s not that I have anything particularly interesting to hide. But one is just a bland snapshot of my biological self, while the other is a window straight into my character.
Even if healthcare data might have less potential to be embarrassing, Timandra Harkness, author of Big Data: Does Size Matter? and presenter of the BBC Radio 4 programme Future Proofing, argues that it is still a special case.
‘First off, a lot of people’s healthcare data includes a narrative of their life,’ she told me. ‘For instance, one in three women in Britain have had a termination – there might be people in their own lives who don’t know that.’ She also points out that your medical record isn’t just relevant to you. ‘If someone has your genetic data, they also know something about your parents, your siblings, your children.’ And once it’s out, there’s no getting away from it. ‘You cannot change your biology, or deny it. If somebody samples your DNA you can’t change that. You can have plastic surgery on your face, you can wear gloves to cover your fingerprints, but your DNA is always there. It’s always linked to you.’
Why does this matter? Timandra told me about a focus group she’d chaired in 2013, where ordinary people were asked what concerned them most about their medical data. ‘On the whole, people weren’t that worried about their data being hacked or stolen. They were concerned about having assumptions made about them as a group and then projected on to them as individuals.’
Most of all, they were concerned with how their data might be used against them. ‘Suppose someone had linked their supermarket loyalty card to their medical records. They might go for a hip operation, and the doctor would say, “Oh, I’m sorry, we see here that you’ve been buying a lot of pizzas, or you’ve been buying a lot of cigarettes, and therefore I’m afraid we’re going to have to send you to the back of the waiting list.”’
That’s a very sensible fear in the UK, where some cash-strapped NHS hospitals are already prioritizing non-smokers for knee and hip operations.49 And there are many countries around the world where insurance or treatment can be denied to obese patients.50
There’s something of a conundrum here. Humans, as a species, could stand to benefit enormously from having our medical records opened up to algorithms. Watson doesn’t have to remain a fantasy. But to turn it into reality, we’ll need to hand over our records to companies rich enough to drag us through the slog of the challenges that lie between us and that magical electronic doctor. And in giving up our privacy we’ll always be dancing with the danger that our records could be compromised, stolen or used against us. Are you prepared to take that risk? Do you believe in these algorithms and their benefits enough to sacrifice your privacy?
Or, if it comes to it, will you even particularly care?
Genetic giveaway
Francis Galton was a Victorian statistician, a human geneticist, one of the most remarkable men of his generation – and the half-cousin of Charles Darwin. Many of his ideas had a profound effect on modern science, not least his work that essentially laid the foundations for modern statistics. For that, we owe Galton a sincere debt of gratitude. (Unfortunately, he was also active in the burgeoning eugenics movement, for which we certainly do not.)
Galton wanted to study human characteristics through data, and he knew even then that you needed a lot of it to really learn anything of interest. But he also knew that people had an insatiable curiosity about their own bodies. He realized that – when whetted in the right way – people’s voracious appetite for an expert’s assessment of themselves could over-ride their desire for privacy. What’s more, they were often willing to pay for the pleasure of feeding it.
And so, in 1884, when a huge exhibition was held in London under the patronage of Queen Victoria to celebrate the advances Britain had made in healthcare, Galton saw his opportunity. At his own expense, he set up a stand at the exhibition – he called it an ‘Anthropometric Laboratory’ – in the hopes of finding a few people among the millions of visitors who’d want to pay money to be measured.
He found more than a few. Punters queued up outside and thronged at the door, eager to hand over a threepenny bit each to enter the lab. Once inside, they could pit their skills against a series of specially designed instruments, testing, among other things, their keenness of sight, judgement of eye, strength of pull and squeeze, and swiftness of blow.51 Galton’s lab was so popular he had to take two people through at a time. (He quickly noticed that it was best to keep parents and children apart during the testing to avoid any wasted time on bruised egos. Writing in a journal article after the event, he commented: ‘The old did not like to be outdone by the young and insisted on repeated trials.’)52
However well or badly each person performed, their results were scribbled on to a white card that was theirs to keep as a souvenir. But the real winner was Galton. He left the exhibition with a full copy of everything – a valuable set of biometric measurements for 9,337 people – and a handsome profit to boot.
Fast-forward 130 years and you might just recognize some similarities in the current trend for genetic testing. For the bargain price of £149, you can send off a saliva sample to the genomics and biotechnology company 23andMe in exchange for a report of your genetic traits, including answers to questions such as: What kind of earwax do you have? Do you have the genes for a monobrow?53 Or the gene that makes you sneeze when you look at the sun?54 As well as some more serious ones: Are you prone to breast cancer? Do you have a genetic predisposition for Alzheimer’s disease?55
The company, meanwhile, has cleverly amassed a gigantic database of genetic information, now stretching into the millions of samples. It’s just what the internet giants have been doing, except that in handing over our DNA as part of the trade, we’re offering up the most personal data we have. The result is a database we all stand to benefit from. It constitutes a staggeringly valuable asset in terms of its potential to advance our understanding of the human genome. Academics, pharmaceutical companies and non-profits around the world are queuing up to partner with 23andMe to hunt for patterns in its data – both with and without the help of algorithms – in the hope of answering big questions that affect all of us: What are the hereditary causes of different diseases? Are there new drugs that could be invented to treat people with particular conditions? Is there a better way to treat Parkinson’s?
The dataset is also valuable in a much more literal sense. Although the research being done offers an immense benefit to society, 23andMe isn’t doing this out of the goodness of its heart. If you give it your consent (and 80 per cent of customers do), it will sell on an anonymized version of your genetic data to those aforementioned research partners for a tidy profit.56 The money earned isn’t a happy bonus for the company; it’s the business plan. One 23andMe board member told Fast Company: ‘The long game here is not to make money selling kits, although the kits are essential to get the base level data.’ Something worth remembering whenever you send off for a commercial genetic report: you’re not using the product; you are the product.57
A word of warning: I’d also be a bit wary of those promises of anonymity. In 2005 a young man,58 conceived via an anonymous sperm donor, managed to track down and identify his biological father by sending off a saliva swab to be analysed and picking up on clues in his own DNA code.59 Then in 2013 a group of academics, in a particularly famous paper, demonstrated that millions of people could potentially be identified by their genes using nothing more than a home computer and a few clever internet searches.60
And there’s another reason why you might not want your DNA in any dataset. While there are laws in place to protect people from the worst kinds of genetic discrimination – so we’re not quite headed for a future where a Beethoven or a Stephen Hawking will be judged by their genetic predispositions rather than their talent – these rules don’t apply to life insurance. No one can make you take a DNA test if you don’t want to, but in the US, insurers can ask you if you’ve already taken a test that calculates your risk of developing a particular disease such as Parkinson’s, Alzheimer’s or breast cancer and deny you life insurance if the answer isn’t to their liking. And in the UK insurers are allowed to take the results of genetic tests for Huntington’s disease into account (if the cover is over £500,000).61 You could try lying, of course, and pretend you’ve never had the test, but doing so would invalidate your policy. The only way to avoid this kind of discrimination is never to have the test in the first place. Sometimes, ignorance really can be bliss.
The fact is, there’s no dataset that would be more valuable in understanding our health than the sequenced genomes of millions of people. But none the less, I probably won’t be getting a genetic test any time soon. And yet (thankfully for society) millions of people are voluntarily giving up their data. At the last count, 23andMe had more than 2 million genotyped customers,62 while MyHeritage, Ancestry.com – even the National Geographic Genographic project – have millions more. So perhaps this is a conundrum that isn’t. After all, the market has spoken: in a straight swap for your privacy, the chance to contribute to a great public good might not be worth it, but finding out you’re 25 per cent Viking certainly is.fn2
The greatest good?
OK, I’m being facetious. No one can reasonably be expected to keep the grand challenges of the future of human healthcare in the forefront of their minds when deciding whether to send off for a genetic test. Indeed, it’s perfectly understandable that people don’t – we have different sets of priorities for ourselves as individuals and for humankind as a whole.
But this does bring us to a final important point. If a diagnostic machine capable of recommending treatments can be built, who should it serve? The individual or the population? Because there will be times where it may have to choose.
Imagine, for instance, you go into the doctor with a particularly annoying cough. You’ll probably get better on your own, but if a machine were serving you – the patient – it might want to send you for an X-ray and a blood test, just to be on the safe side. And it would probably give you antibiotics too, if you asked for them. Even if they only shortened your suffering by a few days, if the sole objective was your health and comfort, the algorithm might decide it was worth the prescription.
But if a machine were built to serve the entire population, it would be far more mindful of the issues around antibiotic resistance. As long as you weren’t in immediate danger, your temporary discomfort would pale into insignificance and the algorithm would only dole out the drugs when absolutely necessary. An algorithm like this might also be wary of wasting resources or conscious of long waiting lists, and so not send you for further tests unless you had other symptoms of something more serious. Frankly, it would probably tell you to take some aspirin and stop being such a wimp.
Likewise, a machine working on behalf of everyone might prioritize ‘saving as many lives as possible’ as its core objective when deciding who should get an organ transplant. Which might well produce a different treatment plan from a machine that only had your interests in mind.
A machine working for the NHS or an insurer might try to minimize costs wherever possible, while one designed to serve a pharmaceutical company might aim to promote the use of one particular drug rather than another.
The case of medicine is certainly less fraught with tension than the examples from criminal justice. There is no defence and prosecution here. Everyone in the healthcare system is working towards the same goal – getting the patient better. But even here every party in the process has a subtly different set of objectives.
In whatever facet of life an algorithm is introduced, there will always be some kind of a balance. Between privacy and public good. Between the individual and the population. Between different challenges and priorities. It isn’t easy to find a path through the tangle of incentives, even when the clear prize of better healthcare for all is at the end.
But it’s even harder when the competing incentives are hidden from view. When the benefits of an algorithm are over-stated and the risks are obscured. When you have to ask yourself what you’re being told to believe, and who stands to profit from you believing it.
Cars
THE SUN WAS barely above the horizon on the morning of 13 March 2004, but the Slash X saloon bar, in the middle of the Mojave desert, was already thronging with people.1 The bar is on the outskirts of Barstow, a small town between Los Angeles and Las Vegas, near where Uma Thurman was filmed crawling out of a coffin for Kill Bill: Volume 2.2 It’s a place popular with cowboys and off-roaders, but on that spring day it had drawn the attention of another kind of crowd. The makeshift stadium that had been built outside in the dust was packed with crazy engineers, excited spectators and foolhardy petrolheads who all shared a similar dream: to be the first people on earth to witness a driverless car win a race.
The race had been organized by the US Defense Advanced Research Projects Agency, DARPA (nicknamed the Pentagon’s ‘mad science’ division).3 The agency had been interested in unmanned vehicles for a while, and with good reason: roadside bombs and targeted attacks on military vehicles were a major cause of deaths on the battlefield. Earlier that year, it had announced its intention to make one-third of US military ground vehicles autonomous by 2015.4
Up to that point, progress had been slow and expensive. DARPA had spent around half a billion dollars over two decades funding research work at universities and companies in the hope of achieving their ambition.5 But then they had an ingenious idea: why not create a competition? They would openly invite any interested people across the country to design their own driverless cars and race them against each other on a long-distance track, with a prize of $1 million for the winner.6 It would be the first event of its kind in the world, and a quick and cheap way to give DARPA a head start in pursuing their goal.
The course was laid out over 142 miles, and DARPA hadn’t made it easy. There were steep climbs, boulders, dips, gullies, rough terrain and the odd cactus to contend with. The driverless cars would have to navigate dirt tracks that were sometimes only a few feet wide. Two hours before the start, the organizers gave each team a CD of GPS coordinates.7 These represented two thousand waypoints that were sprinkled along the route like breadcrumbs – just enough to give the cars a rough sense of where to go, but not enough to help them navigate the obstacles that lay ahead.
The challenge was daunting, but 106 plucky teams applied in that first year. Fifteen competitors passed the qualifying rounds and were considered safe enough to take on the track. Among them were cars that looked like dune buggies, cars that looked like monster trucks, and cars that looked like tanks. There were rumours that one contender had mortgaged their house to build their car, while another had two surfboards stuck on their vehicle roof to make it stand out. There was even a self-balancing motorbike.8
On the morning of the race, a ramshackle line-up of cars gathered at Slash X along with a few thousand spectators. Without any drivers inside, the vehicles took turns approaching the start, each one looking more like it belonged in Mad Max or Wacky Races than the last. But looks didn’t matter. All they had to do was get around the track, without any human intervention, in less than ten hours.
Things didn’t quite go as planned. One car flipped upside down in the starting area and had to be withdrawn.9 The motorbike barely cleared the start line before it rolled on to its side and was declared out of the race. One car hit a concrete wall 50 yards in. Another got tangled in a barbed-wire fence. Yet another got caught between two tumbleweeds and – thinking they were immovable objects – became trapped reversing back and forth, back and forth, until someone eventually intervened.10 Others still went crashing into boulders and careering into ditches. Axles were snapped, tyres ripped, bodywork went flying.11 The scene around the Slash X saloon bar began to look like a robot graveyard.
The top-scoring vehicle, an entry by Carnegie Mellon University, managed an impressive 7 miles before misjudging a hill – at which point the tyres started spinning and, without a human to help, carried on spinning until they caught fire.12 By 11 a.m. it was all over. A DARPA organizer climbed into a helicopter and flew over to the finish line to inform the waiting journalists that none of the cars would be getting that far.13
The race had been oily, dusty, noisy and destructive – and had ended without a winner. All those teams of people had worked for a year on a creation that had lasted at best a few minutes. But the competition was anything but a disaster. The rivalry had led to an explosion of new ideas, and by the next Grand Challenge in 2005, the technology was barely recognizable.
Second time around, all but one of the entrants surpassed the 7 miles achieved in 2004. An astonishing five different cars managed to complete the full race distance of 132 miles without any human intervention.14
Now, little more than a decade later, it’s widely accepted that the future of transportation is driverless. In late 2017, Philip Hammond, the British Chancellor of the Exchequer, announced the government’s intention to have fully driverless cars – without a safety attendant on board – on British roads by 2021. Daimler has promised driverless cars by 2020,15 Ford by 2021,16 and other manufacturers have made their own, similar forecasts.