by Hannah Fry
Talk in the press has moved on from questioning whether driverless cars will happen to addressing the challenges we’ll face when they do. ‘Should your driverless car hit a pedestrian to save your life?’ asked the New York Times in June 2016;17 and, in November 2017: ‘What happens to roadkill or traffic tickets when our vehicles are in control?’18 Meanwhile, in January 2018, the Financial Times warned: ‘Trucks headed for a driverless future: unions warn that millions of drivers’ jobs will be disrupted.’19
So what changed? How did this technology go from ramshackle incompetence to revolutionary confidence in a few short years? And can we reasonably expect the rapid progress to continue?
What’s around me?
Our dream of a perfect autonomous vehicle dates all the way back to the sci-fi era of jet packs, rocket ships, tin-foil space suits and ray guns. At the 1939 World’s Fair in New York, General Motors unveiled its vision of the future. Visitors to the exhibition strapped themselves into an audio-equipped chair mounted on a conveyor that took them on a 16-minute tour of an imagined world.20 Beneath the glass, they saw a scale model of the GM dream. Superhighways that spanned the length and breadth of the country, roads connecting farmlands and cities, lanes and intersections – and roaming over all of them, automated radio-controlled cars capable of safely travelling at speeds of up to 100 miles an hour. ‘Strange?’ the voiceover asked them. ‘Fantastic? Unbelievable? Remember, this is the world of 1960!’21
There were numerous attempts over the years to make the dream a reality. General Motors tried with the Firebird II in the 1950s.22 British researchers tried adapting a Citroën DS19 to communicate with the road in the 1960s (somewhere between Slough and Reading, you’ll still find a 9-mile stretch of electric cable, left over from their experiments).23 Then came Carnegie Mellon’s ‘Navlab’ in the 1980s and the EU’s $1 billion Eureka Prometheus Project in the 1990s.24 With every new project, the dream of the driverless car seemed, tantalizingly, only just around the corner.
On the surface, building a driverless car sounds as if it should be relatively easy. Most humans manage to master the requisite skills to drive. Plus, there are only two possible outputs: speed and direction. It’s a question of how much gas to apply and how much to turn the wheel. How hard can it be?
But, as the first DARPA Grand Challenge demonstrated, building an autonomous vehicle is actually a lot trickier than it looks. Things quickly get complicated when you’re trying to get an algorithm to control a great big hunk of metal travelling at 60 miles per hour.
Take the neural networks that are used to great effect to detect tumours in breast tissue; you’d think they should be perfectly suited to helping a driverless car ‘see’ its surroundings. By 2004, neural networks (albeit in slightly more rudimentary form than today’s state-of-the-art versions) were already whirring away within prototype driverless vehicles,25 trying to extract meaning from the cameras mounted on top of the cars. There’s certainly a great deal of valuable information to be had from a camera. A neural network can understand the colour, texture, even physical features of the scene ahead – things like lines, curves, edges and angles. The question is: what do you do with that information once you have it?
You could tell the car: ‘Only drive on something that looks like tarmac.’ But that won’t be much good in the desert, where the roads are dusty paths. You could say: ‘Drive on the smoothest thing in the image’ – but, unfortunately, the smoothest thing is almost always the sky or a glass-fronted building. You could think in quite abstract terms about how to describe the shape of a road: ‘Look for an object with two vaguely straight borders. The lines should be wide apart at the bottom of the image and taper in towards each other at the top.’ That seems pretty sensible. Except, unfortunately, it’s also how a tree looks in a photograph. Generally, it isn’t considered wise to encourage a car to drive up a tree.
The issue is that cameras can’t give you a sense of scale or distance. It’s something film directors use to their advantage all the time – think of the opening scene in Star Wars where the Star Destroyer slowly emerges against the inky blackness of space, looming dramatically over the top of the frame. You get the sense of it being a vast, enormous beast, when in reality it was filmed using a model no more than a few feet long. It’s a trick that works well on the big screen. But in a driverless car, when two thin parallel lines could either be a road ahead on the horizon or the trunk of a nearby tree, accurately judging distance becomes a matter of life and death.
Even if you use more than one camera and cleverly combine the images to build a 3D picture of the world around you, there’s another potential problem that comes from relying too heavily on neural networks, as Dean Pomerleau, an academic from Carnegie Mellon University, discovered back in the 1990s. He was working on a car called ALVINN, Autonomous Land Vehicle In a Neural Network, which was trained in how to understand its surroundings from the actions of a human driver. Pomerleau and others would sit at the wheel and take ALVINN on long drives, recording everything they were doing in the process. This formed the training dataset from which their neural networks would learn: drive anywhere a human would, avoid everywhere else.26
It worked brilliantly at first. After training, ALVINN was able to comfortably navigate a simple road on its own. But then ALVINN came across a bridge and it all went wrong. Suddenly, the car swerved dangerously, and Pomerleau had to grab hold of the wheel to save it from crashing.
After weeks of going through the data from the incident, Pomerleau worked out what the issue had been: the roads that ALVINN had been trained on had all had grass running along the sides. Just like those neural networks back in the ‘Medicine’ chapter, which classified huskies on the basis of snow in the pictures, ALVINN’s neural network had used the grass as a key indicator of where to drive. As soon as the grass was gone, the machine had no idea what to do.
Unlike cameras, lasers can measure distance. Vehicles that use a system called LiDAR (Light Detection and Ranging, a technology that came to the fore at the second DARPA Grand Challenge in 2005) fire out pulses of laser light, time how long each takes to bounce off an obstacle and come back, and end up with a good estimate of how far away that obstacle is. It’s not all good news: LiDAR can’t help with texture or colour, it’s hopeless at reading road signs, and it’s not great over long distances. Radar, on the other hand – the same idea but with radio waves – does a good job in all sorts of weather conditions and can detect obstacles far away, even seeing through some materials, but it is completely hopeless at giving any sort of detail of the shape or structure of the obstacle.
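To give a sense of the arithmetic involved (a back-of-the-envelope illustration of my own, not a detail from the text): if c is the speed of light and Δt is the measured round-trip time of the pulse, the distance to the obstacle is

\[ d = \frac{c\,\Delta t}{2} \]

so a pulse that comes back after 200 nanoseconds puts the obstacle roughly 30 metres away.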
On its own, none of these data sources – the camera, the LiDAR, the radar – can do enough to understand what’s going on around a vehicle. The trick to successfully building a driverless car is combining them. Which would be a relatively easy task if they all agreed about what they were actually seeing, but is a great deal more difficult when they don’t.
Consider the tumbleweed that stumped one of the cars in the first DARPA Grand Challenge and imagine your driverless car finds itself in the same position. The LiDAR is telling you there is an obstacle ahead. The camera agrees. The radar, which can pass through the flimsy tumbleweed, is telling you there’s nothing to worry about. Which sensor should your algorithm trust?
What if the camera pulls rank? Imagine a big white truck crosses your path on a cloudy day. This time LiDAR and radar agree that the brakes need to be applied, but against the dull white sky, the camera can see nothing that represents a danger.
If that weren’t hard enough, there’s another problem. You don’t just need to worry about your sensors misinterpreting their surroundings; you need to take into account that they might mis-measure them too.
You may have noticed that blue circle on Google Maps that surrounds your location – it’s there to indicate the potential error in the GPS reading. Sometimes the blue circle will be small and accurately mark your position; at other times it will cover a much larger area and be centred on entirely the wrong place. Most of the time, it doesn’t much matter. We know where we are and can dismiss incorrect information. But a driverless car doesn’t have a ground truth of its position. When it’s driving down a single lane of a motorway, less than 4 metres wide, it can’t rely on GPS alone for an accurate enough diagnosis of where it is.
GPS isn’t the only reading that’s prone to uncertainty. Every measurement taken by the car will have some margin of error: radar readings, the pitch, the roll, the rotations of the wheels, the inertia of the vehicle. Nothing is ever 100 per cent reliable. Plus, different conditions make things worse: rain affects LiDAR;27 glaring sunlight can affect the cameras;28 and long, bumpy drives wreak havoc with accelerometers.29
In the end, you’re left with a big mess of signals. Questions that seemed simple – Where are you? What’s around you? What should you do? – become staggeringly difficult to answer. It’s almost impossible to know what to believe.
Almost impossible. But not quite.
Because, thankfully, there is a route through all of this chaos – a way to make sensible guesses in a messy world. It all comes down to a phenomenally powerful mathematical formula, known as Bayes’ theorem.
The great Church of the Reverend Bayes
It’s no exaggeration to say that Bayes’ theorem is one of the most influential ideas in history. Among scientists, machine-learning experts and statisticians, it commands an almost cultish enthusiasm. Yet at its heart the idea is extraordinarily simple. So simple, in fact, that you might initially think it’s just stating the obvious.
Let me try and illustrate the idea with a particularly trivial example.
Imagine you’re sitting having dinner in a restaurant. At some point during the meal, your companion leans over and whispers that they’ve spotted Lady Gaga eating at the table opposite.
Before having a look for yourself, you’ll no doubt have some sense of how much you believe your friend’s theory. You’ll take into account all of your prior knowledge: perhaps the quality of the establishment, the distance you are from Gaga’s home in Malibu, your friend’s eyesight. That sort of thing. If pushed, it’s a belief that you could put a number on. A probability of sorts.
As you turn to look at the woman, you’ll automatically use each piece of evidence in front of you to update your belief in your friend’s hypothesis. Perhaps the platinum-blonde hair is consistent with what you would expect from Gaga, so your belief goes up. But the fact that she’s sitting on her own with no bodyguards isn’t, so your belief goes down. The point is, each new observation adds to your overall assessment.
This is all Bayes’ theorem does: it offers a systematic way to update your belief in a hypothesis on the basis of the evidence.30 It accepts that you can’t ever be completely certain about the theory you’re considering, but allows you to make a best guess from the information available. So, once you realize the woman at the table opposite is wearing a dress made of meat – a fashion choice that you’re unlikely to chance upon in the non-Gaga population – that might be enough to tip your belief over the threshold and lead you to conclude that it is indeed Lady Gaga in the restaurant.
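For anyone who wants to see the machinery itself, the theorem can be written in one line (a standard statement of the rule, rather than anything spelled out in the text above). If H is the hypothesis – ‘that really is Lady Gaga’ – and E is a new piece of evidence, such as the meat dress, then

\[ P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} \]

that is, your updated belief in the hypothesis is your prior belief, rescaled by how much more likely the evidence is if the hypothesis is true than it is in general.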
But Bayes’ theorem isn’t just an equation for the way humans already make decisions. It’s much more important than that. To quote Sharon Bertsch McGrayne, author of The Theory That Would Not Die: ‘Bayes runs counter to the deeply held conviction that modern science requires objectivity and precision.’31 By providing a mechanism to measure your belief in something, Bayes allows you to draw sensible conclusions from sketchy observations, from messy, incomplete and approximate data – even from ignorance.
Bayes isn’t there just to confirm our existing intuitions. It turns out that being forced to quantify your belief in something often leads to counter-intuitive conclusions. It’s Bayes’ theorem that explains why more men than women are falsely identified as future murderers in the example from the ‘Justice’ chapter. And it’s Bayes’ theorem that explains why – even if you have been diagnosed with breast cancer – the level of error in the tests means you probably don’t have it (see the ‘Medicine’ chapter). Across all branches of science, Bayes is a powerful tool for distilling and understanding what we really know.
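A quick worked example shows just how counter-intuitive this can be. The figures below are illustrative ones of my own choosing, roughly in line with commonly quoted screening statistics rather than anything taken from the ‘Medicine’ chapter: suppose 1 per cent of women screened actually have breast cancer, the test picks up 90 per cent of genuine cases, and it wrongly flags 9 per cent of healthy women. For a woman who receives a positive result, Bayes’ theorem gives

\[ P(\text{cancer} \mid \text{positive}) = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.09 \times 0.99} \approx 0.09 \]

– a little over a 9 per cent chance that the diagnosis is right, despite how alarming it sounds.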
But where the Bayesian way of thinking really comes into its own is when you’re trying to consider more than one hypothesis simultaneously – for example, in attempting to diagnose what’s wrong with a patient on the basis of their symptoms,fn1 or finding the position of a driverless car on the basis of sensor readings. In theory, any disease, any point on the map, could represent the underlying truth. All you need to do is weigh up the evidence to decide which is most likely to be right.
And on that point, finding the location of a driverless car turns out to be rather similar to a problem that puzzled Thomas Bayes, the British Presbyterian minister and talented mathematician after whom the theorem is named. Back in the mid-1700s, he wrote an essay which included details of a game he’d devised to explain the problem. It went a little something like this:32
Imagine you’re sitting with your back to a square table. Without you seeing, I throw a red ball on to the table. Your job is to guess where it landed. It’s not going to be easy: with no information to go on, there’s no real way of knowing where on the table it could be.
So, to help your guess, I throw a second ball of a different colour on to the same table. Your job is still to determine the location of the first ball, the red one, but this time I’ll tell you where the second ball ends up on the table relative to the first: whether it’s in front, behind, to the left or right of the red ball. And you get to update your guess.
Then we repeat. I throw a third, a fourth, a fifth ball on to the table, and every time I’ll tell you where each one lands relative to the very first red one – the one whose position you’re trying to guess.
The more balls I throw and the more information I give you, the clearer the picture of the red ball’s position should become in your mind. You’ll never be absolutely sure of exactly where it sits, but you can keep updating your belief about its position until you end up with an answer you’re confident in.
In some sense, the true position of the driverless car is analogous to that of the red ball. Instead of a person sitting with their back to the table, there’s an algorithm trying to gauge exactly where the car is at that moment in time, and instead of the other balls thrown on to the table there are the data sources: the GPS, the inertia measurements and so on. None of them tells the algorithm where the car is, but each adds a little bit more information the algorithm can use to update its belief. It’s a trick known as probabilistic inference – using the data (plus Bayes) to infer the true position of the object. Packaged up correctly, it’s just another kind of machine-learning algorithm.
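To make this concrete, here is a minimal sketch of that idea in code – a toy, one-dimensional version of the problem, with every sensor, reading and error size invented for illustration rather than drawn from any real system. The belief starts flat (the car could be anywhere along a 100-metre stretch of road) and each noisy reading is folded in using Bayes’ theorem.

```python
# A toy illustration of probabilistic inference for localization.
# All sensors, readings and error sizes below are made-up assumptions.
import numpy as np

positions = np.arange(100)            # candidate positions: 0-99 metres
belief = np.ones(100) / 100           # prior: the car could be anywhere

def likelihood(reading, sigma):
    """How likely each candidate position is, given one noisy reading."""
    return np.exp(-0.5 * ((positions - reading) / sigma) ** 2)

# Each source gives a noisy position estimate with its own typical error.
readings = [
    (52.0, 5.0),   # GPS: roughly 52 m, but only good to about 5 m
    (48.5, 2.0),   # wheel odometry: roughly 48.5 m, good to about 2 m
    (50.2, 1.0),   # laser scan matched to a map: roughly 50.2 m, good to 1 m
]

# Bayes' theorem, applied once per reading:
# posterior is proportional to likelihood x prior, then normalized to sum to 1.
for reading, sigma in readings:
    belief *= likelihood(reading, sigma)
    belief /= belief.sum()

print("Most probable position:", positions[np.argmax(belief)], "metres")
```

No single source is trusted outright: the less reliable GPS nudges the belief only gently, while the sharper laser reading pulls it firmly towards 50 metres – exactly the kind of weighing-up the red-ball game describes.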
By the turn of the millennium, engineers had had enough practice with cruise missiles, rocket ships and aircraft to know how to tackle the position problem. Getting a driverless car to answer the question ‘Where am I?’ still wasn’t trivial, but with a bit of Bayesian thinking it was at least achievable.
Between the robot graveyard of the 2004 Grand Challenge and the awe-inspiring technological triumph of the 2005 event – when five different vehicles managed to race more than 100 miles without any human input – many of the biggest leaps forward were thanks to Bayes. It was algorithms based on Bayesian ideas that helped solve the other questions the car needed to answer: ‘What’s around me?’ and ‘What should I do?’fn2
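As a flavour of how that might work for ‘What’s around me?’, here is another toy sketch – my own illustration, not any manufacturer’s actual code – applying the same Bayesian update to the tumbleweed dilemma from earlier: LiDAR and camera both report an obstacle, radar insists the road is clear, and each sensor’s vote is weighted by how much it can be trusted.

```python
# A toy illustration of weighing up disagreeing sensors with Bayes' theorem.
# The sensor reliabilities and the prior are invented for illustration only.

p_solid = 0.5  # prior belief that there is something solid blocking the road

# For each sensor: P(reports "obstacle" | solid object there) and
# P(reports "obstacle" | nothing solid there), plus what it actually said.
sensors = {
    "lidar":  {"hit_rate": 0.95, "false_alarm": 0.05, "says_obstacle": True},
    "camera": {"hit_rate": 0.80, "false_alarm": 0.10, "says_obstacle": True},
    "radar":  {"hit_rate": 0.99, "false_alarm": 0.02, "says_obstacle": False},
}

for name, s in sensors.items():
    if s["says_obstacle"]:
        like_if_solid, like_if_clear = s["hit_rate"], s["false_alarm"]
    else:
        like_if_solid, like_if_clear = 1 - s["hit_rate"], 1 - s["false_alarm"]
    # Bayes' theorem: fold this sensor's evidence into the belief.
    numerator = like_if_solid * p_solid
    p_solid = numerator / (numerator + like_if_clear * (1 - p_solid))
    print(f"after {name}: belief in a solid obstacle = {p_solid:.2f}")
```

With these made-up numbers, the belief climbs above 0.99 once the LiDAR and camera have had their say, then drops to roughly 0.6 when the radar’s confident ‘all clear’ is taken into account – no sensor pulls rank, but each one shifts the balance of evidence.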
So, should your driverless car hit a pedestrian to save your life?
Let’s pause for a moment to consider the second of those questions. Because, on this very topic, in early autumn 2016, tucked away in a quiet corner of an otherwise bustling exhibition hall at the Paris Auto Show, a Mercedes-Benz spokesperson made a rather exceptional statement. Christoph von Hugo, the manager of driver assistance systems and active safety for the company, was asked in an interview what a driverless Mercedes might do in a crash.
‘If you know you can save at least one person, at least save that one,’ he replied.33
Sensible logic, you would think. Hardly headline news.
Except, Hugo wasn’t being asked about any old crash. He was being tested on his response to a well-worn thought experiment dating back to the 1960s, involving a very particular kind of collision. The interviewer was asking him about a curious conundrum that forces a choice between two evils. It’s known as the trolley problem, after the runaway tram that was the subject of the original formulation. In the case of driverless cars, it goes something like this.
Imagine, some years into the future, you’re a passenger in an autonomous vehicle, happily driving along a city street. Ahead of you a traffic light turns red, but a mechanical failure in your car means you’re unable to stop. A collision is inevitable, but your car has a choice: should it swerve off the road into a concrete wall, causing certain death to anyone inside the vehicle? Or should it carry on going, saving the lives of anyone inside, but killing the pedestrians now crossing the road? What should the car be programmed to do? How do you decide who should die?
No doubt you have your own opinion. Perhaps you think the car should simply try to save as many lives as possible. Or perhaps you think that ‘thou shalt not kill’ should over-ride any calculations, leaving the one sitting in the machine to bear the consequences.fn3
Hugo was clear about the Mercedes position. ‘Save the one in the car.’ He went on: ‘If all you know for sure is that one death can be prevented, then that’s your first priority.’
In the days following the interview, the internet was awash with articles berating Mercedes’ stance. ‘Their cars will act much like the stereotypical entitled European luxury car driver,’34 wrote the author of one piece. Indeed, in a survey published in Science that very summer,35 76 per cent of respondents felt it would be more moral for driverless vehicles to save as many lives as possible, thus killing the people within the car. Mercedes had come down on the wrong side of popular opinion.