Hello World


by Hannah Fry


  Or had they? Because when the same study asked participants if they would actually buy a car which would murder them if the circumstances arose, they suddenly seemed reluctant to sacrifice themselves for the greater good.

  This is a conundrum that divides opinion – and not just in what people think the answer should be. As a thought experiment, it remains a firm favourite of technology reporters and other journalists, but all the driverless car experts I interviewed rolled their eyes as soon as the trolley problem was mentioned. Personally, I still have a soft spot for it. Its simplicity forces us to recognize something important about driverless cars, to challenge how we feel about an algorithm making a value judgement on our own, and others’, lives. At the heart of this new technology – as with almost all algorithms – are questions about power, expectation, control, and delegation of responsibility. And about whether we can expect our technology to fit in with us, rather than the other way around. But I’m also sympathetic to the aloof reaction it receives in the driverless car community. They, more than anyone, know how far away we are from having to worry about the trolley problem as a reality.

  Breaking the rules of the road

  Bayes’ theorem and the power of probability have driven much of the innovation in autonomous vehicles ever since the DARPA challenge. I asked Paul Newman, professor of robotics at the University of Oxford and founder of Oxbotica, a company that builds driverless cars and tests them on the streets of Britain, how his latest autonomous vehicles worked, and he explained as follows: ‘It’s many, many millions of lines of code, but I could frame the entire thing as probabilistic inference. All of it.’36
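  To get a feel for what that means in practice, here is a toy sketch of probabilistic inference in code – my own illustration, not Oxbotica's system. The car holds a belief about where an obstacle might be, takes a noisy sensor reading, and uses Bayes' theorem to combine the two:

```python
# A toy example of Bayesian inference, in the spirit of Newman's
# description -- not real driverless-car code. The car's belief over
# five possible obstacle positions is updated with one noisy reading.

def bayes_update(prior, likelihood):
    """Posterior is proportional to prior x likelihood, renormalized."""
    unnormalized = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Initially the car has no idea: a uniform prior over five positions.
prior = [0.2, 0.2, 0.2, 0.2, 0.2]

# Hypothetical sensor model: the reading points to position 3,
# but with some chance it really came from a neighbouring position.
likelihood = [0.05, 0.10, 0.60, 0.20, 0.05]

posterior = bayes_update(prior, likelihood)
print(posterior)  # belief now concentrates around position 3
```

  Repeat that kind of update many times a second, across every sensor and every hypothesis about the world, and you have the flavour of what Newman means by framing 'the entire thing as probabilistic inference'.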

  But while Bayesian inference goes some way towards explaining how driverless cars are possible, it also explains how full autonomy, free from any input by a human driver, is a very, very difficult nut to crack.

  Imagine, Paul Newman suggests, ‘you’ve got two vehicles approaching each other at speed’ – say, travelling in different directions down a gently curved A-road. A human driver will be perfectly comfortable in that scenario, knowing that the other car will stick to its own lane and pass safely a couple of metres to the side. ‘But for the longest time,’ Newman explains, ‘it does look like you’re going to hit each other.’ How do you teach a driverless car not to panic in that situation? You don’t want the vehicle to drive off the side of the road, trying to avoid a collision that was never going to happen. But, equally, you don’t want it to be complacent if you really do find yourself on the verge of a head-on crash. Remember, too, that these cars are only ever making educated guesses about what to do. How do you get them to guess right every single time? That, says Newman, ‘is a hard, hard problem’.

  It’s a problem that puzzled the experts for a long time, but it does have a solution. The trick is to build in a model for how other – sane – drivers will behave, as the sketch below illustrates.
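  One way to read that 'model of sane drivers' is as a Bayesian prior over the other car's behaviour. On the curved A-road, the raw geometry looks alarming either way; it's the prior that keeps the car calm. A minimal sketch, with invented numbers:

```python
# The curved A-road scenario as a Bayesian calculation -- illustrative
# numbers only. The oncoming car *looks* like it's on a collision
# course, but a strong prior that drivers keep to their lane means the
# posterior probability of a real head-on crash stays tiny.

prior_swerve = 0.001       # sane drivers almost never cross the line
prior_keep_lane = 0.999

# On a gentle curve, an apparent collision course is roughly what
# you'd observe in either case (hypothetical likelihoods).
p_looks_headon_if_swerve = 0.95
p_looks_headon_if_keep_lane = 0.90

# Bayes' theorem: P(swerve | looks head-on)
evidence = (p_looks_headon_if_swerve * prior_swerve
            + p_looks_headon_if_keep_lane * prior_keep_lane)
posterior_swerve = p_looks_headon_if_swerve * prior_swerve / evidence

print(f"P(real head-on) = {posterior_swerve:.4f}")  # about 0.001

if posterior_swerve > 0.5:   # arbitrary emergency threshold
    print("Take evasive action")
```

  The same observation with a weaker prior – a driver already weaving across the line, say – would push the posterior up and justify braking. The fix, in other words, is a model of other drivers, not better geometry. Unfortunately, the same trick can’t be applied to other, more nuanced driving scenarios.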

  Newman explains: ‘What’s hard is all the problems with driving that have nothing to do with driving.’ For instance, teaching an algorithm to understand that hearing the tunes of an ice-cream van, or passing a group of kids playing with a ball on the pavement, might mean you need to be extra cautious. Or to recognize the confusing hopping of a kangaroo, which, at the time of writing, Volvo admitted it was struggling with.37 Probably not much of a problem in rural Surrey, but something the cars need to master if they’re to be roadworthy in Australia.

  Even harder, how do you teach a car that it should sometimes break the rules of the road? What if you’re sitting at a red light and someone runs in front of your car and frantically beckons you to edge forwards? Or if an ambulance with its lights on is trying to get past on a narrow street and you need to mount the pavement to let it through? Or if an oil tanker has jack-knifed across a country lane and you need to get out of there by any means possible?

  ‘None of these are in the Highway Code,’ Newman rightly points out. And yet a truly autonomous car needs to know how to deal with all of them if it’s to exist without ever having any human intervention. Even in emergencies.

  That’s not to say these are insurmountable problems. ‘I don’t believe there’s any level of intelligence that we won’t be able to get a machine to,’ Newman told me. ‘The only question is when.’

  Unfortunately, the answer to that question is: probably not any time soon. That driverless dream we’re all waiting for might be quite a lot further away than we think.

  Because there’s another layer of difficulty to contend with when trying to build that sci-fi fantasy of a go-anywhere, do-anything, steering-wheel-free driverless car, and it’s one that goes well beyond the technical challenge. A fully autonomous car will also have to deal with the tricky problem of people.

  Jack Stilgoe, a sociologist from University College London and an expert in the social impact of technology, explains: ‘People are mischievous. They’re active agents, not just passive parts of the scenery.’38

  Imagine, for a moment, a world where truly, perfectly autonomous vehicles exist. The number one rule in their on-board algorithms will be to avoid collisions wherever possible. And that changes the dynamics of the road. If you stand in front of a driverless car – it has to stop. If you pull out in front of one at a junction – it has to behave submissively.

  In the words of one participant in a 2016 focus group at the London School of Economics: ‘You’re going to mug them right off. They’re going to stop and you’re just going to nip round.’ Translation: these cars can be bullied.

  Stilgoe agrees: ‘People who’ve been relatively powerless on roads up ’til now, like cyclists, may start cycling very slowly in front of self-driving cars, knowing that there is never going to be any aggression.’

  Getting around this problem might mean bringing in stricter rules to deal with people who abuse their position as cyclists or pedestrians. It’s been done before, of course: think of jay-walking. Or it could mean forcing everything else off the roads – as happened with the introduction of the motor car – which is why, to this day, you won’t see bicycles, horses, carts, carriages or pedestrians on a motorway.

  If we want fully autonomous cars, we’ll almost certainly have to do something similar again and limit the number of aggressive drivers, ice-cream vans, kids playing in the road, roadwork signs, difficult pedestrians, emergency vehicles, cyclists, mobility scooters and everything else that makes the problem of autonomy so difficult. That’s fine, but it’s a little different from the way the idea is currently being sold to us.

  ‘The rhetoric of autonomy and transport is all about not changing the world,’ Stilgoe tells me. ‘It’s about keeping the world as it is but making and allowing a robot to just be as good as and then better than a human at navigating it. And I think that’s stupid.’

  But hang on, some of you may be thinking. Hasn’t this problem already been cracked? Hasn’t Waymo, Google’s autonomous car project, driven millions of miles already? Aren’t Waymo’s fully autonomous cars (or at least, close to fully autonomous cars) currently driving around on the roads of Phoenix, Arizona?

  Well, yes. That’s true. But not every mile of road is created equal. Most miles are so easy to drive, you can do it while daydreaming. Others are far more challenging. At the time of writing, Waymo cars aren’t allowed to go just anywhere: they’re ‘geo-fenced’ into a small, pre-defined area. So too are the driverless cars Daimler and Ford propose to have on the roads by 2020 and 2021 respectively. They’re ride-hailing cars confined to a pre-decided go-zone. And that does make the problem of autonomy quite a lot simpler.
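  A geo-fence, in this context, is nothing mysterious: the service simply refuses to operate outside a pre-mapped boundary. A minimal sketch of the idea – invented coordinates and a standard point-in-polygon test, not Waymo's implementation:

```python
# A bare-bones geo-fence check: accept a trip only if its pick-up
# point falls inside a pre-defined polygon. The polygon here is an
# invented rectangle, standing in for a well-mapped suburb.

def inside_geofence(point, polygon):
    """Standard ray-casting point-in-polygon test."""
    x, y = point
    inside = False
    for i in range(len(polygon)):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % len(polygon)]
        # Does this edge cross a horizontal ray cast from the point?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

go_zone = [(33.30, -112.10), (33.30, -111.80),
           (33.45, -111.80), (33.45, -112.10)]   # hypothetical lat/longs

print(inside_geofence((33.40, -111.95), go_zone))  # True: trip accepted
print(inside_geofence((33.60, -111.95), go_zone))  # False: outside the zone
```

  Everything inside the polygon can be mapped, rehearsed and updated in fine detail; everything outside it simply never comes up. That is the simplification.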

  Paul Newman thinks this is the future of driverless cars we can expect: ‘They’ll come out working in an area that’s very well known, where their owners are extremely confident that they’ll work. So it could be part of a city, not in the middle of a place with unusual roads or where cows could wander into the path. Maybe they’ll work at certain times of day and in certain weather situations. They’re going to be operated as a transport service.’


  That’s not quite the same thing as full autonomy. Here’s Jack Stilgoe’s take on the necessary compromise: ‘Things that look like autonomous systems are actually systems in which the world is constrained to make them look autonomous.’

  The vision we’ve come to believe in is like a trick of the light. A mirage that promises a luxurious private chauffeur for all but, close up, is actually just a local minibus.

  If you still need persuading, I’ll leave the final word on the matter to one of America’s biggest automotive magazines – Car and Driver:

  No car company actually expects the futuristic, crash-free utopia of streets packed with driverless vehicles to transpire anytime soon, nor for decades. But they do want to be taken seriously by Wall Street as well as stir up the imaginations of a public increasingly disinterested in driving. And in the meantime, they hope to sell lots of vehicles with the latest sophisticated driver-assistance technology.39

  So how about that driver-assistance technology? After all, driverless cars are not an all-or-nothing proposition.

  Driverless technology is categorized using six levels: from level 0 – no automation whatsoever – up to level 5 – the fully autonomous fantasy. In between, the levels range from cruise control (level 2) to geo-fenced autonomous vehicles (level 4), and are colloquially referred to as level 1: feet off; level 2: hands off; level 3: eyes off; level 4: brain off.
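  Spelled out as a simple lookup (the descriptions are the colloquial shorthands above, not the formal SAE wording):

```python
# The six levels of driving automation, as colloquially summarized
# in the text. Shorthand descriptions, not formal SAE definitions.
AUTOMATION_LEVELS = {
    0: "no automation whatsoever",
    1: "feet off",
    2: "hands off",
    3: "eyes off",
    4: "brain off (autonomous within a geo-fenced area)",
    5: "fully autonomous, anywhere",
}
```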

  So, maybe level 5 isn’t on our immediate horizon, and level 4 won’t be quite what it’s cracked up to be, but there’s a whole lot of automation to be had on the way up. What’s wrong with just slowly working up the levels in our private cars? Build cars with steering wheels and brake pedals and drivers in driver’s seats, and just allow a human to step in and take over in an emergency? Surely that’ll do until the technology improves?

  Unfortunately, things aren’t quite that simple. Because there’s one last twist in the tale. A whole host of other problems. An inevitable obstacle for anything short of total human-free driving.

  The company baby

  Among the pilots at Air France, Pierre-Cédric Bonin was known as a ‘company baby’.40 He had joined the airline at the tender age of 26, with only a few hundred hours of flying time under his belt, and had grown up in the airline’s fleet of Airbuses. By the time he stepped aboard the fated flight of AF447, aged 32, he had managed to clock up a respectable 2,936 hours in the air, although that still made him by far the least experienced of the three pilots on board.41

  None the less, it was Bonin who sat at the controls of Air France flight 447 on 31 May 2009, as it took off from the tarmac of Rio de Janeiro–Galeão International Airport and headed home to Paris.42

  This was an Airbus A330, one of the most sophisticated commercial aircraft ever built. Its autopilot system was so advanced that it was practically capable of completing an entire flight unaided, apart from take-off and landing. And even when the pilot was in control, it had a variety of built-in safety features to minimize the risk of human error.

  But there’s a hidden danger in building an automated system that can safely handle virtually every issue its designers can anticipate. If a pilot is only expected to take over in exceptional circumstances, they’ll no longer maintain the skills they need to operate the system themselves. So they’ll have very little experience to draw on to meet the challenge of an unanticipated emergency.

  And that’s what happened with Air France flight 447. Although Bonin had accumulated thousands of hours in an Airbus cockpit, his actual experience of flying an A330 by hand was minimal. His role as a pilot had mostly been to monitor the automatic system. It meant that when the autopilot disengaged during that evening’s flight, Bonin didn’t know how to fly the plane safely.43

  The trouble started when ice crystals began to form inside the air-speed sensors built into the fuselage. Unable to take a sensible reading, the autopilot sounded an alarm in the cabin and passed responsibility to the human crew. This in itself was not cause for concern. But when the plane hit a small bump of turbulence, the inexperienced Bonin over-reacted. As the aircraft began to roll gently to the right, Bonin grabbed the side-stick and pulled it to the left. Crucially, at the same time, he pulled back on the stick, sending the aircraft into a dramatically steep climb.44

  As the air thinned around the plane, Bonin kept tightly pulling back on the stick until the nose of the aircraft was so high that the air could no longer flow slickly over the wings. The wings effectively became wind-breaks and the aircraft dramatically lost lift, free-falling, nose-up, out of the sky.

  Alarms sounded in the cockpit. The captain burst back in from the rest cabin. AF447 was descending towards the ocean at 10,000 feet per minute.
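  To put that rate in context, take the A330's cruise altitude of roughly 38,000 feet as a ballpark (the text doesn't give the exact figure). At a steady 10,000 feet per minute,

$$ \frac{38\,000\ \text{ft}}{10\,000\ \text{ft/min}} \approx 3.8\ \text{minutes} $$

  from the top of the stall to the surface – consistent with the ‘more than three minutes’ of free-fall described below.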

  By now, the ice crystals had melted, there was no mechanical malfunction, and the ocean was far enough below them that they could still recover in time. Bonin and his co-pilot could have easily rescued everyone on board in just 10–15 seconds, simply by pushing the stick forward, dropping the aircraft’s nose and allowing the air to rush cleanly over the wings again.45

  But in his panic, Bonin kept the side-stick pulled back. No one realized he was the one causing the issue. Precious seconds ticked by. The captain suggested levelling the wings. They briefly discussed whether they were ascending or descending. Then, within 8,000 feet of sea level, the co-pilot took the controls.46

  ‘Climb … climb … climb … climb …’ the co-pilot was heard shouting.

  ‘But I’ve had the stick back the whole time!’ Bonin replied.

  The penny dropped for the captain. He finally realized they had been free-falling in an aerodynamic stall for more than three minutes and ordered them to drop the nose. Too late. Tragically, by now, they were too close to the surface. Bonin screamed: ‘Damn it! We’re going to crash. This can’t be happening!’47 Moments later the aircraft plunged into the Atlantic, killing all 228 souls on board.

  Ironies of automation

  Twenty-six years before the Air France crash, in 1983, the psychologist Lisanne Bainbridge wrote a seminal essay on the hidden dangers of relying too heavily on automated systems.48 Build a machine to improve human performance, she explained, and it will lead – ironically – to a reduction in human ability.

  By now, we’ve all borne witness to this in some small way. It’s why people can’t remember phone numbers any more, why many of us struggle to read our own handwriting and why lots of us can’t navigate anywhere without GPS. With technology to do it all for us, there’s little opportunity to practise our skills.

  There is some concern that the same might happen with self-driving cars – where the stakes are a lot higher than with handwriting. Until we get to full autonomy, the car will still sometimes unexpectedly hand back control to the driver. Will we be able to remember instinctively what to do? And will teenage drivers of the future ever have the chance to master the requisite skills in the first place?

  But even if all drivers manage to stay competent (allowing for a generous interpretation of the word ‘stay’), there’s another issue we’ll still have to contend with. Because what the human driver is asked to do before autopilot cuts out is also important. There are only two possibilities. And – as Bainbridge points out – neither is particularly appealing.

  A level 2, hands-off car will expect the driver to pay careful attention to the road at all times.49 It’s not good enough to be trusted on its own and will need your careful supervision. Wired once described this level as ‘like getting a toddler to help you with the dishes’.50

  At the time of writing, Tesla’s Autopilot is one example of this approach.51 It’s currently like a fancy cruise control – it’ll steer and brake and accelerate on the motorway, but expects the driver to be alert and attentive and ready to step in at all times. To make sure you’re paying attention, an alarm will sound if you remove your hands from the wheel for too long.
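  That hands-on-the-wheel check is, at heart, a watchdog timer. Here is a rough reconstruction of the logic – my own sketch of the behaviour described, not Tesla's code, with an invented threshold:

```python
import time

# A sketch of a driver-attention watchdog: if no torque has been felt
# on the steering wheel for too long, sound the alarm. A reconstruction
# of the behaviour described above, not Tesla's implementation.

HANDS_OFF_LIMIT_SECONDS = 30.0   # invented threshold

class AttentionWatchdog:
    def __init__(self):
        self.last_hands_on = time.monotonic()

    def report_wheel_torque(self, torque_nm):
        # Any detectable twist on the wheel is read as "hands on".
        if abs(torque_nm) > 0.1:
            self.last_hands_on = time.monotonic()

    def alarm_due(self):
        """True if hands have been off the wheel for too long."""
        return time.monotonic() - self.last_hands_on > HANDS_OFF_LIMIT_SECONDS
```

  Notice what the check actually measures: torque on the wheel, not attention. That gap is exactly what the Red Bull can and the orange, below, exploit – a constant weight on the rim looks just like a resting hand.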

  But, as Bainbridge wrote in her essay, that’s not an approach that’s going to end well. It’s just unrealistic to expect humans to be vigilant: ‘It’s impossible for even a highly motivated human to maintain effective visual attention towards a source of information, on which very little happens, for more than about half an hour.’52

  There’s some evidence that people have struggled to heed Tesla’s insistence that they keep their attention on the road. Joshua Brown, who died at the wheel of his Tesla in 2016, had been using Autopilot mode for 37½ minutes when his car hit a truck that was crossing his lane. The investigation by the National Highway Traffic Safety Administration concluded that Brown had not been looking at the road at the time of the crash.53 The accident was headline news around the world, but this hasn’t stopped some foolhardy YouTubers from posting videos enthusiastically showing how to trick your car into thinking you’re paying attention. Supposedly, taping a can of Red Bull54 to the steering wheel or wedging an orange55 into it will stop the car setting off those pesky alarms reminding you of your responsibilities.

  Other programmes are finding the same issues. Although Uber’s driverless cars need human intervention every 13 miles,56 getting drivers to pay attention remains a struggle. On 18 March 2018, an Uber self-driving vehicle fatally struck a pedestrian. Video footage from inside the car showed that the ‘human monitor’ sitting behind the wheel was looking away from the road in the moments before the collision.57

  This is a serious problem, but there is an alternative option. The car companies could accept that humans will be humans, and acknowledge that our minds will wander. After all, being able to read a book while driving is part of the appeal of self-driving cars. This is the key difference between level 2: ‘hands off’ and level 3: ‘eyes off’.

 
