Autonomous vehicles: how safe is safe enough?

Team Five · Five Blog · Apr 6, 2018

The statistics vary slightly depending on which research you read, but the studies agree that human error is a cause in over 90% of road traffic collisions (RTCs).

The latest Department for Transport figures show that there were 1,792 road deaths in the UK in 2016, with a further 24,101 people seriously injured.

When more than nine in ten RTCs feature human error as a contributory factor, there is clearly scope for a massive improvement in road safety if we can remove human error from driving. Humans are brilliant at many things, but the statistics make it hard to argue that driving is one of them.

Given that human drivers can sometimes underperform, how consistently safe do autonomous vehicles (AVs) need to be?

To answer that question, some background is helpful. Fully autonomous systems (Level 5 in the SAE levels of driving automation) require no human input at all. Driver assistance technologies that automate the driving task to some degree, but not completely, are described as Level 2 or 3 autonomy, and they require the human driver to be able to take back control at a moment’s notice.
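For readers who prefer the taxonomy spelled out, here is a minimal sketch of the SAE levels as a Python data structure. The one-line descriptions are our paraphrases, not the official wording:

    from enum import IntEnum

    class AutonomyLevel(IntEnum):
        # SAE J3016 levels, paraphrased (not the official definitions)
        L0 = 0  # no automation: the human does everything
        L1 = 1  # driver assistance: steering or speed support, not both
        L2 = 2  # partial automation: steering and speed, human must supervise
        L3 = 3  # conditional automation: human must take over on request
        L4 = 4  # high automation: no human needed within a confined domain
        L5 = 5  # full automation: no human needed, anywhere

    def human_must_stay_ready(level: AutonomyLevel) -> bool:
        # At Level 3 and below, a human must be ready to drive
        return level <= AutonomyLevel.L3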

The problem with these features is that, despite their popularity, they pose serious safety issues. Google’s research using in-car cameras shows that even when explicitly told not to take their attention off the road, drivers quickly place too much faith in the car’s ability to handle any scenario and start checking email or watching movies. Ford has observed the same problem and came to the same conclusion. Outside of a test environment, recent events have shown that these lapses in concentration can have tragic consequences.

This behavioural trap is so persistent, and its consequences so severe, that the only safe approach to autonomous vehicles is to avoid the car and driver sharing any element of the driving task. Full Level 5 automation has to be the goal, but we’ll arrive sooner at Level 4 autonomy, where the vehicle can operate entirely without human oversight within a confined area.

When alert and unimpaired, humans can make good drivers, but the problems start when they fail to maintain one or both of those states. That’s why humans are at least 200 times more likely than the vehicle itself to be the cause of a crash, and why a future with highly automated vehicles that require no human input (Level 4 and, eventually, Level 5) has the potential to be much, much safer.

The autonomous vehicle technology in development today aims to replace the high-level perception and reasoning that humans use when driving well. That means these systems must respond appropriately to a far broader range of scenarios than current Level 2/3 systems.

On the whole, the automotive industry uses risk-based safety standards that have led to an ultra-low prevalence of vehicle failures. The strictest standards, designed to prevent failures that can result in death or serious injury, demand a maximum undetected failure rate of just one in every 11,000 years of operation. However, applying the same failure targets to autonomous systems that aim to replace human control sets an infeasible benchmark. So how safe is “safe enough” for autonomous vehicles, and how do we get there? Extensive verification and validation of these systems clearly has a vital role to play (more on that to come).
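For a sense of scale, that 11,000-year figure is consistent with a target of roughly one dangerous failure per 100 million operating hours (a rate of 1e-8 per hour, the order of magnitude used by the strictest automotive integrity levels). The per-hour rate here is our assumption for illustration; the conversion itself is simple arithmetic:

    # Converting an assumed failure-rate target of 1e-8 per operating hour
    # into mean years between failures (assuming continuous operation).
    failure_rate_per_hour = 1e-8              # assumed target, for illustration
    hours_per_year = 24 * 365
    mean_years_between_failures = 1 / (failure_rate_per_hour * hours_per_year)
    print(f"{mean_years_between_failures:,.0f} years")  # ~11,416 years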

[Chart: UK Department for Transport statistics, reported road accidents and casualties, Great Britain, 1950–2014]

We know that autonomous vehicles have the potential to be far safer than human drivers. Driverless cars that are just twice as safe as human drivers would lower collision rates to around 200 per billion miles driven, and serious injury and death on our roads would be halved. Even if that is still some distance from the current standard for low-complexity components of one undetected failure every 11,000 years, there’s a clear moral and practical case for an intermediate safety target for autonomous systems.
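Working backwards from the figures above (and nothing else): if halving the human collision rate yields 200 collisions per billion miles, the implied human baseline is 400 per billion miles:

    # Inference from the article's own figures, not an independent statistic.
    av_rate = 200                 # collisions per billion miles at 'twice as safe'
    safety_multiple = 2           # 'twice as safe as human drivers'
    implied_human_baseline = av_rate * safety_multiple
    print(implied_human_baseline)         # 400 per billion miles

    # At 2016 UK levels, halving deaths would mean ~896 rather than 1,792 per year.
    print(1792 / safety_multiple)         # 896.0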

That might sound simple, but even with a less demanding failure-rate target, the verification and validation effort for autonomous systems is still vast. Existing standards and techniques do provide a useful starting point that can be adapted to begin growing our confidence in the safety performance of fully autonomous systems prior to their wider-scale deployment on our roads.
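To see why the effort is vast, consider how slowly real-world driving accumulates statistical evidence. A standard “rule of three” sketch (our illustration, using an assumed benchmark of one fatality per 100 million miles, the order of magnitude of human performance) gives the failure-free mileage needed to claim, with 95% confidence, that an AV is at least that safe:

    # 'Rule of three': observing zero failures in n trials bounds the true
    # failure rate below ~3/n with 95% confidence.
    human_fatality_rate = 1 / 100_000_000   # per mile; assumed benchmark
    rule_of_three = 3                       # 95% confidence factor
    miles_needed = rule_of_three / human_fatality_rate
    print(f"{miles_needed:,.0f} fatality-free miles")  # 300,000,000

Hundreds of millions of failure-free miles is far beyond what any fleet can accumulate quickly, which is one reason driving alone cannot be the whole validation story.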

A staged approach should be favoured for real-world deployment. Systems should be validated in the real world over fixed routes of limited length that restrict the possible scenarios and hazards a vehicle could encounter. This approach would let consumers enjoy the enhanced safety, lower cost and greater convenience of autonomous ‘mobility as a service’ sooner than a ‘go anywhere’ approach to testing would allow.
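As a purely hypothetical sketch of what “restricting the possible scenarios” might look like when written down, here is an illustrative operational-design-domain description for a fixed-route deployment; the field names and values are ours, not from any real system:

    from dataclasses import dataclass, field

    @dataclass
    class OperationalDesignDomain:
        # Hypothetical fixed-route deployment constraints (illustrative only)
        route_ids: list = field(default_factory=list)   # pre-surveyed routes only
        max_route_length_miles: float = 10.0
        max_speed_mph: float = 30.0
        daylight_only: bool = True
        excluded_weather: tuple = ("snow", "fog", "heavy_rain")

    pilot = OperationalDesignDomain(route_ids=["city_loop_a"], max_speed_mph=25.0)

Constraining the domain like this shrinks the space of scenarios that verification has to cover, which is the point of the staged approach.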

In addition to testing, we also need to communicate with government and the public to explain how AVs will be made safe.

Driverless cars will improve societies and make our world a better place, with lower congestion, quicker journeys, cheaper travel and improved air quality. But the primary objective for any autonomous vehicle technology must be to reduce death and injury. For the public to support autonomous vehicles, the industry needs to prove that these systems can meet the safety threshold we agree to as a society. We’ll explain more about the techniques we can use to prove compliance with such a threshold, without having to drive billions of miles on real roads, in blogs to come. But public engagement on what these technologies can and cannot do, along with safety thresholds that balance genuine and valid public concerns against statistical improvements in road safety, should all be part of the process.

Simon Tong, Safety & Human Factors Manager, FiveAI

We’re building self-driving software and development platforms to help autonomy programs solve the industry’s greatest challenges.