“We need an industry-wide safety certification framework for autonomous vehicles”
FiveAI publishes paper as call to action
By Lucy Yu, Director of Public Policy, FiveAI
Today, FiveAI publishes a paper outlining steps towards an industry-wide framework to ensure the safety of highly automated vehicles on UK roads. Here’s a summary of the key principles and how they can be achieved.
It’s time to move safety forwards — together
Safety matters. Across the evolving autonomous vehicle industry, everyone is talking about safety and how to get it right. And no wonder — this transformative technology is in our sights, but the risks are real and many problems remain unsolved. New transport technologies and services must improve citizens’ lives, not endanger them. The safety of self-driving vehicles must surpass human levels from the get-go, and continue to improve — fast.
Proving safety is also key to public acceptance. Without this acceptance, vehicles and services will not be adopted at the rate necessary for their many potential benefits to be unlocked. All this talk is vital and welcome but, with the industry moving at such a swift pace, safety rhetoric must be matched with pragmatic, collective action.
Working closely with government, the industry needs to reach a new level of transparency. Regulators and businesses need to come together to agree on a framework that will drive high safety standards across the board, and guarantee ongoing safety innovation. Today, FiveAI publishes a landmark paper: ‘Certification of Highly Automated Vehicles for Use on UK Roads: Creating An Industry-Wide Framework for Safety’. We’ve united globally leading thinkers in the verification of complex autonomous systems and put together a set of insight- and research-backed proposals and questions for industry and government on the challenge of seeking and sharing verification evidence.
Our aim is to spark informed, collaborative decision making that quickly leads to the creation of regulation that:
1. means self-driving innovation can thrive in the UK;
2. ensures the highest safety standards;
3. encourages all companies developing self-driving tech to share their safety findings via a best-practice process that drives improvement, enhances safety for all citizens, and allows companies to protect their unique IP; and
4. takes meaningful steps towards greater transparency with the public.
Key principles, at a glance
The safety certification framework we outline will:
Ensure that commercially owned tech is verified independently and rigorously
What? The UK should create a market that allows for the independent, unbiased verification of complex autonomous systems that are owned by, and being developed by, commercial companies.
Why? Companies can’t be allowed to verify their systems on their own, unchecked. Verification of this revolutionary new tech calls for new, complex forms of simulation, drawing on techniques developed in the chip and aerospace industries. Many companies, particularly traditional automotive companies, do not have sufficient experience in this area.
How? We outline a framework based on a governing Certification Authority supported by independent test houses. The test houses control access to scenario test cases submitted by the industry. A publicly described test oracle determines whether a scenario test case has been passed or failed by the self-driving system, based on a safety threshold set by the Certification Authority.
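To make the idea of a publicly described test oracle concrete, here is a minimal sketch. Everything in it — the metric names, the thresholds, the `ScenarioResult` structure — is an invented illustration, not part of the framework the paper proposes; the point is only that the pass/fail rule is explicit, published, and parameterised by thresholds a Certification Authority could set.

```python
# Hypothetical sketch of a certification test oracle. All names and
# thresholds here are illustrative assumptions, not taken from the paper.
from dataclasses import dataclass

@dataclass
class ScenarioResult:
    """Outcome metrics logged for one run of a scenario test case."""
    min_gap_m: float       # closest approach to any other road user (metres)
    collisions: int        # number of contact events in the run
    max_decel_ms2: float   # hardest braking applied (m/s^2)

def oracle_passes(result: ScenarioResult,
                  min_safe_gap_m: float = 1.0,
                  max_safe_decel_ms2: float = 6.0) -> bool:
    """Publicly described pass/fail rule: the run passes only if there is
    no contact, a minimum gap is kept, and braking never exceeds the
    limit set by the Certification Authority."""
    return (result.collisions == 0
            and result.min_gap_m >= min_safe_gap_m
            and result.max_decel_ms2 <= max_safe_decel_ms2)

print(oracle_passes(ScenarioResult(min_gap_m=2.3, collisions=0, max_decel_ms2=4.1)))  # True
print(oracle_passes(ScenarioResult(min_gap_m=0.4, collisions=0, max_decel_ms2=3.0)))  # False
```

Because the rule is public, any independent test house applying it to the same run must reach the same verdict — which is what makes the oracle, rather than the developer, the arbiter.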
Place transparent, hyper-scale and high-fidelity software simulation at the heart of the process
What? In an infinite real-world state space, simulation plays an important role as a tool for verification, helping developers to find and generate tricky scenario test cases and to complete faster-than-real-time test coverage. Currently, there isn’t a single, consistent, industry-wide approach to simulation and there is little public information about the parameters of the simulation models used by different developers.
Why? To ensure safety, the simulation environment must itself be validated. Greater transparency will allow the industry, regulators and the public to know that a given simulation is trustworthy and conducive to safety. For a simulation to be trustworthy, the simulator used must portray an appropriately accurate representation of reality. And the simulator needs to test the full stack self-driving system, not just the prediction, behaviour and control (‘motion planning’) aspects of the software. To do this requires a model of the vehicle’s sensor limitations and their ability to perceive the driving environment.
How? Government — through the Certification Authority — can play an important market-making role in the creation and licensing of simulation tools. It can do this either by ensuring that technology developers can turn their development costs into meaningful revenue streams, or by sponsoring a cross-industry unit.
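The full-stack point above — that a trustworthy simulator must model sensor limitations, not just motion planning — can be sketched with a toy example. The sensor model, planner, and all numbers here are invented for illustration; the contrast they show is that a planning-only test feeds the planner ground truth, while a full-stack test feeds it what the sensors could actually perceive.

```python
# Illustrative-only sketch: why simulation must test the full stack.
# A sensor model degrades ground truth before the planner sees it.
import random
from typing import Optional

def sensor_model(true_range_m: float, noise_std_m: float = 0.5,
                 max_range_m: float = 80.0) -> Optional[float]:
    """Model sensor limitations: bounded range plus measurement noise.
    Returns None when the object is beyond what the sensor can perceive."""
    if true_range_m > max_range_m:
        return None                      # the stack never sees this object
    return true_range_m + random.gauss(0.0, noise_std_m)

def planner_brakes(perceived_range_m: Optional[float],
                   braking_threshold_m: float) -> bool:
    """Toy planner: brake when a perceived obstacle is within threshold."""
    return perceived_range_m is not None and perceived_range_m < braking_threshold_m

# Obstacle at 90 m, planner wants to brake within 100 m.
true_range = 90.0
print(planner_brakes(true_range, braking_threshold_m=100.0))                 # True: planning-only test passes
print(planner_brakes(sensor_model(true_range), braking_threshold_m=100.0))   # False: sensor never saw it
```

The planning-only test reports safe early braking; the full-stack test reveals the obstacle is beyond sensor range and the vehicle would not react — exactly the class of failure a motion-planning-only simulation cannot surface.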
Create a means for companies to share their safety findings, with incentives to do so
What? Companies are rigorously testing self-driving vehicles. This involves actively looking for failures in their systems. When they discover a scenario test case in which their system fails, they can fix it. Through this process, complex autonomous systems become increasingly safe. Right now, there isn’t a smart, functioning way for companies to collaborate and share this data.
Why? Each business will discover its own unique set of challenging test cases. By sharing them across the wider industry, each company will have to demonstrate a safe self-driving system for its own most challenging scenario test cases and, crucially, for those of its competitors, too.
How? Enhanced safety will be fueled through competitive collaboration, as well as smart regulation. Scenarios can be discovered and submitted to a governing Certification Authority. Independent test houses would provide controlled access to validated scenarios so that multiple companies can test their own systems against these shared scenarios. To protect the IP and innovation spend of individual companies, the full details of the scenarios do not need to be shared — just the core details that enable safe, comprehensive testing for all.
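One way to picture sharing “just the core details” of a scenario is a redaction step between a company’s full internal test case and what it submits to the Certification Authority. The schema and field names below are entirely hypothetical — the paper does not prescribe a format — but they show the principle: the parameters needed to reconstruct the situation are shared, while raw logs and discovery methods (the IP) stay private.

```python
# Hypothetical illustration of IP-protecting scenario sharing.
# The schema and field names are invented for this sketch.
from dataclasses import dataclass, asdict

@dataclass
class SharedScenario:
    """The core details a test house needs to run the case."""
    scenario_id: str
    road_layout: str   # e.g. a reference into a common map library
    actors: list       # positions/speeds of other road users
    trigger: str       # the event that makes the case challenging

def redact_for_sharing(full_case: dict) -> SharedScenario:
    """Keep only the fields needed for testing; proprietary data
    (raw sensor logs, how the failure was discovered) is dropped."""
    return SharedScenario(
        scenario_id=full_case["scenario_id"],
        road_layout=full_case["road_layout"],
        actors=full_case["actors"],
        trigger=full_case["trigger"],
    )

full = {
    "scenario_id": "cutin-0042",
    "road_layout": "uk_motorway_3lane",
    "actors": [{"type": "car", "speed_mps": 31.0, "lane": 2}],
    "trigger": "late cut-in with small gap",
    "raw_sensor_log": b"...",          # proprietary: never leaves the company
    "discovery_method": "fuzzing",     # proprietary: never leaves the company
}
shared = asdict(redact_for_sharing(full))
print("raw_sensor_log" in shared)   # False: the IP stays private
```

Every company can then run its own system against `shared` cases submitted by competitors, without any company exposing how it found them.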
Provide a set of road rules and driving codes and conventions to maximise safety
What? Driving standards and road rules exist to make our roads safer and encourage the free flow of traffic. But, in some cases, following rules in an overly rigid manner could reduce safety — rules need to be interpreted intelligently. To promote the safety of automated driving systems, autonomous vehicles must have the same options as human drivers. This requires not just a simple digital map of road rules, but also some formalisation of the ‘exception handling’ rules that are implicit in common-sense human behaviour.
Why? Moment by moment, human drivers exercise interpretation, discretion and common sense when applying standards and rules to real driving on real roads. For example, crossing a centre dividing line to avoid a lane obstruction, or judging how to behave if traffic lights are defective. These approaches and value judgements could be important for safety, but are not currently codified in a machine-readable way.
How? Devise and publish a Digital Highway Code.
A technical perspective
The paper brings together globally leading academics with diverse specialisms, uniting policy, technology and engineering. Professor John McDermid is Europe’s foremost authority on safety assurance of high-integrity systems, an advisor to FiveAI, and one of the paper’s contributors.
Reflecting on the paper, John commented:
“One of the motivating ideas behind autonomous driving is to make systems that are safer than humans. However, it’s unlikely the public would accept the uncertainty of this new tech unless it was significantly safer than a human.
The industry and government need to collaborate around a framework that will ensure the safety of autonomy. As well as the ability to share scenario test cases, this must also include arriving at an agreed format for the way in which data around autonomous decision making is stored and made available for analysis. It’s important that the industry engages regulators in these discussions.
Ultimately, by taking this collective action, public confidence will be improved, services can come to market faster and we can all experience the benefits this technology will bring.”
This is no easy undertaking. Creating the right regulatory framework will be deeply complex — technologically, socially, politically, and commercially. The paper begins to address the intersection of engineering, business, politics and citizens’ everyday lives, but further discussion and decision making are needed.
FiveAI is committed to driving these conversations forward, through multiple projects and initiatives. As Europe’s leading autonomous vehicle company, it’s a duty we feel passionately about. Right now, we’re proud to be working closely with the UK Department for Transport for its public dialogue project on connected and automated vehicles, and to be working with the Law Commission on its review of the UK’s legal framework for automated vehicles. Further projects, initiatives and events are currently being planned.
With this new technology moving at such a fast pace and a transport revolution just around the corner, now’s the time to be taking practical action to ensure the safety of autonomous vehicles — for everyone.
Get the full paper at https://five.ai/certificationpaper