Guaranteeing safety: Verifying autonomous vehicles

An interview with Professor Alessio Lomuscio

Team Five
Five Blog

--

We’re thrilled to welcome Alessio Lomuscio to the team as a scientific advisor. We caught up with him to talk about his research interests and what he’ll be bringing to Five.

What will you be focusing on for Five?

I’m here to help Five tackle the challenge of verifying autonomous vehicles. My aim is clear: to find ways of guaranteeing that Five’s cars are safe, so passengers can step inside a vehicle knowing it will do no harm to them or to those around them.

Why do self-driving vehicles demand new approaches to verification?

The general public will experience the car as a vehicle that performs journeys on our cities’ roads, getting people from A to B quickly and affordably. But under the proverbial bonnet, there will be plenty more going on. These are not cars as we’ve known them: they’re autonomous in the sense that the car will be making a number of decisions independently during the journey.

With this in mind, we need to think about how we can guarantee that the actions the vehicle performs are safe, both with respect to its own integrity and with respect to its environment. And we must consider not only the passengers but also that environment, which will be diverse and complex: other vehicles, pedestrians, cyclists, buildings, weather conditions, and more.

To provide guarantees, we’ll of course perform extensive testing. This will give us some confidence about how the vehicle behaves across a broad range of standard and corner cases. But testing is necessarily incomplete: however extensive it is, it can never be exhaustive, because there is only a finite number of situations one can test.

What approaches will you be exploring?

We need to find ways to guarantee, to the best of our abilities, that the decision-making system of the vehicle, i.e., the car’s onboard ‘reasoner’, makes safe choices. By doing this, we will be able to say that the vehicle will never perform a particular sequence of actions, such as endangering a pedestrian in order to reach its destination faster. The overall aim of this work is to give guarantees about the behaviour of the vehicle before it’s deployed. Our exploration will take place in the lab, where we’ll be assessing the vehicle’s control system.
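To make that idea concrete, here is a minimal sketch of the underlying technique, exhaustive exploration of a model’s reachable states, using a toy vehicle model invented purely for illustration (the states, transitions, and property are mine, not Five’s system):

```python
# A minimal sketch of explicit-state model checking: exhaustively explore
# every state a (toy, invented) vehicle model can reach, and confirm that no
# reachable state violates a safety property.
from collections import deque

def successors(state):
    # Toy controller: it may always brake, and may only speed up when no
    # pedestrian is around. A state is (speed, pedestrian_nearby).
    speed, ped = state
    nxt = []
    for new_ped in (True, False):              # a pedestrian may appear or leave
        nxt.append(("slow", new_ped))          # braking is always allowed
        if not new_ped:
            nxt.append(("fast", new_ped))      # speeding up only when clear
    return nxt

def is_safe(state):
    # The safety property: never drive fast while a pedestrian is nearby.
    speed, ped = state
    return not (speed == "fast" and ped)

def check(initial, successors):
    # Breadth-first search over all reachable states.
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not is_safe(state):
            return False, state                # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, None                          # the property holds everywhere

print(check(("slow", False), successors))      # (True, None): the guarantee holds
```

Real systems have vastly bigger state spaces, which is where symbolic methods and abstraction come in, but the principle is the same: the check covers every reachable state, not just the ones a test happened to visit.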

We’ll also aim to find ways to give guarantees about the car’s sensors and classifiers, especially for vision and scene reconstruction. For example, we need to ensure that pedestrians, objects, and other vehicles are always correctly identified. The scientific process will be one of ‘divide and conquer’: we’ll address the smaller building blocks (sensors, scene reconstruction, and so on), then move towards higher-level guarantees on the overall system.
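As one flavour of what a guarantee on a classifier can look like, here is a sketch of interval bound propagation (IBP), a standard technique from the neural-network verification literature; the tiny network and its weights are invented for illustration:

```python
# Interval bound propagation: given a box of possible inputs (e.g. every
# small perturbation of a camera frame), compute sound bounds on each output
# logit. If the lower bound of the 'pedestrian' logit beats every rival
# logit's upper bound, the label is provably stable on the whole box.
import numpy as np

def linear_bounds(lo, hi, W, b):
    # Sound interval arithmetic for y = W @ x + b, with x in the box [lo, hi].
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def ibp_verified(x, eps, layers, target):
    lo, hi = x - eps, x + eps                  # all inputs within eps of x
    for i, (W, b) in enumerate(layers):
        lo, hi = linear_bounds(lo, hi, W, b)
        if i < len(layers) - 1:                # ReLU (monotone) on hidden layers
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    # Verified iff the target class provably wins against every rival.
    rivals = [hi[j] for j in range(len(hi)) if j != target]
    return bool(lo[target] > max(rivals))

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), np.zeros(8)),   # hidden layer (invented)
          (rng.normal(size=(3, 8)), np.zeros(3))]   # 3 output classes
x = rng.normal(size=4)                              # stand-in for image features
print(ibp_verified(x, eps=0.01, layers=layers, target=0))
```

If the check returns True, no input in the box, however adversarially chosen, can make the network prefer another class; if it returns False, either the bounds were too loose or the property genuinely fails, and tighter methods are needed.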

How has your research primed you for this process?

I studied engineering for my undergraduate degree and formal logic for my PhD. Since around 2000, I’ve been working on the verification of AI systems. Back then, this was unexplored territory; today, it’s great to see growing attention to these themes in both the public and private sectors.

Over the past 15 years, I’ve been especially interested in developing methods based on model checking for agent-based systems, that is, distributed autonomous systems that make decisions for themselves. I’ve contributed to developing methods to verify agent-based systems and have watched the discipline progress. In the past, it was possible to give guarantees for systems with about 10⁵ ‘states’, the possible situations the system may encounter at run time. Today, it’s sometimes possible to analyse 10⁴⁰ states and beyond; these are enormous numbers. With abstraction methods we can often analyse state spaces that are infinite.
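To see where numbers like these come from, a back-of-the-envelope calculation (the component counts are illustrative, not from any specific system): the global state space of a system of independent components is the product of the local ones, so ten components with 10⁴ local states each already yield 10⁴⁰ global states.

```latex
\[
  |S_{\text{global}}| = \prod_{i=1}^{n} |S_i| = k^{n},
  \qquad \text{e.g. } \left(10^{4}\right)^{10} = 10^{40}.
\]
```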

I’ve applied these techniques with colleagues in Southampton to autonomous submarines, to ensure they perform correctly. More recently, my colleagues and I have been looking at robotic swarms, such as flying drones. Our focus has again been to develop ways to give guarantees about overall behaviours, such as flocking, aggregation, and formation flying: how can we guarantee whether these behaviours will or will not occur?

Another area I’ve focused on in the past is developing automated reasoning methods for fault tolerance, something I feel will be extremely relevant to Five. When assessing autonomous vehicles, it’s not enough to verify the system’s behaviour when everything is working correctly. We also need to think through what happens when things go wrong, to ensure the system is resilient. What if a vehicle sensor were to fail? Can we predict the consequences of faults and mitigate their impact? To achieve this, we will need to apply existing safety-analysis methods and develop new ones.
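As a toy illustration of that kind of analysis, reusing the `check` routine from the earlier sketch: inject a hypothetical fault, here a sensor that may miss a pedestrian, into the model and re-run the same exhaustive check to see which guarantees break.

```python
# Fault injection on the earlier toy model (the fault is hypothetical, for
# illustration): the sensor may report 'clear' when a pedestrian is present,
# so the controller may speed up while someone is actually nearby.
def successors_with_faulty_sensor(state):
    speed, ped = state
    nxt = []
    for new_ped in (True, False):
        # The fault: when a pedestrian is present, the sensor may miss them.
        for sensed in ((True, False) if new_ped else (False,)):
            nxt.append(("slow", new_ped))          # braking is always allowed
            if not sensed:
                nxt.append(("fast", new_ped))      # speeds up on a 'clear' reading
    return nxt

# The same reachability check now finds a counterexample:
print(check(("slow", False), successors_with_faulty_sensor))
# (False, ('fast', True)): the fault lets the car drive fast near a pedestrian.
```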

How would you define the verification challenge, in a nutshell?

We have two key questions to address. The first is “What do we verify?” That is to say, which properties of the system do we want to reason about? The second is “How do we verify it?” That is, which techniques do we develop to do so? Much of my prior work will be relevant to my activities with Five, but at the same time this project is really a new challenge: the complexity of an autonomous car is enormous, and we need to address the question of verifying its autonomy.

Digging deeper, an exciting development has been taking place in AI for some years now, driven by techniques from machine learning. Many sensors and classifiers are now derived via ML algorithms, and these are known to be fragile: small, carefully chosen changes to an input can flip a classifier’s output. This presents new challenges for verification and debugging.
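The canonical verification question for such a classifier is local robustness: given an input x that is correctly classified, say an image containing a pedestrian, show that every input within a small perturbation radius ε of x receives the same label. For a classifier with logits f:

```latex
\[
  \forall x' : \; \lVert x' - x \rVert_\infty \le \varepsilon
  \;\Longrightarrow\;
  \arg\max_i f_i(x') = \arg\max_i f_i(x)
\]
```

The interval-bound sketch above is one coarse but sound way of discharging exactly this kind of property.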

What excites you about working with Five?

This is the right time, and a deeply interesting time, to be asking complex questions about autonomy. After so many years of work and research investment in AI, you can see it’s coming of age. Autonomous systems are almost here, including self-driving cars. We have the ability to build them. The biggest challenge, however, is to build them in such a manner that they are safe and reliable. This isn’t just about making machines, but about making safe machines that serve society. That’s what motivates me to work with Five.

The business is a technological leader. The tech is superb and the teams have made exceptional progress with their prototypes. But this is about more than just the tech. It’s about society. Transforming mobility is one of the core problems of our age. Service models for autonomous vehicles vary, but Five exists to deliver a safe, shared, sustainable, accessible and affordable experience for citizens. To be involved in solving mobility from a societal perspective is a huge privilege. It’s meaningful, and it matters.

About Alessio Lomuscio, PhD, FEurAI
Alessio is Professor of Logic in Multi-Agent Systems in the Department of Computing at Imperial College London, where he leads the Verification of Autonomous Systems Group. He is a Fellow of the European Association of Artificial Intelligence and currently holds a Royal Academy of Engineering Chair in Emerging Technologies. Alessio has published over 150 conference papers and over 30 journal papers. He is a scientific advisor to Five.
