Managing Risk and Hazards for Autonomous Vehicles

Team Five · Five Blog · Apr 13, 2018

The primary function of an autonomous vehicle (AV) is to transport a passenger from point A to point B in a safe and efficient manner. AVs will encounter many other road users and environmental scenarios, and a safe journey requires the AV to observe and respond to relevant objects or hazards along the way.

We all know what normal, safe driving looks like. In the UK, learner drivers are tested on their knowledge of the Highway Code and advanced motorists are encouraged to follow the guidance set out in Roadcraft: The Police Driver’s Handbook.

Implementing this code in an AV is far from simple. To start with, the AV must consider the context within which different objects or road users appear in the scene. Processing the entire scene in order to determine such contexts implies sifting through a significant amount of information. For instance, a dustbin on the pavement would be uninteresting and routine, while the same dustbin moved onto the road by a couple of feet would now be a hazard!
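To make that concrete, here is a minimal, purely illustrative sketch (not FiveAI's production code) of how the same detected object might be rated differently depending on where it sits relative to the vehicle's planned corridor. The class names, thresholds and field names are invented for the example.

```python
from dataclasses import dataclass

# Hypothetical sketch: the same object class is rated differently
# depending on whether it encroaches on the drivable corridor.

@dataclass
class DetectedObject:
    label: str               # e.g. "dustbin"
    lateral_offset_m: float  # distance from the kerb line; negative = on the road

def hazard_level(obj: DetectedObject, corridor_half_width_m: float = 1.8) -> str:
    """Classify an object's hazard level from its position relative to the
    planned corridor. Thresholds here are illustrative only."""
    if obj.lateral_offset_m >= 0.5:
        return "routine"   # safely on the pavement
    if obj.lateral_offset_m > -corridor_half_width_m:
        return "hazard"    # encroaching on the planned path
    return "monitor"       # on the road, but outside our corridor

print(hazard_level(DetectedObject("dustbin", lateral_offset_m=1.0)))   # routine
print(hazard_level(DetectedObject("dustbin", lateral_offset_m=-0.6)))  # hazard
```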

Through semantic segmentation, perceptual reasoning and learned experience, AVs can use all the contextual variables in an environment to judge what might reasonably happen in a scene. This informs planning and safe control of the vehicle. As an example, take a reflection caused by a puddle on the road, which could look like an object to be avoided. But if the AV knows it is raining, it can allow for the possibility that the reflection is in a puddle rather than an actual object, and continue to drive unimpeded.
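As a toy illustration of that kind of contextual reasoning, the sketch below fuses a detector score with a weather-dependent prior using a simple odds update. The prior values and function names are assumptions made for the example, not our actual perception stack.

```python
def obstacle_probability(detector_score: float, raining: bool) -> float:
    """Toy Bayesian-style fusion: a dark patch on the road surface is less
    likely to be a real obstacle when rain makes puddle reflections common.
    The priors are illustrative, not real calibration values."""
    prior_real = 0.3 if raining else 0.8  # rain raises the chance of reflections
    # Treat the detector score as a likelihood ratio in favour of "real object".
    odds = (prior_real / (1 - prior_real)) * (detector_score / (1 - detector_score))
    return odds / (1 + odds)

print(round(obstacle_probability(0.7, raining=True), 2))   # ~0.5: keep driving, stay alert
print(round(obstacle_probability(0.7, raining=False), 2))  # ~0.9: treat as a real object
```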


Everything that a vehicle encounters has the potential to be hazardous, so AVs are not programmed with an explicit list of ‘hazards’ and ‘non-hazards’. Instead, we recognise that there are degrees of risk associated with every object, and the AV learns about hazards from a combination of data-driven modelling and human input regarding specific scenarios. This process is tightly coupled with a similar one for verification and safety testing.
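One way to picture “degrees of risk” is as a continuous score per object, blending a learned baseline with human-specified scenario rules. The sketch below is a hypothetical illustration of that idea; the labels, numbers and scenario names are made up.

```python
# Hypothetical sketch: every tracked object carries a continuous risk score,
# combining a learned model's output with hand-specified scenario rules.

LEARNED_BASE_RISK = {            # stand-in for a data-driven model's output
    "parked_car": 0.2,
    "cyclist": 0.6,
    "child_pedestrian": 0.8,
}

SCENARIO_MULTIPLIERS = {         # human input for specific, well-understood scenarios
    "school_zone": {"child_pedestrian": 1.25},
    "bus_stop": {"cyclist": 1.1},
}

def object_risk(label: str, active_scenarios: list) -> float:
    risk = LEARNED_BASE_RISK.get(label, 0.5)  # unknown objects get a cautious default
    for scenario in active_scenarios:
        risk *= SCENARIO_MULTIPLIERS.get(scenario, {}).get(label, 1.0)
    return min(risk, 1.0)

print(object_risk("child_pedestrian", ["school_zone"]))  # 1.0 (capped)
print(object_risk("parked_car", []))                     # 0.2
```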

At FiveAI, our entire architecture is built to perform safely across a broad spectrum of potential hazards. Our software takes into account the various actors within a scene, whose motion is integrated within a common map. The decision-making logic then reads this map and plots a course by evaluating all of the risks and calculating the safest path. The more risks there are, the more they propagate through our architecture and the harder it is for the AV to chart a safe course. The car decides when to slow down or stop depending on the level of such uncertainty.
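To give a feel for how risk shapes the chosen path and speed, here is a deliberately simplified sketch: candidate paths are scored by the summed risk of the objects they interact with, and speed drops as the residual risk of even the best option grows. All thresholds and values are illustrative, not drawn from our planner.

```python
# Hypothetical sketch: score candidate paths against the risks in a shared map,
# and reduce speed as overall uncertainty rises.

def path_cost(path_risks: list) -> float:
    """Aggregate the risk of every object a candidate path interacts with."""
    return sum(path_risks)

def choose_path_and_speed(candidates: dict, nominal_speed_mps: float = 13.0):
    best = min(candidates, key=lambda name: path_cost(candidates[name]))
    uncertainty = path_cost(candidates[best])
    # Scale speed down as residual risk grows; stop if even the best path is too risky.
    if uncertainty > 2.0:
        return best, 0.0
    return best, nominal_speed_mps * max(0.2, 1.0 - uncertainty / 2.0)

candidates = {"keep_lane": [0.2, 0.4], "overtake": [0.6, 0.9, 0.3]}
print(choose_path_and_speed(candidates))  # keep_lane, at a reduced speed (~9 m/s)
```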

Our architecture accounts for risks in the present, and it also envisions potential futures given the present evidence. If a pedestrian is visible on the pavement (or sometimes even when one is not immediately visible, such as in a school zone), there is a chance they could step onto the road, and the AV always has to account for that possibility.
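A tiny sketch of that kind of forward-looking caution: a pedestrian on the pavement is given a non-zero probability of stepping into the road (inflated near a school), and that probability widens the clearance the vehicle keeps. The functions and constants are hypothetical.

```python
# Hypothetical sketch: even a pedestrian on the pavement gets a non-zero
# probability of stepping into the road, inflated near a school.

def crossing_probability(distance_to_kerb_m: float, near_school: bool) -> float:
    base = max(0.02, 0.3 - 0.05 * distance_to_kerb_m)  # never allowed to reach zero
    return min(1.0, base * (2.0 if near_school else 1.0))

def required_gap_m(speed_mps: float, p_cross: float,
                   reaction_s: float = 1.0, comfort_decel: float = 3.0) -> float:
    """Keep enough clearance to stop comfortably, weighted by how likely the
    pedestrian is to enter the road."""
    stopping = speed_mps * reaction_s + speed_mps ** 2 / (2 * comfort_decel)
    return stopping * (0.5 + 0.5 * p_cross)

p = crossing_probability(distance_to_kerb_m=1.0, near_school=True)
print(round(required_gap_m(10.0, p), 1))  # a larger buffer than for an empty pavement
```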

AVs also need to be assertive enough to make progress without sacrificing safety, which is a complex problem to solve. For instance, when merging into busy traffic on a multi-lane road, drivers have to decide when to cut in, and in front of whom — using signals of intent and informal contracts based on reading behaviour or visual cues.

[Image: Complex UK roads, where AVs need to be assertive enough to make progress without sacrificing safety]

These are interactive decision-making scenarios, which mathematicians often model as games. For AVs to make safe progress in busy cities, they need safe and efficient strategies for playing these games.
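Here is a toy version of such a game: a two-player merge where the AV's best action depends on how likely the other driver is to yield, estimated from behavioural cues. The payoff numbers are invented purely to illustrate the structure.

```python
# Hypothetical sketch: a merge framed as a tiny two-player game.
# Rows are the AV's actions, columns the other driver's; entries are
# (AV payoff, other payoff). Values are illustrative only.

PAYOFFS = {
    ("merge_now", "yield"):      ( 2,  0),   # clean merge
    ("merge_now", "hold_speed"): (-5, -5),   # conflict: both lose heavily
    ("wait",      "yield"):      ( 0,  0),   # progress lost for both
    ("wait",      "hold_speed"): ( 0,  1),   # other car keeps its gap
}

def best_response(p_yield: float) -> str:
    """Pick the AV action with the highest expected payoff, given an estimated
    probability that the other driver yields (read from behavioural cues)."""
    def expected(action: str) -> float:
        return (p_yield * PAYOFFS[(action, "yield")][0]
                + (1 - p_yield) * PAYOFFS[(action, "hold_speed")][0])
    return max(("merge_now", "wait"), key=expected)

print(best_response(p_yield=0.9))  # merge_now: the gap is being offered
print(best_response(p_yield=0.3))  # wait: forcing the merge is too risky
```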

Our AVs will have to cope with all of the challenges that urban driving presents. In dense, urban environments, such decisions must be made repeatedly and routinely. This means that the techniques we deploy — for all operations ranging from computer vision to motion control — must not only be accurate, but also tuned and tested to be reliable.

London is precisely the type of environment that will put AVs through the most rigorous tests. The weather is changeable, often rainy and dark. Lighting tends to be inconsistent, and there are numerous challenging actors in any one scene. Our goal is to train our AVs to navigate these complex environments safely. When we succeed in London, we will be confident that we have a solution that can be made to work across the rest of Europe.

- Dr. Subramanian Ramamoorthy, VP Prediction and Motion Planning, FiveAI
