How to ensure the safety of Self-Driving Cars: Part 1/5

Jason Marks
3 min read · Jun 5, 2018

Would you let an unmanned Uber take you to the airport? Or a driverless school bus drive your kids to elementary school? You’re probably wondering how we can be sure that the cars of the future are safe, especially if there’s a robot behind the wheel. This series will dissect how the designers and engineers building self-driving cars can objectively ensure that autonomous vehicles make the right decisions at the right time to save lives. The goals of the mobility revolution are to reduce or eliminate traffic fatalities, to decongest cities and roadways, and to ultimately make better use of our natural resources.

This series is intended for all readers, and I apologize to the engineer readers for the high-level nature of the technical content. Enjoy!

Part 1: Optimizing safety of the self-driving car

When we talk about vehicle safety, we look at how many traffic incidents occur per a given number of miles driven. The fatality rate for human-driven vehicles was 1.1 fatalities per 100 million vehicle miles driven in 2012 (source). Autonomous vehicles must perform at least an order of magnitude better for many of us to consider them “safe.” This means that fewer than one fatality should occur per 1 billion autonomous miles driven.
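
To see where the one-per-billion figure comes from, here is a quick back-of-the-envelope calculation in Python (the rates are the ones quoted above; the variable names are just for illustration):

```python
# Back-of-the-envelope check of the safety target quoted above.
human_rate = 1.1 / 100_000_000    # 1.1 fatalities per 100 million vehicle miles (2012)
target_rate = human_rate / 10     # "an order of magnitude better"

fatalities_per_billion = target_rate * 1_000_000_000
print(fatalities_per_billion)     # ~1.1, i.e. on the order of one fatality per 1 billion autonomous miles
```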

But we’re not yet at a point where autonomous vehicles are logging billions of miles on the road, and we’ve already encountered our first fatality in Tempe, Arizona, where a self-driving Uber struck a pedestrian. Not to mention, there’s no real regulation of autonomous vehicle safety beyond “self-certification” and “governmental recommendations.” So, what can truly be done to give the public the confidence to use autonomous transport?

This question falls on the shoulders of the engineers, computer scientists, and architects of the self-driving vehicle. Namely, the software that sits within the vehicle must operate in such a way that we can say, “I trust the brain of this vehicle to make the right decisions.” That brain can be quite complex, consisting of over 250 million lines of code (source, extrapolated), but fundamentally it works on a “sense, plan, act” methodology:

Figure 1: Sense, Plan, Act Methodology, Jason Marks
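
To make that loop concrete, here is a minimal sketch of the control flow, assuming a generic robotics-style interface. The sensor and vehicle objects and their methods (`sensor.read()`, `vehicle.steer()`, and so on) are hypothetical placeholders for illustration, not any real vendor’s API:

```python
import time

def sense(sensors):
    """Read raw data from cameras, lidar, radar, etc. and build a world model."""
    return {name: sensor.read() for name, sensor in sensors.items()}

def plan(world_model, destination):
    """Decide on the next maneuver given the world model and the goal."""
    # In a real stack this step involves prediction, behavior planning,
    # and motion planning; here it just returns a placeholder decision.
    return {"steering_angle": 0.0, "target_speed": 10.0}

def act(vehicle, decision):
    """Send the planned commands to the steering, throttle, and brake actuators."""
    vehicle.steer(decision["steering_angle"])
    vehicle.set_speed(decision["target_speed"])

def drive_loop(sensors, vehicle, destination, cycle_hz=20):
    """Run sense -> plan -> act continuously at a fixed rate until arrival."""
    while not vehicle.arrived(destination):
        world_model = sense(sensors)
        decision = plan(world_model, destination)
        act(vehicle, decision)
        time.sleep(1.0 / cycle_hz)
```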

Often you will hear this methodology broken down into what is called the “AV Stack,” or the Autonomous Vehicle software Stack. This can look quite complicated when viewed with more specificity, but it always follows the same “sense, plan, act” methodology:

Figure 2: AV Stack, Murthy Nukala

So how do we make sure that the “AV Stack” is sensing the right things, planning the best possible decision, and acting out that decision correctly? We’ll break down exactly how engineers can be sure each of these things is happening on the autonomous vehicles they build. We’ll look at what’s currently being done, what can be done in the near future, and what still needs significant work. We’ll then wrap up with a discussion of how to make this perfect, safe world a reality.

It should be noted that no matter what “level” of autonomy the vehicle in question has, the same methodology is implemented:

Figure 3: SAE Levels of Driving Automation (Source)
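
In case the figure doesn’t render, the SAE J3016 levels it shows can be summarized roughly as follows (a paraphrase for illustration, not the official wording):

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 levels of driving automation, paraphrased from Figure 3."""
    NO_AUTOMATION = 0           # Human does all of the driving
    DRIVER_ASSISTANCE = 1       # Steering OR speed assistance (e.g., adaptive cruise control)
    PARTIAL_AUTOMATION = 2      # Steering AND speed assistance; human must monitor at all times
    CONDITIONAL_AUTOMATION = 3  # System drives in some conditions; human takes over on request
    HIGH_AUTOMATION = 4         # System drives itself within a limited operational domain
    FULL_AUTOMATION = 5         # System drives itself everywhere, no human driver needed
```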

Read the Rest of the Series: How to ensure the safety of Self-Driving Cars

Part 1 — Introduction

Part 2 — Sensing

Part 3 — Planning

Part 4 — Acting

Part 5 — Conclusion
