Testing autonomous vehicles (AVs)

Quick summary of NHTSA’s framework

Mohsen Khalkhali

The race to full autonomy (Level 5) has become synonymous with a race to collect data. The industry has been promising self-driving consumer cars for a very long time, but recent breakthroughs in deep learning have made it possible to build highly automated driving systems far more capably. The problem with these systems is that they are hard to test and verify: RAND estimates that 275 million miles of autonomous driving would be needed to demonstrate their reliability!

To understand current industry testing practices, I looked at NHTSA’s most recent testing framework, released in September 2018, and summarized my findings here. I conclude with what I believe is the most balanced testing method.

What should be tested?

  1. Tactical maneuver behaviors (e.g. performing a low-speed merge into an adjacent lane.)
  2. ODD elements — operational design domain (The relevant ODD elements generally define the operating environment in which the AV is navigating during the test e.g. roadway type, traffic conditions, or environmental conditions).
  3. OEDR capabilities — object and event detection and response (OEDR capabilities relate directly to the objects and events the AV encounters during the test e.g. vehicles, pedestrians, traffic signals).
  4. Failure mode behaviors (e.g. GPS receiver failure by unplugging coaxial cable between GPS antenna and receiver after AV has begun moving and before it begins changing lanes).
Sample AV Scenario Test Descriptor
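The four dimensions above can be bundled into a single scenario test descriptor. As a minimal sketch in Python (all class and field names are my own, not terminology from the NHTSA framework):

```python
from dataclasses import dataclass, field

@dataclass
class ScenarioDescriptor:
    """One testable AV scenario, combining the four dimensions above.

    Field names are illustrative, not taken from the NHTSA framework.
    """
    tactical_maneuver: str                  # e.g. "low-speed merge into adjacent lane"
    odd_elements: dict = field(default_factory=dict)        # operating environment
    oedr_targets: list = field(default_factory=list)        # objects/events to detect and respond to
    failure_injections: list = field(default_factory=list)  # induced faults, e.g. GPS loss

# Example: the low-speed merge scenario with a GPS failure injection
merge_test = ScenarioDescriptor(
    tactical_maneuver="low-speed merge into adjacent lane",
    odd_elements={"roadway_type": "divided highway", "weather": "clear", "traffic": "light"},
    oedr_targets=["lead vehicle", "vehicle in target lane"],
    failure_injections=["GPS receiver failure after AV begins moving"],
)
```

A descriptor like this makes a scenario repeatable: the same maneuver can be re-run while varying one ODD element or failure injection at a time.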

How they should be tested:

Through test challenges. There are two primary categories of challenges to consider when developing and conducting tests:

  1. Challenges associated with AV technology — characteristics of the technology and the underlying implementations of the integrated hardware and software systems, e.g.:
     - Probabilistic and non-deterministic algorithms
     - Machine learning algorithms
     - Digital mapping needs
     - Regression testing
  2. Challenges associated with test execution — the expansiveness of the conditions that vehicles may encounter and handle with minimal, if any, input or guidance from a human, e.g.:
     - Testing completeness
     - Testing execution controllability
     - Testing scalability
     - Unknown or unclear constraints/operating conditions
     - Degraded testing
     - Infrastructure considerations
     - Laws and regulations
     - Assumptions

Techniques for testing:

  1. Modeling and simulation (subsets: software-in-the-loop (SIL), hardware-in-the-loop (HIL), vehicle-in-the-loop (VIL))
  2. Closed-track testing
  3. Open-road testing

These techniques offer a multifaceted testing architecture with varying degrees of test control and varying degrees of fidelity in the test environment. The table below shows how they compare on a number of parameters:

Comparison of testing techniques
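In practice, the three techniques are often applied in sequence, promoting a scenario to the next (higher-fidelity, lower-control) stage only once it passes the current one. The framework does not prescribe this ordering, so the following is a hedged sketch with placeholder function names:

```python
# Hedged sketch: escalate one scenario through the three techniques in order
# of increasing fidelity. `run_stage` is a stand-in for a real test harness.
TECHNIQUES = ["modeling_and_simulation", "closed_track", "open_road"]

def run_stage(scenario, technique):
    # Placeholder: a real harness would execute the scenario and score it.
    return True

def escalate(scenario):
    """Return the list of stages the scenario passed, stopping at first failure."""
    passed = []
    for technique in TECHNIQUES:
        if not run_stage(scenario, technique):
            break
        passed.append(technique)
    return passed
```

This staged progression is one way to spend the expensive, low-control venues (open roads) only on scenarios that already survived the cheap, high-control ones (simulation).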

Failures to uncover:

  1. Suboptimal performance (e.g., hugging one side of a lane, driving slower than allowed, taking an inefficient route or trajectory)
  2. Unexpected/unpredictable behavior (e.g., sudden acceleration/deceleration, erratic steering oscillation)
  3. Unsafe behavior (e.g., driving out of desired lane, not reacting to relevant obstacles)
  4. Collisions
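A recorded test run can be screened automatically for these four failure classes. As a toy sketch (all signal names and thresholds below are invented for illustration, not taken from the framework):

```python
def classify_failures(frame):
    """Map one telemetry frame to the four failure classes listed above.

    Signal names and thresholds are illustrative only.
    """
    failures = []
    if frame.get("collision"):
        failures.append("collision")
    if frame.get("lane_offset_m", 0.0) > 1.8:           # left the desired lane
        failures.append("unsafe_behavior")
    if abs(frame.get("accel_mps2", 0.0)) > 4.0:         # sudden acceleration/deceleration
        failures.append("unexpected_behavior")
    if 0 < frame.get("speed_mps", 0.0) < 0.5 * frame.get("speed_limit_mps", 1.0):
        failures.append("suboptimal_performance")       # far below the allowed speed
    return failures
```

Running a screen like this over every frame of a logged test turns the qualitative failure list above into countable, comparable events.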

Outcomes provided:

Outcomes are largely project-dependent and fragmented; as a result, learnings are siloed and collaborative learning does not happen.

  1. AdaptIVe — four assessment tracks:
     - Technical assessment: performance of AV features
     - User-related assessment: interaction between the user and AV features
     - In-traffic assessment: effects of the AV on surrounding traffic and non-users
     - Impact assessment: effects of AV features on safety and environmental aspects
  2. PEGASUS — identifies formal performance metrics for test techniques
  3. Many more public and private ones…

Standardization & Collaborative Learning

We believe the outcomes can be standardized by building a data exchange, so that data sharing becomes second nature to these tests. The benefits would be consensus on safety standards, reduced research and development costs, and accelerated large-scale deployment of safe AVs.

Test tracks are the best venue for standardization because they already act as shared facilities providing services to industry and government. They also strike the right balance between control and fidelity: less tightly controlled than modeling and simulation, yet not as open-ended as open-road testing.

This is the first post in a series exploring commercialization of Autonomous Vehicles (AVs) in my quest to build a smart mobility venture. If you also believe in applying AI to achieve effortless mobility for everyone, leave me a comment below.

Reference: https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/documents/13882-automateddrivingsystems_092618_v1a_tag.pdf
