The “Trust Sandwich” for Autonomous Vehicle Safety

Ro Gupta
Aug 26, 2019


Historically, safety standards for autonomous vehicles have been a topic high on platitudes and low on substance. But after a series of high-profile incidents, and with the federal government slowly beginning to develop a regulatory framework, this is starting to change. In recent months, several industry groups have come forward with competing plans to begin setting universal safety standards for autonomous vehicles. However, the industry has still not satisfyingly answered two questions that key stakeholders such as governments, insurers and the general public have been asking for years:

  1. How Safe is Safe Enough?
  2. How/When/Who Should Verify Safety?

For question 1, a few recent answers provide insight into the direction the industry is headed. At the July Automated Vehicles Symposium in Orlando, Aurora expressed “safe enough” as “free from unreasonable risk,” a principle embraced in the emerging Safety of the Intended Functionality (SOTIF) standard, which in turn builds on earlier functional safety standards from the ISO and IEC standards bodies. However, Aurora did not go into specific definitions of “unreasonable” or what that would imply for metrics like crash or fatality rates, which are what most mainstream stakeholders ultimately want to understand and weigh.

The PEGASUS project, made up primarily of German automotive companies, did begin to address the question in this way, but stopped short of an answer, instead presenting historical data on societal acceptance of fatality rates (e.g., what we have come to accept from a major technology like aviation versus a voluntary activity like mountain climbing).

Slide from the PEGASUS keynote at the 2019 Automated Vehicles Symposium about societal risk acceptance of various technologies and activity types in history.

Volvo went as far as we’ve seen anyone go in answering the question head-on. Using a highly analytical approach built on decades of its own crash data, it became the first major automaker to put a public stake in the ground, describing “safe enough” as attentive, skilled, experienced driver performance, as defined by its statistical Reference Driver Model. While we are still a ways from industry convergence, all of the above is encouraging progress.
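
To make that kind of benchmark concrete, here is a minimal sketch, using entirely hypothetical numbers and a crude Poisson approximation rather than Volvo’s actual Reference Driver Model, of how an AV fleet’s observed crash rate might be compared against a human reference rate:

```python
import math

def crash_rate_per_million_miles(crashes: int, miles: float) -> float:
    """Point estimate of crash rate, expressed per million miles driven."""
    return crashes / miles * 1_000_000

def poisson_upper_bound(crashes: int, miles: float, z: float = 1.645) -> float:
    """Rough one-sided 95% upper confidence bound on the crash rate
    (normal approximation to the Poisson rate), per million miles."""
    rate = crashes / miles
    upper = rate + z * math.sqrt(max(crashes, 1)) / miles
    return upper * 1_000_000

# Hypothetical figures, for illustration only.
REFERENCE_DRIVER_RATE = 4.2          # crashes per million miles for the benchmark human driver
av_crashes, av_miles = 3, 2_500_000  # fleet's ex-post on-road record

observed = crash_rate_per_million_miles(av_crashes, av_miles)
upper = poisson_upper_bound(av_crashes, av_miles)

print(f"Observed AV rate: {observed:.2f} per million miles")
print(f"95% upper bound:  {upper:.2f} per million miles")
print("Meets reference-driver bar:", upper < REFERENCE_DRIVER_RATE)
```

The design choice any real standard has to make is the same one the toy check makes explicit: whether the fleet must beat the reference rate on a point estimate or on a confidence bound, and by how much margin.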

For question 2, however, we have seen far less clarity on who should verify safety in driverless vehicles, and how and when they should do it. As a supplier of one of the core safety components of production AV systems (HD maps), CARMERA is getting this question a lot more in 2019. Based on our observations of recent trends, the likely robustness required for broad acceptance, and the feasibility of implementation, we believe that the ultimate protocol will evolve to something resembling a “Trust Sandwich.”

This (admittedly hokey) term stems from the “trust but verify” proverb made famous in the U.S. by Ronald Reagan. The concept of “trust” in AV is particularly significant given the heavy reliance on neural-network-based AI approaches, which tend to be difficult for outsiders to reverse engineer when trying to explain the decisions behind the core questions AVs face: Localization (Where am I?), Perception (What’s around me?), Planning (What should I do next?) and Controls (How should I do it?).
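
As a rough illustration of that decision chain, the four questions map onto a staged pipeline in which each stage’s output feeds the next; the stub names and signatures below are illustrative, not any particular company’s architecture:

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Pose:
    x: float
    y: float
    heading: float

# One stub stage per core question; signatures are placeholders, not a vendor API.
def localize(sensor_frame: Any) -> Pose:                    # Where am I?
    return Pose(x=0.0, y=0.0, heading=0.0)

def perceive(sensor_frame: Any) -> List[Any]:               # What's around me?
    return []

def plan(pose: Pose, obstacles: List[Any]) -> List[Pose]:   # What should I do next?
    return [Pose(pose.x + 1.0, pose.y, pose.heading)]

def control(trajectory: List[Pose]) -> None:                # How should I do it?
    pass

def drive_step(sensor_frame: Any) -> None:
    """One tick of the loop: each stage's output feeds the next, which is why an
    opaque, learned model at any stage makes the chain hard to explain end to end."""
    pose = localize(sensor_frame)
    obstacles = perceive(sensor_frame)
    trajectory = plan(pose, obstacles)
    control(trajectory)
```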

Using the trust <> verify dichotomy, we believe the evolution will look something like this:

  • Early-Mid 2010s: Trust
    Google X project is underway with very little oversight.
  • Mid-Late 2010s: Trust → Verify
    State governments start to require ex-post mileage, disengagement and crash reporting, but with highly variable reporting rules and methodologies.
  • 2020s+: Verify → Trust → Verify
    The “sandwich”: regulators and insurers require standardized ex-ante AND ex-post information, and are able to use it to meaningfully assess risk and validate results (a sketch of such a submission follows this list). This includes simulation and on-road performance, as well as the presence of key redundancy technologies such as continually updated HD maps, driver monitoring systems and remote operation, depending on the intended use.
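
Here is a minimal sketch of what a standardized submission under such a protocol might contain; the field names, thresholds and acceptance check are placeholders chosen only to illustrate the ex-ante/ex-post split, not a real standard:

```python
from dataclasses import dataclass

@dataclass
class ExAnteEvidence:
    """Submitted before deployment: the first slice of the sandwich."""
    simulation_miles: float
    scenario_pass_rate: float              # fraction of required test scenarios passed
    hd_map_update_interval_hours: float    # how often the HD map is refreshed
    has_driver_monitoring: bool
    has_remote_operation: bool

@dataclass
class ExPostEvidence:
    """Reported after deployment: the second slice."""
    on_road_miles: float
    disengagements: int
    crashes: int

@dataclass
class SafetyCase:
    ex_ante: ExAnteEvidence
    ex_post: ExPostEvidence

    def meets_gate(self, max_crash_rate_per_million: float = 4.2) -> bool:
        """Toy acceptance check a regulator or insurer might run; the
        threshold and required fields are placeholders."""
        ex_ante_ok = (
            self.ex_ante.scenario_pass_rate >= 0.99
            and self.ex_ante.has_driver_monitoring
        )
        crash_rate = self.ex_post.crashes / self.ex_post.on_road_miles * 1_000_000
        return ex_ante_ok and crash_rate <= max_crash_rate_per_million
```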

This evolution is not dissimilar to how human driver certification and insurance work today with respect to issuing driver’s licenses, setting premiums and adjusting them over time. A certain level of familiarity will be important, because the agencies and carriers we talk to are wary of any dramatic deviations from their decades-old methods.

Regardless, in order to meet production schedules in the early 2020s, AV companies need to establish their minimum safety standards by the end of this decade, while the gatekeepers stake out their positions on ensuring appropriate levels of transparency and scrutiny. As these paths converge, AV developers and their suppliers will need to prepare to comply on both sides of the emerging “trust sandwich.”

Thanks to Voyage CEO Oliver Cameron, former Waymo CBO Shaun Stewart, and Apex.AI CEO Jan Becker for reviewing drafts of this. An abridged version originally appeared on Axios. Follow Field of View and CARMERA on Twitter for future viewpoints on the safety and verification implications of AV technologies, including HD maps.
