The Blame Game: Mobileye’s Mathematical Formula for AV Safety Proves Only That Self-Driving Cars Are Not Inevitable Without Some Help From The Law

Autonomous Law
8 min read · Oct 24, 2017


Mathematical models are often used as a drunk person uses a lamppost: for support, rather than illumination.

— Proverb

PART ONE

Last week Mobileye published a white paper that proffers a mathematical method for programming and verifying autonomous vehicle (“AV”) safety. The accompanying summary is entitled “A Plan to Develop Safe Autonomous Vehicles. And Prove It.” The paper does so by defining safety not as the absence of crashes, but as the absence of fault for crashes:

As absolute safety is impossible as long as AVs share the road with human drivers, we start by defining an AV as safe if it doesn’t cause collisions.

Dubbed Responsibility-Sensitive Safety (“RSS”), the proposal is just that: programming that is sensitive not to crash avoidance but to blame avoidance, as determined by application of the Vehicle Code, rights-of-way, and ‘common sense.’ AVs programmed with the RSS formula will never take an action for which they could later be held legally at fault, potentially making manufacturers immune, ab initio, from liability when accidents inevitably occur.

Our solution is to set clear rules for fault in advance, based on a mathematical model. If the rules are predetermined, then the investigation can be very short and based on facts, and responsibility can be determined conclusively. This will … clarify liability risks for consumers and the automotive and insurance industries.
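
To see what “clear rules for fault in advance” looks like in practice, consider the white paper’s central construct: a closed-form “safe longitudinal distance.” A following car that always keeps at least this gap is, by definition, never at fault in a rear-end collision. The sketch below is a simplified rendering of that rule; the parameter names and example numbers are mine, not Mobileye’s, and the white paper adds constants and edge cases omitted here.

```python
# A minimal sketch of an RSS-style predetermined fault rule: the minimum
# gap a following car must keep so that, under the model, it is never
# blamed for a rear-end collision. Names and values are illustrative.

def safe_longitudinal_gap(v_rear, v_front, response_time,
                          a_max_accel, a_min_brake, a_max_brake):
    """Minimum gap (meters) the rear car must maintain to avoid blame.

    Worst case assumed by the rule: during its response time the rear
    car accelerates at a_max_accel, then brakes at only a_min_brake,
    while the front car brakes as hard as a_max_brake.
    """
    v_rear_worst = v_rear + response_time * a_max_accel
    rear_travel = (v_rear * response_time
                   + 0.5 * a_max_accel * response_time ** 2
                   + v_rear_worst ** 2 / (2 * a_min_brake))
    front_travel = v_front ** 2 / (2 * a_max_brake)
    return max(0.0, rear_travel - front_travel)

# Both cars at 25 m/s (about 56 mph), 1-second response time:
print(safe_longitudinal_gap(25, 25, 1.0, 3.5, 4.0, 8.0))  # ~89 meters
```

Note what the formula decides. It does not estimate whether a crash will happen; it determines who will be blamed if one does.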

The RSS proposal not only ignores the vast differences between liability and safety (PART TWO), but also reveals a responsibility-evading approach to auto safety, one that rejects every social, cultural, and human aspect of driving, including reasonableness and good judgment (PART THREE). RSS treats everyone else on the road as competitors for blame instead of partners for safety.

Understanding the law is an essential piece of AV programming, but it is not a valid safety standard, let alone a minimum one. It is not enough merely to avoid causing accidents, not unless AVs are the only cars on the road. AVs must equally be able to avoid collisions that are not their fault. What Mobileye has actually proposed is a liability standard, and one that falls far below current strict product liability standards, and even below reasonable care. (PART FOUR.)

Most significantly, the white paper reveals a crack in the veneer of inevitability that the industry has thus far projected, and the crack goes straight to the core of its viability. Mobileye is afraid that without forward-looking, rules-based safety (rather than historical data- and performance-based safety), AVs will never be able to scale to the tens of millions of cars, and may wind up being “simply a very expensive science experiment.”

It’s afraid!” (from Starship Troopers)

According to the abstract:

[E]ngineering solutions that lead to unleashed costs will not scale to millions of cars, which will push interest in this field into a niche academic corner, and drive the entire field into a “winter of autonomous driving.”

We are not used to hearing dour warnings from car makers and tech companies about AV prospects. What is really going on here?

The industry has invested $80 billion into AV development so far, and no one expects anything but exponential escalation. An arms race for continual performance improvements based on more sensors, more data, and more processing power cannot be won, says Mobileye — at least not by any company other than Waymo.

Mobileye’s white paper thus seeks to unite and rally the industry around its prospective, rules-based liability proposal and to “collaborate with global standards-bodies and regulators” (lawmakers) to redefine safety so that the cost to meet the minimum safety standard is low, and the cost of decisional failure, at least, is zero.

Mobileye said what most already know: the business proposition for AVs simply will not succeed without changes in tort law. And as described below in PART TWO, it will also require human drivers to meet AVs more than halfway.

PART TWO

The Big Picture: Safety, Legality, and Culture Are Not The Same

The driving universe comprises what is safe, what is legal, and what drivers actually do (culture). See Figure 1.

Figure 1: SAFE, LEGAL, and CULTURE as overlapping circles

Safety vs. Legality

The SAFE and LEGAL circles speak for themselves. While the two are undoubtedly correlated (safety is usually the purpose of the law), they diverge in many places. For example, crossing a double-yellow line into oncoming traffic is not LEGAL, but it can be SAFER than staying in one’s lane if it avoids a certain collision. Conversely, braking abruptly for a mouse in your lane might be LEGAL, but it is not SAFE if a following truck doesn’t expect it. Mobileye wants a pure fault-based safety standard (i.e., all actions taken within the LEGAL circle are redefined as SAFE).

We have already seen this construct fail in the automated driving context:

In Tempe, Arizona this March, an Uber set to self-driving mode went blindly into an intersection at nearly 40 mph, where it encountered an oncoming car making an unprotected left. The Uber Volvo was within the speed limit and had the legal right of way, but it still ended up on its side, nearly wrapped around a light standard with two engineers inside. For safety’s sake, a human driver would have used caution approaching the intersection to avoid a potential crash. Responsibility-Sensitive Safety would assume caution only if it might later be found at fault for a crash. With no legal imperative or potential blame to slow its roll, the RSS-vested AV would unwittingly continue full speed ahead into ‘faultless’ danger — just as Uber’s Volvo did.
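
The difference between the two design philosophies can be made concrete. In the hypothetical sketch below, an RSS-style planner moderates its speed only when proceeding could put it at fault, while a safety-first planner slows whenever the estimated collision risk is high, fault or no fault. None of this is Mobileye’s code; every name, number, and threshold is invented for illustration.

```python
# Hypothetical contrast between blame-gated and risk-gated caution.
# All names, numbers, and thresholds are invented for illustration.

def rss_style_speed(legal_limit, would_be_at_fault):
    # Blame-sensitive: slow down only if proceeding could make us at fault.
    return legal_limit * 0.5 if would_be_at_fault else legal_limit

def safety_first_speed(legal_limit, collision_risk, risk_threshold=0.1):
    # Risk-sensitive: slow down whenever estimated collision risk is high,
    # regardless of who would be blamed if a crash occurred.
    return legal_limit * 0.5 if collision_risk > risk_threshold else legal_limit

# Tempe-style scenario: we have the right of way (no fault if hit),
# but an oncoming left-turner creates real collision risk.
print(rss_style_speed(40, would_be_at_fault=False))  # 40 mph: full speed ahead
print(safety_first_speed(40, collision_risk=0.3))    # 20 mph: caution anyway
```

A real planner would blend speed, position, and timing, but the gate is the point: one design asks “could I be blamed?”, the other asks “could someone be hurt?”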

Informed people will not want to get into an RSS-programmed AV. Such vehicles are designed primarily to avoid courtrooms, not collisions.

Culture

The CULTURE circle is simply what drivers do (how they drive). Human drivers usually obey the law and drive safely (the ideal center). Sometimes people stray from what is LEGAL but remain SAFE (e.g., going 60 in a 55 mph zone when all other traffic is doing 70). People who drive in an illegal AND unsafe manner are easy to spot; they are the a--holes, the literal bottom of the CULTURE circle.

All CULTURE is local, and that’s the problem: local doesn’t scale. One cannot program a car to behave as California drivers do and expect that same behavior to be understood in New York, let alone India, even though traffic laws in all three are nearly identical.

Read Alex Roy’s account of driving in India. He observed that the gap between Indian laws and driving culture makes AVs unlikely to succeed there, at least not until the culture, aided by improved infrastructure and predictable enforcement, completely changes.

Knowing When to Move Between SAFE And LEGAL Circles

For simplicity’s sake, Figure 1 depicts the circles as the same size, perfectly round, and relatively equidistant from one another. However, some of the overlapping segments are obviously larger than others. For example, SAFETY may correlate more with CULTURE than with what’s LEGAL.

The more law-conforming a CULTURE (aided and encouraged by compliance-inviting infrastructure and fair enforcement), the larger the ideal center becomes. In India’s case, compared to the U.S., the three circles sit a bit further from one another, creating a smaller ideal center. Its fatality statistics reflect the increased danger.

No single circle is the most desirable for AVs to inhabit exclusively:

  • An AV that performs only within the LEGAL circle is not always SAFE, partly because it does not always do what people expect. For example, snapping abruptly into a left-turn-only lane so as not to cross any painted lines is LEGAL, but it makes for a rough ride, and it is unSAFE if the car behind drifts across the painted island bump-out, as drivers often do.
  • An AV that performs only within the SAFE circle would be so cautious at avoiding accidents that, in some circumstances, it would leave the roadway entirely (ILLEGAL) just to avoid an infinitesimally higher chance of collision with a car coming from the other direction. Safe driving means reading subtle communications from others, understanding the context, and knowing the right time and place to extend trust.
  • And finally, an AV that drives like people do … is what gets 1.2 million people killed every year worldwide. If AVs did only what people do, as safely as they do it (think CULTURE, minus the a--holes), each would have to be programmed for its own culture, which varies by country, state, county, and city. As a business model, that does not scale.

Being grounded in all three circles, and knowing when to step outside the lines of one or more at just the right moment, is called judgment. In law, we use the reasonable person standard to decide if someone used it correctly. It is entirely contextual, and right now computers do not have it. For a car to drive itself in a truly safe, legal, and culturally-expected way, we will need nothing short of an AI breakthrough.

In the meantime, we have to determine just how much people will alter their behavior (CULTURE) to accommodate AVs.

The Great Negotiation

We are living in a period of Great Negotiation, in which humans have to decide just how much they will alter their own behavior to accommodate automation. We do it for phone trees and robo-calls (responding at the prompts with an enunciated, menu-conformed exclamation). People do it in Ann Arbor, Michigan for automated pizza delivery cars (going out to the car in the street, instead of a driver coming to the door). We’ll all soon go to the lobbies of our apartment buildings to collect our Amazon Locker packages instead of having them brought to our doors.

In every instance, automation makes things easier for the automator while making them less convenient for the recipient of the service. By most accounts, the Great Negotiation has been one-sided so far. But no other ‘last-mile’ demand is anything like autonomous driving. Placed on the table by Mobileye for our consideration are 1) a waiver of reasonableness, 2) a constant threat of death, and 3) complete immunity from liability. Will we jump on it?

The Real Danger Is That Mobileye’s Proposal Could Become Law Nationwide

While we might seek comfort in the fact that Mobileye’s liability standard is not theirs to choose, AV manufacturers wield formidable lobbying power, as demonstrated by the recent House and Senate legislation, which clears away state laws and federal safety standards for AVs the same way a fire clears a Brazilian rainforest for crops.

The scariest thing about Mobileye’s proposal is that it has a good chance of becoming national law. In Tennessee, Arizona, Florida, and other early-adopting states, AVs are already authorized to drive on state roads so long as they comply with their state’s respective Vehicle Codes. That is a de facto safety standard, though not a liability standard. Not yet, at least.

In reality, L4 AVs will be deployed long before they are able to exercise reasonable judgment. But human drivers are adaptable in a way computers are not. The most likely outcome is that AVs will be programmed exactly as Mobileye suggests, and our driving CULTURE will learn to anticipate and adapt to AV driving behavior, just as drivers adapt to all the unique modes, varying skill levels, and unanticipated obstructions they find on the road daily.

Ultimately, Mobileye’s blame-based solution is short-sighted. People who want to be AV passengers (fewer than half of the public at the moment) won’t be satisfied with winning compensation for their injuries in the courtroom; they want to not be injured in the first place.

Parts Three and Four will be posted next week…
