NHTSA saved children from going to school in autonomous shuttles and leaves them in danger everywhere else

Michael DeKort
Published in Predict
4 min read · Oct 24, 2018

This week NHTSA started to do the right thing. They determined that EasyMile’s school shuttle in the Babcock Ranch community in Florida was unsafe. “Innovation must not come at the risk of public safety,” deputy NHTSA administrator Heidi King said in a statement. “Using a non-compliant test vehicle to transport children is irresponsible, inappropriate and in direct violation of the terms of Transdev’s approved test project.”

This of course raises the larger questions. Why is it not “irresponsible” or “inappropriate” for those very same children to be in or around ANY autonomous vehicle in development, especially one carrying them commercially, as Waymo is doing? Or for anyone else to be exposed to the same risk? (Voyage does exactly this in retirement communities.) What this all comes down to is whether NHTSA understands that handover, the act of the vehicle giving control back to the human when it deems that necessary, cannot actually be made safe in critical scenarios. NASA, Missy Cummings and several AV makers, OEMs and leaders in the space, including Waymo, Ford, Volvo and Chris Urmson, have said handover or L3 is dangerous and should be skipped. (This despite ALL of them still using the practice. The reasons for that follow below.) NASA and several studies have clearly shown that handover cannot be made safe in critical scenarios because the time needed to regain proper situational awareness, to do the right thing the right way after the vehicle hands over control, cannot be provided, no matter what monitoring and notification system is used. NHTSA may have known this (or knew it and ignored it) in 2015, when its L3 report said these systems could be made safe in critical scenarios. The problem, of course, was that the study ignored situational awareness issues on purpose.

Beyond the use of handover in L3 vehicles already on the road, there is the even more dangerous use of it to develop these systems. (Which includes that shuttle in Florida.) It is a myth that public shadow driving is a viable method for creating self-driving vehicles. That process can never come close to creating a driverless vehicle. You cannot drive the one trillion miles, or spend the over $300B it would take, to do so. You also cannot run thousands of accident cases thousands of times each. Trying would of course cause thousands of needless deaths. This situation is so onerous that Mark Rosekind, the former administrator of NHTSA, who was in charge during that 2015 L3 study, actually said the deaths that have occurred, and the ones that will occur, are for the greater good. He also said there wasn’t enough data showing handover was dangerous. How convenient. A shame his own study skipped looking into situational awareness issues. And isn’t it ethical and professional of him to tell us we need to be guinea pigs, a means to an end or a necessary evil? All for a practice that is doing the exact opposite of the intended goal. The lives a truly autonomous vehicle would save will still never be saved, because this process will never create one. And it will actually take thousands of lives needlessly as these systems fail. (This summer I attended a conference where the current head of the NTSB Highway Safety division, Robert Molloy, admitted, after some prodding, that there is a period of time in which proper situational awareness cannot be provided, and that simulation should be used far more often than shadow driving.)
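The scale argument above is easy to sanity-check. A back-of-envelope sketch using the article’s own figures (one trillion miles, roughly $300B); the fleet size and annual per-car mileage below are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope check of the public shadow-driving mileage claim.
# MILES_NEEDED and TOTAL_COST are the article's figures; the fleet
# parameters are hypothetical assumptions for illustration only.

MILES_NEEDED = 1_000_000_000_000   # ~1 trillion test miles (article's figure)
TOTAL_COST = 300_000_000_000       # ~$300B (article's figure)

# Implied cost per test mile
cost_per_mile = TOTAL_COST / MILES_NEEDED
print(f"Implied cost per mile: ${cost_per_mile:.2f}")

# Assume a hypothetical fleet of 1,000 test vehicles, each driving
# 100,000 miles per year (far above a typical car's annual mileage).
fleet_size = 1_000
miles_per_car_per_year = 100_000
years = MILES_NEEDED / (fleet_size * miles_per_car_per_year)
print(f"Years to accumulate the miles: {years:,.0f}")
```

Even under these generous assumptions, the fleet would need thousands of years to accumulate the mileage, which is the core of the argument that brute-force public driving cannot get there.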

There is another major enabling factor, and the reason NHTSA has been so slow to act. A second myth is that industry will ever do the right things when allowed to run free, restricted only by “guidance” or “frameworks” (with many of them voluntary). Please, someone tell me when this approach has ever resulted in actual best practices. Medicine? Nuclear power? Cybersecurity? Air travel? All this process does is enable the lowest of practices and provide the legal air cover to use them. DoT’s statement on this was that detailed criteria or testing should not be written yet because the technology hasn’t sorted itself out. What does the technology have to do with passing a driver’s test? Or a vision (sensor) test? In most cases how something is done is not the issue; what matters is what you are able to do. (The DoT made this statement after the GAO admonished it for not creating test procedures.) Of course, the clear precedent is air travel, where the same pattern of events occurred. Industry was left to create the Wild West, tragedies occurred, the press pounced, hearings were held, lawsuits ensued, and the FAA was born. From that the FAA created the processes, practices and testing criteria that led to 6.4-sigma air safety.

Now that NHTSA has taken this action, it is incumbent upon them to take further action everywhere handover exists in FMCSA vehicles, especially vehicles in development. (Which is all of them.) Beyond that, I would find it hard to believe they can or should stop there. Why should a child be at risk in or near a Tesla, Cruise or any other passenger autonomous vehicle that is in development or that uses L3/handover?

Finally, there is a much better way to do this. One that takes this from impossible to possible. The solution is to use aerospace/DoD/FAA-level simulation, safety and engineering practices. There should be a simple progression. You must prove why you cannot do something in simulation. Then why you cannot do it on a track. And finally, when you are allowed on the road, which would be less than 1% of the time (there will be millions of scenarios, most of them to prove the AI isn’t confused or misidentifying), it is done in a structured manner. Not a free-for-all.

Please find more in my articles below.

SAE Autonomous Vehicle Engineering Magazine, End Public Shadow Driving: https://www.nxtbook.com/nxtbooks/sae/ave_201901/index.php

Common Misconceptions about Aerospace/DoD/FAA Simulation for Autonomous Vehicles

https://medium.com/@imispgh/common-misconceptions-about-aerospace-dod-faa-simulation-for-autonomous-vehicles-2b3ad84b0aa1

The Hype of Geofencing for Autonomous Vehicles https://medium.com/@imispgh/the-hype-of-geofencing-for-autonomous-vehicles-bd964cb14d16

The Autonomous Vehicle Podcast — Featured Guest

https://www.autonomousvehiclespodcast.com/


Non-Tribal Truth Seeker-IEEE Barus Ethics Award/9–11 Whistleblower-Aerospace/DoD Systems Engineer/Member SAE Autonomy and eVTOL development V&V & Simulation