Pittsburgh’s Autonomous Vehicle Executive Order — The Devil is in the Details
Recently, Pittsburgh Mayor Bill Peduto released an executive order governing the development and testing of autonomous vehicles in the city.
While some good can come of this, the devil (and the number of future, unnecessary casualties caused by public safety driving) is in the details. Without specifics, and without the use of proper simulation for 99.9% of this work, citizens of Pittsburgh will be killed needlessly, especially when the simple, hyped scenarios these companies currently run evolve into complex, dangerous, and actual accident scenarios: thousands of them, each of which will have to be run thousands of times.
Most AV makers use public shadow, safety, or remote driving to develop and test these systems. There are two modes: one where the driver retains steering control, usually to collect data or to show the system how to drive when using imitation learning, and one where the system has control and drives while the human monitors it. Dominant use of these methods can never get anyone remotely close to L4; you simply cannot stumble and re-stumble on enough scenarios. In addition, there are two major safety issues. The first is handover, where the human has to take back steering when needed, which cannot be made safe in critical scenarios. The second is the development and testing of actual accident scenarios.
The proper approach is aerospace/DoD/FAA-level simulation for over 99.9% of the development and testing, informed and validated by test tracks and very limited, controlled safety driving. (Gathering data while the driver drives is encouraged to aid in this.) Without this, thousands of people will be harmed or killed in Pittsburgh as these companies move into more complex scenarios and into training and testing thousands of accident scenarios thousands of times over. Nothing can be perfect, but the industry can get to Six Sigma by using proper simulation and test tracks to reduce safety driving to less than 1% of the effort.
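To make the Six Sigma target concrete, here is a quick back-of-the-envelope calculation (assuming the conventional process-quality definition, which includes a 1.5-sigma long-term drift) of the failure rate it implies:

```python
import math

def normal_tail(z):
    # P(Z > z) for a standard normal variable, via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2))

# Conventional Six Sigma allows for a 1.5-sigma long-term drift,
# so the effective tail sits at 6 - 1.5 = 4.5 sigma.
dpmo = normal_tail(6.0 - 1.5) * 1_000_000  # defects per million opportunities

print(f"{dpmo:.1f} defects per million")  # roughly 3.4 per million
```

In other words, a Six Sigma process fails about 3.4 times per million opportunities, which is the kind of rate that simply cannot be demonstrated by stumbling on scenarios in public driving.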
Some of the high-level parts of the order are good, but again, the devil is in the details.
2 A/B — Identify place and manner of testing
As in all such cases, the danger exists when the driver cedes steering control to the system in order to evaluate it. (Where the driver is collecting data and retains steering control, there is no safety issue.) Before any public safety driving is conducted, a progression of activities should be completed first. Data showing the exact test scenarios, and the system's performance, in the exact test locations should be provided. The progression should be simulation, then test tracks, then the real world. If the simulation is the right kind, less than 1% of testing should have to occur in the real world; if more does, people's lives are at risk for no reason. (Most AV makers use inferior simulation, either home-grown or from most of the companies in this industry. These products lack the real-time and model fidelity required, the models being the environment, vehicles, tires, roads, and sensors. General, non-specific models, or specific but not highly detailed models, are NOT good enough. What is needed is technology from aerospace/DoD/FAA to achieve this precision.)
2D — Stipulate why public testing is done
Notice this is testing, not data gathering or imitation learning. Where testing involves safety driving, the AV maker should explain EXACTLY why it has to be done in the public domain. Remember, less than 1% should have to be if proper simulation is used. If they say this is not the case, they neither have nor understand the right approach and technology for this. (Most have only IT or gaming experience and will not know what they do not know.)
2F — Data
Full scenario data should be provided, both before and after, to include:
Exact location
Possible fixed and moving objects
Expected/actual track of the Ego vehicle, with speed
Verification of why it is not possible to do this in simulation
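The scenario data items above can be sketched as a single disclosure record. This is a hypothetical illustration only; the field names are my own, not from any actual reporting standard:

```python
from dataclasses import dataclass, field

# Hypothetical per-scenario disclosure record covering the items listed above.
# Field names are illustrative assumptions, not an existing schema.
@dataclass
class ScenarioRecord:
    location: str                                  # exact location of the test
    fixed_objects: list = field(default_factory=list)    # roads, curbs, lights, etc.
    moving_objects: list = field(default_factory=list)   # people, vehicles, bikes, etc.
    expected_track: list = field(default_factory=list)   # (x, y, speed) points for the Ego vehicle
    actual_track: list = field(default_factory=list)     # (x, y, speed) points actually driven
    simulation_infeasible_reason: str = ""         # empty means it should have been simulated

record = ScenarioRecord(
    location="Forbes Ave & Craig St",              # made-up example location
    fixed_objects=["curb", "traffic light"],
    moving_objects=["pedestrian", "cyclist"],
)
print(record.location)
```

A regulator could require one such record per public test scenario, with any record carrying an empty `simulation_infeasible_reason` rejected as a candidate for public-road testing.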
Simulation — Items to disclose or prove:
- Models — show performance curves or a comparison to the real world wherever the information is required by any sensor or the human
- Environment — exact location, with the weather and time-of-day changes that are relevant
- Fixed objects — roads, lines, curbs, lights, construction items, etc.
- Moving objects — need to prove the AI/ML will not be confused; several million combinations should be proven. People, animals, other vehicles, bikes, and miscellaneous items
Need to show the sensor/perception system recognizes what is actually there. Since there are cases where these systems get this wrong, it is crucial that this capability be proven to a very high degree for any given area or geofence
Need to show massive amounts of variation (color, size, location, quantity, etc.) in various times of day and weather
Accident Cases Tested in Simulation — need to show that some Sigma level of the accident scenarios that could occur in that location has been successfully tested in simulation or on test tracks. Simulation is preferred, since most objects and their variations will never be created for test tracks. Test tracks and limited public safety driving should be used to inform and validate the simulation.
Simulation verification data from test tracks or the real world (limited need)
Applicable laws, regulations and Social Cues/Norms/Ethics
Disengagement data with root cause linked to the scenario
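The "several million combinations" figure for moving objects above is easy to sanity-check: even modest variation along a handful of axes multiplies out quickly. The per-axis counts below are illustrative assumptions, not a real test matrix:

```python
# Illustrative counts per variation axis; a real program would use far more.
axes = {
    "object type": 20,    # people, animals, other vehicles, bikes, misc items
    "color": 12,
    "size": 8,
    "position": 50,
    "time of day": 6,
    "weather": 5,
}

total = 1
for name, count in axes.items():
    total *= count

print(f"{total:,} combinations")  # 20*12*8*50*6*5 = 2,880,000
```

Nearly three million combinations from six small axes, which is why this coverage is only achievable in simulation, never on public roads.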
Note — If anyone tells you all of this is not possible, especially doing 99.9% of it in the proper simulation system, they are wrong. Odds are they have never seen aerospace/DoD/FAA simulation and sensor technology. If they tell you it will take too much time and money to create, it is a fraction of the cost of safety driving, which would literally go on forever at a cost of over $300B per company. This entire system can be built for LESS than Uber spends in one year on its current reckless and untenable approach.
SAE Autonomous Vehicle Engineering Magazine
End Public Shadow Driving
Common Misconceptions about Aerospace/DoD/FAA Simulation for Autonomous Vehicles