Why Self-Driving Cars Are Not Safe

Marianne Bellotti
Published in Software Safety
Jul 8, 2022 · 9 min read

As long as humans can blame the machine, they will use autonomous features incorrectly.

The other day I watched The New York Times' new documentary on Tesla's safety problems and Elon Musk's role in exacerbating them. I knew about Autopilot and have ridden in a Tesla with Autopilot engaged (though with the driver fully alert and hands on the wheel), but the documentary made it clear to me that the way Musk defines the problem he's solving makes a safe solution impossible.

There is no advancement in machine learning or AI that will make Teslas both autonomous and safe. Not next year, not in two years, not in fifty years. The problem that will keep a safe, autonomous car just out of Musk's reach isn't about accuracy or innovation … it's that Tesla has the delegation pattern wrong.

The Fallacy of Human Error

Self-driving cars are sociotechnical systems. The bits doing the calculation are interacting with and responding to living creatures, creating a feedback loop: the machine alters its calculation based on input from the environment, and the environment responds to the behavior that calculation triggers. Musk focuses only on the technical part and ignores the partnership between the car and the person.
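To make that loop concrete, here is a minimal sketch of the dynamic (every function, threshold, and number is hypothetical, invented for illustration and not anything from Tesla's actual software): a controller that slows down as the sensed gap to a pedestrian shrinks, paired with a pedestrian who steps closer to a car that reliably yields.

```python
# Toy sociotechnical feedback loop: the machine reacts to the environment,
# and the environment (a pedestrian) reacts back to the machine's behavior.
# All names, thresholds, and units here are hypothetical illustrations.

def controller_speed(gap_m: float) -> float:
    """Machine side: drive slower as the sensed gap shrinks (m/s, clamped)."""
    return max(0.0, min(15.0, gap_m - 5.0))

def pedestrian_gap(prev_gap_m: float, car_speed: float) -> float:
    """Human side: a car that visibly yields invites people to cut it closer."""
    if car_speed < 2.0:           # the car looks deferential...
        return prev_gap_m - 1.0   # ...so the pedestrian steps in closer
    return prev_gap_m + 0.5       # a faster car keeps people back

gap = 6.0  # initial sensed gap in meters
for step in range(8):
    speed = controller_speed(gap)                 # machine responds to environment
    gap = max(0.0, pedestrian_gap(gap, speed))    # environment responds to machine
    print(f"step {step}: speed={speed:4.1f} m/s, gap={gap:4.1f} m")
```

Run it and the gap collapses toward zero: the controller's perfectly reasonable rule (yield when close) reshapes the pedestrian's behavior, which in turn erodes the safety margin the rule was designed around. Each half of the system is correct in isolation; the danger lives in the loop between them, which is exactly what an analysis of only the technical part misses.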
