Self-driving cars: who’s to blame for accidents?

Enrique Dans

Joshua Brown’s death in a Tesla Model S that hit a truck crossing its path is tragic: an unfortunate and unlikely confluence of events. Everything indicates that the accident happened because Brown failed to take the necessary precautions. He was apparently watching a movie; he had previously filmed himself using the vehicle’s self-drive mode in unsafe ways, and he had ignored warnings that he should keep his hands on the wheel at all times.

The combination of circumstances was highly unusual: a long truck crossing a motorway, an autopilot unable to detect it because of its physical characteristics, and a driver who was simply not paying attention. The algorithms that manage the self-driving system will now have incorporated the data from the accident, making a repeat of this kind of event highly unlikely. The problem here is not a technological one, but the failure of a human to act responsibly.

Brown’s death is tragic, but it needs to be seen in context. Many thousands of people will have died on the roads since then as a result of human error. Self-driving cars will save many of those lives in the future, and calling a halt to their development because somebody didn’t follow the guidelines recommended by the manufacturer would be highly irresponsible. I repeat: the problem is not the autopilot but human nature. If the US National Highway Traffic Safety Administration decides that Tesla must disconnect its autopilot systems, the result will be more deaths on the road.

In short, the problem is one of human nature: Brown failed to understand the difference between a driving aid that still required him to pay attention to the road and a fully self-driving system that would have allowed him to relax and do other things while the car did the work.

Which explains Google’s approach: remove the human from the equation completely by skipping phases 2 and 3 and going straight to phase 4, a fully self-driving car. It’s certainly one way of moving forward, but that doesn’t mean it isn’t worth taking the time to fully explain to people what they can expect from a Tesla vehicle. The new version of its autopilot is on the way and will doubtless make the best use of the valuable data collected over the course of millions of kilometers.

Should Tesla continue testing its autopilot system while there is a risk of another accident? Of course it should, because this is the best way to reduce the likelihood of other accidents. This is how science and technology work: things are tested and tried out, the results are assessed, errors are corrected, all within reasonable limits of caution. The statistics bear this out.

Self-driving vehicles will, in the long run, save lives. Halting their development will lead to more road deaths. We have to understand that self-driving vehicles are safer than those driven by humans.

Other recent accidents, such as Albert Scaglione’s, seem to have been caused by imprudence and speeding rather than the autopilot; he will likely face charges. A previous accident in which a Tesla crashed into a wall seems to have been caused by a pedal mix-up and sharp acceleration, again rather than the autopilot. There were no major injuries.

As with the early phases of any technology, we can expect more cases where humans blame self-driving systems instead of accepting their own responsibility. But the truth will out, as the data collected by these vehicles shows. The weak link here is not technology, but people, as decades of road accidents prove.

Any obstacles we put in the way of developing self-driving cars mean we will just take longer to remove clumsy humans from the equation. And it is this, not an update of Tesla’s autopilot, that would be the irresponsible thing to do.

(In Spanish, here)

Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)