Uber, Volvo and fatal accidents involving self-driving cars

Enrique Dans

--

Yesterday, at around 10 pm local time in the Arizona town of Tempe, the first fatal crash involving an autonomous Volvo XC90 being road-tested by Uber took place, as initially reported by a local TV station. According to the reports I have read, a 49-year-old woman, Elaine Herzberg, was walking her bicycle across a road outside a crosswalk when she was hit by the vehicle. The driver, Rafaela Vasquez, aged 44, was carrying no passengers and was behind the wheel with the vehicle in autonomous mode; neither her reaction nor the car's sensors were sufficient to prevent the fatal collision. Uber is cooperating with the Tempe police department to establish the facts, has expressed its regret for the death, and has temporarily suspended road tests. In May 2016, Joshua Brown was killed in his Tesla while using its Autopilot driver-assistance system, but after an extensive investigation the company was found not to be responsible for the accident.

What conclusions should we draw from this event? Firstly, while the death is deeply regrettable, we should not lose sight of the fact that the accident rate for autonomous vehicles is very low. During the time that different companies have been road-testing self-driving cars, there have been far fewer accidents than with vehicles driven by people. All technology produces accidents; all that can reasonably be asked of a new technology, as it is developed and put into operation, is that it causes fewer accidents than the technology it replaces, as is the case here. Beyond accepting responsibility, the appropriate lessons must be learned from each accident to prevent it from happening again.

To argue that autonomous vehicles are a danger because one has been involved in an accident makes no sense, and neither would imposing stricter controls on them: no one is interested in relaxing safety measures, and this accident was an exception. What's more, the sensors on an autonomous vehicle provide far more information about road conditions than a person can gather, but under certain circumstances even that does not offer total protection against accidents.

Nevertheless, there are indications that Uber, whose current astronomical valuation and future depend on self-driving vehicles, may be cutting corners in its haste to get self-driving vehicles on the roads. Success depends on algorithms progressively accumulating road experience, but Uber may be tempted to put vehicles on the road that have not yet been exposed to situations that, while highly unlikely, are still possible under normal conditions. The process of obtaining safe and reliable vehicles is long, and there are no shortcuts. Uber’s autonomous vehicles have been involved in more accidents than any other player in the field, which is not good news.

As for Volvo, which makes the XC90 involved in the accident, it has no responsibility: it provides the vehicles being road tested by Uber, but not the software used for autonomous driving. The Swedish company has said it will accept responsibility for any accident involving one of its vehicles when in autonomous mode, but this only applies when they are using its own software, as is the case with the road tests currently underway in Gothenburg. So far, the spotlight has been on Uber, as developer of the software that controlled the vehicle, and not on the manufacturer of the vehicle.

The decision by the governor of Arizona to make his state the first to allow fully autonomous driving is not in question either: lives have not been endangered since Doug Ducey signed an executive order in 2015 supporting the testing and operation of self-driving vehicles in the Grand Canyon State, because autonomous vehicles continue to prove themselves safer than those driven by humans. With its good climate and network of well-maintained roads, Arizona is an ideal place to develop this type of technology, which, far from turning its residents into guinea pigs, aims to protect them by providing greater safety.

As I said, there are inherent risks with all technology. In the case of a motorized machine weighing almost two tons that moves people around, there will be accidents: that does not mean we should abandon the development of that technology, but rather that we should pursue it as safely as possible. An accident involving this particular technology does not make it bad or unviable, nor does it mean that it should be halted. Autonomous vehicles are the future: they were before yesterday's accident, and they still are. Let's keep this in perspective and hope that common sense prevails.

(In Spanish, here)

--

Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)