IMAGE: Velodyne LiDAR

Self-driving cars: accidents, safety measures, and responsibility

Enrique Dans
Mar 26, 2018


The tragic death last week of Elaine Herzberg, who was run over and killed by one of the 200 self-driving Volvo XC90s being road tested by Uber in Arizona, requires us to consider the issue of responsibility as we continue to develop important new technologies.

While we are still awaiting the conclusions of the investigation by the National Transportation Safety Board, everything seems to indicate that Uber has handled its autonomous driving program poorly. As a result, as Brad Templeton notes, things look bad for the company. No one denies that the victim should not have been crossing a main road in a poorly lit spot at night, but as more experts analyze the abundant video evidence — let’s not forget that we are talking about a completely sensorized vehicle — it seems that Uber’s vehicle did nothing to limit the impact on the victim, and that the company had been cutting corners with its road testing program so as to reduce the time needed to launch a commercial driverless transport service.

The first suspicions surround the vehicle’s failure to brake. Velodyne LiDAR, the manufacturer of the vehicle’s pulsed-laser radar (LiDAR), which split from its parent company, Velodyne, in 2016 and is involved in more than 25 autonomous driving development programs with companies such as Volvo, Ford, Mercedes-Benz, Tencent, DJI, Baidu, TomTom, Here, Bing and SAIC, says it is baffled by the accident data released so far and that its components are not to blame. In an email, Marta Thoma Hall, the company’s President and Chief Business Development Officer, defended the company, saying:

We are as baffled as anyone else (…) Certainly, our LiDAR is capable of clearly imaging Elaine and her bicycle in this situation. However, our LiDAR does not make the decision to put on the brakes or get out of its way (…) it is up to the rest of the system to interpret and use the data to make decisions. We do not know how the Uber system of decision-making works.

The data Uber has made available about its autonomous driving program suggest it is lagging far behind competitors such as Waymo. In November 2017, Waymo’s vehicles were already capable of traveling almost 50,000 kilometers without any human intervention. Cruise (acquired by GM in 2016) said in October of the same year that it had reduced driver interventions to only three in almost 45,000 kilometers. But Uber’s safety drivers still need to take the controls of its vehicles around once every 20 kilometers, a figure that could arguably justify not yet allowing its vehicles to be tested under real road conditions. In March 2017, Uber required driver interventions approximately every 1.3 kilometers. And Uber’s response? That the number of interventions per distance traveled is not a good indicator of the safety of an autonomous vehicle. It may well be that this metric doesn’t capture all the information… but it is perhaps significant that this particular metric puts Uber in a poor light.
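To put those figures on a common scale, here is a minimal back-of-the-envelope sketch in Python, using only the approximate numbers cited above (these are not official disengagement reports, and the program labels are illustrative):

# Rough comparison of kilometers per safety-driver intervention,
# using the approximate figures cited in this article (not official data).
figures = {
    "Waymo (Nov 2017)": (50_000, 1),   # ~50,000 km without intervention
    "Cruise (Oct 2017)": (45_000, 3),  # 3 interventions in ~45,000 km
    "Uber (Mar 2018)": (20, 1),        # ~1 intervention every 20 km
    "Uber (Mar 2017)": (1.3, 1),       # ~1 intervention every 1.3 km
}

for program, (km, interventions) in figures.items():
    print(f"{program}: ~{km / interventions:,.1f} km per intervention")

However rough, the gap between roughly 50,000 kilometers per intervention and roughly 20 helps explain why this is the metric Uber would prefer not to be judged by.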

That said, the presence of a safety driver should significantly reduce the possibility of accidents: all the companies currently road testing autonomous vehicles appear to use drivers who have undergone three-week training courses that include a manual driving test, a written evaluation, and classroom and on-road training, and whose selection criteria value reaction time, military or delivery experience, the ability to type more than 50 words per minute, and the ability to carry out basic vehicle checks. A safety driver is typically paid around $23 an hour by Cruise or $20 by Waymo, but Uber doesn’t say how much it pays. The company uses about four hundred drivers in the two hundred vehicles it has in Pittsburgh, Phoenix, Toronto, Tempe and San Francisco. Uber has come under fire for employing Rafaela Vasquez, the driver involved in the accident, because she had served prison time for theft and falsifying documents. That said, her offenses were committed in 2000 and her record has been clean since; what’s more, the company has a policy of offering former prisoners a second chance, as long as they haven’t committed a crime in the previous seven years.

Why is Uber in such a hurry to get its self-driving vehicles on the road, even if they may not be ready? Firstly, because its astronomical valuation is based on being able to eliminate drivers from the cost of operating its vehicles. Secondly, because of recent changes at the top: everything indicates that Dara Khosrowshahi, the CEO who succeeded Travis Kalanick, Uber’s controversial founder, had seriously considered canceling the autonomous driving program when he took over, but was convinced by the board to continue with it. The new CEO had planned a visit to Arizona in April, and those in charge of the program wanted to offer him an autonomous driving experience that included exceptional situations (so-called edge cases, which Khosrowshahi himself has identified as the real challenge), which could have been a factor in speeding up the tests.

The company has increasingly focused on accumulating miles rather than on tests involving passengers, changing safety drivers’ priorities, merging the responsibilities of test drivers with those of transporting passengers, and putting just one driver in its vehicles instead of two. As a result, after taking a year to accumulate its first million miles by September 2017, it took just 100 days to reach the second million and aimed to reach the third million in an even shorter period, in a bid to provide its algorithms with more data to learn from. Other companies, such as Waymo, seem to have been much more rigorous about training their algorithms in situations that do not put the public at risk, using video games and specialized facilities instead. Waymo’s CEO, John Krafcik, has pointed to Uber’s responsibility by saying that his company’s vehicles would have been perfectly able to handle a similar situation.

Is Uber guilty of recklessness? Speculating and drawing conclusions before the NTSB has finished its investigation is not only reckless, but unnecessary: there will be no shortage of data with which to analyze what happened, why, and where responsibility lies. When assessing these types of issues, it is important to separate the development of a particular technology from how it is used by private companies, which may act irresponsibly.

There is little doubt that autonomous driving is the future: more and more countries are authorizing road testing, which in the main is being carried out by companies able to demonstrate that these tests can be conducted in a reasonably safe way. Right now, the important thing is to avoid overreacting, and to remember that this is a technology with enormous potential to reduce accidents, provide mobility for all, and cut the amount of traffic in our cities, and that it must therefore continue to be developed. The way to save lives on our roads is not by hindering the progress of this technology, but by introducing it responsibly. As is so often the case, any danger lies not in the technology itself, but in the way it is used.

(In Spanish, here)

Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)