The first time in history that a car killed a pedestrian was in 1896. The vehicle belonged to the Anglo-French Motor Car Company and, at the time of the accident, was moving at 4 mph (not a typo — 4 mph, or 6.4 km/h) during demonstration rides in London.
The accident did not cause much public outrage, and the case was closed with the hope that “such a thing would never happen again.”
Almost 122 years later, the first fatal accident involving an autonomous car was recorded, with the vehicle moving at 40 mph (64.4 km/h).
The media severely attacked autonomous vehicle companies, claiming that self-driving cars cannot match the safety of a conventional vehicle, let alone reduce (if not eliminate altogether) the grim statistics of fatal accidents on the roads.
Let’s dig into the problem and see if this fatal accident could have been avoided and if we can solve the issue for the future.
Before going into the details of the actual accident, let’s first focus on the technology behind autonomous vehicles.
How do self-driving cars actually work?
Here is a good and clear explanation, published on Medium, of the technology behind self-driving cars. There are five main components:
1. Computer Vision — the computer scans the lanes and the path ahead and processes huge amounts of data in order to detect the other vehicles on the road. This is done with deep learning, whose task is to learn what other cars look like so that the system can subsequently “see” them on the road.
2. Sensor Fusion — once the vehicle sees the other cars, it also needs to map them onto the road — in other words, to build an augmented representation of the distances, the velocities of the other vehicles and objects, and so on. This happens via various radars and sensors. In order to detect all the other objects on the road and understand how they move, autonomous vehicles use LIDAR technology.
This is a method that measures the distance to an object by sending pulsed laser light toward the object and measuring the reflected pulses with a sensor. The time taken for the reflected signals to return, together with their wavelengths, is then used to create a 3-D representation of the object. Equipped with such sensors, vehicles are able to create a 3-D map of their surroundings.
3. Localization — once we have a clear picture of the local environment, it is important to localize the self-driving vehicle with very high accuracy. GPS is typically accurate to within +/- 2 meters, but when it comes to driving and following lanes, those 2 meters can be fatal. Imagine, for instance, that instead of following the lane, the vehicle drifts onto the sidewalk, putting pedestrians at risk. Accurate localization is crucial for self-driving vehicles.
4. Path Planning — the actual planning of the vehicle’s path toward the chosen destination. Path planning requires the vehicle to make hundreds of decisions throughout the itinerary — it has to take into account other vehicles’ trajectories, how they move from lane to lane, whether it has to slow down or change lanes, and so on.
5. Control — the final step is the operational implementation of the steps above or, in other words, how to turn the steering wheel and how to apply the brakes.
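The lidar ranging described in point 2 boils down to a simple time-of-flight calculation: distance is half the pulse’s round-trip time multiplied by the speed of light. A minimal sketch in Python (the pulse timing value is illustrative):

```python
# Minimal sketch of lidar time-of-flight ranging. The distance to an
# object is half the round-trip time of the laser pulse times the
# speed of light; the timing value below is purely illustrative.

C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance in meters derived from a pulse's round-trip time."""
    return C * round_trip_seconds / 2.0

# A pulse returning after roughly 667 nanoseconds hit something ~100 m away.
print(round(tof_distance(667e-9), 1))  # → 100.0
```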
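The localization concern in point 3 is easy to quantify with back-of-the-envelope arithmetic. Assuming a typical highway lane width of about 3.7 m (an assumed figure, not one from the article), a 2 m GPS error is more than half a lane:

```python
# Why +/- 2 m GPS error is unacceptable for lane keeping.
# The lane width is an assumed typical value, not a quoted spec.

LANE_WIDTH_M = 3.7
GPS_ERROR_M = 2.0

# If the car believes it is centered in its lane, the worst-case GPS
# fix can place it this far beyond the lane edge:
overshoot = GPS_ERROR_M - LANE_WIDTH_M / 2
print(round(overshoot, 2))  # → 0.15
```

Even with these generous assumptions, the worst-case fix puts the vehicle outside its lane entirely, which is why self-driving stacks fuse GPS with other localization sources.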
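Putting the five components together, one step of the control loop can be sketched roughly as follows. Every function, threshold, and data structure here is a hypothetical toy, meant only to show how the stages feed into one another, not how a real stack works:

```python
# Toy sketch of one self-driving control step. All names and numbers
# are invented for illustration; a real pipeline is vastly more complex.

def perceive(frame):            # 1. Computer Vision: pick out vehicles
    return [obj for obj in frame if obj["kind"] == "vehicle"]

def fuse(detections, lidar):    # 2. Sensor Fusion: attach lidar distances
    return [{**d, "distance_m": lidar[d["id"]]} for d in detections]

def localize(gps_fix):          # 3. Localization (toy: just trust GPS)
    return gps_fix

def plan(position, tracked):    # 4. Path Planning: brake if anything is near
    return "brake" if any(t["distance_m"] < 30 for t in tracked) else "cruise"

def control(decision):          # 5. Control: map the decision to an actuation
    return {"brake": 0.8, "cruise": 0.0}[decision]

frame = [{"id": 1, "kind": "vehicle"}, {"id": 2, "kind": "sign"}]
lidar = {1: 22.5}  # object 1 is 22.5 m away
action = control(plan(localize((33.43, -111.94)), fuse(perceive(frame), lidar)))
print(action)  # → 0.8 (brake pressure, since the vehicle ahead is within 30 m)
```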
What actually happened during the accident?
Now that you have the big picture on how self-driving cars work, let’s get back to Uber’s accident and have a look at the video captured by the vehicle’s dash camera:
Looking at the video, several points are worth noting:
First, self-driving cars “see” with lidar and radar technology, which is not affected by darkness — it should work as well at night as during the day. In this regard, whether a human driver would have seen the woman with the bicycle has nothing to do with the fact that the Uber self-driving car did not detect her. The woman was crossing the street, coming from the left lane, and was not behind any other object that might have hidden her from the lidar and radar. Therefore, the cause of the accident may have been a technology malfunction, one that could equally have occurred on a day with perfect visibility.
Second, there was an operator behind the wheel at the moment of the accident who could potentially have prevented it. In addition, the human eye can detect far more detail in the dark than the camera recording shows. However, one cannot simply claim that the operator could or could not have avoided the accident. The difficulty in evaluating the situation stems from the fact that the operator’s reaction cannot really be compared to that of a driver of a non-autonomous vehicle. The operator may have been so used to relying on the technology, and to not intervening, that at the moment of the accident her reaction took longer than that of a driver who does not rely on automation.
But even if this had been a non-autonomous vehicle with a fully alert and focused driver, would it have been possible to avoid the accident? Here is an interesting estimation by a Reddit user who calculates the distances involved and whether a human driver could have avoided the collision.
However, as mentioned above, the self-driving vehicle neither uses the small dash camera to detect objects, nor should it be affected by darkness or low visibility. Hence, a direct comparison with a human driver is pointless.
So what can probably explain the glitch in the technology?
Back in 2016, Uber decided to retire its autonomous fleet of Ford Fusions and move to Volvo sport utility vehicles. This decision was also accompanied by a reduction in the number of sensors detecting objects on the road. The image below, from a Reuters article, illustrates the changes in safety sensors:
According to industry experts and former Uber employees, the reduced number of safety sensors created more blind spots around the perimeter of the vehicle than existed in the previous generation of Uber cars, and more than the competition has.
Hence, it is possible that the vehicle’s lidar sensor failed to detect the pedestrian; even so, the other sensors should have detected her anyway.
Let’s have a closer look at Volvo’s systems:
While the official report on the accident is not yet available, the malfunction is mainly blamed on the top-mounted lidar system, which provides a 360-degree 3-D scan of the vehicle’s surroundings several times per second. Even though heavy snow and fog may reduce the system’s range and accuracy, the lidar laser should still be able to detect objects from a few feet to a few hundred feet away.
Uber’s lidar system is made by Velodyne — one of the top suppliers of sensors for autonomous vehicles. While the lidar laser covers a 360-degree circle around the vehicle, it fails to detect objects closer to the ground (e.g. animals) because it has a narrow vertical range. That is why other autonomous vehicle companies mount more lidars on their cars: in contrast to the single lidar on Uber’s vehicles, Waymo uses six lidar sensors and General Motors uses five, according to information from the companies.
Uber did not comment on the decision to reduce the number of lidars and redirects all questions about blind spots to Velodyne. The president and chief business development officer at Velodyne confirmed that with the rooftop lidar there is a roughly 3-meter blind spot around the vehicle, and that a side lidar is needed to detect pedestrians, especially at night. You can find more details in this article.
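The size of such a rooftop blind spot follows from simple geometry: a sensor mounted at height h whose lowest beam points θ degrees below horizontal cannot see the ground within h / tan(θ) of the vehicle. The mount height and angle below are assumed values chosen for illustration, not Uber’s actual configuration:

```python
import math

# Ground-level blind-spot radius of a roof-mounted lidar, from geometry.
# Mount height and lower field-of-view angle are assumed example values,
# not the specs of Uber's actual vehicle.

def blind_spot_radius(mount_height_m: float, lower_fov_deg: float) -> float:
    """Ground distance from the sensor that the lowest beam cannot reach."""
    return mount_height_m / math.tan(math.radians(lower_fov_deg))

# e.g. a sensor ~2 m up whose beams reach ~25 degrees below horizontal:
print(round(blind_spot_radius(2.0, 25.0), 1))  # → 4.3
```

With these assumed numbers, the radius comes out in the same few-meter range as the roughly 3-meter figure quoted by Velodyne, which is why low objects near the car need an additional side-mounted lidar.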
It is not yet clear why the other radars failed to detect the pedestrian, or whether they could have prevented the accident under other circumstances. Until the official report is published, we can only analyze the known facts and speculate about the unknown variables.
Let’s hope that such a thing would never happen again with autonomous vehicles…