Where Self-Driving Cars Fall Short, and How We Can Make Them Safer

Andrew Kouri · Published in lvl5 · Apr 13, 2018
lvl5 deep-learned lane detection (blue), combined with “virtual lanes” from our HD map + localization (green) on 101S in Mountain View

Recent fatalities involving autonomous vehicles demonstrate that we, as an industry, still have work to do to make self-driving safe. It is debatable whether autonomous vehicles currently cause fewer fatalities per mile than human drivers. Regardless, we insist that the bar be set higher: unlike human drivers, electronic systems can benefit from sensor redundancy, which should make them safer, yet few companies are taking full advantage of this opportunity. Redundancy is critical in any system where the driver isn't in full control of the vehicle, and HD maps are one of the best forms of sensor redundancy available today. We believe autonomous vehicles have the potential to one day be orders of magnitude safer than human drivers, and we want to expedite the arrival of that day.

In a semi-autonomous vehicle crash in Mountain View last month, we found it is possible the vehicle misinterpreted the right lane line of the diverging lane as the left lane line of its own lane and steered itself into the barrier. In Tempe, AZ, where an Uber autonomous vehicle struck and killed a pedestrian, better map integration would have allowed the vehicle to reliably distinguish the pedestrian from random sensor noise. In both cases, tight integration with an accurate map would almost certainly have prevented these tragedies, or at least mitigated their severity.

After investigating the site of the Mountain View crash, we identified a number of particularly challenging factors that could confound an autonomous vehicle.

Payver footage — Lane paint near crash site in August, 2017
Payver footage — Lane paint 1 week before crash in March, 2018

First, the faded lane markings are problematic for any system that relies heavily on real-time detection of lane markings. As you can see from the video at the top of this page, even our own deep-learned, real-time lane line detection is flaky at this point, which may well be the case for OEM systems too. We have been collecting video data near the crash site for over a year and have more than 1,000 video traces of the area collected over that period. Change detection on this footage shows that the markings are subject to extreme wear and fading; even though they are occasionally repainted, they quickly wear down again.
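To give a rough sense of what this kind of change detection involves, here is a minimal sketch (the data, threshold, and function names are illustrative assumptions, not our production pipeline) that tracks smoothed lane-paint detection confidence across repeated traces of one road segment and flags when the paint appears to have faded:

```python
from statistics import mean

# Hypothetical example: each trace is (days since first pass, mean lane-paint
# detection confidence in [0, 1]) for a fixed road segment.
traces = [
    (0, 0.92), (30, 0.88), (60, 0.81), (90, 0.74),   # paint fading
    (95, 0.95),                                      # repainted
    (150, 0.83), (210, 0.71), (270, 0.58),           # fading again
]

FADE_THRESHOLD = 0.75    # assumed cutoff below which detections become unreliable
WINDOW = 3               # smooth over the last few traces to reject per-trace noise

def flag_fading(traces, threshold=FADE_THRESHOLD, window=WINDOW):
    """Return the days on which the smoothed confidence dropped below threshold."""
    flagged = []
    for i in range(window - 1, len(traces)):
        recent = [conf for _, conf in traces[i - window + 1 : i + 1]]
        if mean(recent) < threshold:
            flagged.append(traces[i][0])
    return flagged

print(flag_fading(traces))   # [270] -> segment flagged for a map update / repaint alert
```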

Second, we found that Caltrans placed a temporary barrier in front of the crash attenuator two days before the March 23rd accident. In our map, both the “bumble bee” hazard placard on the crash attenuator and the temporary traffic control barrier are present.

Payver — March 21, ’18 — Plastic Temporary Traffic Barrier occludes metal “bumble bee” hazard placard.

Matching these objects from the map against what the vehicle's sensors see in real time allows the vehicle to determine what is genuinely anomalous. Sensors including RADAR and cameras are prone to false positive detections (reporting something when nothing relevant is actually present), which forces car manufacturers to tune down their response actuation; otherwise the car would be slamming on the brakes all the time. With an up-to-date HD map, these common false-positive areas can be filtered out, and the response actuation threshold can remain high.
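As a rough sketch of this idea (purely illustrative; the object classes, coordinates, and association radius below are assumptions, not lvl5's or any OEM's actual logic), the map can be used to explain away returns that coincide with known static roadside objects, so that the remaining, unexpected returns can be reacted to aggressively:

```python
import math

# Hypothetical static objects from an HD map, in a local map frame (meters).
MAP_OBJECTS = [
    {"kind": "hazard_placard", "x": 152.0, "y": 3.4},
    {"kind": "sign",           "x": 180.5, "y": 4.1},
]

MATCH_RADIUS_M = 1.5   # assumed association radius between a return and a mapped object

def is_expected(detection, map_objects, radius=MATCH_RADIUS_M):
    """True if a sensor return lies close to a known static map object."""
    return any(
        math.hypot(detection["x"] - obj["x"], detection["y"] - obj["y"]) <= radius
        for obj in map_objects
    )

def filter_detections(detections, map_objects):
    """Split raw returns into 'explained by the map' and 'anomalies worth reacting to'."""
    anomalies = [d for d in detections if not is_expected(d, map_objects)]
    expected = [d for d in detections if is_expected(d, map_objects)]
    return anomalies, expected

# Example frame of radar returns (map frame): one matches the placard, one does not.
frame = [{"x": 152.3, "y": 3.2}, {"x": 140.0, "y": 1.0}]
anomalies, expected = filter_detections(frame, MAP_OBJECTS)
print(len(anomalies), "anomalous return(s) -> safe to brake for without fear of a false alarm")
```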

You can see that the objects around the crash site changed on a day-to-day basis: the attenuator was intact on March 10, crumpled on March 12, cones and a temporary traffic control barrier were placed before March 21, and the accordion attenuator was restored sometime before the 26th. These changes highlight the importance of a daily, if not hourly, refresh rate for the map. Put simply, if an out-of-date map leads the radar to expect a bright metal placard, but a plastic traffic control barrier attenuates that radar signal, the vehicle may conclude that the path ahead is clear.
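One defensive pattern (again a hedged sketch with made-up field names and thresholds) is to treat a stale map entry, or a large mismatch between the radar return the map predicts and the return actually observed, as a map-change event rather than silently trusting the map:

```python
# Hypothetical map entry for a static roadside object, with the radar
# cross-section (RCS, dBsm) recorded when the object was last surveyed.
map_entry = {"kind": "hazard_placard", "expected_rcs_dbsm": 12.0, "last_seen_days_ago": 9}

MAX_MAP_AGE_DAYS = 1      # assumed freshness budget for safety-critical objects
MAX_RCS_MISMATCH = 6.0    # assumed tolerance between predicted and observed returns

def classify(observed_rcs_dbsm, entry):
    """Decide whether to trust the map, flag a change, or fall back to sensors alone."""
    if entry["last_seen_days_ago"] > MAX_MAP_AGE_DAYS:
        return "stale_map: do not use this entry to suppress detections"
    if abs(observed_rcs_dbsm - entry["expected_rcs_dbsm"]) > MAX_RCS_MISMATCH:
        return "map_change: scene differs from map, treat area as unknown"
    return "consistent: map and sensors agree"

# A weak return (plastic barrier in front of the metal placard) should not be
# explained away by a nine-day-old map entry.
print(classify(observed_rcs_dbsm=2.5, entry=map_entry))
```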

Noisy GPS Data from Payver Trips on the 101S-85 interchange

Finally, because the HOV interchange where the crash occurred is a relatively complex area, unfiltered GPS traces show a high level of ambiguity, meaning OEMs cannot rely on GPS alone when following a map. To properly utilize an HD map, the vehicle must localize itself using other locally static features in the environment. Our system, for example, uses features including signs, barriers, and other lanes to perform localization. As the road environment changes, these features need to be updated in the map. This is why we place such a high value on collecting data on a daily basis.
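For a sense of what localizing against static map features can look like, here is a minimal 2D sketch (illustrative only; it assumes the landmark correspondences are already known, which in practice is the hard part) that recovers the vehicle's pose by rigidly aligning detected landmarks to their mapped positions with a least-squares (SVD/Kabsch) fit:

```python
import numpy as np

def estimate_pose(detected_xy, mapped_xy):
    """
    Estimate the 2D rigid transform (rotation R, translation t) that maps
    landmark positions in the vehicle frame onto their positions in the map
    frame, using the SVD-based least-squares (Kabsch) solution.
    """
    det_centroid = detected_xy.mean(axis=0)
    map_centroid = mapped_xy.mean(axis=0)
    H = (detected_xy - det_centroid).T @ (mapped_xy - map_centroid)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = map_centroid - R @ det_centroid
    return R, t

# Hypothetical landmarks: a sign, a barrier end, a lane-line anchor (map frame, meters).
mapped = np.array([[105.0, 12.0], [118.0, 9.5], [130.0, 14.0]])

# The same landmarks as seen from the vehicle frame (vehicle at (100, 10) in the
# map, heading rotated ~5 degrees), plus a little sensor noise.
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
t_true = np.array([100.0, 10.0])
detected = (mapped - t_true) @ R_true
detected += np.random.normal(scale=0.05, size=detected.shape)

R, t = estimate_pose(detected, mapped)
heading_deg = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
print("vehicle position in map frame:", t, "heading:", round(heading_deg, 2), "deg")
```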

Once the vehicle is localized in the map, the self-driving algorithms can see "virtual lane lines" that are used to sanity-check the real-time detections and give the vehicle more confidence to brake in an anomalous situation. These virtual lane lines remain available in rainy, dark, or otherwise low-visibility conditions where a camera-based system would fail, including when the paint on the road wears away.
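A minimal sketch of that sanity check (hypothetical numbers and function names; a real system would reason over full lane polylines and uncertainty estimates, not single offsets) might compare the lateral offset of the camera-detected lane line with the offset of the map's virtual lane line at the vehicle's localized pose:

```python
MAX_LANE_DISAGREEMENT_M = 0.5   # assumed tolerance between camera and map lane offsets

def fuse_lane_offsets(camera_offset_m, virtual_offset_m, camera_confidence):
    """
    Combine a camera-detected lane offset with the map's virtual lane offset.
    Returns the offset to steer by and a flag telling the planner whether the
    situation looks anomalous enough to slow down.
    """
    if camera_offset_m is None or camera_confidence < 0.3:
        # Camera failed (darkness, rain, worn paint): fall back to the virtual lane.
        return virtual_offset_m, False
    if abs(camera_offset_m - virtual_offset_m) > MAX_LANE_DISAGREEMENT_M:
        # Camera and map disagree: something unusual is going on, be cautious.
        return virtual_offset_m, True
    # Agreement: trust the (typically fresher) camera measurement.
    return camera_offset_m, False

# Worn paint at night: no usable camera detection, the virtual lane keeps the car centered.
print(fuse_lane_offsets(camera_offset_m=None, virtual_offset_m=-1.6, camera_confidence=0.0))

# Camera follows a diverging lane line while the map says the lane continues straight.
print(fuse_lane_offsets(camera_offset_m=-0.4, virtual_offset_m=-1.6, camera_confidence=0.9))
```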

Relative to the cost of hardware such as LiDAR, maps are incredibly inexpensive. They can also be deployed to fleets via over-the-air software updates, so we don't need to wait years for them to reach our vehicles.

lvl5 computer vision (without camera fine-tuning) detects the pedestrian and her bicycle. Our semantic segmentation dataset spans millions of miles of diverse driving conditions.

We also have over 5 million miles of semantically labeled data for pedestrians, lanes, roads, barriers and medians from around the world. We believe that this dataset can help improve existing computer-vision-based object detection algorithms. Our goal is to make incremental improvements, deploying our technology in autonomous vehicles as soon as it can enhance the driving experience. We can help reduce the number of fatal accidents today — not in 3 years. You can read more about our mission here.
