The pre-mapping and perception challenges facing self-driving cars

Writing for the Future: AI
4 min read · Aug 2, 2018

By Zachary Osman

A fully autonomous Tesla driving on the highway. Photo courtesy of Storyblocks

In 2016, Elon Musk confidently said that by 2017 a Tesla would make a cross-country trek across the US without a driver. That didn’t happen.

By the end of 2017, Musk readjusted his timeline and promised self-driving cars would be on the road by mid-2018. Now it’s August 2018, and we’re still waiting to see Tesla’s fully autonomous car.

Tesla’s problem is not unique; it faces many of the same challenges as other self-driving car companies.

One challenge Musk identified is the need for specialized code. In a self-driving car, specialized code is used for pre-mapping, which allows the car to perceive its environment.

Pre-mapping is necessary for nearly all self-driving cars. It is the process of making extremely detailed maps using lidar: pulses of light that measure how far objects are from the car and build up a 3D “map” of an area.
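
To make that concrete: lidar ranging is just time-of-flight arithmetic. A pulse goes out, bounces off an object and comes back, and the round-trip time gives the distance. Here is a minimal sketch in Python, with an illustrative number rather than any manufacturer’s actual code:

```python
C = 299_792_458.0  # speed of light in m/s

def lidar_range_m(round_trip_s: float) -> float:
    """Time-of-flight ranging: the pulse travels out and back,
    so the one-way distance is half the round trip."""
    return C * round_trip_s / 2

# A return arriving ~66.7 nanoseconds after firing puts the
# object roughly 10 meters away.
print(lidar_range_m(66.7e-9))  # ~10.0
```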

These maps are nothing like Google Maps: they log every curb height, stop sign location, fire hydrant and lane marking to within a centimeter, so the car intimately knows the area where it is driving.
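
As a toy illustration of what such a map might store, here is one possible record layout. The field names and feature types below are purely illustrative, not Waymo’s, Tesla’s or anyone else’s real format:

```python
from dataclasses import dataclass
from enum import Enum

class FeatureType(Enum):
    CURB = "curb"
    STOP_SIGN = "stop_sign"
    FIRE_HYDRANT = "fire_hydrant"
    LANE_MARKING = "lane_marking"

@dataclass
class MapFeature:
    feature_type: FeatureType
    x_m: float       # meters east of the map origin
    y_m: float       # meters north of the map origin
    height_m: float  # e.g. curb height, surveyed to about a centimeter

# One surveyed curb segment, 12.3 cm tall:
curb = MapFeature(FeatureType.CURB, x_m=104.217, y_m=88.940, height_m=0.123)
```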

“We could have done the coast-to-coast drive, but it would have required too much specialized code to effectively game it or make it somewhat brittle and that it would work for one particular route, but not the general solution,” Musk said.

This advanced technology also comes with a serious drawback for self-driving cars.

“As a result, those vehicles are virtually fenced in by the pre-mapped region; a Waymo car’s self-driving mode won’t even kick in unless it senses that it’s in a mapped zone,” says Steven Levy from Wired.

Waymo, Uber, Drive.ai and almost every other driverless car company that relies on these maps is limited to the specific areas that have been mapped in such detail. You won’t be able to drive to new places in driverless mode, because your car won’t have maps for them.

Musk refused to use lidar for the self-driving cars he promised in 2017, because he believes it is a temporary solution that will make future innovation difficult.

Accuracy is also a concern when it comes to pre-mapping.

“The map changes really frequently. If it changes enough, your localization is going to fail,” says Brody Huval from Drive.ai.

A small change in the environment, such as construction, a new sign or even a bush growing a little, can be enough to confuse the car’s computer, because what it perceives no longer matches the pre-mapped data.
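
One way to picture that failure: the car compares the landmarks it currently detects against the landmarks the map says should be there, and if too few line up, it stops trusting its position. A simplified sketch with made-up tolerances and thresholds:

```python
import math

def localization_confidence(detected, mapped, tol_m=0.5):
    """Fraction of pre-mapped landmarks (x, y) that the car can
    still find near their expected positions. Illustrative only."""
    if not mapped:
        return 1.0
    matched = sum(
        1 for mx, my in mapped
        if any(math.hypot(mx - dx, my - dy) <= tol_m for dx, dy in detected)
    )
    return matched / len(mapped)

mapped = [(0.0, 0.0), (5.0, 1.0), (10.0, -2.0)]   # what the map expects
detected = [(0.1, 0.0), (9.8, -2.1)]              # one landmark has vanished
if localization_confidence(detected, mapped) < 0.8:
    print("localization degraded: slow down or hand off")
```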

Beyond the functions and limits of pre-mapping itself, the process of making these maps presents another challenge to the self-driving car industry.

“Most companies developing maps for self-driving vehicles currently use a system that works fine for research and development but is probably prohibitively expensive and time-consuming for mass production,” says Philip Perry of Big Think.

Pre-mapping is not a quick or easy process. Mapping a specific area requires drivers with lidar scanners to gather visual data multiple times on the same route. Lidar itself is also very expensive; in 2018, the devices still cost about $75,000 each.

The sensors used for mapping are also an important part of safety in self-driving cars. Lidar, radar and cameras all feed what is called the perception module, which takes in sensor data and attempts to classify objects as people, bicycles, other cars and so on. These classifications dictate how the car behaves in a given situation and are supposed to help it avoid accidents.
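
A deliberately oversimplified sketch of that pipeline: each sensor proposes a label, a fusion step picks one, and the chosen class drives behavior. Real perception stacks are far more complex, and every name below is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    source: str        # "lidar", "radar" or "camera"
    label: str         # "person", "bicycle", "car" or "unknown"
    confidence: float  # 0.0 to 1.0

def fuse(detections):
    """Toy sensor fusion: just keep the most confident label."""
    best = max(detections, key=lambda d: d.confidence, default=None)
    return best.label if best else "unknown"

def plan(label):
    """Classification dictates behavior."""
    return {"person": "yield", "bicycle": "yield",
            "car": "follow", "unknown": "slow"}[label]

obj = [Detection("lidar", "bicycle", 0.7), Detection("camera", "person", 0.9)]
print(plan(fuse(obj)))  # -> "yield"
```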

In March 2018, a self-driving car operated by Uber hit and killed a pedestrian in Tempe, AZ, the first pedestrian death caused by a self-driving car.

According to the NTSB report, the Uber car could not identify the woman wheeling her bike across the street. The radar and lidar both detected her six seconds before the crash, but the perception system got confused, classifying her first as an unknown object, then as a vehicle, and finally as a bicycle.

Just over a second before impact, the car determined that emergency braking was needed, but that function had been disabled so it would not interfere with the self-driving features.

Many would say that when the perception module gets confused, the car should simply brake to avoid such accidents. However, sudden stops caused by a confused self-driving system can be problematic in their own right.

“Confused AVs have in the past been rear-ended (by human drivers) after slowing suddenly. Hence the delegation of responsibility for braking to human safety drivers, who are there to catch the system when an accident seems imminent,” says The Economist.
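
One common compromise, sketched below with invented thresholds, is to require several consecutive confident detections before emergency braking, so a single noisy frame doesn’t trigger a phantom stop. This only illustrates the trade-off The Economist describes; it is not any company’s actual logic:

```python
def should_emergency_brake(confidences, min_frames=3, thresh=0.5):
    """Brake only after `min_frames` consecutive confident obstacle
    detections, trading a little reaction time for fewer phantom stops."""
    recent = confidences[-min_frames:]
    return len(recent) == min_frames and all(c >= thresh for c in recent)

# One noisy frame (0.2) breaks the streak, so no sudden stop...
print(should_emergency_brake([0.9, 0.2, 0.9, 0.9]))  # False
# ...but a sustained detection does trigger braking.
print(should_emergency_brake([0.9, 0.9, 0.9, 0.9]))  # True
```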

While pairing human supervision with self-driving systems seems like the safest option, it is a setback for the future of fully autonomous cars. Uber suspended all testing of its autonomous vehicles after the crash, laid off about 100 operators and eliminated the position at its Pittsburgh site.

Despite these challenges, companies working to put driverless cars on the road are continuing to test and innovate.

“By 2030, driverless vehicles and services will be a US$1 trillion industry. To [reap] these benefits, the great automotive brands will need to weave the latest technology into everything they do,” says Nvidia, a major GPU manufacturer for many self-driving cars.
