On security and safety of HD maps

yodayoda · Published in Map for Robots · Jan 5, 2022 · 7 min read

More autonomous vehicles (AVs) are driving on our roads every day. It’s time we discuss the security, and in particular the cybersecurity and safety, of high-definition (HD) maps. This topic is widely ignored but could have terrible consequences and could also slow down the adoption of an otherwise life-saving technology. We are happy to see two research articles pick up this topic this year, but that is still a concerningly low number.

Many AV companies and researchers focus on deep learning models that understand the environment and provide input to the decision layer, which chooses the next action an AV should perform. The input comes from camera, radar, lidar, and odometry sensors and is interpreted by AI models and traditional algorithms common in robotics, such as sensor fusion. Some inputs, such as the maps, are either stored locally in the car or synchronized with a cloud. Semantic data and Vehicle-to-Everything (V2X) communication are also synced with the AV via cloud services, at least to some extent.

Fig. 1: Several attacks (red) are possible on different systems of an autonomous vehicle. Source: [Deng et al.]

This gives rise to several attack surfaces, shown in Figure 1. In the worst case, an attack can lead to the death of passengers and other road users. In less severe situations, AVs could cause delays, leave their passengers annoyed with the service, or be misused for criminal activity. The deep learning models are especially vulnerable to attacks.

“To improve the robustness of the [autonomous driving system], model robustness training, model testing and verification and adversarial attacks detection in real-time should also be studied thoroughly.” — Deng et al., 2021

Several attacks have been demonstrated, including full remote access to Tesla cars that partly function autonomously. Check out this video where Tencent Keen Security Lab redirects a Tesla into oncoming traffic and controls it via a gamepad, or this one where a bit of paint convinces a Tesla’s recognition system that the speed limit is 85 mph instead of 35 mph.

A very nice overview of the different attack vectors can be found in the IEEE paper by [Chowdhuri et al.]. One of the authors’ conclusions is:

“[The] potential attacks and their possible consequences are not clearly understood by the relevant research communities and stakeholders.” — Chowdhuri et al, 2020

This is concerning by itself, but let’s discuss the security of maps in more detail.

HD maps

HD maps enable better localization of AVs by allowing the current environment recorded by the sensors to be compared against a reference map. Localization is currently possible down to 5–10 cm. In general, HD maps include information on road lanes, signs, driving behaviour, and obstacles. Map updates are delivered to vehicles via a cloud service to keep them up to date. Road conditions change regularly, and even a few minutes of delay can make human intervention (remote operators) necessary.

HD maps play a vital role in localization; they also add redundancy and, thus, security to the AV driving system. An attack on a single sensor such as GPS or LiDAR can be detected by making assumptions based on previous locations (a car cannot make unphysical jumps in space, needs to be located on a drivable surface, and so on). If the vehicle’s sensors are targeted by adversarial attacks, such as the optical adversarial attack of [Gnanasambandam et al.], it may be possible to detect and mitigate those attacks with the help of an HD map.
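To make this redundancy concrete, here is a minimal sketch of such a plausibility check. All names and thresholds are illustrative and not taken from any production stack; it simply rejects position fixes that would imply an unphysical jump or a position off the drivable surface known from the HD map.

```python
import math

# Illustrative threshold: no road vehicle plausibly exceeds ~250 km/h.
MAX_SPEED_MPS = 70.0

def plausible_fix(new_fix, last_fix, dt_s, drivable_cells):
    """Accept a position fix only if it is physically reachable from the
    last accepted fix and lies on a drivable surface known from the HD map.

    new_fix, last_fix: (x, y) positions in metres in a local frame
    dt_s: seconds elapsed since the last accepted fix
    drivable_cells: set of 1 m grid cells (int, int) derived from the map
    """
    # 1) A car cannot make unphysical jumps in space.
    if dt_s > 0:
        speed = math.dist(new_fix, last_fix) / dt_s
        if speed > MAX_SPEED_MPS:
            return False
    # 2) A car needs to be located on a drivable surface.
    cell = (int(new_fix[0]), int(new_fix[1]))
    return cell in drivable_cells

# A spoofed GPS fix 500 m away after one second fails the check.
road = {(x, 0) for x in range(100)}
print(plausible_fix((500.0, 0.0), (10.0, 0.0), 1.0, road))  # False
print(plausible_fix((12.0, 0.0), (10.0, 0.0), 1.0, road))   # True
```

A real localization stack fuses many more signals, but the principle is the same: the map provides an independent reference that a single spoofed sensor cannot easily satisfy.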

However, the maps are subject to attacks themselves.

Attacks on HD maps

“Loopholes existing in localization and navigation technologies […] could be utilized by adversaries to manipulate autonomous driving navigation by hijacking valuable vehicles, goods, or even target characters” — Luo et al, 2019

One of the easiest attacks to imagine is a Sybil attack, which was tried in the research of [Sinai et al.]. The attack targeted the social navigation application Waze, for which the researchers created 15 bot driver accounts.

Since reports from other drivers can influence traffic data, the researchers then emulated driving and trained their accounts with a complicated scheme of changing speeds over several hours.

After the training, they sent fake driving data (spoofing), pretending their bots were driving between 2 and 8 km/h, which caused a traffic jam to be displayed in the Waze application. With such an attack one could redirect autonomous vehicles (and, to some extent, manually driven vehicles) onto different routes. We immediately have to think of several movie plots in which the bad guys divert trucks carrying valuable goods or prisoners in order to lure them into a vulnerable spot :)

Fig. 2: Bad luck for the automated Fortico cash truck. Adapted from [Sinai et al.]

In the previously mentioned paper, the authors were also able to track other users and report fake obstacles.
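To illustrate why 15 bot accounts can be enough, the sketch below shows a deliberately naive traffic service that averages the speeds reported for a road segment. The numbers and the threshold are made up and this is not Waze’s actual algorithm; the point is only that a handful of spoofed 2–8 km/h reports can push the average below a congestion threshold.

```python
from statistics import mean

CONGESTION_KMH = 20.0  # illustrative threshold for flagging a jam

def segment_state(reported_speeds_kmh):
    """Naive aggregation: a segment counts as congested when the mean
    reported speed drops below the threshold."""
    avg = mean(reported_speeds_kmh)
    return ("congested" if avg < CONGESTION_KMH else "free-flowing", avg)

# Five genuine drivers at roughly 50 km/h
genuine = [48, 52, 50, 49, 51]
print(segment_state(genuine))         # ('free-flowing', 50)

# Fifteen Sybil bots pretending to crawl at 2-8 km/h
bots = [2, 3, 4, 5, 6, 7, 8, 2, 3, 4, 5, 6, 7, 8, 5]
print(segment_state(genuine + bots))  # ('congested', 16.25)
```

A real service weights and filters reports, but the research above shows that with enough preparation such filtering can be trained around.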

Other attacks include physical attacks. We have seen the altered street sign misrecognized by a Tesla; if such a sign were reported to the map server by several vehicles, it could be accepted as a valid map update and distributed to all cars, influencing even cars that are currently not in the vicinity. One could also imagine huge printed posters that imitate environments from another portion of the map, e.g., showing electricity poles where there are none. Localization will then be in conflict with GPS; however, in some situations it might be enough to move a car within the GPS accuracy (a few metres) to make it fall down a steep cliff. Well, you know where we are going with this. A similar effect can be achieved by actually moving anchor points, e.g., poles, traffic signs, or lane markings.

Cyber attacks include gaining access to the map server (physically or remotely) or mounting a man-in-the-middle attack. The consequences are the same: fake map data can be sent, causing localization to malfunction and cars to speed up, slow down, or choose wrong lanes (e.g., try to move into oncoming traffic). The car itself could also be infected with malware that injects wrong map data. However, in that case, other systems are likely to be compromised as well; if access to the decision layer is gained, there isn’t much use in editing the map data. The communication of map updates is, of course, also vulnerable to denial-of-service (DoS) attacks, which prevent maps and map updates from reaching the AVs.
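To make the man-in-the-middle scenario concrete, here is a minimal sketch of how a client could detect a tampered map tile. It uses a shared-secret HMAC purely for brevity; a real deployment would rather use asymmetric signatures from the map provider and proper key provisioning, so everything below is illustrative.

```python
import hmac
import hashlib

# Placeholder secret; in practice keys are provisioned securely or replaced
# by asymmetric signatures from the map provider.
TILE_KEY = b"example-provisioned-key"

def sign_tile(tile_bytes: bytes) -> str:
    """Server side: attach an HMAC-SHA256 tag to a map tile payload."""
    return hmac.new(TILE_KEY, tile_bytes, hashlib.sha256).hexdigest()

def verify_tile(tile_bytes: bytes, tag: str) -> bool:
    """Client side: reject the tile if the payload was altered in transit."""
    expected = hmac.new(TILE_KEY, tile_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

tile = b'{"lane_id": 42, "speed_limit_kmh": 35}'
tag = sign_tile(tile)

tampered = b'{"lane_id": 42, "speed_limit_kmh": 85}'
print(verify_tile(tile, tag))      # True: authentic tile
print(verify_tile(tampered, tag))  # False: manipulated map data is rejected
```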

Mitigation of attacks

We are not helpless in the fight to deliver a secure network of autonomous vehicles.

One way to partly mitigate Sybil attacks is to verify drivers via carrier data, a sort of two-factor authentication for the data sender, if you will. The senders can be verified by the usual means such as a social account, a phone number, or the location of the cellular antenna. We could, however, well imagine a more privacy-aware option that stores a user key on the car key itself, similar to a YubiKey, and cryptographically verifies the senders. Of course, this method needs to be combined with network traffic analysis to identify unusual reporting patterns.
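Cryptographic sender verification is best paired with such behavioural checks. The sketch below, whose field names and threshold are invented for illustration, flags accounts whose reporting rate is far above the fleet median, which is one very coarse form of the unusual-pattern analysis mentioned above.

```python
from collections import Counter
from statistics import median

def suspicious_accounts(reports, rate_factor=5.0):
    """Flag accounts whose report count in the current window is far above
    the fleet median, a coarse signal for bot-like reporting behaviour.

    reports: iterable of (account_id, road_segment, speed_kmh) tuples
    """
    counts = Counter(account_id for account_id, _, _ in reports)
    typical = median(counts.values())
    return {acc for acc, n in counts.items() if n > rate_factor * typical}

# Genuine drivers send a handful of reports; one bot floods a segment.
window = [("alice", "seg1", 50), ("bob", "seg1", 48), ("carol", "seg2", 51)]
window += [("bot-7", "seg1", 4)] * 40
print(suspicious_accounts(window))  # {'bot-7'}
```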

A very important step is to never rely on a single system. Build in redundancy from the start when you develop localization for an AV. That could even mean relying on multiple map providers. Redundancy doesn’t stop attacks, but it makes them harder.
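One way to exploit that redundancy is a plain cross-check between independent map sources. The sketch below, with a hypothetical data layout and an illustrative tolerance, compares landmark positions from two providers and flags disagreements before a tile is trusted for localization.

```python
import math

TOLERANCE_M = 0.5  # illustrative: how far two providers may disagree

def cross_check(provider_a, provider_b):
    """Compare landmark positions from two independent map providers.

    provider_a, provider_b: dicts mapping landmark id -> (x, y) in metres.
    Returns the ids whose positions disagree by more than TOLERANCE_M.
    """
    shared = provider_a.keys() & provider_b.keys()
    return {
        lm for lm in shared
        if math.dist(provider_a[lm], provider_b[lm]) > TOLERANCE_M
    }

a = {"pole_17": (10.0, 4.2), "sign_3": (55.1, 8.9)}
b = {"pole_17": (10.1, 4.2), "sign_3": (57.9, 8.9)}  # sign_3 moved ~3 m
print(cross_check(a, b))  # {'sign_3'}: flag for review before trusting
```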

What does stop attacks is relying on hardware security features such as Intel’s Software Guard Extensions (SGX), which are designed to allow only cryptographically signed code to be executed. The code is protected by the hardware chip. Even though not perfect, this method stops many attacks and forces an attacker to gain physical access to the AV itself.

Last but not least, a human mind is still brilliant compared to any machine. Don’t forget that the passengers should always have an emergency stop button.

Conclusion

Autonomous vehicles rely on their sensors, and researchers have found several ways to attack those vehicles through their sensors. HD maps mitigate several of these sensor attacks because they allow for redundant verification.

On the other hand, HD maps are often generated from real environment data collected from vehicles and hosted on a cloud service, so we have to take the utmost care that those data are not manipulated by malicious attackers.

We should apply the same regulations and checks as for other safety-critical systems that already operate autonomously. There are many examples in transportation (driverless trains, planes on autopilot) and heavy industry (power plants, industrial automation). We therefore fully agree with the following statement:

“Just as aeronautical navigation applications are subject to stringent conditions and safety standards, vehicle and OEM parts manufacturers similarly require measures of quality or certification of HD Maps with respect to legal responsibilities and insurances.” — Simeon et al., 2018

At yodayoda we care about the security of our map systems and conform to stringent security standards for our cloud-based solutions. Please contact us if you need help upgrading your map security.

This article was brought to you by yodayoda Inc., your expert in automotive and robot mapping systems.
If you want to join our virtual bar time on Wednesdays at 9 pm PST/PDT, please send an email to talk_at_yodayoda.co, and don’t forget to subscribe.
