Fooling real cars with Deep Learning

Shachar Mendelowitz
Published in The Startup · 8 min read · Jul 29, 2019

TL;DR
Modern vehicles apply Deep Learning to visual input to “understand” their surroundings. This is true for both connected and autonomous vehicles: computer vision is used for traffic sign recognition (TSR), lane detection, automatic parking and more.

We attacked a real vehicle by using Deep Learning to generate real-life adversarial traffic signs that are effective on cars from different manufacturers. We used nothing more than a strong GPU and commercially available printing services.

The best part? The attack is 100% blackbox. There is no need for vendor code or knowledge of internal systems, such as the vehicle’s neural network architecture or weights. To tell you the truth, we have every reason to suspect that the attack works even against traditional computer vision systems.

The attack deliberately causes a real car to make wrong classifications. We took the vehicle to the SMART Range, an automotive-cybersecurity experimentation range affiliated with Harman (our company) and Ben Gurion University, where we tested the attacks in an outdoor setting.

Check out the full publication, Fooling a Real Car with Adversarial Traffic Signs, and check out the video (best viewed on desktop).

Fooling a real car — a demo — a manipulated 50 km/h sign perceived as 30 km/h.

The video is best viewed on desktop, as a low-resolution smartphone screen significantly distorts the image of the sign to the human eye.

When we think of cybersecurity, we think of hacking devices, reverse engineering code, abusing admin passwords, exploiting software vulnerabilities for remote code execution (RCE), running arbitrary code and escalating privileges. The skill set required of a traditional hacker is broad, spanning reverse engineering, network security, operating system internals, firmware stacks and more. In this post, we will be talking about hacking the Traffic Sign Recognition system in a non-traditional manner.

Modern Vehicles - A (vulnerable) Server farm on wheels

Vehicles are becoming more complex as they become more lucrative. It has been stated that an average modern high-end vehicle runs over 100 million lines of code, more than a modern U.S. Army combat system or the Large Hadron Collider, and just below the number of DNA base pairs in a mouse. This is not rocket science; this is even more complex.

The number of ECUs (Electronic Control Units, essentially standalone computers) ranges from ten to more than a hundred, with connected functionality such as WiFi, Bluetooth and cellular links. Vehicles have essentially become server farms: continuously connected to the internet, packed with computational power (CPUs and GPUs), with some 70,000,000 produced each year and roughly 1 billion currently on the road.

Infographic of lines of code (source: informationisbeautiful)

Software 2.0

Andrej Karpathy, one of the world’s leading Deep Learning experts and currently Senior Director of Artificial Intelligence at Tesla Motors, has called Deep Learning “Software 2.0”, a new way to program machines.

Software 2.0 has already been deployed in the real world as computer vision software in modern vehicles. It provides, as Andrej Karpathy put it, “computationally homogeneous”, “highly portable” and “agile” functionality, in the form of neural networks that execute self-driving logic. This, in turn, presents us with a new world of vulnerabilities: generic algorithmic vulnerabilities.

In his blog post, he lists the limitations of Software 2.0, among them:

Finally, we’re still discovering some of the peculiar properties of this stack. For instance, the existence of adversarial examples and attacks highlights the unintuitive nature of this stack.

The hacker’s motivation

The motivations for attacking a vehicle are vast. A vehicle can be hacked to harvest personal information (as IoT devices have been hacked before), to leak contacts, or to abuse its connectivity for DDoS and similar traditional attacks. Newly crafted, exotic attacks may also be introduced: ransomware (pay to unlock the car), hijacking the GPU to mine cryptocurrencies (while the owner pays for the electricity), or steering an autonomous truck carrying goods to a desired location by combining spoofed traffic signs with GPS jamming, all without breaking a single line of code.

Attacking a vehicle’s Software 2.0 does not require traditional software hacking skills, nor does it even require knowledge of specific automotive technologies. With a plethora of published research papers (albeit, the vast majority of them simulation-only), listed on the homepage of Nicholas Carlini (a Google Brain researcher specializing in adversarial machine learning), any researcher in the field is effectively a published neural-net hacker.

Not everyone agrees, though, that adversarial-image-based attacks are a real threat. Chris Valasek & Charlie Miller, the duo behind the 2015 Jeep Cherokee hack, argued in a 2018 paper that autonomous vehicles may be “less hackable than you think”. They added: “However, at the time of writing this paper, these sensor-based attacks appear to be difficult to perform outside of a controlled environment and do not scale”.

One year later, given the rapid progress in AI research and our own results, we argue that autonomous vehicles are in fact *more* hackable than you think. Adversarial attacks are easy and cheap to generate, they work outside of a controlled environment, and they are generally transferable between manufacturers. Even current vehicles already on the road are in danger.

Chris & Charlie at Blackhat

Attacking the vehicle

The vulnerability

  1. Machine-learning-based classifiers are prone to adversarial attacks.
    This means that visual machine learning classifiers recognizing a certain traffic sign (50 km/h, for instance) cannot correctly handle every image that a human being interprets correctly. It is possible to intentionally create traffic sign images that a driver and the traffic sign recognition system will understand differently, and both the human and the computer will be sure that they are right (a minimal sketch of how such an image can be crafted appears right after this list).
  2. Attacks are transferable.
    Unlike traditional vulnerabilities, where the fragmentation of devices on the market adds cyber-robustness (a vulnerability found in one smartphone’s stack will probably not affect another smartphone’s stack), these attacks depend only weakly on the neural network architecture and exact implementation. Transferability means an attack crafted against one vehicle will, with good probability, be effective on other vehicles, even ones from a different car manufacturer.
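
To make point 1 concrete, here is a minimal sketch, assuming PyTorch and a hypothetical surrogate classifier (surrogate_model, a network we train ourselves, not the vendor’s), of the simplest gradient-based attack (FGSM): one small, bounded gradient step that nudges a correctly classified sign image toward a chosen wrong class. Point 2 is what makes this dangerous: the perturbed image often fools other classifiers it was never optimized against.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial_sign(surrogate_model, sign_image, target_label, eps=0.03):
    """Targeted FGSM: one bounded gradient step toward `target_label`.

    `surrogate_model` is a classifier we trained ourselves; thanks to
    transferability, the result often fools other, unseen classifiers too.
    """
    x = sign_image.clone().detach().requires_grad_(True)  # (1, 3, H, W), values in [0, 1]
    loss = F.cross_entropy(surrogate_model(x), torch.tensor([target_label]))
    loss.backward()
    # Step *against* the gradient of the target-class loss, keeping the change within eps.
    x_adv = (x - eps * x.grad.sign()).clamp(0.0, 1.0).detach()
    return x_adv

# Illustrative usage (class indices are hypothetical): a "50 km/h" image
# nudged toward the "30 km/h" class.
# adv = fgsm_adversarial_sign(surrogate_model, img_50, target_label=1)
```
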
An amalgam of printed signs

Crafting an attack: Advantages for a hacker

Spoofing a vehicle’s neural network is:

1. Potentially unnoticeable to a human being.

2. Impossible to detect by classical cybersecurity measures.

3. Cheap and easy to implement.

4. Relevant to legacy computer vision systems as well.

5. Blackbox: it does not require the “source code” (although it would be even more effective with it).

Crafting an attack: The challenges

The image perturbation must be:

  1. Strong enough to survive real-world conditions, such as different observation angles, distances, lighting, occlusion, etc.
  2. Negligible, so that a human being quickly dismisses it as something that would appear naturally, such as wear, dirt, or “innocent” vandalism.

These requirements are obviously contradictory, which causes reasonable skepticism about the feasibility of such an attack. One way to balance them is sketched below.
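
One common way to balance these two requirements (a simplified sketch of the general idea, not our exact recipe) is to confine the perturbation to a mask shaped like plausible dirt or stickers and to bound its amplitude, so a human dismisses it while the optimization still has a budget to work with:

```python
import torch

def constrain_perturbation(sign_image, delta, dirt_mask, eps=0.1):
    """Keep the attack inconspicuous: the perturbation `delta` is applied only
    inside `dirt_mask` (blob-shaped regions that read as dirt or stickers) and
    is clipped to amplitude `eps`, so colors stay close to the original sign."""
    delta = delta.clamp(-eps, eps) * dirt_mask
    return (sign_image + delta).clamp(0.0, 1.0)  # remain a valid image in [0, 1]
```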

Spoofing Traffic Signs — Dirty to the naked eye, malicious for a car

The attack

The construction of the attack was inspired by several research papers, among them DARTS (Deceiving Autonomous Cars with Toxic Signs). We built an “adversarial factory” that allows reproducible production of robust, real-world adversarial signs.

Our adversarial factory pipeline consists of three phases: file-to-file, TV-in-the-loop and real world. The pipeline includes random real-world transformations in order to account for real-world effects.
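
The robustness part can be sketched as follows, in the spirit of Expectation Over Transformation as used in DARTS and related work (the transform choices and parameters below are illustrative assumptions, not our production pipeline): at every optimization step the candidate sign is pushed through random perspective, scale and brightness changes before the surrogate classifier scores it, so the resulting perturbation survives similar variations in the real world.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms as T

# Random "real world" variations applied at every optimization step.
random_world = T.Compose([
    T.RandomPerspective(distortion_scale=0.3, p=1.0),  # viewing angle
    T.RandomAffine(degrees=5, scale=(0.6, 1.2)),        # tilt and distance
    T.ColorJitter(brightness=0.4, contrast=0.3),         # lighting and glare
])

def robust_adversarial_sign(surrogate_model, sign_image, target_label,
                            eps=0.06, steps=300, lr=0.01, samples=8):
    """PGD-style optimization averaged over random transformations."""
    delta = torch.zeros_like(sign_image, requires_grad=True)
    target = torch.tensor([target_label])
    for _ in range(steps):
        loss = 0.0
        for _ in range(samples):  # expectation over random transformations
            x = random_world((sign_image + delta).clamp(0, 1))
            loss = loss + F.cross_entropy(surrogate_model(x), target)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()  # move toward the target class
            delta.clamp_(-eps, eps)          # keep the change inconspicuous
        delta.grad.zero_()
    return (sign_image + delta).clamp(0, 1).detach()
```
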

Colored vs monochrome attack traffic signs — which is less noticeable?

File-to-file
In this phase we attack our own self-made classifier, yielding a high attack success rate. The algorithm works purely at the file level; no real world is involved.

Files generated for the file-to-file attack pipeline

TV-in-the-loop
In this phase we introduce real-world variations by displaying the adversarial signs on a TV in front of a camera, and again attack our self-made classifier. The TV allowed us to introduce real-world transformations such as tilt, glare and more. The setup was run repeatedly, with manual changes to orientation and lighting, in order to evaluate the attacks in a controlled environment.

A TV-in-the-loop pipeline

The physical transformations were determined by setting the relative angles between the camera and the TV and by varying the lighting conditions. The adversarial success rate was measured as the percentage of viewing angles at which the adversarial sign fooled a black-box classifier. This technique allowed us to scale up the adversarial traffic sign creation process by evaluating thousands of configurations, effectively serving as a real-world simulator.

The most effective adversarial traffic signs fooled the classifier in well over 90% of the viewing angles and lighting conditions.
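
As a rough illustration of that metric (the helper names are hypothetical, not our actual harness), the success rate simply counts the captured camera frames, one per angle-and-lighting configuration, that the black-box classifier reads as the attacker’s intended class:

```python
def adversarial_success_rate(captured_frames, blackbox_classify, target_label):
    """`captured_frames` maps (angle, lighting) -> a camera frame of the sign on the TV.
    Returns the fraction of configurations in which the attack fools the classifier."""
    fooled = sum(1 for frame in captured_frames.values()
                 if blackbox_classify(frame) == target_label)
    return fooled / len(captured_frames)
```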

TV-In-the-loop captured adversarial images of 80 km/h spoofed traffic sign

Real world, real car, real classifier
In the final phase, we printed the most successful signs at real size, on actual traffic sign material. At this point, since it is a blackbox attack, all we can do is evaluate how effective the printouts are. Playing the role of the attacker, we have the privilege of choosing which signs to attack with. Of course, we chose the ones that scored best in the TV-in-the-loop experiments.

With all this in hand, we went to a local rental company and rented a car equipped with a camera-based traffic sign recognition system. There was nothing special about this car: it was just a regular compact-class sedan. Except that it was about to become the first car ever attacked with adversarial traffic signs.

Experiment sequence

  • 14 spoofed (attack) traffic signs
  • 14 normal traffic signs
  • Repeated drive-by runs with both normal and adversarial signs
  • Repeated in the morning, at noon and in the evening

Results

  • Six signs successfully and robustly spoofed the vehicle’s TSR system throughout the day: morning, noon and evening.
  • Four caused a DoS (denial of service), repeatedly making the TSR halt completely for roughly a minute. This result was unexpected, but we consider it a successful attack nonetheless.

Setting up the printed adversarial signs at SMART Range, Sde Teiman, Be’er Sheva, Israel

SMART Range, Sde Teiman, Be’er Sheva, Israel
HARMAN team setting up the testing track at SMART Range in Israel
Adversarial 100 km/h traffic sign — successfully and robustly misclassified as 120 km/h

Summary

  • Attacks are cheap and easy to manufacture
  • Autonomous cars are more hackable than you thought
  • Drive-by experiments with a real car confirm the TSR vulnerability
  • The required hacking skillset: a data scientist equipped with a GPU, data, and neural network training experience

The Paper

The full publication is available as Fooling a Real Car with Adversarial Traffic Signs, part of research conducted at Harman’s cybersecurity division.

This work was led by Dr. Alexander Kreines, Nir Morgulis and myself, Shachar Mendelowitz, as part of an automotive cybersecurity research project at the Automotive Cybersecurity BU, Harman International.
