Adversarial Traffic Signs

David Silver
Aug 16, 2017 · 3 min read

A couple of days ago I wrote about embedding barcodes into traffic signs to help self-driving cars. Several commenters pointed out a recent academic paper in which researchers (Evtimov, et al.) confused a computer vision system into thinking that a stop sign was a 45 mph sign, with just a few pieces of tape.

This builds on a property of neural networks that was already well known: they can be fooled in surprising ways by tiny, carefully crafted changes to their input. This is called an “adversarial” attack.

Here is an example Justin Johnson gave in the fantastic Stanford CS231n class on convolutional neural networks:

Oops.

So it’s no shocker that the computer vision systems for cars, which rely largely on CNNs, can be fooled.

But notice that it’s not obvious how to apply Justin Johnson’s examples above to an actual printed photo of a goldfish in the real world. The examples above only really work if you have a digital photo of a goldfish.
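To see why the digital version is so easy, here is a toy sketch of the core trick (the “fast gradient sign” idea — a minimal illustration with made-up weights and inputs, not the actual CS231n demo): for a linear score function, the gradient of the score with respect to the input is just the weight vector, so nudging every pixel against the sign of that gradient shrinks the score as fast as possible under a small per-pixel budget.

```python
# Toy fast-gradient-sign sketch. All numbers here are invented for
# illustration; a real attack does the same thing to a deep network's
# gradient instead of a hand-picked weight vector.
import numpy as np

w = np.array([1.0, -2.0, 0.5])   # classifier weights ("goldfish" if score > 0)
x = np.array([0.2, 0.1, 0.4])    # a correctly classified input

def score(v):
    return float(np.dot(w, v))

eps = 0.1                        # per-pixel budget: visually imperceptible
x_adv = x - eps * np.sign(w)     # step each pixel against the gradient sign

original = score(x)              # 0.2  -> classified "goldfish"
attacked = score(x_adv)          # 0.2 - eps * sum(|w|) = -0.15 -> flipped
```

The perturbation changes each input value by only 0.1, yet the classification flips — that disproportion between tiny input change and large output change is the whole attack.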

The breakthrough of the Evtimov et al. paper is that they developed an attack algorithm, which they call Robust Physical Perturbations, that allows them to apply this attack to signs in the real world.
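As I understand it, the key move is to optimize a single perturbation that keeps working across many viewing conditions, rather than for one fixed digital image. Here is a toy sketch of that intuition only — not the authors’ actual Robust Physical Perturbations algorithm — using a made-up nonlinear classifier and random brightness offsets as a stand-in for changing physical conditions:

```python
# Toy "robust across conditions" sketch: average the gradient over many
# simulated viewing conditions before taking the sign, so one fixed
# perturbation (the "sticker") fools the classifier under all of them.
# The model, weights, and offset range are all invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
w = np.array([1.0, -2.0, 0.5])

def score(v):
    return float(np.dot(w, np.tanh(v)))   # mildly nonlinear toy classifier

def grad(v):
    return w * (1.0 - np.tanh(v) ** 2)    # d(score)/d(input)

x = np.array([0.2, 0.1, 0.4])             # clean input, score > 0
eps = 0.2

# Simulate varying lighting as small random brightness offsets.
offsets = rng.uniform(-0.1, 0.1, size=(64, 3))

# Average the gradient over sampled conditions, then take one signed step.
avg_grad = np.mean([grad(x + b) for b in offsets], axis=0)
delta = -eps * np.sign(avg_grad)

# One fixed delta flips the score under every sampled condition.
flipped = all(score(x + b + delta) < 0 for b in offsets)
```

The physical-world version replaces random brightness offsets with real photos of the sign from different distances and angles, but the structure — optimize over a distribution of conditions, not a single image — is the same.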

So now we are heading down the road of fooling cars into blowing through stop signs. Is the end nigh?

I’m skeptical.

Hackers hardly need to wait until self-driving cars are on the road before they mess with stop signs. It’s easy enough to cause real carnage today just by removing a stop sign. Indeed, this happens already, and the people who do it get convicted of manslaughter. (Although note that particular case was overturned on appeal, because it wasn’t clear whether the defendants removed the specific stop sign in question or a different one.)

I don’t see too many hackers messing with street signs, though, presumably because the result is both fleeting and unpredictable, and the cost (jail time) is high.

In fact, self-driving cars seem even less likely than human drivers to be fooled by tampered stop signs. Self-driving cars are likely to have maps and sensors that could override whatever the car’s camera sees.

It’s possible this paper leads to further breakthroughs in adversarial attacks that could cause more problems, but I don’t think this advance by itself is too worrisome.

Self-Driving Cars

A publication covering news, predictions, and opinions about self-driving cars and other autonomous vehicles.

David Silver

Written by

I love self-driving cars and I work on them at @Udacity! Learn to build self-driving cars with us: https://udacity.com/drive
