Natural Adversarial Examples Slash Image Classification Accuracy by 90%

Synced · Published in SyncedReview · Jul 22, 2019 · 3 min read

Researchers from UC Berkeley and the Universities of Washington and Chicago have released a set of natural adversarial examples, which they call “ImageNet-A.” The images are real-world, naturally occurring examples that can severely degrade the performance of an image classifier. For example, DenseNet-121 obtains only around two percent accuracy on the new ImageNet-A test set, a drop of approximately 90 percent.
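For readers who want to reproduce this sort of headline number, the sketch below shows one way such an evaluation might be run with PyTorch and torchvision: load a pretrained DenseNet-121 and measure its top-1 accuracy over a local copy of ImageNet-A. The dataset path, the folder layout, and the `load_wnid_to_index` helper are assumptions for illustration, not details from the paper; ImageNet-A covers a 200-class subset of ImageNet, so folder labels must be mapped to the model's 1,000-way output.

```python
# Sketch: top-1 accuracy of a pretrained DenseNet-121 on a local ImageNet-A copy.
# The "imagenet-a" path and the wnid-to-index helper are assumptions.
import torch
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumes ImageNet-A is extracted as class folders named by WordNet ID.
dataset = datasets.ImageFolder("imagenet-a", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32)

model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1).eval()

# ImageNet-A's folders cover 200 of ImageNet's 1,000 classes, so each folder
# index must be translated to the index the model actually predicts.
# `load_wnid_to_index` is a hypothetical helper built from the standard
# ImageNet class list, e.g. mapping "n01498041" (stingray) -> 6.
wnid_to_idx = load_wnid_to_index()
target_map = torch.tensor([wnid_to_idx[c] for c in dataset.classes])

correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == target_map[labels]).sum().item()
        total += labels.size(0)

print(f"ImageNet-A top-1 accuracy: {correct / total:.2%}")
```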

The ImageNet challenge competition was closed in 2017, as it was generally agreed in the machine learning community that the task of image classification was mostly solved and that further improvements were not a priority. It should be noted, however, that the ImageNet test examples are mostly relatively uncluttered close-up images which do not represent the more challenging object contexts and representations found in the real world.

What’s more, it has been shown that adversarial examples that succeed in fooling one classification model can also fool other models that use different architectures or were trained on different datasets. Adversarial attacks therefore have the potential to cause serious and widespread security vulnerabilities across popular AI applications such as facial recognition and self-driving cars.

Natural adversarial examples from ImageNet-A

The above examples show how the researchers’ new ImageNet-A images can fool even a robust image classifier such as the deep neural network ResNet-50. The black text indicates each image’s actual class, while the red text shows ResNet-50’s incorrect prediction along with its high (99%) confidence. What has surprised many in computer vision is that these are all natural adversarial examples, free of any image processing or adversarial modifications.

The researchers first downloaded a large number of user-tagged images from the websites iNaturalist and Flickr. They then filtered these real-world natural images, removing any that failed to fool a ResNet-50 classifier. The final ImageNet-A dataset contains 7,500 AI-perplexing natural adversarial examples.
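The core of that filtering step is straightforward to express in code. The sketch below is a rough approximation of the idea, not the authors' exact pipeline: it keeps a candidate image only when a pretrained ResNet-50's top-1 prediction disagrees with the user-tagged class. The `candidates` list of (path, class index) pairs is a hypothetical input built from the downloaded images.

```python
# Sketch of adversarial filtering: keep only images a pretrained ResNet-50
# misclassifies. Paths and the `candidates` list are assumptions.
import torch
from PIL import Image
from pathlib import Path
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()

def fools_resnet(path: Path, true_class_idx: int) -> bool:
    """Return True if ResNet-50's top-1 prediction disagrees with the tag."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        pred = model(image).argmax(dim=1).item()
    return pred != true_class_idx

# `candidates` is a hypothetical list of (image path, ImageNet class index)
# pairs assembled from the user-tagged downloads.
kept = [(p, c) for p, c in candidates if fools_resnet(p, c)]
```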

In a bid to improve classifier performance on their adversarial examples, the researchers examined best-in-class robust training techniques, including ℓ∞ adversarial training and Stylized-ImageNet augmentation, but these did little to improve classification performance. They then discovered that applying self-attention in the form of Squeeze-and-Excitation (SE) could significantly improve robustness, as shown below:

Comparison of self-attention in the form of Squeeze-and-Excitation (SE) against conventional robust training techniques on ImageNet-A.
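Squeeze-and-Excitation itself is a small architectural addition: each block squeezes a feature map into per-channel statistics, passes them through a tiny bottleneck network, and uses the result to reweight the channels. Below is a minimal PyTorch sketch of such a block; the reduction ratio and where the block is inserted are illustrative choices, not the exact configuration reported in the paper.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Minimal Squeeze-and-Excitation block; reduction ratio is illustrative."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global average pool
        self.fc = nn.Sequential(                 # excitation: bottleneck MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                       # channel-wise reweighting

# Usage: typically inserted after a convolutional stage, e.g. in each ResNet block.
features = torch.randn(8, 256, 56, 56)
print(SEBlock(256)(features).shape)  # torch.Size([8, 256, 56, 56])
```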

However, even with the performance boosts achieved through self-attention, the ImageNet-A dataset still represents a huge challenge for classifiers. The machine learning community has much work ahead if it hopes to close the gap between the ease with which classifiers handle ImageNet’s 14 million examples and their struggles with the 7,500 confusing real-world images identified by the ImageNet-A researchers.

The paper Natural Adversarial Examples is on arXiv.

Author: Hecate He | Editor: Michael Sarazen

2018 Fortune Global 500 Public Company AI Adaptivity Report is out!
Purchase a Kindle-formatted report on Amazon.
Apply for Insight Partner Program to get a complimentary full PDF report.

Follow us on Twitter @Synced_Global for daily AI news!

We know you don’t want to miss any stories. Subscribe to our popular Synced Global AI Weekly to get weekly AI updates.
