Personal Invisibility Cloak Stymies People Detectors

Synced | Published in SyncedReview | Dec 4, 2019

When Harry Potter receives an invisibility cloak as a Christmas gift, he uses it to conceal himself from Hogwarts teachers and nasty caretaker Argus Filch. Now, researchers from Facebook AI and the University of Maryland have introduced a 21st-century version: sweatshirts printed with adversarial examples that make the wearer undetectable to the AI-powered object detectors in today’s public surveillance systems.

Ian Goodfellow, the renowned research scientist who pioneered generative adversarial networks (GANs), describes adversarial examples as “inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake.” In the new study, researchers printed adversarial examples on sweatshirts and other items to “attack” object detectors and cause them to fail to recognize their targets in images or videos.
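To make the idea concrete, below is a minimal sketch of Goodfellow’s fast gradient sign method (FGSM) against an image classifier, the simplest setting for adversarial examples. The pretrained ResNet-50, the epsilon value, and the assumption that pixel values lie in [0, 1] are illustrative choices, not details from the new paper, which attacks detectors with printed patches rather than per-image noise.

```python
# Minimal FGSM sketch: nudge an input image so a classifier misreads it.
# The pretrained ResNet-50 and [0, 1] pixel range are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision

model = torchvision.models.resnet50(weights="DEFAULT").eval()

def fgsm_example(image, true_label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` (shape 1x3xHxW)."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixels.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage with a random stand-in image and label, purely for illustration.
img = torch.rand(1, 3, 224, 224)
adv = fgsm_example(img, torch.tensor([0]))
```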

Fooling object detectors is much more difficult than fooling classifiers. As the researchers explain: “the ensembling effect of thousands of distinct priors, combined with complex texture, lighting, and measurement distortions in the real world, makes detectors naturally robust.”

This April, Synced reported on research from Belgian university KU Leuven which demonstrated how an adversarial attack using a colorful 40 sq cm printed patch could significantly lower the accuracy of object detectors. And in August we covered research from Lomonosov Moscow State University and Huawei Moscow Research Center, which proposed a wearable card designed to conceal a person’s identity from facial recognition systems. Both of these efforts were limited to 2D printed patches, while the new study extends the method to the more practical but challenging realm of clothing and 3D objects.

An overview of the adversarial pattern generation framework
Impact of different patches on various detectors, measured using average precision (AP).

The researchers “trained” their attack patches on a random subset of 10,000 images containing people from the COCO dataset. They first evaluated the patches in simulated digital settings: white-box attacks (the detector’s weights are used for patch learning) and black-box attacks (patches are crafted on a surrogate model and tested on a victim model with different parameters). All the trained patches proved highly effective in the digital simulations.
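As a rough illustration of what such patch training looks like, the sketch below optimizes a patch to suppress the detections of a pretrained Faster R-CNN from torchvision. The patch size, the paste_patch helper, the stand-in person_loader, and the simple sum-of-scores loss are all assumptions for illustration; the paper’s actual objective, augmentations, and rendering of the patch onto clothing are more involved.

```python
# Rough sketch of adversarial patch training against an object detector.
# Detector choice, patch size, placement, loss, and data are illustrative.
import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
for p in detector.parameters():
    p.requires_grad_(False)            # only the patch is optimized

patch = torch.rand(3, 200, 200, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.01)

def paste_patch(image, patch):
    """Hypothetical helper: overlay the patch in a fixed corner. The paper
    instead warps and scales the patch onto the detected person region."""
    img = image.clone()
    img[:, :200, :200] = patch.clamp(0, 1)
    return img

# Stand-in for a dataloader over COCO images containing people (assumption).
person_loader = [[torch.rand(3, 480, 640) for _ in range(2)] for _ in range(5)]

for images in person_loader:
    patched = [paste_patch(img, patch) for img in images]
    outputs = detector(patched)        # list of dicts with "boxes", "scores", ...
    scores = torch.cat([out["scores"] for out in outputs])
    if scores.numel() == 0:
        continue                       # nothing detected in this batch
    # Suppress detections by driving every predicted box's confidence down.
    loss = scores.sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```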

The patterns were printed on paper dolls and other 3D objects to test the effects of physical deformations.
Adversarial sweatshirts

Researchers then moved on to physical world attacks, applying their adversarial examples to posters, paper dolls (folded printouts of test images at different scales) and sweatshirts. The wearable attacks significantly degraded the performance of SOTA object detectors across different environments.

The experiments show such digital attacks can transfer between models, classes and datasets, and also into the real world, although with less reliability than attacks on simple classifiers.

The paper Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors is on arXiv.

Author: Yuqing Li | Editor: Michael Sarazen

