These Hoodies and Sweatshirts Can Fool Surveillance Algorithms and Make You “Invisible”

Created by Facebook and University of Maryland researchers…


State-of-the-art AI-driven surveillance technology has given spying powers to every camera. We think of surveillance cameras as highly advanced digital eyes, watching over us or watching out for us. With the help of AI, these cameras now have brains to complement their eyes. This is good news for public safety: it helps police forces and detectives spot crimes and accidents more easily and has a range of scientific and industrial applications. At the same time, it is an invasion of privacy. So the question is —

Is there a way we can trick these surveillance algorithms and become “invisible”?

Researchers from Facebook and the University of Maryland have an answer. Nicknamed “invisibility cloaks” for A.I., their creations are a series of sweatshirts and T-shirts that trick surveillance algorithms and render the wearer imperceptible to detectors. The research presents a systematic study of adversarial attacks on state-of-the-art object detection frameworks. Using standard detection datasets, the team trained patterns that suppress the objectness scores produced by a range of commonly used detectors, as well as ensembles of detectors. They then printed these adversarial examples on sweatshirts and other items to “attack” object detectors and make them fail to recognize their targets in images or videos.
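To make the objectness-suppression idea concrete, here is a minimal, hedged sketch in PyTorch of one optimization step for such a patch. The detector interface, the apply_patch helper, and the tensor shapes are illustrative assumptions, not the authors’ actual code.

```python
import torch

def patch_attack_step(detector, images, patch, apply_patch, optimizer):
    """One optimization step for an adversarial patch (illustrative sketch).

    detector    -- frozen object detector returning per-anchor objectness logits (assumption)
    images      -- batch of person images, shape (B, 3, H, W)
    patch       -- learnable patch tensor, shape (3, h, w), values in [0, 1]
    apply_patch -- hypothetical helper that pastes the patch onto each detected person
    """
    patched = apply_patch(images, patch)            # render the patch onto the people
    objectness = detector(patched)                  # objectness logits, shape (B, num_anchors)
    # Push down the strongest detection in each image so the person "disappears".
    loss = torch.sigmoid(objectness).max(dim=1).values.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    patch.data.clamp_(0.0, 1.0)                     # keep the patch in a printable range
    return loss.item()
```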

Going into the details: to create the deceptive T-shirts, the Facebook and Maryland team ran 10,000 images of people through a detection algorithm. Whenever a person was detected, the candidate pattern placed on them was varied with randomized changes to attributes like contrast and brightness. A series of detectors was then used to check whether the randomized patterns could actually fool the algorithms. Once trained, the patterns were printed on physical objects such as dolls, paper posters, and clothing, including hoodies and sweatshirts. When a person wears one of these sweatshirts, the detector’s ability to identify them drops to around 50%.
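The randomized-variation step can be sketched as a small augmentation applied to the patch before every update, so the learned pattern keeps working under real-world lighting and printing conditions. The specific ranges and the function name below are assumptions for illustration, not values from the paper.

```python
import torch

def random_transform(patch):
    """Apply random brightness, contrast, and noise to a patch tensor in [0, 1]."""
    brightness = torch.empty(1).uniform_(-0.1, 0.1)   # random brightness shift
    contrast = torch.empty(1).uniform_(0.8, 1.2)      # random contrast scale
    noise = torch.randn_like(patch) * 0.02            # mild sensor-style noise
    out = patch * contrast + brightness + noise
    return out.clamp(0.0, 1.0)
```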

The pullover is a great way to stay warm this winter, whether in the office or on the roads.

It has a stay-dry microfleece lining, a modern fit, and adversarial patterns that evade most common object detectors. The YOLOv2 detector is evaded using a pattern trained on the COCO dataset with a carefully constructed objective.

Ian Goodfellow, the renowned research scientist who pioneered GANs, describes adversarial examples as “inputs to ML models that an attacker has intentionally designed to cause the model to make a mistake” —

“We show you a photo that’s clearly a photo of a school bus, and we make you think it’s an ostrich”.

According to an article published almost four years ago on these exploits, “The Byzantine science of deceiving artificial intelligence,” a group of researchers from Google and Penn State University has been devising defenses against potential attacks on artificially intelligent systems. Attacks that use adversarial examples could change what a self-driving car sees, activate voice recognition on a cell phone and make it visit a website hosting malware, or let a virus sneak through a firewall into a network.

In a 2015 paper, Google researchers showed it was possible to make a deep neural network classify an image of a panda as a gibbon by applying a barely perceptible perturbation.
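The technique behind that panda-to-gibbon example is the fast gradient sign method described in the 2015 paper. A minimal PyTorch sketch is below; the model, image, and label are placeholder inputs, not artifacts from the original work.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, true_label, epsilon=0.007):
    """Return an adversarial image that looks unchanged but can flip the prediction.

    model      -- any differentiable image classifier (placeholder)
    image      -- input tensor of shape (1, 3, H, W), values in [0, 1]
    true_label -- tensor of shape (1,) with the correct class index
    """
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)   # loss w.r.t. the correct class
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()  # tiny step in the worst-case direction
    return adversarial.clamp(0.0, 1.0).detach()
```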

You can access the research paper here —

Making an Invisibility Cloak: Real-World Adversarial Attacks on Object Detectors
