iPhone X Mask Dupe Raises AI Security Concerns

Synced | Published in SyncedReview | Nov 15, 2017

Machines have already surpassed humans in image recognition accuracy, but they remain susceptible to errors. Last Friday, Vietnamese company Bkav announced it had outsmarted the AI-powered face recognition system that unlocks the new iPhone X.

Bkav broke Face ID using a composite face mask made of 3D-printed plastic, silicone, makeup and paper cutouts. The company released a video showing the experiment: When the demonstrator unveils their creepy mask to the front camera, the iPhone is immediately unlocked.

Image courtesy of Bkav Corporation

Obviously, Face ID should not be so easy to crack with a mask. Apple’s new face recognition technology is powered by a system called TrueDepth, which includes a dot projector, an infrared camera and a flood illuminator. Rather than simply detecting a 2D image, the setup projects a grid of dots onto the subject, like a 3D contour mesh, to determine whether the presented face matches the user’s enrolled face model. Apple also employs machine learning algorithms and neural networks, run on the new A11 Bionic chip, to train Face ID.
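Apple has not published Face ID’s internals, but the matching step can be pictured as embedding-based verification: a network turns the depth scan into a feature vector, which is compared against the vector stored at enrollment. The sketch below is a minimal illustration of that idea only; the function names are hypothetical and a trivial placeholder stands in for the real trained network.

```python
import numpy as np

def embed(depth_map: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a neural network that maps a 3D depth
    scan to a fixed-length, unit-norm feature vector (embedding)."""
    # A real system would run the scan through a trained network;
    # here we just flatten and normalize as a placeholder.
    v = depth_map.astype(np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-8)

def matches_enrolled(scan: np.ndarray, enrolled: np.ndarray,
                     threshold: float = 0.9) -> bool:
    """Unlock only if the new scan's embedding is close enough (cosine
    similarity above a threshold) to the embedding stored at enrollment."""
    similarity = float(np.dot(embed(scan), enrolled))
    return similarity >= threshold

# Usage: enroll once, then verify later scans against the stored embedding.
enrolled_embedding = embed(np.random.rand(64, 64))      # stand-in for the owner's scan
print(matches_enrolled(np.random.rand(64, 64), enrolled_embedding))  # likely False
```

A spoof succeeds whenever an attacker can produce a scan whose embedding lands inside that acceptance threshold, which is exactly what a well-crafted mask attempts.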

When Face ID made its debut, Apple touted it as 20 times more secure than its predecessor, the fingerprint-based Touch ID, putting the odds that a random person could unlock someone else’s phone at one in a million. More importantly, Apple specifically claimed that masks could not fool the system.

Bkav’s method was simple: they scanned a test subject’s face, used a 3D printer to generate a face model, and affixed paper-cut eyes and mouth and a silicone nose. The total cost was only US$150.

While fooling Face ID is a coup for Bkav’s researchers, it is a nightmare for AI security at Apple and has raised widespread concerns about deep learning’s fragility in the face of adversaries.

When two different objects share an uncanny resemblance, deep learning models can confuse them. A well-known example in the AI community is telling chihuahuas apart from muffins.

Machines can also be fooled by handcrafted inputs called adversarial examples. By adding imperceptibly small perturbations to images, researchers from Google, Facebook, New York University, and the University of Montreal tricked a neural network into classifying a school bus and a dog as an ostrich.
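That research crafted its perturbations with an optimization procedure; a simpler way to see the same principle is the fast gradient sign method, sketched below in PyTorch. It nudges every pixel by at most a tiny epsilon in the direction that increases the classifier’s loss, which is often enough to flip the prediction. The model, image and epsilon value here are placeholders for illustration, not the original experiment.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_attack(model, image, label, epsilon=0.007):
    """Fast gradient sign method: shift each pixel by +/- epsilon in the
    direction that most increases the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The perturbation is imperceptibly small (bounded by epsilon per pixel),
    # yet often enough to change the predicted class.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage sketch: attack a pretrained ImageNet classifier on one image.
model = models.resnet18(pretrained=True).eval()
image = torch.rand(1, 3, 224, 224)           # stand-in for a preprocessed photo
label = model(image).argmax(dim=1)           # treat the current prediction as ground truth
adv = fgsm_attack(model, image, label)
print(model(adv).argmax(dim=1))              # may now differ from `label`
```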

San Francisco-based AI research institute OpenAI explains why it is so hard to defend against adversarial examples: “it is difficult to construct a theoretical model of the adversarial example crafting process… adversarial examples are also hard to defend against because they require machine learning models to produce good outputs for every possible input. Most of the time, machine learning models work very well but only work on a very small amount of all the many possible inputs they might encounter.”

“Even if iPhone X/FaceID is trained to reject some types of face masks, an adversary can create a mask very different than what it was trained on,” tweeted Andrew Ng of Deeplearning.ai.

Bkav’s cracking of Face ID made headlines and raised the stakes for consumer device privacy and, more generally, for AI-powered security. It will also intensify the race between security developers and attackers.

Journalist: Tony Peng | Editor: Michael Sarazen

