AI vs AI: ‘FakeSpotter’ Studies Neurons to Bust DeepFakes

Synced | Published in SyncedReview | Sep 23, 2019

In 2017, new deepfake technology broke the internet when a Redditor used face-synthesis technology powered by generative adversarial networks (GANs) to create and spread a series of fake celebrity porn videos.

In 2018, actress Scarlett Johansson, a frequent target of deepfake porn, spoke out against deepfakes in an interview with the Washington Post: "the internet is a vast wormhole of darkness that eats itself."

And in 2019, Facebook was criticized for failing to remove a viral video that had been manipulated to make US House Speaker Nancy Pelosi sound drunk.

Also in 2019, China's Standing Committee of the National People's Congress amended its Civil Code Personality Rights (Draft) in an attempt to reduce the malicious spread of AI-empowered deepfake images and videos. The amendment states that no organization or individual may infringe on the portrait rights of others through digitally faked content.

In its Worldwide Threat Assessment, the US Office of the Director of National Intelligence warns, "Adversaries and strategic competitors probably will attempt to use deep fakes or similar machine-learning technologies to create convincing — but false — image, audio, and video files to augment influence campaigns directed against the United States and our allies and partners."

Two weeks ago Facebook announced its “Deepfake Detection Challenge,” which aims to crowdsource deepfake detection solutions for exposing AI-synthesized videos that might mislead viewers.

Now a team of researchers from Nanyang Technological University, Kyushu University, Alibaba Group, and Xiaomi AI Lab has introduced a new approach that monitors neuron behavior to spot AI-synthesized fake faces. Proposed in the paper FakeSpotter: A Simple Baseline for Spotting AI-Synthesized Fake Faces, the approach may prove to be the Sherlock Holmes of fake face detection.

The researchers found that neuron coverage behaviors in deep face recognition systems differ between real and fake faces, and that this difference provides a critical clue for telling the two apart. Using this neuron coverage technique, they captured fine-grained facial features with deep face recognition systems such as VGG-Face, OpenFace, and FaceNet. Because neurons can learn meaningful representations of inputs in image processing, the researchers focused on the behavior of activated neurons as determined by a neuron coverage criterion they call "MNC." The last piece of the FakeSpotter puzzle is a linear binary classifier trained to detect fake faces: the number of activated neurons in each layer is assembled into a feature vector used to train this classifier.
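The paper's exact MNC rule and training pipeline are not detailed here, but the pipeline can be sketched roughly. The snippet below is a minimal, hypothetical illustration: it substitutes an ImageNet ResNet-18 for the face recognition backbones the team actually used (VGG-Face, OpenFace, FaceNet), treats any neuron whose output exceeds a simple threshold as "activated" (an assumption standing in for MNC), and feeds the per-layer activation counts to an off-the-shelf linear classifier (scikit-learn's LinearSVC; the article only says "linear binary classifier").

```python
# Hypothetical sketch of neuron-coverage features for fake-face detection.
# The backbone, threshold rule, and classifier choice are all assumptions;
# the FakeSpotter paper's actual MNC criterion may differ.
import torch
import torch.nn as nn
import torchvision.models as models
from sklearn.svm import LinearSVC

# Stand-in backbone (the paper uses face recognition nets, not ImageNet ones).
net = models.resnet18(pretrained=True).eval()

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Clone so later in-place ReLUs can't mutate what we stored.
        activations[name] = output.detach().clone()
    return hook

# Hook every conv layer so we can inspect its neuron outputs after a forward pass.
for name, module in net.named_modules():
    if isinstance(module, nn.Conv2d):
        module.register_forward_hook(make_hook(name))

def coverage_features(x, threshold=0.0):
    """Return per-layer counts of 'activated' neurons for one input batch.

    The rule (output > threshold) is an assumed placeholder for the
    paper's MNC criterion, which the article does not spell out.
    """
    activations.clear()
    with torch.no_grad():
        net(x)
    names = sorted(activations)
    return torch.tensor(
        [(activations[n] > threshold).sum().item() for n in names],
        dtype=torch.float32,
    )

# Hypothetical usage: images are aligned face crops; labels 0 = real, 1 = fake.
# feats = torch.stack([coverage_features(img.unsqueeze(0)) for img in images])
# clf = LinearSVC().fit(feats.numpy(), labels)  # the linear binary classifier
```

The key design idea is that the feature vector describes how the recognition network reacts internally to a face, rather than describing the face's pixels, which is what lets a simple linear model separate real from synthesized inputs.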

Experiment results show FakeSpotter reaching fake face detection accuracies of 78.23 percent, 80.54 percent, and 84.78 percent on VGG-Face, OpenFace, and FaceNet respectively, outperforming traditional deep CNNs.

The researchers believe the method can also be effective for deepfake detection in face-swap videos. Their future work will involve localizing tampered areas in images for forensic study, and building a benchmark comprising high-quality fake images, tools for synthesizing fake images, and current detection methods.

Rapid advancements in AI technologies have spawned fake speech, fake videos, and so on. Can AI now be used to counter these misuses and protect our privacy? Some are doubtful.

A new report from nonprofit research institute Data & Society, Deepfakes and Cheap Fakes, examines the proliferation of fakes from a different perspective. Authors Britt Paris and Joan Donovan argue that technical solutions alone are not enough to address the problem: “News coverage claims that deepfakes are poised to destroy video’s claim to truth by permanently blurring the line between evidentiary and expressive video. But what coverage of this deepfake phenomenon often misses is that the ‘truth’ of AV (Audiovisual) content has never been stable — truth is socially, politically, and culturally determined. And people are able to manipulate truth with deepfakes and cheap fakes alike.”

“With thousands of images of many of us online, in the cloud, and on our devices, anyone with a public social media profile is fair game to be faked,” note Paris and Donovan, who propose fighting deepfakes via social policy measures such as penalizing individuals for harmful behavior and increasing content moderation and positive social value promotion by tech giants.

Similar sentiments are emerging in academia regarding the role of individuals and enterprises in the fight against manipulative AI. Turing Award winner Yoshua Bengio, in his talk Learning High-Level Representations for Agents in last week's MIT CSAIL Dertouzos Distinguished Lecture Series, spoke of declining to work on certain AI projects and identified "Manipulation from advertising and social media" and "Increased inequality and power concentration in few companies" as dangers associated with the misuse of AI technologies. Bengio told Synced: "I wish for my work to be used for good causes, not ones which are hurting humanity."

(Image: from Yoshua Bengio's recent CSAIL talk)

The paper FakeSpotter: A Simple Baseline for Spotting AI-Synthesized Fake Faces is on arXiv.

Journalist: Fangyu Cai | Editor: Michael Sarazen

