Tracer Newsletter #45 (02/03/20) - UC San Diego researchers develop “adversarial deepfakes” designed to fool detection systems

Henry Ajder
Published in Sensity
Mar 2, 2020

Welcome to Tracer, the newsletter tracking the evolution of deepfakes, synthetic media, and emerging cybersecurity threats.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

This week we’re also happy to share the release of our latest collaborative research on the commoditization of deepfakes via NYU School of Law. Check it out on their Quorum blog!

UC San Diego researchers develop “adversarial deepfakes” designed to fool detection systems

Researchers from UC San Diego released a paper demonstrating that deepfake videos can be adversarially modified, causing certain deepfake detection systems to classify fake videos as real.

What is the premise of the research?

The research explores whether deepfake detection techniques could be fooled into classifying fake videos as real. Many of these deepfake detection methods are based on deep neural networks (DNNs) and achieve fairly high detection rates. However, once these detection systems are deployed “in the wild”, bad actors will inevitably try to subvert them, most likely using adversarial examples. In the case of a deepfake video, this involves making tiny pixel modifications that, while imperceptible to the human eye, can dramatically change how a DNN-based detection system processes the video. These adversarial deepfakes look indistinguishable from an ordinary deepfake, but may trick a detector into verifying a fake video as real.
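For intuition, here is a minimal sketch of this kind of gradient-based perturbation (in the style of the fast gradient sign method), assuming a hypothetical PyTorch `detector` that maps an image tensor in [0, 1] to a single “fake” logit. It is an illustration of the general idea only, not the attack used in the paper.

```python
# Minimal FGSM-style sketch: nudge each pixel slightly in the direction that
# lowers the detector's "fake" score, keeping the change visually imperceptible.
# `detector` and the input tensor layout are assumptions for illustration.
import torch

def perturb_towards_real(detector, frame, epsilon=2 / 255):
    frame = frame.clone().detach().requires_grad_(True)   # (C, H, W) in [0, 1]
    fake_logit = detector(frame.unsqueeze(0)).squeeze()   # higher = "more fake"
    fake_logit.backward()                                  # gradient w.r.t. pixels
    adversarial = frame - epsilon * frame.grad.sign()      # tiny signed step per pixel
    return adversarial.clamp(0, 1).detach()
```

With a small epsilon the perturbed frame is visually identical to the original, yet the detector’s score can shift substantially.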

What were the research findings?

The researchers created adversarial examples for each face in a set of deepfake videos by applying a standard off-the-shelf algorithm, then placing the modified faces back into their original frames. Across a variety of tests, these adversarial deepfakes fooled two published detection models with a high success rate. The researchers also demonstrated that it is possible to generate adversarial deepfakes that remain effective after the kind of video compression widely applied by social media platforms when a video is uploaded. To combat these attacks, the authors recommend that future research into deepfake detection focus on training techniques known to build robustness against adversarial examples.
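As a rough illustration of this crop-perturb-reinsert idea (not the paper’s actual implementation), the sketch below reuses the `perturb_towards_real` function above and assumes a hypothetical `detect_face_box` helper that returns a face bounding box for each frame, and a detector that accepts the cropped face directly.

```python
# Illustrative per-frame pipeline: crop the face, perturb it, paste it back.
# detect_face_box and the frame format are assumptions for illustration.
import torch

def adversarialise_frames(frames, detector, detect_face_box, epsilon=2 / 255):
    out = []
    for frame in frames:                                    # each frame: (C, H, W) in [0, 1]
        top, left, height, width = detect_face_box(frame)   # hypothetical face detector
        face = frame[:, top:top + height, left:left + width]
        adv_face = perturb_towards_real(detector, face, epsilon)
        adv_frame = frame.clone()
        adv_frame[:, top:top + height, left:left + width] = adv_face
        out.append(adv_frame)
    return out
```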

Twitter gives blue verification checkmark to fake congressional candidate using a synthetic profile picture

A student created a fake Twitter account with a synthetic profile picture to pose as a Republican congressional candidate for Rhode Island, and Twitter verified the account with a blue checkmark.

How did the account receive the blue checkmark?

The unnamed 17-year-old student claimed he created the fake “Andrew Walz” account to test Twitter’s election integrity efforts, following the company’s announcement that it would verify all congressional and gubernatorial candidates for this year’s US elections. Along with a fabricated bio and background, the fake candidate’s website and Twitter account used a synthetic StyleGAN2-generated profile picture taken from the website thispersondoesnotexist.com. The student then submitted the fake candidate’s information and a short survey to Ballotpedia, a non-profit “political candidate encyclopedia” that partnered with Twitter to help verify official candidates. Twitter then contacted the account to award the verification checkmark without requesting any ID or documentation.

The constantly evolving challenge of verifying social media accounts

Ballotpedia acknowledged the error, explaining that its reports to Twitter did not distinguish between declared candidates, who were establishing an online presence before filing, and filed candidates, who had officially registered for the elections. Twitter subsequently removed the account for violating its terms of service. The incident highlights the difficult task social media companies face when verifying accounts and guarding against constantly evolving techniques for subverting their platforms’ security protocols.

This week’s developments

1) Coinspring Inc. launched Familiar, a smartphone app for face-swapping users into a pre-approved selection of GIFs, with the process requiring only a single image of the user. (Product Hunt)

2) An MIT-supported researcher released Fifteen.AI, a free real-time text-to-speech tool that generates voices of characters from video games and cartoons using minimal training data. (Fifteen.AI)

3) A video circulating on social media was falsely claimed by some users to feature Greta Thunberg firing an AR-15 rifle, with the video actually featuring a young Swedish shooting enthusiast. (AP News)

4) MIT researchers conducted an experiment comparing the persuasive power of political video and text, with results indicating that videos only slightly increased a story’s believability over text. (PsyArXiv)

5) Two musicians synthetically generated and copyrighted “every possible MIDI melody in existence”, before releasing the collection to the public in an attempt to prevent copyright claims against musicians. (Vice)

6) Tencent and Tianjin University researchers published a new technique for generating synthetic images of a “child’s” face based on a pair of “parent” faces, with control over age and gender. (ArXiv)

Opinions and analysis

Will Instagram filters alter our view of beauty and who we are?

Chris Stokel-Walker explores how the increasing popularity of novelty and beautifying AR filter apps may impact the way we perceive ourselves in relation to our “enhanced” digital selves.

It doesn’t matter if anyone exists or not

Ian Bogost argues that there is little meaningful difference between the way we perceive images of real people and synthetically generated ones, based on the way we view other individuals in modern society.

Could deepfakes be used to train office workers?

Jane Wakefield and Beth Timmins outline some of the different commercial applications of AI-generated synthetic media currently being developed and reflect on the potential ethical challenges they present.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

Working on something interesting in the Tracer space? Let us know at info@deeptracelabs.com

To learn more about Deeptrace’s technology and research, check out our website
