Tracer Newsletter #57 (22/06/20) - Facebook publishes the final results of the Deepfake Detection Challenge

Henry Ajder · Published in Sensity · Jun 22, 2020

Welcome to Tracer, your guide to the key developments surrounding deepfakes, synthetic media, and emerging cyber-threats.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

Facebook publishes the final results of the Deepfake Detection Challenge

Facebook published the final results of the Deepfake Detection Challenge (DFDC) and reflected on the strengths, as well as the weaknesses, of the top-performing detection models.

How well did participants’ detection models perform?
The DFDC called for researchers to design new deepfake detection models and train them on a new public dataset of more than 100,000 videos created specifically for deepfake research. The dataset was designed to represent several face-swapping and speech-synthesis techniques, as well as low-level manipulations such as blurring, modified frame rates, and video re-encoding. Of the 2,114 models submitted by participants, the top-performing model achieved 82% average precision on this public dataset. When entrants’ models were evaluated against a private “black box” dataset, the top model’s performance dropped to 65% average precision. This private dataset was designed to include videos from ‘organic traffic’, both real and manipulated videos as they appear on social media, along with further manipulation techniques emulating how bad actors could attempt to fool detectors ‘in the wild’.
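For readers unfamiliar with the metric, average precision summarises the precision-recall trade-off of a binary real-vs-fake classifier. The sketch below is a minimal illustration of how such a score could be computed with scikit-learn, not the official DFDC evaluation code; the detector, model, and video variables are assumptions for illustration.

```python
# Minimal sketch (not the official DFDC evaluation code) of measuring a
# detector's average precision on two splits, assuming the detector
# returns a per-video probability that the video is fake.
import numpy as np
from sklearn.metrics import average_precision_score

def evaluate(detector, videos, labels):
    """Return average precision over videos labelled 1 (fake) or 0 (real)."""
    scores = np.array([detector(video) for video in videos])
    return average_precision_score(labels, scores)

# Hypothetical usage, mirroring the public vs. private gap described above:
# ap_public  = evaluate(model, public_videos, public_labels)    # ~0.82
# ap_private = evaluate(model, private_videos, private_labels)  # ~0.65
```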

The view from Deeptrace: Promising results, but no room for complacency
The DFDC has proved to be an informative experiment, exploring and confirming the current limits of deepfake detection. The 82% average precision achieved on the public dataset indicates that automated detection is technically viable, both in a controlled setting where the types of audio-visual manipulation do not vary too much between training and testing, and in the current online landscape of synthetic media. In fact, we estimate that between 90% and 95% of deepfake videos online today are created from variations of a single face-swapping technique.

It is perhaps unsurprising that the best average precision dropped to 65% on the private dataset, given that it was purposefully designed to test participants’ models adversarially on a much more challenging set of videos. This mirrors similar limitations in commercially deployed facial recognition and object detection systems when they are tested on adversarial inputs they have not previously been trained on. As detection techniques continue to progress, these results highlight the importance of continually monitoring deepfakes ‘in the wild’ to ensure detection remains robust to the full range of techniques being used.
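One way to probe this kind of robustness is to re-score a detector on copies of its test videos degraded with the low-level manipulations mentioned above (blurring, frame-rate changes, re-encoding). The sketch below is a hedged illustration using OpenCV; the frame lists and perturbation parameters are assumptions, not the DFDC’s actual augmentation pipeline.

```python
# Hedged sketch: low-level perturbations for stress-testing a deepfake
# detector. Parameters are illustrative, not the DFDC's own settings.
import cv2

def blur_frames(frames, ksize=9):
    """Gaussian-blur each frame to mimic a low-quality upload."""
    return [cv2.GaussianBlur(f, (ksize, ksize), 0) for f in frames]

def drop_frames(frames, keep_every=2):
    """Reduce the effective frame rate by keeping every n-th frame."""
    return frames[::keep_every]

def reencode_frames(frames, jpeg_quality=30):
    """Round-trip frames through JPEG to emulate aggressive re-encoding."""
    out = []
    for f in frames:
        _, buf = cv2.imencode(".jpg", f,
                              [int(cv2.IMWRITE_JPEG_QUALITY), jpeg_quality])
        out.append(cv2.imdecode(buf, cv2.IMREAD_COLOR))
    return out

# A detector that is robust 'in the wild' should score consistently on
# the clean frames and on each perturbed variant.
```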

This week’s developments

1) Twitter applied a manipulated media label to a Donald Trump tweet featuring a misleadingly presented video of a black child playing with a white child overlaid with a fake CNN caption. (BBC)

2) Fox News published altered and misleading images of Seattle protestors to make them appear violent, including digitally inserting an image of a man with an assault rifle and misappropriating images from violent protests in Minnesota. (CNN)

3) A Graphika investigation identified a Russian-deployed fake Marco Rubio tweet claiming that British spies were planning on using deepfakes to support the US Democrats, indicating that Russian entities have already tried to weaponise the idea of deepfakes to destabilise political processes. (Henry Ajder, Twitter)

4) Secure web access company Twingate published the results of a survey assessing Americans’ perception of deepfakes, finding that 70% of participants believed deepfakes should be illegal and over a quarter were very or extremely concerned about their personal likeness being used in a deepfake. (Twingate)

5) OpenAI announced the commercial release of a cloud-based API for its text generation tool GPT-3, with the organisation stating that the release is designed to “greatly lower the barrier to producing beneficial AI-powered products.” (OpenAI)

6) University of Queensland researchers published “Text-to-Face”, a technique for generating photo-realistic faces from a description of facial characteristics. (arXiv)

7) Developer Matt Round created “This MP Does Not Exist”, a StyleGAN web tool for generating images of fake MPs trained on a database of British MPs’ photos. (Vole.wtf)

Opinions and analysis

It matters how platforms label manipulated media. Here are 12 principles designers should follow.

Partnership on AI and First Draft researchers outline 12 principles to guide designers at digital platforms on best practices for applying (or not applying) labels to manipulated media, in order to reduce the harms of mis- and disinformation.

A future scenario for deepfakes: personalised synthetic advertising

Lenka Hamosova presents a research project exploring how synthetic media could be combined with targeted data collection to enable a new form of highly personalised video advertising.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

Working on something interesting in the Tracer space? Let us know at info@deeptracelabs.com or on Twitter

To learn more about Deeptrace’s technology and research, check out our website
