Tracer Newsletter #53 (11/05/20) - MIT release Detect Fake project to test users’ ability to spot deepfakes of normal people

Henry Ajder, Sensity
May 11, 2020

Welcome to Tracer- your guide to the key developments surrounding deepfakes, synthetic media, and emerging cyber-threats.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

MIT release Detect Fake project to test users’ ability to spot deepfakes of normal people

MIT Media Lab released Detect Fake, an online project designed to test users’ ability to spot deepfakes of ordinary people and examine performance differences between human and machine deepfake detection.

How does the project work?

The videos featured on Detect Fake are taken from the dataset created by Facebook for the Deepfake Detection Challenge (DFDC), which features 100,000 fake videos and 19,154 real videos. For the project, the researchers developed a machine learning model to curate the videos that are hardest for an AI to classify as real or fake. Detect Fake users are asked to identify these deepfake videos when they are presented alongside an unaltered video. After attempting to identify 10 deepfakes, users are shown how they ranked compared to other users.
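As a rough illustration of that curation step, one plausible approach is to rank clips by how close a detector’s fake-probability lands to the 0.5 decision boundary: the hardest clips are the ones the model is least sure about. The function and scores below are illustrative stand-ins, not MIT’s actual pipeline.

```python
# Hypothetical sketch of curating "hard" deepfake clips: run a detector
# over each video and keep those whose predicted probability of being
# fake sits closest to the 0.5 decision boundary.

def hardest_videos(scores: dict, k: int = 10) -> list:
    """Return the k video IDs whose fake-probability is nearest 0.5."""
    return sorted(scores, key=lambda vid: abs(scores[vid] - 0.5))[:k]

# Example with made-up detector outputs (probability the clip is fake)
scores = {"clip_a.mp4": 0.97, "clip_b.mp4": 0.52, "clip_c.mp4": 0.48,
          "clip_d.mp4": 0.03, "clip_e.mp4": 0.61}
print(hardest_videos(scores, k=3))
# -> ['clip_b.mp4', 'clip_c.mp4', 'clip_e.mp4']
```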

Examining the differences between human and machine deepfake detection

The researchers behind Detect Fake state that the project is designed to “focus on the differences between how human beings and machine learning models spot AI-manipulated media.” This includes understanding whether one is outright better than the other, or whether both humans and machines have their respective strengths and weaknesses when it comes to detecting different kinds of synthetic media.

As with similar websites that test users’ ability to identify deepfakes, such as whichfaceisreal.com, the authors also aim to educate users and improve their ability to detect different kinds of deepfakes, not just those targeting well-known celebrities.

This week’s developments

1) A privacy impact assessment found that the US Department of Homeland Security’s new biometric system is vulnerable to deepfakes due to its reliance on iris and facial recognition services. (fedscoop)

2) A deceptively edited video appearing to show US Vice President Mike Pence offering to carry empty boxes of PPE for a photo op received significant media attention, with Jimmy Kimmel uploading a video criticising the Vice President. (USA Today)

3) The U.S. Patent and Trademark Office rejected an application by Imagination Engines Inc listing their DABUS machine as an inventor, confirming that it can only issue patents to humans. (Bloomberg Law)

4) The Australian eSafety Commissioner released a brief to educate citizens on the threats posed by deepfakes and outlined its holistic approach to combating these threats. (Australian eSafety Commissioner)

5) A new device developed by Carnegie Mellon researchers allows users to simulate the feeling of walls and other solid objects in virtual reality. (TechXplore)

6) National Library of Scotland’s resident artist Martin Disley shared results from an experiment training a GAN on the library’s digitised historic map collection. (Martin Disley- Twitter)

7) NATO’s Strategic Communications division hosted an expert discussion on the roles deepfakes could play in future disinformation campaigns, and how stakeholders could mitigate these threats. (NATO StratCom COE- YouTube)

8) Researchers from the Chinese University of Hong Kong and Xiaomi AI Lab published IDInvert, a GAN-based technique for copying and pasting the foreground of a high-quality image and automatically adapting the image background; a rough sketch of the inversion idea follows below. (Github)
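For readers curious how GAN inversion of this kind works in outline, the usual recipe (which IDInvert follows in spirit) is to initialise a latent code with an encoder, then optimise it so the generator reconstructs the target image; the recovered code can then be edited or blended. The sketch below uses toy stand-in networks, not the authors’ released code.

```python
# Minimal, hypothetical sketch of GAN inversion: initialise a latent
# code with an encoder, then optimise it until the generator reproduces
# the target image. The tiny linear Generator/Encoder are toy stand-ins
# for real pretrained networks, not the released IDInvert models.
import torch
import torch.nn as nn

latent_dim, image_dim = 16, 64
generator = nn.Linear(latent_dim, image_dim)  # stand-in for a pretrained GAN generator
encoder = nn.Linear(image_dim, latent_dim)    # stand-in for a domain-guided encoder

for p in generator.parameters():              # freeze the generator; only z is optimised
    p.requires_grad_(False)

target = torch.randn(1, image_dim)            # the (toy) image to invert

# Step 1: initialise the latent code from the encoder's guess.
z = encoder(target).detach().requires_grad_(True)

# Step 2: refine z so that generator(z) reconstructs the target.
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(generator(z), target)
    loss.backward()
    opt.step()

# The optimised z can now be edited or blended with another image's
# latent code before decoding, enabling copy-paste-and-adapt effects
# like those described above.
```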

Opinions and analysis

Why the deepfake threat to the US 2020 election isn’t what you’d think

Joan Solsman explores the different ways weaponised deepfakes could target the US 2020 election, and argues that the most tangible risk is the growing awareness of deepfakes resulting in the dismissal of authentic digital media as fake.

Tracing trust: Why we must build authenticity infrastructure that works for all

Sam Gregory outlines key points that must be considered to ensure that authenticity infrastructure for capturing and tracking content doesn’t harm, and in fact enhances, freedom of expression and trust.

Businesses must prepare to protect their customers’ voices

Paul Mee and Gokhan Ozturk argue that cyberattacks using synthetic voice audio pose a significant threat to businesses as voice biometrics proliferate, and outline steps businesses can take to mitigate these risks.

The seven types of people who start and spread viral disinformation

Marianna Spring shares archetypal profiles of figures identified in BBC disinformation investigations who contributed to the creation and spread of coronavirus disinformation.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

Working on something interesting in the Tracer space? Let us know at info@deeptracelabs.com or on Twitter

To learn more about Deeptrace’s technology and research, check out our website
