Tracer Newsletter #59 (21/07/20)-Deepfake Detection API: The automated solution for identifying fake faces

Henry Ajder · Published in Sensity · Jul 22, 2020

Welcome to Tracer, your guide to the key developments surrounding deepfakes, synthetic media, and emerging cyber-threats.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

Deepfake Detection API: The automated solution for identifying fake faces

Last week, Reuters news agency published a story on Oliver Taylor (pictured above), a British student at the University of Birmingham who had written half a dozen editorials and blog posts on Jewish affairs, including for The Jewish Times and the Times of Israel.

However, the Reuters investigation was not focused on Oliver Taylor’s writing, but on the fact that he is not a real person…

Continue Reading on our website

News

1) The Daily Beast exposed a network of 19 fake authors using synthetic profile pictures to write Op-Eds for conservative news publications. (The Daily Beast)

2) MIT’s Center for Advanced Virtuality launched its “In Event of Moon Disaster” project; the website features the full-length deepfake of Richard Nixon delivering the Moon landing contingency speech, along with other educational resources related to the project’s themes. (MoonDisaster.org)

3) Reuters released a short breakdown of their process for identifying StyleGAN images in the wild, including the tell-tale signs identified in a StyleGAN image from their most recent investigation. (Reuters)

4) OpenAI researchers found that their generative model GPT-2 could complete the missing half of an image when trained on a large dataset of images instead of text. (MIT Tech Review)
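The underlying idea is autoregressive generation: the image is flattened into a sequence of pixels, and the model predicts one pixel at a time conditioned on everything before it. A minimal sketch of that loop, where `predict_next` is a toy stand-in for a trained model like OpenAI's (it is an assumption for illustration, not their actual predictor):

```python
import numpy as np

def complete_image(top_half, predict_next, total_pixels):
    """Autoregressively extend a flattened pixel sequence one pixel at a time.

    `predict_next` stands in for a trained model such as iGPT; here it is
    just a toy function, not OpenAI's actual model.
    """
    seq = list(top_half)
    while len(seq) < total_pixels:
        seq.append(predict_next(seq))  # condition on all pixels so far
    return np.array(seq)

# Toy "model": predict the rounded mean of the last three pixels.
toy_model = lambda seq: int(round(sum(seq[-3:]) / 3))

top = [10, 12, 14, 16]                       # known half of a 1x8 "image"
full = complete_image(top, toy_model, 8)
print(full)                                  # [10 12 14 16 14 15 15 15]
```

The same loop scales up conceptually: OpenAI's version simply swaps the toy predictor for a transformer trained on millions of flattened images.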

5) How can GAN-generated images be protected from misappropriation, and how can we identify which GAN model produced an image used for malicious purposes? A team of researchers proposed a novel technique that addresses both issues by embedding invisible watermarks into GAN-generated images. (arXiv)

6) Current generative models for video synthesis suffer from a consistency problem: each frame is rendered based only on the past few generated frames. New work by NVIDIA presents an approach that learns a 3D representation of the world, greatly improving the perceived consistency of the video over longer time periods. (NVIDIA)
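The consistency problem described above can be sketched in a few lines. A frame-by-frame generator sees only a fixed-size window of recent frames, so anything that scrolls out of the window can no longer influence later frames (`step` is a toy stand-in for a learned generator, an assumption for illustration):

```python
def generate_video(first_frames, step, n_frames, context=3):
    """Frame-by-frame generation with a fixed-size context window.

    Because each new frame sees only the last `context` frames, content
    outside the window cannot influence later frames -- the consistency
    problem that NVIDIA's persistent 3D world representation addresses.
    """
    frames = list(first_frames)
    while len(frames) < n_frames:
        window = frames[-context:]     # the model's entire "memory"
        frames.append(step(window))
    return frames

# Toy generator: the next frame is the sum of the visible window.
video = generate_video([1, 1, 1], step=sum, n_frames=7)
print(video)  # [1, 1, 1, 3, 5, 9, 17]
```

Keeping a persistent world representation, rather than a sliding window, is what lets NVIDIA's approach re-render earlier scene content consistently when the camera returns to it.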


Working on something interesting in the Tracer space? Let us know at info@deeptracelabs.com or on Twitter

To learn more about Deeptrace’s technology and research, check out our website
