Tracer Newsletter #51 (27/04/20)-Donald Trump retweets crude ‘deepfake’ video of Joe Biden

Henry Ajder
Sensity
Apr 27, 2020

Welcome to Tracer, your guide to the key developments surrounding deepfakes, synthetic media, and emerging cyber-threats.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

Donald Trump retweets crude ‘deepfake’ video of Joe Biden

Donald Trump retweeted a crude ‘deepfake’ video of Joe Biden sticking out his tongue and raising his eyebrows, marking a significant milestone for deepfakes in US politics.

Where did the video first emerge?

The video was originally posted by an anti-Biden and anti-Trump Twitter account on 23rd April, with Trump retweeting another of the account’s posts featuring the video at 8:25:50 pm ET on 26th April. The video appears to have been created using MugLife, a smartphone app that uses deep neural networks to animate a face with various expressions from a single photo. The GIF-style output is clearly manipulated, with the source photo appearing to be taken from footage of Biden’s campaign addresses delivered from his home.

A significant milestone for deepfakes in US politics

Trump and his campaign have shared several manipulated videos of Biden in the past, but this represents the first time Trump has shared a synthetically generated, albeit crude, “deepfake” video featuring entirely fabricated content. This comes at a time when there are growing concerns that it is not just foreign adversaries that could weaponise realistic deepfakes in the 2020 US presidential election, but also domestic actors and organisations.

Adversarial latent autoencoders: A new method for generating highly customisable StyleGAN images

West Virginia University researchers released a method for generating StyleGAN-quality images with the high degree of control achieved by auto-encoders.

How does it work?

Modern generative models are dominated by two main paradigms: GANs, which are known for the realism of their output, e.g. the fake photos of faces from thispersondoesnotexist.com, and auto-encoder architectures, which provide flexibility in manipulating the output in an interpretable way, e.g. tweaking a certain latent value to change a facial expression.

The adversarial latent autoencoder brings together these two approaches, generating realistic StyleGAN images that are highly customisable. This is achieved by training the StyleGAN model together with an encoder that: a) learns the generator’s input distribution, instead of assuming one fixed distribution, and b) imposes reciprocity between the generator and encoder in the latent space, rather than in the data space.
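The latent-space reciprocity idea can be illustrated with a deliberately tiny sketch: a fixed linear “generator” paired with a linear “encoder” trained so that encoding a generated image recovers the latent code that produced it. Everything below (the linear maps, toy dimensions, and plain gradient-descent loop) is an illustrative stand-in, not the paper’s StyleGAN-based implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, image_dim = 4, 16

# Toy linear "generator" G mapping latent codes w to images; held fixed here.
G = rng.normal(size=(image_dim, latent_dim))

# Toy linear "encoder" E mapping images back to latent codes, trained so that
# E(G(w)) ~= w -- the latent-space reciprocity constraint.
E = np.zeros((latent_dim, image_dim))

def reciprocity_loss(E, w):
    """||w - E(G(w))||^2 averaged over a batch of latent codes (columns of w)."""
    w_rec = E @ (G @ w)
    return float(np.mean(np.sum((w - w_rec) ** 2, axis=0)))

# Plain gradient descent on E only; the real ALAE also trains the generator,
# a mapping network, and a latent-space discriminator adversarially.
lr = 0.005
w_batch = rng.normal(size=(latent_dim, 32))
losses = []
for _ in range(300):
    x = G @ w_batch                       # generated "images"
    residual = w_batch - E @ x            # reciprocity error in latent space
    E += lr * 2.0 * residual @ x.T / w_batch.shape[1]
    losses.append(reciprocity_loss(E, w_batch))

print(f"reciprocity loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

In the actual architecture, reciprocity is one term in a joint adversarial training scheme; this sketch isolates only that term to show why the encoder ends up able to map a generated image back to a controllable latent code.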

Why is it significant?

Currently, most websites and openly available tools for generating StyleGAN1 and StyleGAN2 images do not allow users to customise the image output. Some companies have started selling generated faces that can be segmented by age, ethnicity, and gender, but users cannot directly tweak features of the image.

Adversarial latent autoencoders change this dynamic, allowing a user to directly change elements of a face’s composition. As with StyleGAN, the code has been open-sourced by its creators, meaning it is highly likely that the method will be implemented in an accessible browser-based format in the near future.

This week’s developments

1) UC Berkeley and Stanford researchers published a novel technique for detecting deepfake videos by identifying inconsistencies between a subject’s mouth shapes (visemes) and the “M”, “B”, or “P” sounds (phonemes) being pronounced. (arXiv)

2) An intelligence report by UK think tank RUSI concluded that British spies will need to use AI to detect and counter a range of AI-based attacks being developed by adversaries, including the use of weaponised deepfakes. (BBC News)

3) Microsoft and City University of Hong Kong researchers published an auto-encoder based technique for restoring old and damaged photographs. (arXiv)

4) A study by Carnegie Mellon University researchers found that 45.5% of Twitter accounts talking about the coronavirus pandemic have characteristics of bots. (Vice)

5) Insurer State Farm aired a deepfake commercial featuring SportsCenter anchor Kenny Mayne during a Chicago Bulls documentary, with edited 1998 footage showing Mayne ‘predicting’ the creation of the documentary series in 2020. (NY Times)

6) Georgia Institute of Technology researchers published an in-depth overview and analysis of different architectures for creating and detecting deepfakes. (arXiv)

Opinions and analysis

Reflecting on “this candidate does not exist”

Riley reflects on the process and motivations behind his experiment creating a fake account for a non-existent political candidate that was vetted and verified by Twitter.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

Working on something interesting in the Tracer space? Let us know at info@deeptracelabs.com or on Twitter

To learn more about Deeptrace’s technology and research, check out our website
