Tracer Newsletter #35 (26/11/19) - MIT filmmakers create a deepfake of Richard Nixon delivering an alternative moon landing disaster speech

Henry Ajder
Sensity · Dec 5, 2019

Welcome to Tracer, the newsletter tracking the key developments surrounding deepfakes/synthetic media, disinformation, and emerging cybersecurity threats.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

MIT’s Center for Advanced Virtuality creates a deepfake of Nixon delivering an alternative moon landing disaster speech

MIT’s Center for Advanced Virtuality created a deepfake of Richard Nixon delivering the contingency speech that was prepared in case the moon landing had failed, as part of its installation In Event of Moon Disaster.

How did the team make the deepfake?

The team sourced training data for the deepfake’s audio by recording three hours of an actor impersonating Nixon’s cadence and intonation. This recording was then processed by voice cloning company Respeecher, which generated a synthetic profile of Nixon’s voice and used it to “mask” the actor’s delivery of the speech. From there, the team worked with Israeli startup Canny AI to synthetically recreate Nixon’s lip movements to match the new audio, applied to existing footage of him delivering his resignation speech. These two components were then combined, with the final deepfake footage being displayed in a mock 1960s living room as part of the installation.
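For readers curious about the plumbing, the sketch below outlines the structure of this two-stage pipeline in Python. Everything here is a placeholder invented for illustration: Respeecher’s voice cloning and Canny AI’s video dialogue replacement are proprietary services with no public API, so the stubs only mark where each stage sits.

```python
# Hypothetical orchestration of the two-stage pipeline described above.
# Both functions are illustrative stubs: the real work was done by
# proprietary services (Respeecher for voice cloning, Canny AI for
# video dialogue replacement), which expose no public API.
from pathlib import Path


def clone_voice(actor_recording: Path, target_profile: str) -> Path:
    # Stage 1: re-synthesise the actor's delivery in the target voice,
    # preserving the actor's original cadence and intonation.
    raise NotImplementedError("stand-in for a proprietary voice-cloning service")


def replace_dialogue(source_footage: Path, new_audio: Path) -> Path:
    # Stage 2: re-animate the subject's lip movements in existing
    # footage so that they match the new audio track.
    raise NotImplementedError("stand-in for a proprietary dialogue-replacement service")


if __name__ == "__main__":
    # Actor audio -> Nixon's voice -> Nixon's lips in archive footage.
    nixon_audio = clone_voice(Path("actor_speech.wav"), "nixon")
    deepfake = replace_dialogue(Path("resignation_footage.mp4"), nixon_audio)
```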

A visceral way to explore alternative histories

The use of deepfakes to enhance artistic expression has been increasing throughout 2019, with most use cases focusing on political satire or viral “derpfakes” that faceswap celebrities for comedic effect. While In Event of Moon Disaster isn’t an entirely unique application of deepfakes (The Times previously synthetically recreated JFK’s voice delivering the speech he was due to give on the day he was assassinated), it is one of the most powerful examples to date of how deepfakes have enabled a new mode of AI-powered artistic communication and immersion.

University of Oxford researchers develop a method to generate a 3D model of a subject’s face from a single 2D image

Researchers from the University of Oxford’s Visual Geometry Group released a paper detailing a new unsupervised method for generating 3D models of subjects’ faces from a single 2D image.

How does the method work?

The technique uses a deep auto-encoder, a type of neural network, to disentangle a 2D image into components including depth, albedo (surface reflectance), viewpoint, and illumination. To train the auto-encoder to identify these components without supervision, the researchers relied on the naturally symmetric structure of faces: a face should look roughly the same when the image is flipped horizontally. The technique can then reconstruct a 3D model of the face presented in the image by reasoning about the computed illumination of that face, all without requiring a prior 3D shape model.
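As a rough illustration of the decomposition idea (this is not the authors’ code, and all layer sizes are invented), the PyTorch sketch below shows an encoder predicting per-pixel depth and albedo alongside global lighting and viewpoint parameters, plus the horizontal-flip consistency term that supplies the unsupervised symmetry signal. The published method additionally predicts confidence maps and reconstructs the input through a differentiable renderer, both omitted here.

```python
# Minimal sketch of the image-decomposition idea, assuming PyTorch.
# Layer sizes are illustrative only; the paper's architecture differs.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Decomposer(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared convolutional trunk (outputs at reduced resolution).
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.depth_head = nn.Conv2d(64, 1, 3, padding=1)   # per-pixel depth
        self.albedo_head = nn.Conv2d(64, 3, 3, padding=1)  # surface reflectance
        self.light_head = nn.Linear(64, 4)   # ambient + directional lighting
        self.view_head = nn.Linear(64, 6)    # rotation + translation

    def forward(self, img):
        h = self.trunk(img)
        pooled = h.mean(dim=(2, 3))          # global features for scalars
        depth = self.depth_head(h)
        albedo = torch.sigmoid(self.albedo_head(h))
        light = self.light_head(pooled)
        view = self.view_head(pooled)
        return depth, albedo, light, view


def symmetry_loss(depth, albedo):
    # The key unsupervised signal: a face's depth and albedo maps should be
    # approximately invariant under a horizontal flip (width is dim 3).
    return (F.l1_loss(depth, torch.flip(depth, dims=[3])) +
            F.l1_loss(albedo, torch.flip(albedo, dims=[3])))


img = torch.rand(1, 3, 64, 64)               # stand-in for a face crop
depth, albedo, light, view = Decomposer()(img)
print(symmetry_loss(depth, albedo))
```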

Why is the new method significant?

The technique’s ability to generate 3D models with minimal data and no human supervision opens up several possibilities, including automating the labour-intensive modelling processes currently used in CGI and enabling new synthetic image generation techniques for altering a subject’s viewing angle.

This week’s developments

1) Philippine Senate President Pro Tempore Ralph Recto filed a resolution calling for a congressional probe into deepfakes, with the aim of introducing policies that strengthen cybersecurity capabilities and data privacy. (CNN)

2) Market research firm Forrester predicts that deepfakes will cause $250m in damages in 2020, based on reports of synthetic voice audio of CEOs allegedly being used to defraud companies. (Forrester)

3) A Vice investigation found a large online marketplace selling non-consensually generated virtual avatars of celebrities and private individuals designed for sexual interaction via VR devices. (Vice)

4) Researchers found that ‘hyper-realistic’ face masks fooled people in one of every five trials in which they were asked to distinguish photos of the masks from photos of real faces. (Sky News)

5) The UK Conservative party were accused of deceiving the public by changing its official Twitter account branding to “factcheckUK” during a televised debate with the rival Labour party. (Guardian)

6) Cybersecurity firm CHEQ estimates that $78bn is lost each year to misinformation that negatively affects advertising, financial markets, corporate reputation, social media, and public health. (Cheddar)

7) A developer created “GP True or False”, a browser extension that detects whether webpage text has been synthetically generated, built on OpenAI’s recently released GPT-2 detector model (see the sketch after this list). (Github)

8) Two US Senators introduced a bipartisan bill that would direct several science and technology institutes to support research into detecting AI-generated synthetic media. (Homeland Prep News)
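As a brief aside on item 7, the snippet below sketches how text can be scored with OpenAI’s RoBERTa-based GPT-2 output detector. It assumes the Hugging Face transformers package and the publicly hosted roberta-base-openai-detector weights; the extension’s own internals may differ.

```python
# Minimal sketch, assuming the Hugging Face `transformers` package and the
# publicly hosted RoBERTa-based GPT-2 output detector; the "GP True or
# False" extension's internals may differ.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

sample = ("Welcome to Tracer, the newsletter tracking the key developments "
          "surrounding deepfakes, synthetic media, and disinformation.")
result = detector(sample)[0]

# The model labels text as "Real" (human-written) or "Fake" (model-generated).
print(f"{result['label']}: {result['score']:.2f}")
```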

Opinions and analysis

Three things newsrooms can do to combat deepfakes

John Bowers, Tim Hwang, and Jonathan Zittrain present three key recommendations for how newsrooms handling unverified media and information should counter the threats posed by deepfakes.

The pervert’s dilemma: A critique of deepfake pornography

Carl Öhman argues that the intuitively unethical nature of non-consensual deepfake pornography presents an ethical dilemma when considered in relation to the mental act of sexual fantasy.

The mystery of the tourist campaign for an island that doesn’t exist

Andy Baio explores the origins of a mysterious advertising campaign for the entirely fictional island of Eroda, and outlines how Facebook’s ad transparency tools may have helped to reveal its purpose.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

Working on something interesting in the Tracer space? Let us know at info@deeptracelabs.com

To learn more about Deeptrace’s technology and research, check out our website
