Tracer Newsletter #52 (04/05/20)-Jay-Z attempts to remove deepfake audio parodies from YouTube using copyright strikes

Henry Ajder
Published in Sensity
May 4, 2020

Welcome to Tracer, your guide to the key developments surrounding deepfakes, synthetic media, and emerging cyber-threats.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

Jay-Z attempts to remove deepfake audio parodies from YouTube using copyright strikes

Jay-Z’s agency Roc Nation attempted to remove two YouTube videos featuring synthetic voice audio of the rapper by issuing copyright strikes against the creator.

Who created the videos?

The videos were created by the YouTuber Vocal Synthesis, who has uploaded hundreds of videos featuring synthetic voice audio imitating celebrities and politicians, generated with the open-source software Tacotron 2. In this case, the creator uploaded four videos featuring an entirely synthetic version of Jay-Z’s voice, including performances of Shakespeare’s famous soliloquies from Hamlet and Billy Joel’s “We Didn’t Start the Fire”. Jay-Z’s agency Roc Nation issued two copyright takedown notices against the videos, claiming that the content “unlawfully uses an AI to impersonate our client’s voice.” These claims initially resulted in the videos being taken down, but Google stated that the filed copyright claims were incomplete, and the videos have since been reinstated on Vocal Synthesis’ channel.
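For readers unfamiliar with how systems like Tacotron 2 are structured, the sketch below is a toy illustration of the typical two-stage text-to-speech pipeline: a sequence-to-sequence model maps text to a mel spectrogram, and a separate vocoder turns the spectrogram into a waveform. This is not Tacotron 2 itself; both stages are stand-ins with made-up dimensions, shown only to convey the shape of the pipeline.

```python
import numpy as np

def text_to_mel(text, n_mels=80, frames_per_char=5):
    """Stand-in for the seq-to-seq stage: map characters to a mel
    spectrogram. A real model (e.g. Tacotron 2) learns this mapping;
    here we simply emit random frames of a plausible shape."""
    n_frames = len(text) * frames_per_char
    return np.random.rand(n_mels, n_frames)

def vocoder(mel, hop_length=256):
    """Stand-in for the vocoder stage (e.g. WaveNet or WaveGlow):
    map a mel spectrogram to an audio waveform, producing one hop
    of samples per spectrogram frame."""
    n_samples = mel.shape[1] * hop_length
    return np.random.uniform(-1.0, 1.0, n_samples)

# "Voice cloning" systems condition the first stage on a target
# speaker's recordings; the pipeline shape stays the same.
mel = text_to_mel("to be or not to be")
audio = vocoder(mel)
print(mel.shape, audio.shape)
```

The key point for the copyright debate above is that the output waveform is newly synthesised, not an edit of any copyrighted recording.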

A source of legal ambiguity: Deepfakes and intellectual property

There have been several other cases where celebrities have invoked copyright or threatened legal action in an attempt to take down deepfakes, notably including Kim Kardashian and Jordan Peterson. However, it is currently unclear whether these copyright claims are sufficient for having deepfakes removed. Although the models for generating synthetic voice audio or video may have been trained on copyrighted material, in most cases the outputs are an entirely new piece of synthetic media. As these cases inevitably grow in number, the function of copyright, fair use, and other IP mechanisms will likely experience formal reviews and even potential reformulation when applied to deepfakes imitating an individual.

This week’s developments

1) OpenAI released Jukebox, a neural network for generating raw music audio, including rudimentary singing, in a variety of genres and artist styles. (OpenAI)

2) Researchers from CLOVA AI and EPFL released an updated version of the image-to-image translation model StarGAN, with the new model providing an improved diversity of generated images and scalability of image translations across multiple domains. (Github)

3) Data Journalism released the Verification Handbook, a free expert-sourced guide on investigating online platforms and accounts for inauthentic activity and manipulated content. (Data Journalism)

4) Facebook open-sourced its 9.4 billion parameter chatbot Blender, with the model featuring improved decoding techniques and novel blending of conversational skills including empathy, knowledge, and personality. (Facebook AI)

5) Meme website imgflip saw renewed interest in its “This Meme Does Not Exist” tool, which generates new text captions for 48 popular meme templates. (imgflip)

6) Purdue University researchers developed a deepfake detection technique based on extracting visual and temporal features from faces to accurately detect manipulations. (arXiv)

7) The Australian Strategic Policy Institute released an accessible report exploring the dangers deepfakes pose to democracy and national security. (ASPI)

8) Academic and industry researchers developed a novel technique for generating expressive “talking head” videos from a single image, using only an audio file as input. (arXiv)

Opinions and analysis

What’s needed in deepfake detection?

Sam Gregory presents the key insights and recommendations gathered from stakeholder workshops focused on the deployment and development of deepfake detection tools.

Not a hoax: The threat of political “deepfake” laws

Mark Rumold argues that US state-level political “deepfake” laws are vulnerable to abuse based on poor definitions of deepfake and over-broad powers for suppressing political speech.

For the love of God, not everything is a deepfake

Samantha Cole argues that the manipulated GIF of Joe Biden retweeted by Trump last week is not a deepfake, and that the misattribution of the term diverts attention from the issues surrounding actual deepfakes that predominantly harm women.


Working on something interesting in the Tracer space? Let us know at info@deeptracelabs.com or on Twitter

To learn more about Deeptrace’s technology and research, check out our website
