Tracer #40 (13/01/20) - Facebook announces a new policy banning deceptive deepfakes on its platform

Henry Ajder
Published in Sensity
6 min read · Jan 13, 2020

Welcome to Tracer, the newsletter tracking the key developments surrounding deepfakes/synthetic media, disinformation, and emerging cybersecurity threats.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

Facebook announces a new policy banning deceptive deepfakes on its platform

Facebook has announced a new policy designed to remove from its platform deepfakes intended to deceive the average viewer, a move that has received a mixed response from the company’s critics.

What does the policy say?

The policy announcement, made in a blog post by Facebook’s Vice President of Global Policy Management Monika Bickert, states that deepfakes “pose a significant challenge for our industry and our society” and requires the “strengthening of [Facebook’s] policy toward misleading manipulated videos.” This involves the introduction of two new criteria for the removal of media from Facebook:

- The edited or synthetic video would mislead the average person into believing the subject of the video said words they did not actually say.

- The video is the product of artificial intelligence that has synthetically merged, replaced, or superimposed content onto a video, making it appear authentic.

The post also makes clear that the new policy does not apply to satire or parody, or to editing that omits or changes the order of spoken words. However, all content remains subject to review by Facebook’s network of fact-checkers, and to Facebook’s community standards on issues such as nudity, violence, and hate speech.

How has it been received?

The policy has received a mixed reception since its announcement, with the main concern being that it does not appear to cover misleading manual edits of media, commonly known as “shallowfakes”. This category of media manipulation is currently more common than deepfakes on social media, and includes the infamous Nancy Pelosi video in which her speech was slowed down, as well as a more recent misleadingly edited video of Joe Biden. While some have also criticised the policy as addressing what is currently a “distant problem” for the platform, others have acknowledged the importance of taking a proactive approach to a developing threat before damage is done.

Selling synthetic people: The startups providing “worry-free diverse models on demand using AI”

An investigation by the Washington Post’s Drew Harwell highlights the growing number of startups selling realistic AI-generated photos of non-existent people for a range of commercial activities.

How do these startups operate?

Startups selling synthetic images have been made possible by key developments in deep learning over the last few years. These developments are showcased by Nvidia’s StyleGAN architecture, which arguably sets the industry standard for generating realistic synthetic images of people, with customers able to specify attributes such as gender, age, and ethnicity. As these generative technologies have become increasingly accessible, startups have utilised them to generate novel synthetic images from their own large datasets of real photos. As Harwell observes, one of these startups explicitly markets its synthetic models as an easy way to increase ethnic diversity in marketing materials, while another advertises its images as a cheaper alternative to traditional photoshoots. While these companies currently offer only synthetic “head-shots”, at least one has confirmed plans to generate full-body synthetic models moving forward.
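At a high level, the generation process these services rely on amounts to sampling a random latent vector, optionally appending an attribute code (age, gender, and so on), and mapping it through a trained generator network to pixel values. The sketch below is a toy illustration of that data flow only: the “generator” is a single untrained random-weight layer, not StyleGAN or any vendor’s actual pipeline, and the attribute names are hypothetical.

```python
import numpy as np

# Toy sketch of conditional image generation (hypothetical, untrained).
# Real systems such as StyleGAN use deep convolutional networks
# trained on large datasets of real photographs.

LATENT_DIM = 64                             # size of the random "seed" vector z
ATTRS = ["female", "male", "young", "old"]  # example attribute flags (illustrative)
IMG_SIZE = 8                                # tiny 8x8 "image" for demonstration

rng = np.random.default_rng(0)
# Stand-in for trained generator weights: one linear layer.
W = rng.normal(size=(LATENT_DIM + len(ATTRS), IMG_SIZE * IMG_SIZE))

def generate(attributes):
    """Sample a latent vector, append a binary attribute code,
    and map it to pixel values in [-1, 1] via tanh."""
    z = rng.normal(size=LATENT_DIM)
    cond = np.array([1.0 if a in attributes else 0.0 for a in ATTRS])
    x = np.concatenate([z, cond]) @ W
    return np.tanh(x).reshape(IMG_SIZE, IMG_SIZE)

img = generate({"female", "young"})
print(img.shape)  # (8, 8)
```

The key point for the story above is the asymmetry of effort: once the weights are trained on a high-quality photo dataset, producing each additional synthetic face is as cheap as sampling a new random vector.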

Smart business or playing with fire?

There is clear commercial demand for these services, with one startup claiming it gained three clients in its first week of operation, and another claiming it has 2,000 prospective clients on a waiting list. On the other hand, the potential for bad actors to misuse synthetic photos has been well documented over the last year, with last week’s edition of Tracer detailing one of the most significant cases exposed by Facebook. The companies are likely to dismiss such concerns on the basis that the underlying software is open source and accessible to anyone who wants to use it. However, given the difficulty of sourcing a high-quality dataset to generate these synthetic images, it could be argued that these startups provide a convenient way to access realistic fake images, for both benign and malicious purposes.

This week’s developments

1) California Assemblymember Tim Grayson introduced a new bill that aims to criminalise the creation of non-consensual deepfake pornography and fund deepfake detection research. (CBS Sacramento)

2) A photoshopped image of Barack Obama shaking hands with Iranian President Hassan Rouhani circulated on social media after being misleadingly shared on Twitter by a US Congressman. (Snopes)

3) Reddit banned accounts and content that “impersonate individuals or entities in a misleading or deceptive manner”, in anticipation of deepfakes becoming more common on the platform. (Independent)

4) The US House Committee on Energy and Commerce held an expert hearing on manipulation and deception in the digital age, with a particular focus on deepfakes and election interference. (CNET)

5) A Queensland University researcher identified a network of bots and troll accounts behind false claims that a “leftist arson epidemic” is the cause of Australia’s current bushfire emergency. (Guardian)

6) Tumblr is launching an internet literacy initiative, World Wide What, which uses GIFs, short text, and memes to help young users learn how to spot suspicious activity online. (The Verge)

7) Chinese social media app TikTok updated its content policies to include the specific prohibition of misinformation that causes harm to users or the public, covering events such as elections. (Axios)

8) Researchers from Peking and Texas A&M universities developed “Deep Plastic Surgery”, a deep learning model that enables users to synthesise and edit photos based on hand-drawn sketches. (arXiv)

9) Deepfake YouTuber iFake used free face-swapping software to recreate, and arguably improve on, the VFX “de-ageing” of Robert De Niro and other actors in Martin Scorsese’s film “The Irishman”. (Esquire)

10) Twitter suspended fake accounts that impersonated an American and an Israeli journalist to spread fake articles containing Iranian regime propaganda across several platforms. (Daily Beast)

Opinions and analysis

The pros and cons of Facebook’s new deepfakes policy

Sam Gregory argues that while Facebook’s proactive preparation for malicious deepfakes is a good start, the platform must also address simpler forms of media manipulation that pose a current threat.

Disinformation for hire: How PR firms are selling lies online

Craig Silverman, Jane Lytvynenko, and William Kung explore the emerging global trend of lucrative “black PR firms” that offer a “disinformation for hire” service for companies and governments.

How bots are destroying political discourse as we know it

Bruce Schneier argues that the weaponisation of bots with increasingly sophisticated behaviour and realistic presentation threatens to overwhelm authentic political speech online.

Ten things tech platforms can do to safeguard the 2020 elections

John Borthwick and other tech industry experts outline ten key steps the major social media companies can take to help safeguard democratic processes and mitigate the weaponisation of their platforms.


Working on something interesting in the Tracer space? Let us know at info@deeptracelabs.com

To learn more about Deeptrace’s technology and research, check out our website
