Tracer Newsletter #39 (06/01/20) - Facebook investigation exposes network of fake accounts using synthetically generated profile pictures

Henry Ajder · Published in Sensity · Jan 7, 2020

Welcome to Tracer, the newsletter tracking the key developments surrounding deepfakes/synthetic media, disinformation, and emerging cybersecurity threats.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

Facebook investigation exposes extensive network of fake accounts using synthetically generated profile pictures

Facebook announced that it has removed a network of fake “pro-Trump” accounts that masqueraded as US citizens using realistic, synthetically generated profile pictures.

How did the network operate?

The accounts’ activity was traced to the US media company “The BL”, which is affiliated with The Epoch Media Group, an organisation that has previously pushed pro-Trump messaging on social media. Facebook stated that the fake accounts, posing as US citizens, posted pro-Trump BL and Epoch Media content at a high frequency on pages and in groups, including Chinese, Spanish, and Portuguese language versions. The fake accounts were also used alongside authentic accounts as admins for some BL-related pages and groups. Overall, Facebook stated that the network’s activities attracted 55 million page followers and 381,500 group members, and that a total of 610 accounts, 89 pages, and 156 groups affiliated with the network have been removed. At Deeptrace, we conducted further analysis of the synthetic images used by the fake profiles and how they were implemented, while Facebook partners Graphika and The Digital Forensic Research Lab (DFR) also published an in-depth report on the network.

A new challenge for social media companies and users

The takedown represents one of the first high-profile cases where synthetic images have been used at scale to enhance disinformation operations on social media. This follows previous cases where synthetic profile pictures on fake social media accounts were used in a more limited capacity to support espionage campaigns and deceive Tesla short-sellers. The growing use of synthetically generated profile pictures on social media suggests a degree of success in fooling users, who are likely unable to identify the images as synthetic at a glance. In addition, generating realistic synthetic images on demand, as opposed to scraping existing profile pictures, evades traditional investigations that use reverse image search tools to check whether an image has previously appeared in a different context.
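To illustrate why reverse image search fails here: such tools typically match a query image against an index of previously seen images using perceptual hashes. The toy sketch below (a minimal average-hash in pure Python, with a hypothetical index of "known" photos; real systems decode and resize actual image files) shows how a re-used stock photo matches the index while a never-before-seen synthetic face returns nothing.

```python
def average_hash(pixels):
    """Return a bit string: 1 where a pixel is above the mean brightness.

    `pixels` is a small grayscale grid (list of rows of 0-255 values).
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical index of previously seen profile pictures.
known_images = {
    "stock_photo_1": [[10, 200], [220, 30]],
    "stock_photo_2": [[5, 5], [250, 250]],
}
index = {name: average_hash(img) for name, img in known_images.items()}

def reverse_search(img, max_distance=1):
    """Return names of indexed images whose hash is close to img's."""
    h = average_hash(img)
    return [name for name, kh in index.items()
            if hamming(h, kh) <= max_distance]

# A slightly altered copy of a known photo still matches the index,
# but a freshly generated synthetic image has no prior copies to match.
match = reverse_search([[12, 198], [221, 28]])   # near stock_photo_1
miss = reverse_search([[200, 10], [30, 220]])    # never seen before
```

The design point is that the search only succeeds when some version of the image already exists in the index; an image generated on demand by a GAN has no such antecedent, which is exactly the investigative gap described above.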

TikTok’s parent company found to have developed an unreleased deepfake “face-swap” feature

An Israeli startup revealed that the code of TikTok and the app’s Chinese counterpart “Douyin” contains references to an unreleased feature that lets users swap their face into a selection of videos.

How would the feature work?

TikTok’s and Douyin’s code references the need for the user to provide a biometric scan of their face from multiple angles while pulling various expressions. This process allegedly helps the app verify the user’s identity and ensure the subject being scanned is real (and not, say, a photograph). Once the scan is complete, the user can swap their face into a set of videos licensed by the parent company ByteDance, with the generated output carrying a clear watermark. The hidden feature was revealed by Israeli market research startup Watcherful.ai, who activated the code and provided TechCrunch with a demo of the feature in use.

The growing accessibility of synthetic media technologies

A growing number of phone apps contain novelty face-swapping features, including Snapchat’s recently announced Cameos feature and dedicated face-swapping apps such as Zao and Carica. While TikTok claimed the feature would not be released (suggesting instead that it was designed for Douyin), the code’s discovery raises questions about the growing accessibility of synthetic media and how its mass deployment may lead to unexpected negative consequences. To minimise this potential for misuse, apps will need to consider how security features, such as liveness scans and pre-approved swapping videos, can deter the mass generation of non-consensual or otherwise harmful content.
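The two safeguards mentioned above, liveness scans and a pre-approved video set, can be combined into a simple gating pipeline. The sketch below is purely illustrative (all names such as `FaceSwapRequest` and `APPROVED_VIDEOS` are hypothetical, not TikTok’s actual implementation): requests are rejected unless the target video is platform-licensed and the face scan covers enough angles and expressions to suggest a live subject, and any generated output is watermarked.

```python
from dataclasses import dataclass

# Hypothetical set of videos licensed/pre-approved by the platform.
APPROVED_VIDEOS = {"licensed_clip_001", "licensed_clip_002"}

@dataclass
class FaceSwapRequest:
    video_id: str
    scan_angles: int        # distinct head angles captured in the scan
    scan_expressions: int   # distinct expressions captured in the scan

def liveness_ok(req, min_angles=3, min_expressions=2):
    """A multi-angle, multi-expression scan is hard to fake with a photo."""
    return (req.scan_angles >= min_angles
            and req.scan_expressions >= min_expressions)

def process(req):
    """Reject unapproved videos or failed liveness checks; watermark output."""
    if req.video_id not in APPROVED_VIDEOS:
        return "rejected: video not in approved set"
    if not liveness_ok(req):
        return "rejected: liveness check failed"
    return f"generated {req.video_id} with visible watermark"
```

Restricting swaps to a licensed video set is what prevents mass generation of arbitrary non-consensual content: a user can never supply their own target footage, only choose from clips the platform has vetted.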

Reuters launches free online course to help journalists identify different forms of manipulated media

Reuters launched a free online course to help journalists learn how to identify deepfakes and other forms of manipulated media in the wild.

What does the course contain?

The course comprises three chapters: manipulated media, deepfakes, and tackling manipulated media. The first two provide detailed examples of different kinds of manipulated media and how they can be created, while the third details techniques journalists can use to identify and counter manipulated media in the wild. The course takes around 45 minutes to complete and is currently available in English, French, Spanish, and Arabic, with plans to expand to 12 other languages including Mandarin, Hindi, and Russian. The course will also be the focus of several panels and events this year hosted by Reuters and Facebook, the course’s main sponsor.

An important resource to help journalists expose and fight manipulated media

The course’s free launch in multiple languages makes it a valuable educational resource for cultivating resistance to disinformation amongst a global community of journalists. While deepfakes currently form a small part of the manipulated media landscape, the resource will hopefully help journalists around the world learn more about established forms of media manipulation, such as shallowfakes, while preparing for those that pose a growing threat moving forward.

This week’s developments

1) A deceptively edited video falsely appearing to show Joe Biden endorsing white nationalism went viral on social media, with Biden warning about further disinformation ahead of the election. (NY Times)

2) Human rights organisation WITNESS released a report examining the key dilemmas that need to be addressed when using technology to implement a media “authenticity architecture”. (Witness)

3) Microsoft and Peking University researchers released a paper detailing a new high fidelity faceswapping technique that can swap faces that are occluded by items such as hair or hats. (arXiv)

4) Snapchat owner Snap acquired image and video recognition startup AI Factory, whose technology has been identified as powering Snapchat’s new faceswap “Cameos” feature, for $166m. (Cnet)

5) A Harvard student tested the vulnerability of online federal public comment processes by submitting a large volume of realistic fake comments generated by a “deepfake text” bot. (TechScience)

6) EPFL’s International Risk Governance Centre published a set of 15 recommendations covering a range of potential responses to the various threats presented by deepfakes. (EPFL)

7) A study by Ohio State University researchers found that people “self generate misinformation” by misremembering correct numerical information to match previously held beliefs. (Nieman Lab)

8) The UN’s World Intellectual Property Organisation (WIPO) launched a public consultation to discuss copyright protection and copyright violation related to AI generated synthetic media. (Torrent Freak)

Opinions and analysis

Deepfakes are on the rise. How should government respond?

Daniel Castro outlines US lawmakers’ approaches to deepfake legislation at the state and federal level, and argues that federal scope will be needed for laws designed to force platforms to remove content.

I created my own deepfake - it took two weeks and cost $552

Timothy Lee provides a detailed step-by-step overview of his experience creating a deepfake from scratch and reflects on how well the different components of his chosen process worked.

An interview with Ctrl Shift Face, king of YouTube deepfakes

Morgane Tual interviews the Slovenian deepfake creator behind the viral YouTube channel Ctrl Shift Face, providing insight into his motivations and the creative process behind his faceswap videos.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

Working on something interesting in the Tracer space? Let us know at info@deeptracelabs.com

To learn more about Deeptrace’s technology and research, check out our website
