Tracer Newsletter #37 (10/12/19) - Investigation finds deepfake forum users combining non-consensual deepfake pornography with custom 3D avatars

Henry Ajder
Published in Sensity
Dec 10, 2019 · 5 min read

Welcome to Tracer, the newsletter tracking the key developments surrounding deepfakes/synthetic media, disinformation, and emerging cybersecurity threats.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

Motherboard investigation finds deepfake forum users combining non-consensual deepfake pornography with custom 3D avatars

An investigation by Vice Motherboard journalists Samantha Cole and Emanuel Maiberg found users of a popular deepfake forum creating customisable 3D sex avatars ‘deepfaked’ with celebrities’ faces.

How are the deepfake avatars being created?

The deepfake celebrity avatars are created by capturing footage of custom 3D models and then running commonly used faceswapping software on that footage to replace the avatar’s generic face with that of a chosen subject. The 3D models’ movements and appearance are customisable before the faceswapping process is run, enabling the user to create explicit non-consensual sexual footage of an avatar bearing the likeness of the faceswapped target. Although most of these deepfake sex avatars were found in pre-recorded videos on deepfake forums, Cole and Maiberg also identified a recording demonstrating an interactive VR deepfake avatar that allowed the user to select preset sexual positions for the naked avatar to adopt in real time.

A disturbing new way of digitally controlling women’s bodies

As Cole and Maiberg observe, combining the two technologies compensates for the limitations of each, enabling a new form of customisable synthetic media that is being used exclusively to violate women’s dignity. Despite its explicitly unethical nature, non-consensual deepfake pornography is a well-established phenomenon, reflected in the growing number of videos hosted on dedicated platforms and the viral popularity of creation services such as DeepNude. While customisable 3D deepfake avatars may not be highly photorealistic, their potential to cause harm and humiliation to victims is by no means diminished, particularly given the fine-grained control users have over the avatars’ movements.

A NATO study used fake social media engagements to test social media companies’ response to platform manipulation

Researchers from NATO’s Strategic Communications Centre of Excellence (StratCom COE) found that Facebook, YouTube, Twitter, and Instagram all failed to detect “inauthentic behaviour” the researchers had purchased from online manipulation service providers.

How did the study work?

The study involved the researchers purchasing fake social media engagements from an online manipulation service provider to see whether the targeted platforms would successfully identify and remove the content. For €300, the researchers purchased 3,530 comments, 25,750 likes, 20,000 views and 5,100 followers across the above four platforms, targeting a total of 105 different posts. This enabled the researchers to identify a total of 18,739 fake accounts being used to deliver the engagements, and also map other pages that were purchasing similar fake engagements.
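As a rough back-of-envelope figure based on the numbers above, that €300 bought 3,530 + 25,750 + 20,000 + 5,100 = 54,380 individual engagements, or roughly €0.0055 (about half a euro cent) per fake engagement.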

What were the results?

The researchers found that 80% of the fake engagements were still online after four weeks. Even when the researchers notified each platform of a sample of 100 identified fake accounts, 95% remained online three weeks after the reports were filed. Despite the poor performance of all the targeted platforms, the researchers did observe disparities in how different platforms reacted to different kinds of fake engagements and accounts, with some performing better than others.

While the majority of the engagement activity by the fake accounts identified during the study appeared to be commercially driven, the researchers also identified accounts interacting on 721 political pages. With hundreds of manipulation service providers offering fake engagements similar to those procured by the researchers, large-scale social media manipulation is becoming increasingly accessible, and the study illustrates that the four major platforms are currently failing to counter this threat effectively.

This week’s developments

1) An experiment by information systems researchers found that reliability ratings of news sources provided by experts helped to limit the spread of articles containing misinformation. (The Conversation)

2) A South Korean politician proposed a “deepfake bill” that would amend existing sex crime laws to prohibit deepfake pornography, as calls for deepfake legislation grow in the country. (Korea Times)

3) A screenshot of a message referring to a military drill convinced millions of Britons that the Queen had died after it was shared extensively across social media and instant messaging services. (Guardian)

4) UK fact-checkers Full Fact deployed a new AI-powered tool that detects and classifies different kinds of statements found in the major UK political parties’ manifestos ahead of this week’s election. (Full Fact)

5) A series of unsubstantiated posts claiming that men in white vans are kidnapping women went viral on Facebook in the US, with the panic leading to Baltimore’s mayor issuing an official warning. (CNN)

6) Reddit’s security team confirmed that a post containing a controversial leaked document discussing a trade deal between the UK and the USA was likely part of a known Russian information operation. (Reddit)

7) Brigham Young University researchers developed a dungeon text-adventure game using OpenAI’s GPT-2 language model to generate the narrative for how each game unfolds (see the illustrative sketch after this list). (AI Dungeon)

8) Researchers from The AI Foundation created an interactive synthetic avatar of author Deepak Chopra as part of a “virtual AI-generated archive” initiative designed to immortalise individuals. (CNET)
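For readers curious how a game like the one in item 7 hooks into GPT-2, below is a minimal, illustrative sketch of prompt-based story generation using Hugging Face’s transformers library. It is not AI Dungeon’s actual implementation: the off-the-shelf “gpt2” checkpoint and the dungeon-style prompt are assumptions for demonstration, and the released game reportedly fine-tunes the model on adventure stories.

```python
# Illustrative sketch only: prompt-based story generation with GPT-2 via
# Hugging Face's transformers library. The "gpt2" checkpoint and the
# dungeon-style prompt are assumptions, not AI Dungeon's actual code.
from transformers import pipeline, set_seed

# Load the pretrained GPT-2 model as a text-generation pipeline
generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled continuation reproducible

# Seed the story, then let the model continue it; in a game loop the
# player's typed action would be appended to the prompt each turn.
prompt = "You are a knight exploring a ruined dungeon. You open the door and"
result = generator(prompt, max_length=80, num_return_sequences=1)
print(result[0]["generated_text"])
```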

Opinions and analysis

How deepfakes are being used to puncture politicians’ bluster

Raphael Tsavkko Garcia outlines how a Brazilian journalist’s satirical, “mocking” deepfakes of the Brazilian president Jair Bolsonaro are shaping a new form of political protest.

India’s IT minister is naive to conflate deepfakes with fake news

Ivan Mehta argues that the Indian IT minister’s limited definition of deepfakes illustrates the importance of properly understanding the technology and its applications before passing specific legislation.

How to fight lies, tricks, and chaos online

Adi Robertson presents a series of principles and reflections that internet users can adopt in order to slow down and critically evaluate the different forms of misleading content typically encountered online.

Facts won’t stop disinformation polluting our media environment

Whitney Phillips advocates for an ecological view of disinformation as a form of information pollution that can be combatted by challenging the “pollution conduits” in our actions, systems, and institutions.

Working on something interesting in the Tracer space? Let us know at info@deeptracelabs.com

To learn more about Deeptrace’s technology and research, check out our website.
