Tracer Newsletter #5 (25/03/19) - A suspected deepfake video of Gabon’s president causes national unrest

Henry Ajder
Published in Sensity
Dec 5, 2019

Welcome to Tracer, the newsletter tracking the key developments surrounding deepfakes/synthetic media, disinformation, and emerging cybersecurity threats.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

The Recap

Important stories, papers, and recent developments

A suspected deepfake video of Gabon’s president causes national unrest

An official government video of the Gabonese President Ali Bongo was branded a deepfake by his political opposition, resulting in rumours that he had died, and an attempted military coup.

Why is the video suspected to be a deepfake?

In the wake of a stroke and a lack of public appearances, much speculation already existed around whether Bongo was permanently incapacitated or dead. The video, released as a new year address, showed several suspicious signs that it could be a deepfake, such as Bongo hardly blinking, staying almost completely static, and speaking with a strange speech pattern compared to previous addresses.

Why should we be concerned about the uncertainty surrounding this video?

Despite the video’s suspicious nature, no definitive consensus was reached about its authenticity. Bongo has returned to Gabon from Morocco since the article’s publication, but this has done little to dispel rumours that the video is manipulated or entirely synthetic. The concern is now that the mere suggestion a video is a deepfake, even without definitive evidence, can spark a serious political crisis.

Facebook’s Codec Avatars enable you to create a VR ‘deepfake’ twin

Facebook Reality Labs has revealed it is developing highly realistic and personalised VR avatars that replicate the embodied and spatial experience of a real face-to-face conversation.

How are these avatars made?

Currently, the process involves capturing huge amounts of photographic data of a subject’s various facial expressions and head movements using 180 cameras. This data is then used to train a neural net to finely map elements such as expressions, mouth movements, and muscle deformations. The result is a synthetic “deep appearance model” of the subject that forms the basis of the final VR avatar.
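For illustration, a deep appearance model of this kind can be pictured as an encoder-decoder network: captured frames are compressed into a compact expression code and decoded back into the subject’s face, so the code space learns to span their expressions and head poses. The PyTorch-style sketch below is a rough approximation under that assumption; the layer sizes, names, and training details are illustrative and are not Facebook Reality Labs’ actual architecture.

```python
# Hypothetical sketch of a "deep appearance model": an encoder-decoder that
# maps captured face images to a compact expression code and back to a
# reconstructed face texture. All sizes and names here are assumptions for
# illustration, not Facebook Reality Labs' implementation.
import torch
import torch.nn as nn

class DeepAppearanceModel(nn.Module):
    def __init__(self, code_dim: int = 128):
        super().__init__()
        # Encoder: captured face images -> low-dimensional expression code
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, code_dim),
        )
        # Decoder: expression code -> reconstructed face texture
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        code = self.encoder(images)   # compress expression/appearance
        return self.decoder(code)     # reconstruct the face

# Training-loop sketch: reconstruct the captured frames so the learned code
# covers the subject's expressions and head movements.
model = DeepAppearanceModel()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
frames = torch.rand(4, 3, 64, 64)     # placeholder for captured image data
optimiser.zero_grad()
loss = nn.functional.mse_loss(model(frames), frames)
loss.backward()
optimiser.step()
```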

Why is Facebook developing these avatars?

Codec avatars are years away from being completed, and currently, Facebook hasn’t disclosed how or where they may be used. Regardless, the project raises serious questions about how this technology could be manipulated for malicious uses, particularly within the new and visceral space of VR.

Reuters creates a deepfake to help their journalists learn how to spot one

An initiative by Reuters explored how deepfakes could challenge journalists’ verification workflows by going through the process of creating a deepfake and testing it on various employees.

How did the experiment play out?

The deepfake was made to mimic a remote studio setup, with the subject speaking about a planned business expansion. It focused specifically on synthetically recreating the mouth and facial movements of the English-speaking subject around original French voice audio from a separate speaker.

What were the outcomes?

Those who were aware that the video was fake were quick to point out inconsistencies in audio-to-video synchronisation, mouth movement, and the static nature of the subject. However, those who weren’t aware reported that something seemed wrong, but were unable to identify anything in particular about the video. The suggestion was that deepfakes currently trigger an instinctive sense of unease, but that prior exposure to deepfakes is essential for preparing journalists to ‘place’ the source of this unease.

Other key developments

1) A newly released app, Morphin, synthetically grafts users’ selfies onto GIFs, creating personalised versions. Morphin suggests future uses may include personalised gaming and cinema. (TechCrunch)

2) Austrian antivirus testing firm AV-Comparatives found that two-thirds of all Android antivirus apps were fraudulent, with only 23 apps achieving a 100% detection rate in a simple malware detection test. (ZDNet)

3) Facebook announced new measures for detecting and blocking nonconsensual intimate images shared on the social network, such as revenge porn. (Facebook)

4) VideoFlow, from Google and the University of Illinois, proposes a model for video generation based on normalising flows, which can synthesise high-quality future frames in videos. (arXiv)

5) A consortium of researchers released benchmark results for different deepfake detection techniques, including a human expert, tested against 700 different deepfake videos. (FaceForensics)

6) Twitter has been probed by an Indian parliamentary panel regarding the prominence of hate speech and defamatory content on the platform, ahead of the Indian elections in May. (Reuters)

7) Ahead of the EU Parliamentary election in May, a partnership of 19 news organisations has formed FactCheckEU, a coalition for fact-checking politicians and combatting misinformation. (Poynter)

8) PEN America released a report warning against the “normalisation of fraudulent news and disinformation as campaign tactics” as part of the toolbox of modern campaigns. (PEN America)

Audio-Visual

Videos, podcasts, and demos

GauGAN: Turning rough sketches into photorealistic images

A team from NVIDIA, MIT, and UC Berkeley shows how their latest GAN, GauGAN, transforms sketches into photorealistic landscapes. Users sketch basic lines in an MS Paint-style interface, and the algorithm converts these lines in real time into realistic images of mountains, forests, lakes, and other scenery.

Compared to previous work, the technical contribution is in the generator, which uses a spatially adaptive normalisation layer akin to batch normalisation, but conditioned on the input segmentation map, with scale and bias terms that are spatially varying tensors rather than per-channel scalars.
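To make that idea concrete, here is a rough sketch of a spatially adaptive normalisation layer in this spirit. It is a minimal approximation for illustration, not NVIDIA’s released implementation: the layer widths and kernel sizes are assumptions, and the key point is simply that the scale (gamma) and bias (beta) are full per-pixel tensors predicted from the segmentation map.

```python
# Sketch of a spatially adaptive normalisation layer in the spirit of GauGAN's
# generator. Layer widths and kernel sizes are illustrative assumptions; the
# key idea is that gamma and beta are predicted per pixel from the
# segmentation map, rather than being learned per-channel scalars.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatiallyAdaptiveNorm(nn.Module):
    def __init__(self, feature_channels: int, label_channels: int, hidden: int = 128):
        super().__init__()
        # Parameter-free normalisation of the incoming activations
        self.norm = nn.BatchNorm2d(feature_channels, affine=False)
        # Small conv net that turns the segmentation map into per-pixel
        # modulation parameters
        self.shared = nn.Sequential(
            nn.Conv2d(label_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.to_gamma = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)
        self.to_beta = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)

    def forward(self, features: torch.Tensor, segmap: torch.Tensor) -> torch.Tensor:
        normalised = self.norm(features)
        # Resize the segmentation map to match the current feature resolution
        segmap = F.interpolate(segmap, size=features.shape[2:], mode="nearest")
        hidden = self.shared(segmap)
        gamma = self.to_gamma(hidden)   # per-pixel, per-channel scale
        beta = self.to_beta(hidden)     # per-pixel, per-channel bias
        return normalised * (1 + gamma) + beta

# Usage sketch: modulate 64-channel generator features with a 10-channel
# segmentation map (random tensor standing in for a one-hot label map).
layer = SpatiallyAdaptiveNorm(feature_channels=64, label_channels=10)
features = torch.randn(2, 64, 32, 32)
segmap = torch.randn(2, 10, 256, 256)
out = layer(features, segmap)           # shape: (2, 64, 32, 32)
```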

McCann’s deepfake demo draws a worrying prophecy from Mark Cuban

A recent deepfake produced for McCann Worldgroup’s event at the Mobile World Congress demonstrated the power of generative technologies to perform ‘deepfake lip-syncing’. The video depicts Jon Carney, Chief Digital Officer, whose mouth movements are synchronised with the speech of Harjot Singh, Chief Strategy Officer, in both English and Hindi. The video led the prominent American businessman Mark Cuban to comment that generative technologies could lead to a future where a “CEO can dance like Bruno Mars and sing in any language”.

A fake video of Donald Trump revealing himself to be bald goes viral

A fake video of Donald Trump removing his hat to reveal his bald head (and suggesting that his trademark wispy hair is a toupée) went viral across social media. Whilst the low-resolution fake may have caused many to do a double-take, basic inspection clearly shows Trump’s hand passing through his head where the video editing has taken place.

Bold Ideas

Arguments and opinions that got us thinking

Why artificial intelligence regulation is essential to avert a cyber arms race

Luciano Floridi and Mariarosaria Taddeo argue for the establishment of an international doctrine on cyber-warfare, in order to avert the proliferation of increasingly destructive AI-enabled cyber attacks.

Why was there an eerie absence of fakes after the New Zealand attack?

Craig Silverman and Jane Lytvynenko illustrate how the terrorist laid meticulous plans to ensure his live-streamed footage and manifesto spread rapidly, leaving no space or time for fakes and misinformation to take hold.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

Working on something interesting in the Tracer space? Let us know at info@deeptracelabs.com

To learn more about Deeptrace’s technology and research, check out our website
