Tracer Newsletter #36 (02/12/19) - New Chinese law criminalises the publication of deceptive deepfakes that contribute to “fake news”

Henry Ajder
Published in Sensity · Dec 5, 2019

Welcome to Tracer, the newsletter tracking the key developments surrounding deepfakes/synthetic media, disinformation, and emerging cybersecurity threats.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

New Chinese law criminalises the publication of deceptive deepfakes that contribute to “fake news”

The Cyberspace Administration of China (CAC) has introduced new laws governing the publication of video and audio content online, with the legislation taking particular aim at deepfakes and fake news.

What is the law and how is it being implemented?

The new law prohibits the creation and publication of any “fake news” made with technologies such as generative AI or virtual reality. Additionally, any other content that is synthetically generated with these technologies must be clearly labelled, with failure to do so also constituting a criminal offence.

The CAC seems particularly concerned about the potential dangers associated with deepfakes, with a statement on its website claiming that deepfakes “endanger national security, disrupt social stability, disrupt social order, and infringe upon the legitimate rights and interests of others”. The law comes into effect on January 1st 2020, with video and audio providers briefed in advance to help them prepare for the transition.

Legal responses to deepfakes go global

The CAC’s legislation follows a similar US law recently passed in California that criminalises the use of deepfakes targeting political candidates within 60 days of an election. However, this law and others being considered in the US are much narrower in scope, with the CAC’s legislation seemingly targeting both individuals and platforms for any use of deepfakes deemed by the authority to be “fake news”. The law’s introduction follows claims by the government earlier this year that it was considering banning deepfakes entirely, as well as controversy surrounding deepfake pornography and the viral face-swapping app Zao.

DARPA-funded researchers develop a technique for “spoofing” camera fingerprints

Researchers from the University of Naples Federico II and the Technical University of Munich (TUM) have developed a technique for inserting fake traces or “camera fingerprints” into synthetically generated images.

How does the spoofing technique work?

The technique exploits the central premise of certain image-manipulation detection systems: that every camera leaves unique traces or “camera fingerprints” on the images it captures, such as specific compression or demosaicing effects. If these traces are not found in an image, this indicates that the image was synthetically generated rather than organically captured. However, the researchers found that a GAN could be trained to recreate these camera-specific traces and “inject” them into synthetically generated images. The resulting spoofed images fooled state-of-the-art detectors into attributing them to a specific camera model, as well as fooling independent synthetic-media detectors into classifying the images as real.
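Purely as an illustration of the core idea, here is a minimal, hypothetical PyTorch sketch (the module, residual scale, and loss below are assumptions for illustration, not the authors’ actual architecture): a small network learns to add an imperceptible residual to a GAN-generated image so that its noise pattern matches a target camera’s reference fingerprint.

```python
import torch
import torch.nn as nn

class FingerprintInjector(nn.Module):
    """Hypothetical generator that adds a camera-trace residual to an image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, img):
        # Scale the residual down so the image stays visually unchanged.
        return img + 0.01 * self.net(img)

injector = FingerprintInjector()
opt = torch.optim.Adam(injector.parameters(), lr=1e-4)

# Stand-ins for real data: a GAN-generated image and the target camera's
# reference fingerprint (in practice estimated from many real photos).
fake_img = torch.rand(1, 3, 256, 256)
target_fp = 0.01 * torch.randn(1, 3, 256, 256)

# One training step: push the injected residual towards the reference
# fingerprint. A real system would likely add adversarial and perceptual
# losses to keep the output realistic; these are omitted here for brevity.
opt.zero_grad()
spoofed = injector(fake_img)
loss = nn.functional.mse_loss(spoofed - fake_img, target_fp)
loss.backward()
opt.step()
```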

Why is the technique significant?

The technique poses an explicit challenge to some of the most reliable image-manipulation detectors currently available, with the researchers hoping the results provoke the development of techniques that are more robust to unforeseen attacks. From a broader perspective, the ability to spoof camera fingerprints reinforces the adversarial dynamic facing detection systems and the need to continually test these systems against the latest forms of attack.

This week’s developments

1) Researchers from Zhejiang University developed Appearance Composing GAN, a new method for synthetically altering both a subject’s movements and physical appearance in video footage. (arXiv)

2) Facebook issued its first “legally required” correction notice under Singapore’s new fake news law, with the government claiming that the targeted post contained false information. (BBC News)

3) Games company Cards Against Humanity ran a Black Friday competition pitting sales of card packs written by its human writers against packs generated by a GPT-2 model trained on the company’s previous brainstorming data. (CAH)

4) British political parties have been accused of deceiving the public by spreading “fake local newspapers” promoting a specific party candidate ahead of the General Election. (First Draft News)

5) Nvidia researchers released an open-source GitHub repository for Mellotron, a speech-synthesis model that can generate audio of an individual’s voice singing or expressing distinct emotions. (GitHub)

6) Researchers found that adversarial “deepfake” attack models could be used to generate fake accounting documents that successfully compromised widely used fraud detection software. (arXiv)

7) Information literacy researcher Mike Caulfield released Walkthrough, a Windows tool intended to help teachers and other users create clear and accessible fact-checking infographics. (Hapgood)

8) Nvidia released a few-shot vid2vid synthesis technique that can generate detailed photorealistic videos of subjects or scenes not seen during training, using only a few reference images. (NVLabs)

Opinions and analysis

The infinite arms race between deception and detection

Lux Capital’s Josh Wolfe presents a talk on the ways technology is constantly being employed to both conceal and reveal truth and explains how this “arms race for reality” impacts our view of the world.

Why the fight against disinformation won’t be any easier in 2020

Alexandra Levine and her Politico colleagues argue that bad actors’ evolving use of disinformation, fake accounts, and trolling means they still pose a significant threat to the 2020 presidential election.

When is it OK to use CGI of dead actors in new movies?

Patrick Stokes analyses the emerging trend of “synthetically resurrecting” dead actors in new films, and emphasises the ethical ambiguity between remembrance and exploitation inherent to the practice.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

Working on something interesting in the Tracer space? Let us know at info@deeptracelabs.com

To learn more about Deeptrace’s technology and research, check out our website
