Tracer Newsletter #55 (02/04/20)-Adobe and Dalian University researchers release new “image inpainting” technique for removing objects from images

Henry Ajder
Published in Sensity
Jun 3, 2020

Welcome to Tracer, your guide to the key developments surrounding deepfakes, synthetic media, and emerging cyber-threats.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

Adobe and Dalian University researchers release new “image inpainting” technique for removing objects from images

Researchers from Adobe and China’s Dalian University of Technology released an open demo for a new “image inpainting” technique that significantly improves on previous methods.

How does it work?

Previous image inpainting techniques frequently created visual artefacts when attempting to realistically fill the large holes left by object removal. The new technique addresses this flaw with a deep generative model that produces an inpainting result alongside a corresponding confidence map. The model fills the image hole iteratively, committing the pixels where it has the highest confidence first and gradually filling the remaining pixels over further iterations. The technique is combined with a guided upsampling network to improve the resolution of its inpainting results, as well as enhanced training data that better mimics real object removal scenarios.
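The iterative, confidence-guided fill described above can be illustrated with a toy sketch. This is not the researchers’ model: where their system uses a deep generative network that outputs both the inpainted pixels and a learned confidence map, the sketch below approximates confidence by the share of already-known pixels in each 3×3 neighbourhood and stands in a simple neighbourhood mean for the generator. The function name and parameters are illustrative assumptions.

```python
import numpy as np

def iterative_inpaint(image, hole_mask, n_iters=50, conf_threshold=0.5):
    """Toy sketch of confidence-guided iterative hole filling.

    image: 2D float array; hole_mask: boolean array, True where pixels
    were removed. A real system would replace the neighbourhood mean
    with a generative model and the neighbour count with its learned
    confidence map.
    """
    img = image.astype(float).copy()
    known = ~hole_mask
    h, w = img.shape
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    for _ in range(n_iters):
        if known.all():
            break
        k = known.astype(float)
        # Count known neighbours in a 3x3 window (zero padding, centre excluded).
        kpad = np.pad(k, 1)
        neigh = sum(kpad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                    for dy, dx in offsets) - k
        conf = neigh / 8.0  # confidence proxy: fraction of known neighbours
        # Neighbourhood mean of known values stands in for the generator.
        vpad = np.pad(img * k, 1)
        vsum = sum(vpad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                   for dy, dx in offsets) - img * k
        fill = np.divide(vsum, neigh, out=np.zeros_like(vsum), where=neigh > 0)
        # Commit only the high-confidence hole pixels this iteration;
        # the rest are filled in later iterations, as in the paper's scheme.
        commit = ~known & (conf >= conf_threshold)
        if not commit.any():
            commit = ~known & (conf == conf[~known].max())
        img[commit] = fill[commit]
        known |= commit
    return img, known
```

Pixels on the hole boundary have many known neighbours, so they are filled first; each committed pixel raises the confidence of its inward neighbours, and the fill front marches toward the hole centre over successive iterations.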

The increasing accessibility of AI-powered editing tools

According to the researchers, the technique significantly outperforms existing methods in both qualitative and quantitative evaluations. A free and accessible demo was released alongside the technical paper, allowing users to remove objects from their own uploaded images. To use it, users simply ‘paint’ over the area of an image they wish to remove and instantly receive the inpainted result.

This week’s developments

1) Max Planck Institute researchers published a new deep learning-based deepfake detection technique and an accompanying state-of-the-art deepfake dataset for benchmarking detection techniques’ accuracy. (arXiv)

2) OpenAI researchers published a paper assessing the performance of their generative language model GPT-3 across different NLP datasets. (arXiv)

3) Scammers attempted to steal cash from selected targets by using pre-recorded video of Tron cryptocurrency creator Justin Sun to make it appear he was speaking with them on a live Skype call. (Bitcoin Insider)

4) IBM researchers developed a technique for automating the generation of new maps using incomplete or inaccurately labelled high-resolution aerial images. (arXiv)

5) Faceswapping app Impressions added support for swapping faces with the recently deceased basketball player Kobe Bryant, raising questions about whether this use case violates Bryant’s image rights. (Chris Messina-Twitter)

6) A consortium of university researchers developed Autosweep, a new technique for recovering 3D editable objects from a single photograph. (Chenxin.Tech)

7) Reuters and CNN won Webby awards for their respective resources explaining different kinds of media manipulation and the Pentagon’s race against deepfake videos. (Reuters/CNN)

8) The Commission for the Enrichment of the French Language (CELF) encouraged French speakers to use the term “videotox infox” instead of the English-derived “deepfake”. (BBC News)

Opinions and analysis

Meet the digital embalmers helping celebrities, brands, and individuals plan their digital afterlives

Cathy Hackl explores the growing industry of “digital embalmers” who organise clients’ digital afterlife and synthetic resurrection through a variety of mediums.

Deepfakes in South Korea: A new kind of crime

Yoon So Yeon outlines the destructive impact deepfake pornography has had on South Korean women, and how the government is taking action in an attempt to counter this activity.

The real threat of fake voices in a time of crisis

Jinyan Zang, Latanya Sweeney, and Max Weiss recount their study deploying fake generated comments to undermine a government public consultation online, and argue that these “deepfake text” attacks pose a serious threat to democratic processes.

‘Deepfakes’ are here. These deceptive videos erode trust in all news

Cristian Vaccari and Andrew Chadwick outline the findings of their recent study analysing the impact of deepfakes on trust in news media, and argue deepfakes’ biggest threat may be undermining our ability to meaningfully discuss public affairs.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

Working on something interesting in the Tracer space? Let us know at info@deeptracelabs.com or on Twitter

To learn more about Deeptrace’s technology and research, check out our website
