Tracer Newsletter #54 (18/05/20) - Synthetic media startup launches marketplace for buying licensed AI-generated characters

Henry Ajder
Published in Sensity
May 18, 2020

Welcome to Tracer, your guide to the key developments surrounding deepfakes, synthetic media, and emerging cyber-threats.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

Synthetic media startup launches marketplace for buying licensed AI-generated characters

Synthetic media startup Alethea.ai launched an online marketplace for buying licensed “AI-generated characters” that are cryptographically labelled to indicate that they are manipulated.

How does the marketplace work?

The marketplace, as summarised by Cointelegraph, aims to provide “infrastructure for licensing, circulating and monetizing legal and permissioned [AI generated] creations.” Individuals can give consent for their image and/or voice to be used for custom avatar generation, and receive payment for its use. The marketplace also features entirely “fictional” AI-generated characters that can be used for a lower price. All generated avatars are then cryptographically marked by Oasis Labs to provide an indication that the media has been manipulated.

Distinguishing legitimate from malicious uses of synthetic media

The marketplace seeks to formalise and secure the process of generating synthetic media using a similar approach to “controlled capture” techniques, where authentic media is verified at the point that the image/video is taken. In Alethea’s case, this approach represents an attempt to distinguish their synthetic media from malicious or non-consensual examples. As commercial applications of AI-generated synthetic media continue to gain traction, companies and creators will likely come under increasing pressure to ensure their services are responsibly deployed in a way that minimises the potential for misuse.
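
Neither Alethea.ai nor Oasis Labs has published implementation details for this marking, so the snippet below is only a minimal Python sketch of what point-of-generation labelling can look like in general: a hash of the generated asset plus provenance metadata is signed with the issuer’s private key, and anyone holding the public key can later verify that a given file still matches its label. The function names (`label_asset`, `verify_label`) and metadata fields are hypothetical.

```python
# Hypothetical sketch of point-of-generation labelling; not Alethea's or
# Oasis Labs' actual scheme, just one way such a label could work.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def label_asset(asset_bytes: bytes, metadata: dict, key: Ed25519PrivateKey) -> dict:
    """Bind provenance metadata to a generated asset and sign the result."""
    payload = {
        "sha256": hashlib.sha256(asset_bytes).hexdigest(),  # fingerprint of the media
        "metadata": metadata,  # e.g. {"synthetic": True, "licensed": True}
    }
    message = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": key.sign(message).hex()}


def verify_label(asset_bytes: bytes, label: dict, pub: Ed25519PublicKey) -> bool:
    """Check that the label matches the asset and was signed by the issuer's key."""
    if hashlib.sha256(asset_bytes).hexdigest() != label["payload"]["sha256"]:
        return False  # the media was altered after it was labelled
    message = json.dumps(label["payload"], sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(label["signature"]), message)
        return True
    except InvalidSignature:
        return False


# Usage: label a (placeholder) generated avatar, then verify it downstream.
issuer_key = Ed25519PrivateKey.generate()
avatar_bytes = b"placeholder avatar bytes"
label = label_asset(avatar_bytes, {"synthetic": True, "consented_likeness": True}, issuer_key)
assert verify_label(avatar_bytes, label, issuer_key.public_key())
```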

This week’s developments

1) Facebook AI developed a real-time neural text-to-speech system that can process 1 sec of audio in 500 ms using only CPUs instead of more powerful GPUs or specialised hardware. (Facebook AI)

2) Digital studio Brud’s virtual influencer and artist Lil Miquela signed with CAA, marking the talent agency’s first signing of a virtual client. (Variety)

3) Two Belgian computer science students created “Fake Fake News”, a website featuring GPT-2-generated satirical news articles based on a training dataset of satirical articles from The Onion. (Fake Fake News)

4) Epic Games released a demo for the upcoming Unreal Engine 5, stating that the engine’s aim is to generate photorealism on par with movie CG and real life, while also ensuring it is within practical reach of development teams. (Unreal Engine)

5) Donald Trump retweeted a doctored “meme” video of his head superimposed on the actor who played the US president in the film Independence Day. (Donald Trump-Twitter)

6) (NSFW) Artist Shardcore launched “The Machine Gaze”, a digital exhibition of synthetic pornographic images generated by a combination of machine learning techniques, including Generative Adversarial Networks (GANs), neural style transfer, and Google’s DeepDream. (The Machine Gaze)

7) Software engineer Thomas Dimson created a website for generating words that don’t exist, along with their definitions and example usage. (thisworddoesnotexist.com)

8) Artist and developer Javier Ideami created Loss Landscape, a series of machine-generated images that explore the morphology and dynamics of the fingerprints left by deep learning optimisation processes. (Loss Landscape)

9) Microsoft and Intel researchers developed a novel deep learning approach that translates malware binaries into images to aid classification and detection (see the sketch after this list). (ZDNet)

10) Digital content creator Tristan Cross created a detailed model of his local pub in VR, including interactive pints and motion-captured versions of his friends. (Wired)
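
The Microsoft/Intel work in item 9 rests on rendering a malware binary as a greyscale image that a standard image classifier can then consume. The researchers’ actual pipeline is not reproduced here; the following is only a generic sketch of the bytes-to-image step, with an arbitrary fixed width.

```python
# Generic sketch of the "binary as greyscale image" step behind item 9:
# each byte (0-255) becomes one pixel; the resulting image can then be fed
# to an ordinary image classifier. The fixed width of 256 is arbitrary.
import numpy as np
from PIL import Image


def bytes_to_image(path: str, width: int = 256) -> Image.Image:
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    height = int(np.ceil(len(data) / width))
    padded = np.zeros(width * height, dtype=np.uint8)  # zero-pad the final row
    padded[: len(data)] = data
    return Image.fromarray(padded.reshape(height, width), mode="L")


# A classifier (e.g. a CNN fine-tuned on such images) would consume
# bytes_to_image("sample.bin") after resizing to its expected input size.
```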

Opinions and analysis

All’s clear for deepfakes: Think again

Bobby Chesney, Danielle Citron, and Hany Farid argue that a recent NATO discussion downplaying the threats posed by deepfakes failed to account for a number of existing harms that particularly impact women and political discourse.

Tracing Trust: Media authenticity infrastructure

WITNESS’ Corin Faife presents a series of videos and a live-stream exploring the key issues that should be considered when developing an enhanced media authentication infrastructure.

What does JAY-Z’s fight over audio deepfakes mean for the future of AI music?

Marc Hogan explores how the future of AI-generated music and sampling could be shaped by the legal debate surrounding Jay-Z’s recent attempt to issue copyright strikes against deepfake audio imitating his voice.

The new AI tools spreading fake news in politics and business

Hannah Murphy outlines the emerging AI tools that are being used to accelerate the “democratisation of propaganda”, and how this process could be countered.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

Working on something interesting in the Tracer space? Let us know at info@deeptracelabs.com or on Twitter

To learn more about Deeptrace’s technology and research, check out our website
