Deepfakes: What Will You Believe?

Get Famous if You Want to Save Yourself

Aisha Tritle
Omnidya AI
Mar 4, 2019


A world where you can no longer believe what you see. Where what used to count as hard evidence is easily crafted.

Pretty dystopian, right? This is what deepfakes, AI-generated fake media, can lead to. These fake videos look so realistic that even if you tell the truth and say you didn’t go to that disastrous party where everyone got arrested or married, someone could create a video convincing enough to make it seem as if you really were there.

Sucks, right?

What are some real-life examples of deepfakes?

Well, someone created a series of deepfakes placing Nicolas Cage in movies he was never actually in. Wild, I know. Skip to 0:51 to see him as Lois Lane in Man of Steel.

Henry Cavill with and without a mustache. Credit: Warner Brothers

Someone also taught an AI to shave off Henry Cavill’s mustache. Total cost of that trained AI? $500. On the other hand, the budget for the Justice League reshoots, during which the studio had to digitally remove the mustache Cavill grew for MI6, was $25 million. And you can bet your buttons they spent more than $500 cleaning up Superman’s facial hair.

Unfortunately, deepfakes are being utilized for far more nefarious purposes. Scarlett Johansson is a frequent victim of the now all-too-common occurrence of deepfake porn, where her face is edited on top of someone else’s body. Though Pornhub promised to ban all deepfakes from their platform back in February of 2018, they’ve been lax about enforcing this rule.

Deepfakes have been labelled such a threat that researchers at OpenAI, a non-profit backed by illustrious individuals such as Elon Musk and Peter Thiel, are only releasing a reduced version of Generative Pre-trained Transformer-2 (GPT-2), an ML system that generates text based on brief writing prompts. Why only a reduced version? The researchers explained in a blog post:

“Our model, called GPT-2 (a successor to GPT), was trained simply to predict the next word in 40GB of Internet text. Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.”

They were worried about their creation being used to create abusive language at scale. How scary when your baby is capable of such darkness.
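The quoted idea of being “trained simply to predict the next word” can be illustrated with a toy sketch, nowhere near GPT-2’s scale or sophistication: a bigram model that counts which words follow which, then extends a prompt by sampling likely next words. All names here are my own, hypothetical choices:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    follows = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows, prompt, length=5, seed=0):
    """Extend a prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(candidates))
    return " ".join(out)
```

GPT-2 does this with a neural network over 40GB of text instead of a lookup table, which is exactly why its output is fluent enough to worry its own creators.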

But on a somewhat positive note:

People are trying to leverage artificial intelligence to detect deepfakes.

Researchers at University at Albany, SUNY, trained an AI to detect deepfakes by detecting “eye blinking in the videos, which is a physiological signal that is not well presented in the synthesized fake videos.” They’ve also recently identified other ways to figure out if a video is authentic, such as by analyzing head angles and checking to see if a face is overly smooth.
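The blink cue the SUNY researchers describe is commonly measured with an “eye aspect ratio” over facial landmarks: the eye’s vertical opening shrinks toward zero during a blink. Here’s a minimal sketch of that idea, assuming some face tracker has already extracted the six landmark points per eye (the landmark ordering follows the classic 68-point face model; this is an illustration, not the researchers’ actual code):

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered p1..p6 as in
    the 68-point face model. Open eyes score high, closed eyes near 0."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # two vertical openings over one horizontal width
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2):
    """Count closed-then-open transitions in a per-frame EAR series.
    A video of a talking head with zero blinks is suspicious."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold:
            closed = True
        elif closed:
            blinks += 1
            closed = False
    return blinks
```

A real person blinks every few seconds; early deepfakes, trained mostly on open-eyed photos, often don’t, which is what makes this signal useful.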

Gfycat is also working on using AI to detect deepfakes, via Project Angora and Project Maru. The former searches the web for a higher-res version of an uploaded gif to replace it with. The latter can detect when the celebrity face in an uploaded gif doesn’t quite match the loads of other gifs of that celebrity online.
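Gfycat hasn’t published its internals, but the “doesn’t quite match” idea can be sketched with a simple perceptual hash: reduce a frame to a tiny grayscale thumbnail, hash it by brightness, and flag frames whose hash sits far from known reference footage. Everything below is a hypothetical illustration, not Gfycat’s method:

```python
def average_hash(pixels):
    """pixels: flat list of grayscale values for a small thumbnail.
    Bit i is 1 when pixel i is brighter than the mean brightness."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def looks_tampered(frame_pixels, reference_pixels, max_distance=10):
    """Flag the frame when its hash is far from the known reference."""
    distance = hamming(average_hash(frame_pixels), average_hash(reference_pixels))
    return distance > max_distance
```

The obvious catch, which the article gets at, is that you need reference footage to compare against, and that only exists in bulk for famous faces.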

Motherboard’s written an article on how Gfycat’s attempts to ban deepfakes aren’t working… and it makes sense. The Gfycat approach has gaps: if none of the parties involved in a deepfake are famous, Project Maru and Project Angora become useless.

But you know, at least Gfycat is trying. The point is: you better get famous if you want to save yourself.

Nicolas Cage as Lois Lane. Credit: Usersub

Another method is at the forefront of deepfake combating: verification at “point-of-capture.”

Truepic, ranked #16 on Fast Company’s “50 Most Innovative Companies 2019” list, is a US-based startup that raised $8 million last year to fund its mission of verifying videos and photos. Their website mentions that they utilize “patented Controlled Capture technology and image forensics tools.” You currently need to take photos and videos through the Truepic app to verify their authenticity, but the company recently acquired Hany Farid’s Fourandsix Technologies (digital forensics) and will soon be able to verify photos and videos taken outside of their app.
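Truepic’s Controlled Capture is proprietary and patented, but the general point-of-capture idea can be sketched simply: at the moment a photo is taken, fingerprint the pixels plus the capture metadata with a key the device holds, then recompute that fingerprint later to prove nothing changed. All names and parameters below are hypothetical, not Truepic’s actual scheme:

```python
import hashlib
import hmac

def capture_fingerprint(image_bytes, metadata, secret_key):
    """Bind image pixels and capture metadata (timestamp, GPS, device id)
    into one keyed fingerprint, computed at the moment of capture."""
    pixel_digest = hashlib.sha256(image_bytes).hexdigest()
    payload = pixel_digest + "|" + metadata
    return hmac.new(secret_key, payload.encode(), hashlib.sha256).hexdigest()

def verify(image_bytes, metadata, secret_key, fingerprint):
    """Recompute the fingerprint and compare in constant time.
    Any edit to the pixels or the metadata breaks the match."""
    expected = capture_fingerprint(image_bytes, metadata, secret_key)
    return hmac.compare_digest(expected, fingerprint)
```

The hard part in practice isn’t the hashing, it’s keeping the key trustworthy on the device and getting the fingerprint off it before an attacker can intervene, which is roughly what “Controlled Capture” has to solve.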

Another startup working to verify photos and videos is UK-based Serelay. Much like Truepic, their app (available for iOS and Android) currently needs to be used at the point of capture. They do, however, offer an additional option for businesses: the Serelay SDK can be integrated into existing iOS/Android apps for in-app photo and video verification.

But despite all the noble efforts, the gaps in deepfake detection are all too apparent. A multitude of deepfakes go undetected — and as deepfake technology advances, detection methods will not only have to catch up, but advance at the same rate…

Or we really won’t know what’s real anymore.

We’re Launching Soon to Give You Quotes on Home Insurance in Under 60 Seconds.


Aisha Tritle

VP of Insights & Analytics, YouGov Signal. Working with most major film studios. All views are my own.