Everybody is aware of the fake news problem, which arose not so long ago, bringing Zuckerberg before lawmakers and causing political and social issues here and there. But let me assure you: we are at the beginning of a much bigger trial, AI-generated fake video content, or "deepfakes". And that means war.
First, let's not underestimate what is going on. It all started publicly with fake videos of Obama and Putin saying things they would never say. Later, a "lighter" and in a way more disturbing application came along, trending on Reddit: deepfake porn videos with famous (mostly female) actors as participants, and fake movie scenes. Recently, Chinese television unveiled an AI news anchor, an undoubtedly peaceful but very high-quality creation.
All of this is possible, and becoming more and more accessible, with the use of generative adversarial networks, or GANs. Previously you had to use expensive physical motion-capture tech and equipment, and of course have the person of interest at your disposal; I doubt you have Scarlett Johansson next door, willing to star as a CGI model in your video (she, by the way, has also fallen victim to deepfakes and was publicly disgruntled about it).
All a GAN needs is a proper dataset of the target person's speeches, new text for him or her to say, and an original video to blend into. Furthermore, many GAN models are open source and ready for your convenient use; this is not some classified alien tech.
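To make the idea concrete, here is a deliberately tiny, illustrative sketch of adversarial training, the core mechanism behind GANs. Everything in it (the one-dimensional "real" data, the linear generator, the logistic discriminator, the learning rates) is a toy assumption chosen so the gradients can be written by hand; it is nowhere near a deepfake pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Toy setup: "real" data is N(4, 1). A linear generator G(z) = a*z + c maps
# noise z ~ N(0, 1) toward that distribution, while a logistic discriminator
# D(x) = sigmoid(w*x + b) tries to tell real samples from generated ones.
a, c = 1.0, 0.0        # generator parameters
w, b = 0.0, 0.0        # discriminator parameters
lr, batch = 0.05, 64

for _ in range(2000):
    x_real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + c

    # Discriminator step: ascend on log D(real) + log(1 - D(fake)).
    p_real = sigmoid(w * x_real + b)
    p_fake = sigmoid(w * x_fake + b)
    w += lr * (np.mean((1 - p_real) * x_real) - np.mean(p_fake * x_fake))
    b += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator step: ascend on log D(fake) (the non-saturating GAN loss).
    p_fake = sigmoid(w * (a * z + c) + b)
    a += lr * np.mean((1 - p_fake) * w * z)
    c += lr * np.mean((1 - p_fake) * w)

# The generator offset c should drift toward the data mean of 4.
print(f"generator offset c = {c:.2f} (data mean is 4.0)")
```

Real deepfake systems play the same adversarial game over faces and video frames with deep convolutional networks, which is exactly why a good dataset of the target person matters so much.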
But this is all high-level stuff: politics, stars, right? Wrong. It's about you.
At the moment, teaching a neural network requires a pretty large dataset (which narrows the technology down to public figures), but in a very short time, and I mean a period of a few years, several videos of a person will be enough. GANs are evolving rapidly alongside the whole machine learning industry, and new research requiring smaller datasets and producing more powerful results is not far off.
Imagine a neighbour has parked in your spot a few times. All you have to do is shoot a couple of videos of him and voilà: a very convincing, nasty piece of content starring that dude is trending in the neighbourhood chat group and on Facebook, all created on your phone in minutes. That is what's really coming.
As futuristic as it might sound, this is not science fiction, and credible researchers along with influential media agree. The US Defense Advanced Research Projects Agency (DARPA) is funding its own set of tools for catching deepfakes, called Media Forensics. MIT Technology Review places deepfakes among its top five "emerging cyber-threats to worry about in 2019". AI Foundation is working on a browser plugin called Reality Defender to fight the threat. TechCrunch, Motherboard, you name it: all address the coming damage.
And that damage is exactly what we are trying to prevent with AIDA (Artificial Intelligence Deep Analytics), a new R&D project.
Deepfake generation is not perfect; it leaves specific traces in video that allow faked content to be uncovered automatically.
AIDA's complex neural network analyses every frame, comparing multiple image parameters against each other to label the probability of fraud, and its accuracy is relatively high against the current generation of GANs.
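AIDA's actual model is not public, so purely as a hypothetical illustration, here is a minimal sketch of the general idea of frame-level analysis: extract simple statistics from every frame and flag frames whose parameters deviate from the video's own baseline. The features, thresholds, and synthetic "video" below are all assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def frame_features(frame):
    """Toy per-frame features: mean intensity plus high-frequency energy
    (mean absolute difference between neighbouring pixels). Real detectors
    rely on far richer cues (blending seams, blink rates, colour statistics)."""
    hf = np.abs(np.diff(frame, axis=0)).mean() + np.abs(np.diff(frame, axis=1)).mean()
    return np.array([frame.mean(), hf])

def fraud_score(frames, z_thresh=3.0):
    """Crude anomaly detector: the score is the fraction of frames whose
    features deviate strongly (|z| > z_thresh) from the video's own baseline."""
    feats = np.stack([frame_features(f) for f in frames])
    mu, sigma = feats.mean(axis=0), feats.std(axis=0) + 1e-9
    flagged = (np.abs((feats - mu) / sigma) > z_thresh).any(axis=1)
    return flagged.mean()

# Synthetic "video": mostly consistent noisy frames, plus a few perfectly
# flat ones standing in for tampered regions with suppressed texture.
clean = [rng.normal(0.5, 0.1, (32, 32)) for _ in range(95)]
tampered = [np.full((32, 32), 0.5) for _ in range(5)]
score = fraud_score(clean + tampered)
print(f"fraud score: {score:.2f}")
```

A production system would learn such cues end to end with a neural network rather than hand-coding them, but the shape of the problem is the same: generated frames are statistically distinguishable from captured ones, at least for now.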
Once released, the AIDA API could be used by global news outlets and social media platforms to verify video content, whether posted by users or sent in from "anonymous sources".
But don't be fooled: there is not, and never will be, a set-in-stone ultimate weapon for the deepfake battlefield. Neural networks inevitably evolve, and evolve fast, with ever more sophisticated architectures presenting new threats to the community; in response, AIDA and its own AI will keep developing new means of defence. Right now we are only at the beginning of the war.