Deepfake Videos To Enter Politics

Buster.Ai · 4 min read · Apr 14, 2020

Fortunately, our algorithms can debunk them

“By 2022, most people in mature economies will consume more false information than true information,” Gartner warned in October 2017. As feared, politics is no exception to the proliferation of misleading content. In early February 2020, Deepfakes entered the Indian Legislative Assembly elections when a candidate was made to speak in several languages in order to reach more voters. With a political party clearly ready to use Deepfake technology for campaigning purposes, and with anyone now able to generate Deepfakes with a dangerously convincing rendering, the massive dissemination of fake videos aimed at altering poll results is no longer a dystopian future.

Our mission: protecting readers and media against fake news

Although fake stories and content spread well beyond the news sphere, the media bear real liability when they convey them. Facing the surge in Fake News, which an MIT Media Lab study found travels six times faster than the truth on Twitter, news media have never been under more pressure to authenticate stories and content. How they can perform these verifications, however, remains an open question, all the more so since, according to Gartner, the threat mainly comes from “digitally created content that is not a factual or authentic representation of information”, i.e. content that the ill-informed, and even the trained eye of journalists, fail to debunk.

News media are torn between the imperative of express coverage of breaking news and the verification of stories, content and sources. The outcome of this everyday Cornelian dilemma is that, often for lack of the means to verify content, the rush to cover a headline prevails, despite the news industry’s fear of spreading fake news and of having to admit to it.

To help news media meet these two imperatives, Buster.Ai is developing content verification algorithms. Empowering media to authenticate content against forgery is our top priority. Our ambition is to equip the media with all the tools needed to efficiently fact-check any source of information before publishing.

Our algorithms debunked two Deepfake videos

On February 7th, 2020, only one day before the Legislative Assembly elections in Delhi, two fake videos of candidate Manoj Tiwari (Bharatiya Janata Party) went viral on WhatsApp, Vice reports. In these videos, the candidate addresses the voters in English and Haryanvi, reaching approximately 15 million people.

As we aim to improve our algorithms for Facebook’s Deepfake Detection Challenge in which we will be taking part, we wanted to verify our predictions on these two videos.

Our Deepfake Detection tool analyzing the candidate’s videos in English and Haryanvi, respectively.

Our algorithms identified both videos as fake, with a confidence of 81% for the English video and 80% for the Haryanvi video. For each video, 2 of the 20 extracted frames were predicted as being real and the other 18 as being fake.
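The exact way our per-frame predictions are combined into a video-level verdict is not detailed here, but a simple and common choice is to average the per-frame fake probabilities. The sketch below is only an illustration of that idea, with made-up per-frame scores roughly matching the 18-of-20 split reported above:

```python
# Hypothetical sketch: turning per-frame fake probabilities into a
# video-level confidence. Mean aggregation is an assumption, not the
# documented Buster.Ai method; the frame scores below are illustrative.

def video_fake_confidence(frame_scores):
    """frame_scores: per-frame probabilities (0.0-1.0) that the frame is fake."""
    return sum(frame_scores) / len(frame_scores)

# 18 of 20 sampled frames flagged as fake, as in the article's example.
scores = [0.9] * 18 + [0.1] * 2
confidence = video_fake_confidence(scores)  # 0.82, close to the reported ~80%
```

Averaging soft scores (rather than hard-voting on frames) keeps the output confidence sensitive to how strongly each individual frame was flagged.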

Our recipe to debunk Deepfake videos

In the case of Manoj Tiwari’s Deepfakes, the performances of an actor pronouncing the candidate’s speech in English and in Haryanvi were modeled on a video of the candidate delivering the same speech in another language. In order to obtain a convincing result, the candidate’s team used a family of AI algorithms known as GANs (Generative Adversarial Networks): the actor’s voice is mapped onto the candidate’s face, and the lip movements are automatically reworked so that voice and lips appear perfectly synced.
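The adversarial idea behind GANs can be shown on a deliberately tiny toy problem: a generator learns to turn noise into samples resembling "real" data while a discriminator learns to tell the two apart. This is only a conceptual, one-dimensional illustration, not the actual face-synthesis model used to produce such videos:

```python
import math
import random

# Toy 1-D GAN: generator g(z) = a*z + b tries to imitate samples from
# N(MU, SIGMA); discriminator D(x) = sigmoid(w*x + c) tries to tell
# real from generated. All gradients are written out by hand.
random.seed(0)
MU, SIGMA = 3.0, 0.5  # the "real" data distribution

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    z = random.gauss(0, 1)
    real = random.gauss(MU, SIGMA)
    fake = a * z + b

    # Discriminator ascent step: push D(real) -> 1 and D(fake) -> 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator ascent step: push D(fake) -> 1 (fool the discriminator).
    d_fake = sigmoid(w * fake + c)
    grad = (1 - d_fake) * w        # d/dfake of log D(fake)
    a += lr * grad * z
    b += lr * grad

# After training, generated samples should drift toward the real mean.
samples = [a * random.gauss(0, 1) + b for _ in range(1000)]
gen_mean = sum(samples) / len(samples)
```

In a real Deepfake the "samples" are video frames and both networks are deep convolutional models, but the tug-of-war between the two players is the same, and it is precisely what makes the output hard to distinguish from genuine footage.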

To assess the authenticity of these types of videos, the first phase consists of extracting frames from each video. Then our Deep Learning algorithms come into play: the frames are analyzed at the pixel level, meaning at levels of detail not visible to the naked eye. Our algorithms allow us to detect hidden artifacts in the lip-syncing, i.e. visual inconsistencies in the synchronization of lip movement with audio. Our models are trained on large amounts of video to learn the topological invariants of faces and thus determine whether a face is real or not.
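The two phases described above, frame extraction followed by per-frame analysis, can be sketched as a simple pipeline. The frame classifier here is a dummy stand-in (the real model is a Deep Learning network and is not public), and the function names are illustrative, not part of any actual Buster.Ai API:

```python
# Minimal sketch of the two-phase pipeline: sample frames, score each one,
# then aggregate into a video-level report. The real frame model is a
# trained deep network; `dummy_model` below is a placeholder.
from typing import Callable, Dict, List


def sample_frames(video: List[bytes], n_frames: int = 20) -> List[bytes]:
    """Phase 1: evenly sample up to n_frames frames from the decoded video."""
    step = max(1, len(video) // n_frames)
    return video[::step][:n_frames]


def classify_video(video: List[bytes],
                   frame_model: Callable[[bytes], float],
                   n_frames: int = 20) -> Dict[str, float]:
    """Phase 2: score each sampled frame, then aggregate."""
    frames = sample_frames(video, n_frames)
    scores = [frame_model(f) for f in frames]  # P(frame is fake) per frame
    n_fake = sum(s >= 0.5 for s in scores)
    return {"fake_frames": n_fake,
            "total": len(frames),
            "confidence_fake": sum(scores) / len(scores)}


def dummy_model(frame: bytes) -> float:
    """Placeholder classifier: flags frames whose first byte is odd."""
    return 0.9 if frame[0] % 2 else 0.1


# Fake "video": 40 one-byte frames, just enough to exercise the pipeline.
video = [bytes([i]) for i in range(1, 41)]
report = classify_video(video, dummy_model)
```

Sampling a fixed number of frames keeps the cost per video bounded, while per-frame scoring makes it possible to report fine-grained results like the 18-of-20 split mentioned earlier.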

Deepfakes are a serious threat that every company using or relaying information must gear up for. If you are looking to start preparing against the many risks that come with the advancement of this technology, visit:

👉 https://buster.ai/

We are also hiring talented Deep Learning researchers to advance the state of the art in several fields of fact-checking.

If you wish to take part in this adventure, join us!

👉 https://buster.ai/jobs/

Buster.Ai

The antivirus of information. Find truth in the ocean of misinformation 🔍 buster.ai