Malicious use of deepfakes is a threat to democracy everywhere

If we can no longer trust our eyes or our ears, what are we going to do about it?

Branka Panic
The Startup
10 min read · Nov 25, 2019


Rana Ayyub was sitting with a friend at a coffee shop when she saw it for the first time: her own face, falsely embedded in a porn scene, distributed not only to her but spreading virally among the public. Rana had never been filmed in a pornographic video, and yet she saw with her very own eyes someone appearing as her in a compromising video. Someone wearing her face.

She was the victim of a deepfake porn attack, launched in response to her vocal stand against the Kathua gang rape in India. Although she had previously been the target of multiple far-right attacks, fake tweets and images, this one, as she says, was something completely devastating, with a huge impact on her professional and private life. Her private address and phone number were published alongside the video, presented as an invitation offering her services. At the time of the incident the video had been shared more than 40,000 times, resulting in countless harassing calls, offers and threats sent to Rana daily.

“I felt violated. I have never felt like this before. I was nauseous, I was in hospital, I had palpitations, I had anxiety attacks. I know for a fact now: you don’t kill a person only with a bullet, you also kill a person mentally, and they have done that to me,” said Rana.

It has been a year since this incident. We know much more about deepfakes now, including how to make them incredibly realistic. But we are still far from detecting them with high accuracy, and even further from agreeing on and establishing accountability mechanisms for their malicious use. This puts in danger not only elections, but democracy as such, in the US and abroad.

The latest report, The State of Deepfakes: Landscape, Threats and Impact, by Deeptrace, an Amsterdam-based company providing deep learning and computer vision technologies for the detection and online monitoring of synthetic media, revealed that the deepfake phenomenon is growing, with the number of videos almost doubling over the last seven months. Deeptrace identified non-consensual deepfake pornography as the key trend, accounting for 96% of all deepfake videos online. Although the authors draw a distinction between deepfake pornography and politically motivated deepfakes, the case of Rana Ayyub clearly shows how thin that line is. Deepfake porn can also be misused for political purposes, targeting and harming activists, journalists and human rights defenders, the crucial building blocks of every democracy.

The number of deepfake videos almost doubled over the last seven months, reaching 14,678, a growth of 84% compared to 2018.

Basics of deepfakes

Deepfakes (a blend of “deep learning” and “fake”) appeared nearly three years ago, and they are getting more authentic and easier to make. Deepfakes are realistic AI-generated audio, video and images, produced by machine learning algorithms that create false content appearing to be real. Sounds complicated to do? Not anymore. Today they can be made with accessible tools on a small budget, which puts them within reach of many. It is quite easy to create a fake video of reasonable quality with software available online and websites that can add a face to any existing video. As Galina Alperovich, Senior Machine Learning Researcher at Avast, says:

“If you are in your second year of university studying AI and image manipulation, you could do it.”
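To make the mechanics concrete, here is a minimal sketch of the classic face-swap architecture behind many deepfake videos: a single shared encoder paired with one decoder per identity. Each decoder is trained to reconstruct its own person; the swap happens when a face of person A is decoded with person B’s decoder. Everything below is illustrative rather than a production tool: the image size, network shapes and the random tensors standing in for aligned face crops are assumptions, and real pipelines add face detection, alignment and blending on top.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder deepfake idea.
# All sizes and the random "face" tensors are illustrative placeholders.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 256),  # shared latent representation of a face
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

faces_a = torch.rand(8, 3, 64, 64)  # placeholders for aligned face crops of person A
faces_b = torch.rand(8, 3, 64, 64)  # ... and of person B

params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

for step in range(100):  # each identity is reconstructed through its own decoder
    opt.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The "swap": encode person A's face, decode it with person B's decoder.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

The key design point is the shared encoder: because both identities pass through the same latent space, expression and pose carry over while the decoder imposes the target identity.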

Is altering content bad by default, and should it be characterized as a crime? The difference between “malicious deepfakes” and “deepfakes for good”

Starting a war against the technology behind deepfakes is not only impossible but also inadvisable.

“The volume and sophistication of publicly available academic research and commercial services will ensure the steady diffusion of deepfake capacity no matter efforts to safeguard it,” write University of Texas professor Bobby Chesney and University of Maryland professor Danielle Citron.

Like any other technology, altered media can be used for good and for evil. Even if we wanted to stop its further development, it is already so advanced and widespread that such an endeavor would be impossible. Apart from many applications in entertainment (see this example of a Tom Cruise deepfake, showing in a funny way how the technology is being developed and used), some companies are also exploring applications of the technology for social good. One such example is VocaliD, which uses AI to create digital voices for people who have lost theirs to illness, injury or lifelong conditions. Deepfake technology opens up a vast area of opportunities in education, presenting students with information in more engaging and compelling ways. Video artists are already using it as a form of art to satirize, parody and critique public figures.

Like any technology, the ability to produce and spread deepfakes can also be used to cause harm to individuals, organizations and entire systems. Rana’s case already demonstrates how deepfakes can be weaponized to harass, humiliate and intimidate individuals. Candidates in political campaigns can be targeted, harming their reputation and their chances of winning voters’ support and the election. Deepfakes can be misused in legal cases, submitted as evidence and posing a new set of challenges for trial courts. And beyond politics, there are wider risks to business and the economy: deepfakes can move stock prices, damage a company’s reputation, and spread rumors that are hard to disprove.

Threat to democracy and security in the US and abroad

When researchers at the University of Washington posted and circulated a deepfake of President Obama, it became clear how serious a security threat this technology can pose. Whoever gets this technology in their hands can make it look like any president is saying and doing anything they imagine.

Paradoxically, the video that created the biggest noise this year, the altered video of Nancy Pelosi, was not actually a deepfake (some call it “doctoring” or a “shallowfake”). In this case the timing of the video was altered: it was slowed down to make her look like she was stumbling over her words without clearly communicating her message. The degree of alteration matters less for this story than the fact that the video was quickly shared on Twitter by the President of the United States, reaching millions of people in a matter of seconds. The very same effect, with even graver impact, can be expected from deepfakes in the future.
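To appreciate how low the technical bar for such a “shallowfake” is, here is a sketch of the kind of edit involved: slowing a clip to about 75% speed with ffmpeg, roughly what was done to the Pelosi video. The filenames are placeholders, and ffmpeg is assumed to be installed.

```python
# Sketch: produce a "shallowfake" by slowing a clip to 75% speed.
# Requires ffmpeg on the system path; filenames are placeholders.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "speech.mp4",
    "-filter:v", "setpts=PTS/0.75",  # stretch video timestamps to 75% speed
    "-filter:a", "atempo=0.75",      # slow the audio without lowering its pitch
    "slowed_speech.mp4",
], check=True)
```

No machine learning is involved at all, which is exactly why such edits are so cheap to produce and so quick to spread.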

Although deepfakes are widely covered in the media as a potential threat to US elections, reality shows that this phenomenon can threaten security, stability and democracy everywhere. Two international cases demonstrate how deepfakes played a significant role in creating or intensifying a crisis. In 2018, Gabon was facing a political crisis. President Ali Bongo had not appeared in public for a while due to illness, which sparked speculation about his well-being and his ability to serve as president. To calm the situation, the government released a video of the President delivering the traditional New Year’s message. The video raised many suspicions of being a deepfake, which only increased instability and eventually led to an attempted military coup. Another scandal emerged in Malaysia this June, when a gay sex video was released featuring Minister of Economic Affairs Azmin Ali. Ali and his supporters claimed the video was a deepfake, aimed at sabotaging his political career in a country where same-sex sexual activity is illegal and punishable by up to twenty years in prison. In neither case was it proved or disproved beyond doubt that the released video was a deepfake. In both cases, the mere possibility of deepfakes and the discourse around them was enough to destabilize the political process and undermine the authority of public figures.

This is not a new threat to democracy — misinformation and distrust are ages old, so why all the fuss?

One can argue that the threat deepfakes pose to democracy is nothing new. In fact, democracy has been fighting “fake news” and distrust since its inception. One example that strongly resonates with me is British propaganda from 1917, spreading disinformation about Germans in World War I. The Times and The Daily Mail published articles claiming that German soldiers were boiling the corpses of their own dead for fat and bone meal for pig food. This “Corpse Factory” disinformation, and the mistrust in reporting it created, is believed to have fueled doubts about reports of Holocaust atrocities during World War II. Nevertheless, new technologies make manipulation easy today, while social networks bring the ability and speed of sharing to unprecedented levels. Videos are far more powerful at convincing people and at attracting and keeping their attention. They are much more likely to trigger a strong emotional response and make people share, comment and like.

The twofold problem of malicious deepfakes

In the world of deepfakes, we need to protect ourselves not only from believing that something fake is real, but also from the trap of doubting what is truly real. Both are very dangerous for trust in democracy and for its future.

“It’s not just that we have to protect against the fakes, but we also have to protect against things that actually happened,” says Hany Farid, Digital Forensics Expert and Professor at UC Berkeley.

Farid reminds us of the last presidential election, when a recording of Donald Trump talking about women in a vulgar manner to Billy Bush, the host of “Access Hollywood”, was released to the public. We need to be able to demand accountability for this and similar statements. We must not fall into the trap of losing trust in sources and authenticity, now that the mere existence of deepfake technology allows anybody to dismiss anything as a deepfake.

In a world where fakes are easy to create, everything becomes easier to deny.

People caught saying or doing things can claim the evidence against them is a deepfake. So how can we keep trusting things that actually happened? As important as it is to create accountability mechanisms for the malicious creation and use of deepfakes, it is equally important to establish, and to trust, the institution or authority responsible for determining whether content is real or fake.

This calls for two important questions to be answered:

1. How do we recognize and authenticate a deepfake?

Various institutions, groups and individuals are already racing to discover how to detect deepfakes. One example is the work being done at UC Berkeley on digital forensics tools that use the specific characteristics of someone’s speech to recognize whether a video is real or fake. Researchers tested the model on several US politicians, using their unique ways of speaking and analyzing real videos alongside fakes created by the University of Southern California; the model was correct 92%-96% of the time, depending on the politician and the length of the video. The downside of this solution is that the speech patterns can be learned and incorporated by fake-content creators, which illustrates how complicated this “deepfake race” is. The Defense Advanced Research Projects Agency (DARPA) has already spent $68 million designing a system that can recognize deepfakes. Researchers claim the program is showing promise, with 75% accuracy on a set of hundreds of test videos.
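The general idea behind such forensics tools can be illustrated with a toy one-class classifier: model one speaker’s authentic mannerisms from real footage alone, then flag clips that fall outside that model. The sketch below is a simplification under stated assumptions: the 16-dimensional feature vectors are random placeholders, whereas a real system would extract speech- and movement-related features from the video itself.

```python
# Toy one-class detector: learn a speaker's authentic behavior, flag outliers.
# Feature vectors are random placeholders for real behavioral features.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Placeholder data: 200 authentic clips and 20 suspect clips, each summarized
# as a 16-dimensional feature vector (an illustrative assumption).
authentic_clips = rng.normal(loc=0.0, scale=1.0, size=(200, 16))
suspect_clips = rng.normal(loc=3.0, scale=1.0, size=(20, 16))  # shifted to mimic fakes

# Fit on authentic footage only: no fake examples are needed at training time.
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
model.fit(authentic_clips)

# predict() returns +1 for clips consistent with the speaker, -1 for outliers.
print("authentic flagged as fake:", (model.predict(authentic_clips) == -1).mean())
print("suspects flagged as fake: ", (model.predict(suspect_clips) == -1).mean())
```

Part of the appeal of the one-class framing is that it sidesteps some of the cat-and-mouse problem: the detector never needs examples of the latest fakes, only enough authentic footage of the person being protected.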

“AI might never catch 100% of fakes,” said Juston Moore, a data scientist at Los Alamos National Laboratory, “but even if it’s a cat-and-mouse game, I think it’s one worth playing.”

2. What can we do about it? How do we combat malicious deepfakes?

We should all be aware of the existence and potential impacts of deepfakes. Some experts go a step further, asking us to educate ourselves in detecting deepfakes by looking for telltale signs in faces, such as an absence of blinking or robotic facial movements (a rough sketch of the blinking check follows the quote below). They invite individuals to practice media literacy: verifying that the sources of videos are legitimate and checking information on alternative websites. The public has to recognize its own responsibility, argues Claire Wardle, an expert on online manipulation:

“If you don’t know 100%, hand on heart, this is true, please don’t share because it’s not worth the risk”
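As promised above, here is a rough sketch of the blink check, based on the widely used eye-aspect-ratio heuristic: the ratio of the eye’s height to its width collapses when the eye closes, so a long clip with no collapse suggests the subject never blinks. The coordinates and thresholds below are illustrative assumptions; a real pipeline would obtain the six eye landmarks per frame from a face-landmark detector such as dlib.

```python
# Sketch of the "absence of blinking" check via the eye-aspect-ratio (EAR).
# Landmarks and thresholds are illustrative placeholders.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmarks around one eye, ordered clockwise from
    the outer corner. EAR drops sharply when the eye closes."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, threshold=0.2, min_frames=2):
    """Count runs of consecutive frames where EAR stays below the threshold."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks + (1 if run >= min_frames else 0)

# Example landmarks for one wide-open eye (illustrative coordinates).
open_eye = np.array([[0, 2], [2, 0], [4, 0], [6, 2], [4, 4], [2, 4]], dtype=float)
print(round(eye_aspect_ratio(open_eye), 2))  # ~0.67, i.e. clearly open

# Placeholder EAR trace: open eyes (~0.3) with two brief dips (blinks).
trace = [0.3] * 30 + [0.12] * 3 + [0.3] * 40 + [0.11] * 3 + [0.3] * 30
print(count_blinks(trace))  # -> 2; zero blinks over a long clip is suspicious
```

It is worth noting that cues like this age quickly: once the blinking tell was publicized, newer generators learned to blink, which is part of why relying on individual spotting skills is a losing battle, as the next paragraph argues.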

But putting the burden of detecting deepfakes on individuals is a losing battle. If even industry experts cannot yet do it reliably themselves, or build a system that does it with full accuracy, how can we expect ordinary citizens to manage?

Part of the solution lies with social platforms, which need to think seriously and responsibly about what to do with this type of content. Platforms are the ones with the power to evaluate and label videos before they go viral. However, social platforms often oppose blocking content on the grounds of defending free speech. Under public pressure, Twitter recently started a public discussion about its policy on deepfakes, proposing to place a notice next to tweets sharing manipulated media or to add a link to a news story explaining why other sources consider the media manipulated. Even after appearing in a deepfake himself, Mark Zuckerberg did not change Facebook’s policy towards manipulated media and left the content online. And finally, we are left with the question of who should ultimately be accountable: the authors, those who share the content, or the platforms that allow the content to be shared, with or without marking it as fake? California just passed a law that makes it a crime to distribute audio or video that gives a false, damaging impression of a politician’s words or actions. Not everybody welcomed the law, citing free-speech concerns.

Combating the malicious use of deepfakes and disinformation is a complex effort that can succeed only through the combined endeavor of multiple stakeholders, from users, creators and publishing platforms to legal and regulatory bodies. The solution has to combine accurate deepfake detection with changes in how social media sites approach the problem.

“Based on the rate of AI progress, we can expect deepfakes to become better, cheaper, and easier to make over a relatively short period of time. Governments should invest in developing technology assessment and measurement capabilities to help them keep pace with broader AI development, and to help them better prepare for the impacts of technologies like this,” says Jack Clark, Policy Director of OpenAI and author of Import AI.

Branka Panic

Exploring intersections of exponential technologies, peace-building and human rights. Founder and Executive Director of AI for Peace. https://www.aiforpeace.org