Judge the Video by Its Cover: The Threats Deep Fakes Pose to Security and Media

tl;dr: Achievement unlocked: recognition and handling of fake news. Begin next level: detection of deep fakes.

Just as the 2016 U.S. election campaign is associated with fake news stories, the upcoming 2020 elections are threatened by another tool of misinformation: deep fakes. What began as a fake pornographic video of actress Gal Gadot has evolved into a threat that the U.S. military and startup companies are trying to tackle, with implications for media production and consumption as well.

Deep fakes, a portmanteau of “deep learning” and “fake”, are a form of video and audio forgery that makes it seem like people did or said things they never actually did. Though the technique was previously the preserve of artificial intelligence researchers, deep fakes reached the general public when a Reddit user started posting fake pornographic videos created with a generative adversarial network (GAN).
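For readers curious about the mechanism, a GAN pits two models against each other: a generator that fabricates samples and a discriminator that tries to tell fakes from real data, each improving by competing with the other. The following is a minimal, hypothetical sketch in plain Python/NumPy of that adversarial loop on a toy one-dimensional distribution rather than faces; all names, numbers, and the tiny linear "networks" are invented for illustration, not how any real deep fake tool is built.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator must learn to imitate: a 1-D Gaussian.
REAL_MU, REAL_SIGMA = 4.0, 0.5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: x = a*z + b, mapping random noise z to a fake sample.
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), the probability x is real.
w, c = 0.1, 0.0

lr = 0.05
for step in range(2000):
    n = 32
    z = rng.normal(0.0, 1.0, n)
    x_fake = a * z + b
    x_real = rng.normal(REAL_MU, REAL_SIGMA, n)

    # Discriminator update: ascend on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator update: ascend on log D(fake) (non-saturating loss),
    # i.e. nudge fakes toward whatever currently fools the discriminator.
    d_fake = sigmoid(w * x_fake + c)
    dx = (1 - d_fake) * w
    a += lr * np.mean(dx * z)
    b += lr * np.mean(dx)

fakes = a * rng.normal(0.0, 1.0, 1000) + b
print(f"fake mean {fakes.mean():.2f}, fake std {fakes.std():.2f}")
```

Real deep fake software applies this same adversarial idea to images and audio at a vastly larger scale, with deep networks in place of the two scalar models above.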

By the time the user was banned from the site, he had released FakeApp, which allows average users to create similar videos easily. Not only is the software to create such forged media increasingly accessible, but it also threatens anybody who has uploaded enough pictures or videos of themselves to the Internet.

One type of deep fake is the face swap, which is available through various apps and can be used voluntarily for entertainment. As deep fakes become increasingly widespread, the Internet has also seen the creation of a website that generates fake human faces, a video blending actress Jennifer Lawrence with actor Steve Buscemi, and a video of Jordan Peele as former president Barack Obama warning against fake news.

However, governments fear that deep fakes could harm national security and the upcoming 2020 presidential elections in the U.S. What raised concern about deep fake creators’ potential for malicious practices was a video of President Donald Trump encouraging the people of Belgium to follow his example and withdraw from the Paris climate agreement.

Its creators, the Belgian political party Socialistische Partij Anders (sp.a), said that the video aimed to catch people’s attention and guide them to an online petition urging the Belgian government to take action. The video also drew negative comments from viewers directed at President Trump’s words, which, once the video was proven fake, turned out to be in vain.

Some single out this incident as the only example of deep fakes being weaponized in politics, arguing that such video forgeries pose no threat to national security or to the results of the upcoming elections. Others add that search engine results indicate users are interested in deep fake pornography rather than in the political implications of the technology.

In contrast, U.S. lawmakers argue that such videos do pose a threat to national security. By November 2018, the media forensics program at the Department of Defense’s Defense Advanced Research Projects Agency (DARPA) had invested $68 million in developing technology that can detect forged videos, in an effort to ensure national security and stability.

Finally, deep fakes are recognized as difficult to legislate because they can be subject to intellectual property and privacy law but may also be protected under the First Amendment. The question is not whether any legal remedies exist but whether they are sufficient to curb the power of the new technology. For instance, New York state legislators have reviewed a bill which states that “the use of a digital replica for purposes of trade within an expressive work shall be a violation”. In response, the Vice President of Government Relations at The Walt Disney Company wrote in a letter that the bill would harm the creative industry’s ability to tell stories about real people and events, which is at the core of its profession.

Regardless of how deep fakes end up being used by their creators in the future, the videos have defied the notion that seeing is believing, and that has the potential to affect media production and consumption.

As a form of manipulation yet to be regulated by law, deep fakes remain an ethical concern that necessitates media professionals’ careful examination and assessment. Even though professionals are not trained to detect hoaxes, they can still be held accountable (Bugeja 2019).

Should a media professional use a deep fake video in a story in an effort to create a sensation or to be the first to cover a topic, they risk their individual reputation as well as the credibility of the organization they work for. With political stories in particular, journalists run a special risk of losing control of the message they are relaying and of being seen as accomplices to the creators of the deep fake (Bugeja 2019).

For those reasons, a media professional working in a landscape laden with deep fakes has to apply the same steps of assessment to images and videos as to text. After all, those who produced fake news stories during the 2016 elections could seize the opportunity deep fakes provide and create video forgeries as well.

Established media outlets should be subject to the same scrutiny as any user posting content online: it was BuzzFeed, in collaboration with Jordan Peele, that produced the fake video of President Obama delivering a PSA about fake news.

Finally, a media professional can turn to researchers’ findings in order to understand how deep fakes work and why they are powerful. Researchers explain, for instance, that the technologies that create deep fakes and those that detect them are based on the same principles. By tapping into that knowledge, a contemporary media professional can become more deep-fake-savvy and more competent at assessing whether a video is genuine or forged.
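The symmetry researchers point to can be made concrete: a detector is itself just a classifier trained to separate real examples from forged ones, structurally the same job the discriminator performs inside a GAN. Here is a toy, hypothetical sketch in Python/NumPy; the one-dimensional “forensic feature” and its distributions are invented stand-ins for the real statistical traces (blink rates, compression noise, and the like) that actual detectors learn from.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented stand-in forensic feature; assumption: forgeries shift its
# distribution (fakes ~ N(1.5, 1) versus genuine footage ~ N(0, 1)).
real = rng.normal(0.0, 1.0, 500)
fake = rng.normal(1.5, 1.0, 500)

x = np.concatenate([real, fake])
y = np.concatenate([np.zeros(500), np.ones(500)])  # label 1 = forged

# Logistic-regression detector trained by gradient descent -- the same
# kind of objective a GAN's discriminator optimizes during training.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))  # predicted P(forged)
    w -= lr * np.mean((p - y) * x)          # gradient of logistic loss
    b -= lr * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(w * x + b)))) > 0.5
acc = np.mean(pred == y)
print(f"detector accuracy on this toy data: {acc:.2f}")
```

The catch the researchers describe follows directly: because generator and detector optimize against the same signal, a better detector can be folded back into training a better forger.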


Bugeja, M. (2019). Living Media Ethics: Across Platforms (2nd ed.). New York: Routledge.



Katerina Avramova


Journalism and Mass Communication & Persuasive Communication in Business and Politics graduate. Future media lawyer or policy-maker.