The Spread of Deepfake Misinformation

Jason Deng
SI 410: Ethics and Information Technology
Feb 17, 2023
Deepfake example (source)

Have you seen videos of famous personalities saying something ‘strange’ that you believe they would never say? How many times have you seen false or odd information spread from a trusted source? What is real and what is fake?

The rise of technology has led us to ask this question more and more as we consume an increasing amount of digital media. The deepfake is a prime example of a rising technology with the potential to spread misinformation, and it has garnered widespread attention for its use in celebrity pornographic videos, revenge porn, fake news, hoaxes, and financial fraud. This has elicited responses from both industry and government to detect and limit its use.

So what exactly is a deepfake? A deepfake is a form of media, typically a video, in which a person’s face or body has been digitally altered so that they appear to be someone else. Have you seen a video of a celebrity saying or doing something outrageous recently? That may have been a deepfake! What could be seen as a harmless video of a person talking could actually be a representation of a person’s identity being stolen and misinformation being spread. Beyond video, deepfake technology can be used to create realistic fictional photos or voice clones of public figures, which can be even harder to identify than deepfake videos. I believe that without proper education and awareness, deepfakes have the potential to spread mass disinformation, sway election outcomes, and enable political manipulation.

The term “deepfake” originated from a Reddit account in 2017 whose user posted pornographic content that swapped famous actresses’ faces into porn videos. This user also shared the method and code used to make these deepfakes. Since then, there have been many improvements and iterations of this software, and continued work on image manipulation from artificial intelligence labs has made deepfake technology steadily better and more accessible.

Political manipulation is a prime target for deepfake misinformation, with political figures becoming victims of false media portrayed in their likeness. Examples include fake voice recordings and video clips of politicians saying or doing things they never did. Imagine that during an election there are countless videos of politicians making statements, but you don’t know which videos are real and which are deepfakes. People would start to discredit video and audio media altogether when deciding who to vote for. Even so, this is the best-case scenario, as those unfamiliar with deepfakes may believe anything and be manipulated by the false media. This is unfortunately the world we may be heading towards, one where online media about politicians cannot be trusted or used. Overall, this creates an environment where politicians appear less credible and more embroiled in scandals. It could also have a drastic effect on the outcome of elections, with voters swayed by potentially false information spread through deepfakes.

A notable example of a political deepfake is a video of Donald Trump speaking about the Paris climate agreement. It was produced by a Belgian political group to draw attention to a petition calling for action against climate change. Below is the video that demonstrates this deepfake.

While the deepfake video is clearly fake and can be spotted easily, a small number of viewers were still fooled and called for action on climate change. If even a few viewers were fooled by a low-quality deepfake, we could be facing a serious problem in the future, where mass populations are deceived by a convincing one.

Deepfake technology still has some way to go before it can be effectively used for misinformation and manipulation. In my experience, much of the exposure to deepfakes comes from social media. All the deepfake videos I’ve seen can be easily distinguished as fake. Often these videos are shared on Instagram and TikTok, and many are made for comedic purposes. Examples include attention-grabbing deepfakes of Barack Obama and Mark Zuckerberg, which have gained millions of views. Although deepfakes are being created to mimic well-known people, so far most of them are an extension of memes and trolling. However, given the rapid progression of the technology and what we know about deepfakes, this kind of harassment may well be its first major negative impact.

Because most deepfakes spread through social media, social media companies should be aware of deepfakes and how they can affect political elections. Improvements may include better detection of deepfake content, as well as formulating and implementing clearer policies about when they will remove (or decline to remove) deepfakes from their sites. However, developing these policies to prevent disinformation can prove difficult. One approach would be a policy of removing only malicious deepfakes. But that would require defining “malicious.” If a deepfake is clearly a parody or made for comedic purposes, would this policy give the person portrayed grounds to demand its removal from a social media site? And what if the deepfake only targets an election? Defining what counts as a “malicious” deepfake can prove very challenging, and until it is settled, manipulative deepfakes will likely continue to affect future political campaigns.

In a blog post, Claire Wardle discusses the many facets of fake news and references the spread of misinformation during presidential elections. Wardle also outlines the potential motivations behind the spread of fake news, which boil down to eight “Ps”: Poor Journalism, Parody, to Provoke or ‘Punk’, Passion, Partisanship, Profit, Political Influence or Power, and Propaganda. In relation to deepfakes, this helps explain some of the motives behind their use and how they are exploited to fulfill a particular agenda. Wardle also discusses how fake media is distributed to reach a large audience. This can include individuals retweeting deepfakes without checking whether the video has been altered, or journalists amplifying and exaggerating the information while reporting on news emerging from the social web in real time. Some of this media is pushed out by loosely connected groups deliberately attempting to influence public opinion, and some is disseminated as part of sophisticated disinformation campaigns through bot networks and troll factories.

One way to prevent such a large-scale spread of misinformation and disinformation through deepfakes is to raise awareness of the technology and educate people on how to spot one. I believe the real threat of deepfake manipulation comes from a lack of knowledge about this developing technology, as people who don’t know that such fake videos can be made will believe them. In addition, as deepfake tools evolve, more and more of the general population will be able to access the technology and create their own deepfakes, creating an ever larger pool of deepfake media on the internet.

Even without further technology, there are ways to detect a deepfake using just our own eyes and senses. An article by Jonathan Hui describes some of the strategies and questions to ask in order to detect deepfake videos:

  • Is the face over-blurred compared with non-facial areas of the video?
  • Does it flicker?
  • Is there a change of skin tone near the edge of the face?
  • Are there double chins, double eyebrows, or double edges on the face?
  • When the face is partially blocked by hands or other objects, does it flicker or get blurry?

These questions are important to ask when watching a suspicious video, as they could lead to the discovery of a deepfake. Knowing when a video could be a deepfake is crucial to preventing the spread of misinformation, including false information about political figures.
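The first cue on that list, a face that looks blurrier than the rest of the frame, can even be roughly quantified in code. The sketch below is purely illustrative and not from Hui’s article: the function names, the `face_box` coordinates, and any threshold you might pick are my own assumptions. It uses the variance of a Laplacian filter, a common sharpness measure, to compare a face region against the whole frame.

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Sharpness proxy: variance of a 3x3 Laplacian response.
    Blurry (smooth) regions yield low variance; sharp detail yields high."""
    # Apply the Laplacian kernel via explicit shifts (no SciPy needed).
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def face_blur_ratio(frame: np.ndarray, face_box: tuple) -> float:
    """Compare sharpness inside a (hypothetical) face bounding box to
    sharpness of the full grayscale frame. A ratio well below 1.0 means
    the face is blurrier than its surroundings -- one deepfake cue."""
    x0, y0, x1, y1 = face_box
    face = frame[y0:y1, x0:x1]
    return laplacian_variance(face) / (laplacian_variance(frame) + 1e-9)
```

In practice the `face_box` would come from a face detector; here it is assumed to be given. This only automates one human-eye cue, so it is a screening heuristic, not a verdict on whether a video is fake.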

As technological advancements are made, deepfake technology can be seen as a potential threat capable of spreading misinformation on a massive scale. However, with more awareness of the topic and with individuals educated about deepfakes, the risks can be mitigated and people can be protected from their harms. Moral of the blog? Don’t believe everything you see, especially on the internet and social media. Do some research through your favorite search engine, like Google or Bing, to identify whether a video is a deepfake before sharing it with your friends and family.
