Deconstructing Deepfakes: The Ethical Implications of Altering Reality

Shubhi Upadhyay
Published in Kigumi Group · May 16, 2023
Photo by Steve Johnson on Unsplash

“Deepfake” has rapidly emerged as a buzzword in the media, drawing attention to its use cases and ethical implications. Just a week ago, the Republican Party in the United States posted a 30-second clip that used deepfake technology to depict fictional dystopian scenarios, aiming to show voters what the party believed the future would look like if Joe Biden were re-elected. This is just one example of the impact of deepfake technology, a form of artificial intelligence (AI) that involves manipulating content to create hyper-realistic images, videos, and other types of media.

While deepfake technologies can have positive and entertaining uses, including CGI-style effects in film production, they can also be used to spread misinformation. This can be extremely damaging to both individuals and society as a whole. For example, deepfakes can be used to coerce or blackmail people into doing things they don’t want to do, trick people into believing things that aren’t real, and frame individuals for actions they didn’t commit.

Creating a Deepfake

With this in mind, let’s explore the process of how they are made. As noted in Miles Li’s AI and Misinformation: Why Sometimes Having an Answer is Worse than “I Don’t Know”, it is relatively easy to make a deepfake — anyone with the right software can make one at the moment, and it’s only getting easier. This is why it is critical to understand how deepfakes are created, which can inform the process of their detection and prevent their misuse.

To create a deepfake, a creator first trains a neural network on large amounts of real media of a person, such as video footage of their appearance or audio samples of their speech. From this data, the neural network learns to mimic characteristic features of the person, such as the way they breathe or how they look under different lighting. The model then generates a new media item (e.g. a video, voice recording, or other media form) that replicates the person of interest, for instance by overlaying the person’s face onto another individual or fabricating an artificial audio recording in the person’s voice.
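The “learn a person’s features, then generate swapped media” process described above can be sketched in miniature. Many face-swap tools use one shared encoder with a separate decoder per identity; the sketch below shrinks that idea to single linear layers over synthetic “face” vectors. All of the names, dimensions, and data here are illustrative assumptions, not the pipeline of any specific tool:

```python
import numpy as np

rng = np.random.default_rng(42)
DIM, LATENT, N = 16, 4, 200   # toy "image" size, latent size, samples per person

# Synthetic stand-ins for aligned face crops of two people: each person's
# "faces" cluster around a distinct mean appearance with small variation.
mean_a, mean_b = rng.normal(0, 1, DIM), rng.normal(0, 1, DIM)
faces_a = mean_a + 0.1 * rng.normal(0, 1, (N, DIM))
faces_b = mean_b + 0.1 * rng.normal(0, 1, (N, DIM))

# One shared encoder plus one decoder per identity -- the classic
# face-swap layout, shrunk to one linear layer each.
enc   = 0.3 * rng.normal(0, 1, (DIM, LATENT))
dec_a = 0.3 * rng.normal(0, 1, (LATENT, DIM))
dec_b = 0.3 * rng.normal(0, 1, (LATENT, DIM))

def train_step(x, enc, dec, lr=0.02):
    """One gradient-descent step on mean-squared reconstruction error.
    NumPy's in-place `-=` mutates the caller's weight arrays."""
    h = x @ enc                              # encode into the shared latent space
    err = h @ dec - x                        # decode and compare to the input
    grad_dec = h.T @ err / len(x)            # MSE gradient w.r.t. the decoder
    grad_enc = x.T @ (err @ dec.T) / len(x)  # MSE gradient w.r.t. the encoder
    dec -= lr * grad_dec
    enc -= lr * grad_enc
    return float((err ** 2).mean())

init_a = train_step(faces_a, enc, dec_a)     # loss near the start of training
for _ in range(2000):
    loss_a = train_step(faces_a, enc, dec_a)
    loss_b = train_step(faces_b, enc, dec_b)

# The "swap": encode one of person A's faces, decode it with B's decoder.
swapped = (faces_a[0] @ enc) @ dec_b
```

The design point the sketch preserves is that the encoder is trained on both people, so it learns a shared representation; swapping which decoder renders that representation is what produces the face swap. Real systems replace the linear layers with deep convolutional networks and train on thousands of video frames.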

The Future of Deepfakes?

Most people assume that generative adversarial networks, or GANs, will be largely responsible for creating deepfakes in the future. A GAN is a pair of neural networks that work in tandem: a generator that produces candidate media and a discriminator that judges how convincing it is. For example, if the generator were tasked with replicating a work of art, the discriminator would score how accurate each attempt is. The two networks train against each other over millions of iterations, each round pushing the generator toward a more faithful replication. While GANs aren’t yet commonly used in this manner, AI researchers expect that these networks will move past replicating existing media to generating entirely new media, an advancement that may include the creation of deepfakes.
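The generator-versus-discriminator dynamic can be made concrete with a deliberately tiny, self-contained sketch: the “real” data are just numbers near 5.0, the generator is a single scalar, and the gradients of the standard GAN losses are written out by hand. This is an illustration of the adversarial objective, not a production GAN:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator must learn to imitate: samples near 5.0.
real = rng.normal(5.0, 0.2, 256)

g = 0.0           # generator parameter: fake samples are g + noise
w, b = 1.0, 0.0   # discriminator: D(x) = sigmoid(w*x + b), its belief x is real

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.05
g_path = []
for _ in range(1500):
    fake = g + rng.normal(0.0, 0.2, 256)
    s_r = sigmoid(w * real + b)   # discriminator's scores on real samples
    s_f = sigmoid(w * fake + b)   # discriminator's scores on fakes

    # Discriminator step: descend the binary cross-entropy loss
    # -log D(real) - log(1 - D(fake)), gradients derived by hand.
    grad_w = -np.mean((1 - s_r) * real) + np.mean(s_f * fake)
    grad_b = -np.mean(1 - s_r) + np.mean(s_f)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: descend -log D(fake), i.e. make D call fakes real.
    grad_g = -np.mean(w * (1 - s_f))
    g -= lr * grad_g
    g_path.append(g)
```

Run step by step, the generator’s output drifts from 0 toward the real data around 5.0, because fooling the discriminator requires producing samples it cannot distinguish from real ones. Real GANs play exactly this game with deep networks over images or audio instead of scalars, which is why their training is notoriously unstable: the two sides chase each other rather than descending a single fixed loss.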

Overall, knowing how deepfakes are created can aid in developing and deploying them in ethical ways. For example, it can help researchers develop algorithms to detect whether an item of media is real or a deepfake. Additionally, it can help the general public develop more awareness of how deepfakes are made, equipping them with the knowledge to discern between real and fake media. As a result, we can work towards a future in which deepfakes are used in increasingly ethical and positive ways while protecting susceptible individuals from harm.
