To see no longer means to believe
Deepfake technology is on the rise — but imitation or identity theft isn’t entirely new. Mehhma Mali digs deeper.
The use of deepfake technology is increasing as more companies devise different models.
It is a form of technology that lets a user upload an image and synthetically alter a video of a real person, or generate a picture of a person who does not exist. Many people have raised concerns about the harmful possibilities of these technologies. Yet the notion of deception at the core of this technology is not entirely new. History is filled with examples of fraud, identity theft, and counterfeit artworks, all of which rest on imitation or assuming a person’s likeness.
In 1846, the oldest gallery in the US, The Knoedler, opened its doors. By supplying art to some of the most famous galleries and collectors worldwide, it gained recognition as a trusted source of expensive artwork, including works by Rothko and Pollock. Unlike many other galleries, The Knoedler also allowed private buyers to purchase the pieces on display. Shockingly, in 2009, Ann Freedman, who had been appointed gallery director a decade prior, was accused of knowingly selling fake artworks. After several buyers sought authentication and valuation of their purchases for insurance purposes, the forgeries came to light. The scandal was sensational, not only because of the sheer number of artworks involved in a deception that lasted years, but also because millions of dollars were scammed from New York’s elite.
One of NYC’s grand art institutions fell as the gallery lost its credibility and eventually shut down. Although the forgeries were near-perfect replicas, almost indistinguishable from the originals, the artist’s emotion and originality were missing, and with them the meaning of the works. As a result, the artworks lost both sentimental and monetary value.
Yet this betrayal is arguably less immoral than stealing someone’s identity or committing fraud by forging their signature. Unlike a painting, when someone’s identity is stolen, the thief gains the power to define how that person is perceived. Catfishing, for example, lets a person misrepresent not only themselves but also the individual whose identity they have assumed, ascribing values and activities to that person’s being and changing how they are represented online.
Similarly, deepfakes allow people to create entirely fictional personas or take the likeness of a person and distort how they represent themselves online. Online self-representations are already augmented to some degree by the person. For instance, most individuals on Instagram present a highly curated version of themselves that is tailored specifically to garner attention and draw particular opinions.
But, when that persona is out of the person’s control, it can spur rumours that become embedded as fact due to the nature of the internet. An example is that of celebrity tabloids. Celebrities’ love lives are continually speculated about, and often these rumours are spread and cemented until the celebrity comes out themselves to deny the claims. Even then, the story has, to some degree, impacted their reputation as those tabloids will not be removed from the internet.
Maintaining control of one’s online image is paramount, as it preserves a person’s autonomy and ability to consent. When a deepfake is created of a real person, it strips away both.
Before delving further into the ethical concerns, understanding how this technology is developed may shed light on some of the issues that arise from such a technology.
The technology is derived from deep learning, a branch of artificial intelligence based on neural networks, which process data through successive layers mapping inputs to outputs. A deepfake model is built from two competing networks known as the generator and the discriminator. The former creates fake content; the latter must judge whether each piece of content is real or fake. The discriminator’s verdicts are fed back through both networks: when the discriminator catches a forgery, the generator adjusts its weights to produce something more convincing, and when it is fooled, the discriminator adjusts its own. Together, this arrangement is known as a generative adversarial network (GAN). Through this contest, the generator learns the patterns of real images well enough to fabricate new ones.
This type of model has well-known failure modes. If the discriminator becomes too good too quickly, it gives the generator almost no useful feedback and training stalls; conversely, the generator can collapse into producing a narrow range of outputs that happen to fool the discriminator. Beyond these technical difficulties, however, the technology gives rise to several serious ethical concerns.
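The generator–discriminator contest described above can be sketched in miniature. The toy example below is not an image model: the “real data” are just numbers drawn near 3.0, the generator is a hypothetical one-line function g(z) = w·z + b, and the discriminator is a logistic classifier d(x) = sigmoid(a·x + c). All names and values are illustrative assumptions, chosen only to show the adversarial feedback loop.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Numerically safe logistic function.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)

# Discriminator d(x) = sigmoid(a*x + c): outputs its belief that x is real.
# Generator    g(z) = w*z + b:          turns noise z into a "forgery".
a, c = 0.1, 0.0   # discriminator parameters (illustrative starting values)
w, b = 1.0, 0.0   # generator parameters
lr = 0.05         # learning rate

def fake_mean():
    # Average of 200 generator samples, to track training progress.
    return sum(w * random.uniform(-1, 1) + b for _ in range(200)) / 200

before = fake_mean()  # generator starts far from the real data (mean 3.0)

for step in range(2000):
    real = random.gauss(3.0, 0.5)   # one sample of "real" data
    z = random.uniform(-1, 1)       # noise input
    fake = w * z + b                # the generator's forgery

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0
    # (gradient ascent on log d(real) + log(1 - d(fake))).
    s_real = sigmoid(a * real + c)
    s_fake = sigmoid(a * fake + c)
    a += lr * ((1 - s_real) * real - s_fake * fake)
    c += lr * ((1 - s_real) - s_fake)

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator
    # (gradient ascent on log d(fake)).
    s_fake = sigmoid(a * fake + c)
    grad = (1 - s_fake) * a
    w += lr * grad * z
    b += lr * grad

after = fake_mean()  # forgeries should now sit much closer to 3.0
print(round(before, 2), round(after, 2))
```

The point of the sketch is the feedback loop itself: neither network is told what the real data look like directly, yet the generator’s output drifts toward the real distribution purely by trying to defeat the discriminator. Real deepfake systems replace these one-parameter functions with deep convolutional networks, but the adversarial structure is the same.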
Firstly, there have been concerns regarding both political safety and women’s safety. Deepfake technology has advanced to the point where many generated photos can be compiled into a video. At first this seemed harmless, as many early adopters in 2019 used the technology to make videos of politicians and celebrities lip-syncing to songs. However, it has also been used to create videos of politicians saying provocative things they never said.
Unlike Photoshop and other editing apps, which demand considerable skill or time to doctor an image, deepfake technology is far more straightforward, as it is attuned to mimicking a person’s voice and movements. Couple that precision with the vast reach of the internet, and these videos risk circulating in echo chambers and epistemic bubbles where viewers may never learn they are fake. One primary concern, therefore, is that deepfake videos can be used to assert or consolidate dangerous thinking.
These tools could be used to edit photos or create videos that damage a person’s online reputation, and even if the material is later refuted or proved fake, the images and their effects remain. Recently, countries such as the UK have seen demands for legislation limiting deepfake technology and its use in violence against women. In particular, a slew of apps can “nudify” any individual, and they have been used predominantly against women; all a user has to do is upload a photograph. One version of such a website gained over 35 million hits in a few days. Used in this manner, deepfakes create non-consensual pornography that can be deployed to manipulate women, which is why campaigners in the UK have called for stronger criminal laws on harassment and assault. As people’s lives increasingly merge with the virtual world, regulating these technologies becomes ever more pertinent to protecting individuals.
However, as with any piece of technology, there are also positive uses. Deepfake technology can create learning tools for medicine and education, serve as an accessibility feature, recreate figures from history, and enrich gaming and the arts. It can, for instance, render entirely fake patients whose synthetic data can be used in research, protecting real patients’ information and autonomy while still providing researchers with relevant data. Deepfake tech has even been used in marketing, helping small businesses promote their products by pairing them with celebrities.
Deepfake technology was first explored by academics but popularised by online forums, and its earliest widespread use was not benign: it was used to depict celebrities in compromising positions. The genuine benefits of the technology were only conceptualised by various tech groups after its foundations had already been developed.
Such technology often comes to fruition on the strength of a developer’s will alone and, given the lack of regulation, is typically released straight onto the internet.
While the technology has extensive benefits, stricter regulations are needed, and people who abuse its scope ought to be held accountable. As our present reality merges with virtual spaces, a person’s online presence will only grow in importance, and stronger protections for people’s online personas must follow.
Users should be held accountable for manipulating individuals and stripping away their autonomy by using their likeness; developers, more specifically, must be held responsible when they use their knowledge to build deepfake apps that actively harm.
To avoid a fallout like Knoedler’s, where distrust, scepticism, and hesitancy took root in the art community, individuals must be alerted whenever deepfake technology is employed; even where the use is positive, it should be transparently disclosed. Some websites already teach users how to tell real images from fake, and others process images to determine their validity.
Overall, this technology can help individuals gain agency; however, it can also curtail another person’s right to autonomy and privacy. This type of AI brings unique awareness to the need for balance in technology.
Mehhma recently graduated from NYU, having majored in Philosophy and minored in Politics, Bioethics, and Art. She is now continuing her studies at Columbia University, pursuing a Master of Science in Bioethics. She is interested in refocusing the news to discuss why and how people form their personal opinions.