The Effect of Deepfake AI on the Dissemination of Information Through New Media

Muhammad Umar Ali
Published in Digital Diplomacy
Jun 11, 2020 · 8 min read

An opinion piece on deepfake AI, its ethical dilemmas, and how to mitigate them.

Preface

This is a more formal piece of writing. I wrote it this way because I want to make a real contribution to the discussion of deepfake AI and the ethical issues surrounding it. The writing is therefore styled as an academic essay, complete with a bibliography, and some readability may have been sacrificed for a more professional presentation of thought.

Introduction

René Descartes, a 17th-century philosopher, proposed a thought experiment in which a hypothetical demon could make one experience a reality that does not exist. Descartes then claimed that our perception of reality through the senses is unreliable and ought not to be trusted. Today, the advent of deepfake AI affirms Descartes’ claim and delivers the power to distort reality not only to hypothetical demons but also to anyone with the right set of computer software. Deepfake AI, a relatively new process for editing video, has increased the accessibility of media falsification: it allows video to be altered in ways that make it indistinguishable from genuine footage. While media falsification already exists, deepfake AI changes the scope, scale, and sophistication of the technology involved. This advancement threatens the authenticity of video and compromises our grasp of reality through sensory experience, which makes it all the more important for the general public to question the veracity of political videos. Moreover, individual agency becomes increasingly important, not only with respect to the content of the media we absorb but also to the medium and mode in which it is delivered. In this article, I will examine deepfake AI by exploring the historical precedents of media falsification and deepfake AI’s socio-political relationship with users of technology, and explain how public deliberation grants clarity against the false narratives it promotes.

Studying Historical Precedents and Reviewing Case Studies

One specific historical precedent for media falsification is the rampant censorship of the Soviet era, used to silence Joseph Stalin’s political rivals. In his book The Commissar Vanishes, David King notes that “So much falsification took place during the Stalin years…” (Ch. 1) and that the alteration of media was a means of controlling public perception and memory. Specifically, King finds Soviet dissidents, such as Leon Trotsky, completely removed from official Soviet photographs through airbrushing and painting over faces. A large part of why Stalin’s photographic vandalism manipulated the general public so effectively was the novelty of photography at the time: awareness of photographic editing was almost non-existent, so no one thought to question the authenticity of a photo. This had serious socio-political ramifications, such as the oppression of the Left Opposition, a rival faction within the Soviet Communist Party.

Examples of photo alteration and media falsification during the Soviet era

The fundamental premise underlying both deepfake AI and Stalin’s photographic manipulation, and what makes both effective at spreading fake news, is novelty. Another, more modern example is Adobe Photoshop: prior to its emergence in 1990, images were for the most part taken as reliable pieces of evidence, as was evident in their accepted use in courts of law, journalism, and even scientific literature. This changed with the introduction of Photoshop; much of today’s skepticism toward digital images stems from how accessible Photoshop makes photo manipulation. Simply possessing an incriminating photo or dubious image is no longer a reliable way of ascertaining the truth, since the photo could easily have been doctored. Likewise, deepfake AI introduces that same accessibility but with greater scope and sophistication; as John Fletcher notes in his article “Deepfakes, Artificial Intelligence, and Some Kind of Dystopia: The New Faces of Online Post-Fact Performance”: “Deep fakes mean[s] that anyone with a sufficiently powerful laptop [can] fabricate videos practically indistinguishable from authentic documentation” (456).

On a side note, the article cited above is well worth a skim, if not a full read. It provides an excellent math-free introduction to AI and makes several strong points about the evolving deepfake dilemma.

I agree with Fletcher that the accessibility of deepfakes has allowed for the increased production of disingenuous media; however, I cannot agree with the broader point that anyone possesses that ability. Using the software demands a high level of technical understanding, which deters most users who lack that skill, much as producing convincing Photoshop images is a skill set that must be learned and developed. Using deepfake AI is a skill set unto itself: even without the mathematical background, simply implementing the algorithm on a desired data set takes a certain level of understanding, as the sketch below suggests. Furthermore, the implications of this availability are especially grim given its socio-political ramifications. Travis Wagner and Ashley Blewer demonstrate these disastrous consequences in their article “The Word Real Is No Longer Real,” where they write: “[Deepfake AI] remains particularly troubling, primarily for its reification of women’s bodies as a thing to be visually consumed” (33). The article discusses how deepfake AI is being used to superimpose images of women onto pornographic videos; this misogynistic reduction of women to sexual objects for consumption is one of many troubling issues with deepfake AI.
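To make the required skill concrete, below is a minimal, hypothetical sketch (in PyTorch) of the shared-encoder, dual-decoder autoencoder that classic face-swap deepfakes are built on. The network sizes, training loop, and placeholder data here are illustrative assumptions rather than any particular tool’s implementation; real systems add face detection, alignment, blending, and far longer training.

```python
# Minimal sketch of the shared-encoder / dual-decoder autoencoder behind
# classic face-swap deepfakes. Illustrative only: real tools use face
# alignment, adversarial losses, and much larger networks.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.1),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the shared latent space -- one per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.1),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),         # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

# Training: each decoder reconstructs its own person's faces through the
# *shared* encoder, forcing a common representation of pose and expression.
# faces_a / faces_b are random stand-ins for real aligned face crops.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=5e-5,
)
loss_fn = nn.L1Loss()
for step in range(100):  # real training runs for many thousands of steps
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The "swap": encode person A's face, decode with person B's decoder,
# yielding B's identity with A's pose and expression.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

The trick is that the two decoders share one encoder, which is therefore forced to learn a person-agnostic representation of pose and expression; feeding person A’s encoding into person B’s decoder renders B’s face with A’s expression. Even in this toy form, curating aligned face crops and coaxing the training to converge is precisely the kind of work that deters a casual user.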

In essence, the novelty of deepfake AI is a key reason for its efficacy as a tool for spreading fake news and creating socio-political discord, as King showed with Soviet-era photography; however, the audience also plays a critical role in deterring fake news.

The Complex Socio-Political Nature of Deepfake AI and How to Mitigate False Narratives

Individual agency is integral to any methodology for resolving the pervasive deepfake problem. In recent discussions of deepfake AI and media legitimacy, a controversial issue has been the use of deepfakes by deviants to spread doubt and manipulate the public. On one hand, scholars like Fletcher argue that deepfake AI’s domination of new media is inevitable and that “Traditional mechanisms of trust and epistemic rigor prove outclassed” (457). On the other hand, scholars who share the views of Wagner and Blewer argue that deepfake AI, while concerning, can be countered by promoting digital literacy among professionals across relevant fields. In the words of Wagner and Blewer: “[We] reject[] the inevitability of deep fakes… visual information literacy can stifle the distribution of violently sexist deep fakes” (32). In sum, the issue is whether deepfake AI is unavoidable or whether a solution lies in technical literacy.

“Narrative dissonance between assertion of concern for deep fakes and actualities of prominence of deep fakes. Screen[shot] from November 6, 2018” (Wagner and Blewer, 39)

My own view is that the panic over deepfake AI is somewhat hyperbolized, but that this hyperbole is necessary to help inoculate the public against the technology’s influence. Understanding the malicious deeds that can be performed with this kind of technology serves as a cautionary tale that alerts the public, and that in itself is a powerful means of mitigating deepfake AI and its serious ramifications. For example, the credibility of photos was compromised once the public became aware of how easily Photoshop can doctor images. This might make it seem as though I favor Wagner and Blewer’s approach of technical literacy; however, I also agree with Fletcher that deepfake AI is imminent and nearly impossible to combat. Additionally, I concede Fletcher’s main argument that “a critical awareness of online media and AI manipulations of the attention economy must similarly move from the margins to the center of our field’s awareness” (471). This differs from Wagner and Blewer’s position in that they assert a technical literacy approach, whereas Fletcher simply advocates awareness and attention. I can see the merits of promoting visual information literacy as a viable solution, especially given the role the individual plays in the spreadability of fake news.

Ultimately, I would argue that public awareness of deepfake AI is a powerful enough mechanism to diminish its negative effects. I believe this mainly because anything beyond awareness has diminishing returns: educating people on the dangers of deepfake AI is akin to educating them on media deception generally. Some might object that public awareness could also incentivize individuals to exploit deepfake AI. I would reply that utilizing deepfake AI requires deep mastery and technical understanding of the software, and that public awareness would still allow false media to be recognized. To say that the public release of Photoshop drove deviants to spread misinformation simply isn’t true; it requires more thought and effort than most deviants would be willing to invest.

Concluding Remarks

I have examined deepfake AI by exploring previous methods of media falsification, deepfake AI’s ramifications within a socio-political context, and how public awareness combats the false narratives it promotes. The misrepresentation of information is a pertinent issue, especially in a culture fixated on fake news, where personal recordings of controversial events can instigate political turmoil. Deepfake AI introduces an entirely new tool for media falsification, but these technological advancements are simply successors to the doctored and deceptive media that came before them. Broadcasters of disinformation were distorting reality long before the arrival of deepfake AI, as King showed with Stalin’s Soviet-era censorship and as the introduction of Photoshop made plain. Moreover, widespread media coverage of deepfake AI can help protect the public from the technology’s influence: aware of its existence, the public can identify the false narratives promoted by fake news. Overall, deepfake AI threatens the credibility of the media we digest, but historical precedents and public awareness can help us both recognize and mitigate these concerns.

Works Cited

Fletcher, John. “Deepfakes, Artificial Intelligence, and Some Kind of Dystopia: The New Faces of Online Post-Fact Performance.” Theatre Journal, vol. 70, no. 4, Dec. 2018, pp. 455–471, doi:10.1353/tj.2018.0097.

King, David. The Commissar Vanishes: The Falsification of Photographs and Art in Stalin’s Russia. 1997. Tate Publishing, 2014.

Wagner, Travis L., and Ashley Blewer. “‘The Word Real Is No Longer Real’: Deepfakes, Gender, and the Challenges of AI-Altered Video.” Open Information Science, vol. 3, no. 1, 10 July 2019, pp. 32–46, doi:10.1515/opis-2019-0003.



I am an undergraduate student studying biomedical engineering at the University of British Columbia. Check out umarali.ca and my GitHub for more!