Artificial Results: The Threat of Deepfake Technology to the Democratic Process

By: Ethan Wang, Queen’s University

In the aftermath of the tragic terrorist attack at Crocus City Hall in Moscow on March 22, 2024, a wave of disinformation regarding the incident has inundated public discourse. Although ISIS-K, an affiliate of the Islamic State based primarily in Afghanistan, claimed responsibility for the attack on Telegram, members of the Russian government have tried to attribute blame to Ukraine.

Shortly after the attack, President Vladimir Putin claimed that the four alleged attackers were attempting to flee to Ukraine through a “window” facilitated by the Ukrainian side. The suspected gunmen, all nationals of Tajikistan, were apprehended in Bryansk Oblast, near the borders with Belarus and Ukraine. Alexander Bortnikov, director of Russia’s Federal Security Service (FSB), supported Putin’s claims, stating that the alleged attackers had been assisted by Western and Ukrainian special services.

Amidst these allegations, NTV, a major Russian television channel operated by the media subsidiary of Russia’s energy giant Gazprom, aired a deepfake video that purportedly showed the Secretary of the National Security and Defence Council of Ukraine, Oleksiy Danilov, claiming responsibility for the attack. Danilov was depicted referring to the events in Moscow as “fun,” noting that Ukraine would “arrange such fun for them more often”. Kyiv swiftly debunked the video, which had been fabricated from earlier clips aired by the Ukrainian news channel 1+1, denying any involvement in the attack and instead criticizing Russia’s security services for their failure to prevent it.

The use of deepfake technology by Russia in an attempt to shift the blame for the country’s worst terrorist attack in over two decades onto Ukraine sets a dangerous precedent. As generative AI becomes increasingly sophisticated, Western intelligence officials are sounding the alarm about its potential exploitation by foreign adversaries to disseminate disinformation for their own gain. Of particular concern is the manipulation of democratic processes, with the Canadian Centre for Cyber Security (CCCS) warning that generative AI could be utilized to influence Canada’s federal election next year.

But what exactly are “generative AI” and “deepfake” technology? Put simply, generative AI refers to artificial intelligence capable of producing data in response to prompts by using generative models. These models are trained to learn patterns and structures from input data, then generate new data with similar characteristics. Deepfakes, on the other hand, involve the manipulation of existing media using generative AI to create synthetic content.
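To make the notion of a generative model concrete, consider a toy sketch in Python of the simplest possible one: a character-level Markov chain that learns which characters tend to follow which contexts in a sample text, then produces new text with similar statistical structure. Modern deepfakes rely on deep neural networks vastly more powerful than this, but the workflow, train on patterns, then sample, is the same; everything in the snippet is illustrative.

```python
import random
from collections import defaultdict

def train(corpus, order=3):
    """Record which character tends to follow each `order`-length context."""
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        model[corpus[i:i + order]].append(corpus[i + order])
    return model

def generate(model, order=3, length=60):
    """Sample new text that mimics the patterns of the training data."""
    text = random.choice(list(model))      # start from a context seen in training
    for _ in range(length):
        followers = model.get(text[-order:])
        if not followers:                  # dead end: this context was never seen
            break
        text += random.choice(followers)
    return text

corpus = ("generative models learn patterns from input data and then "
          "produce new data with similar characteristics. ") * 5
print(generate(train(corpus)))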

Deepfake technology, like many advancements, possesses dual utility. On the one hand, it holds promise for furthering medical research, bringing historical events to life, and bridging language barriers. In 2018, tech giant NVIDIA, in collaboration with the Mayo Clinic and the MGH & BWH Centre for Clinical Data Science, successfully trained a neural network to identify abnormalities in brain scans using MRIs created with generative AI. The practical implications of this technology include expedited diagnosis and treatment of critical conditions, saving doctors precious time when it matters most.
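The internals of NVIDIA's system are beyond the scope of this piece, but the underlying idea, padding out scarce medical training data with synthetic samples so a classifier has more examples to learn from, can be sketched in miniature. In the illustrative Python snippet below, a simple Gaussian sampler stands in for the GAN, and all data and numbers are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-ins for image features: "healthy" scans are plentiful, "abnormal" scans scarce.
healthy = rng.normal(loc=0.0, scale=1.0, size=(500, 10))
abnormal = rng.normal(loc=1.5, scale=1.0, size=(20, 10))   # the rare class

# "Generative model": fit the rare class and sample synthetic look-alikes.
# (A real system would use a GAN; a Gaussian keeps the sketch self-contained.)
mean, cov = abnormal.mean(axis=0), np.cov(abnormal, rowvar=False)
synthetic = rng.multivariate_normal(mean, cov, size=480)

X = np.vstack([healthy, abnormal, synthetic])
y = np.array([0] * 500 + [1] * (20 + 480))
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Evaluate on fresh, purely real data.
test_X = np.vstack([rng.normal(0.0, 1.0, (100, 10)),
                    rng.normal(1.5, 1.0, (100, 10))])
test_y = np.array([0] * 100 + [1] * 100)
print(f"accuracy on real data: {accuracy_score(test_y, clf.predict(test_X)):.2f}")
```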

On the other hand, deepfakes can be exploited to propagate disinformation, perpetuate financial fraud, and produce revenge porn. In 2019, a British energy firm was defrauded of €220,000 when scammers used a deepfake audio clip to deceive its CEO. Believing the money was destined for the firm’s German parent company, the CEO wired it to a Hungarian account.

With the impending presidential election in the United States this year and Canada’s federal election scheduled for the following year, concern is growing regarding the potential misuse of deepfake technology to manipulate democratic processes. The Congressional Research Service, a public research arm of the United States Congress, released a report warning that foreign actors could exploit deepfake technology to interfere with the upcoming election. The report cautions: “State adversaries or politically motivated individuals could release falsified videos of elected officials or other public figures making incendiary comments or behaving inappropriately. Doing so could, in turn, erode public trust, negatively affect public discourse, or even sway an election.”

The Canadian Security Intelligence Service (CSIS) echoed similar concerns, highlighting in a 2023 publication how deepfakes facilitate the spread of disinformation, thereby manipulating public opinion and potentially swaying election results.

Western liberal democracies have already witnessed the deployment of deepfake technology in attempts to influence elections. During the 2023 Slovak parliamentary election, a deepfake audio recording of a conversation between the leader of the Progressive Slovakia party, Michal Šimečka, and a local journalist circulated online. In the recording, Šimečka can purportedly be heard discussing buying votes from the country’s Roma minority. Šimečka went on to lose the election, and though it is impossible to ascertain whether the deepfake audio had any tangible influence, the incident’s mere existence underscores the inherent risks associated with such technology.

So, how can we safeguard against the influence of deepfakes and generative AI in the lead-up to Canada’s federal election? The Communications Security Establishment (CSE), Canada’s national cyber intelligence agency, possesses the authority to conduct defensive cyber operations that include removing misleading content from online platforms. Additionally, the collaborative efforts of the CSE, CSIS, the RCMP, and Global Affairs Canada (GAC) aim to disseminate intelligence regarding any potential election interference. CSE chief Caroline Xavier stated that Canada’s use of paper ballots in the election affords it a degree of protection against online interference.

Education, particularly enhanced media literacy, emerges as a pivotal strategy for mitigating the influence of deepfake disinformation. The CCCS has published a guide on identifying misleading information, offering strategies for critically assessing the validity of online content. Tech companies are doing their part as well: YouTube requires creators to disclose when realistic content has been created using AI, mirroring similar policies in TikTok’s community guidelines. Furthermore, deepfake detection software is continuously being developed, representing a step toward countering this emerging threat.
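One published family of detection techniques looks for the statistical fingerprints that generators leave behind, such as anomalies in an image's frequency spectrum. The Python sketch below, a rough illustration rather than a working detector, computes a radially averaged power spectrum, the kind of feature a real detector might feed to a classifier; the example images and the "high-frequency share" statistic are assumptions for demonstration.

```python
import numpy as np

def radial_power_spectrum(image):
    """Average the 2-D Fourier power spectrum over rings of equal frequency."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.indices(spectrum.shape)
    radius = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    # Mean power at each radius (ring), from the centre outward.
    totals = np.bincount(radius.ravel(), weights=spectrum.ravel())
    counts = np.bincount(radius.ravel())
    return totals / np.maximum(counts, 1)

rng = np.random.default_rng(1)
# Smooth random surface as a stand-in for a natural image (low-frequency heavy).
natural_like = rng.normal(size=(128, 128)).cumsum(axis=0).cumsum(axis=1)
# Block-upsampled noise as a stand-in for generator grid artifacts.
upsampled = np.kron(rng.normal(size=(64, 64)), np.ones((2, 2)))

for name, img in [("natural-like", natural_like), ("upsampled", upsampled)]:
    spec = radial_power_spectrum(img)
    high_freq_share = spec[len(spec) // 2:].sum() / spec.sum()
    print(f"{name}: high-frequency share = {high_freq_share:.4f}")
```

Production detectors train classifiers on thousands of such features across real and generated media; the point here is only that synthetic content can differ from natural content in measurable, machine-checkable ways.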

The advent of generative AI and deepfake technology undoubtedly holds the potential to positively impact humanity across various domains. Nevertheless, it is imperative to recognize that the same capabilities that empower this technology can be exploited by nefarious actors. In a world where manipulating and disseminating information has never been easier, concerted efforts are essential to mitigate the detrimental effects of deepfake disinformation on our society.

--

Centre for International and Defence Policy
Contact Report

The CIDP is part of the School of Policy Studies at Queen’s University and is one of Canada’s most active research centres on international security.