Promise and Peril: AI in the Light of the 2024 Elections

Nare Hakobyan
SocialLab AI
Jan 17, 2024

2024 is set to be a year of critical elections across the globe. As we approach this pivotal year, public concern is growing about the digital landscape voters will encounter. Against this backdrop, the spotlight intensifies on Artificial Intelligence and its role in democracy, elections and society. What role will that be: hero or villain?

As an advocate for the implementation of AI and its vast and expansive capabilities, I am also acutely conscious of AI’s potential risks when it falls into the “wrong hands”. We must tread a fine line, remaining optimistic and critically vigilant at once.

While “fake news”, “disinformation” and “misinformation” have been long-standing issues throughout history, we are now seeing a shift: an unprecedented increase in the volume, quality and spread of this phenomenon, thanks to our dear friend and foe, generative AI. A paper by the OECD suggests that such automated disinformation leaves those without media literacy and critical-thinking skills vulnerable, struggling to interpret the media they consume.

We are all familiar with its powers: with a few keystrokes, one can conjure up wonders or wreak havoc. Chillingly, anyone can now clone a voice, forge a video, or flood the internet with indistinguishable manufactured stories with ease. Highlighting the gravity of the situation, a 2022 Europol report underscores that, more often than not, “disinformation is being spread with the intention to deceive”. We have seen this trend not only in the political sphere but across society.

The same report cites estimates that as much as 90% of online content may be generated by AI by 2026, heightening the urgency of understanding and combating intentional disinformation. Chatham House captures this shift with the term “democratizing disinfo”: generative AI has made it easier and faster to create “simple, cheap and more convincing” disinformation.

With the rapid sophistication and advancement of generative AI, fact-checkers fear it will only become harder to distinguish AI-generated messages, as language models sound increasingly human-like. A study by the Center for Countering Digital Hate found that Google’s Bard generated believable misinformation in 78 of the 100 narratives it was tested on.

Campaigns, too, must be scrutinized, especially as our everyday data is more political than ever. A senior fellow at the Brookings Institution warns that campaigns are accessing personal data to target swing voters, leveraging what you read or watch to understand the issues you care about and delivering carefully calculated, tailored messages in response.

Fears about AI in politics have been present since at least the 2010s, but the issue has now surfaced in mainstream dialogue and become especially concerning due to the technology’s widespread accessibility and affordability. A UChicago Harris/AP-NORC poll, for example, indicates that 58% of respondents believe the use of AI will exacerbate the spread of misinformation during the 2024 presidential election.

While this piece has taken a quick look at some of the concerns AI’s advancement raises for elections, it is also imperative to highlight some of the opportunities for AI to improve democratic processes and elections.

So what kind of opportunities are we talking about? They can be as simple as using large language models to summarize and simplify politics and political jargon for better understanding, as in the sketch below. And in another stroke, understanding your citizens through their data does not have to be deemed “evil” or malicious; it can also be a great way for politicians to understand the needs of their people and better cater to those needs. The EU Parliament noted in its brief that “such an alignment between citizens and politicians could change the face of electoral campaigns and considerably improve the policymaking process”.
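
For those curious what the summarization idea might look like in practice, here is a minimal sketch. It assumes the open-source Hugging Face transformers library and the publicly available facebook/bart-large-cnn summarization model; both are illustrative choices of mine, not tools named anywhere in this piece, and a real civic tool would need far more care around accuracy and bias.

# Minimal sketch: plain-language summaries of jargon-heavy political text.
# Assumes the Hugging Face `transformers` library and the public
# `facebook/bart-large-cnn` model (illustrative choices only).
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

jargon_heavy_text = (
    "The omnibus appropriations bill contains a continuing resolution that "
    "extends current discretionary spending caps, subject to sequestration "
    "triggers, pending reconciliation instructions agreed by both chambers "
    "and certification by the relevant budget committees."
)

# Produce a shorter, simpler restatement that a non-specialist can scan.
result = summarizer(jargon_heavy_text, max_length=60, min_length=15, do_sample=False)
print(result[0]["summary_text"])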

Therefore, with proper safeguards in place, AI has the potential to improve our existing systems and, even more inspiringly, to bring about better, more innovative ones that break down the barriers to democracy and elections currently plaguing us.

If you’re interested in learning more, several bodies, such as the EU, are working to build legal frameworks that address the dangers and mitigate the risks of AI, while also promoting trustworthy, transparent and accountable AI systems. Measures such as automated disinformation detection, instant fact-checking, and watermarking of AI-generated content are already in play.
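
To make “automated disinformation detection” a little more concrete, here is a toy sketch of the kind of first-pass screening such systems can perform. It again assumes the Hugging Face transformers library, this time with the facebook/bart-large-mnli zero-shot classifier; this is my own illustrative choice, not how any regulator or fact-checking body actually works, and real systems combine many more signals with human review.

# Toy sketch: flag a claim for human fact-checking with a zero-shot classifier.
# Assumes the Hugging Face `transformers` library and the public
# `facebook/bart-large-mnli` model (illustrative choices only).
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

claim = "Polling stations will be closed nationwide on election day."
candidate_labels = ["potentially misleading election claim", "ordinary factual statement"]

result = classifier(claim, candidate_labels=candidate_labels)

# Scores are only a triage signal; a flagged claim still needs human verification.
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")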

We are at a crossroads in developing and implementing AI systems that are trustworthy, transparent and, most importantly, not manipulated by the ill-intentioned. It is important to remember that AI’s potential to enhance democracy and other elements of society is immense, but so is the need for caution. We must be steadfast in our duty to create human-centric, ethically guided AI systems: ones that serve and revolutionize, rather than undermine, our society at large.

Finally, in the face of these challenges and opportunities, we stand on the brink of a new era. The journey ahead is unknown, which is both scary and exciting. Looking ahead, I would like to leave you with this: the power of AI and its potential for positive change depend not on its technical capabilities but on our collective will to guide it responsibly and ethically. So as we approach the 2024 elections, let us harness AI with wisdom and foresight to ensure it serves as a tool to transform our world.

Dedicated to AI, with cautious optimism, from your biggest fan and critic,

Visit the SocialLab Website

Follow SocialLab on:

LinkedIn, YouTube

Citations

  1. Fitzwilliam, Helen. “How AI Could Sway Voters in 2024’s Big Elections.” Chatham House, 29 Sept. 2023, www.chathamhouse.org/publications/the-world-today/2023-10/how-ai-could-sway-voters-2024s-big-elections.
  2. OECD (2023). “AI Language Models: Technological, Socio-economic and Policy Considerations.” OECD Digital Economy Papers, No. 352, OECD Publishing, Paris, https://doi.org/10.1787/13d38f92-en.
  3. Europol (2022). Facing Reality? Law Enforcement and the Challenge of Deepfakes. An observatory report from the Europol Innovation Lab, Publications Office of the European Union, Luxembourg.
  4. “Google’s New Bard AI Generates Lies.” Center for Countering Digital Hate (CCDH), 5 Apr. 2023, https://counterhate.com/research/misinformation-on-bard-google-ai-chat/#:%7E:text=Google%E2%80%99s%20new%20%E2%80%98Bard%E2%80%99%20AI%20generates%20false%20and%20harmful%20narratives%20on%2078%20out%20of%20100%20topics
  5. “Artificial Intelligence, Democracy and Elections.” Think Tank | European Parliament, 19 Sept. 2023, https://www.europarl.europa.eu/thinktank/en/document/EPRS_BRI(2023)751478.
  6. “UChicago Harris/AP-NORC Poll: There Is Bipartisan Concern About the Use of Artificial Intelligence in the 2024 Elections.” The University of Chicago, Harris School of Public Policy, 3 Nov. 2023, https://harris.uchicago.edu/news-events/news/uchicago-harrisap-norc-poll-there-bipartisan-concern-about-use-artificial.

