Big Tech’s Role in Protecting Elections in 2024 From AI

Gurman Dhaliwal
Bouncin’ and Behaving Blogs TOO
5 min read · May 3, 2024


Big Tech’s Responsibility to Mitigate Malicious Uses of Generative AI, Protect the Democratic Process, and Rebuild Public Trust

Photo by Clay Banks on Unsplash

There are already some 30,000 articles about the impact that generative AI (GAI) will have on the 2024 elections. We’re only a quarter of the way through the year and the effects are already visible. Elections in Slovakia were manipulated after deepfaked audio spread widely because Meta’s manipulated-media policies applied only to video.

India, the world’s largest democracy, is also grappling with deepfakes amid rising nationalism. China is experimenting with deepfakes in Taiwan and the U.S. North Korea reportedly uses proceeds from malicious cyber activity, increasingly aided by AI, to fund roughly half of its nuclear and missile program.

Timeline of AI Influence In Taiwan Elections by China, Image provided by author

A comprehensive effort to keep elections safe requires collaboration between citizens, industry, academia, and government. But big tech companies bear the largest share of the responsibility. They propelled the development of GAI tools and released them to the public for easy use without adequate safeguards against drawbacks such as misinformation. This election year is poised to be “the most consequential” in living memory, and tech companies are already failing. 160 rights groups across 55 countries are collectively calling on big tech to take greater measures to safeguard voters from misinformation and hate speech.

External pressure has been effective to some degree. Nearly every major tech company, including Amazon, Anthropic, Google, IBM, LinkedIn, McAfee, Microsoft, Meta, OpenAI, Snapchat, TikTok, and X, announced the Tech Accord. The accord lays out expectations for how the risks of malicious AI use should be mitigated, yet the signatories are failing to meet them. In practice, the accord serves as a shoddy attempt at self-regulation, in which the companies hope that acknowledging the problem relieves them of actually creating and enforcing a solution.

Image provided by author

Stronger Self-Regulation Is in Their Strategic Interest

Effective self-regulation serves the companies’ own interests: it could preempt more stringent government regulation and help rebuild public trust.

The Digital Services Act (DSA) is Europe’s attempt to protect users and ensure more transparency, backed by heavy financial penalties of up to 6% of a company’s global annual revenue. Its strictest obligations apply to nineteen designated platforms, including Meta, X, and TikTok; those found in noncompliance risk fines and could ultimately lose the ability to operate in Europe. While the U.S. has no plans to implement regulations this stringent, political activity aimed at reining in technology platforms has been picking up, and the country is moving closer to a TikTok ban.

Photo by Darren Halstead on Unsplash

This case could be considered an anomaly given its geopolitical implications. However, the core of the problem is the opacity of TikTok’s algorithm: as consumers, we do not know what data is collected or how it is used to shape the content we see. That lack of transparency erodes voter and consumer trust.

Customers (and Voters) Are Becoming Wary.

Adobe found that 84% of respondents are concerned about election integrity because online content is so vulnerable to manipulation. 76% say it is important to know whether content was generated by AI, and 70% believe it is becoming harder to distinguish credible content from misinformation. The stakes of widespread misinformation and scarce credible information are significant: NPR found that 64% of Americans believe U.S. democracy is at risk of failing.

As faith in government wanes and lawmakers come down on TikTok, public trust is at a critical point, and it matters commercially: McKinsey reports that nearly 40% of consumers have walked away from a company after learning their data was not protected.

Photo by Element5 Digital on Unsplash

What Should The Tech Companies Do Next?

1. Build Digital Trust in Organizations

Harvard Business Review notes that demand has risen for companies to demonstrate trustworthiness beyond what their customers expect and what is legally required. The solutions do not need to be technological; they can instead serve as guidelines for how technology is built and deployed. They can be encapsulated in the three broad pillars put forth by the World Economic Forum:

  • Security and Reliability
  • Accountability and Oversight
  • Inclusive, Ethical, and Responsible Use

2. Rebuild Trust and Safety Teams

In March 2024, the CEOs of X, Snap, and Discord revealed deep cuts to their trust and safety teams, while Meta and TikTok did not disclose exact staffing. These teams’ work promoting digital literacy, ensuring compliance, and developing and enforcing platform-specific policies is essential to protecting voters from misinformation on social media platforms.

3. Invest in GAI Detection Tools

GAI detection tools are advancing more slowly than the GAI models they are meant to catch, even as those models become easier to use. One reason is how the detectors are trained: they learn to flag the output of today’s models but work far less well on the next generation of tools. A lack of funding for detection research compounds the problem and limits innovation.
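To make that generalization gap concrete, here is a minimal, hypothetical sketch (not any company’s actual detector) of a supervised text detector: it is fit on human writing plus output from current models, then scored on a newer model’s output, where recall typically drops. The collections `human_texts`, `gen_n_texts`, and `gen_n1_texts` are placeholder datasets assumed for illustration.

```python
# Sketch: why a detector tuned to today's models can degrade on tomorrow's.
# Assumes three hypothetical text collections: human_texts (human-written),
# gen_n_texts (current GAI model output), gen_n1_texts (newer model output).

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def train_detector(human_texts, gen_n_texts):
    """Fit a simple human-vs-AI classifier on current-generation outputs."""
    texts = list(human_texts) + list(gen_n_texts)
    labels = [0] * len(human_texts) + [1] * len(gen_n_texts)  # 0 = human, 1 = AI
    vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features=50_000)
    X = vectorizer.fit_transform(texts)
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.2, random_state=0, stratify=labels
    )
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("in-distribution accuracy:", accuracy_score(y_test, clf.predict(X_test)))
    return vectorizer, clf

def evaluate_on_next_generation(vectorizer, clf, gen_n1_texts):
    """Score the same detector on a newer model's outputs (all truly AI-written).

    Recall here often falls, because the stylistic cues the detector learned
    belong to the older generation of models, not the newer one.
    """
    preds = clf.predict(vectorizer.transform(list(gen_n1_texts)))
    print("next-generation recall:", sum(preds) / len(preds))
```

The point of the sketch is not the specific classifier; any detector trained only on current-generation outputs inherits the same blind spot, which is why ongoing retraining and funding matter.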

Consumers and voters are increasingly skeptical of the democratic process, and they attribute much of that skepticism to the GAI advances technology companies have pushed into the world. Taking greater responsibility for protecting voters in the upcoming elections is therefore not only the right thing for these companies to do; rebuilding public trust is also key to their long-term viability.
