Big Tech and 2024

Melissa Ryan
Published in CtrlAltRightDelete
Oct 1, 2023 · 4 min read

2024 is a big election year, and not just in the U.S. More than 50 countries around the world will hold national elections. Billions of people will consume news and information about politics and elections on social media or connect with others about civics and politics online. It’s an opportunity for the tech platforms to show the world just how far they’ve come and just how much progress has been made countering disinformation, preventing hate, harassment, and harm — ultimately ensuring election integrity across the globe.

But that’s not what’s happening. Here’s what you need to know:

Tech companies are quietly rolling back their misinformation policies and have gutted their trust and safety teams during layoffs. The platform formerly known as Twitter has made the most recent news for this. NBC News reports that Twitter (I refuse to call it X) slashed both its disinformation and election integrity teams mere weeks after the company claimed it was hiring folks for similar positions. Reuters reports that Twitter also disabled a feature that allowed users to report election disinformation to the platform. But Elon Musk’s antics and the constant news coverage of them don’t leave much oxygen for reporting on similar backsliding from other tech platforms. If anything, Musk gives the other platforms cover for their own misdeeds.

The Right continues to use its power to attack social media researchers and efforts to curb disinformation, and it has racked up some pretty big wins. On its face, this is an American problem, but it’s going to have downstream global effects. As I’ve written about previously, the Right is using its slim majority in the House — led by Rep. Jim Jordan — to attack the credibility of researchers and the institutions they’re associated with, harassing them with investigations and constant demands for information. They’ve also targeted communications these institutions have had with the platforms and the US government about disinformation and election integrity. These efforts are having an impact: per the Washington Post this week, institutions that study disinformation are buckling under the pressure, scrambling to figure out how to continue their work.

A handful of court cases will almost certainly change the landscape for tech companies and the US government multiple times over the next year, complicating efforts to preserve election integrity. I’m not even going to attempt to break this one down for y’all, especially since there’s been news every day this week on this front, but Vox’s Ian Millhiser has a helpful explainer that gets updated frequently. I highly recommend taking 5 minutes to read it.

All of the above only makes it easier to spread hate, harassment, and harm online. That’s a problem generally, but we live in a world where targeted harassment, hate towards marginalized communities, and incitement to violence are commonplace political strategies. Authoritarian candidates and political organizations across the globe use these strategies to attack the opposition or any institution, community, or person that gets in their way. Social media has long been their distribution channel of choice, and these attacks have increased over the last few years. As tech companies roll back their policies and enforcement, attacks and incitement become even easier to deploy.

I hate that we have to have this fight again, but it’s important to remember that we’re not starting back at square one. Civil society groups across the globe have amassed a lot of knowledge, built coalitions, and know Big Tech’s playbook by heart. The EU’s Digital Services Act is now in effect; it covers platforms with more than 45 million EU users and should help protect people from election disinformation while holding social media companies accountable for the harm they cause online. We’ve got more tools in our toolbox than we’ve had in the past.

This week, a number of civil society organizations, led by Accountable Tech, released Democracy By Design: A Content-Agnostic Election Integrity Framework for Online Platforms. The report offers a “framework meant to avoid political landmines and broadly resonate across nations with distinct laws and cultures, and platforms with incongruous architecture and resources — a consensus roadmap to enhance systemic resilience against election threat.”

The report includes three major planks:

  • Bolstering Resilience, which focuses on ‘soft interventions’ that introduce targeted friction and context to mitigate harm;
  • Countering Election Manipulation, which outlines bulwarks against evolving threats posed by malign actors and automated systems; and
  • Paper Trails, which highlights key transparency measures needed to assess systemic threats, evaluate the efficacy of interventions, and foster trust.

I like this framework, particularly since it avoids the use of “disinformation,” a term I think is overused and often not helpful. Will bad actors and authoritarians immediately attempt to politicize and weaponize this framework if it catches on? Of course. Will tech companies attempt to dismiss and discredit it? Absolutely. But the way the framework is written and the research it cites (often from the tech companies themselves) makes that more of a challenge. It’s a good starting point for ramping up advocacy as we head into 2024.

The above article is an excerpt from Ctrl Alt-Right Delete, a newsletter devoted to covering the rise of far-right extremism, white nationalism, disinformation, and online toxicity, delivered on a weekly basis to more than 16,000 subscribers.

Subscribe to Ctrl Alt-Right Delete Here

