Reclaiming Truth in the Digital Age

StartingUpGood
StartingUpGood Magazine
6 min read · May 6, 2024

Explore additional insights from our comprehensive conference coverage of the 2024 Skoll World Forum.

The recent Skoll World Forum convened a panel of experts to discuss how to counter the spread of false or misleading information, hate speech, and conspiracy theories online.

2024 Skoll World Forum

Key Takeaways

Understand the Complexity of the Crisis

The digital age has brought unprecedented access to information but has also paved the way for the rapid spread of false or misleading information. The evolution of technologies like AI and deepfakes has made discerning truth from falsehood more challenging, undermining trust in media and the authenticity of information. The complexities of countering this rapidly expanding global crisis should not be underestimated.

I think I’m just deeply pragmatic about the fact that this is a very complex problem. How do you give the ecosystem the ability to have insight into the problem and (that) it’s changing all the time and (that) there are definitely lots of different issues, as you have heard here — all the way from business models to people on the front lines who are doing journalism and fact checking, or to access to research. There’s so many layers of different problems and each of them require a deep intervention.
— Mevan Babakar, Google

While it’s important to understand their differences, the outcomes of sharing mis-, dis-, and malinformation are often equally harmful, regardless of intent. (See our definitions of the different types of information below to level set.)

Demand Platform Accountability and Regulation

Social media algorithms often amplify content that is the most violent, hateful, or conspiratorial. Social media platforms have failed to adequately address their role in the spread of false or misleading information and hate speech. This is likely due to misaligned economic incentives and a lack of legal liability.

This is not just about the existence of disinformation, it’s about the mass acceleration of that disinformation at the expense of the truth because it engages people, keeps them on platform, maximizes ad revenues. That’s the real problem we’re dealing with here. Please don’t make this about “Someone said something stupid once on the internet,” that’s not the problem. The problem is platforms that amplify it and profit from it.
— Imran Ahmed, Center for Countering Digital Hate

Recent legislation like the EU Digital Services Act and the proposed Platform Accountability and Transparency Act in the US are efforts to force greater transparency and accountability from tech companies. However, there are concerns that authoritarian governments may abuse these regulations to stifle dissent and free speech.

So I am all for platform accountability, but we are also dealing with other actors, right? So it’s our governments, the governments who will try to hold them accountable in the name of accountability, but their basic objective is to basically control dissent in their own country on their platforms.
— Nighat Dad, Digital Rights Foundation

AI-Generated Disinformation Amplifies the Danger

Advances in AI are enabling more sophisticated disinformation tactics, such as deepfakes (synthetic media that replaces a person’s likeness with someone else’s) and cheapfakes (selectively edited or re-contextualized content).

In Pakistan’s recent election, there was a surge of AI-generated audio and video of politicians used as political propaganda, a tactic likely to be replicated in other elections around the world this year.

Platforms and fact-checkers are struggling to keep up with the speed and scale of AI-generated content. Detection tools often require high confidence levels that are difficult to achieve, especially for under-resourced frontline activists.

So when you translate that into the front lines of human rights defense and journalism, we have a real challenge — in access to the tools for detection, access to the skills in this emerging AI era where it’s just super easy to create falsified content and super easy to claim that real is fake.
— Sam Gregory, WITNESS

Empower Front-line Communities

Effective fact-checking and counter-disinformation efforts must go beyond elite institutions and empower local communities to evaluate the credibility of information and narratives for themselves.

The only way our ability to deal with mis- and disinformation is going to be successful in an AI era is by embedding it in community-based communicators.
— Sam Gregory, WITNESS

Solutions must center the voices and experiences of those in the Global South, who are often an afterthought in content moderation systems designed for Western contexts.

[The] Global North also makes mistakes… So if they are making mistakes, admit them and then see how they can bring global majority actors in… We are the ones who face challenges, we are the ones who bring examples. We face on-the-ground realities and we can shape your policy better because we are bringing real examples into the room.
— Nighat Dad, Digital Rights Foundation

Funders should invest in community-based media literacy programs and ensure that human rights defenders have access to the best tools and training to document abuses in this new information environment.

And more and more, the public just want the ability to appraise trust for themselves. They are overwhelmed with information that’s pointing in lots of different directions, and we want to build a world where somebody can say with confidence, “I trust this piece of information and here’s why.”
— Mevan Babakar, Google

A Call to Action: 4 Opportunities to Reclaim Truth

Despite the enormity and complexity of this crisis, the session highlighted several opportunities for promoting information integrity.

  1. Fund research and advocacy organizations working to hold platforms accountable and advance sensible regulation, like algorithmic transparency and liability reforms.
  2. Invest in local, community-driven media literacy and fact-checking initiatives, particularly in the Global South. Ensure these efforts are grounded in the languages, cultures, and lived experiences of those most impacted by online harms.
  3. Support the development of new technologies to detect and counter AI-generated disinformation, with a focus on open-source, accessible tools that can be adapted to different contexts.
  4. For those building new platforms and tools, commit to robust safeguards against abuse and misuse from the start. Design with global equity and human rights principles at the center, not as an afterthought.

Conclusion: Structural Reforms Needed

The panelists — drawing on their extensive experience as activists, fact-checkers, and policy advocates — painted a dire picture of an information ecosystem polluted by industrial-scale disinformation and hate speech. Social media platforms’ algorithms and business models incentivize the spread of harmful content, while policy and regulatory efforts to date have made only marginal progress.

While acknowledging the necessity of stronger platform accountability measures and antitrust actions to curb the outsized power of Big Tech, the speakers also emphasized the need to empower citizens and communities with the media literacy and fact-checking capabilities to discern truth from falsehood. Ultimately, they called for a “whole of society” response that includes governments, tech platforms, civil society, and citizens.

We are going to have to rethink things and that fundamentally means that we can’t just tinker around the edges…
— Imran Ahmed, Center for Countering Digital Hate

Learn More

Watch this session in its entirety and explore additional #SkollWF 2024 sessions on the organization’s YouTube channel.

Follow the featured speakers from this session: Mevan Babakar (Google), Imran Ahmed (Center for Countering Digital Hate), Nighat Dad (Digital Rights Foundation), and Sam Gregory (WITNESS).

Our StartingUpGood team believes that events and conferences are great places to learn, share ideas, and innovate. We are committed to using our innovative tech tools to share key insights and learnings from top conferences, like the Skoll World Forum. This article uses Otter.ai to create transcripts and various LLMs to generate content summaries. All work is hand-checked for quality.

Level Set: Defining Different Types of False or Misleading Information

  • Misinformation refers to false or inaccurate information that is spread unintentionally, often due to a mistake or lack of knowledge. The person sharing the information believes it to be true, but it is not.
  • Disinformation is false or misleading information that is intentionally created and spread to deceive or manipulate.
  • Malinformation is factual information, but the context or timing of sharing the information is intended to cause harm.
  • The differences between these three types of false or misleading information involve intent, harm, and truthfulness.
  • While it’s important to understand their differences, the outcomes of sharing mis-, dis-, and malinformation are often equally harmful, regardless of intent. After all, intent is hard to know or prove.

StartingUpGood supports fresh entrepreneurial approaches to social impact. FOLLOW US on social media.

Check out SDGCounting for the latest news on tracking the progress of the Sustainable Development Goals. #SDGs #GlobalGoals
