Wikimedia Foundation Policy

Content licensed under Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) unless otherwise noted.

We Need Human-Centered AI to Safeguard Information and Freedom of Speech Online

4 min read · Aug 25, 2025


An illustration of Wikimedia volunteer contributors collaborating on Wikipedia … literally. Image by Giulia Forsythe, CC0 1.0, via Wikimedia Commons.

Written by the Wikimedia Foundation’s Amalia Toledo, Lead Policy Specialist for Latin America and the Caribbean, and Ricky Gaines, Human Rights Policy and Advocacy Lead.

The rapid evolution of artificial intelligence (AI), especially generative AI, presents both significant potential benefits and considerable challenges for fundamental human rights such as freedom of expression and access to reliable information. At the Wikimedia Foundation, we are deeply engaged in understanding and addressing these complexities. Our global volunteer community champions open knowledge and human-verified content for the benefit of all. This unique, community-led model is crucial to ensuring the high quality and reliability of information on Wikipedia and other Wikimedia projects, which have become substantial sources for training generative AI, including large language models (LLMs).

Our concerns and experience with these emerging technologies led us to participate in a public consultation by international Freedom of Expression Mandate Holders.* This consultation aimed to inform a Joint Declaration on AI and Freedom of Expression by the mandate holders. Our submission outlined key risks and opportunities that AI represents for freedom of expression, alongside recommendations for establishing a human rights-centric AI ecosystem.

Our contribution is informed by a recently completed human rights impact assessment (HRIA) that the Foundation commissioned in 2024. This HRIA sought to better understand risks and opportunities related to AI and machine learning in the Wikimedia ecosystem. A version of this report — redacted in order to safeguard the volunteer community — will be published soon.

What follows are some of the key takeaways from our submission on AI governance.

Key Risks: Bias, Opacity, and Information Overload

We believe that the design, development, and deployment of AI could compromise a healthy online information ecosystem, and directly challenge the founding principles that guide the creation and curation of encyclopedic content on Wikipedia and other Wikimedia projects. The risks we highlighted include AI:

  • Amplifying Societal Biases. AI can perpetuate and amplify social prejudices when trained on biased datasets.
  • Eroding Public Trust. A lack of transparency in proprietary AI models makes it difficult to trust their outputs or trace the source of information.
  • Polluting the Information Ecosystem. The online information ecosystem is at risk of being overwhelmed when AI-generated slop and errors become widespread in training data, thereby creating a damaging feedback loop that produces even more biases and inaccurate content.
  • Undermining Human Judgment. Overrelying on AI can have a harmful effect, decreasing direct engagement with and participation in knowledge creation systems, and undermining the essential human work of information verification.

Key Opportunities: Enhancing Access to Information

Despite these challenges, we also believe AI holds great potential to advance freedom of expression if strong safeguards are in place. Based on the Foundation’s AI strategy, which aims to leverage AI to support human volunteers and users, we have identified that AI can:

  • Break down language barriers and promote multilingualism to increase access to information globally.
  • Enhance the efficiency of creating and curating high-quality public interest content by automating repetitive tasks that require little human oversight.
  • Improve information discoverability and accessibility through user-centric adaptation.
  • Foster greater data analysis and insights for journalists and academics.

This potential can be realized if we design, develop, and deploy AI to enhance human judgment and creativity, focusing on tools that support people instead of replacing them. It is crucial to ensure that human agency remains central to knowledge creation and curation. We also need a firm commitment to developing and using open-source AI technologies to ensure transparency and public oversight. Furthermore, empowering people to use AI tools critically is fundamental.

Key Recommendations: A Human-Centered Approach to AI Governance

To ensure AI empowers human agency and supports a healthy online information ecosystem, we urged Mandate Holders to recommend that states and policymakers focus on:

  • Protecting human creators, including journalists, academics, researchers, and Wikimedia volunteers, who are crucial to a verifiable information ecosystem.
  • Safeguarding public interest platforms like the Wikimedia projects from burdensome regulations that could hinder their ability to provide high-quality and reliable knowledge.
  • Promoting proportional AI regulation that implements strict oversight for high-risk AI while supporting beneficial, lower-risk uses that enhance human-driven content creation, as supported by the Foundation’s AI strategy.
  • Supporting open AI and ethical data access through policies that promote open-source AI models and transparent, ethical access to publicly funded data.
  • Boosting digital and informational literacy through educational programs for all ages that equip people to critically evaluate online information and understand the importance of verifiable sources.

At the Wikimedia Foundation, we believe a human-centered, open, and responsible approach to AI design, development, and deployment is essential. This will allow us to harness the potential of AI and machine learning while protecting human rights, the integrity of our shared knowledge, and the spirit of collaboration that our AI strategy aims to strengthen.

For more details, read our full submission here.

* The Freedom of Expression Mandate Holders include the United Nations Special Rapporteur on Freedom of Opinion and Expression, the Organization for Security and Co-operation in Europe (OSCE) Representative on Freedom of the Media, the Organization of American States (OAS) Office of the Special Rapporteur for Freedom of Expression, and the African Commission on Human and Peoples’ Rights (ACHPR) Special Rapporteur on Freedom of Expression and Access to Information. Collectively, they form a group of experts dedicated to studying and shaping global policy on matters related to the fundamental human right of freedom of expression.

___

Stay informed on digital policy, Wikipedia, and the future of the internet: Subscribe to our quarterly Global Advocacy newsletter! 📩

Stories by the Wikimedia Foundation's Global Advocacy team.
