Deep Dive: Australia’s Online Safety Expectations

The first deep dive in our series on online safety bills.

Parliament House in Canberra, Australia, during blue hour. Image by Thennicke, CC BY-SA 4.0, via Wikimedia Commons.

Written by the Wikimedia Foundation's Rachel Arinii Judhistari (Rjudhistari), Lead Public Policy Specialist for Asia, and Franziska Putz, Movement Advocacy Community Manager.

Governments around the world are drafting bills and passing laws to improve online safety and hold technology platforms accountable for harmful content that is spread on their websites. The Wikimedia Foundation's Global Advocacy and Public Policy team is looking at some of these new laws and their implications for free knowledge and community-governed online projects. In this second blog post in our series (you can read the introduction here), we review the Australian Basic Online Safety Expectations (BOSE), on which we submitted a comment to the Australian Government's consultation. We outline how the government's expectations of internet platforms currently neither sufficiently protect user privacy nor promote a diverse ecosystem of online platforms.

The Australian Basic Online Safety Expectations

The Online Safety Act 2021 came into effect on 23 January 2022. It grants the Minister for Communications, Urban Infrastructure, Cities and the Arts the power to articulate these BOSE. In these, the Australian Government outlined its expectations for social media services, relevant electronic services, and designated internet services. Like other online safety regulations in the United Kingdom, Germany, and elsewhere, the Australian initiative is intended to prevent online harms — including cyber-bullying, cyber-abuse, and the rapid spread of extremist content — as well as protect vulnerable individuals such as women, children, and underrepresented groups.

We welcome efforts to create an internet that is safe for all to access and enjoy. However, as expressed in a previous blog post, the aspiration of safety should not threaten, but rather promote, the rich diversity of websites, communities, and governance structures that shape content online. In addition, any internet regulation should comply with international human rights standards, especially those for privacy and freedom of expression. The BOSE put these values, along with the decentralized, community-driven content moderation practices of collaborative platforms like Wikipedia, at risk: both proactive monitoring and reduced privacy and anonymity can hurt free knowledge projects.

Our submission on BOSE reminds the Australian government that regulations of online spaces need to strike a balance that facilitates everyone’s full participation in the digital global economy. Here is what is at stake:

  • Strict one-size-fits-all content moderation requirements can disproportionately impact smaller, not-for-profit platforms.
  • Mandates for rapid content detection can encourage the use of automatic filters, which could lead to over-censorship.
  • Requirements to collect, store, and share personal information of contributors, including those with anonymous accounts, can discourage online participation.

I. Proactive monitoring will hurt free and open collaborative projects.

The BOSE's disregard for the content moderation mechanisms that already exist on smaller platforms also shows in the expectation that services take proactive measures to minimize illegal or harmful content or activities. The requirements and reasonable steps outlined in Expectations 6 and 8 are especially concerning for the Wikimedia Foundation.

Expectation 6: That platforms develop and implement “processes to detect, moderate, report and remove (as applicable) material or activity on the service that is or may be unlawful or harmful.”

Expectation 8: “If the service uses encryption, the provider of the service will take reasonable steps to develop and implement processes to detect and address material or activity on the service that is or may be unlawful or harmful.”

To meet these expectations, the Foundation would have to intervene in the content moderation processes of the Wikimedia communities. More generally, the expectation to identify and address all harmful content across the entirety of a platform may push hosting providers to deploy automated tools. This could lead to over-censorship, in which legitimate or valuable content is removed. Overall, these expectations cannot be met by platforms with community-governed content moderation processes like the Wikimedia projects. In addition, these requirements could produce a regulatory system that is ineffective and that threatens the existence of people-powered spaces like an online encyclopedia.

These BOSE focus on large social media companies that have the financial means and technical systems in place to monitor content with automated detection systems. In fact, many such companies already proclaim the effectiveness of their automated systems. If public interest platforms are expected to detect and address material using such systems, they will be pushed into a negative feedback cycle with three elements:

  1. Platforms could be forced to abandon preexisting and effective community-led content moderation systems and replace them with automated processes.
  2. These automated tools, which research shows can be biased and unable to capture the contextual nuances of human speech, will likely be less effective at moderating content than community-led processes (see the sketch after this list). Smaller platforms may also have to spend disproportionately more than large for-profit actors to implement these tools, which are often proprietary, off-the-shelf, and less effective, diverting resources away from other important services.
  3. As a result, such systems can fail to catch content that should be removed while over-removing appropriate content. Smaller platforms, having diminished both their financial resources and their community-based moderation tools just to meet the requirements, would be left trying to solve a problem that did not exist before the regulation.
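To make the over-removal concern concrete, below is a deliberately simplified sketch of a context-blind keyword filter. It is a hypothetical illustration only, not any platform's actual moderation system; the blocklist and example sentences are invented for demonstration.

```python
# Hypothetical, minimal example of why context-blind automated filtering
# misclassifies content in both directions. Not a real moderation system.

FLAGGED_TERMS = {"attack", "bomb", "massacre"}  # invented blocklist


def naive_filter(text: str) -> bool:
    """Return True if a keyword-only filter would remove this text."""
    words = {word.strip(".,!?").lower() for word in text.split()}
    return bool(words & FLAGGED_TERMS)


# An encyclopedic sentence about history is flagged for removal
# (a false positive) ...
print(naive_filter("The article describes the 1941 attack on Pearl Harbor."))  # True

# ... while abusive content phrased without blocklisted words passes
# untouched (a false negative).
print(naive_filter("Here is how to hurt someone and get away with it."))  # False
```

A volunteer editor reading the surrounding article and talk page would resolve both cases correctly; a keyword match cannot, which is why replacing community review with such tools tends to remove legitimate material while missing genuinely harmful content.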

II. Privacy and anonymity are cornerstones of a vibrant internet.

As an organization dedicated to a vision of the web as an inclusive space shaped by people from around the world, the Foundation is committed to actively promoting wide and equitable participation. Our work is built on the notion that investing in and protecting a culture of privacy is essential to championing free knowledge and sustaining freedom of expression. The logic is simple: storing personally identifiable information can imperil the privacy of Wikipedia editors, and can discourage some individuals from contributing in the first place. This is especially likely for those who wish to edit sensitive content or contribute to topics for which they might be harassed or abused, online or offline.

For these reasons, the Foundation collects very little information about both readers and contributors to Wikimedia projects, and it facilitates and encourages pseudonymous contributions. We do not track people across the internet, display ads, or sell user data. The information we do collect is stored only for a short period of time and does not identify contributors.

Pseudonymity and encryption are equally essential to maintaining a culture of privacy and open participation, since they protect contributors and create a place of safety. The ability to access Wikipedia privately allows vulnerable populations to participate in and benefit from open knowledge sharing, including by connecting with others or securely accessing information on health and other sensitive topics.

While there is a possibility of misuse of anonymous accounts, Wikipedia’s Sockpuppetry policy addresses anonymous abusive conduct without compromising protections of users’ rights. The policy embodies the Wikimedia movement’s belief that online safety should not force us to give up on privacy and security.

The expectation of “verifying the identity or ownership” of anonymous accounts is an alarming demand for public policy that is meant to make the internet safer and more inclusive. Requiring hosts of knowledge platforms to obtain and store personal information about contributors would be detrimental to the makeup of Australia’s online civic spaces. Groups that already experience a disproportionate amount of abuse and safety threats online would be left even more exposed to these harms. At best, the message that regulators are sending to Australia’s Aboriginal and Torres Strait Islander people, LGBTQ+ population, and female athletes, among others, is that they have forgotten to consider the reality of what these groups face online every day. At worst, regulators are signaling that these groups’ participation in Australian online society is unimportant or simply unwelcome.

Australians have come to rely on Wikipedia to look up information about the world. In April 2022, for example, pages on Wikipedia were viewed 244 million times by users in Australia. We are confident that Wikipedia’s status as a top visited website in the country is a reflection of the success of this collaborative knowledge project — something created by Australians, for all Australians, and everyone else across all geographies.

As we have written elsewhere, online safety should not require silencing the volunteers who make websites like Wikipedia work; rather, it should empower all internet users with the resources they need to participate and ensure they have a good understanding of both the risks and the protections available online.

Promoting the representation of diverse perspectives in online conversations about knowledge is key to creating the culture of inclusive, human-first security we are seeking for a vibrant internet.

Australian policymakers need to consider the diversity of services on the internet that would be affected by enforcing strict rules for platforms. These BOSE contain overly prescriptive content identification, removal, and enforcement expectations that target the business models of large, centralized, for-profit internet platforms. However, not all internet platforms are built to captivate user attention through targeted advertising. At the same time, the threats to encryption and privacy practices proposed in these expectations would disproportionately expose historically underserved groups to online harm. The voices of women, members of the LGBTQ+ community, Indigenous people, and people of color would recede from our online ecosystem.

As we will emphasize throughout our series, the Australian BOSE are not unique in this regard. Rather, policymakers around the world are introducing laws to promote safety online. We encourage them to seize these opportunities to ensure that everyone can safely contribute to community-governed projects like Wikipedia.
