Open and Online: About Freedom in Digital Spaces.

Valerie Hafez · Published in WomeninAI · 7 min read · Dec 31, 2020

Co-authored with Rania Wazir on behalf of Women in AI Austria

Three payphones hanging from the wall.
Photo by Pavan Trikutam on Unsplash

Over the past decade, our lives have become increasingly digital. From personal communications to public discourse, from news to general knowledge, most comes from one source — the Internet. This glut of information would, however, be overwhelming, and so we rely on gatekeepers to filter what gets through to us. And yet: these are neither qualified experts, nor impartial intermediaries — but private platforms, operated by a handful of companies that control, for profit, what type of content gets shared or not; what can be said, and what can’t. Through opaque processes and inscrutable algorithms, they regulate the flow of information in our information-driven society.

In the #SAIFE Paper published earlier this year, the OSCE details possible impacts of automated content management practices on freedom of speech and expression and presents a set of recommendations for participating States and internet intermediaries. Women in AI Austria submitted a response to the OSCE’s questionnaire based on the paper, and we have used our response as an opportunity to reflect and present a more detailed view on the subject of AI and freedom of speech and expression.

On social media: filter bubbles, polarization, …

The Internet is too vast and contains too much information for any individual user to comprehend or structure by themselves. Platforms therefore use algorithms to select information deemed relevant and present it to users of their online spaces. When we use platforms, we do not see the logic according to which information is structured: we see our news feeds and search results, we interact with content and with others, we see our online social environment and how it interacts with the online world, and we rarely experience much friction within this structure (unless, of course, we try to switch between platforms). All the while, this structure changes and adapts according to the way we (and our friends!) currently interact in the online space. Every second spent online, every view, every like, every comment and every person we are connected to feeds into a self-learning algorithm’s assessment of what we might find relevant and engaging enough to keep us within the digital ecosystem of the platform provider.

Content management algorithms do not care about expanding your horizons, nor about your human rights. Their objective is to maximize engagement: keep the vast majority hooked and coming back for more. This happens by delivering content that reinforces your world view, or by triggering strong emotions such as anger and fear. Once the algorithm finds your trigger points, it delivers more of the same (see also Lewandowsky et al. 2020). But consider this as well: an algorithm trying to maximize its predictive accuracy has a vested interest in making the user more predictable, and black or white is much easier to predict than shades of grey. Hence the promotion of divisive and polarizing content (see Stuart Russell, “Human Compatible”).
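To make that incentive concrete, here is a minimal, purely illustrative sketch of such a feedback loop; it is not the recommender of any actual platform, and the item names and click probabilities are invented for the example. The loop only learns which items get clicked and shows more of them.

```python
import random

# Hypothetical content pool: the algorithm knows nothing about meaning,
# only about how often each item has been engaged with so far.
items = ["cat video", "outrage headline", "nuanced essay", "conspiracy post"]
clicks = {item: 0 for item in items}
shows = {item: 0 for item in items}

def predicted_engagement(item):
    """Estimated click-through rate from past interactions."""
    return clicks[item] / shows[item] if shows[item] else 0.0

def pick_item(epsilon=0.1):
    """Epsilon-greedy: mostly exploit whatever has engaged users before."""
    if random.random() < epsilon:
        return random.choice(items)               # occasional exploration
    return max(items, key=predicted_engagement)   # otherwise: more of the same

def run(user_clicks, rounds=10_000):
    """Simulate the feedback loop: show an item, observe a click, update."""
    for _ in range(rounds):
        item = pick_item()
        shows[item] += 1
        if user_clicks(item):
            clicks[item] += 1

# Invented user model: emotionally charged content gets clicked more often.
run(lambda item: random.random() < (0.6 if "outrage" in item or "conspiracy" in item else 0.2))
print(sorted(items, key=predicted_engagement, reverse=True))
```

The loop optimizes a single number, clicks; nothing in it represents accuracy, diversity or harm, so whatever correlates with engagement is what gets amplified.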

… and (self) censorship

Particularly on social media, content that targets the usual suspects, women and under-privileged minorities, gets a lot of interactions (see the Amnesty Italy study). In a pernicious feedback cycle, the high interaction rates lead to the promotion of this type of content; and because this type of content gets promoted (and hence seen by more people), it generates even more interactions. This creates a hostile online environment for communities that are already disadvantaged, and leads to many withdrawing from online fora or being bullied out of them.

Even well-intentioned attempts to mitigate this situation through automated filters that identify and remove abusive content have been shown to sometimes exacerbate the problem. For example, research indicates that such filters often falsely label posts in African-American English as offensive, causing the African-American community to be stereotyped as aggressive, silenced through algorithmic censorship, or even overly policed as predictive policing moves into the online domain (Blodgett et al.; Patton et al.). Comments that deal with certain topics (gay rights, feminism, immigration) are also often wrongly tagged as offensive, leading to censorship of such topics (Dixon et al.). Not only do hostile online environments negatively impact disadvantaged members of society; recent publications also conclude that social media has a causal effect on hate crimes (Lewandowsky et al. 2020).
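One way such disparities are surfaced in the research cited above is by comparing error rates across groups or topics. The sketch below uses invented evaluation records (the groups, labels and flags are placeholders, not real study data) to show the kind of per-group false-positive-rate check that studies like Blodgett et al. and Dixon et al. perform at scale.

```python
from collections import defaultdict

# Invented evaluation records: (group_or_topic, truly_offensive, flagged_by_filter)
records = [
    ("AAE",      False, True),
    ("AAE",      False, False),
    ("SAE",      False, False),
    ("SAE",      True,  True),
    ("feminism", False, True),
    ("feminism", True,  True),
]

def false_positive_rates(records):
    """Share of non-offensive posts that the filter wrongly flagged, per group."""
    flagged = defaultdict(int)
    benign = defaultdict(int)
    for group, offensive, was_flagged in records:
        if not offensive:
            benign[group] += 1
            flagged[group] += was_flagged
    return {group: flagged[group] / benign[group] for group in benign}

print(false_positive_rates(records))
# Large gaps between groups indicate that the filter silences some
# communities or topics far more often than others.
```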

A signpost at sunset.
Photo by Javier Allegue Barros on Unsplash

Receiving information

Freedom of speech is a right that entails a conscious choice of engaging with others. In public, we have the choice to reveal our opinions to others, or to keep them concealed if we so wish. But online, every interaction is recorded as data. The data collection that powers personalisation across the Internet is so pervasive that your activities outside the domains of platforms are tracked in order to sell supposedly more efficient advertising. Worse yet, third-party advertising networks sell data generated by your activities on the Internet to unknown parties, depriving you even of the knowledge of whether, and if so for how much, information about you and others is sold. Even two years after the implementation of the GDPR in Europe, Europeans have nearly no insight into who collects or even sells their data for a profit, and to whom (as illustrated recently by journalist Joseph Cox). These problems are further exacerbated by the burden of responsibility imposed on us for those we come into contact with. Tools like Facebook’s social graph draw not only on our data, but also on the activities and interests of our friends and contacts. The way we choose to behave online affects not only how the Internet is personalised for us, but also how it is personalised for those around us, possibly increasing the severity of filter bubbles and the likelihood of self-censorship, and promoting dynamics that contribute to the spread of fake news.

To counter these developments, we think that every user should have more control over which data is generated in their wake. If profiling is used to recommend content, we should be able to use a standardised protocol to tell any requesting party to turn off profiling, to choose which companies may or may not access our data (regardless of how many parties accessed it in between), to know which features are being used to determine our profile and block them if we wish, and to decide which filters may be used for recommending content (e.g. most liked by people with a similar profile, most interactions overall today, most read in my current geographic location). Content that is featured because of profiling should be clearly marked as such, and results of automated decision-making should clearly indicate which data was used to reach the decision. These features need to be implemented by default, because by default we have a right to understand the processes that impact us, and we have the technology to make them understandable.
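No such protocol exists today; the sketch below is only a thought experiment of what a machine-readable preference record along these lines could look like. Every field name is invented for illustration.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ProfilingPreferences:
    """Hypothetical, user-controlled record that a requesting party would have to honour."""
    profiling_enabled: bool = False                  # profiling is off unless the user opts in
    blocked_features: list = field(default_factory=lambda: ["location", "contacts"])
    companies_denied_access: list = field(default_factory=lambda: ["*"])  # deny all by default
    allowed_ranking_filters: list = field(default_factory=lambda: ["most_recent"])
    label_profiled_content: bool = True              # recommended items must be marked as such
    explain_automated_decisions: bool = True         # decisions must list the data they used

# What a platform or advertiser would receive (and be obliged to respect):
print(json.dumps(asdict(ProfilingPreferences()), indent=2))
```

The point is less the exact fields than the defaults: profiling off, access denied, labels and explanations on, unless the user decides otherwise.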

Making space

Public spaces are essential to the functioning of our society and open public spaces are vital for the development of sociality (Latham and Layton 2019), providing important nodes for interaction and exchange. Yet in the digital world, we barely have any truly public space. The most frequently used digital spaces are platforms, owned by profit-oriented corporations and embedded into a digital ecosystem optimised for control over users and competitors (CMA 2020). These platforms have become so important for online interaction that people who choose to opt out of using these platforms are locked out of significant social spaces.

We need open public digital spaces based on universal human rights standards, free of advertising and tracking, fully interoperable and accessible to all users. Just as telecommunications and postal service providers deliver essential services at low cost, we need to extend universal service obligations to platform operators and further the creation of a global, interoperable and open digital public space. Users should not only be able to choose which platform’s content recommendation algorithm to use in this public space, but also understand the codes of conduct that govern interaction: for instance, which behaviour is considered hurtful, why content was flagged or removed, and how to appeal such removals. At the same time, every user’s communication in this digital public space should enjoy a level of protection from state or corporate surveillance equivalent to that of analogue communications. We believe that these measures will counter the chilling effects on freedom of speech often observed on platforms.

Freedom can be used to denote both exclusion (freedom from oversight) and impunity (freedom from consequences). Dominant Internet intermediaries currently enjoy many of these freedoms. Freedom can also mean independence, empowerment and liberty. We think it is important that we as a society shape our institutions. Fundamental rights need to be protected — online and offline.

Our current online environment allows the benefits and profits from (dysfunctional) algorithmic content management to accrue to a small group of people, while the risks are carried by others. But in the words of Deborah Raji, “AI doesn’t work until it works for all of us.”

About the authors

Rania Wazir is a mathematician and data scientist researching offensive speech on social media and fairness in AI systems.

Valerie Hafez is an anthropologist interested in digital systems.

Women in AI is a nonprofit do-tank working towards gender-inclusive AI that benefits global society. Our mission is to increase female representation and participation in AI. We are a community-driven initiative bringing empowerment, knowledge and active collaboration via education, research, events, and blogging.
