Open and Online: On Freedom in Digital Spaces

Valerie Hafez
Dec 31, 2020

Co-authored with Rania Wazir on behalf of Women in AI Austria

Three payphones hanging from the wall.
Photo by Pavan Trikutam on Unsplash

Over the past decade, our lives have become increasingly digital. From personal communications to public discourse, from news to general knowledge, most of it comes from one source: the Internet. This glut of information would be overwhelming on its own, so we rely on gatekeepers to filter what gets through to us. And yet these gatekeepers are neither qualified experts nor impartial intermediaries, but private platforms, operated by a handful of companies that control, for profit, what content gets shared and what does not; what can be said, and what cannot. Through opaque processes and inscrutable algorithms, they regulate the flow of information in our information-driven society.

In the #SAIFE Paper published earlier this year, the OSCE details possible impacts of automated content management practices on freedom of speech and expression and presents a set of recommendations for Member States and internet intermediaries. Women in AI Austria submitted a response to the OSCE’s questionnaire on the paper, and we have used that response as an opportunity to reflect and to present a more detailed view of AI and freedom of speech and expression.

On social media: filter bubbles, polarization, …

Content management algorithms care neither about expanding your horizons nor about your human rights. Their objective is to maximize engagement: keep the vast majority hooked and coming back for more. They do this by delivering content that reinforces your world view, or by triggering strong emotions such as anger and fear. Once an algorithm finds your trigger points, it delivers more of the same (see also Lewandowsky et al. 2020). But consider this as well: an algorithm trying to maximize its predictive accuracy has a vested interest in making the user more predictable, and black or white is much easier to predict than shades of grey. Hence the promotion of divisive and polarizing content (see Stuart Russell, “Human Compatible”).
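To make that incentive concrete, here is a toy sketch of an engagement-maximizing selector. It is our own illustration, not any platform’s actual system, and the engagement numbers are invented; we simply assume that divisive content engages more often and let a standard epsilon-greedy optimizer learn from that:

```python
import random

random.seed(42)

# Assumed average engagement probability per content category. These numbers
# are invented for illustration; no platform publishes such figures.
ENGAGEMENT_PROB = {"balanced news": 0.05, "hobby posts": 0.10, "outrage bait": 0.30}

counts = {c: 0 for c in ENGAGEMENT_PROB}
engagements = {c: 0 for c in ENGAGEMENT_PROB}

def estimated_engagement(category):
    # Untried categories score infinity so each gets sampled at least once.
    if counts[category] == 0:
        return float("inf")
    return engagements[category] / counts[category]

def pick(epsilon=0.1):
    # Epsilon-greedy: occasionally explore, otherwise exploit the category
    # with the best engagement estimate so far.
    if random.random() < epsilon:
        return random.choice(list(ENGAGEMENT_PROB))
    return max(ENGAGEMENT_PROB, key=estimated_engagement)

for _ in range(10_000):
    category = pick()
    counts[category] += 1
    engagements[category] += random.random() < ENGAGEMENT_PROB[category]

for category, shown in counts.items():
    print(f"{category:>13}: shown {shown:5d} times")
```

Under these assumptions, “outrage bait” ends up dominating the feed. Nothing in the loop asks whether the user is better informed or better off; the only signal is engagement, which is exactly the dynamic described above.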

… and (self) censorship

Even well-intentioned attempts to mitigate this situation with automated filters that identify and remove abusive content have been shown to sometimes exacerbate the problem. Research indicates that such filters often falsely label posts in African-American English as offensive, causing the African-American community to be stereotyped as aggressive, silenced through algorithmic censorship, or even overly policed as predictive policing moves into the online domain (Blodgett et al., Patton et al.). Comments that deal with certain topics (gay rights, feminism, immigration) are also often wrongly tagged as offensive, leading to censorship of those topics (Dixon et al.). Not only do hostile online environments negatively impact disadvantaged members of society; recent publications also conclude that social media has a causal effect on hate crimes (Lewandowsky et al. 2020).
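The mechanism behind these false positives can be shown with a deliberately naive sketch. Suppose a shallow filter has absorbed word weights from training data in which identity terms frequently co-occur with abuse; the weights below are our invention and come from no real system:

```python
# Invented word weights, standing in for what a shallow classifier might
# learn when identity terms co-occur with abuse in its training data.
LEARNED_TOXIC_WEIGHTS = {
    "idiot": 0.9,
    "hate": 0.6,
    "gay": 0.5,       # spurious weight picked up from abusive training examples
    "feminism": 0.5,  # likewise
}
THRESHOLD = 0.5

def toxicity_score(text: str) -> float:
    # Bag-of-words scoring: sum the learned weight of every token.
    return sum(LEARNED_TOXIC_WEIGHTS.get(word, 0.0) for word in text.lower().split())

for comment in [
    "you are an idiot",              # true positive
    "i am gay and proud of it",      # false positive: benign and on-topic
    "feminism won women the vote",   # false positive: benign and on-topic
]:
    verdict = "FLAGGED" if toxicity_score(comment) >= THRESHOLD else "ok"
    print(f"{verdict:>7} | {comment}")
```

The filter never models meaning; it only registers that certain words were frequent in abusive training examples. That is the failure mode documented by Dixon et al., and it is why the mere mention of a topic or identity can be enough to get a benign comment removed.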

A signpost at sunset.
Photo by Javier Allegue Barros on Unsplash

Receiving information

To counter these developments, we think every user should have more control over the data generated in their wake. If profiling is used to recommend content, we should be able to use a standardised protocol to instruct any requesting party to turn off profiling; to choose which companies may or may not access our data (regardless of how many parties handled it in between); to know which features are used to build our profile, with the option of blocking them; and to decide which filters may be used for recommending content (e.g. most liked by people with a similar profile, most interactions overall today, most read in my current geographic location). Content that is featured because of profiling should be clearly marked as such, and results of automated decision-making should clearly indicate which data was used to reach the decision. These features need to be implemented by default, because by default we have a right to understand the processes that impact us, and we have the technology to make them understandable.
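No such standardised protocol exists today; the sketch below is purely hypothetical, and every field name in it is our invention. It is only meant to show how compact the machine-readable core of such preferences could be:

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class ProfilingPreferences:
    profiling_enabled: bool = False                # off unless the user opts in
    allowed_processors: list = field(default_factory=list)   # who may access our data
    blocked_features: list = field(default_factory=list)     # features barred from our profile
    recommendation_filter: str = "most_read_in_my_location"  # user-chosen ranking rule

prefs = ProfilingPreferences(
    profiling_enabled=True,
    allowed_processors=["news.example"],
    blocked_features=["inferred_political_views", "precise_location"],
)

# The serialised form any requesting party would receive and have to honour.
print(json.dumps(asdict(prefs), indent=2))
```

Marking profiled content and disclosing the data behind each automated decision would then amount to echoing such fields back alongside every recommendation.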

Making space

We need open public digital spaces based on universal human rights standards, free of advertising and tracking, fully interoperable and accessible to all users. Just as telecommunications and postal service providers deliver essential services at low cost, we need to extend universal service obligations to platform operators and further the creation of a global, interoperable and open digital public space. Users should not only be able to choose which platform’s content recommendation algorithm to use in this public space, but also understand the codes of conduct applicable to interaction: which behaviour is considered hurtful, why content was flagged or removed, and how to appeal such removals. At the same time, every user’s communication in this digital public space should enjoy a level of protection from state or corporate surveillance equivalent to that of analogue communications. We believe that these measures will counter the chilling effects on freedom of speech often observed on platforms.
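What would letting users choose the recommendation algorithm require, technically? At minimum, a shared interface that any provider’s ranker can implement. The sketch below is hypothetical and ours alone; it only illustrates the idea of a feed that treats the ranking algorithm as a user-swappable component:

```python
from typing import Iterable, List, Protocol

class Recommender(Protocol):
    """Interface any provider's ranking algorithm would implement."""
    def rank(self, posts: Iterable[str]) -> List[str]: ...

class ChronologicalRecommender:
    # A user-auditable baseline: newest first, assuming an ISO-date prefix.
    def rank(self, posts):
        return sorted(posts, reverse=True)

class Feed:
    def __init__(self, recommender: Recommender):
        self.recommender = recommender  # chosen by the user, swappable at any time

    def show(self, posts):
        for post in self.recommender.rank(posts):
            print(post)

Feed(ChronologicalRecommender()).show(
    ["2020-12-01 post A", "2020-12-31 post B"]
)
```

Interoperability then reduces to agreeing on such interfaces, and on the publicly documented codes of conduct that govern what flows through them.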

Freedom can be used to denote both exclusion (freedom from oversight) and impunity (freedom from consequences). Dominant Internet intermediaries currently enjoy many of these freedoms. Freedom can also mean independence, empowerment and liberty. We think it is important that we as a society shape our institutions. Fundamental rights need to be protected — online and offline.

Our current online environment allows the benefits and profits from (dysfunctional) algorithmic content management to accrue to a small set of people, while the risks are carried by others. But in the words of Deborah Raji, “AI doesn’t work until it works for all of us.”

About the authors

Valerie Hafez is an anthropologist interested in digital systems.

Women in AI is a nonprofit do-tank working towards gender-inclusive AI that benefits global society. Our mission is to increase female representation and participation in AI. We are a community-driven initiative bringing empowerment, knowledge and active collaboration via education, research, events, and blogging.
