Beyond Content Moderation

Benji Xie

--

Addressing online polarization and hate speech will require platform designs and regulation that foster new forms of online governance and engage nations from the Global South.

Last week, I spent the day attending a conference on “Beyond Moderation: How can we use technology to de-escalate political conflict?” This workshop was organized by Diana Acosta Navas (Loyola Chicago/Stanford) and Ting-an Lin (Stanford) with support from the McCoy Center for Ethics in Society. Diana started us off by situating content moderation as a mechanism of manipulation (e.g. by governments to justify conflict), one that social media and generative AI have only exacerbated. Ting then connected content moderation with continuous modern warfare waged through weapons such as cyberattacks and disinformation campaigns.

Photo of large conference room with people sitting around a dozen round tables. At the front of the room is a large screen that reads “political and global impact” with five panelists seated under it.
The conference brought together philosophers, policy experts, and technologists from around the world.

Keynote: Political Machines (Colin Megill)

Colin Megill standing at a podium and pointing at a screen with a news article projected.
Colin Megill of Pol.is.

Colin Megill (Pol.is) gave the keynote address on Political Machines. Pol.is is open source infrastructure used by governments and community organizations to gather and make meaning of public perspectives. It was born during a time of political unrest. Colin described applying machine learning for deliberative democracy to resist dichotomous discussions and instead foster a “high dimensionality” of public discourse. Pol.is made new methods of online and offline data collection possible (e.g. government workers driving randomly around a region to simulate random sampling). He described the use of Pol.is by governments around the world to understand public perspectives, with the higher-dimensional complexity helping surface nuanced positions. Colin also described the opportunities and risks of integrating new AI tools such as automatic text summarization (a collaboration with Anthropic).
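To make the “high dimensionality” idea concrete, here is a minimal sketch (using a toy vote matrix, not Pol.is’s actual implementation) of how a participant-by-statement matrix of agrees and disagrees can be projected into a low-dimensional opinion space and clustered into groups, rather than collapsed into a single pro/anti split:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Illustrative participant-by-statement vote matrix (values are made up):
# +1 = agree, -1 = disagree, 0 = pass/unseen.
votes = np.array([
    [ 1,  1, -1,  0,  1],
    [ 1,  1, -1, -1,  1],
    [-1, -1,  1,  1,  0],
    [-1,  0,  1,  1, -1],
    [ 1, -1,  0,  1,  1],
    [ 0, -1,  1,  1,  1],
])

# Project participants into a two-dimensional opinion space.
coords = PCA(n_components=2).fit_transform(votes)

# Cluster participants into opinion groups (k = 3 is an arbitrary choice here).
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coords)

# Statements with positive support in every group are candidates for consensus.
for s in range(votes.shape[1]):
    group_means = [votes[groups == g, s].mean() for g in np.unique(groups)]
    if all(m > 0 for m in group_means):
        print(f"Statement {s} draws support across all opinion groups")
```

The point of the sketch is that the structure of disagreement (several distinct opinion groups, plus the statements they all support) is preserved instead of being flattened into one up/down tally.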

Colin also described his work on Birdwatch, now Community Notes, at Twitter (work that spanned before and after Twitter’s acquisition by Musk). Community Notes uses an algorithm to combat misinformation, fact-checking tweets from everyone from Musk himself to the White House. The algorithm and data are all open source, enabling researchers to explore them. The innovation space of public interest technology is massive! Modeling the public has advanced computing, with Colin dating this back to machines from centuries ago that tallied census results faster. He raised the question of who controls the models of public opinion.
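Since the Community Notes algorithm and data are open, it is worth sketching the core “bridging” idea: a note is scored not by its raw count of helpful votes but by the helpfulness that remains after accounting for raters’ viewpoints. Below is a heavily simplified matrix-factorization sketch of that idea, with toy data and made-up hyperparameters, not the production implementation:

```python
import numpy as np

# Toy ratings: (user_id, note_id, rating) with 1 = helpful, 0 = not helpful.
ratings = [(0, 0, 1), (1, 0, 1), (2, 0, 1), (3, 0, 1),   # note 0: broadly helpful
           (0, 1, 1), (1, 1, 1), (2, 1, 0), (3, 1, 0)]   # note 1: one-sided
n_users, n_notes, dim, lam, lr = 4, 2, 1, 0.1, 0.05

rng = np.random.default_rng(0)
user_f = rng.normal(0, 0.1, (n_users, dim))  # user viewpoint factors
note_f = rng.normal(0, 0.1, (n_notes, dim))  # note viewpoint factors
user_b = np.zeros(n_users)                   # user intercepts
note_b = np.zeros(n_notes)                   # note intercepts ("bridged" helpfulness)
mu = 0.0                                     # global intercept

for _ in range(2000):
    for u, n, r in ratings:
        pred = mu + user_b[u] + note_b[n] + user_f[u] @ note_f[n]
        err = pred - r
        # One stochastic gradient step with L2 regularization.
        user_f[u] -= lr * (err * note_f[n] + lam * user_f[u])
        note_f[n] -= lr * (err * user_f[u] + lam * note_f[n])
        user_b[u] -= lr * (err + lam * user_b[u])
        note_b[n] -= lr * (err + lam * note_b[n])
        mu -= lr * err

# A note rated helpful by users with differing viewpoint factors keeps a high
# intercept; a note rated helpful by only one "side" is explained by the
# viewpoint factors instead, so its intercept stays lower.
print("note intercepts:", note_b)
```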

Panel 1: Ethical and Societal Perspectives

The first panel investigated ethical and societal perspectives on using content moderation to de-escalate global tensions. Johanna Rodehau-Noack (Stanford) moderated. The panelists included Renée DiResta (Stanford Internet Observatory), who investigates how platform design interacts with user and crowd behavior; Thunghong Lin (Stanford; Academia Sinica), who investigates regime types and online disinformation; and Jonathan Stray (Berkeley Center for Human-Compatible AI), who investigates the effects of recommender systems on polarization and well-being.

Johanna Rodehau-Noack sitting at a table with Renée DiResta, Jonathan Stray and Thunghong Lin. Renée is speaking.
Johanna Rodehau-Noack moderated a panel with Renée DiResta, Jonathan Stray, and Thunghong Lin on ethical and societal perspectives of online platforms.

Panelists began by presenting their work and perspectives. Renée identified a “crisis of legitimacy” with content moderation: concerns about unaccountable private power (e.g. bias and cultural competency, censorship, and who decides the tradeoffs between maximizing free expression and minimizing harm, and how) and manufactured controversies (bad-faith interpretations of specific mistakes, misplaced concerns about “censorship”). She spoke to educational and design efforts to address these challenges, such as alerting users to potential violations prior to posting. Thunghong Lin identified how AI tools have the potential to catalyze state-run disinformation campaigns and censorship. He described studies relating censorship to autocracy and the suppression of collective civil action. Jonathan Stray discussed bridging work between peace-builders and technologists in international conflicts. Jonathan emphasized that there are different types of conflict, and not all are necessarily “bad” (e.g. shifting from violent to non-violent conflict). He described work on using Large Language Models (LLMs) to contribute non-polarizing notes and on re-allocating attention to improve long-term changes to user behavior (e.g. positive outcomes with online news).

Panelists then answered questions related to legitimacy, moderation tools, and impacts on human behavior. Drift of users from platforms with stricter content moderation to those with looser policies is a known phenomenon that is challenging to study. Tools of content moderation were framed as “remove, reduce, inform.” Massive de-platforming can backfire: after Twitter removed 60,000 accounts following the January 6th, 2021 insurrection in the US, many of those accounts gained popularity on other platforms such as Telegram, and then returned to Twitter/X with much larger followings after Musk granted them amnesty.

Person sitting in front of a laptop next to other people holding a microphone and asking a question.
Panelists fielded questions from the audience.

Panel 2: Technological Opportunities

In the second panel, Veronica Rivera (Stanford) moderated a panel including Susan Benesch (Harvard Berkman Klein Center and Dangerous Speech Project), Amy X. Zhang (UW), Michael Bernstein (Stanford), and Deepti Doshi (New_Public).

Veronica Rivera, Michael Bernstein, Amy Zhang, Deepti Doshi, and Susan Benesch sitting at a table with a screen over them that reads “Panel 2: technological opportunities”
Veronica Rivera moderated a panel with Michael Bernstein, Amy Zhang, Deepti Doshi, and Susan Benesch on technological opportunities with online platforms.

Susan Benesch discussed the cumulative risk of content to motivate political conflict. She called out the narrow fixation on content posted during a political conflict and the lack of attention to the cumulative effect of content leading up to the conflict. The analogy she gave was of online content as “drips of petrol (gas)” falling continually on some spaces and some people, so that a future spark can cause a great fire. An example she gave was of elected US representatives taking family Christmas photos wielding firearms, lowering the barrier to enacting future violence. This kind of “virtue talk” invokes notions of “honor” or religious doctrine and often implies or justifies violence without directly mentioning it, differentiating it from hate speech and dangerous speech.

Amy Zhang discussed her research on decentralized and polycentric approaches to social media governance. She considered the design of tools that can help diverse communities and users govern online communities, guided by design principles of flexibility, power (to respond at scale), being comprehensible, controllable, and easy to use, and balancing privacy and anonymity with accountability. For example, Amy developed FilterBuddy, a tool that helps users combat online harassment in their comments, and PolicyKit, a toolkit for online communities to encode and carry out flexible governance procedures (a sketch of the general idea follows below). When considering whether content customization increases polarization (“filter bubbles”), Amy challenged the currently popular idea of putting random people with different perspectives into a room to yell at each other.
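To illustrate the flavor of these tools, here is a minimal sketch of author-controlled comment filtering. The class names, fields, and rules are hypothetical and are not FilterBuddy’s or PolicyKit’s actual APIs; the point is that a creator, not the platform, authors and reuses the rules:

```python
import re
from dataclasses import dataclass, field

@dataclass
class FilterRule:
    """A single author-defined rule: a label plus phrases/patterns to match."""
    label: str
    phrases: list = field(default_factory=list)   # exact words or phrases
    patterns: list = field(default_factory=list)  # regular expressions

    def matches(self, comment: str) -> bool:
        text = comment.lower()
        return (any(p.lower() in text for p in self.phrases)
                or any(re.search(p, text) for p in self.patterns))

@dataclass
class CommentFilter:
    """A creator's collection of rules; matched comments are held for review."""
    rules: list = field(default_factory=list)

    def triage(self, comments: list) -> dict:
        held, visible = [], []
        for c in comments:
            labels = [r.label for r in self.rules if r.matches(c)]
            (held if labels else visible).append((c, labels))
        return {"held": held, "visible": visible}

# Hypothetical usage: a creator authors rules once and reuses them across posts.
f = CommentFilter(rules=[FilterRule("insult", phrases=["idiot"]),
                         FilterRule("spam", patterns=[r"https?://\S+"])])
print(f.triage(["great video!", "you idiot", "buy now http://spam.example"]))
```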

Michael Bernstein discussed encoding societal values into social media AIs (“tuning our algorithmic amplifiers”). He challenged the current approach that pits individual liberties against democratic participation and spoke to a vision of a library of societal values to encode into AI systems. In collaboration with social science and policy researchers, Michael and his colleagues explored embedding social objective functions to mitigate partisan animosity. His longer-term vision is to create a library of societal values tied to communities and cultures, build methods to optimize their deployment into online communities, and then conduct global, longitudinal evaluations.
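As a rough illustration of what a social objective function could look like inside a ranker, the sketch below re-scores candidate posts by trading off predicted engagement against an estimated partisan-animosity score. The field names, scores, and weight are illustrative assumptions, not the actual models from that work:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    engagement: float   # predicted engagement (e.g. click/reply probability)
    animosity: float    # estimated partisan-animosity score in [0, 1]

def rank_feed(posts, animosity_weight=2.0):
    """Rank by predicted engagement minus a penalty for partisan animosity.

    animosity_weight encodes how strongly the societal value is weighted
    against short-term engagement; it is an illustrative parameter.
    """
    def objective(p: Post) -> float:
        return p.engagement - animosity_weight * p.animosity
    return sorted(posts, key=objective, reverse=True)

feed = [Post("a", engagement=0.9, animosity=0.8),   # engaging but hostile
        Post("b", engagement=0.6, animosity=0.1),
        Post("c", engagement=0.5, animosity=0.0)]
print([p.post_id for p in rank_feed(feed)])  # ['c', 'b', 'a'] with this weight
```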

Deepti Doshi (New_Public) discussed the design of online social spaces. Just as the physical structure of a space shapes in-person interactions, Deepti thought about how the design of digital spaces influences online interactions. She focused on supporting the community stewards who make pro-social interactions happen. For example, she described her local public library and her kid’s relationship to it; it was the librarian who made the space feel alive. She considered aligning incentives, federating small spaces (with defined norms and interoperability), and supporting community stewardship.

Veronica Rivera, Michael Bernstein, Amy Zhang, Deepti Doshi, and Susan Benesch sitting at a table smiling and laughing.
Redesigning online platforms can be fun.

Panelists answered questions about nuancing dangerous speech (e.g. “No justice, no peace” increases the probability of violence, but perhaps justifiable violence) and about shared group identity and how speech shifts individual or collective perspectives. Panelists also acknowledged the impossibility of expecting a single organization to moderate global communities, and how that expectation leads to homogenization. Furthermore, a panelist emphasized that it is not just the design of AI tools but also the design of online spaces that determines behavior in online communities.

Panel 3: Political and Global Impact

Avshalom Schwartz (Stanford) moderated a panel on the political and global impact of content moderation. Panelists included Niousha Roshani (The Black Entrepreneurs Club), Oladeji M. Tiamiyu (University of Denver Law School), Ravi Iyer (USC), and Marietje Schaake (Stanford Cyber Policy Center and HAI).

Niousha Roshani discussed the power of narratives, emphasizing narratives that states in the Global North define for states in the Global South (e.g. AI policymakers in the Global North defining a vision for the future of Kenya). She described innovations led by Global South nations, including South Africa, Iran, and Colombia, to design technologies that report harmful narratives and de-escalate conflict. Crucial to this is centering economic justice. The central question Niousha posed is how we can afford to have only select institutions and people in the US design technology for the rest of the world, when we could instead engage the knowledge of the broader world.

Oladeji M. Tiamiyu is a Professor of Law who researches alternative dispute resolution. He described his work with Gambia’s Truth Commission, speaking to the internal tension that comes with increased access and quality engagement. He drew connections between content moderation and conflict resolution, both online and in person, and to online dispute resolution, such as e-commerce’s early challenges with trust and the quality of products sold. He called into question the role of “justice” (justice for whom; justice for what) in online spaces and at what threshold non-state services can act as a sufficient form of justice. As an example, he questioned justice for whom and for what with regard to Facebook’s content moderation teams.

Ravi Iyer (USC Neely Center for Ethical Leadership and Decision Making) described the Design Code for Social Media that he and his team defined. Ravi focuses on platform design changes to reduce hate speech. He differentiated fear speech from hate speech and described how both can lead to dangerous speech. Dangerous speech is much more about context than about message (e.g. there is no way to moderate content with the same policies across Arabic- and Hebrew-speaking communities). As an example of his design approach, he described eliciting users’ desired content while recognizing how elicited responses vary from actual responses.

Marietje Schaake (Stanford Cyber Policy Center and HAI) compared US and European election monitoring. She framed US election monitoring as very content-focused, whereas European monitoring takes a more holistic approach with data on preferences, browsing history, demographics, etc. By understanding data profiles, European nations could create regulations against targeted micro-advertising based on overly specific data profiles. She called for more discussion of how data profiles could inform more specific content moderation.

Panelists answered questions on whether content moderation can scale as AI makes troll farms easier to run and to operate across languages. When asked about the impact of Chinese state use of AI, panelists considered monitoring the impact of China’s AI-informed “Smart Courts,” given the backlog of legal cases in many nations around the world. Another emphasized that there is no evidence that US companies building generative AI focus on democratic principles, but that regulation may enable more alignment with democratic principles. Panelists discussed how platforms deciding not to make decisions (e.g. Facebook deciding not to regulate speech and communities) is a decision in itself. A panelist concluded on how global governance requires consideration of what justice means and for whom. Part of justice is accountability, so they saw hope in open source AI models and online platforms.

Prof. Leif Wenar closed the conference by calling for a plurality of responses to heterogeneous content.

Concluding Thoughts

Seven professionally dressed people standing inside and looking at the camera and smiling.
The people at the McCoy Center for Ethics in Society who made this event happen!

Twitter was once considered the “global town square.” Deepti Doshi called out the contradiction in such a term. I think it reflects a broader naïveté in expecting that a few social media companies, built in a select region of the world with a certain set of values, can reasonably moderate content from global users when content is broadcast to a broad audience by default. Instead, this conference called for engagement with members from the Global South to rethink the values, technology, roles, and policies that go into designing online platforms. By doing so, we can reimagine platforms that consider context when balancing individual autonomy and democratic principles!

Following the Chatham House Rule, I leave ambiguous which panelist said what when answering questions. All photos by Benjamin Xie and can be shared for commercial and non-commercial purposes with attribution.

--
