What Do We Owe to the Internet’s “First Responders”?

Experts share perspectives on the ethics and legality of how social platforms moderate content

by Daniel Dennis Jones

Worldwide, nearly 15,000 people are employed in the daily duty of keeping Facebook “clean” for the site’s 2.32 billion users.

Part of the growing content moderation industry, workers on these teams are tasked, often through outsourcing, with scrubbing social networks like Facebook, Twitter, and YouTube of posts that might violate any number of legal or community standards, from pornography and violence to copyrighted content to certain kinds of political speech.

A recent screening of The Cleaners at Harvard Divinity School co-sponsored by Harvard’s Berkman Klein Center for Internet & Society and Digital HKS shed light on the working lives of these content moderators.

The Cleaners trailer

Content moderation is low-paid work, but for those in countries that have become moderation hubs, like the Philippines, it is marginally better compensated and less physically strenuous than other available forms of lower-skilled labor.

“Given the essentiality of these jobs to make the sharing of content free and fast, where should the obligations fall?” Lily Hu, a Ph.D. candidate in Applied Mathematics at Harvard and a Berkman Klein Fellow, asked during a panel following the film. “There is no free lunch. Entitlements on one end always require people on the other end.”

“It is our content, our world, that they are navigating for us,” panel moderator Mary L. Gray, a Senior Researcher at Microsoft Research, said.

Gray interviewed hundreds of people employed in content moderation and other remote, task-based jobs for her book Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Though the film draws a metaphor of content moderators as janitors or trash pickers, Gray noted that many of the moderators she interviewed considered themselves more like firefighters or emergency crews, “first responders in cleaning up the Internet.”

(from left to right) Mary L. Gray, Greg Epstein, Lily Hu, Mo Safdari, and Salome Viljoen

To reduce the role of human beings in moderating harmful content, Facebook CEO Mark Zuckerberg has frequently invoked artificial intelligence. Experts in machine learning, however, say the technology has a long way to go.

For one, image and video recognition systems cannot capture, and then act on, the highly complex and context-specific understandings of what media content is and is not permissible. Although machine learning methods have improved image processing, these systems still cannot reliably recognize offending content like illicit pornography or copyright-protected media. Algorithmic systems continue to generate an unacceptable rate of false positives and negatives that then have to be reviewed by human moderators.

“It’s like the old Artificial Intelligence ‘Wizard of Oz trick,’” said Hu of AI promises made by platform CEOs, “where users believe that a feature is built on AI techniques, but it’s actually just human beings doing the work in the back.”
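To make that division of labor concrete, here is a minimal sketch of a human-in-the-loop review pipeline of the kind the panelists described. The classifier, labels, and thresholds are hypothetical and illustrative only, not any platform’s actual system; the point is simply that automated action is reserved for high-confidence predictions, and everything uncertain lands in a human queue.

```python
# Minimal human-in-the-loop moderation sketch (illustrative only).
# The classifier, labels, and thresholds are hypothetical; real platforms
# use far more complex models and policies.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str


def classify(post: Post) -> tuple[str, float]:
    """Stand-in for an ML model: returns a predicted violation label
    and the model's confidence that the post violates it."""
    # A real system would run image, video, and text models here.
    return ("violent_content", 0.62)


AUTO_REMOVE_THRESHOLD = 0.95  # act automatically only when very confident
AUTO_ALLOW_THRESHOLD = 0.05


def route(post: Post) -> str:
    label, confidence = classify(post)
    if confidence >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"   # high-confidence violation
    if confidence <= AUTO_ALLOW_THRESHOLD:
        return "auto_allow"    # high-confidence clean post
    # Everything in between (the false-positive / false-negative zone)
    # falls to human moderators, which is where most of the work lands.
    return "human_review"


if __name__ == "__main__":
    print(route(Post("p1", "example post")))  # -> "human_review"
```

The wider the gray zone between the two thresholds, the more posts end up in front of human reviewers, which is one way to read the “Wizard of Oz” remark: the automation is often a front for a very large human queue.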

AI also has trouble keeping up with the increasingly nuanced and context-dependent guidelines platforms develop to keep bad content off their sites.

“The more complex the policy, the more complex the algorithm will need to be,” said panelist Mo Safdari, a former Program Manager for Election Integrity at Facebook who spent six years working across teams focused on abuse mitigation, product quality, and civic engagement.

Facebook’s community guidelines are modified dozens of times a year with ever greater levels of specificity. In March 2019, responding to the harassment of mass shooting survivors by conspiracy theorists, Facebook added a new line of policy targeting “content about a violent tragedy, or victims of violent tragedies that include claims that a violent tragedy did not occur.”

How would a machine learning system be trained to recognize whether a user is making such a statement in satire or in sincerity? When is a post a political statement, and when is it incitement to harassment or violence? Context and tone are exactly the things AI systems are hardest to train for.
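As a rough illustration of why each added clause in a written policy makes the automation harder, consider this deliberately simplified sketch. The rules, field names, and exceptions are hypothetical, not Facebook’s actual guidelines; every “unless” below stands in for a context or tone judgment that a human can often make at a glance but a classifier must somehow be trained to detect.

```python
# Toy policy checker illustrating "the more complex the policy, the more
# complex the algorithm" (hypothetical rules, for illustration only).

from dataclasses import dataclass


@dataclass
class Post:
    text: str
    contains_nudity: bool = False
    is_fine_art: bool = False         # context signal a model must infer
    references_tragedy: bool = False
    denies_tragedy_occurred: bool = False
    is_satire: bool = False           # tone signal that is very hard to infer


def violates_policy(post: Post) -> bool:
    # Rule 1: nudity is banned, unless it appears in fine art.
    if post.contains_nudity and not post.is_fine_art:
        return True
    # Rule 2: claims that a violent tragedy did not occur are banned,
    # unless the post is clearly satire.
    if (post.references_tragedy
            and post.denies_tragedy_occurred
            and not post.is_satire):
        return True
    return False


# Each exception adds another branch the automated system has to get right,
# with errors possible on either side of every branch.
print(violates_policy(Post("...", contains_nudity=True, is_fine_art=True)))  # False
```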

Until AI catches up, human moderators will continue to sift through thousands of illicit or toxic posts, work that can leave lasting and serious psychological harm. A recent profile of Facebook moderators by The Verge’s Casey Newton highlighted one contractor “who developed a belief that 9/11 was not a terrorist attack, sleeps with a gun, and was diagnosed with PTSD and generalized anxiety disorder” after moderating radicalizing content on the platform.

Similarly, as in the case of the moderators of self-harm content cited in the film, behaviors and ideas spread through social media can be quite literally contagious. “Exposure to suicide increases the likelihood of suicide,” noted Greg Epstein, Humanist Chaplain at Harvard and MIT. “Social networks need to spend more money on mental health for moderators.”

“These are unregulated workplaces that can be incredibly unhealthy if not properly supported and recognized as valuable work,” Gray added, calling for the development of guidelines to help support workers exposed to harmful content. The Occupational Safety and Health Administration (OSHA) already regulates workplace conditions to reduce harm; a similar body could develop standards for the content moderation industry.

Safdari encouraged Facebook to build on its work to democratize decision-making, for example the “Facebook Supreme Court” initiative it began promoting publicly in 2018. The film highlights tensions between the desire for looser content moderation (for example, of nudity in fine art or boundary-pushing political speech) and stricter moderation. For news-gathering organizations, like those tracking the Syrian conflict, or for political activists challenging those in power, content moderation presents a nearly existential threat. Safdari proposed that if users had a stronger voice in how platforms make rules, there would be a greater sense of legitimacy. “Could decision-making be democratized so that we are making these choices together?”

Berkman Klein Fellow Salome Viljoen cited the passage of the 1996 Communications Decency Act (CDA) as a pivotal moment leading to the social media platforms we know today. Section 230 of the CDA “protects platforms against laws that would hold them liable for content [posted by third parties]” (with a few exceptions, including content related to child pornography and, since the passage of FOSTA and SESTA in 2018, some content related to sex work).

Without this protection from legal liability, social media companies would have had less incentive to be open to all kinds of content. “If platforms had been on the hook since 1996, their technical systems would likely have been developed very differently,” Viljoen said. “Technology like live-streaming might not even exist.”

Epstein noted how the ‘move fast and break things’ approach has generated an unsustainable amount of content and some very real harms to both platform users and their moderators.

“Maybe Facebook, YouTube, and similar networks are just bad ideas, and after a decade or so of experimentation, we can think about turning them off.”

This panel discussion was part of the Berkman Klein Center’s event series and was co-sponsored by Digital HKS and hosted by students at Harvard Divinity School and fellows at the Berkman Klein Center for Internet & Society at Harvard University, with support from Harvard Divinity School. Special thanks to Joe Pinto of Harvard Divinity School.

Learn more about this and all of the Berkman Klein Center’s events

Berkman Klein Center Collection

Insights from the Berkman Klein community about how technology affects our lives (Opinions expressed reflect the beliefs of individual authors and not the Berkman Klein Center as an institution.)
