How Facebook Tries to Regulate Postings Made by Two Billion People

Berkman Klein Center
Berkman Klein Center Collection
Oct 19, 2017

Berkman Klein Center hosts a day of conversation about reducing harmful speech online and hears from the Facebook executive in charge of platform moderation policies

By David Talbot and Nikki Bourassa

Photo: Jonathan Zittrain and Monika Bickert

Violence at a white nationalist rally in Charlottesville, VA, and recent revelations about the spread of disinformation and divisive messages on social media platforms have increased interest in how these platforms set content policies and then detect and remove material deemed to violate those rules. In the United States, this process is not subject to any regulation or disclosure requirements.

On September 19, the Berkman Klein Center for Internet & Society hosted a public lunch talk with Monika Bickert, the Head of Global Policy Management at Facebook. The public event was followed by a meeting at which members of the Berkman Klein Center community explored broader research questions and topics related to the challenges of keeping tabs on the daily social media interactions of hundreds of millions of people, including whether and how to do so at all.

The day was hosted by the Center’s Harmful Speech Online Project. Questions surrounding the algorithmic management of online content, and how those processes impact media and information quality, are also a core focus of the Center’s Ethics and Governance of AI Initiative.

During the public lunch talk (full video is available here), Jonathan Zittrain, professor of law and computer science at Harvard University and faculty director of the Berkman Klein Center, asked Bickert about Facebook policies today and how they might be designed in the future.

Bickert said 2 billion people use the site, and 1.3 billion use it every day. Eighty-five percent of users live outside the United States and converse in “dozens and dozens of languages,” she said. (Facebook later provided a specific number: 40.) Particularly large user communities thrive in India, Turkey, and Indonesia, she added.

The “policy” in Bickert’s title refers to Facebook’s policies or rules defining what material Facebook prohibits (including hate speech, certain kinds of graphically violent images, and terrorist content). The exact means by which the company judges content have not been made publicly available, but some internal training documents detailing past policies were leaked to The Guardian. The company has issued Community Standards that broadly describe its approach, and Bickert responded to the leak with a public editorial of her own.

If a user or an automated system flags content as violating policies, the content is sent to a human reviewer for a final decision. Facebook says it is in the process of hiring 3,000 new reviewers, which will bring the total number of content reviewers to 7,500. These employees evaluate flagged items by reviewing them against Facebook’s internal policies.
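For readers who think about this workflow in software terms, the process described above (content gets flagged, then a person decides) can be pictured as a simple flag-and-review queue. The sketch below is a minimal illustration with hypothetical names (FlaggedItem, ReviewQueue); it is an assumption for explanatory purposes, not a description of Facebook’s actual system.

```python
from collections import deque
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: a generic flag-then-review queue with hypothetical
# names, not a description of Facebook's real moderation pipeline.

@dataclass
class FlaggedItem:
    content_id: str
    reason: str   # e.g. "hate_speech", "graphic_violence", "terrorism"
    source: str   # "user_report" or "automated_flag"

class ReviewQueue:
    def __init__(self) -> None:
        self._queue = deque()  # items awaiting a human decision

    def flag(self, item: FlaggedItem) -> None:
        """A user report or automated system queues content for human review."""
        self._queue.append(item)

    def next_for_review(self) -> Optional[FlaggedItem]:
        """A human reviewer pulls the next item and judges it against policy."""
        return self._queue.popleft() if self._queue else None

queue = ReviewQueue()
queue.flag(FlaggedItem("post-123", "hate_speech", "user_report"))
print(queue.next_for_review())  # the reviewer then decides: leave up or take down
```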

Bickert called the policies reported by The Guardian a “snapshot in time” because “our policies are always changing.” She added that: “We have a mini ‘legislative session’… every two weeks where we discuss proposed policy changes” with legal, engineering, and company public policy team members present.

“I’ve been in this role just over four years and I would say that in general [Facebook’s policies] have gotten more and more restrictive and that’s true not just at Facebook but for all the large social media companies,” she said.

Zittrain asked: “Given Facebook’s primacy at the moment — it’s a big social network — does it seem right that decisions like this should repose with Facebook in its discretion — it’s a private business; it responds to market; it’s got its policies — or is there some other source that would be almost a relief? It’s like ‘You know what, world? You set the standards, just tell us what to do, dammit, we’ll do it.’”

Bickert stopped short of endorsing an imposition of external standards. But she said the company does reach out to get opinions from experts outside the company, including from a safety advisory board, as the company revises policies and even makes decisions on particular pieces of content. “We are always looking at how we can do this in a more transparent way,” she said.

“In the area of terrorism for instance … we have a group of academics around the world; we have on our team some people who were in the world of counterterrorism,” she said. “It’s very much a conversation with people in the community as we make these decisions.”

A COMMUNITY DISCUSSION

Later in the afternoon, members of the Berkman Klein Center community came together for additional discussion about content moderation broadly, and related questions of hate speech and online harassment. About 80 community members attended, including librarians, technologists, policy researchers, lawyers, students, and academics from a wide range of disciplines.

The afternoon included presentations about specific challenges in content moderation by Desmond Patton, Assistant Professor of Social Work at Columbia University and Fellow at the Berkman Klein Center, and Jenny Korn, an activist-scholar and doctoral candidate at the University of Illinois at Chicago and also a Berkman Klein Fellow.

These researchers explained just how difficult it can be to moderate content when the language and symbols used to convey hate or violent threats evolve in highly idiosyncratic and context-dependent ways.

Patton discussed his work on how social media postings fuel gang violence. He displayed a slide showing a tweet posted in 2014 by Gakirah Barnes, a Chicago teenager and gang member who was shot and killed later that year, when she was only 17 years old.

He later explained that the tweet can be perceived as violent and threatening. The hand emoji makes the point that “anyone can get these hands” (meaning: I’m ready to fight). The poop emoji and crossed-hands emoji before “lackin” mean “this shit is never lacking.” She says she has a gun and communicates “I mean what I say” with the gun and “100” emoji.

Barnes had amassed 2,500 followers and posted 27,000 tweets — many with threats of violence — over her three years on the platform. The meaning behind these tweets eluded Patton and other academics until he investigated. “We had no idea what people were saying online,” he said. “We hired young African American men and women to interpret information from Facebook and Twitter.”

The companies hosting the content are exerting little if any control over it, he said. “Someone might post a picture of someone killed by a rival gang and a few weeks later, people make disrespectful comments or draw things on the picture. When people see the disrespectful content, they make retaliatory comments,” he said. “There is a lot of content around grief and trauma that just sits there on Facebook. Even two years later, three years later, it becomes the thing that triggers another violent event.”

Korn, who spoke after Patton, pointed out that overtly racist statements can be replaced by symbols and linguistic adaptations that are harder for online platforms to detect. Symbols that serve as social cues for a racist ideology include the Confederate flag, but an image of the flag by itself would not be sufficient reason for a content takedown, nor would it be easily or unequivocally identifiable as violent speech.

Context is crucial: the accompanying text may support or critique a racist interpretation of the symbol, and linguistic adaptations are always proliferating. Terms that have cropped up for “white people” include “wypipo” and “DeWhites,” in defiance of automated systems that may be programmed to look for the term “whites” used in the context of hate speech. Relatedly, if automated systems flag messages simply because they contain words like “whites,” that could also lead to deletions of comments by people criticizing white supremacy, not just racist comments.
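To make that failure mode concrete, here is a minimal, purely illustrative sketch of naive keyword matching. The keyword list and example posts are hypothetical and do not reflect any platform’s actual system; the point is that such a filter both misses adapted spellings and flags counter-speech.

```python
import re

# Hypothetical keyword list, for illustration only.
HATE_KEYWORDS = {"whites"}

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any listed keyword as a whole word."""
    words = set(re.findall(r"[a-z']+", post.lower()))
    return bool(words & HATE_KEYWORDS)

posts = [
    "wypipo are at it again",                           # adapted spelling
    "whites should speak out against white supremacy",  # counter-speech
]

for post in posts:
    print(naive_flag(post), "-", post)

# Prints False for the first post (the adapted spelling evades the filter)
# and True for the second (criticism of white supremacy gets flagged),
# illustrating both kinds of error Korn described.
```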

Korn said more research is needed to detect when groups of people start to use common combinations of symbols and language that might constitute hate speech. She also pointed to a greater need for individuals, especially those with power, to take action against hate speech online, including through private messaging and “public upstanding,” meaning posting publicly to confront it. She said companies should find ways to harness social contagion to spread positive behavior and social stigma to discourage hate speech, and to encourage individual actions to counter it.

The afternoon session included discussion on a number of other topics, a small sampling of which appear below.

Mary L. Gray, Senior Researcher at Microsoft Research and a Berkman Klein Fellow, moderated the afternoon discussion and noted that Facebook faces no regulation in determining how to monitor content, and does not reveal full details about the number of people hired to moderate content, their employment conditions, or its precise moderation practices. “Facebook is juggling a lot of content moderation and it has a constantly changing logic for how to do it,” Gray said. She suggested that next steps might include talking with Facebook about experiments in community-based moderation, to do a better job of removing harmful speech online.

Urs Gasser, Executive Director of the Berkman Klein Center, discussed experiences with different laws around the world, including free speech and safe harbor regulation in the United States, the extensive control over online intermediaries practiced by China, and the increasing regulation of social media companies in Europe, and sketched three trajectories for how future governance of hate speech may unfold. “We will have to discuss whether it is a good thing to have a plurality of local and national laws on speech and online platforms. Similarly, if a few powerful platforms come up with corporate-made ‘laws’ in the form of Terms of Service or similar policies that have global reach, is that a good or a bad thing from a public interest perspective?”

Mary Minow, librarian, lawyer, and fellow at the Berkman Klein Center, noted that some Twitter users are suing the Trump administration because they have been blocked from following @realDonaldTrump on Twitter; they regard being blocked by the government as a violation of their First Amendment rights. She asked the participants to consider what the ramifications might be if a government entity, say a city or public library, uses a private company like Facebook to communicate with the public. Facebook has its own policy for removing user comments, yet many of those comments would be protected by the First Amendment if the government were removing them directly.

Susan Benesch, a faculty associate at the Berkman Klein Center and founder of the Dangerous Speech Project, said pressure is rising on Internet platforms to delete content. The new German Network Enforcement Act, which orders platforms to take down any ‘evidently illegal’ material within 24 hours or face a fine of up to 50 million euros, is just one example. Under this pressure, she said, the companies will rely more and more on automated systems that could amount to a huge, secret system of censorship. To avoid this, she argued, we have to come up with mechanisms for oversight consisting of systematic outside review of which content the platforms are taking down, and which they are leaving up.

Andrew Gruen, an affiliate at the Berkman Klein Center, pointed out that underlying the conversation on content moderation practices was the unanswered question of how well anyone actually knows what’s happening on Facebook and other platforms and to what extent the platforms are being transparent. He suggested that research was needed to develop methods forcing private companies to release information, something akin to a Freedom of Information Act request, but applicable to the private sector.

Other speakers discussed the implications of the current debate on Capitol Hill over proposed changes to Section 230 of the U.S. Communications Decency Act. The section eliminates broad categories of “intermediary liability,” ensuring that companies like Facebook are not treated as the “speaker” of harmful content posted by users, and thus cannot be held liable for users’ posts in many instances. A proposed amendment, the “Stop Enabling Sex Traffickers Act” (SESTA), would weaken this protection somewhat, making it illegal for companies to knowingly or with reckless disregard assist in, support, or facilitate sex trafficking. This could lead to far more restrictive content moderation.

Toward the end of the session, Zittrain wondered aloud whether the industry — now dominated by a few large platforms including Google, Facebook, Twitter and certain subsidiaries — had evolved in an optimal way. He invited the community to contemplate how next-generation social media platforms might be designed.

“How would we prefer that services like these be architected? Do we wish that unowned platforms like diaspora had taken off? Or that we should not seek to interfere with the freedom of communication and association among private groups, while those addressing the public at large face some hurdles in getting a message out?” Zittrain’s questions, as well as additional topics discussed in short breakout sessions — including ones on the future of Section 230, the pros and cons of content removal, and international approaches, perspectives and responses — set the stage for future convenings and research.
