7 ways Facebook’s reporting system is failing its community standards, and its users

Rua M. Williams
8 min read · Dec 28, 2016


Over the years, there have been several accounts of Facebook’s reporting system being exploited by cyber bullies to harass and silence marginalized voices. In the past few weeks, I’ve witnessed it happen over and over to friends of mine, over comments that seemed completely benign: standard fare for social justice advocates. So I began running experiments to better understand the nature of the system. What I found was deeply upsetting.

1. The review process operates without context

If I were to write a thread on Facebook, “What’s your favorite thing to put in soup?” and you responded with the obvious, “crackers!”, then I, or anyone else, could report that comment for hate speech, and it would be removed 100% of the time, possibly resulting in a temporary ban for you.

This is exactly the kind of entrapment that has landed me and many of my friends in Facebook jail this holiday season.

Friends have been banned for talking about soup, Christmas crackers (a British holiday tradition), and, unbelievably, Cracker Barrel Country Store.

That’s right. I was banned for 3 days for sharing my love of down home cookin’ and kitschy shopping with my internet friends.

2. Hate speech reviews are dependent on keywords, and the glossary is inequitable

At first, all of the comments being reported in my friends’ groups, and resulting in removal and/or banning, contained variants of white identifiers. This led frustrated friends to wonder: how do we talk about white people without calling them white people? The obvious tongue-in-cheek answer was, of course, cracker. Before you get all flustered, you should know three things: 1) I am white; 2) cracker is a pejorative, but not a slur; 3) no one was actually being called a cracker.

This pattern of whiteness being a forbidden topic of discussion was deeply distressing to my friends, especially those of color. We could all recall times when we had reported genuinely hateful comments - racist, xenophobic, homophobic, and transphobic comments - and Facebook responded with a very patronizing “there there, did you know you could block people?”

In my tests, cracker was removed every time, regardless of context. Words that are pejorative only in context, like monkey, queer, and poof, were not removed by the reporting system, even after repeated reports by multiple individuals.

Of particular personal concern to me: the R word was never removed, even when used explicitly as a slur against a disabled person.
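To make the failure mode concrete, here is a minimal sketch of what context-free keyword moderation looks like in code. This is not Facebook’s actual system; the word list, the function, and the removal logic are my own assumptions, reverse-engineered only from the behavior described above:

```python
# A minimal sketch of context-free keyword moderation, matching the
# behavior observed above. The word list and the logic are assumptions
# made for illustration; Facebook's actual implementation is not public.

FLAGGED_KEYWORDS = {"cracker", "crackers"}  # hypothetical glossary entries

def review_report(comment: str) -> str:
    """Simulate a review that sees only the reported comment,
    never the thread around it."""
    words = {w.strip(".,!?\"'").lower() for w in comment.split()}
    if words & FLAGGED_KEYWORDS:
        return "REMOVE"  # keyword match: content removed, possible ban
    return "KEEP"        # no keyword match: report dismissed

# All three of these get removed, though none is hate speech:
for comment in [
    "Crackers!",                              # answering a soup thread
    "We pulled Christmas crackers today.",    # a British holiday tradition
    "Lunch at Cracker Barrel Country Store",  # kitschy shopping
]:
    print(review_report(comment))  # -> REMOVE every time

# Meanwhile, a word that is pejorative only in context has no glossary
# entry, so the report is dismissed no matter how it was used:
print(review_report("what a poof"))  # -> KEEP
```

Whatever the real glossary contains, the point stands: any review that reduces a comment to a bag of words, with no thread, no speaker, and no target, will ban soup lovers and wave slurs through.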

3. Review results change with repetition

It’s impossible to tell from the outside whether this means the system depends on volume to judge something as offensive, or if individual human reviewers make different rulings. Either option is unacceptable, really. And both scenarios have the same impact. The reporting system is ripe for abuse by cyber bullies.

Not only can they trap you into posting a keyword and then report you for it, or comb through your posts for something innocuous that contains a keyword out of context; they can also gang up on their victims, reporting innocent comments until, presumably, the system sides with the reporters or a human reviewer makes an assumption in their favor.

I haven’t yet tested unequivocally benign statements against volume reporting. One would hope “I can haz cheezeburger” would be immune. However, the impact on social justice advocacy is clearly illustrated here and here.
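Here is a sketch of what a volume-triggered flip looks like. Again, this is an assumption-laden illustration, not Facebook’s code; the `REPORT_THRESHOLD` and the `ReportedComment` class are hypothetical stand-ins for whatever actually drives the change in rulings:

```python
# A minimal sketch of the volume effect described above: the content
# never changes, but the ruling flips once enough reports pile up. The
# threshold is a hypothetical stand-in; whether the real system uses an
# automated volume trigger or inconsistent human reviewers is not
# observable from the outside.

REPORT_THRESHOLD = 3  # assumed number of reports that flips the outcome

class ReportedComment:
    def __init__(self, text: str):
        self.text = text
        self.report_count = 0

    def report(self) -> str:
        """File one report and return the review result."""
        self.report_count += 1
        if self.report_count >= REPORT_THRESHOLD:
            return "REMOVED"      # same text, different ruling
        return "NO VIOLATION"     # early reports are dismissed

# A small group of attackers reporting one innocuous comment:
comment = ReportedComment("an innocuous comment with no keywords at all")
for _ in range(4):
    print(comment.report())
# -> NO VIOLATION, NO VIOLATION, REMOVED, REMOVED
```

Whether the flip comes from a counter or from reviewer roulette, the attacker’s strategy is the same: keep reporting until the answer changes.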

4. Swearing makes you vulnerable

Many of the comments landing my friends in Facebook jail contained swear words. In searching for a Facebook policy on swearing, I came across a Snopes review of a claim that Facebook did indeed have an anti-swearing policy. Snopes evaluated the claim as false.

However, this comment got my friend banned for 3 days.

That swearing by itself ever results in a ban is absurd. If anything deserves a “there there, you can block the mean person” response, it’s a report complaining about cussin’. Not only is swearing a natural, beneficial, universal human behavior, but to delete content and ban users for it is the height of tone policing. I honestly can’t decide between simply saying, “It’s rather unhelpful, don’t you think?” and “It’s fucking oppressive.”

5. There's no accountability

Then there are the irritating flaws: a deep navigation tree just to file a report, an obscure support inbox location, and no way to see the reviewed content on the same page as the review results. Worse, when a review result does change, there’s no evidence to that effect.

Along with many friends, I personally reported “Heterosexuals Inspiring Pride” 7 times for displaying anti-LGBTQ rhetoric and hate speech. Each time, the review came back supporting the page’s compliance with community standards. Some time later, the page was finally taken down, and now every support ticket in my inbox claims the page was taken down, as if that had been the review result from the very start.

The entire system is so unpredictable and inconsistent, it’s difficult to determine how to censor yourself when you are being targeted by cyber bullies. The temporary ban lengths just keep stacking and stacking, with no way to come up for air and defend yourself against the onslaught.

There is no channel of communication a victim of this cyber bullying can use to defend themselves. Targets have to wait out their bans and then retreat from the places they were being victimized in. In this manner, more than any other, Facebook fails to defend its users against cyber bullying, and in fact maintains and cultivates an unsafe environment.

This will come as no surprise to victims in the Trans community who have had their profiles attacked for being “fake” for years. That Facebook seems to have learned nothing about protecting its users from such vicious bullying has brought many to believe that Facebook just doesn’t care. I’m choosing to hope instead that it’s simply a matter of bad design. Bad design is easy to fix. Systemic -ism, as we all know so acutely, is much more difficult to dismantle.

6. It's exploitable by cyber bullies

I think I've already made my case on this point above. But let's clarify it.

A person interested in discussion and persuasion will not report something offensive. They will point out its offensiveness and attempt to educate the guilty party. Reporting in the system as presently designed is an act of aggression. It’s a retaliatory move intended to shut down the people who don’t agree with you. The irreversible removal of content is an act of silencing. There is no way for users participating in a discussion to see that content was removed, or what it was, so there’s no way for them to witness or practice discourse and education.

A person interested in silencing, and even eventually erasing, someone’s digital persona is easily aided by the flaws in the reporting system. A person’s account can be taken out by a single cyber bully, or a small collection of attackers, within a very short period of time. They need only find a few keywords and report them; context is irrelevant. Then they can trap the target into repeating those keywords, and report those too. Next, alone or with the help of others, they can repeatedly report some choice content or the profile itself. Repetition will eventually lead to victory. The target will never have the opportunity to defend themselves to Facebook. For the victims, the prospects are utterly hopeless.

7. It doesn't support social justice advocacy

Facebook's community standards on hate speech seem poised to support social justice advocacy.

“People can use Facebook to challenge ideas, institutions, and practices. Such discussion can promote debate and greater understanding.”

And yet it is exactly these discussions that are being clotheslined by fragile white folks, who exploit the system’s inability to reconcile social justice advocates’ need to name whiteness and implicit bias as the foundation of institutionalized oppression with its desire to provide “equal” racial protection.

In recent weeks, it has been impossible for my friends to speak about institutionalized racism, white fragility (an academically recognized phenomenon), and the dangerous normalization of white supremacy. Every time they try, their content is reported. And because the system has been shown to remove content combining race identifiers with negative traits (in this case, white with fragile or racist) on the first or second report, usually with an immediate temporary ban, these discussions are effectively and infuriatingly shut down, cut short, and erased.

It is literally impossible to “challenge ideas, institutions, and practices” which contribute to systemic white supremacy without using this language. And yet Facebook does not protect us.

But when white people with implicit or explicit biases use coded language to malign, attack, abuse, and spread hatred for people of color, members of the LGBTQ community, or disabled persons, the system is either unable to detect this violent dehumanization, or the humans reviewing the reports are complicit in it.

I hope that these points have helped you to see the grievous flaws in Facebook's content reporting system. At best, the current system design can be judged to be a mere pacification. It presents the veneer of community support without the substance. At worst, it is an arm of white supremacy, a tool of cyber bullies, and a bludgeon of oppression.

Facebook, I believe you can do better. Do better.

Edit: the women who shape my feminist perspective are peppered throughout this piece in hyperlinks, multiple times. But I need to be sure I explicitly show you some of them. I work to shed my implicit bias daily: for them, for their children, and for mine.

@BrittBrownMarsh

Not Your Asian

Sun Light, Moon Rising

Roaring Gold

See also: “How to Redesign Facebook’s Reporting System in 7 Steps” @StarFeuri https://medium.com/@StarFeuri/how-to-redesign-facebooks-reporting-system-in-7-steps-a7045ed68f0f

Edit: some of the links to experiment posts on my wall have been set back from public to friends-only, after I was banned for comments there once the investigation was completed. I will be moving the content to another account, but it will take some time.
