How to Redesign Facebook’s Reporting System in 7 Steps

Previously, I laid out the ways in which Facebook’s reporting system fails its own community standards and leaves its users vulnerable to cyberbullying.

Examples of such bullying can be found on Britt.’s blog, NotYourAsian’s Tumblr, and Sun Lit, Moon Rising’s blog.

Here, I propose a redesign of the system to prioritize cyberbullying prevention and the ethical handling of Internet hate crimes.

Step 1: Throw it in the Trash

You need to abandon the idea that algorithms can help you detect hate speech. As currently written, they clearly aren’t working.

Judgments made by algorithms should always be evaluated by a conscientious human reviewer. In the current system, it’s ambiguous whether a human is involved in any part of the process. Reported-content reviews have broad, sweeping impacts, and for the system to be accountable for those impacts, the review process needs a human face. Support tickets need to feel like an authentic QA experience, not a Skynet-esque overseer.
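
To make that concrete, here is a minimal sketch of what “always a human in the loop” could look like. The names (`Report`, `ReviewQueue`, `algorithmic_verdict`, and so on) are my own illustrative assumptions, not anything Facebook actually exposes: the algorithm may triage, but it can never close a report on its own, and every resolution carries a human reviewer’s name.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Verdict(Enum):
    IN_VIOLATION = auto()
    NOT_IN_VIOLATION = auto()


@dataclass
class Report:
    report_id: str
    content: str
    algorithmic_verdict: Verdict   # a triage suggestion only, never a final answer


class ReviewQueue:
    """Every report ends up in front of a named human reviewer."""

    def __init__(self):
        self.pending: list[Report] = []

    def submit(self, report: Report) -> None:
        # The algorithm's verdict travels along as context;
        # it cannot close the ticket by itself.
        self.pending.append(report)

    def resolve(self, report: Report, reviewer_name: str, verdict: Verdict) -> dict:
        # The resolution carries a human name, so the reporter sees
        # an accountable person rather than an opaque system.
        self.pending.remove(report)
        return {
            "report_id": report.report_id,
            "reviewed_by": reviewer_name,
            "verdict": verdict.name,
        }
```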

Step 2: Make Cyberbullying Prevention Your Primary Objective

Facebook has a precarious history of reticence when it comes to stepping into interpersonal relationships on its platform. This is likely the primary motivation behind the language repeated throughout its reporting system, which encourages you to simply block the users who are abusing, harassing, or otherwise aggravating you. This is a flawed position for two key reasons.

Firstly, it puts the onus on the abused to protect themselves. In this way, the reporting system repeatedly victim-blames and gaslights its users. When someone with a marginalized identity is attacked by another user, attempts to report the attacker for hate speech repeatedly come back “not in violation”. This is invalidating, and the system’s suggestion that you simply block your abuser ignores a very powerful truth: if I block my abuser, they are still free to abuse others. That makes me complicit in the damage they do, and it is why victims of cyberbullying so often refuse to use the block feature.

This brings me to the second flaw in the “just block them” position: it creates silos. I personally approve of any victim who wishes to block their abuser. However, the language in Facebook’s support tickets encourages blocking for even minor annoyances. This proliferation of blinders creates the hyper-confirmation-bias networks that have led to the anguish-filled social phenomena of the post-Brexit UK and the post-election US, to name a few. For more about why these bubbles are bad, read here and here.

Cyberbullying isn’t just for kids. Adults participate in online abuse culture too.

I personally define cyberbullying as the use of electronic media to either directly or indirectly target another user or user group in order to belittle, shame, torment, or isolate that target.

Direct actions include sharing private or sensitive content without permission, using unwanted, persistent, and hostile communication, and directing slurs at the target, such as those which dehumanize race, ethnicity, gender, or sexual orientation.

Indirect actions include targeting victims through exploitable reporting systems, hacking account passwords, and turning a user’s support network against them through private gossip and slander in order to isolate them from their peers.
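
As a rough illustration of how that definition could be encoded in a reporting system, the direct and indirect actions might be modeled as explicit report reasons rather than one catch-all bucket. The category names below are my own, not Facebook’s:

```python
from enum import Enum


class DirectAction(Enum):
    SHARED_PRIVATE_CONTENT = "Shared private or sensitive content without permission"
    HOSTILE_COMMUNICATION = "Unwanted, persistent, and hostile communication"
    SLURS = "Slurs dehumanizing race, ethnicity, gender, or sexual orientation"


class IndirectAction(Enum):
    REPORT_SYSTEM_ABUSE = "Targeting a victim through the reporting system itself"
    ACCOUNT_HACKING = "Hacking account passwords"
    ISOLATION = "Turning a support network against the victim via gossip and slander"
```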

If Facebook wants to make a mark in cyberbullying prevention, it should turn its eyes toward cyberbully rehabilitation. Provide flagged users with resources for their own mental health and well-being. Require completion of an anti-bullying course in order to regain account access. Any child psychologist can tell you time-outs don’t work. So why are you putting grown adults in time-out and expecting any impact?
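
A minimal sketch of that gate, assuming a hypothetical `Account` record: access is restored only once the rehabilitation requirement has actually been met, not once an arbitrary time-out expires.

```python
from dataclasses import dataclass


@dataclass
class Account:
    user_id: str
    flagged_for_bullying: bool = False
    completed_antibullying_course: bool = False
    wellbeing_resources_sent: bool = False


def can_regain_access(account: Account) -> bool:
    # A time-out alone does nothing; access comes back only after
    # the anti-bullying course has been completed.
    if not account.flagged_for_bullying:
        return True
    return account.completed_antibullying_course
```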

Step 3: Make Your Hate Speech Definition Equitable

Facebook’s community standards present a rather vague definition of hate speech:

Facebook removes hate speech, which includes content that directly attacks people based on their:
Race,
Ethnicity,
National origin,
Religious affiliation,
Sexual orientation,
Sex, gender, or gender identity, or
Serious disabilities or diseases.

Facebook does slightly better than the legal definition, which does not explicitly acknowledge gender identity, or even ethnicity:

Speech that is intended to insult, offend, or intimidate a person because of some trait (as race, religion, sexual orientation, national origin, or disability)

However, certain embellishments in Facebook’s definition actually harm its application to protecting marginalized identities. What does “directly attacks” mean? If the target of offensive content isn’t “in the room”, so to speak, does it not count? If an asshole spits in a forest, and no one is around to be spat on, does it make a sound? What is a serious disability? Judging by the ineffectual results of reports against the use of the R word, it would seem Facebook does not believe cognitive disability is serious. Closing these holes in dedicated protection for marginalized voices is critical to creating an inclusive and healthy community.

The current system has been shown to protect white supremacist rhetoric over social justice advocacy with overwhelming regularity. If Facebook wants to survive this era of social justice awakening, the platform needs to explicitly and confidently declare its refusal to accommodate white supremacists.

Reverse racism is not a thing. Content criticizing the white establishment, discussing white privilege, and acknowledging institutionalized racism must be protected and amplified. Actual prejudice and predatory behavior against white individuals will be covered by cyberbullying prevention. White supremacy has no place in the mainstream and should not be tolerated.

If Facebook isn’t a part of this stand against the rise of neo-Nazism, it will become an arm of it. People with marginalized identities won’t even have to lead an exodus themselves. The system is currently designed to leave the platform a white supremacist wasteland in the wake of coordinated and systematic abuse of people of color, transgender individuals, and disabled persons.

Step 4: Implicit Bias Testing and Awareness Training for All Reviewers

Any person involved in reviewing or evaluating reports of hate speech or cyberbullying must be made aware of their implicit biases and associations, and must be provided with tools and resources to help them make decisions outside the influence of those biases.

Implicit bias is a symptom of institutionalized vilification in our society’s common narratives. Having implicit biases does not make you a bad person, but acting on them can cause you to make dangerous and damaging decisions.

It’s embarrassing that Facebook emerged from the same institution as Project Implicit, and yet the social network seems to take no cues from research in social psychology in order to build a healthy online community.

Step 5: No More Dirty Deletes

The only person protected by content removal is the person who posted it. When Facebook erases content without a trace of its existence, the person who posted the abusive content is freed from any evidence of their abusive behavior. Facebook is also freed from evidence of having deleted content it shouldn’t have, as in the case of social justice advocacy being routinely flagged as violating standards.

Rather than deleting content, put it behind a wall. Allow users to click through a warning label to see the content and the poster, holding users accountable for their abusive actions. Allow the community to participate in the review process by permitting them to send you a note about whether they agree the barrier should be in place. Hold yourself accountable to your community for your decisions.
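
One way to sketch the “wall instead of deletion” idea, with field names that are illustrative assumptions rather than any real Facebook schema: the content stays attributed to its poster behind a warning label, and the community can register whether they agree with the barrier.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class WalledContent:
    content_id: str
    poster_id: str            # the poster stays attached to their own words
    body: str
    warning_label: str        # e.g. "Flagged as hate speech. Click to view."
    community_feedback: list[str] = field(default_factory=list)

    def view(self, acknowledged_warning: bool) -> Optional[str]:
        # Nothing is silently erased; readers click through the warning
        # and can see exactly what was flagged, and who posted it.
        return self.body if acknowledged_warning else None

    def add_feedback(self, note: str) -> None:
        # Community members can tell reviewers whether they agree
        # the barrier should be in place.
        self.community_feedback.append(note)
```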

Step 6: No Excommunication Without Representation

No more authoritarian sentencing to solitary confinement. Users who would be banned for reported content deserve to defend themselves against wrongful punishment. No fewer than two human reviewers must agree that the content was abusive and that the defendant was in the wrong. The reviewers must have the full context of the content posted. No more bans for “cracker” on posts about soup.
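
A sketch of that two-reviewer rule, assuming a simple hypothetical `ReviewerVerdict` record: no ban is justified unless at least two distinct humans, each shown the full context, agree the content was abusive.

```python
from dataclasses import dataclass


@dataclass
class ReviewerVerdict:
    reviewer_id: str
    saw_full_context: bool   # the whole thread, not an isolated word
    found_abusive: bool


def ban_is_justified(verdicts: list[ReviewerVerdict]) -> bool:
    # At least two different human reviewers, each with full context,
    # must independently agree before any ban is issued.
    agreeing = {
        v.reviewer_id
        for v in verdicts
        if v.saw_full_context and v.found_abusive
    }
    return len(agreeing) >= 2
```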

Prominently display the ban schedule, with concrete, actionable steps to roll back to first-time-offender status. Victims of wrongful reporting should not be subject to the exponential banishment slope. Provide a one-touch option for saving an account’s image data to an off-network location. Don’t make it hard for victims to protect their memories.
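
The published schedule itself could be as simple as a table plus a rollback rule. The durations below are placeholders I made up for illustration, not a proposal for specific numbers:

```python
# Hypothetical published ban schedule; durations are placeholders.
BAN_SCHEDULE_DAYS = {1: 1, 2: 7, 3: 30}   # offense count -> ban length in days


def ban_length_days(offense_count: int) -> int:
    capped = min(max(offense_count, 1), max(BAN_SCHEDULE_DAYS))
    return BAN_SCHEDULE_DAYS[capped]


def rolled_back_offense_count(offense_count: int,
                              completed_rollback_steps: bool,
                              report_was_wrongful: bool) -> int:
    # Completing the published rollback steps, or being the victim of a
    # wrongful report, resets the exponential banishment slope.
    if report_was_wrongful or completed_rollback_steps:
        return 0
    return offense_count
```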

Step 7: New Interface

The current interface is like a choose-your-own-adventure where over three-quarters of the paths lead you to a patronizing lecture about how you can just block people you don’t like. Upon selecting the report option, users should immediately see their available actions: Block, Report for Cyberbullying, Report for Hate Speech, or Report for Illegal Activity or Credible Threat. Exactly what constitutes each of these kinds of content should be presented. When a review is finalized, the reviewed content should be shown on the same screen as the verdict. Allow users to dispute the review.

Hold yourselves accountable. Present yourselves as equally capable of making mistakes and correcting them. Any system designed on the assumption that it won’t make mistakes, and that reports will only ever be filed by victims and never by abusers, as your system currently is, is designed to fail its users. I believe you can do better.
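
As a closing illustration of what Step 7 could look like under the hood (the names below are hypothetical, not an actual Facebook API), here is a sketch of the four actions surfaced up front, each with its own explanation, and a verdict screen that shows the reviewed content alongside a one-step dispute option:

```python
from enum import Enum, auto


class ReportAction(Enum):
    BLOCK = auto()
    REPORT_CYBERBULLYING = auto()
    REPORT_HATE_SPEECH = auto()
    REPORT_ILLEGAL_OR_CREDIBLE_THREAT = auto()


# Each action is presented with a plain statement of what it covers.
ACTION_DESCRIPTIONS = {
    ReportAction.BLOCK: "Stop seeing this person; does not trigger any review of their behavior.",
    ReportAction.REPORT_CYBERBULLYING: "Direct or indirect targeting meant to belittle, shame, torment, or isolate.",
    ReportAction.REPORT_HATE_SPEECH: "Content attacking people for race, ethnicity, national origin, religion, sex, gender identity, sexual orientation, or disability.",
    ReportAction.REPORT_ILLEGAL_OR_CREDIBLE_THREAT: "Credible threats or clearly illegal activity.",
}


def verdict_screen(reported_content: str, verdict: str) -> dict:
    # The verdict is always shown next to the content it judged,
    # and the reporter can dispute it in a single step.
    return {
        "content": reported_content,
        "verdict": verdict,
        "dispute_available": True,
    }
```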