How Facebook Enforces its Community Standards

Razeeb Mahmood
An Attempt at Writing
5 min read · Jul 17, 2018

People come to Facebook to stay connected with friends and family and to share things that are important to them, like photos, videos, and news. There is a lot of freedom in what can be shared on Facebook, but the company does have a specific set of community standards. They are designed to lay down rules on what can and can't be shared on the platform, and to ensure that users find Facebook a safe, informative, and productive space.

These are the main areas the Facebook Community Standards cover:

  • Violence and Criminal Behavior
  • Safety
  • Objectionable Content
  • Integrity and Authenticity
  • Respecting Intellectual Property

The company has been quite successful in enforcing these standards, and I'm going to highlight how it currently enforces some of them.

User Feedback Signals

Believe it or not, Facebook relies quite heavily on user feedback to monitor violations and to build tools that help address them. User feedback helps identify almost every category of violation on Facebook. On each post, users can report a variety of issues. Depending on how many users report a post, it gets escalated to Facebook's growing community standards operations team, which then examines it and decides whether it needs to be flagged and what action to take.
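
Here's a rough sketch of how that escalation might work, just to make the idea concrete. The threshold, queue, and function names are my own illustrative assumptions, not Facebook's actual system:

```python
from collections import defaultdict

# Hypothetical sketch: aggregate user reports per post and escalate a post
# to the human review queue once reports cross a threshold. The threshold
# and queue here are illustrative assumptions, not Facebook's real logic.
REPORT_THRESHOLD = 5

reports = defaultdict(list)   # post_id -> list of (user_id, reason)
review_queue = []             # posts awaiting the operations team

def report_post(post_id, user_id, reason):
    """Record one user's feedback on a post."""
    reports[post_id].append((user_id, reason))
    # Count distinct reporters so a single user can't escalate a post alone.
    distinct_reporters = {uid for uid, _ in reports[post_id]}
    if len(distinct_reporters) >= REPORT_THRESHOLD and post_id not in review_queue:
        review_queue.append(post_id)

# Usage: five different users flag the same post for hate speech.
for uid in range(5):
    report_post("post_42", f"user_{uid}", "hate_speech")
print(review_queue)  # ['post_42']
```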

Photo Detection Technology

When a photo or video gets reported for a violation and is subsequently flagged by Facebook, any attempt to upload it again, by the original user or anyone else, will be caught by Facebook's photo-matching technology and likely removed. If not during the upload itself, then soon after, and before anyone likely even sees or reports it. This is where all those user signals come in handy: Facebook takes what it has already established to be a violation and applies that knowledge to future shares of the same content, entirely by machines. Facebook can greatly limit nude, sexual, violent, terrorist, and hate-speech photo and video content this way. The same technology is also used to take down copyrighted content.
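
Facebook's exact photo-matching technology isn't public, but the general idea can be illustrated with a simple perceptual "difference hash": fingerprint each image so that a re-upload still matches even after resizing or recompression. The file paths and distance threshold here are just placeholders:

```python
from PIL import Image

# A minimal sketch of perceptual photo matching using a difference hash
# (dHash). This just illustrates the idea of catching re-uploads of
# flagged images even after minor edits like resizing or recompression.

def dhash(image_path, hash_size=8):
    """Reduce an image to a 64-bit fingerprint of brightness gradients."""
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a, b):
    """Number of bits where two fingerprints differ."""
    return bin(a ^ b).count("1")

# An upload whose hash is within a few bits of a flagged hash is treated
# as a match and blocked before anyone even sees it. Paths are placeholders.
flagged_hashes = {dhash("flagged.jpg")}
upload = dhash("new_upload.jpg")
if any(hamming(upload, h) <= 5 for h in flagged_hashes):
    print("Blocked: matches previously flagged content")
```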

Cross-Platform & Company Collaboration

The more extensive the database of flagged content, the better Facebook's tools become at identifying and flagging re-uploads of previously flagged content. That's why Facebook collaborates with its other platforms, Instagram and WhatsApp, to maintain a growing database of flagged content to query and learn from.

Facebook has also partnered with companies like Microsoft, Twitter, and YouTube, as well as with governments and agencies, to make sure there is cross-company knowledge sharing too. The whole tech space benefits from this type of collaboration.
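
Conceptually, that sharing can be as simple as a common database of content fingerprints that each partner contributes to and queries. The sketch below is purely illustrative of the idea, not any real partner API:

```python
# Hedged sketch of a shared database of content fingerprints contributed
# by multiple platforms, in the spirit of industry hash-sharing programs.
# The schema and platform names are illustrative, not a real system.
shared_hashes = {}  # fingerprint -> set of platforms that flagged it

def contribute(platform, fingerprint):
    """A partner adds a fingerprint it has confirmed as violating."""
    shared_hashes.setdefault(fingerprint, set()).add(platform)

def is_known_violation(fingerprint):
    """Any partner can check new uploads against the shared database."""
    return fingerprint in shared_hashes

contribute("facebook", 0xDEADBEEF)
contribute("youtube", 0xDEADBEEF)      # the same content, flagged elsewhere too
print(is_known_violation(0xDEADBEEF))  # True
```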

Machine Learning & Artificial Intelligence

Having databases of flagged content is great, but if photo-matching were all Facebook did, it would still be heavily reliant on user signals to flag new violating content. That's why Facebook uses AI to fight most of its community standards violations.

Take nude and sexual content, for example. Facebook applies machine learning to all of its previously flagged nude and sexual content to identify patterns that are unique to that content. I won't list what they are (hehe) but they are listed here. Once trained, Facebook's AI can use computer vision to look for those patterns in a new photo. If the photo satisfies the criteria for a violation with high confidence, it gets flagged.
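
In practice, this kind of pattern learning is often done by fine-tuning a pretrained vision model on flagged versus benign examples. The sketch below follows that generic transfer-learning recipe; the model choice, stand-in data, and confidence threshold are my assumptions, not Facebook's actual setup:

```python
import torch
import torch.nn as nn
from torchvision import models

# Generic transfer-learning sketch, not Facebook's classifier: fine-tune a
# pretrained vision model on previously flagged images (label 1) versus
# benign images (label 0).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # violating vs. benign

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: in practice these would be real flagged/benign photos.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

model.train()
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()

# At serving time, flag a photo only when the model is highly confident.
model.eval()
with torch.no_grad():
    probs = torch.softmax(model(images[:1]), dim=1)
if probs[0, 1] > 0.95:  # illustrative confidence threshold
    print("Flagged as likely violating content")
```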

All three of these components, the machine learning models, the computer vision, and the surrounding AI systems, are constantly being tweaked to be more and more accurate. Currently, most community standards violations are flagged by Facebook's AI. Here is a stat from Facebook's enforcement numbers, just for adult nudity and sexual content:

We took down 21 million pieces of adult nudity and sexual activity in Q1 2018, 96% of which was found and flagged by our technology before it was reported.

Text Detection Technology

Speech violations are probably the most difficult thing for Facebook to address, because in speech, context is so important, and machines can't really understand context (even humans struggle to). This is why Facebook uses trained reviewers in all parts of the world, reviewing content from their respective region and culture, to judge whether it violates a given community standard.

Certain sequences of text can still be flagged by Facebook's AI, like known terrorist propaganda or copyrighted material, through text-matching technology.
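
A common way to match known passages is to hash overlapping word n-grams ("shingles") of flagged text and look for overlap in new posts. This is a generic technique, not necessarily what Facebook uses; the shingle size and threshold below are illustrative:

```python
import hashlib

# Hedged sketch of text matching against known violating passages: hash
# overlapping word n-grams ("shingles") of flagged text, then check new
# posts for overlapping shingles. Sizes and thresholds are assumptions.
def shingles(text, n=5):
    words = text.lower().split()
    return {
        hashlib.sha1(" ".join(words[i:i + n]).encode()).hexdigest()
        for i in range(max(1, len(words) - n + 1))
    }

# Stand-in for a database of reviewer-confirmed violating text.
flagged = shingles("example of a previously flagged propaganda passage "
                   "that reviewers confirmed violates the standards")

def matches_flagged_text(post, threshold=0.5):
    """Flag a post when enough of its shingles overlap known violations."""
    post_shingles = shingles(post)
    overlap = len(post_shingles & flagged)
    return overlap / max(1, len(post_shingles)) >= threshold

print(matches_flagged_text(
    "example of a previously flagged propaganda passage copied verbatim"
))  # True
```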

User Behavior Signals

How a user behaves can help Facebook identify possible bad actors, violations, and even individuals in need of help.

Fake Accounts & Spamming

Normally, when users sign up for Facebook they build up their profile, look up their friends, send requests, and so on. They don't start liking or sharing posts in high volume from the get-go. The same principle applies even if a user is legit: it's a red flag if specific content is being shared and/or liked over and over.

Facebook doesn't really care if you have a duplicate or fake account; about 4% of its active users are fake. It's the spamming and scamming through those accounts that it cares about greatly.
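
A toy version of this kind of behavior signal might score an account on its action rate and how repetitive its targets are, as in the sketch below. The thresholds and scoring are purely illustrative:

```python
from datetime import timedelta

# Illustrative heuristic, not Facebook's real model: a brand-new account
# that immediately likes/shares at high volume, especially the same
# content over and over, looks more like a spam bot than a person.
def spam_score(account_age, actions):
    """actions: list of (verb, target_id) the account has taken so far."""
    score = 0.0
    # Actions per hour since signup; floor the age to avoid dividing by ~0.
    rate = len(actions) / max(account_age / timedelta(hours=1), 0.1)
    if rate > 30:                      # >30 actions/hour from the get-go
        score += 0.5
    targets = [t for _, t in actions]
    if targets and max(targets.count(t) for t in set(targets)) > 10:
        score += 0.5                   # hammering the same content
    return score

# Usage: a 20-minute-old account that has shared one post 40 times.
actions = [("share", "post_7")] * 40
print(spam_score(timedelta(minutes=20), actions))  # 1.0 -> review/limit
```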

Terrorist Acts

Disturbing behavior, like sharing, liking, or sympathizing with terrorist groups, or announcing plans and coordinating attacks, raises red flags for Facebook. It can identify the people involved, take action itself, and also contact the authorities if it deems something dangerous is imminent.

Clearly, given almost all of the recent shootings, Facebook needs to get much better at detecting this type of behavior.

Mental Health Problems

Quite a few people have committed suicide or self-harm on Facebook, through livestreams (I won't link them here). Sadly, Facebook can't stop every unfortunate event from happening. What it can do, however, is try to help people in need when there are signs of depression or suicidal tendencies. Feedback from friends and family showing concern also helps human moderators reach out to people who may be in need.

AI has been shown to predict suicide risk with high accuracy, and Facebook employs its own AI to detect warning signs and try to help people.

Real-time Audio & Video Monitoring

How do you stop someone from livestreaming a PPV event on Facebook? Well, Facebook can take audio samples from an event and check whether any livestreams contain that sample (repeating the check to be more sure). If the audio is turned off :) Facebook can also apply computer vision techniques to compare the video feed and check whether it matches the PPV feed. With a combination of these, plus the popularity of the stream, Facebook can take down livestreams and stop repeat offenders from using the feature.
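
Here's a toy illustration of the audio-matching idea: fingerprint the dominant frequency in each short window of the livestream and the reference feed, then measure how often they agree. Real systems use far more robust landmark hashing; the sample rate, window size, and threshold are my assumptions:

```python
import numpy as np

# Toy audio fingerprinting for matching a livestream against a reference
# PPV feed: record the dominant frequency bin in each short window, then
# count how many windows the two signals agree on. Illustrative only.
def fingerprints(signal, window=1024):
    prints = []
    for start in range(0, len(signal) - window, window):
        spectrum = np.abs(np.fft.rfft(signal[start:start + window]))
        prints.append(int(np.argmax(spectrum[1:]) + 1))  # skip the DC bin
    return prints

def match_fraction(stream, reference):
    hits = sum(1 for a, b in zip(stream, reference) if a == b)
    return hits / max(1, min(len(stream), len(reference)))

rate = 8000
t = np.arange(rate * 5) / rate                       # 5 seconds of audio
reference = np.sin(2 * np.pi * 440 * t)              # the PPV feed's audio
stream = reference + 0.05 * np.random.randn(len(t))  # noisy rebroadcast
if match_fraction(fingerprints(stream), fingerprints(reference)) > 0.8:
    print("Likely rebroadcast: take down the livestream")
```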

Human Moderators

Facebook can detect and take action on most of the content it flags for violations through its AI. But for content that is not so clear-cut (speech, for example), or content that is in dispute, human expertise is needed and employed. Human moderators play a vital role in helping Facebook monitor and flag content, and in improving Facebook's monitoring and detection technologies.

Maintaining community standards is a never-ending battle for companies such as Facebook. The goal is to always stay at least one step ahead of the violators, because whatever Facebook does, the violators will adapt and respond. And once they respond, Facebook will adapt and respond. And the cycle continues. I'm sure people working in this space are equally exhausted and excited by the continuously changing battle scenarios and players.
