Facebook says it deleted half a billion fake accounts and 837 million spam messages in Q1 2018
When your platform is built around user-generated content, not everybody will use it for good. Policing that content to make sure people aren’t breaking the law, posting nude photos or hate speech, spreading fake news, or spamming everyone is a daunting task.
“We’re often asked how we decide what’s allowed on Facebook,” Facebook VP of Product Management Guy Rosen said in a recent blog post. “And how much bad stuff is out there.”
Last month, Facebook published the guidelines its internal teams use to decide what to remove from the platform. You can read them yourself here.
Facebook uses a combination of artificial intelligence and user reports to identify content that violates its community standards. The company says more than 7,500 people review flagged content, a 40% increase over the past year, and they monitor it 24/7 in 40 different languages.
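Facebook hasn’t published the internals of that pipeline, but the division of labor Rosen describes, automated detection first, with user reports and uncertain cases routed to human reviewers, follows a familiar triage pattern. Here is a minimal sketch of that pattern; every name, threshold, and the toy classifier below are illustrative assumptions, not Facebook’s actual system.

```python
# Hypothetical sketch of a hybrid AI + user-report moderation pipeline.
# The names, thresholds, and toy classifier are illustrative assumptions,
# not Facebook's actual system: confident automated detections are acted
# on immediately, while user reports and uncertain cases go to humans.

from dataclasses import dataclass

AUTO_ACTION_THRESHOLD = 0.95   # assumed: model confident enough to act alone
HUMAN_REVIEW_THRESHOLD = 0.50  # assumed: uncertain cases need a human look

@dataclass
class Post:
    post_id: int
    text: str
    user_reports: int = 0  # how many users have flagged this post

def classifier_score(post: Post) -> float:
    """Toy stand-in for a trained model scoring violation likelihood (0.0-1.0)."""
    banned_terms = {"spam-link", "nudity"}  # hypothetical term list
    hits = sum(term in post.text.lower() for term in banned_terms)
    return min(1.0, 0.6 * hits)

def triage(post: Post, review_queue: list) -> str:
    """Route a post: automatic action, human review, or no action."""
    score = classifier_score(post)
    if score >= AUTO_ACTION_THRESHOLD:
        # The "found and flagged before anyone reported it" path.
        return "removed_automatically"
    if score >= HUMAN_REVIEW_THRESHOLD or post.user_reports > 0:
        # Routed to human reviewers, like Facebook's 7,500+ moderators.
        review_queue.append(post)
        return "queued_for_human_review"
    return "no_action"

queue: list = []
print(triage(Post(1, "free stuff at spam-link, also nudity"), queue))  # removed_automatically
print(triage(Post(2, "borderline post", user_reports=3), queue))       # queued_for_human_review
print(triage(Post(3, "photos from my trip"), queue))                   # no_action
```

In a real system the thresholds would be tuned per category; Facebook’s own numbers below, where only 38% of hate speech was machine-flagged, show why the human review queue still carries most of the load for harder categories.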
“We won’t prevent all mistakes or abuse, but we currently make too many errors enforcing our policies and preventing misuse of our tools,” CEO Mark Zuckerberg said earlier this year. “We know we don’t always get it right.”
“We built Facebook to be a place where people can openly discuss different ideas, even ideas that some people may find controversial or offensive,” said Alex Schultz, VP of Analytics at Facebook. “But we also want to make sure our service is safe for everyone. Sometimes that is a hard balance to strike.”
Here’s what Facebook reports it did in the first three months of the year:
- We took down 837 million pieces of spam in Q1 2018 — nearly 100% of which we found and flagged before anyone reported it.
- The key to fighting spam is taking down the fake accounts that spread it. In Q1, we disabled about 583 million fake accounts — most of which were disabled within minutes of registration. This is in addition to the millions of fake account attempts we prevent daily from ever registering with Facebook. Overall, we estimate that around 3 to 4% of the active Facebook accounts on the site during this time period were still fake.
- We took down 21 million pieces of adult nudity and sexual activity in Q1 2018, 96% of which was found and flagged by our technology before it was reported. Overall, we estimate that out of every 10,000 pieces of content viewed on Facebook, 7 to 9 views were of content that violated our adult nudity and pornography standards.
- For graphic violence, we took down or applied warning labels to about 3.5 million pieces of violent content in Q1 2018, 86% of which was identified by our technology before it was reported to Facebook.
- For hate speech, our technology still doesn’t work that well and so it needs to be checked by our review teams. We removed 2.5 million pieces of hate speech in Q1 2018, 38% of which was flagged by our technology.
“In many areas — whether it’s spam, porn or fake accounts — we’re up against sophisticated adversaries who continually change tactics to circumvent our controls,” Facebook’s Rosen said. “Which means we must continuously build and adapt our efforts.”
Even so, things are going to slip through.
“Our policies are only as good as the strength and accuracy of our enforcement,” said Facebook VP of Global Policy Management Monika Bickert in a blog post. “And our enforcement isn’t perfect.”