Facebook: Moderating 2 Billion

Anshuman Pati
Published in GDSC KIIT
11 min read · Sep 5, 2020

Social media: what's the name that comes to mind when you read that? Let me guess. Instagram, WhatsApp, or Facebook is surely one of them? I think that says something. Monopoly? Check. Anti-competitive? Check. We have ourselves a giant, evil corporation. Now, unlike the movies, it isn't made up of mean folks in black jackets or shady guys wearing face-masks (wait, no); in fact, it's some pretty great engineers who love what they do. And while most of them aren't evil and most of them don't work with Facebook's agendas in mind, together they run a social media juggernaut that tracks people, jeopardizes democracies, incites violence, spreads fake news and breeds derogatory mindsets. "But that's the people, not the platform", you might say. But you see, or should I say, what you DON'T see is that the platform selectively enables controversy. The human mind is a sucker for controversy, conspiracy, argument and validation. So, how do you get the gullible everyday person hooked? You feed them stories that strike those chords, and you make them feel the need to speak up, to debate, to argue and to antagonize any opposition. "Nice philosophy, but what are you getting at?"

Shooting? That’s covered in our policy

If I wrote about every instance of Facebook's abuse of "power" or reach, it would make for a complete case study, not an article. Let's begin with the most recent one I can remember. On the night of August 25, two protesters in Kenosha were shot dead by a "counter-protester". How is Facebook involved in this, you ask? Facebook hosted a page called "The Kenosha Guard", a white supremacist group that organized a "Call to Arms" event. You might think this slipped by Facebook's filters, but you would be wrong. The page and its posts encouraging violence were reported by users at least twice, and Facebook's moderators found that these posts and comments, which mentioned "being locked and loaded", "bringing in everything", and the like, did not violate its community standards. All of this happened prior to the night of the shooting; Facebook took down the page the next morning. Now, maybe the shooter wasn't affiliated with that group. Maybe he was. But what could possibly be the reasoning behind keeping an openly violent page live on the platform after users had reported its nature and existence?

Facebook's response to a report about violence-inciting posts on the Kenosha Guard page. In other words: "discussing shooting people is just part of our community standards".

Moderation

That's the most recent instance of Facebook turning a blind eye; there are thousands more, recorded and unrecorded. Now that we've looked at an event, let's look at how Facebook handles moderation on its platform. Mark Zuckerberg and many others at Facebook have promised that AI can solve the problem of content moderation, and they've mostly used automated handling as an excuse when things go south. In 2017, Facebook started experimenting with a model to detect content that "advocates extremism", presumably built on and expanded from previously developed algorithms like eGlyph.

An image recognition model being trained to parse memes. (Source: Facebook, 2017)

But facing facts, AI at its current stage is not capable of understanding the nuances of human language. More often than not, the content carries context that is relevant to the speaker or to a subject the speaker cares about. AI is better suited to being the first line of defense: flagging potentially harmful content and assigning it a "score" so that false positives and doubtful negatives can be reviewed by human moderators (a rough sketch of this routing follows below). But the inherent flaw here is the bias baked into the AI's training, as it is tuned to Facebook's "internal" community standards. More on that in a bit. Facebook has time and again emphasized that "AI can't catch everything", so it's rather confusing why Mark Zuckerberg continues to use AI as the scapegoat whenever Facebook gets into a fix.

The company takes a certain amount of pride in its human moderation operations and efforts. And with the huge userbase that it has, Facebook consults with professional service vendors around the world to assemble a contractual moderation force. Globally, the company has a workforce of 30,000 employees working on safety and security; about half of them are content moderators. Most of the moderation used to be based in developing countries like the Philippines (Manila), with operations recently expanded in the US. On its home soil, the most notable (and best documented) of these professional vendors is Cognizant. Why outsource? Simply put, contract labor is about 10 times cheaper: the average content moderator in the US gets paid $28k a year, as opposed to the average Facebook employee's $240k.
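To make that triage idea concrete, here is a minimal sketch in Python. Everything in it is assumed for illustration (the thresholds, names and routing labels are mine, not Facebook's actual pipeline); it only shows how a model's "score" might decide whether a reported post gets removed automatically, queued for a human, or left up.

```python
# Hypothetical score-based triage. This is NOT Facebook's real pipeline;
# the thresholds and labels below are illustrative assumptions.

from dataclasses import dataclass

AUTO_REMOVE_THRESHOLD = 0.95   # assumed: near-certain violations are removed automatically
HUMAN_REVIEW_THRESHOLD = 0.40  # assumed: doubtful cases go to a moderator's queue


@dataclass
class ReportedPost:
    post_id: str
    text: str
    harm_score: float  # produced by some upstream classifier (not shown here)


def triage(post: ReportedPost) -> str:
    """Route a reported post based on its model score."""
    if post.harm_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"
    if post.harm_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # false positives get a second look from a person here
    return "leave_up"          # low-scoring posts never reach a moderator at all


if __name__ == "__main__":
    queue = [
        ReportedPost("1", "clearly violating content", 0.97),
        ReportedPost("2", "borderline, context-dependent post", 0.55),
        ReportedPost("3", "harmless post reported in bad faith", 0.10),
    ]
    for post in queue:
        print(post.post_id, triage(post))
```

The failure mode is exactly the one described above: if the upstream model is tuned to the internal guidelines, a dangerous-but-borderline post can score below the review threshold and never reach a human at all.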

Cognizant's moderation site office in Phoenix, Arizona, US. (Source: The Verge)

These employees have to sign non-disclosure agreements, agreeing not to discuss their work for Facebook or even mention that Facebook is a client of Cognizant. While the workplace isn't a dingy, dark basement with green screens, a regular day at a moderation site has employees going through the moderation queues assigned to them. This is no ordinary queue of policy violations: almost every post contains hate speech, violence, pornography, conspiracy theories, or some combination of these. Collectively, employees have described a workplace operating on the brink of chaos, with workers coping through dark humor, drug abuse and more; some have embraced the fringe viewpoints of the very videos and memes they were supposed to remove. Every single break, even the 9-minute "wellness break", is micromanaged by the overseeing managers. On a typical day, a moderator will go through about 400 posts, spending about 30 seconds on each. Two documents serve as the "constitution" when dealing with this flagged content: the public community guidelines and a 15,000-word internal document (a.k.a. the private guidelines).

A post is checked for key identifiers like abusive or racial slurs, and then for dependent parameters like the context of the post itself (e.g., "I hate all men" is a violation of policy by itself, but "I broke up with my boyfriend, and I hate all men" is not). For content that is more complex or confusing, each workspace has subject matter experts: employees who specialize in specific topics. There's also a FAQ document that covers previously flagged questions.
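As a rough illustration of that two-step check (the phrase lists below are made up; the real internal rules run to thousands of words), a post can be flagged on a key identifier first and then exempted by its surrounding context:

```python
# Illustrative keyword-plus-context check, loosely based on the
# "I hate all men" example above. The phrase lists are stand-ins,
# not Facebook's actual policy terms.

FLAGGED_PHRASES = {"i hate all men"}                        # step 1: key identifiers
CONTEXT_EXEMPTIONS = {"broke up", "my boyfriend", "my ex"}  # step 2: contextual cues


def violates_policy(post_text: str) -> bool:
    text = post_text.lower()
    # Step 1: look for abusive phrases / attacks on a protected category.
    if not any(phrase in text for phrase in FLAGGED_PHRASES):
        return False
    # Step 2: dependent parameters -- surrounding context can exempt the post.
    if any(cue in text for cue in CONTEXT_EXEMPTIONS):
        return False
    return True


print(violates_policy("I hate all men"))                                    # True: violation
print(violates_policy("I broke up with my boyfriend, and I hate all men"))  # False: not a violation
```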

A board showing the SOPs for moderation.

However, a lot of content falls outside the scope of these documents, and in those cases the moderator makes "a call of conscience", for better or worse. During major events like mass shootings, moderators have to make quick, consensus-based decisions to work through the surge of reported content. Wrong decisions are later corrected by managers on review. I think we can see exactly where this mechanism fails: most situations fall outside the documents' scope altogether, fewer are covered by the "internal" document, and fewer still by the publicly available guidelines.

An internal flowchart created by Facebook to explain the handling of hate speech regarding migrants. Notice the "consider other policies" step. (Source: Vice)

Facebook calls these moderators the most "vital" part of keeping its platform safe and healthy. But how can we expect these underpaid, overworked, mentally abused workers to perform at the best of their capacity and filter out everything harmful when they're subjected, every day, to a workplace that is exploitative by nature? How do we expect them to do their work without the needed transparency and a more expansive set of guidelines to work with? We just can't. Now, this looks like a problem that could be solved with "more transparency". Does Facebook have a problem there? Yes, it does.

The Showrunners

We've read about the problems Facebook faces with moderation on its platform. We know that solving the moderation problem is in no way simple or straightforward. But if there's one thing that could be done better, it's the transparency with which these situations are handled. Why doesn't that happen? Well, simply put, you don't become the biggest social network in the world without playing dirty and ruffling a few feathers. It's often difficult to classify something as objectively good or bad, but I think Facebook as a company needs to be better than the "evil social network company" it has come to be called. In March 2019, the company reached a legal settlement with civil rights groups over running discriminatory ads. It promised it wouldn't let advertisers target users based on their race, gender, ethnicity or religion.

A job ad for personal care work, which targeted users of "African American multicultural" ethnicity under the age of 55.
A job ad for a mechanic at the NY Transport Department; supposed to target anybody, it targeted men 13 times more often than women.

Yet, in August 2020, an investigation found that Facebook still gave advertisers the option to target people based on those parameters. Setting aside why this existed in the first place, it's an inherently flawed (and illegal) mechanism, and the only reason to keep it running was that "better" targeted advertisements and content drive more revenue for Facebook. And these shady practices are not limited to job advertisements, of course.

On August 14, 2020, The Wall Street Journal published a story on how Facebook's head of public policy in India, Ankhi Das, had shown open bias towards the ruling political party. She was said to be supportive of the BJP and Prime Minister Modi, and to have disparaged the opposition. A story published two days later showed how she had actively prevented posts and profiles of ruling-party leaders from being taken down under the dangerous individuals and organizations policy. The report cited "current and former" Facebook employees as saying that her intervention was part of a "broader pattern of favoritism" by the company towards the ruling party; the reason she cited was that acting against the party would damage the company's business interests in its biggest market (346 million users). Some Facebook employees said the sentiments and actions described conflicted with the company's longstanding neutrality pledge. Facebook took down the posts made by the executive soon after the reports were published.

Of course, this favoritism isn't there to support conservative or liberal interests (even the ruling BJP has summoned Facebook over an alleged anti-conservative bias); it's simply in favor of Facebook's broader business interests in its markets. While being calculated about business is normal for corporations, it's not right when a social network hosts content and people who are harmful to the neutrality of the platform. Operating in this manner makes Facebook the biggest tool for misinformation, fake news, targeted propaganda and communal hatred. And I think we all know the current situation of fake news on Facebook and WhatsApp in India. While no single person can be held to account for everything that's wrong, the executives and management at Facebook are more responsible for this situation, and for the company's handling of it, than anyone else.

How to make this right?

If I knew the absolute answer to that, well, it still wouldn't make much of a difference. But I believe this situation can be worked on from two ends: the users and the company. We, as users, are the most vital part of any social network. Yes, Facebook is too big now to be bothered by what a handful of informed users think, but people in general need to be more aware of how the platform works, and they need to be told what's objectively wrong. Everyone's allowed to have an opinion, but no one is allowed to manipulate someone else's opinion. Most people are susceptible to misinformation, most people don't care, and people in developing countries, at least, have a tendency to believe what they read on the internet (of all places). If you believe you're one of those "better informed" people, then help others see when something isn't right. Educate the people around you, and ask them to do the same. I'll leave the practicality bit to you. But that's really all we as users can do: educate ourselves about these biases, and differentiate between what's right and what's wrong.

A chalkboard at Facebook’s HQ about building a community.

As for Facebook, there are some necessary changes that would weed out a lot of these issues from the get-go. The company should make moderation more transparent. It can't keep "internal" documents that pull in a different direction from the publicly available guidelines. It needs to stop using AI as a scapegoat for when things go wrong. Yes, the current models aren't good enough to offer complete moderation, but they can be used as a first check for all reported content. "Isn't that already being done?" Well, yes and no, because these models selectively ignore the same borderline situations that are treated "differently" in the internal documentation. In short, the filtering is biased. This causes some potentially harmful posts to be flagged as less important, and they are sometimes left up after a negligent review by a moderator. Note that this isn't speculative; it's exactly what happened with the Kenosha shooting, among (many) other instances.

The company also needs to maintain a neutral board of executives and, in turn, a neutral policy. Business interests can't be the make-or-break factor for the biggest social network on the planet; that position carries a huge responsibility to keep the platform safe and neutral. The company should listen to its own employees, who think it has become a platform for all the fake news, misinformation and propaganda on the internet. Healthy communication can only thrive in a healthy environment. In the end, I think it boils down to choosing to be something good or something bad. I'd like to close with this bit from Google's code of conduct (for lack of one at Facebook).

Don’t be evil.

Oh and the funny bit? It was removed by Google in 2018.

References and further reading:

Facebook chose not to act on militia complaints before Kenosha shooting — The Verge

Does Facebook Still Sell Discriminatory Ads? — The Markup

The Trauma Floor, Secret Lives of Facebook Moderators in America — The Verge

Facebook Employees Are Outraged At Mark Zuckerberg’s Explanations Of How It Handled The Kenosha Violence — Buzzfeed News

The Great A.I. Beta Test — Slate

The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People — Vice

AI won’t relieve the misery of Facebook’s human moderators — The Verge

The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed — Wired


Anshuman Pati (GDSC KIIT): Android enthusiast, interested in consumer tech, ethics in tech, and writing about all of that.