Getting Zucc’d: How Biased is Your Facebook Feed?

Jessica Paige Barrett
Published in The Public Ear · Apr 2, 2019 · 4 min read

You wake up, roll over and, due to your terrible millennial habits, the first thing you do is check your phone. You've been told a million times, through a million sources, not to look at your phone in the morning or before bed, but, whatever. Millennials, according to the generations before us, are a group of barely-functioning trainwrecks practically addicted to technology, so what the hell, why not embrace the stereotype?

The thing is, we’ve grown up alongside platforms such as Instagram and Facebook. They’ve become so ingrained in our lives and routines that it’s easy to lose sight of the fact that, while you’re staring at your screen, there’s someone staring right back.

Worldwide, Facebook has over 2.32 billion active users, with 1.52 billion of them engaging, updating and interacting with their Facebook accounts every single day. And every day, we rely on a workforce of around 30,000 people to monitor and moderate those interactions.

An insane number of recent articles have discussed the issues faced by these content moderators, specifically the trauma they endure from the graphic nature of the content they see. However, the major problem faced by us, the users on the other end, is that we are entrusting this small group of men and women to dictate our online experiences.

A Facebook group that I have been part of for well over a year now has been taken down upwards of six times. It's a meme page for women, queer and non-gender-conforming people. The group isn't anything like the infamous Bad Girls Advice page, which let women's abusive behaviour towards their partners run rampant. Instead, it's a page for sharing humour without the gaze of straight cisgender men.

All over the internet there are accounts of an inherent bias in Facebook moderating, which has highlighted the double standards when it comes to censorship. Facebook Jailed, an activist group that works towards exposing this bias, found that threats about killing or physically harming women, and posts such as "women are scum", were left visible and easily consumed — yet posts stating "men are trash" are another story. Post these and you're looking at a 30-day ban from the platform.

A recent exposé into the lives of Facebook moderators revealed that, given the content they are exposed to every day, some moderators are led down a path that sees them shifting their world views, from believing the earth is flat (news flash: it's not) to questioning the legitimacy of the Holocaust (news flash: it happened).

Is it a stretch to believe that constant exposure to racist and sexist content may also have a negative impact on their beliefs and, in turn, on the content they allow you to see?

In 2017, The Guardian revealed leaked guidelines surrounding Facebook's moderating policies, most of which are indirect and open to interpretation, allowing moderators to apply their own morals when making decisions. For example, while Facebook bans racism and specifically bans white supremacy, the leaked documents revealed that it allows white nationalism and white separatism, which Black history scholars say are essentially identical to white supremacy.

A line that particularly stood out to me in Facebook's moderating manual was, "we aim to allow as much speech as possible but draw the line at content that would cause real world harm." At what point do we start to question whether allowing white supremacy, sexism, and the belittlement and subjection of minorities online is having real-world impacts?

As we rush towards 2020, the internet has interconnected our societies in ways we could never have imagined. Is it time for companies to stop stepping back under the guise of free speech? Because in reality, it seems they just want more people using their platform for the benefit of their own bank accounts.

Facebook itself may not be creating hateful ideologies, but are they complicit in allowing them to thrive and flourish?

At what point do we, as a global society, have to stop allowing the immoral behaviours of the past to follow us into the future? I guess, like every argument, this one is particularly nuanced. But this bias, whether it comes from policy design, content moderation or the blatant greed of those who run these organisations, can have a real-world impact, further marginalising communities that already experience prejudice and persecution offline.

Facebook may have what seems like an impossible job, but I'm sure it's one that nearly $16.9 billion in revenue could go a long way towards fixing.
