Facebook has had its share of scandals recently. Just within the past few weeks, Facebook was the subject of an undercover report about content moderation in Ireland, which led to calls for fines against the social network. Mark Zuckerberg, Facebook’s co-founder and CEO, defended the right of Holocaust deniers to post on the site, a statement that required almost immediate clarification. Leaked documents revealed that Zuckerberg actually congratulated the Trump campaign on its imaginative use of the platform. And outgoing Chief Security Officer Alex Stamos sounded the alarm over Facebook being invasive and failing to take stands on clear moral and humanitarian issues.
In the midst of its turbulent year, which has included the largest one-day stock price drop in history, the platform has committed to improve the experience for its users and to be more transparent with the public. Facebook has enlisted the help of Laura Murphy to conduct a civil rights audit of the company’s impact on underrepresented communities and communities of color in an attempt to be more accountable to American users. Murphy has already begun meeting with stakeholders in the technology policy and civil rights arena. As the final deliverables start to take shape, it is imperative that we keep Facebook’s focus on the importance of this audit and on changes to its user experience.
A significant step in the right direction requires changing the community standards to make them more accessible, readable, and easily searchable. While the standards are not prominently displayed on or linked from the homepage, the URL is easily found on search engines. The body of the standards is divided into six categories, which are navigable on the left. If the topic of interest is not easily found, the search bar at the top of the page directs you to the exact paragraph containing the keywords. However, one important detail is not easily found: when the community standards were last updated. As previously reported, the standards seem to be updated sporadically, typically after a controversy. It is not clear how to find the last time the standards were updated or what was changed since the last iteration. This demonstrates a lack of transparency toward Facebook’s users.
Another serious concern is that the standards are not applied consistently. For instance, it is not uncommon for racist and offensive posts to remain up while posts using the word “cracker” (in the context of Christmas cookies) are swiftly removed. Anti-Muslim rhetoric is often left up, even after flagging. Black activists are routinely banned on Facebook, while their critics have been able to game the system to have users sent to “Facebook jail.”
This isn’t a matter of people being overly sensitive or unable to take a joke; it’s about the safety of our communities. Facebook has the tools and resources to stop these sources of hate if it chooses to do so. Instead, it is turning a blind eye and, until recently, putting the entire burden of proof on the victims. This year, Facebook introduced new artificial intelligence (AI) to flag hate speech. But so far, it hasn’t been much better than the human review process, considering that it accidentally took down part of the Declaration of Independence.
Even when proof is presented by its users, Facebook does little to nothing to remedy the situation. For example, the Southern Poverty Law Center flagged 200 hate groups, and fewer than 10 were removed. Facebook has devised convoluted rules to determine what is or isn’t hate speech and routinely ignores the advice of external experts. Instead, the company relies on its internal standards, which have not been able to protect the most vulnerable communities on its platform thus far. According to Zuckerberg, even if content is deeply offensive, he does not believe that Facebook should take it down. That viewpoint has allowed hate speech to spread throughout the platform. In fact, it is the reason why users like Alex Jones have been able to spread hate and conspiracy theories on the platform, only recently restricted by a brief suspension.
Many typical users of Facebook like myself are aware that hate speech lives on the site, but we rarely encounter it in our interactions with the platform. Thanks to Facebook’s algorithms, unless you are a public figure or prominent activist, it is unlikely that you will be exposed to content that you find offensive or hateful. For example, as a Latina, I have never seen an anti-Latino post on my newsfeed, nor does Alex Jones ever make an appearance on my screen. Like most, I tend to engage with those with similar ideologies and have created a little echo chamber for myself. It would take a drastic change in the way that I engage with Facebook to start seeing posts from the 71 anti-immigrant groups on Facebook¹. However, this does not mean that the hate speech that lives on the platform does not affect me.
Just because I don’t see hateful posts on my page does not diminish the real-life consequences of their existence or widespread distribution. The National Hispanic Media Coalition has studied for years how hate speech online translates to hate crimes in real life. Ever since the advent of the Internet, hate groups have used it to recruit members, spread racist ideas, and encourage offline violence². Within the past year, we have seen a 51% increase in hate crimes against Latinos in California. The Unite the Right rally was largely organized on Facebook, with reports of a Unite the Right 2 event being organized through Facebook Messenger.
Despite evidence of over 700,000 unique members of over 1,870 far-right extremist groups that regularly post and share events, Facebook has made external investigations close to impossible. Nonprofits and researchers have been shut out of the data required to conduct external reviews and audits of Facebook’s algorithms and policy practices³. Researchers have either decided to focus on Twitter or to pursue hate speech on several other platforms, including 4chan, 8chan, Gab, and YouTube⁴. ProPublica has used crowdsourced research techniques to circumvent Facebook’s tight grip on its data and is working to reverse-engineer Facebook’s algorithms. It wasn’t until July 11 of this year that Facebook allowed academic researchers access to limited datasets.
While cracking down on hate groups on Facebook won’t stop hate speech on the Internet, Facebook should not be a platform for hate groups to organize, spread their harmful messages, and ignite real-life violence. In its own words, “there is no place for hate on Facebook.” From its inception, no one expected the platform to grow to engage 1.45 billion daily users or to be tackling hate speech, data misuse, and fake news. But as Facebook takes a stand against hate speech, all we ask is that it follow through on its commitment to improve its users’ experience and be more transparent with the public. The civil rights audit is a great start, and we will be monitoring its progress closely.
— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —
1. Email communication with Dr. Squire, Professor of Computer Science at Elon University, July 1, 2018.
2. Ben-David, A. and Matamoros-Fernandez, A. (2016). “Hate speech and covert discrimination on social media: Monitoring the Facebook pages of extreme-right political parties in Spain.” International Journal of Communication, 10, pp. 1167–1193.
3. O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Broadway Books.
4. Phone interview with Joan Donovan, Media Manipulation/Platform Accountability Research Lead at Data & Society, June 25, 2018.