Evolution of Online Hate Speech and Fake News

Folajomi Agoro
An Idea (by Ingenious Piece)
4 min read · Sep 15, 2020
Photo via Pikrepo: https://www.pikrepo.com/search?q=fake+news

According to Facebook’s Community Standards, hate speech is defined as “a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity and serious disease or disability”.

The ability to communicate with a wider audience means that people’s engagement with politics and public affairs, and their interactions with one another, have evolved from what they used to be. Hateful messages and incitements to violence are distributed and amplified on social media in ways that were not previously imagined.

In 2013, a year on from its initial public offering, Facebook had developed a major abuse and hate speech problem and seemed ill-equipped to deal with it. Several changes were announced to tackle the problem head-on, namely:

· A review and update of the guidelines used to evaluate reports of hate speech

· Updated training for the teams responsible for these evaluations

· Increased accountability for creators of “cruel and insensitive” content, who would be required to reveal their identities

· Increased communication with groups already working against hate speech

Between 2015 and 2017, Facebook had to further improve the user experience to try to stamp out fake news and the spread of false information, disruptive video clickbait, and online harassment of all sorts. Updates were introduced with varying levels of success, ranging from tools that help users spot and report fake news, to partnerships with fact-checkers, to penalties for websites that set out to spread hoaxes and fake news for malicious purposes. Worries grew about how Facebook might be used to sway political events in the future, whether through mined data or fake news and content.

It didn’t help when the US Congress released data on the ads bought between 2015 and 2017 by the Internet Research Agency, a Russian-backed organization. This brought renewed attention to political meddling on the platform.

Analysts say trends in hate crimes around the world echo changes in the political climate, and that social media can magnify discord. At their most extreme, rumors and attacks disseminated online have contributed to violence ranging from lynchings to ethnic cleansing. The response has been uneven, and the task of deciding what to censor, and how, has largely fallen to a handful of corporations. These companies are also constrained by domestic laws. In liberal democracies, such laws can serve to defuse discrimination and head off violence against minorities; but the same kinds of laws can also be used to suppress minorities and dissidents.

Much of the world now communicates on social media, with nearly a third of the world’s population active on Facebook alone. As more and more people have moved online, experts say, individuals inclined toward racism, misogyny, or homophobia have found niches that can reinforce their views and goad them to violence. Social media platforms also offer violent actors the opportunity to publicize their acts.

Social media platforms rely on a combination of artificial intelligence, user reporting, and staff known as content moderators to enforce their rules on appropriate content. Moderators, however, are burdened by the sheer volume of content and by the trauma of sifting through disturbing posts.
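To make that combination of signals concrete, here is a minimal sketch of how such a triage pipeline might route posts. The thresholds, field names, and triage logic are purely illustrative assumptions for this article, not how any real platform actually works:

```python
# Toy moderation triage: combines an assumed classifier score with user
# reports. Purely illustrative; real platform pipelines are far more complex.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    ai_hate_score: float   # hypothetical classifier output in [0.0, 1.0]
    user_reports: int = 0  # number of times users flagged the post

AUTO_REMOVE = 0.95   # assumed: AI acts alone only on high-confidence cases
NEEDS_REVIEW = 0.60  # assumed: borderline scores go to human moderators
REPORT_LIMIT = 3     # assumed: enough user reports also triggers review

def triage(post: Post) -> str:
    """Route a post to 'remove', 'review', or 'keep'."""
    if post.ai_hate_score >= AUTO_REMOVE:
        return "remove"
    if post.ai_hate_score >= NEEDS_REVIEW or post.user_reports >= REPORT_LIMIT:
        return "review"  # ambiguous cases land in the human moderators' queue
    return "keep"

for post in [Post("p1", 0.97), Post("p2", 0.70), Post("p3", 0.10, user_reports=5)]:
    print(post.post_id, triage(post))
```

Even in this toy version, the design tension is visible: raising the review threshold reduces the load on moderators but lets more borderline content stand.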

Facebook CEO Mark Zuckerberg has called for global regulations to establish baseline standards for content, electoral integrity, privacy, and data. But problems have arisen where platforms’ artificial intelligence is poorly adapted to local languages and companies have invested little in staff fluent in them. Regulators are also stepping in: in a bid to preempt bloc-wide legislation, major tech companies agreed to a code of conduct with the European Union in which they pledged to review posts flagged by users and take down those that violate EU standards within twenty-four hours. Although policymakers are increasingly pushing social media platforms to take responsibility for regulating the content shared on them, there is considerable fear that this approach will lead to the repression of minorities, hinder freedom of speech, and embolden authoritarian regimes worldwide.
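As a rough illustration of that twenty-four-hour pledge, the sketch below checks whether a flagged post has gone unreviewed past the deadline. The function and variable names are assumptions made up for this example:

```python
# Illustrative only: checking the pledged 24-hour review window for
# user-flagged posts under the EU code of conduct.
from datetime import datetime, timedelta, timezone

REVIEW_WINDOW = timedelta(hours=24)  # the pledged turnaround time

def is_overdue(flagged_at: datetime, now: datetime) -> bool:
    """True if a flagged post is still unreviewed past the pledged window."""
    return now - flagged_at > REVIEW_WINDOW

flagged_at = datetime(2020, 9, 14, 8, 0, tzinfo=timezone.utc)
now = datetime(2020, 9, 15, 9, 0, tzinfo=timezone.utc)
print(is_overdue(flagged_at, now))  # True: 25 hours have elapsed
```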

Considering how complex this problem is, paying close attention to new legislative initiatives around the world is necessary to assess whether they strike a good balance between protecting freedom of speech and prohibiting hate speech. For this monitoring to take place, social media companies need to be transparent about the content they remove and make their data available to researchers and the wider public for scrutiny.

REFERENCES

· https://www.brandwatch.com/blog/history-of-facebook/

· https://www.facebook.com/communitystandards/hate_speech

· https://researchoutreach.org/articles/hate-speech-regulation-social-media-intractable-contemporary-challenge/

· https://www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons
