The Evolution of Content Moderation Rules Throughout The Years

The birth of the digital public sphere

CheckStep
Apr 2

In the two decades since the early internet bulletin boards and AOL chat rooms, online forums and social marketplaces have become a large part of the internet. Today, users have moved primarily to social platforms that host user-generated content. These platforms comprise the new online public squares where anyone can exchange and debate ideas and information. The platforms that offer free services to their users decide on the rules of engagement, and those rules have changed and evolved throughout the past decade and a half. Starting from little to no moderation, platforms have introduced algorithms and guidelines whose successful implementation, or lack thereof, has shaped public conversations around the world (e.g., ethnic violence in Myanmar, Brexit, the “Stop the Steal” campaign). This post summarizes the evolution of the content moderation rules and community guidelines of four popular international platforms: Facebook, Twitter, Instagram, and YouTube.

The Timeline

2000–2009: Launch of social platforms

2004: Facebook launches for university students at select US schools

2005: YouTube launches

2007: Facebook becomes available to the world

2009: Major platforms apply PhotoDNA technology to flag and remove child sexual abuse material (CSAM) images online

2009: YouTube and Facebook, now global social platforms, experience blocking in at least 13 countries around the world

2010–2019: Launch of standardized community guidelines

2010: Instagram launches

2010–2011: Social media platforms such as Facebook, Twitter, and YouTube play a major role in carrying the voices and reporting from regional protests in North Africa and the Middle East known as the Arab Spring

2010: Facebook releases its first set of Community Standards in English, French, and Spanish

2011: YouTube makes an exception to allow violent videos from the Middle East that are educational, documentary, or scientific in nature, in response to activists in Egypt and Libya exposing police torture

2012: Facebook acquires Instagram

2012: Twitter launches its first transparency report

2012: YouTube removes or blocks the “Innocence of Muslims” video in several Muslim-majority countries

2012: Twitter institutes its “country withheld content” policy (soon after used to block content in Russia and Pakistan)

2012: Documents from Facebook’s content moderation offices are leaked for the first time (Gawker)

2013: Facebook launches its first content moderation transparency report

2010–2019, continued: Misinformation, terror-linked content, and organized hate explode online

2014: ISIS, terror-linked content, and online radicalization become a major issue on social platforms in several countries

2014: The beheading video of American Journalist James Foley appears online amidst a big wave of terror-linked content

2014: YouTube reverses its policy on allowing certain violent videos

2014: Platforms apply a new rule against “Dangerous Organizations” linked to terrorism

2015–2016: Twitter changes its content moderation rules on harassment after a high-profile harassment campaign against the stars of the rebooted Ghostbusters

2016: Major platforms are targeted by amateur and state actor-linked information manipulation campaigns during the 2016 US presidential election

2016: Facebook launches a fact-checking program on its platforms and partners with IFCN fact-checking organizations

2016: Platforms fail to stop campaigns of misinformation that spur ethnic violence in Myanmar against the Rohingya minority

2016: Facebook Live starts to attract an increasing number of live-streamed suicides and shootings

2016: Video of the police shooting of Philando Castile in the US is broadcast live on Facebook

2017–2018: Launch of the Global Internet Forum to Counter Terrorism (GIFCT)

2018: TikTok launches in China

2018: Twitter removes 70 million bot accounts to curb the influence of political misinformation on its platform

2018: YouTube releases its first transparency (enforcement of community guidelines) report

2018: Facebook announces plans for an independent Oversight Board to rule on whether removed content should be restored

2018: Facebook allows its users to appeal its decisions to remove certain content

2019: The Christchurch terrorist attack (originally broadcast on Facebook Live) leads to the Christchurch Call to eliminate terrorist and violent extremist content online

2019: Twitter allows its users to appeal its content removal decisions

2019: TikTok expands internationally and attracts rapidly growing international audiences

2019: The novel coronavirus first emerges in China

2020-Present: New rules to moderate online content expand internationally to counter the increasingly global phenomena of hate speech, election misinformation, and health misinformation

2020-Q1: COVID-19, the disease caused by the novel coronavirus, spreads around the world; COVID-19 misinformation soon follows on major social platforms

Major social platforms intervene to ban health information that contradicts government and official sources

Major platforms start to label COVID-19 related misinformation at scale

2020-Q2/Q3: Facebook Oversight Board chooses its first cases

Twitter and Facebook start labeling the posts of US President Donald Trump

Facebook introduces a slew of new content policy shifts that address:

  • Holocaust denial content
  • US-based organizations that promote hate
  • Organized militia groups
  • Conspiracy theories

Major platforms surface vetted information on US election integrity after repeated claims of voter fraud

Twitter launches its Transparency Center

2020-Q4: Misinformation around election integrity and fraud, promoted by US President Donald Trump, intensifies in the US

Major platforms start fact-checking posts by the US president and other high-profile accounts

Facebook starts labeling content in India

Platforms introduce new rules to counter Covid-19 vaccine misinformation

2021-Q1: Jan 6, Facebook suspends the account of US President Donald Trump for violating its community guidelines and inciting violence

Jan 6, Twitter deletes some tweets of US President Donald Trump and locks his account

Jan 8, Facebook, Twitter, YouTube, Instagram ban the account of US President Donald Trump for the remainder of his term in office, block his posts from other accounts

Facebook’s Oversight Board makes its first rulings on the six cases it selected

Facebook Oversight Board announces it will rule on the ban of Donald J. Trump

Facebook amends its rules for moderating groups, adding new grounds for removal

The rules are complex

As private companies increasingly take on the role of the public forum, users and businesses operating on these platforms may find their rules increasingly complex. Between curbing the rise of hate speech and disinformation (which can sometimes amount to national security threats) and protecting their users’ fundamental right to expression, platforms find themselves making consequential decisions about how to operate forums for the free exchange of ideas within a profit-driven business model and under new regulatory frameworks that may impede their ability to expand internationally.

Trust, safety, and accountability are principles that major platforms have committed to by following the Santa Clara Principles. With the rise of disinformation and false content on the internet in general, and in the larger online information ecosystem of which social platforms are a part, trust is the currency with which social platforms and their online communities thrive. Trust is the evidence that users are engaged in a healthy debate online: they trust the information they read and the users they interact with. Accountability is the other side of the same coin. In a competitive, complex world of online spaces and competing ideals, social platforms must ensure they are accountable to their users by providing metrics and feedback whenever guideline enforcement mechanisms are applied to protect them from unfettered exposure to harmful online speech and content.

AI and the first line of defense

The scale of content on modern internet platforms, and the promise of social media, means that some of the moderation work has to be delegated to algorithms and machines. While humans must stay in the loop, AI content moderation systems leverage large datasets of classified speech to filter harmful online content. The sheer size of platforms like Facebook, Twitter, TikTok, and YouTube, with hundreds of millions of pieces of user-generated content posted every day, makes it imperative to implement AI systems for content moderation; still, big and small platforms alike will benefit from these solutions in the long run. AI content moderation systems do not eliminate the role of human moderators, because contextual knowledge and judgment remain important. Human moderators also contribute new training data for AI algorithms to learn from, rather than the models relying on copies of older AI models.
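As a rough illustration of the human-in-the-loop pattern described above, the sketch below routes content by model confidence: clear violations are hidden automatically, borderline items go to a human review queue, and every human decision is captured as a fresh training example. The thresholds, the keyword-based scorer, and all function names are hypothetical stand-ins, not any particular platform's system.

```python
from dataclasses import dataclass, field

# Hypothetical confidence thresholds; real values would be tuned per policy area.
AUTO_ACTION_THRESHOLD = 0.90   # hide without waiting for a human
HUMAN_REVIEW_THRESHOLD = 0.50  # queue for a moderator

# Stand-in for a trained classifier: a real system would call a text model here.
FLAGGED_TERMS = {"slur_example", "threat_example"}

def score_content(text: str) -> float:
    """Return a pseudo-probability that the text violates policy."""
    words = set(text.lower().split())
    return 0.95 if words & FLAGGED_TERMS else 0.1

@dataclass
class ModerationPipeline:
    review_queue: list = field(default_factory=list)
    new_training_examples: list = field(default_factory=list)

    def handle(self, post_id: str, text: str) -> str:
        score = score_content(text)
        if score >= AUTO_ACTION_THRESHOLD:
            return "hidden"                     # high-confidence violation
        if score >= HUMAN_REVIEW_THRESHOLD:
            self.review_queue.append((post_id, text))
            return "pending_review"             # borderline: humans stay in the loop
        return "published"

    def record_human_decision(self, post_id: str, text: str, violates: bool) -> None:
        # Reviewed decisions become labeled examples for the next model retrain.
        self.new_training_examples.append({"text": text, "label": violates})

pipeline = ModerationPipeline()
print(pipeline.handle("post-1", "have a nice day"))            # published
print(pipeline.handle("post-2", "this is a threat_example"))   # hidden
```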

A particularly robust AI content moderation system that adapts to several languages can help community managers and users in the pre-moderation phase. Such a system can not only detect and hide offensive content but also educate users on the community rules they have agreed to before a violation is committed, creating a healthier online environment conducive to constructive conversation and exchange of information.
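A minimal sketch of that pre-moderation flow, under the same assumptions (the classifier, rule text, and names below are illustrative, not a real API): the draft is checked before publication, and if it appears to break a rule, the author is shown the relevant guideline and asked to revise rather than having the post silently removed afterwards.

```python
from typing import Optional

# Hypothetical mapping from policy category to the community rule shown to the user.
COMMUNITY_RULES = {
    "harassment": "Rule 2: Do not insult or harass other members.",
    "hate_speech": "Rule 3: Hateful content targeting protected groups is not allowed.",
}

def classify(text: str) -> Optional[str]:
    """Stand-in classifier: returns the violated category, or None if the text looks fine."""
    if "idiot" in text.lower():
        return "harassment"
    return None

def pre_moderate(draft: str) -> dict:
    """Check a draft before publication and explain the rule it would violate, if any."""
    category = classify(draft)
    if category is None:
        return {"action": "publish"}
    return {
        "action": "warn",  # educate first, rather than remove after the fact
        "rule": COMMUNITY_RULES[category],
        "message": "Your draft appears to break a community rule. Edit it before posting?",
    }

print(pre_moderate("Thanks for the helpful answer!"))
print(pre_moderate("You are an idiot"))
```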
