Meta’s New Content Policies: How Removing Protections Encourages Hate Speech, Not Free Speech
by Jenny Liu
Facebook is a fundamentally different platform from the version Mark Zuckerberg created in the early 2000s. With the acquisitions of Instagram and WhatsApp and the addition of features like Marketplace and livestreaming, the company now known as Meta has shaped how individuals keep in touch with friends and family, find community, and consume news and information. But its sites are also used for sinister purposes: the anti-vaccination movement gained traction on Facebook and Instagram, and election denialists used various “Stop the Steal” Facebook groups to plan the January 6, 2021, attack on the U.S. Capitol. Racist memes and dehumanizing posts about Asian Americans have flowed freely across Meta-owned Facebook, Instagram, Threads, and WhatsApp.
To tackle these problems across its platforms, Meta needs to strengthen its misinformation and hate speech policies, not dismantle them. Over the past several years, Meta’s choices to sunset the misinformation tracking tool CrowdTangle and to reinstate the Facebook and Instagram accounts of known purveyors of hate speech and disinformation have drawn criticism from independent researchers, journalists, and tech policy experts alike. In an alarming continuation of this trend, on January 7, 2025, Meta announced it would end its independent fact-checking program, pare down its hateful content policy, and implement crowd-sourced Community Notes. These decisions will have devastating impacts on Asian Americans and all marginalized communities who use its platforms.
Misinformation across a wide range of topics will thrive on Meta’s platforms in the absence of any meaningful guardrails against the spread of falsehoods. In the past, Meta’s algorithms demoted content that had been fact-checked and deemed inaccurate; now, election- and health-related misinformation will circulate without recourse across its platforms. And while recent research has repeatedly shown gaps in Meta’s content moderation efforts, especially its moderation of non-English content, this policy change is nonetheless a significant reversal for a company that has outwardly maintained a dedication to fighting misinformation.
User-driven fact-checking will not protect against misinformation. Meta’s decision to shift the onus of content moderation onto its users through a Community Notes-based approach will not safeguard them from malicious disinformation. Community Notes reflect the opinions of a majority of contributors rather than verified facts, and they cannot be effective when those contributors may themselves be misinformed or pursuing political agendas.
This decision will have especially negative consequences for Asian Americans. For many limited-English-proficient individuals, in-language enclaves, in the form of WhatsApp group chats or community-based Facebook groups, serve as critical lifelines for accessing news and information. Unfortunately, these closed spaces also function as breeding grounds for misinformation. Meta has yet to release specifics about its proposed Community Notes program, but if it operates similarly to X’s, English-language content will likely receive the most user reviews and notes, leaving non-English speakers on their own to decipher whether the information they are seeing is false or misleading.
In addition to an increase in in-language falsehoods, these policy changes will also likely lead to a jump in falsehoods and conspiracy theories about or at the expense of Asian Americans, blurring the line between misinformation and hate speech. Meta’s updated Hateful Conduct policy shows particularly egregious edits to its existing regulations. Since 2020, we have seen the blaming and scapegoating of Chinese and Asian Americans for the COVID-19 pandemic online, and platforms have appropriately worked to curb such misinformation. But under Meta’s new policies, gone are protections against the hateful language that can lead to physical violence offline. Also absent are provisions prohibiting comparisons of women to “household objects or property” and characterizations of LGBTQ+ people as “mentally ill.” Rules banning insults about an individual’s appearance based on “race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease” have also been removed. These changes endanger immigrants, communities of color, members of the LGBTQ+ community, and other marginalized groups who are already disproportionately the targets of identity-based hate speech and harassment online.
Meta’s recent decisions send a clear message to minority communities seeking a place to connect: it has turned its back on any commitment to creating safer, more inclusive platforms. In the days following its initial announcement of the content moderation policy changes, Meta also told employees in an internal memo that it would terminate its diversity, equity, and inclusion (DEI) initiatives. Users should have the right to use social media platforms without being inundated with false or misleading information, becoming the targets of harassment, or seeing hateful or racist content. Online manipulation, conspiracy theories, and political influence campaigns all thrive on social media platforms that lack robust protections against misinformation. Regrettably, Meta’s recent policy choices will embolden disinformers, degrade public trust in online information, and make its platforms more dangerous places for its users.
Jenny Liu is the Senior Manager of Mis/Disinformation Policy at Asian Americans Advancing Justice | AAJC.