Internet companies must stop ignoring the racism and other forms of hate that are prevalent on their platforms and acknowledge that the hateful discourse of the few silences the speech of the marginalized many.
Internet platforms like Facebook, Google and Twitter use core algorithms to intentionally gather like-minded people and feed them self-validating content that elicits powerful reactions. Combine this with the platforms’ ability to finely target messaging and ads, and you’ve created a potent formula for the virulent spread of disinformation, propaganda and hate.
Indeed, White supremacist organizations are using a multitude of internet platforms to organize, fund and recruit for their movements to normalize and promote racism, sexism, xenophobia, religious bigotry, homophobia and transphobia, and to coordinate violence and other hateful activities.
These coordinated attacks not only spark violence in the offline world, they also chill the online speech of those of us who are members of targeted groups, frustrating democratic participation in the digital marketplace of ideas and threatening our safety and freedom in real life.
Emboldened by the Trump administration’s racist and anti-immigrant policy and rhetoric, extremist hate groups are on the rise in the United States. They’re joined by fascist and anti-government factions, rounding out a surge in far-right nationalist activity and violence.
In response, more than three dozen racial justice and civil rights organizations — including our group, Free Press — have spent more than a year evaluating the role of technology in fomenting hate. Today (October 25), we unveiled a comprehensive set of model corporate policies for stopping hateful activities online, with an emphasis on the preservation of free speech and net neutrality.
Our goal is for online platforms and financial transaction companies to adopt corporate policies that prevent the spread of hateful activities and follow procedures to ensure those policies are enforced in a transparent, equitable and culturally relevant way. That means employing a team that includes members of impacted communities, and providing clear and easy ways for people and groups to appeal removal of online content.
These model policies align with our commitment to the First Amendment and net neutrality. If applied correctly, these policies would ensure that members of marginalized communities are able to fully participate in and express ideas on digital platforms without fear of abusive consequences in real life. Right now, when marginalized communities speak out against racism and other forms of oppression, platforms often remove their content — compounding the violation of their rights to free speech.
People do not have the inherent right to amplify their racism, xenophobia and other forms of bigotry on online platforms. The First Amendment limits the government’s role in policing speech, but those limits don’t apply to private online platforms.
Since online platforms are speakers, like newspapers, they can curate content on their sites without violating the First Amendment. If a platform bans a user, that doesn’t stop that person from accessing the open internet to speak — the user is simply not permitted to broadcast hate on that particular platform.
Read more of Carmen Scurato and Jessica J. González on de-platforming hate speech online: http://bit.ly/2CH6Rtm.
© 2018 Colorlines. All rights reserved.