Should social media companies screen content before it’s posted?
Should social media companies screen content before it’s posted and weed out offensive or extremist content? Setting aside the First Amendment for a minute, here’s the question: Would you be willing to accept a delay in posting content if it meant increased vetting from tech companies?
Tech Companies Team Up
Facebook, YouTube, Twitter, and Microsoft recently announced they were forming a global working group to remove terrorist content. That comes on the heels of new legislation in the European Union, such as Germany's proposal of $6 million fines for failing to remove hate speech postings in a timely manner.
The tech companies say they will share best practices, content detection techniques, and transparency reporting for removals.
Tech companies, especially social media platforms, have been under attack for not doing enough. It's a slippery slope: who gets banned? In the past two years, Twitter says it has suspended nearly a million accounts tied to offensive speech and thousands more with ties to Russian propaganda.
“Debate is part of a healthy society. But when someone tries to silence others or attacks them based on who they are or what they believe, that hurts us all and is unacceptable,” posted Facebook CEO Mark Zuckerberg.
What does Facebook consider hate speech? You might be surprised.
Take this quiz to find out.
Whether or not you think it's a violation of free speech, these are private companies. They can set their own terms of use, and they have the right to discipline or remove users who violate those terms. The U.S. government is not allowed to restrict free speech. Private companies, however, can choose to do so.
Tech Companies “Not Doing Enough”
While I don’t have U.S. results, Demos and Opinium surveyed 2,000 British adults. Here’s a bit of what they found:
- 76% do not believe tech companies are devoting enough resources to removing extremist content from their platforms.
British citizens want action, and when extremist content is posted, they want it removed fast. The UK government has asked for a two-hour takedown window. Right now, removal is estimated to take an average of 36 hours, even though the four major social media platforms signed a voluntary code of conduct in 2016 pledging to take down offensive content within 24 hours.
With a reported two billion users worldwide, it's simply impossible for Facebook to manually monitor everything that's posted. Part of the solution has to be automated, which means it's an imperfect system.
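To see why automation is inherently imperfect, consider a toy sketch of the simplest possible approach: a keyword-based filter. This is not any platform's actual system, and the blocklist and sample posts below are invented purely for illustration; real moderation pipelines rely on machine-learning classifiers and human review, but they face the same basic trade-off.

```python
# Toy illustration of why automated content screening is imperfect.
# The keyword list and sample posts are made up for demonstration;
# they are not drawn from any real platform's moderation rules.

BLOCKED_KEYWORDS = {"attack", "bomb"}  # hypothetical blocklist

def flag_post(text: str) -> bool:
    """Flag a post if it contains any blocked keyword (case-insensitive)."""
    words = text.lower().split()
    return any(word.strip(".,!?") in BLOCKED_KEYWORDS for word in words)

posts = [
    "Plans for the attack are ready",        # flagged (intended catch)
    "That movie was the bomb!",              # flagged (false positive)
    "Plans for the a t t a c k are ready",   # missed (trivial evasion)
]

for post in posts:
    print(f"{'FLAGGED' if flag_post(post) else 'allowed'}: {post}")
```

Even this trivial example shows both failure modes at once: a harmless post gets caught while a lightly disguised one slips through, which is why platforms pair automated detection with human reviewers.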
Some respondents in the Demos/Opinium survey said they didn't think it was the job of tech companies to police content. Maybe they worry about what would get policed. We might all agree that terrorist groups like ISIS need to be censored, but who gets to decide which groups are extreme and worthy of being blocked? There really is no perfect system.
How Long Would You Wait?
The majority of those surveyed (79%) said delaying posts by 30 seconds or more would be acceptable if it meant more vetting of content. Thirty percent said a delay of three minutes or more would be acceptable.