When should social media platforms remove content? Let’s ask Congress.


By Allison Berke, Executive Director of the Stanford Cyber Initiative at the Freeman Spogli Institute for International Studies.

On July 17th, the House Judiciary Committee held a hearing titled “Examining the Content Filtering Practices of Social Media Giants,” featuring testimony from Monika Bickert, Vice President of Global Policy Management at Facebook; Juniper Downs, Global Policy Lead at YouTube; and Nick Pickles, Senior Strategist on Public Policy at Twitter. The hearing took place shortly after President Trump’s summit with Vladimir Putin in Helsinki, prompting many Congresspeople to use portions of their speaking time to condemn Trump’s statements or to question why the Committee was holding a hearing on social media rather than on Russian election interference, immigration policy, or other pressing matters.

Aside from occasional tangents on the obligations of a public company to its shareholders and a defeated motion to go into executive session to discuss Russian election interference, the bulk of the three-hour hearing was spent probing the boundaries of what content a social media platform should disallow, and what process should precede that content’s removal. All three platform representatives touted the measures they’ve taken to remove objectionable content, from automatic filtering (YouTube) to external advisory committees (Twitter) and a team of fact-checkers (Facebook). All three also offered an extended mea culpa for the times they incorrectly removed permissible content or failed to remove objectionable content quickly enough, including Facebook posts from a group called “Milkshakes Against the Republican Party” that Bickert was forced to read aloud.

The recurring issue driving the questions addressed to Facebook, Twitter, and YouTube is whether the companies are neutral platforms or potentially liable publishers. Their answers have tended to vary based on whichever is most expedient at the moment; Facebook has notably declared itself a platform in public while calling itself a publisher as the defendant in Six4Three v. Facebook. A platform takes less responsibility for what it hosts, while a publisher exercises more control: the bulletin board outside City Hall may not be responsible for the content of the fliers posted to it, while the city’s newsletter is responsible for what it prints, whether externally or internally authored. As Mark Zuckerberg noted in an interview earlier this week, this means that Holocaust deniers are not removed from Facebook, as neither being incorrect nor being objectionable is considered sufficient to invoke its “community standards.” Judging by the reaction to Zuckerberg’s statements, though, community opinion disagrees; Bickert was careful to point out that Facebook does not hold a monopoly on social media, but the 68 percent of Americans who use Facebook are a substantial audience for any message, and the ability to reach them is valuable.

Re-litigating the First Amendment specifically for Facebook posts is a thankless exercise for the committee to undertake, but allowing foreign-sponsored agents using aliases to conduct psychological warfare on a platform that reaches more than two-thirds of our country is similarly distasteful. When pressed for a definition of “fake news,” none of those testifying could provide a succinct one, with Facebook explicitly relying on consensus by committee to determine what should be demoted in the news feed. Posting false information on Facebook is not prohibited; it is the addition of source and intent that makes a lie into propaganda. Facebook has adopted rules for purchasers of political ads that are designed to prevent foreign interference in domestic elections, but the new rules don’t apply to posts that aren’t advertisements, even if they are political or incendiary.

None of the companies represented at the hearing seemed to want to be in the middle of international information operations; none was asserting a counter-insurgency doctrine for how Facebook page operators should or shouldn’t use the platform to win hearts and minds. The past two years have been a learning experience, and initial doubts over the scope of foreign political influence on social media have given way to working groups and plans to better utilize internal data sources to identify potential new campaigns. The potential for Congressional regulation at this point is unclear; Congress would likely do no better at defining “fake news” than the platforms themselves and is severely restricted by the First Amendment from directly addressing information operations, online or elsewhere.

Chairman Goodlatte’s introductory line of questioning insinuated that Facebook’s monopolistic position insulates it from the threat that users will leave over bad moderation calls. But lax moderation, or a pattern of post-exposure cleanup rather than prophylactic prevention, could still push users away: an app store that sometimes delivers your purchased ebook and sometimes delivers a virus, or a food court that every so often serves a disgusting dish instead of your order, is at great risk of disruption, which should worry even these social media giants.

Faculty views expressed here do not necessarily represent those of the Freeman Spogli Institute for International Studies or Stanford University, both of which are nonpartisan institutions.
