A First Amendment For Social Platforms

by Nabiha Syed and Ben Smith

BuzzFeed
June 3, 2016

The great 21st-century platforms — Facebook, Twitter, YouTube, Snapchat, and the rest — have this year found themselves in the middle of the speech wars. Twitter is struggling to contain vile trolling and harassment, and Facebook has been scalded for the toe it dipped into curating journalism.

They have run into trouble where the lines blur between their missions and the missions of the journalists, activists, and other citizens who use them. The platforms’ own missions are vast, and clear: They power social connection, free expression, and the distribution of news and entertainment on an unprecedented scale. But they largely don’t create speech themselves — most don’t have their own reporters or generate their own content.

And so their core mission can’t and won’t be realized by what they say, but rather in how they empower, constrain, and manage what other people say. The trust we place in them is ultimately about whether we trust them to manage our own collective expression.

For this trust to endure, these platforms must be transparent about their own policies and be consistent in their enforcement. Fortunately, experimenting platforms do not need to start from scratch. Lawyers and judges have spent centuries wrestling with similar questions surrounding free speech. Their answers can be deployed to defend the remarkable global public squares the platforms have created.

As it now stands, however, major platforms have improvised their own rules (Facebook’s community standards and Twitter’s rules, for example), policing everything from hate speech to graphic content. These rules are rooted in no clear precedent, tradition, or philosophy. Critically, people have no way of knowing how a platform’s broadly stated community norms work in individual instances.

Transparency requires that these “cases” on which platforms rule be made public in some form. Twitter, Facebook, and their peers should consider making public, if not their decisions to ban users, then at least their appeals processes and outcomes. Where the outcomes may reveal private information, the platforms should provide summaries of their decisions to ban a user or overturn a previous choice. The public should understand how platforms are applying their rules, and have faith that they are being applied consistently.

We recognize that workable transparency has to give platforms room to experiment with strategies to fend off determined trolls and other bad actors. There are types of harassment for which platforms like Twitter are right to maintain ambiguity. “Shadowbanning” users, for instance, can be a valuable tool for stopping sexual and gender-based harassment without the harasser knowing he has been muted. Similarly, a colleague of ours suggests allowing a user to set a rate limit on the replies she or he can receive, something that would not need to be public. “Sealing” these strategies from public view also has an analogy in the law, which permits limited secrecy in extraordinary circumstances.

First Amendment principles can help clarify other tricky areas in online speech. For example, the First Amendment recognizes that public figures receive fewer reputational protections than private ones. Rather than applying vague, blanket rules around abuse, platforms should rely upon this legal standard. No one should be banned for parodying Vladimir Putin. And who exactly is a public figure on social media? We agree that can be a hard question. But the law has guiding principles to draw from.

Even distasteful speech — whether from funeral picketers or in violent video games, both subjects the United States Supreme Court has addressed — can carry some value. Of course, the line between honest criticism and harassment can be fuzzy, but the difference is crucial. Platforms need not protect speech designed solely to drown out dissenting voices.

Where the application of First Amendment principles is less settled or a less perfect match for a social network — such as what constitutes a “true threat” of violence, or whether doxxing is ever permitted — the platforms should of course develop their own standards in the meantime. But they should develop them in the context of a new, explicitly stated commitment to the principles of free expression.

We are not suggesting — as many have — that these services be regulated in the U.S. as public utilities, making speech on them subject to certain guarantees. We are suggesting, however, that the platforms make a public commitment not to opaque and ad hoc rules, but to time-honored principles and process. These private companies want and need the public trust. We hope they will commit to earning it.

Nabiha Syed is the assistant general counsel of BuzzFeed. Ben Smith is the editor-in-chief of BuzzFeed.
