Platforms or Publishers? How to fix Section 230

I propose that the reach of individual content should be the main factor that distinguishes social media’s role as either a publisher or just a platform.

Richard Zack
Our.News
5 min read · Dec 10, 2020


Back in the early 20th century, soapboxes were the “platform” of choice for sidewalk preachers and political activists alike. These little wooden boxes offered speakers a way to “rise above” the noise of the crowd and demand an audience. And for passersby (usually the working poor), this soapbox oratory was a free source of entertainment.

The soapbox wasn’t just a place to be entertained. Very often, you’d encounter a radical demagogue on his box, preaching a message of “freedom” for the working classes and mobilizing action. Local police and policymakers saw this as a threat to law and order and regularly tried to pull the box out from under these speakers — often literally.

Attempts to silence the soapbox pundits were met with intense opposition on First Amendment grounds, and the “free speech fights” that followed would become some of the most significant battles in the longstanding effort to protect free speech in America.

The Soapbox Goes Digital

One hundred years later, nobody stands on soapboxes anymore.

Instead, we have Facebook, Twitter, and the rest of social media. Like the soapbox, each of these platforms offers every man, woman, and child the opportunity to stand up and be heard. Better than the soapbox, their potential reach extends not just to whoever happens to be walking by but to the 1.8 billion people who log on to Facebook every day.

But, with this newfound reach comes the same old potential for civil unrest.

Demagogues of every ideological persuasion have used these digital platforms to gin up support for their cause. Conspiracy theorists have used them to stoke fear and erode trust in institutions. Politicians and parties have used them to vilify the “other guy.”

We don’t have to reach far to discover high-profile cases of misinformation being propagated on social media: Pizzagate, Plandemic, QAnon, Biologically Belligerent 5G.

Thanks to the global reach of platforms like Facebook and Twitter, the old adage often attributed to Winston Churchill is truer than ever: “A lie gets halfway around the world before the truth can put on its boots.” If the conspiracy theorists had only a conventional soapbox to stand on, their lies would barely make it around the street corner.

Can We Hold Them Responsible?

One hundred years ago, if you tried to hold a manufacturer liable for the kind of nonsense that was spewed from atop one of his soapboxes, you’d probably be laughed out of court.

In our social media world, however, things aren’t nearly so simple.

Right now, a huge conversation is going on in the space where regulation, technology, and politics intersect. To what extent should we hold the social networks accountable for the information spread on them? How much do we expect them to actively clamp down on misinformation? And, at what point should we be concerned about censorship?

If you ask the social networks, they’ll tell you they’re simply a platform. Like the soapbox manufacturer of old, they simply provide the wood for others to stand on. Consequently, they can’t be held responsible for what use people make of the platform.

Social networks find their security in Section 230 of the Communications Decency Act and the protections it offers to platforms like theirs.

But, do the social networks really take such a hands-off approach to content?

Clearly not, as Facebook and Twitter have repeatedly been taken to task for inconsistent fact-checking and moderation. While some support the networks’ active role in content moderation, others believe these inconsistencies reveal an inherent bias.

For these right-leaning critics, Facebook and Twitter need to get honest. As soon as they start to exercise editorial control over the content on their network, they cease to be platforms and become publishers. As such, they should lose their Section 230 protection and be held every bit as liable for their content as the New York Times.

A Better Way Forward

As you can imagine, this discussion tends to be driven by ideological concerns over who’s censoring whom. A better way forward, I propose, is to give up the qualitative game in favor of a quantitative method for distinguishing platform from publisher.

Here’s my rough sketch of a proposal: for each piece of content published on a social platform, set a regulatory threshold that, once crossed, flips the network’s responsibility from that of a platform to that of a publisher. The most likely candidate metric is impressions, although arguments could be made for reactions and/or shares.

Beneath the threshold, content can roam free without fact-checking, labeling, or moderation — all under the protection of Section 230. Above the threshold, the network is responsible for fact-checking, labeling, and moderating the content appropriately.
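In code, the flip is simple to express. The sketch below is purely illustrative: the function name is invented, and the 10,000-impression cutoff is just one candidate value, a question the proposal leaves open.

```python
# Illustrative sketch of the threshold rule. The cutoff and names here
# are hypothetical, not part of any actual regulation.

IMPRESSION_THRESHOLD = 10_000  # candidate value; the right line is an open question

def responsibility_mode(impressions: int) -> str:
    """Return which regime a piece of content falls under."""
    if impressions < IMPRESSION_THRESHOLD:
        # Below the line: Section 230 applies; no moderation is required.
        return "platform"
    # At or above the line: the network must fact-check, label, and moderate.
    return "publisher"

print(responsibility_mode(500))     # platform
print(responsibility_mode(50_000))  # publisher
```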

Questions to be Answered

As I said, this is just a sketch. There are a few questions to be answered:

  • Where do we draw the line?
  • Should networks throttle content?
  • How can networks implement large-scale fact-checking?

The first question is a matter of research and discussion, although I propose drawing the line somewhere around 10,000 impressions.

The second is a matter of choice; networks can either limit the reach of their content or own their responsibility to ensure the veracity of posts that surpass the threshold. I suppose they could also choose to do nothing, but at that point, they would not be protected against liability.
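The choice could be modeled as a simple policy decision. Everything in this sketch (the names, the threshold value, the exact wording of the outcomes) is hypothetical, meant only to make the options concrete:

```python
# Hypothetical model of a network's options once a post nears the
# proposed threshold: throttle its reach, fact-check it, or do nothing
# and give up liability protection.

IMPRESSION_THRESHOLD = 10_000  # illustrative value only

def handle_post(impressions: int, policy: str) -> str:
    """Describe what a network owes a post under the proposed rule."""
    if impressions < IMPRESSION_THRESHOLD:
        return "none: Section 230 protection applies"
    if policy == "throttle":
        return "cap distribution to keep reach below the threshold"
    if policy == "fact-check":
        return "fact-check, label, and moderate as a publisher would"
    # The do-nothing option: content spreads, but protection is lost.
    return "no action taken: the network forfeits liability protection"

print(handle_post(2_000, "fact-check"))
print(handle_post(25_000, "throttle"))
```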

The third question is a matter of technology, and I’m proud to say that our team at Our.News has built the infrastructure to provide access to the fact-checking data needed to check and label wide-reaching content in real time.

Through our Newstrition platform (which includes a growing database of fact-checks sourced from IFCN verified signatories) and REST API, we can instantly grab all the data associated with a particular URL or claim and serve it up to the network, allowing them a seamless way to cross the threshold without opening themselves up to a world of legal hurt.
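To make the idea of a fact-check lookup concrete, here is a minimal sketch of what such a REST call might look like. The endpoint path, parameters, and response shape are illustrative inventions and do not describe the actual Newstrition API:

```python
# Hypothetical sketch of a fact-check lookup by URL. The /factchecks
# endpoint and its parameters are invented for illustration only.

import json
from urllib import parse, request

def build_factcheck_request(api_base: str, content_url: str) -> str:
    """Build the lookup URL for fact-check data on a given story."""
    return f"{api_base}/factchecks?" + parse.urlencode({"url": content_url})

def fetch_factchecks(api_base: str, content_url: str) -> dict:
    """Fetch and decode fact-check data (requires network access)."""
    with request.urlopen(build_factcheck_request(api_base, content_url)) as resp:
        return json.load(resp)

print(build_factcheck_request("https://api.example.com",
                              "https://example.com/story"))
```

A network crossing the impression threshold could make a call like this per post and use the returned ratings to label the content for its users.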

And, ideally, networks would display all this information to their users.

Conclusion

We’ve come a long way in the past 100 years. In many ways, though, we’ve stayed the same. In the wake of wide-reaching conspiracy theories and misinformation, most people recognize the awesome power of social media to spread ideas rapidly — even if we disagree over whether and to what extent something should be done about it.

My proposal is modest and elemental, but I hope to move the ball forward as we think together about how to increase the quality of information we spread across the web. What do you think? Let me know in the comments below.
