Banned in Boston and Everywhere Else

Big Policy Shift in Content Enforcement for Big Tech

Kyle Dent
Checkpoint


In the ongoing conversation about what can and cannot be said on the Internet, the big question remains “who gets to decide?” While some people have celebrated Facebook and Twitter’s recent steps to ramp up their moderation of fringe groups and conspiracy theories, others view those moves as censoring conservative voices on their platforms. Following the January 6 attack on the U.S. Capitol, Twitter purged more than 70,000 accounts that were “engaged in sharing harmful QAnon-associated content at scale,” according to the company’s blog.

The violent assault on the Capitol marked an inflection point. An out-of-control mob infiltrating Congress with the express purpose of interfering with the democratic process was a Rubicon most Americans were not willing to cross. Banning extremist users represents a distinct policy shift for social media companies, which had previously been reluctant to block any kind of political speech. Until then, the platforms had gone so far as to give special consideration to well-known politicians like former President Trump, whose posts were left online even when they were acknowledged to violate published policies. As it stands, he is now banned from both Twitter and Facebook.

Image: a mobile phone unable to access Parler, with the AWS website in the background. Amazon’s AWS removed Parler from its hosting infrastructure.

Banned users who migrated to less moderated platforms like Parler, a Twitter competitor, soon found their efforts stymied there as well, because it wasn’t just the social media companies that decided to take action in this moment. Several players in the big tech stack that we don’t normally associate with managing user-generated content were also moved to act. Stripe, for example, which processes payments, stopped handling transactions for the Trump campaign. Apple and Google removed Parler from their app distribution platforms, and Amazon kicked it off its hosting service.

Not so long ago, the closer a company was to providing infrastructure (think telephone networks), the more important it was to maintain strict neutrality. But in December 2017, the Trump administration’s FCC voted to repeal most of the net neutrality rules. Those rules were primarily aimed at broadband providers and prevented them from favoring some kinds of content over others, which they might have wanted to do for extra payment or because they found the content controversial. The irony here should not go unnoticed.

While Amazon apparently warned Parler about violent posts (and possibly illegal content, which is a different story), its decision might have run afoul of the previous regulations, or at least the principles behind them. It’s as if tech companies suddenly realized speech has consequences and decided they should do something about it. Their actions may have revealed something unsettling, though: just how much power they actually have. People on all sides, including the Secretary-General of the U.N., have expressed concern that major questions of free expression should not be left up to a handful of California internet companies motivated primarily by financial considerations.

Online falsehoods and hate speech have, in fact, led to real-world harms, and many misinformation experts in the U.S. are calling for a federal response to the ‘reality crisis’ of disinformation and domestic extremism. The solution is not a simple matter of countering false claims with factual information. People are drawn into conspiracy-theory echo chambers like QAnon by more than their fantastical and bizarre views. These groups give their adherents a sense of community, of belonging to something bigger. Once people are inside the concocted alternate realities, however, radicalization happens quickly, fixing outlandish ideas that can be hard to shake.

Governments Taking Action

Section 230 of the 1996 Communications Decency Act provides liability protection to companies for user-generated content on their platforms. When life online was just getting started, liability protection was foundational to creating the digital public sphere we now take for granted. In those early days, however, social media companies operated more or less as simple conduits for speech. Section 230 sought to shield them from liability for others’ speech, freeing them from the need to interfere with people’s free expression for fear of being sued. Things have changed a lot since then. Social media companies now decide what people see and don’t see. They promote some messages and de-prioritize others. In other words, they’ve come to resemble publishing companies far more than inert public spaces.

“We can’t ignore the fact that platforms ultimately make their decisions based on profitability and primarily for the benefit of their shareholders.”

Broad calls to simply remove Section 230 protections are not helpful to the discussion. It’s true that the law might need tweaking, but the solution is not as simple as Senator Hawley would have you believe. We have to collectively work out a framework that allows ongoing public discourse without letting it be overrun by harassment and abuse. Tech companies’ almost total control over what can be said and who can say it is a problem, but repealing Section 230 is not likely to get us to a better situation.

Outside the U.S., other governments are already moving to regulate social media companies by defining their responsibilities for the user-generated content on their sites. The proposed E.U. Digital Services Act sets out new rules intended to foster safer digital spaces, with courts playing a role in deciding what kind of speech is illegal. Similarly, Britain’s Online Harms Bill will require companies to take responsibility for harmful content, especially content related to child abuse and terrorist activities, and to remove it in order to limit its spread.

The bills are not without critics, who argue that the proposals are ineffective and possibly counterproductive. They believe the new laws could simply push bad actors onto smaller platforms, away from regulatory oversight. They also point out that the stiff penalties might lead gun-shy companies to take down perfectly legal content.

In the U.S., federal lawmakers tend to steer clear of these kinds of debates, both because of Americans’ expectations of strong speech rights and because of legislators’ long-standing tradition of deference to corporate interests. But their inaction puts private tech companies in charge. Platforms certainly have their own First Amendment right to leave speech up or take it down, and they are free to define their services according to their own vision. We wouldn’t expect a Disney discussion board to resemble Twitter, for example. But customers should demand that companies be transparent about their choices.

We can’t ignore the fact that platforms ultimately make their decisions based on profitability and primarily for the benefit of their shareholders. For most people, that’s a problem. Kate Ruane, senior legislative counsel for First Amendment issues at the ACLU, spoke about this recently on the ACLU’s At Liberty podcast, explaining the issues in practical terms:

“Content moderation policies for these big companies are difficult to understand. When they censor speech they often get it wrong and their rules are not transparent; and they’re not transparently applied. When they do get it wrong or when users think that they get it wrong, there’s no clear or accountable due process or any ability to have these issues corrected over time.”
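
To make Ruane’s critique concrete, here is a minimal sketch of what a transparent, accountable moderation decision could look like as a machine-readable record: the published rule that was applied, the plain-language reason, who made the call, and the appeal path are all visible to the affected user. Everything in it, from the class name to the field names, is a hypothetical illustration and is not drawn from any real platform’s systems.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Hypothetical record of a single moderation decision, used only to
# illustrate what "transparent and accountable" could mean in practice.

@dataclass
class ModerationDecision:
    content_id: str                  # identifier of the post that was actioned
    policy_rule: str                 # the published rule that was applied
    rule_url: str                    # link to the public text of that rule
    action: str                      # e.g. "remove", "label", "downrank"
    rationale: str                   # plain-language explanation for the user
    decided_by: str                  # "automated" or "human-review"
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    appeal_available: bool = True
    appeal_deadline: Optional[datetime] = None

def notify_user(decision: ModerationDecision) -> str:
    """Render the decision as the notice the affected user would see."""
    lines = [
        f"Your post {decision.content_id} was actioned: {decision.action}.",
        f"Rule applied: {decision.policy_rule} ({decision.rule_url})",
        f"Why: {decision.rationale}",
        f"Reviewed by: {decision.decided_by} on {decision.decided_at:%Y-%m-%d}",
    ]
    if decision.appeal_available:
        lines.append("You can appeal this decision from your account settings.")
    return "\n".join(lines)
```

A record like this would give users, auditors, and regulators something concrete to check a platform’s published policies against, which is exactly the due process Ruane describes as missing today.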

There’s no question that the problem of moderation is complex, and it will only become more so as this conversation gets subsumed into the bigger debate about regulating big tech generally. There could be an upside, though. Activists supporting social media regulation have floated the idea of requiring platforms to conform to new standards for data exchange and interoperability, as sketched below. If standardization were enforced, people could switch providers without losing the value of the network they’ve built up on a particular platform, which should generate more competition and open the door for new social media providers. Combine regulations that require clear content policies explaining the hows and whys of banned speech with the freedom to choose providers, and individuals could make informed decisions about which platforms to use based on each company’s rules.
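
To make the interoperability idea concrete, here is a minimal sketch of what a standardized, portable export of a user’s social graph might look like. The schema, field names, and the export_profile helper are assumptions invented for illustration; an actual standard would have to define these details, along with the privacy questions around exporting follower lists.

```python
import json
from dataclasses import dataclass, asdict, field
from typing import List

# Hypothetical, illustrative schema for a portable social-graph export.
# The structure is invented for this sketch; a real interoperability
# standard would define it formally.

@dataclass
class PortableProfile:
    handle: str                                           # user's identity on the source platform
    display_name: str
    following: List[str] = field(default_factory=list)    # accounts the user follows
    followers: List[str] = field(default_factory=list)    # accounts following the user
    blocked: List[str] = field(default_factory=list)      # moderation choices travel too

def export_profile(profile: PortableProfile, source_platform: str) -> str:
    """Serialize a profile to a platform-neutral JSON document that a
    competing provider could import, so switching services does not mean
    rebuilding one's network from scratch."""
    document = {
        "schema": "example.org/portable-social-graph/v0",  # hypothetical schema ID
        "source": source_platform,
        "profile": asdict(profile),
    }
    return json.dumps(document, indent=2)

if __name__ == "__main__":
    me = PortableProfile(
        handle="@kyle",
        display_name="Kyle",
        following=["@alice", "@bob"],
        followers=["@carol"],
    )
    print(export_profile(me, source_platform="bigsocial.example"))
```

The point is not this particular format but the effect: if every platform could read and write a document like it, the switching cost that locks users in today would largely disappear.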

If you would like more information or are exploring options for AI-enhanced moderation for your platform, contact us at contact@checkstep.com or visit our website at www.checkstep.com.
