Protecting the real world: Capitol riots & content moderation

Sentropy Technologies
Published in Sentropy
Jan 15, 2021 · 5 min read

After the events of 2020 and the Capitol riots, it has become impossible to deny that something is fundamentally flawed in how we interact with each other online. A dark presence of hate and vitriol runs through far too many of our online interactions. It spreads online and continually leaks into what many call the “real world.”

Let us be clear: the Internet is the real world. The joys that spring from catching up with an old friend are real-world joys, the pain of an angry word or a blunt rejection is real-world pain, and the vitriol that people spew is real-world rage. These are, of course, just emotions, but they register the same in one’s mind whether they arrive as text on a screen or are conveyed in person. Emotion often drives action. In the best version of the Internet, those actions are acts of stunning charity, of touching familiarity, of real, true connection with a stranger, be they a city or a continent away. The counterpoint is that not all emotions are positive and not all actions are worthy, and just as this connected network has provided a structure for productive interaction, so too has it provided a structure for darker deeds.

Content moderation in the face of an ever-shifting reality

Sitting somewhere between the platforms we use to communicate and the people using them lies one of the most complicated and misunderstood functions on the internet: content moderation. At times, moderators are volunteers, working tirelessly out of a desire to see their community, whatever its size, thrive. In the most visible cases, moderators are teams of professionals managing user bases counted in the billions, aided by technology and guided by policymakers. At any scale, in any community, they face similar questions:

  • What content is unacceptable?
  • How do we find this content?
  • What actions should we take to mitigate objectionable content?
  • How should we correct the offender?

These are questions with incredibly complicated answers. And as these teams develop answers, the ground shifts beneath their feet. Social norms change — what was acceptable once may not be acceptable now. Language changes — what was a slur can be repurposed as a statement of pride. Cultures change, people change, time passes, and any static rule set or dictionary of terms is left hopelessly behind.

Human moderators, grappling with ever-evolving content and policy, view the worst of us, make the call five times a minute, eight hours a day, and try to protect their own mental health in the process. The events of the last eighteen months have proven that the systems in place to manage content, to keep that dark presence at bay, have failed catastrophically in the places that need them to work the most. It is the complexity of this problem, and the dire need for tools to help solve it, that inspired our team to build Sentropy.

We spend every day thinking about this problem, building tools that help platforms and users identify abuse on the internet and protect themselves from it.

The Capitol riots and the need for proactive moderation

The Capitol events were planned, propagated, and executed online, on platforms both major and minor. From IRC to Discord, from Twitter to Gab, from Facebook to 4chan, the seeds that grew into this moment have been sown for years. We have seen the rise of a culture of escalating rage, of doom scrolling and hate clicking, of ideological bubbles, and of dehumanization. This was not done in some dark corner of the internet before slithering its way out; it was done in front of our eyes.

The refrain this week after the events at the Capitol, as many times before, has been: “How could we have missed this? How could this have happened?” Interest, and a bias toward action, tend to appear only after the harm has been done.

This points to a fundamental flaw in the way content moderation is often done: by the time action is taken to remove offending content, the harm has already been done. A hateful insult takes a second to wound; a complex disinformation campaign can take weeks to identify. Most content moderation is reactive, and often painfully so. Whether action is held up in the morass of corporate policy and inertia, by a lack of resources and training, or by simple unawareness of the problem, it finally comes long after any chance of protection has dissolved.

Content moderation is caught between multiple goals: protecting the interests of the platform as a business, protecting the social good, and protecting the people using the platform. While these interests are often described as being at odds, we at Sentropy believe — and the research demonstrates — that protecting users protects a platform’s viability in the long run.

Where we are and where we need to go

So here we are. The dust has settled slightly, and what is emerging is a consensus: the way moderation has been done is not working, and the cost of that failure can be catastrophic. These events must push platforms of all sizes to reassess Trust & Safety, content moderation, and the data and systems supporting them. Trust & Safety can no longer be thought of as protecting just a single platform, or even just the internet; it has to be thought of as protecting the real world, with all the gravity that entails. The internet is real life.

What we’ve seen these last eighteen months will not vanish. The instigators will not disappear because they were banned; while they may have been removed from mainstream platforms, they were not kicked off the internet. They will gather in new places online. Hateful rhetoric, planned or incidental, will continue to change and evolve in response to the best efforts to contain it. It is our charge and our duty to put all of our will, expertise, and resources toward making the Internet, and thus the world, safer: a place of connection, of equality, of acceptance, of knowledge.


We all deserve a better internet. Sentropy helps platforms of every size protect their users and their brands from abuse and malicious content.