Content moderation for your platform: Should you build or buy?

Sentropy Technologies · Feb 15, 2021

If you’re looking to detect and defend against abuse on your platform, consider the ins and outs of developing internally vs. purchasing.

In tech, it’s a rite of passage: the moment when you first have to decide whether to build a solution yourself or buy it from someone else. In the scrappy startup days of a business, it’s often an easy question to answer. Without the engineering resources of a larger company, buying is born of necessity.

But as you grow, the decision becomes a little knottier. When you have the ability to choose between buying and building, a whole host of factors come into play — cost, complexity, and timelines, to name a few.

The “build vs. buy” question is increasingly on the minds of platforms looking to moderate user content. That’s because content moderation is at an inflection point, transitioning from “nice to have” to “absolute must.” If you’re running a platform that allows anyone to post, you need to screen for harmful content. And the traditional method, unassisted human moderation, is expensive, doesn’t scale well, and takes a psychological toll on moderators.

Machine-assisted content moderation is now as much a standard business practice as emailing user newsletters or updating web pages. But unlike more established technologies like email and content management, it's one that many platforms are just beginning to wrap their heads around. To help untangle the questions content moderation raises, let's take a look at why buying may ultimately be the way to go.

Is building even an option?

Hiring a team of content moderators might be the most straightforward approach to the problem. But few companies have the resources, let alone the stomach, to do it. Even with an unlimited budget, the ever-escalating volume and intensity of abusive content make humans prone to burnout and distress.

So it’s easy to recommend offloading key aspects of content moderation to machines. And building the tools to do the job internally may be the right fit for certain companies. For very large platforms with giant purses and the cachet to draw in top-tier engineering talent — think Facebook or Twitter — it’s feasible. The challenge, as we’ve learned, is fascinating. Content moderation is ever-evolving and makes use of vast data sets that are the stuff of machine intelligence dreams. Here’s the rub, though: the things that make the problem so exciting are the same things that make it such a bad idea for most companies to take on.

The complexity issue

The biggest hurdle facing companies considering building is that it’s hard. It’s an ever-changing, costly, gargantuan beast of a problem. Unless you’re looking to dedicate a lot of resources to the problem, you can easily find yourself outmatched by the abusive content streaming into your platform.

Even if you can resource your content moderation efforts, you still have to ask yourself why. Strategically, each person you dedicate to building content moderation tools is a person not dedicated to building your product. And then you’d face a choice: split your focus between your core product and your moderation tool, or focus on your core product while the moderation tool languishes. Even if you’re able to do double-duty in the beginning, your core product will win in the end. And for the record, we’re assuming you wouldn’t consider the third option — focus on your moderation tool while your core product languishes.

Going deeper

Even if you’re able to dedicate resources, it’s no guarantee that your automated content moderation efforts will be effective. It’s still a new discipline that requires an intensely thoughtful approach, one that can’t be spun up out of nowhere. Much more than just familiarity, you need expertise in machine learning and natural language processing.

What does the difference between familiarity and expertise mean in practical terms? The former is detection modeled on past behavior; the latter is that plus the ability to anticipate previously unidentified threats. Given the speed with which abuse evolves (just look at the ferocity of anti-Asian abuse since the onset of the pandemic), proactive measures are the only way to get it under control.

Maintenance mode

Unless a project is an active revenue driver, no one wants to continue pouring significant resources into it indefinitely. Internally built support systems are often designed to ramp down to a point where they can go into “maintenance mode,” requiring minimal upkeep. This is where any content moderation system will get snagged. Abusive content doesn’t draw from a static list of terms. Language in the internet age morphs with every trending tweet and new viral meme, and abusive content moves every bit as quickly and continuously.

In other words, maintenance for content moderation systems looks an awful lot like active development. There are no half measures or minimum viable products here, and attempts to get by with them are bound to spring leaks.

Why Sentropy?

Given the time, expense, and focus content moderation requires, buying beats building more often than not. And Sentropy fits the bill. Just as your product is your focus and area of expertise, building a world-class platform for detecting and defending against abusive content is ours.

We’ve created models for understanding the evolution of language across the web, built to continuously learn and adapt. And because it’s our focus, we go deep. We train our models on both the open web — including the gray and dark webs — and customer platforms, without breaching privacy restrictions. Combined, these sources allow us to quickly update models and help you detect issues before they rise to crisis level. And we’re able to cover a broad array of abuse types — everything from identity attacks and sexual aggression to physical violence and self-harm — across multiple languages.
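To make that concrete, here’s a minimal sketch of what routing a user message through a hosted abuse-detection service and acting on the returned category scores might look like. This is an illustration only, not Sentropy’s actual API: the endpoint URL, field names, category labels, and thresholds below are all hypothetical.

```python
import requests  # assumes the `requests` library is available

# Hypothetical endpoint and credentials -- placeholders for illustration only.
MODERATION_URL = "https://api.example-moderation.com/v1/classify"
API_KEY = "YOUR_API_KEY"

# Example abuse categories like those described above; exact labels vary by provider.
BLOCK_CATEGORIES = {"identity_attack", "sexual_aggression", "physical_violence", "self_harm"}
BLOCK_THRESHOLD = 0.9   # auto-remove content scoring above this
REVIEW_THRESHOLD = 0.5  # queue for human review above this

def moderate(message_text: str) -> str:
    """Send a user message to the detection service and decide what to do with it."""
    response = requests.post(
        MODERATION_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": message_text},
        timeout=5,
    )
    response.raise_for_status()
    scores = response.json()  # e.g. {"identity_attack": 0.97, "self_harm": 0.02, ...}

    relevant = {c: s for c, s in scores.items() if c in BLOCK_CATEGORIES}
    if any(s >= BLOCK_THRESHOLD for s in relevant.values()):
        return "remove"        # clear-cut abuse: take it down automatically
    if any(s >= REVIEW_THRESHOLD for s in relevant.values()):
        return "human_review"  # borderline: let a moderator decide
    return "allow"
```

The two-threshold pattern shown here is one common design choice: clear-cut cases are handled automatically, while borderline scores go to human moderators, keeping people in the loop without exposing them to the full firehose of abuse.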

Even if you’ve got reasonably effective tools for combating abuse, Sentropy can easily be integrated to make them even more effective. In the end, getting a handle on abusive content is about serving your users. Platforms can’t afford to take a passive role. As threats increase in volume and intensity, ensuring you’re providing a safe haven for your users is a necessary competitive advantage. And more importantly, it’s the right thing to do. So ask yourself: don’t your users deserve the best protection you can give them?
