Rebuilding trust in the Internet

Justin Davis
Spectrum Labs
Sep 24, 2020

Back in 2018, we were working on a proof of concept for a well-known messaging platform. They had content moderation, but wanted to know how bad the bullying problem was on their service so they could start to address the issue more actively. As part of our development work, our models uncovered a growing issue of kids using coded language to discuss and promote self-harm. Until then it had gone unnoticed.

In that moment, we knew we were on the right path. Recognizing behaviors impacting the safety of a community — better than anyone else could — was our guiding mission. Improving the workflow with streamlined processes, automation, enhanced insights, and a better moderator experience would come next. But being the best at recognizing the problems was where everything would stem from. You can design a beautiful workflow, but it’s worthless if you can’t accurately detect behaviors.

Three years later, being single-minded about our mission has matched us with a wide range of customers in social networking, gaming, dating, and marketplace apps, and with investment that will help us accelerate our work to rebuild trust in the Internet.

Today we’re announcing our $10M Series A round, led by Greycroft with participation from Wing Venture Capital, Ridge Ventures, Global Founders Capital, and Super{set}. We’re also sharing real customer results from companies like Riot Games, Mercari, and The Meet Group, all of whom are using our platform to make their piece of the Internet a safer and more valuable place for their own customer communities.

Context is critical

Over the years, we’ve learnt a huge amount about how Trust & Safety teams can scale what they do to ensure their customers are protected, and we’ve built the technology they need to do it. The key to everything is context.

Consider the billions of messages sent on consumer apps, sites, and services every day. Most will be OK, but some will be hate speech, harassment, or some other kind of toxic activity, and others might fall somewhere in between. As the Internet continues its rapid expansion, online toxicity gets worse and the job of finding those incidents and quickly resolving them becomes impossible for legacy moderation technology or approaches.

On our journey to recognize issues faster and more accurately, we know that context is a significant clue to whether a message is truly toxic. In which online community is this taking place? What happened just before this incident? Does the language used make a difference? Does the user have a good or bad reputation on the platform?

Contextual AI models that recognize behavior in real-time sit at the center of the Spectrum Labs platform. Those models continue to learn, thanks to our industry-leading knowledge vault of information from international data partners who, with us, represent a community powerful enough to fight back the rising tide of online toxicity.

Safety by design

Over the years, but particularly right now, we’ve heard many variants of the same question. “Why is it taking the Internet so long to figure this out?” “Why can’t the social networks fix the issue once and for all?” “Why is this still happening?”

This week, one of the biggest social networks is calling on its competitors to work with them to try to solve things. It’s a bold move, and is likely to create a shift in how the industry thinks about chronic, shared problems.

From my perspective, we’re about to see more significant changes in how online brands respond to their issues. For the last decade, we’ve seen incredible growth of apps and experiences. Investment has been in services and applications that sit on top of the established delivery model — the Internet and app stores — and growth came along with it.

Now, we’re swinging quickly towards infrastructure, which urgently needs to get better. Privacy and safety can’t keep up with the sheer volume of messages or growth of online platforms. The only solution is a complete redesign of the delivery model, including safety, privacy and wellness built into the infrastructure of the next version of the web.

Back to value

The last few years have seen online communities get angrier, sadder, more divided, and more frustrated. This year has seen us rely on the Internet more than ever. The result has been a widening gap between expectations and reality: people find it harder to believe they’ll be safe online, while online brands need to protect their customers from a rising number of threats.

Without safety from toxic behavior, there is no trust. People quickly move on from services where they have an unsafe experience. Finding the path back to safety will make the Internet a more valuable place for all — both for the players and consumers who rely more on Internet services, and the sites, communities, and platforms that want to stay in business.

This is why today’s news is such an important milestone for us, our community, and our customers. The world outside our homes means our mission has never been more critical. Our growing community means the technology at the core of Spectrum Labs has never been smarter. The work we’re doing to protect people while online has never been more needed.

Back in 2017, nobody really knew how bad this problem would get. But we’re glad we chose this path, and honored to welcome more resources to help us on the journey.
