Trust and Safety

Moderating Livestreaming Video in a Privacy-Friendly Way

Three innovations to improve livestreaming video moderation

Geoff Cook
7 min read · Jul 27, 2021

When the global pandemic closed workplaces, bars, restaurants, and nearly everything else, many people found themselves socializing via dating apps and video apps more than ever before. With the dramatic surge in usage came an increase in abuse. It has never been more important to prevent bad actors from undermining the community health and safety of dating and livestreaming apps.

Today, I am pleased to announce three of our latest innovations in protecting livestreaming communities, all of which are possible only through our partnership with Spectrum Labs, a company equally committed to fighting online toxicity through content moderation.

On stage recently at Safety Tech 2021 alongside LEGO and IAB executives, I was asked: “What is the ROI of safety?” My response:

“What is the ROI of being able to sleep at night?”

Businesses, by their nature, think in terms of ROI and growth, but safety and community health demand a different framework. At The Meet Group, we view community health and safety as existential concerns. We must do everything technologically and financially feasible to maintain the safety of our community. Safety is table stakes.

We spend $15 million annually on Trust & Safety, and devote more than 500 paid full-time moderators to keeping our communities safe. We also leverage sophisticated AI. Our talented Creators produce nearly 200,000 hours of livestreaming content every day, and our community watches 1 billion minutes of livestreaming content every month. We are one of the larger livestreaming platforms in the world today.

Our platform extends not only to our owned-and-operated dating apps MeetMe, Tagged, Skout, LOVOO, and Growlr, but also to more than a half-dozen other large third-party apps, including several of the largest dating apps in the world. For each app, we run all of the livestreaming technology, moderation, talent, and economy services, and we moderate every app and community to the same high standards.

We treat this responsibility with the seriousness it demands. Video moderation is complicated: context matters, and a strategy must be developed for every piece of content, from display names and stream descriptions, to text comments, to the visual aspects of the video feed, to what hosts are saying on the voice track. Trolls and other bad actors seek to bully users or post graphic content unsuitable for our healthy communities. Today we announced what is, to our knowledge, a first-in-the-industry partnership to further improve livestreaming moderation in a privacy-friendly way.

Our partnership with Spectrum Labs began with a simple problem related to text. Some members found that they could defeat our black-list and grey-list of banned content by using foreign character sets, strategic misspellings, and similar tricks. We clearly needed an AI approach, and we did not want to reinvent the wheel. We turned to Spectrum Labs to algorithmically moderate names and stream descriptions across our communities, and we saw dramatic and instant improvement.
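To make the evasion problem concrete, here is a minimal Python sketch of the kind of Unicode normalization a rule-based filter needs just to catch the simplest tricks. It is illustrative only, not our production pipeline; the blacklist and confusables map are hypothetical, and real confusable tables (such as the Unicode UTS #39 confusables data) are far larger.

```python
import unicodedata

# Hypothetical excerpt of a banned-term list; real lists are far larger.
BLACKLIST = {"scam", "spam"}

# Tiny illustrative map of look-alike characters to ASCII.
CONFUSABLES = {
    "а": "a", "е": "e", "о": "o", "с": "c", "ѕ": "s",  # Cyrillic look-alikes
    "0": "o", "1": "i", "@": "a", "$": "s",            # leetspeak substitutions
}

def normalize(text: str) -> str:
    # Decompose accented characters, drop combining marks, lowercase,
    # then fold look-alikes to their ASCII targets.
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return "".join(CONFUSABLES.get(c, c) for c in text.lower())

def hits_blacklist(display_name: str) -> bool:
    normalized = normalize(display_name)
    return any(term in normalized for term in BLACKLIST)

# "ѕсаm" uses Cyrillic letters: a naive exact match misses it,
# but the normalized form is caught.
print(hits_blacklist("ѕсаm"))  # True
```

Even with normalization, this arms race never ends, which is why a learned model that generalizes beyond exact strings was the right tool.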

Naturally, with such success, we asked what more we could do together. No company, no matter how big, can meet modern safety demands in a silo. It takes an ecosystem, and Spectrum Labs is one of the most thoughtful companies in this emerging ecosystem.

1. AI Moderation of 15 Million Text Chats a Day

Today I am pleased to announce, first, that we are expanding our AI textual analysis to every live chat posted in our livestreaming video channels, nearly 15 million per day. The analysis will determine how visible each post is to members, reducing toxicity and promoting safety. We expect this analysis to go live by the end of August across all of the properties we own or power.
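As a rough illustration of score-based visibility, the sketch below maps a toxicity score to a visibility tier. The tier names, thresholds, and the score_toxicity callable are hypothetical stand-ins for a vendor scoring call, not Spectrum Labs’ actual API.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Visibility(Enum):
    PUBLIC = "public"    # shown to everyone in the channel
    LIMITED = "limited"  # shown only to the sender (and perhaps the host)
    HIDDEN = "hidden"    # suppressed and queued for human review

@dataclass
class ChatDecision:
    score: float
    visibility: Visibility

# Hypothetical thresholds; real values would be tuned per community.
LIMIT_AT = 0.6
HIDE_AT = 0.9

def decide_visibility(text: str, score_toxicity: Callable[[str], float]) -> ChatDecision:
    """Map a 0-1 toxicity score to a visibility tier for a live chat post."""
    score = score_toxicity(text)
    if score >= HIDE_AT:
        tier = Visibility.HIDDEN
    elif score >= LIMIT_AT:
        tier = Visibility.LIMITED
    else:
        tier = Visibility.PUBLIC
    return ChatDecision(score=score, visibility=tier)

# Demo with a dummy scorer standing in for the vendor model.
dummy_scorer = lambda t: 0.95 if "hate" in t.lower() else 0.1
print(decide_visibility("welcome to the stream!", dummy_scorer))
```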

2. AI Moderation of 50 Million Direct Messages

Second, I am pleased to announce that, having seen the tremendous improvement from an algorithmic approach, we are extending the textual analysis to every direct message sent in our dating products. Our members send over 50 million direct messages a day. Our goal is to work with Spectrum Labs to score and analyze these chats so that, over time, we can build tools that identify a potentially unwanted message and confirm that the sender really wants to send it, and that prompt the receiver of the message to classify it as unwanted. With those additional signals, we can identify trolls and abusive accounts and remove them from the platform proactively. We expect to begin rolling out this technology in the coming quarters across our owned-and-operated apps.
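The outline below sketches how those signals might combine. Every name, threshold, and callable in it (score_toxicity, confirm_with_sender, the per-account counters) is hypothetical; it describes the flow above, not our implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AbuseSignals:
    confirmed_sends: int = 0      # sent anyway after an "are you sure?" prompt
    flagged_by_receiver: int = 0  # receiver marked the message unwanted

accounts: dict[str, AbuseSignals] = {}

WARN_AT = 0.7    # hypothetical score that triggers the sender prompt
ESCALATE_AT = 5  # hypothetical signal count that triggers human review

def handle_direct_message(
    sender: str,
    text: str,
    score_toxicity: Callable[[str], float],
    confirm_with_sender: Callable[[str], bool],
) -> bool:
    """Return True if the message should be delivered."""
    signals = accounts.setdefault(sender, AbuseSignals())
    if score_toxicity(text) >= WARN_AT:
        # Ask the sender to confirm they really want to send this.
        if not confirm_with_sender(text):
            return False
        signals.confirmed_sends += 1
    return True

def receiver_marked_unwanted(sender: str) -> None:
    signals = accounts.setdefault(sender, AbuseSignals())
    signals.flagged_by_receiver += 1
    if signals.flagged_by_receiver + signals.confirmed_sends >= ESCALATE_AT:
        # Enough combined signals: escalate the account for proactive removal.
        print(f"escalating account {sender} to Trust & Safety")
```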

3. AI Moderation of the Livestreaming Video Voice Track

Our third, and potentially most interesting, announcement is that we have agreed to work closely with Spectrum Labs to moderate and analyze the voice track of livestreaming video in a privacy-friendly way. Our talented Creators value the ephemeral nature of livestreaming video, and we do not think it prudent to record and store video in a global, always-on way that would undermine this expectation of privacy and ephemerality. Moreover, the cost of analyzing and recording 200,000 hours of content every day is prohibitive.

Our voice track moderation will be triggered when a viewer taps the button to report abuse. Once a member taps report abuse, we will begin recording and transcribing the voice track even before the full abuse report is submitted. Spectrum Labs’ AI analysis will then run on both the audio and the transcript, producing a categorization and an abuse score that we will surface to our moderators, who will continue to watch all reported streams.
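A simplified sketch of this report-triggered flow, with hypothetical function names standing in for our audio pipeline and the Spectrum Labs analysis call, might look like this:

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class VoiceReport:
    stream_id: str
    transcript: str
    category: str
    abuse_score: float

def on_report_tapped(
    stream_id: str,
    capture_audio: Callable[[str], bytes],
    transcribe: Callable[[bytes], str],
    analyze: Callable[[bytes, str], Tuple[str, float]],
    notify_moderators: Callable[[VoiceReport], None],
) -> VoiceReport:
    # Privacy-friendly by construction: audio capture starts only at the
    # moment a viewer taps "report", never as an always-on recording.
    audio = capture_audio(stream_id)  # begins before the report form is submitted
    transcript = transcribe(audio)
    category, score = analyze(audio, transcript)
    report = VoiceReport(stream_id, transcript, category, score)
    # The score and transcript are extra context for the moderator,
    # who still reviews every reported stream.
    notify_moderators(report)
    return report
```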

Today, we maintain a 1-minute response time between when a stream is reported and when a paid moderator puts eyes and ears on the stream. As far as we know, this is the best in the industry. We intend to begin publishing a Transparency Report on our safety and moderation-related actions later this year, and we call on other large livestreaming platforms, like Twitch, to do the same. We believe transparency is critical to establishing industry standards and best practices.

As fast as a 1-minute response time may sound, it still leaves a 1-minute window between when a stream is reported and when a moderator reviews it. In that minute, our moderators are effectively blind to what is transpiring. That is why I am so excited to work with Spectrum Labs to fill that gap. Now our moderation team will have the additional context of a full transcription of that minute, along with an AI-assisted score of the content, to help them make prompt and consistent decisions.

But that’s not all. Moderation technologies work best when they work together. Because we will soon be algorithmically scoring the 15 million text comments that members make each day inside video, we expect to gain insight into the types of streams that receive categories of comments more likely to indicate the host stream is itself abusive. We will then auto-report such streams, combining powerful AI to identify problematic streams with human discretion to ensure the stream actually violates our Content and Conduct policy. These innovations will begin as an R&D collaboration with Spectrum Labs, and we expect them to roll out to our livestreaming video platform in the coming months.
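One way to picture the auto-reporting logic: tally comment categories per stream and escalate when suspect categories dominate. The categories, rate, and minimum sample size below are illustrative placeholders, not tuned production values.

```python
from collections import Counter

# Hypothetical comment categories whose prevalence in a stream's chat
# suggests the hosted content itself may violate policy.
SUSPECT_CATEGORIES = {"harassment", "sexual_content", "solicitation"}
AUTO_REPORT_RATE = 0.3  # illustrative fraction of recent comments
MIN_COMMENTS = 20       # don't judge a stream on a handful of chats

stream_categories: dict[str, Counter] = {}

def record_comment(stream_id: str, category: str) -> None:
    counts = stream_categories.setdefault(stream_id, Counter())
    counts[category] += 1
    total = sum(counts.values())
    suspect = sum(counts[c] for c in SUSPECT_CATEGORIES)
    if total >= MIN_COMMENTS and suspect / total >= AUTO_REPORT_RATE:
        auto_report(stream_id)

def auto_report(stream_id: str) -> None:
    # Auto-reporting routes the stream to a human moderator, who makes
    # the final call against the Content and Conduct policy.
    print(f"stream {stream_id} auto-reported for human review")
```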

I could not be prouder of our team’s strong commitment to safety. It has been a part of our DNA since my siblings and I co-founded the company 16 years ago. We monitor all of our live streams proactively, review over 80 million images per day, dedicate over half our workforce to safety, and partner with industry leaders to continue leading the way in the fight against online toxicity and abuse.

Though far less than 1% of our daily users violate our code of conduct, the work of keeping our communities safe is never over. We look forward to continuing to collaborate within the larger ecosystem to not only adopt best practices but to help define what best practice looks like. We owe that to our community of millions of people who return to our apps again and again to meet, date, and love.

