Behind the Scenes — How The Meet Group Moderates Live Streaming Content

Lauren Hallanan
May 16

Live streaming content regulation has been top of mind for many recently, and it should be. It is an important issue and one that we here at The Meet Group take quite seriously. In fact, more than 40% of our workforce is dedicated to content moderation and community management.

Yet human monitoring alone is not enough to ensure content meets our standards. In addition to our team of moderators, we use machine learning technologies that algorithmically review millions of pieces of content daily to filter out inappropriate and offensive content.

So how does this entire process work? And what happens when we discover offensive content? To learn more, we spoke with David Brown, VP of Operations at The Meet Group.

Lauren: It may seem obvious, but why is content moderation necessary?

David: First and foremost, we have to be able to prevent inappropriate content. Now with that said, some might be surprised to hear how little of that there actually is. Less than 0.5% of all broadcasts are ever closed down for moderation reasons.

We do not allow nudity, sexually explicit material, hard drug use, hatred or violence. We believe that a respectful community is ultimately best for everyone, and we believe it’s what the vast majority of our users want.

Lauren: Could you share the process for content moderation?

David: Of course. Our apps are monitored 24/7, 365 days a year. There are millions of minutes of live broadcast every single day on The Meet Group’s apps, so it’s obviously not possible for our employees to watch every minute of every broadcast. To help our moderators, we have an AI technology that periodically samples every broadcast by taking a screenshot, as frequently as every 10 seconds for many of our users.

Those screenshots are then evaluated by an algorithm the company created that has been trained to notice unusual content. It knows what normal live streaming content looks like. For example, the majority of broadcasts on our platform feature one person, fully clothed, in the middle of the frame, talking to the camera and interacting with their viewers. So if a broadcast has an image that is totally different from that, whether it’s a group of people, somebody broadcasting their dog, or some outdoor event our algorithms don’t recognize, then a human has to look at that sampling of the broadcast to see what’s happening.

The vast majority of the samples that are taken are approved by the algorithm; those that can’t be approved automatically are shown to a human. On the flip side, our systems are also trained to recognize many types of inappropriate content and will escalate that to moderators as well.
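To make that flow concrete, here is a minimal sketch in Python of how a sample-and-escalate loop like the one David describes might be structured. The function names, the broadcast/model/queue objects, and the confidence thresholds are illustrative assumptions, not The Meet Group’s actual implementation; only the roughly 10-second sampling interval comes from the interview.

```python
import time

SAMPLE_INTERVAL_SECONDS = 10   # "as frequently as every 10 seconds" per the interview
APPROVE_THRESHOLD = 0.90       # illustrative confidence cutoffs, not real values
FLAG_THRESHOLD = 0.10

def moderate_broadcast(broadcast, model, moderation_queue):
    """Periodically sample a live broadcast and route each screenshot:
    auto-approve ordinary content, escalate anything unusual or likely
    inappropriate to a human moderator (illustrative sketch)."""
    while broadcast.is_live():
        frame = broadcast.take_screenshot()          # hypothetical sampling API
        p_normal = model.probability_normal(frame)   # model trained on typical streams

        if p_normal >= APPROVE_THRESHOLD:
            pass  # looks like normal content: approved automatically
        elif p_normal <= FLAG_THRESHOLD:
            moderation_queue.put((broadcast, frame, "likely_inappropriate"))
        else:
            # Unusual but not clearly bad (a group, a dog, an outdoor event):
            # a human reviews the sample instead of the algorithm deciding.
            moderation_queue.put((broadcast, frame, "needs_human_review"))

        time.sleep(SAMPLE_INTERVAL_SECONDS)
```

The detail that matters is that the algorithm has only two outcomes: approve a sample or hand it to a person.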

As broadcasting continues to evolve, we’re continually retraining and updating our algorithms to understand exactly what they should be looking for.

Lauren: What happens if the moderators decide the content violates our standards?

David: We remove photographs and discontinue live streaming broadcasts that violate our content standards and, depending on the nature and severity of the offense, we delete the account of the violator.

If it is only a minor offense, then we will send the user a message letting them know why their stream was stopped and temporarily ban the user from streaming, typically for a couple of hours.
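As a rough illustration of that graduated response, the sketch below encodes the actions David lists: stop the stream, then either delete the account or send an explanation and apply a short streaming ban. The Violation class, the severe flag, and the user and broadcast methods are hypothetical; only the “couple of hours” figure comes from the interview.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class Violation:
    kind: str     # e.g. "nudity", "drug_use", "driving" (examples from the interview)
    severe: bool  # hypothetical flag; real decisions weigh nature and severity

def enforce(violation: Violation, user, broadcast):
    """Apply the graduated response described above (illustrative only)."""
    broadcast.stop()  # the offending stream is discontinued in every case
    if violation.severe:
        user.delete_account()  # severe offenses: the account is removed
    else:
        user.notify(f"Your stream was stopped: {violation.kind}")
        user.suspend_streaming(timedelta(hours=2))  # "typically for a couple of hours"
```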

Lauren: Do broadcasters and users have the ability to monitor content as well?

David: Yes. We empower our streamers to remove any viewers who are making negative comments, using product features we created for them, and the Bouncer feature lets their viewers help with that as well. Additionally, certain comments are flagged as inappropriate; if a user posts one, they are automatically kicked out of the stream and their comment is hidden.
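A minimal sketch of that automatic comment handling might look like the following. The flagged phrases, object names, and methods are invented for illustration, since the real phrase list and APIs are not public.

```python
# Illustrative placeholder list; the real flagged phrases are not public.
FLAGGED_PHRASES = {"example slur", "example harassment phrase"}

def handle_comment(comment_text, viewer, stream):
    """Hide a flagged comment and remove the commenter from the stream
    automatically; otherwise post the comment (illustrative sketch)."""
    lowered = comment_text.lower()
    if any(phrase in lowered for phrase in FLAGGED_PHRASES):
        stream.hide_comment(comment_text)  # hypothetical API: comment never appears
        stream.kick(viewer)                # hypothetical API: viewer is removed
    else:
        stream.post_comment(viewer, comment_text)
```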

We give all users robust privacy controls to enable them to not only block people they don’t want to interact with, but to also carefully manage who can view their profiles and content to begin with.

Lauren: Is there any type of content that users don’t expect to get penalized for but often are?

David: Yes, a big one is driving while live streaming. We receive complaints every day saying, “Hey, there was nothing wrong with my broadcast. I was completely clothed. I’m not sitting here abusing drugs or doing anything wrong. What’s going on with this?”

And 99 times out of 100 when that type of complaint comes in, we look at the moderation notes and see that the person was actively driving a car.

We’ll respond and explain the reason. Many still don’t understand why it’s against our rules, but we think it’s just too dangerous to broadcast, read the comments, and interact with viewers while also driving a car. We’re very firm on that.

Lauren: But what if someone is just sitting in their car and not driving?

David: That’s a great question, because there are a lot of people who regularly broadcast from the driver’s seat of a parked car, and that’s totally okay. People will take their lunch break from work and go sit in their car, where they can recline the seat and be comfortable, and do a broadcast from there. That’s totally fine.

We’re only worried about the situation where somebody is actively driving. So when the algorithm spots somebody it thinks may be driving, it won’t immediately end the broadcast. It will escalate the broadcast for human moderators to look at and judge the context. If the moderator sees that the person is just sitting there, not using the steering wheel, with nothing going by outside the window, and the car is parked, then it’s fine, no problem.
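Continuing the earlier sketch, the suspected-driving case is just another escalation path: the algorithm flags the broadcast instead of ending it, and a moderator applies the contextual checks David mentions. The attribute names below are hypothetical.

```python
def review_suspected_driving(sample) -> str:
    """Human-assisted judgment for a broadcast flagged as possible driving
    (illustrative): cut the stream only if the person is actually driving."""
    if sample.hands_on_wheel or sample.scenery_moving_outside:
        return "stop_broadcast"  # actively driving: against the rules
    return "allow"               # parked car, reclined seat: perfectly fine
```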

Lauren: Where do these content regulations come from? As a former live streamer in China, I know a lot of those rules are created at the government level, so I’m wondering if it’s the same here in the U.S.

David: Right, I’ve recently read that the government of China has been establishing a lot of regulations around live broadcasting, even stipulating that broadcasters can’t smoke cigarettes or show off tattoos.

To answer your question, a lot of the standards here are just company policies that are common-sense safety practices more than anything else. They’re also things the app stores want us to do and require of us. But even if the app stores didn’t demand it, we would do it anyway because it’s the right thing to do.

This interview has been condensed and edited. To hear the entire interview, check out episode 21 of the Stream Wars podcast.

The Meet Group

The Meet Group (NASDAQ: MEET) is a portfolio of mobile social entertainment apps designed to meet the universal need for human connection. We leverage a powerful live-streaming video platform, empowering our global community to forge meaningful connections.

Written by Lauren Hallanan, VP of Live Streaming at The Meet Group and China social media marketing expert.

