Scaling content moderation in a community that generates 1.5 million comments per month

Gino Cingolani
Published in Taringa!
Apr 19, 2016 · 7 min read

On Taringa!, more than 400,000 people share their opinions every month by publishing articles or discussing them in comment threads. As a communication tool and an online community, our vision is that people of any age should experience Taringa! as an empowering and friendly site, but we also know that, behind a screen, hate speech, violence, and illegal or otherwise inappropriate content are frequent and uninvited guests.

At the beginning of 2014 we developed an internal indicator to measure whether the content our users were consuming was “showable to your family”. We named it Brand & Family Safe, which means that you should be able to view that page at the office or with your family, and that a brand could place an ad next to it without compromising its image.

To analyze more than 500,000 content pieces and the 1,500,000 comments associated with them, we have three layers of moderation working at the same time. The goal of these three processes is to create a friendly environment for exchanging ideas, to monitor the quality of the content consumed on the site, and to improve the monetization options for that content.

Community-driven and manual moderation

Since the website’s creation in 2004, infringement detection has been community driven: users report any abusive content, and the infringing URL is taken down if the number of reports rises above a given threshold, or if a member of our 15-person moderation team reviews it manually. Users report this content because it is part of Taringa!’s reward system: they gain prestige and reputation by reporting and taking down infringing content.
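In essence, the rule combines a report threshold with a manual override. A minimal sketch of that logic in Python, with a made-up threshold value and function name rather than our actual code:

```python
# Hypothetical sketch of the community-driven takedown rule described above.
# The threshold value and function name are illustrative, not Taringa!'s code.

REPORT_THRESHOLD = 25  # example value only; the real threshold is internal

def should_take_down(report_count: int, moderator_confirmed: bool) -> bool:
    """A reported Post is taken down when enough users report it,
    or when a moderator reviews the report and confirms the infringement."""
    return moderator_confirmed or report_count >= REPORT_THRESHOLD
```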

This is what a month of manual moderation looks like:

Metrics panel for manual moderation performance and quality

More than 90% of the Posts reported by our community were effectively taken down by our team of moderators. This is a great health indicator: our community cares about our policies and reports infringing content. The red line on the graph represents the Posts that our moderation team removes from the platform, and the spikes on that line correspond to weekends. This is probably because the team is made up of volunteers who have their own jobs during the week and have more time to moderate the site on weekends.

The blue line represents Posts reported by the community and then taken down by moderators, while the green line represents Posts reported by the community but rejected by the moderation team, usually because the reports weren’t clear or accurate.

Automatic moderation (Text analysis)

At some point in 2009, when Taringa! started to grow exponentially, it became clear that the team of volunteer moderators wasn’t enough, and that the community didn’t report certain issues, such as copyright violations and inappropriate pictures, as much as we needed. This gap between our users’ perception of what’s acceptable and our own policies meant we couldn’t rely on them to report every kind of inappropriate content.

That’s why we implemented flagged terms. This worked well for copyright-protected content, especially by targeting download-site URLs, and it also helped detect adult content through a list of names of adult performers and movies. Later on we developed a list of hate speech and discriminatory terms popular in Latin America.
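To give a feel for how term flagging works, here is a minimal sketch; the term list and the matching function are placeholders, since the real lists and matching rules are internal and more sophisticated:

```python
import re

# Illustrative entries only; the real lists (download-site URLs, adult
# performers and movies, hate speech terms common in Latin America)
# are maintained internally.
FLAGGED_TERMS = {"example-download-site.com", "someflaggedterm"}

def flagged_terms_in(text: str) -> set:
    """Return the flagged terms found in a piece of text
    (case-insensitive, token-based matching for this sketch)."""
    tokens = set(re.findall(r"[\w.-]+", text.lower()))
    return FLAGGED_TERMS & tokens
```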

Semi-automatic moderation (Image analysis)

With content and interactions analyzed by real people and text analyzed by computers, everything looked great, but every month our users upload more than 1,380,000 images to our Posts section alone. When we chose an approach to moderate that insane amount of images, our first idea was to fully automate the process with a computer vision system that could detect adult content and violence. But apart from leaving out pictures depicting hate speech or symbology, in order to remove infringing content and warn or suspend users we needed to be 100% sure that content flagged as infringing actually was infringing. That wasn’t possible with any of the algorithms we tried for automated image detection.

This image was flagged as adult content by the algorithms we were using. Yes, it has skin colour and two rounded buns but…

To solve this problem we implemented a manual moderation process performed by a trained team of image moderators that works 24/7 and classifies images into four categories: clean, sensitive, banned or offline (images not available on the web at that moment).

Right now we’re reviewing more than 220,000 images daily and since November 2014 we have classified 30 million images.

Number of images manually moderated per day

When a Post is first visited, it enters a fairly complex pipeline that sends its images to a human moderator and then decides, based on the result of that moderation, whether or not the Post is Brand & Family Safe.

State diagram describing the semiautomatic process used to moderate images for a given Post
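As a rough approximation of the states involved, the snippet below encodes the four categories and the single human-driven transition; it is a simplified reading of the diagram with hypothetical names, not the actual pipeline code:

```python
from enum import Enum

class ImageStatus(Enum):
    """Moderation states an image moves through (names are illustrative)."""
    PENDING = "pending"      # discovered and enqueued, waiting for a moderator
    CLEAN = "clean"
    SENSITIVE = "sensitive"
    BANNED = "banned"
    OFFLINE = "offline"      # not reachable on the web at review time

# A human verdict moves an image from PENDING into one of the four categories.
FINAL_STATES = {ImageStatus.CLEAN, ImageStatus.SENSITIVE,
                ImageStatus.BANNED, ImageStatus.OFFLINE}

def apply_verdict(current: ImageStatus, verdict: ImageStatus) -> ImageStatus:
    """Apply a moderator's verdict; only pending images change state here."""
    if current is not ImageStatus.PENDING or verdict not in FINAL_STATES:
        raise ValueError(f"invalid transition {current.name} -> {verdict.name}")
    return verdict
```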

Given the sheer volume of Posts and images being uploaded, it soon became clear that manually moderating 100% of the content would require a huge team; at the same time, we saw that most visits went to a small selection of content. So we use a priority queue: each Post that is enqueued is weighted and sorted based on its visibility on the site and the size of the audience it is reaching. Each time a moderator requests images to moderate, the system serves images from the Post that holds priority number one at that moment.
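A minimal sketch of that idea in Python, assuming a made-up weighting formula (visibility times audience size) rather than our actual one:

```python
import heapq
import itertools

class ModerationQueue:
    """Sketch of the moderation priority queue. The weighting formula is
    hypothetical; the real one combines the Post's visibility on the site
    and the size of the audience it is currently reaching."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker so heapq never compares Posts

    def enqueue(self, post, visibility: float, audience: int):
        weight = visibility * audience
        # heapq is a min-heap, so the weight is negated to pop the
        # highest-priority Post first.
        heapq.heappush(self._heap, (-weight, next(self._counter), post))

    def next_post(self):
        """Return the Post whose images go to the next available moderator."""
        _, _, post = heapq.heappop(self._heap)
        return post
```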

Our manual image moderation tool in action. If the image goes past the orange line, it is classified as safe, unless the moderator chooses a different classification.

Then we faced another issue: moderators frequently found themselves given the same images over and over again for analysis (think memes). This happens because we identify an image by its URL, and the same image can be uploaded many times under many different URLs.

Our approach to this issue was to implement a system that automatically analyzes and detects similar images, so that when an image is detected as a duplicate and has already been moderated, it is not sent for moderation again. We named this subsystem imageid and we open sourced it. Imageid calculates a perceptual fingerprint for each image and stores each image URL with its hash in a database. When an image is found to have a fingerprint similar to an existing one, it is stored in the same similarity group and receives the same moderation classification as the original.
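The core idea can be approximated with an off-the-shelf perceptual hash. The sketch below uses the Pillow and imagehash libraries with a toy in-memory index; it illustrates the approach, not imageid’s actual code:

```python
from PIL import Image   # Pillow
import imagehash        # off-the-shelf perceptual hashing library

# Toy in-memory index: fingerprint -> (similarity group, moderation verdict).
# imageid itself persists URL/fingerprint mappings in a real database.
index = {}

def fingerprint(path: str) -> imagehash.ImageHash:
    """Perceptual hash: visually similar images produce nearby hashes."""
    return imagehash.phash(Image.open(path))

def lookup_or_register(url: str, path: str, max_distance: int = 5):
    """Reuse the existing verdict if a similar image was already moderated;
    otherwise register the new image so it is sent to a human only once."""
    h = fingerprint(path)
    for known_hash, (group, verdict) in index.items():
        if h - known_hash <= max_distance:   # Hamming distance between hashes
            return group, verdict            # duplicate or near-duplicate
    index[h] = (url, None)                   # the URL starts its own group;
    return url, None                         # None = not yet moderated
```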

Imageid has been online for about a year now and has indexed a grand total of 45 million images. Of those, 27% are exact duplicates of some other image, and 47% are similar to another one (for example, crops or colour variations).

Since the implementation of imageid in March 2015 we have seen steady growth in the number of images moderated, as a result of combining manual moderation with automated duplicate detection. That growth has allowed us to better monitor the quality of the content consumed on the site and to improve its monetization.

Brand & Family Safe

As a result of the automatic text analysis and the semi-automated image moderation processes, we decide whether a piece of content qualifies as Brand & Family Safe. If it does, it may be promoted in our Discovery recommendation system (more on that here), used as input for our daily recommendation email and social network posts, and served by ad networks that pay better but are stricter about their ad placement policies.
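Put together, the decision roughly combines both signals. A minimal sketch, using plain string statuses and a simplified rule (the real criteria are more nuanced):

```python
# Sketch of the final decision, combining the outputs of the text-analysis
# and image-moderation layers described above. Names and the exact rule
# are illustrative.

def is_brand_family_safe(flagged_terms_found: set, image_statuses: list) -> bool:
    """Brand & Family Safe = no flagged terms in the text, and every image
    already reviewed and classified as clean (simplified for illustration)."""
    if flagged_terms_found:
        return False
    if any(status == "pending" for status in image_statuses):
        return False          # image moderation hasn't finished yet
    return all(status == "clean" for status in image_statuses)

# Example: a Post with no flagged terms and two clean images qualifies.
assert is_brand_family_safe(set(), ["clean", "clean"]) is True
assert is_brand_family_safe({"someflaggedterm"}, ["clean"]) is False
```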

Since we started to focus on the Brand & Family Safe concept as a KPI in October 2014, we have gone from certifying ~9% of article reads as Brand & Family Safe to ~49% of the grand total.

Percentage of traffic on Brand & Family Safe Posts per day

Our vision

I hope this tour of our moderation processes managed to convey a glimpse of the effort involved in keeping our community healthy and the conversations friendly and on topic. As the Product Manager for this operation, I can assure you that this work would not have been possible without an interdisciplinary approach to the problem (engineering, UX design and a cultural understanding of how the platform is used).

At a time when many digital media companies have decided to close their comments and community sections, or to delegate the responsibility of moderating them to third-party social networks, at Taringa! we keep developing and investing in resources to improve our moderation processes and tools, because we believe the Internet should still be a place to build relationships and exchange ideas with people all around the world.

Thanks to Diego Essaya, back-end developer at Taringa!, who helped me write this article with technical specifications and ideas.
