NEW YORK TIMES AND JIGSAW PARTNER TO SCALE MODERATION PLATFORM

Today, we are excited to announce that Jigsaw is investing in new research collaborations with The New York Times and the Wikimedia Foundation to explore how communities and publishers can use open source resources like TensorFlow to improve their online discussions. Together, we will be creating new open source datasets, machine learning models, and community tools to help improve conversations at scale. This research initiative, called “Conversation AI,” was created jointly by Jigsaw and Google’s counter-abuse technology team.

Currently, The New York Times manually reviews every comment that is submitted, which means that someone literally reads each submission in real time to make sure a wide range of opinions is represented and that the discussion remains civil. As a result, The Times has incredibly high-quality comments, but even with a team of full-time moderators working around the clock, this labor-intensive process means comments can be enabled on only about 10% of its articles each day.

Our engineers, researchers, designers, and product managers have been working with The New York Times to explore new ways for community moderators to review comments faster by using machine learning models to group similar comments together. We will be testing and developing these tools with The Times in the coming months, and we aim to open source the results of our collaboration by the end of the year for other publishers to use.
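To make the idea concrete, here is a minimal, hypothetical sketch of what "grouping similar comments" can look like. This is not the Conversation AI model or The Times's actual tooling; it simply illustrates the general technique with a simple word-overlap (Jaccard) similarity and a greedy grouping pass, so that a moderator could review each cluster of near-duplicate comments at once.

```python
# Hypothetical illustration only -- not the actual Conversation AI system.
# Groups comments whose word overlap (Jaccard similarity) exceeds a threshold.

def tokens(text):
    """Lowercase word set for a comment."""
    return set(text.lower().split())

def jaccard(a, b):
    """Similarity of two token sets: |A & B| / |A | B|."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def group_comments(comments, threshold=0.3):
    """Greedy single-pass grouping: each comment joins the first
    existing group whose representative comment is similar enough,
    otherwise it starts a new group. Returns lists of indices."""
    groups = []  # each group is a list of comment indices
    reps = []    # token set of each group's first (representative) comment
    for i, comment in enumerate(comments):
        t = tokens(comment)
        for group, rep in zip(groups, reps):
            if jaccard(t, rep) >= threshold:
                group.append(i)
                break
        else:
            groups.append([i])
            reps.append(t)
    return groups

comments = [
    "Great article, thanks for sharing",
    "great article thanks",
    "I disagree with the main point",
    "The main point is wrong, I disagree",
]
print(group_comments(comments))  # -> [[0, 1], [2, 3]]
```

In practice, production systems would use learned text representations rather than raw word overlap, but the moderation benefit is the same: similar submissions surface together, so one decision can cover many comments.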

For Wikipedia, we’ve partnered with the Wikimedia Foundation to support their researchers’ efforts to better understand toxicity on talk pages, and to investigate its impact on the Wikipedia contributor community. You can read more about this work and the results so far on the open Wikimedia research page.

This research is just the first step in our ongoing effort to apply machine learning to the global challenge of toxicity online. We’re looking forward to sharing more news of our progress in the coming months.