Since we launched Perspective, our technology that uses machine learning to spot abusive language, we’ve experimented with new ways to put it to use. Perspective’s most common applications fit into two categories: helping community managers find and respond to toxic comments, or helping authors improve their contributions when their posts might violate community guidelines. Both of these use cases are important, but neither directly empowers the largest part of the online community — the readers.
Most of us spend more time reading online comments than writing or moderating them. As we read, a single toxic post can make us give up on a discussion completely and miss out on valuable thoughts buried underneath the shouting. Toxicity also has a chilling effect on conversations, making people less likely to join discussions online if they fear their contribution will be drowned out by louder, meaner voices. The Pew Research Center found that 27% of Americans have chosen not to post something online after witnessing harassment.
What if, instead of having to rely on moderators to make comment sections better, people had the ability to control for themselves what kind of comments they want to see?
To test the idea of viewership control, today we are releasing an experimental Chrome extension called Tune that lets users customize how much toxicity they want to see in comments across the internet. Tune builds on the same machine learning models that power Perspective to let people set the “volume” of conversations on a number of popular platforms, including YouTube, Facebook, Twitter, Reddit, and Disqus. We hope Tune inspires developers to find new ways to put more control into the hands of readers to adjust the level of toxicity they see across the internet.
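To make the Perspective connection concrete, here is a minimal sketch of the request shape an extension like Tune might send to the Perspective API’s AnalyzeComment endpoint. The field names follow Perspective’s public documentation; the function name is illustrative, and actually scoring a comment requires an API key and a network call, which are omitted here.

```typescript
// Sketch (not Tune's actual code): build the body of a Perspective
// AnalyzeComment request that asks for a TOXICITY score for one comment.
// The API returns a probability-like score in [0, 1] at
// attributeScores.TOXICITY.summaryScore.value.
function buildAnalyzeRequest(text: string) {
  return {
    comment: { text },          // the comment to score
    languages: ["en"],          // language hint for the model
    requestedAttributes: { TOXICITY: {} }, // which attribute(s) to score
  };
}
```

A client would POST this body to `commentanalyzer.googleapis.com/v1alpha1/comments:analyze` and read the toxicity score out of the response.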
Tune lets you turn the volume of toxic comments all the way down for “zen mode,” which skips comments completely, or all the way up to see everything, even the mean stuff. Or you can set the volume somewhere in between to customize the level of toxicity (attacks, insults, profanity, and so on) you’re willing to see in comments.
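The volume dial described above can be thought of as a toxicity threshold. Below is a hedged sketch of that idea, assuming comments have already been given Perspective-style toxicity scores in [0, 1]; the types and function names are hypothetical, not Tune’s actual implementation.

```typescript
// Illustrative sketch: a "volume" setting in [0, 1] acts as a toxicity
// threshold over comments scored by a Perspective-style model.
interface ScoredComment {
  text: string;
  toxicity: number; // model score in [0, 1]; higher means more toxic
}

// Returns only the comments the reader has chosen to see:
// volume 0 is "zen mode" (hide everything), volume 1 shows everything,
// and anything in between hides comments scored above the threshold.
function visibleComments(
  comments: ScoredComment[],
  volume: number
): ScoredComment[] {
  if (volume <= 0) return [];               // zen mode: skip all comments
  if (volume >= 1) return comments.slice(); // show everything, even the mean stuff
  return comments.filter((c) => c.toxicity <= volume);
}
```

With a mid-range volume such as 0.5, a comment scored 0.05 would be shown while one scored 0.92 would be hidden.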
The machine learning powering Tune is experimental. It still misses some toxic comments and incorrectly hides some non-toxic comments. We’re constantly working to improve the underlying technology, and users can easily give feedback right in the tool to help us improve our algorithms. Tune is completely open source, so you can visit Tune’s GitHub page to learn more, explore the code, or contribute directly.
Tune isn’t meant to be a solution for direct targets of harassment (for whom seeing direct threats can be vital for their safety), nor is Tune a solution for all toxicity. Rather, it’s an experiment to show people how machine learning technology can create new ways to empower people as they read discussions online. We hope that Tune can inspire platforms and developers to explore viewership controls for readers and enable communities to join discussions without relying solely on comment moderation.
—CJ Adams is a product manager at Jigsaw.