Announcing Experimental Bridging Attributes in Perspective API

Jigsaw · Apr 15, 2024


In 2017, Jigsaw launched Perspective API to improve conversations online by empowering moderators to identify and address toxicity in comments. Perspective API is now used by over 1,000 partners, including publications like the New York Times and Wall Street Journal and platforms like Reddit, serving 18 languages and handling almost two billion requests a day. Since then, we have continued to explore new applications, such as reducing toxicity in generative AI models.

However, reducing toxicity is only one way to redress deepening divisions in online conversation. Last fall, researchers Aviv Ovadya and Luke Thorburn wrote about bridging systems, "systems which increase mutual understanding and trust across divides, creating space for productive conflict." Their argument drew on a rich body of literature by academics like Jonathan Stray, Lisa Schirch, and Natalie Stroud showing that these systems can promote civility and help people engage with and learn from differing viewpoints. The findings are compelling, and they raise a question: how might such systems bring people together in online conversation?

Today, we are excited to introduce experimental bridging attributes in Perspective API. These attributes detect reasoning, personal stories, curiosity, and other qualities of comments that past research has found to correlate with more constructive conversation.
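Requesting these attributes follows the same AnalyzeComment request shape as Perspective's existing attributes. The sketch below builds such a request body; the exact `_EXPERIMENTAL` attribute names are assumptions based on Perspective's naming conventions, so confirm them against the developer documentation before relying on them.

```python
import json

# Perspective API's AnalyzeComment endpoint (append ?key=YOUR_API_KEY).
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(comment_text: str) -> dict:
    """Build a JSON body requesting several bridging attributes.

    The attribute names below are illustrative assumptions; check the
    Perspective developer docs for the exact experimental names.
    """
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {
            "REASONING_EXPERIMENTAL": {},
            "PERSONAL_STORY_EXPERIMENTAL": {},
            "CURIOSITY_EXPERIMENTAL": {},
        },
        "languages": ["en"],
    }

body = build_request("I used to think the same, until I volunteered at a shelter.")
print(json.dumps(body, indent=2))
```

A response would contain a per-attribute `summaryScore` between 0 and 1, which downstream code can combine or threshold as needed.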

In our own research, we found that, when we sorted comments using our attributes instead of chronologically, readers on average thought conversations were not only less hostile but also more informative, respectful, trustworthy, and interesting. We also found that readers were sensitive to any one attribute being used on its own, as seeing only comments with reasoning, for example, could be overwhelming. As a result, we envision that platform developers and moderators can deploy our attributes in combinations that reflect the needs and values of their communities. These attributes may be helpful for sorting comments, identifying comments to pin at the top of a conversation, or flagging notable comments for response. By releasing these experimental attributes, we hope to learn more about their potential uses and limitations for developers, researchers, and communities.
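One simple way to deploy the attributes "in combinations," as described above, is to rank comments by a weighted blend of attribute scores so that no single attribute dominates the ordering. This is an illustrative sketch with hypothetical scores and weights, not Jigsaw's own ranking method; a community would tune the weights to its own values.

```python
# Hypothetical per-attribute probabilities in [0, 1], as an API
# summaryScore would provide. Weights are an illustrative choice.
WEIGHTS = {"reasoning": 0.4, "personal_story": 0.3, "curiosity": 0.3}

def bridging_score(scores: dict, weights: dict) -> float:
    """Weighted blend of attribute scores; missing attributes count as 0."""
    return sum(weights[a] * scores.get(a, 0.0) for a in weights)

comments = [
    {"text": "Here's why I disagree...",
     "scores": {"reasoning": 0.9, "personal_story": 0.1, "curiosity": 0.2}},
    {"text": "What changed your mind?",
     "scores": {"reasoning": 0.2, "personal_story": 0.1, "curiosity": 0.9}},
    {"text": "When I lived there, I saw...",
     "scores": {"reasoning": 0.3, "personal_story": 0.9, "curiosity": 0.1}},
]

# Sort highest-scoring comments first, e.g. to pin them at the top.
ranked = sorted(comments,
                key=lambda c: bridging_score(c["scores"], WEIGHTS),
                reverse=True)
```

The same blended score could instead feed a threshold for flagging notable comments rather than a full re-sort.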

These attributes extend our previous work on recognizing constructive content with large language models (LLMs). LLMs make it possible to develop attributes that detect more complex concepts from fewer examples than was previously feasible. To improve the quality of those examples, we worked with researchers and social scientists at SIFT and the University of Florida, who have previously developed models based on psychosocial theory, to train human annotators in new, more in-depth ways of identifying and annotating bridging attributes in online comments. This intensive collaboration allowed us to ensure consistency in the data, address in-depth feedback from the annotators, and support their wellbeing. We have published the resulting ~12,000 comments on GitHub, which comprise a significant proportion of our training data, and will soon release a preprint on the process and our learnings.

Finally, we evaluated the attributes by running them over a test set of annotated comments and comparing the machine ratings to the human ratings. To mitigate as many incorrect associations as possible, we checked that our bridging classifiers performed well no matter who was referenced in the text. We saw no initial differences, and we are now exploring how to evaluate other potential sources of incorrect associations, such as differences in comment length and prose style.
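A common way to probe whether scores change depending on who is referenced is to score the same comment template with different identity terms swapped in and compare the per-group means. This sketch is our illustration of that idea, not Jigsaw's exact evaluation protocol, and the scores below are made up.

```python
from statistics import mean

# Illustrative fairness probe: fill one template with different
# identity terms, score each variant, and compare per-group means.
TEMPLATE = "As a {group} person, I think we should listen to each other."

def subgroup_gap(scores_by_group: dict) -> float:
    """Largest difference between per-group mean scores (0.0 = no gap)."""
    means = [mean(v) for v in scores_by_group.values()]
    return max(means) - min(means)

# Hypothetical classifier scores for the filled-in template variants.
scores_by_group = {
    "group_a": [0.71, 0.69, 0.70],
    "group_b": [0.70, 0.72, 0.68],
}
gap = subgroup_gap(scores_by_group)
```

A near-zero gap is consistent with the classifier treating references to either group alike; a large gap would flag an incorrect association worth investigating.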

As we continue our work and conduct further research, we encourage users to test these experimental attributes and share their feedback. We are eager to learn more about how these attributes can be used and improved, especially given that best practices for evaluating the performance and fairness of bridging systems are still emerging. For users with existing access to Perspective API, the experimental attributes are immediately available. More details about requesting access to the API and getting in touch with the team can be found on our developer website. As we find and address new issues and opportunities related to these attributes, our goal remains the same: to help make it possible for more online communities to build bridging systems that best fit their needs. Your feedback helps us do so!

Contributors: Zaria Jalan, Tech Lead; Alyssa Chvasta, Engineer; Tin Acosta, Sr. Product Manager; Emily Saltz, Sr. UX Manager; Daniel Borkan, Engineering Manager; Jeffrey Sorensen, Engineer; Roelle Thorpe, Engineer; Lucas Dos Santos, Engineer; Thea Mann, Sr. Interaction Designer

Jigsaw is a unit within Google that explores threats to open societies, and builds technology that inspires scalable solutions.