What do Perspective’s scores mean?

What do Perspective’s toxicity numbers mean? Perspective’s scores indicate probability, not severity. Higher numbers represent a higher likelihood that the patterns in the text resemble patterns in comments that people have tagged as toxic. The number is not a score of “how toxic” a particular entry is. These scores are intended to let developers pick a threshold (e.g. most users looking to highlight comments for review choose a point around 0.9 or above) and ignore scores below that point. Scores in the middle range indicate that the model is uncertain whether the text resembles toxic comments; a score around 0.5 is no better than a coin flip.
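The threshold idea can be sketched in a few lines of Python. The function name and the scores below are made up for illustration; only the 0.9 cutoff comes from the text above.

```python
# Illustrative sketch: triaging comments with a fixed score threshold.
# Scores are probabilities that a comment resembles ones labeled toxic,
# not severity ratings, so a single cutoff works well for review queues.

REVIEW_THRESHOLD = 0.9  # a common choice for surfacing comments to review

def comments_to_review(scored_comments, threshold=REVIEW_THRESHOLD):
    """Return comments whose score meets the threshold.

    `scored_comments` is a list of (text, score) pairs; the scores here
    are invented for the example, not real Perspective output.
    """
    return [text for text, score in scored_comments if score >= threshold]

scored = [
    ("Thanks for the helpful answer!", 0.03),
    ("You smell bad and are stupid", 0.92),    # mildly toxic, high score
    ("I am going to kill you", 0.97),          # threat, also high score
    ("I'm not sure I agree with this", 0.48),  # mid-range: model uncertain
]
flagged = comments_to_review(scored)
```

Everything below the cutoff, including the uncertain mid-range scores, is simply ignored rather than treated as “somewhat toxic.”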

Scores above 0.9 include both mildly toxic examples (“You smell bad and are stupid”) and threats (“I am going to kill you”). This is because in both cases the model is fairly sure that if you asked 10 people whether these comments are rude, disrespectful, or unreasonable enough to make you leave a discussion, most would say yes. (You can find our public datasets and labelling methodology described in our paper with Wikimedia at WWW’17.) Although both of those comments might be considered toxic, they are clearly very different in terms of severity. That’s why we created a separate experimental model that detects more severe toxicity and is less sensitive to milder toxicity.

Numbers are calibrated for stability

The scores returned by Perspective have also been calibrated to provide stability for developers as we update and retrain models. We don’t want developers to have to change their threshold every time we improve the models, so we use a balanced dataset of half toxic and half non-toxic comments and normalize the scores across models. You can read more about the technical details of our calibration on our GitHub score normalization page.
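One simple way to picture this kind of normalization is percentile mapping against the balanced dataset: a raw score is replaced by the fraction of balanced-set scores that fall below it, so a given threshold keeps roughly the same meaning from one model version to the next. This is a simplified sketch of the idea, not Perspective’s actual normalization code; see the GitHub page mentioned above for the real details.

```python
import bisect

def make_calibrator(balanced_scores):
    """Build a calibrator from raw model scores on a balanced dataset
    (half toxic, half non-toxic). Maps a raw score to its empirical
    percentile in that set. Illustrative only, not Perspective's code.
    """
    ref = sorted(balanced_scores)

    def calibrate(raw):
        # Fraction of balanced-set scores strictly below `raw`.
        return bisect.bisect_left(ref, raw) / len(ref)

    return calibrate

# Hypothetical raw scores from a new model version on the balanced set:
raw_on_balanced = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.95]
calibrate = make_calibrator(raw_on_balanced)
```

If a retrained model shifts all of its raw scores up or down, rebuilding the calibrator on the same balanced dataset pulls the normalized scores back into alignment, so a developer’s 0.9 threshold keeps working.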

Everything depends on the model

Perspective serves many models, each aimed at a different goal. The perspectiveapi.com demo uses the latest version of the toxicity model (currently we’re in alpha and regularly release new versions). There are also several other models (11 at the moment) that are available to developers using Perspective. You can find a list of them in our developer documentation. Some of our new models include “likely to be inflammatory,” “spam,” “likely to reject,” and “attack on another commenter.”
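A single API request can ask several of these models to score the same comment. The sketch below builds such a request body; the endpoint and attribute names reflect the API at the time of writing and are shown for illustration, so check the developer documentation for the current list of available models.

```python
import json

# Sketch of a Perspective API request body asking multiple models to
# score one comment. Endpoint and attribute names are assumptions based
# on the alpha API; consult the developer docs for the current set.
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(text, attributes=("TOXICITY", "SEVERE_TOXICITY")):
    """Return a request body for comments:analyze requesting the given
    attributes (models) for `text`.
    """
    return {
        "comment": {"text": text},
        "requestedAttributes": {name: {} for name in attributes},
    }

body = build_request(
    "You smell bad and are stupid",
    attributes=("TOXICITY", "SEVERE_TOXICITY", "SPAM"),
)
payload = json.dumps(body)  # POST this to ANALYZE_URL with your API key
```

The response then contains one score per requested attribute, so a developer can, for example, flag a comment for review on TOXICITY but escalate it only when SEVERE_TOXICITY is also high.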

Authors: CJ Adams, Lucas Dixon