Combat toxicity online with the Perspective API and React

Tom Nolan
Published in The Startup
5 min read · May 27, 2019

There is no denying that the internet has brought millions closer together, but it has also become a breeding ground for toxic and abusive behavior. This article will walk you through building a React application that attempts to prevent comments with a toxic intent from ever being sent using the Perspective API.

What is the Perspective API?

The Perspective API is a service that uses machine learning in an attempt to determine the intent of a comment or other short text snippet. It can be used for any number of use cases to help combat toxicity, abuse, and otherwise poor behavior. It supports a number of categories and reports back the probability that the user’s intent matches each of them.

What we’re going to build

To explore how the Perspective API works, we’re going to build a simple comment form with React. When the user submits a comment, the app will first check its intent with the Perspective API before allowing it to be submitted.

The app will allow you to specify the threshold and categories for which you want to verify intent — you can be as strict as you want!

We’re going to be utilizing a few tools to help us build out our project:

  • React, bootstrapped with create-react-app
  • axios, for making HTTP requests from the browser
  • The Perspective API, for scoring the intent of a comment

You can check out a demo of the finished project that is similar to what we’re going to build. Let’s get started!

Requirements:

  • Some basic familiarity with React
  • Node and NPM installed
  • An API key for the Perspective API through the Google Cloud Platform

I’m not going to walk through how to generate an API key through GCP since that’s not the topic of this article; there are plenty of other online resources for just that, including this one from Google. In Step 4 of that guide, the API you want to enable is called Perspective.

Part 1: Bootstrapping the App

First, let’s get our app up and running by creating a new project with create-react-app. If you have not used create-react-app before, you can install it by running:

npm install -g create-react-app

Otherwise, you can just initialize a new project, change into the directory and add the dependencies we’re going to need:

create-react-app comment-form-perspective
cd comment-form-perspective
yarn add axios

Then you can start up the development environment by running yarn start.

Part 2: Creating the Form

Now that our app is up and running, we can start building out our form. Let’s start by deleting all of the files that are currently in the src directory (but keep the src folder!).

Once that folder is cleared, create an index.js and style.css inside of the src directory. Then create a new directory inside of src called components and place an App.js and CommentForm.js file inside. When all of that is complete, you should have the following folder structure:

.
└── src
    ├── index.js
    ├── style.css
    └── components
        ├── App.js
        └── CommentForm.js

Digging into src/components/CommentForm.js, we will create the actual comment form that the user will enter text into. The form will (see the sketch after this list):

  1. Utilize the useState hook to hold our comment in state and give us a way to update its value
  2. Contain a form that will use a callback function provided through a prop to send our comment up to the parent where we will later utilize it
  3. Contain a textarea that will be a controlled input for our comment state
  4. Contain a button to submit the form
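
Here’s a minimal sketch of what src/components/CommentForm.js could look like. The onSubmit prop name is my own choice for the callback; the numbered comments map to the points above.

import React, { useState } from 'react';

const CommentForm = ({ onSubmit }) => {
  // 1. Hold the comment in state and get a setter to update it
  const [comment, setComment] = useState('');

  // 2. Pass the comment up to the parent through the callback prop
  const handleSubmit = (event) => {
    event.preventDefault();
    onSubmit(comment);
  };

  return (
    <form onSubmit={handleSubmit}>
      {/* 3. Controlled textarea bound to the comment state */}
      <textarea
        value={comment}
        onChange={(event) => setComment(event.target.value)}
        placeholder="Leave a comment..."
      />
      {/* 4. Button to submit the form */}
      <button type="submit">Submit</button>
    </form>
  );
};

export default CommentForm;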

Now that we have our form built, let’s move to src/components/App.js where we want to (see the sketch after this list):

  1. Render the CommentForm component we just built
  2. Handle the form submission and console.log our comment
  3. Create a placeholder where we will execute our toxicity check with Perspective later in the tutorial
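
A minimal sketch of src/components/App.js at this stage, assuming the onSubmit prop from the CommentForm sketch above (checkToxicity is just a placeholder name of mine):

import React from 'react';
import CommentForm from './CommentForm';

const App = () => {
  // Placeholder: the Perspective API check will live here later in the tutorial
  const checkToxicity = (comment) => {
    console.log(comment);
  };

  // Handle the form submission and log the comment for now
  const handleSubmit = (comment) => {
    checkToxicity(comment);
  };

  return (
    <div className="app">
      <h1>Leave a comment</h1>
      <CommentForm onSubmit={handleSubmit} />
    </div>
  );
};

export default App;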

Great, now let’s actually initialize and render our application on the page using the src/index.js file.
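
Something like the following should do, assuming the standard create-react-app setup of the time (React 16 with a root element in public/index.html):

import React from 'react';
import ReactDOM from 'react-dom';
import App from './components/App';
import './style.css';

// Mount the app on the root element provided by create-react-app
ReactDOM.render(<App />, document.getElementById('root'));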

That completes the functionality of the form itself minus the Perspective API check, but before we move on, let’s make everything look just a little bit nicer by adding some CSS to the src/style.css file.
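
The exact styles are entirely up to you; here is a minimal starting point for src/style.css:

body {
  font-family: sans-serif;
  background: #f5f5f5;
}

.app {
  max-width: 480px;
  margin: 40px auto;
}

textarea {
  width: 100%;
  min-height: 100px;
  padding: 8px;
  margin-bottom: 8px;
}

button {
  padding: 8px 16px;
  cursor: pointer;
}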

And that’s it! That completes the first part. At this point, you should have a working form: when you click submit, you should see the value of your comment in the console!

Part 3: Verify the User’s Intent

Now that we have a form that submits a comment through to a parent component, we can actually utilize the Perspective API to attempt to validate the user’s intent. So, let’s get started by modifying the src/components/App.js file to make that request. We will need to:

  • Import the axios library to make an HTTP request
  • Make a POST request to the Perspective API with the comment as the payload
  • Utilize our API key (replace YOUR_API_KEY in the snippet below)
  • Specify any of the intents that we want to verify against
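
Here’s a sketch of what the updated src/components/App.js could look like. The endpoint and request body follow the Perspective API’s comments:analyze format; the attributes I request here (TOXICITY, INSULT, THREAT) are just one possible combination.

import React from 'react';
import axios from 'axios';
import CommentForm from './CommentForm';

// Replace YOUR_API_KEY with the key you generated through GCP
const API_KEY = 'YOUR_API_KEY';
const PERSPECTIVE_URL = `https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=${API_KEY}`;

const App = () => {
  const handleSubmit = (comment) => {
    axios
      .post(PERSPECTIVE_URL, {
        comment: { text: comment },
        // The intents (attributes) we want Perspective to score
        requestedAttributes: {
          TOXICITY: {},
          INSULT: {},
          THREAT: {},
        },
      })
      .then((response) => {
        // For now, just inspect the scores in the console
        console.log(response.data.attributeScores);
      })
      .catch((error) => console.error(error));
  };

  return (
    <div className="app">
      <h1>Leave a comment</h1>
      <CommentForm onSubmit={handleSubmit} />
    </div>
  );
};

export default App;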

So, in the above snippet, we use axios to make a POST request to the Perspective API. In that request, we pass the comment as well as a set of requestedAttributes. Those requestedAttributes are the intents we want to check for. There are several more available than the ones I defined, so you can play around with different combinations.

If you check out the response we console.log, you will notice that for each of the intents we requested, we get the following:

{
  "TOXICITY": {
    "spanScores": [
      {
        "begin": 0,
        "end": 14,
        "score": {
          "value": 0.018322915,
          "type": "PROBABILITY"
        }
      }
    ],
    "summaryScore": {
      "value": 0.018322915,
      "type": "PROBABILITY"
    }
  }
}

With that response, we can map over the probabilities for the intents that we requested and compare them to a threshold that we deem appropriate. If the probability is below that threshold, you can save the comment; if it’s not, you can provide feedback to the user that it should be adjusted!
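
Here’s one way that check could be written; the 0.75 threshold and the isValid flag are illustrative choices of mine.

const THRESHOLD = 0.75;

const handleSubmit = (comment) => {
  axios
    .post(PERSPECTIVE_URL, {
      comment: { text: comment },
      requestedAttributes: { TOXICITY: {}, INSULT: {}, THREAT: {} },
    })
    .then((response) => {
      const scores = response.data.attributeScores;

      // The comment is valid only if every requested intent scores below the threshold
      const isValid = Object.keys(scores).every(
        (intent) => scores[intent].summaryScore.value < THRESHOLD
      );

      console.log(isValid ? 'Comment accepted' : 'Comment flagged as toxic');
    })
    .catch((error) => console.error(error));
};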

So, now, once the request to Perspective is complete, we go through each of the intents that we requested, and if any of them exceed a probability of 0.75, we mark the comment as invalid. You can modify that threshold to whatever value you want!

The then portion of the function is where you can get as clever as you want with the results. You can set individual thresholds for each of the categories, send the data off to a server, and so on; the possibilities are endless!

Summary

This is a fairly contrived example, since it simply logs whether or not the intent was valid, but this concept can be used for any number of use cases, from front-end validation to empowering moderation tools.

You’ll also notice that the intent probabilities for some inputs may not match what you might have expected. Remember that the API uses machine learning, so it will get better over time!

If you have any site that allows users to comment and are struggling with some toxic behavior, you might consider implementing something similar!

Again, you can see a working demo of this here.
