Detecting Online Toxicity with TensorFlow.js

Daniel Boadzie · Published in CodeX · 5 min read · Feb 20, 2023

Online toxicity is a major issue that affects millions of people around the world. It can range from cyberbullying to hate speech and can have serious consequences for the mental health of its targets. With the proliferation of social media and other online platforms, detecting and mitigating toxic behavior has become increasingly difficult. However, with the power of machine learning and deep learning, it is possible to build models that can accurately predict and detect toxic behavior in online spaces. In this article, we will explore how to build a toxicity predictor using TensorFlow.js, a library that allows developers to train and run machine learning models directly in the browser. We'll leverage a pre-trained TensorFlow.js model and develop an app using SvelteKit to detect online toxicity. By the end of this article, you will have a good understanding of how to leverage the power of machine learning to tackle online toxicity.

Before we dive into building our app for detecting online toxicity, let’s set up the necessary tools. We’ll start by creating a SvelteKit app and integrating it with Tailwind CSS to streamline our frontend development.

Setup

We’ll use pnpm to create our project and install the required dependencies.

pnpm create svelte@latest kit-toxic

The command above will prompt you to choose from a few options and generate a basic SvelteKit project based on your selection.
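
Assuming you pick the skeleton option, the generated project looks roughly like this (the exact files vary with the options you select):

kit-toxic/
├── src/
│   ├── app.html
│   └── routes/
│       └── +page.svelte
├── static/
├── package.json
├── svelte.config.js
└── vite.config.js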

Once you have selected the options, you will need to install the dependencies for the project, which can be done by running the following command from the project’s directory:

cd kit-toxic
pnpm install

Adding Tailwind CSS

We can also install Tailwind CSS to enhance the styling and design capabilities of our project, because why not. Fortunately, there is a useful tool called "svelte-add" that is specifically designed for this purpose.

npx svelte-add@latest tailwindcss

To install these new dependencies, you will need to use the following command:

pnpm install

Now open the project in your preferred editor; mine is VS Code.

Toxicity Functionality

To add the toxicity detection functionality to our app, we will first install two essential packages: TensorFlow.js and the pre-trained toxicity model from TensorFlow models.

pnpm add @tensorflow/tfjs

And then the model:

pnpm add @tensorflow-models/toxicity
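
Before wiring the model into the UI, it helps to see the shape of what it returns. The sketch below follows the model's documented API; the input sentence and the commented output are illustrative:

import * as tf from "@tensorflow/tfjs"; // registers the tfjs backend
import * as toxicity from "@tensorflow-models/toxicity";

// Labels scoring below this confidence come back as inconclusive (match: null)
const threshold = 0.9;

const model = await toxicity.load(threshold);
const predictions = await model.classify(["you are a terrible person"]);

// `predictions` holds one entry per label, roughly:
// { label: "insult",
//   results: [{ probabilities: Float32Array(2), match: true }] }
console.log(predictions);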

Now that we have our packages in place, let’s start by building the toxicity functionality in our app. We’ll begin with the +page.svelte file.

<script>
  import * as tf from "@tensorflow/tfjs"; // registers the tfjs backend
  import * as toxicity from "@tensorflow-models/toxicity";

  // UI state: true while a classification is running
  let loading = false;

  // Minimum confidence for a label to count as a match
  const threshold = 0.9;

  let pred = null;   // classification results
  let sentence = ""; // user input

  // Load the model and classify the input sentence
  async function classifyToxicity(threshold, sentence) {
    const model = await toxicity.load(threshold);
    pred = await model.classify(sentence);
  }

  // Called when the "Check" button is clicked
  async function handleClick() {
    loading = true;
    await classifyToxicity(threshold, sentence);
    loading = false;
  }

  // Keep `sentence` in sync with the textarea
  const handleInput = (event) => {
    sentence = event.target.value;
  };
</script>

The script above imports TensorFlow.js and the toxicity model from TensorFlow models, then defines a set of variables and functions that implement the toxicity classification functionality.

The variables defined include:

  • loading, which is initially false and is set to true while the classification is running
  • threshold, which is set to 0.9, the minimum confidence for the model to mark a sentence as matching a toxicity label
  • pred, which is initially set to null, and will later be used to store the result of the toxicity classification
  • sentence, which is used to store the input text

The script defines two functions:

  • classifyToxicity, which is an asynchronous function that loads the toxicity model and classifies the input sentence
  • handleClick, which is an asynchronous function that is called when the "Check" button is clicked. This function sets the loading variable to true, calls the classifyToxicity function, and then sets the loading variable to false.

Finally, the script defines a handleInput function which is called every time the user types into the input field. This function updates the sentence variable with the current input value.
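
As an aside, Svelte's two-way binding could replace this handler with a single attribute; an equivalent sketch (not what the code above uses):

<textarea bind:value={sentence} />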

Adding the markup
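
The markup for +page.svelte can be sketched as follows. The exact classes and copy are illustrative, the spinner is simplified to a text state, and the result fields follow the toxicity model's output shape, where results[0].probabilities[1] is the probability that a label matches:

<div class="toxicity-predictor lg:w-3/4 my-4 flex flex-wrap flex-row items-center">
  <h1 class="text-3xl font-bold">Toxicity Predictor</h1>

  <form class="w-full flex flex-col gap-4" on:submit|preventDefault={handleClick}>
    <textarea
      class="rounded-sm border p-2"
      placeholder="Type a sentence to check..."
      on:input={handleInput}
    />
    <button type="submit" class="bg-[#023047] text-white rounded-sm p-2">
      {#if loading}Checking...{:else}Check{/if}
    </button>
  </form>

  {#if pred}
    <div class="w-full flex flex-col gap-4 my-4">
      {#each pred as p}
        <p>
          <!-- probabilities[1] is the probability that this label matches -->
          👎 <span class="text-red-400">{p.label}</span>:
          {p.results[0].probabilities[1].toFixed(3)}
        </p>
      {/each}
    </div>
  {/if}
</div>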

This code generates a web page with a form for predicting the toxicity of text input using TensorFlow.js. The page contains a header, a form, and a result section that can be updated based on the predicted result.

The form contains a textarea input field where the user can enter text, and a "Check" button to trigger the prediction. There is also a loading spinner that appears when the model is being loaded or the prediction is being processed.

The result section displays the prediction results, with the predicted labels and the corresponding probability values. It also shows an emoji icon (👎) before each label, indicating whether the label matched as toxic. An {#each} loop iterates through the prediction results and generates the corresponding HTML elements.

The Svelte code also uses Tailwind CSS classes to style the web page, such as rounded-sm, bg-[#023047], text-red-400, and gap-4.

Now if you run the app with:

pnpm dev

you can see the predictor in action in your browser.

Please note that the time it takes to load the model will vary depending on the speed of your internet connection.
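
Since classifyToxicity downloads the model on every click, each check pays that cost. An optional tweak (not part of the original code) is to load the model once when the component mounts and reuse it:

<script>
  import { onMount } from "svelte";
  import * as tf from "@tensorflow/tfjs";
  import * as toxicity from "@tensorflow-models/toxicity";

  let model;
  let pred = null;

  onMount(async () => {
    // Download the model once on mount so each click only runs inference
    model = await toxicity.load(0.9);
  });

  async function classifyToxicity(sentence) {
    if (!model) return; // model is still downloading
    pred = await model.classify(sentence);
  }
</script>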

Enhancing UX with transition animations

To enhance our user interface, we can add a CSS animation using keyframes in a scoped style tag in the +page.svelte file. Scoped styles are another great feature of Svelte.

<div class="toxicity-predictor  lg:w-3/4 my-4 flex flex-wrap flex-row items-center">
....
<style>
.toxicity-predictor {
animation: fade-in 1.5s ease-in-out;
}

@keyframes fade-in {
from {
opacity: 0;
transform: translate3d(0, -10%, 0);
}
to {
opacity: 1;
transform: none;
}
}
</style>
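
Alternatively, Svelte's built-in transition directives can achieve a similar effect without hand-written keyframes. A sketch using fly from svelte/transition, applied to the results block so it animates in when a prediction arrives:

<script>
  import { fly } from "svelte/transition";
</script>

{#if pred}
  <!-- Slides down and fades in, much like the keyframes above -->
  <div in:fly={{ y: -20, duration: 1500 }}>
    <!-- ...results markup... -->
  </div>
{/if}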

Check out the sleek and polished design of our toxicity predictor app! With our newly added animation, the user interface is now smoother and more enjoyable to use.

Conclusion

In this article, we have explored how to detect online toxicity using TensorFlow.js and SvelteKit. We started by setting up a SvelteKit app with Tailwind CSS, then installed the necessary packages to classify toxicity in user inputs. We added the toxicity classification functionality to our app and improved the user experience with a loading state that transitions into the classification results. Lastly, we added a CSS animation to enhance the overall user interface of our app. By following along with this tutorial, you can easily integrate TensorFlow.js and SvelteKit to classify toxic comments in any web application. With these tools, you can create a safer online community by proactively moderating online behavior.

The code for this project can be found here.

Daniel Boadzie: Data Scientist | AI Engineer | Software Engineer | Trainer | Svelte Enthusiast. Find out more about me here: https://www.linkedin.com/in/boadzie/