Natural Language Processing — Emotion Detection With Multi-class, Multi-label Convolutional Neural Networks and Embedding
I came across a well-prepared dataset provided by Google, with 58,000 ‘carefully curated’ Reddit comments, each labelled with one or more of 27 emotions, e.g. anger, confusion, love. Google used this to train a BERT model, with which they had varying success in emotion detection depending on the type of comment. I thought it would be a great example for learning how to apply Convolutional Neural Networks (CNNs) and embeddings to a Natural Language Processing (NLP) problem, and I obtained decent accuracy for some emotions, though of course not as good as Google’s BERT.
This is not a tutorial; there are plenty of great tutorials on CNNs and embeddings out there. Instead, it is a complete code example tackling multi-class, multi-label classification, for which complete, free examples were hard to find.
Spoiler: My code doesn’t do as well as Google, who also provide their code in the above link. Their well-oiled BERT solution obtains around 46% F1 score, while I only obtain 42%. However, I didn’t spend too much time optimising my model, so there might easily be another 2 or 3 percent available through some optimisations.
More on the GoEmotions dataset
Firstly, kudos to the Google research team (actual paper here) for such a well-prepared and cleaned dataset, which you can obtain from the link.
The dataset, along with some other metadata, provides a reddit comment (“text”), along with a corresponding set of emotion labels:
Starting intuition
This is a complicated domain: different users display sentiment in different ways, and many of these comments are very short or carry more than one meaning, e.g. “That’s adorable asf”. Some comments, such as “This video doesn’t even show the shoes he was wearing”, are completely neutral and labelled as such.
However, there are clearly words and phrases which give away certain emotions. Some, such as love or excitement, are probably quite easy to detect; subtler emotions such as pride or curiosity, less so. So I expect some success, but certainly don’t expect to do better than Google. After all, these comments aren’t responses to “How do you feel?”; they are seemingly arbitrarily picked from Reddit.
The code, explained
The Python code obtaining a 42% F1 score on the dataset is here. Once you obtain the dataset from Google, you can run it out of the box just by changing the path to the datasets, assuming you have all dependencies installed. However, training the neural network for 15 epochs will take around 30 minutes, depending on your computing power.
Below, I’ll describe each section, excluding the imports.
Fetching the data
This is standard. We fetch the data CSVs (which are separate by Google’s design), concatenate them, and choose our input vector, X, and binary output, y. We then split the data into training and test sets. The dataset is very large (almost 20,000 comments), so many other train_test_split ratios would have worked too, but we go with standard practice.
After this, we have roughly 20,000 mappings of comments (“I really love bitterballen!”) to their associated emotion labels (admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, optimism, pride, realization, relief, remorse, sadness, surprise).
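Below is a minimal sketch of this step. It assumes the GoEmotions CSV release with one 0/1 column per emotion and files named goemotions_1.csv to goemotions_3.csv; the actual file names, paths and split ratio in the linked code may differ.

```python
# Sketch only: file names, column layout and split ratio are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split

emotion_cols = ['admiration', 'amusement', 'anger', 'annoyance', 'approval',
                'caring', 'confusion', 'curiosity', 'desire', 'disappointment',
                'disapproval', 'disgust', 'embarrassment', 'excitement', 'fear',
                'gratitude', 'grief', 'joy', 'love', 'nervousness', 'optimism',
                'pride', 'realization', 'relief', 'remorse', 'sadness', 'surprise']

# The dataset ships as several separate CSVs; concatenate them into one frame
df = pd.concat([pd.read_csv(f'goemotions_{i}.csv') for i in (1, 2, 3)],
               ignore_index=True)

X = df['text'].values        # the raw Reddit comments
y = df[emotion_cols].values  # one binary column per emotion label

# Hold out a test set; other split ratios would have worked just as well
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)
```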
Tokenizing the data
In order to provide a valid input to the neural network, we need to tokenize each word in each row of the data. This means that rather than words, we have indices into a giant dictionary of token:word pairs, e.g. the sentence “I really love bitterballen!” would tokenize to 1: “i”, 2: “really”, 3: “love”, 4: “bitterballen”.
Side note: right away, there’s an improvement that could be made here which I miss out on: the exclamation mark, or capitalisation of text, could be used to predict some strong emotions.
Rather than raw text, each sentence is now represented as a vector: [1, 2, 3, 4]. A second sentence, “Bitterballen love really I!”, would vectorize to [4, 3, 2, 1], and a sentence with new words would add new values [5, 6, …] to the token list.
- num_words is the number of words in the dictionary. This was chosen by looking at the tokenizer results: after the 9000 most frequently occurring words in the dataset, we start to hit typos, strange acronyms and words that are only used once. These aren’t useful for training the network, so they should be discarded. As a result, the proportion of words covered by the word embedding (section below) increases from 51% to 95%. We are now working with a good corpus of words that are well known to word embeddings, i.e. mostly commonly used words, but with 5% that might be Reddit-specific, hence great for training the network.
- There is also a two-line workaround here, manually setting the tokenizer’s word index to resolve a known issue with Tokenizer. This does not affect the model output, but it does make training remarkably faster, because it manually reduces the tokenizer size to 9000 words.
- maxlen is the maximum length, in tokens, of the input text from reddit. For our algorithm, each post must be a list of tokens of equal length. For instance, if maxlen were 10, a short post of only 5 words would be padded with zeroes, [1 52 3 10 3 0 0 0 0 0], whereas a long post of 15 words would be truncated to 10. Here, I have gone with 20, just for safety’s sake (see the sketch below).
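Putting the above together, here is a minimal sketch of the tokenization step, assuming the Keras Tokenizer and pad_sequences utilities; the word_index trimming is my reading of the two-line workaround mentioned above, so the linked code may implement it differently.

```python
# Sketch only: builds on X_train / X_test from the data-loading sketch above.
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

num_words = 9000  # keep only the 9000 most frequent words
maxlen = 20       # pad/truncate every comment to 20 tokens

tokenizer = Tokenizer(num_words=num_words)
tokenizer.fit_on_texts(X_train)

# Workaround for the known Tokenizer issue: num_words does not shrink word_index,
# so trim it manually to keep downstream structures (e.g. the embedding matrix) small
tokenizer.word_index = {w: i for w, i in tokenizer.word_index.items()
                        if i <= num_words}

# Convert each comment to a fixed-length list of token ids, padded with zeroes
X_train_seq = pad_sequences(tokenizer.texts_to_sequences(X_train),
                            padding='post', maxlen=maxlen)
X_test_seq = pad_sequences(tokenizer.texts_to_sequences(X_test),
                           padding='post', maxlen=maxlen)

vocab_size = len(tokenizer.word_index) + 1  # +1 for the reserved padding index 0
```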
Embeddings
By tokenizing, we turn each row into a numerical vector of word indices, e.g. [1 52 3 10 3 0 0 0 0 0]. Being a vector, this is something we can use as input to an ML model. However, there is an even better way to represent the data.
Embeddings are large pre-trained mappings of related words in an n-dimensional space. For example, as shown below, if we trained a word-association model on a large amount of text, one dimension of associated words it would pick up is gender. In this dimension, the embedding would be a simple vector indicating that man is to woman as king is to queen. When using an embedding, you enable the model you train to make these distinctions immediately. This helps the model train faster and more effectively, eliminating the confusion that synonyms and more abstract text would create: the model can more efficiently learn to consider such words x% similar, and produce similar outputs. For a more intuitive, in-depth explanation, try Will’s.
So imagine this image, but with 50 to 300 dimensions: gender, colour, mood, and so on. That’s what Stanford’s GloVe library provides, and it is the embedding source used in this example. Google’s BERT uses pre-trained embeddings, too.
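To make the analogy concrete, here is a toy example with made-up 2-D vectors (nothing to do with real GloVe values), purely to illustrate how vector arithmetic can encode “man is to woman as king is to queen”:

```python
# Toy 2-D "embeddings" with invented values, purely for illustration.
# Dimension 0 loosely encodes gender, dimension 1 loosely encodes royalty.
import numpy as np

toy = {
    'man':   np.array([0.0, 0.1]),
    'woman': np.array([1.0, 0.1]),
    'king':  np.array([0.0, 0.9]),
    'queen': np.array([1.0, 0.9]),
}

# "king - man + woman" lands on "queen"
result = toy['king'] - toy['man'] + toy['woman']

def closest(vec, vocab):
    # Return the word whose vector is nearest (Euclidean distance) to vec
    return min(vocab, key=lambda w: np.linalg.norm(vocab[w] - vec))

print(closest(result, toy))  # -> 'queen'
```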
In the code below, there is a function, create_embedding_matrix, which processes the embedding file into a matrix. This is shamelessly obtained from an excellent tutorial on text classification with Keras; a sketch of it follows the list of choices below.
Here, there are only two choices to explain:
- The embedding package to use. I tried GloVe and fastText. Both produced similar results, but GloVe’s was slightly better, even with fewer dimensions.
- embedding_dim is the number of dimensions of word relations to consider. GloVe has options from 50 through to 300. Using 50 really sped up training, but 300 provided a 2% improvement in F1 score for me.
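Here is a sketch of the matrix-building step, essentially the create_embedding_matrix function from the linked tutorial. The file name glove.6B.300d.txt is an assumption; any of the GloVe files works, as long as embedding_dim matches it.

```python
# Sketch only: builds on the tokenizer from the tokenization sketch above.
import numpy as np

def create_embedding_matrix(filepath, word_index, embedding_dim):
    # Rows are indexed by token id; row 0 stays all-zero for the padding token
    vocab_size = len(word_index) + 1
    embedding_matrix = np.zeros((vocab_size, embedding_dim))

    with open(filepath, encoding='utf-8') as f:
        for line in f:
            # Each GloVe line is a word followed by its vector components
            word, *vector = line.split()
            if word in word_index:
                idx = word_index[word]
                embedding_matrix[idx] = np.array(vector, dtype=np.float32)[:embedding_dim]

    return embedding_matrix

embedding_dim = 300
embedding_matrix = create_embedding_matrix('glove.6B.300d.txt',
                                           tokenizer.word_index, embedding_dim)
```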
So, as a result of embedding, rather than an input array mapping sentences to tokenized text,
We instead have a huge matrix mapping each word to dimensional values! So rather than just a list of words, we essentially have a matrix of meanings for each word in our corpus.
with each dim_ value being an actual representation of that word’s value in a dimension, e.g. gender.
The neural network
Firstly, we add an embedding layer to accept the embedding matrix we just made. trainable=True here because (1) the GloVe embedding is not trained on the last 5% of words in our corpus, which may be very useful for the model, and (2) Reddit language use may differ along some embedding dimensions, which the network can adjust while training.
Secondly, a convolutional layer turns this into a CNN. It essentially behaves as a window of 3 words sliding over the text, learning 256 different filters; in other words, it trains the model to look for 256 different ‘meanings’, sliding over the text bit by bit. Whilst I did try different values for both, these amounts worked well, probably because:
- 256 filters is close to the 300 dimensions used
- 3 words is a good average number of words from which to discern an emotion in the input data
Next, a dropout layer (a regularizer would have been an option here too). I found that, with such a large dataset, the network was overfitting right after one epoch. To prevent this, the dropout layer simply deactivates a random 20% of the nodes per batch, making sure we don’t rely too much on any one node in the network. This decreases overfitting, and most models now do their best after around 10 epochs as opposed to 1. This works hand in hand with the learning rate and the number of epochs chosen.
Then a max pooling layer. This is standard practice in CNNs, reducing the number of incoming feature values by keeping only the maximum for each filter.
Again, it is standard practice to have another small dense ReLU layer before a sigmoid output layer for classification. Finally, the model is compiled with an Adam optimizer and a binary_crossentropy loss function, both of which are your bread and butter for binary classification neural networks, and also for multi-label, multi-class classification. A learning rate of 0.0002 was chosen through trial and error. As mentioned above, you can play around with this, the number of epochs, and the dropout rate to affect the speed of the network’s learning, overfitting, and the overall end result.
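Here is a rough sketch of the architecture described above, using the Keras Sequential API. The 256 filters, window of 3, 20% dropout, sigmoid output and learning rate of 0.0002 come from the text; the width of the small dense layer is my assumption, so the linked code may differ.

```python
# Sketch only: reuses vocab_size, embedding_dim, embedding_matrix and maxlen
# from the earlier sketches.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Embedding, Conv1D, Dropout,
                                     GlobalMaxPooling1D, Dense)
from tensorflow.keras.optimizers import Adam

num_labels = y_train.shape[1]  # 27 emotions

model = Sequential([
    # Pre-trained GloVe weights, but trainable so Reddit-specific usage can adjust them
    Embedding(input_dim=vocab_size, output_dim=embedding_dim,
              weights=[embedding_matrix], input_length=maxlen, trainable=True),
    # Slide a 3-word window over the comment, learning 256 different filters
    Conv1D(filters=256, kernel_size=3, activation='relu'),
    # Randomly drop 20% of activations per batch to fight overfitting
    Dropout(0.2),
    # Keep only the strongest response of each filter across the comment
    GlobalMaxPooling1D(),
    # Small dense ReLU layer before the output (width is an assumption)
    Dense(64, activation='relu'),
    # One sigmoid per emotion: each label is an independent yes/no decision
    Dense(num_labels, activation='sigmoid'),
])

model.compile(optimizer=Adam(learning_rate=0.0002),
              loss='binary_crossentropy',
              metrics=['accuracy'])
```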
Now we add some callbacks that let us go back to the best model and stop training early if the neural network starts getting worse with further training. The batch size of 100 was also found through trial and error.
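A sketch of the training call follows. The batch size of 100 and roughly 15 epochs come from the text; using Keras’ EarlyStopping with restore_best_weights (and its patience value) is my guess at “go back to the best model and stop training early”.

```python
# Sketch only: the callback choice and patience value are assumptions.
from tensorflow.keras.callbacks import EarlyStopping

callbacks = [
    # Stop once validation loss stops improving, and roll back to the best weights
    EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True),
]

history = model.fit(X_train_seq, y_train,
                    validation_data=(X_test_seq, y_test),
                    epochs=15, batch_size=100,
                    callbacks=callbacks)
```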
Picking a sigmoid threshold
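The linked code loops over candidate thresholds on the sigmoid outputs. Below is a minimal sketch of that kind of loop, assuming scikit-learn’s metrics (the micro averaging is my assumption; the repo may aggregate differently).

```python
# Sketch only: reuses model, X_test_seq and y_test from the earlier sketches.
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

probs = model.predict(X_test_seq)  # one sigmoid probability per emotion per comment

for threshold in np.arange(0.05, 0.55, 0.05):
    preds = (probs >= threshold).astype(int)
    p = precision_score(y_test, preds, average='micro', zero_division=0)
    r = recall_score(y_test, preds, average='micro', zero_division=0)
    f1 = f1_score(y_test, preds, average='micro', zero_division=0)
    print(f'threshold={threshold:.2f}  precision={p:.3f}  recall={r:.3f}  f1={f1:.3f}')
```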
This outputs the precision, recall, and resultant F1 score (the harmonic mean of precision and recall) for each threshold. We could be even fancier here and optimise a threshold for each emotion, depending on the exact emotion we wanted to detect, but I don’t.
Results
After a modest 12 epochs, the network has minimised its loss, and we obtain an F1 score of almost 42%.
Using the above threshold loop, it looks like a threshold of 0.25 gives the optimal F1 score:
Using this threshold of 0.25, we can determine the F1 score for each individual emotion:
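Here is a sketch of how such a per-emotion breakdown can be computed, assuming scikit-learn’s f1_score with average=None and the emotion_cols list and probs array from the earlier sketches:

```python
# Sketch only: one F1 score per emotion, sorted from best to worst.
from sklearn.metrics import f1_score

preds = (probs >= 0.25).astype(int)  # apply the chosen threshold
per_label_f1 = f1_score(y_test, preds, average=None, zero_division=0)

for emotion, score in sorted(zip(emotion_cols, per_label_f1),
                             key=lambda pair: -pair[1]):
    print(f'{emotion:15s} {score:.2f}')
```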
Some interesting results here, when graphing emotions’ F1 scores:
As anticipated, some emotions (amusement, gratitude, love) are way easier to predict than subtle emotions like disappointment, realization, relief.
Taking this model further
As mentioned throughout the writeup, there are several ways we could improve on this model:
- Using a grid search to tweak the parameters (especially learning rate, dropout, convolutional hyperparameters)
- Optimising the sigmoid threshold for each emotion, as opposed to the entire network’s output
- Gleaning more information from exclamation marks, all-caps words, etc.
- Copy pasting Google’s BERT-based code :)
Originally published at https://rian-van-den-ander.github.io on July 26, 2020.