A Brief Overview of Natural Language Processing with TensorFlow 2.0

Slim Shady
May 31

Natural Language Processing (or NLP for short) is a discipline in computing that deals with the interaction between computers and natural (human) languages. Common examples of NLP are things like spellcheck and autocomplete. Essentially, NLP is the field that focuses on how computers can understand and/or process natural/human language.

Recurrent Neural Networks

In this tutorial we will introduce a new kind of neural network, called a recurrent neural network (RNN for short), that is much better suited to processing sequential data such as text or characters.

We will learn how to use a recurrent neural network to do the following:

  • Sentiment Analysis
  • Character Generation

RNNs are complex and come in many different forms, so in this tutorial we will focus on how they work and the kinds of problems they are best suited for.

Sequence Data

In the previous blog we focused on data that we could represent as one static data point, where the notion of time or step was irrelevant. Take, for example, our image data: it was simply a tensor of shape (width, height, channels). That data doesn’t change or care about the notion of time.

In this blog we will look at sequences of text and learn how we can encode them in a meaningful way. Unlike images, sequence data such as long chains of text, weather patterns, videos and really anything where the notion of a step or time is relevant needs to be processed and handled in a special way.

But what do I mean by sequences, and why is text data a sequence? That’s a good question. Since textual data contains many words that follow in a very specific and meaningful order, we need to be able to keep track of each word and where it occurs in the data. Simply encoding, say, an entire paragraph of text into one data point wouldn’t give us a very meaningful picture of the data and would be very difficult to do anything with. This is why we treat text as a sequence and process one word at a time. We will keep track of where each of these words appears and use that information to try to understand the meaning of a piece of text.

Encoding Text

As we know machine learning models and neural networks don’t take raw text data as an input. This means we must somehow encode our textual data to numeric values that our models can understand. There are many different ways of doing this and we will look at a few examples below.

Before we get into the different encoding/preprocessing methods let’s understand the information we can get from textual data by looking at the following two movie reviews.

I thought the movie was going to be bad, but it was actually amazing!

I thought the movie was going to be amazing, but it was actually bad!

Although these two sentences are very similar, we know that they have very different meanings. This is because of the ordering of the words, a very important property of textual data.

Now keep that in mind while we consider some different ways of encoding our textual data.

Bag of Words

The first and simplest way to encode our data is to use something called bag of words. This is a pretty easy technique where each word in a sentence is encoded with an integer and thrown into a collection that does not maintain the order of the words but does keep track of the frequency. Have a look at the Python function below that encodes a string of text into a bag of words.
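
Something like the following would do the job. This is a minimal sketch rather than the article's original snippet; the function name bag_of_words and the shared vocab dictionary are my own choices for illustration.

```python
vocab = {}  # maps each unique word to an integer


def bag_of_words(text):
    """Encode a string as a bag of words: {word_integer: frequency}."""
    bag = {}
    for word in text.lower().split(" "):
        if word not in vocab:
            vocab[word] = len(vocab)  # assign the next free integer to a new word
        encoding = vocab[word]
        bag[encoding] = bag.get(encoding, 0) + 1
    return bag


print(bag_of_words("this is a test to see if this test will work is is test a a"))
```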

This isn’t really the way we would do this in practice, but I hope it gives you an idea of how bag of words works. Notice that we’ve lost the order in which words appear. In fact, let’s look at how this encoding works for the two sentences we showed above.
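
Using the same bag_of_words sketch and shared vocabulary from above:

```python
positive_review = "I thought the movie was going to be bad but it was actually amazing"
negative_review = "I thought the movie was going to be amazing but it was actually bad"

# Both bags contain exactly the same words with exactly the same counts
print(bag_of_words(positive_review))
print(bag_of_words(negative_review))
```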

We can see that even though these sentences have very different meanings, they are encoded exactly the same way. Obviously, this isn’t going to fly. Let’s look at some other methods.

Integer Encoding

The next technique we will look at is called integer encoding. This involves representing each word or character in a sentence as a unique integer and maintaining the order of these words. This should hopefully fix the problem we saw before where we lost the order of the words.
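
A minimal sketch of integer encoding, again with a shared vocabulary; the function name integer_encode is my own choice for illustration.

```python
vocab = {}  # shared word-to-integer mapping


def integer_encode(text):
    """Encode a string as a list of integers, one per word, order preserved."""
    encoding = []
    for word in text.lower().split(" "):
        if word not in vocab:
            vocab[word] = len(vocab)
        encoding.append(vocab[word])
    return encoding


print(integer_encode("I thought the movie was going to be bad but it was actually amazing"))
print(integer_encode("I thought the movie was going to be amazing but it was actually bad"))
```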

Much better! Now we are keeping track of the order of the words, and we can tell where each one occurs. But this still has a few issues. Ideally, when we encode words, we would like similar words to have similar labels and different words to have very different labels. For example, the words happy and joyful should probably have very similar labels so we can determine that they are similar, while words like horrible and amazing should probably have very different labels. The method we looked at above can’t do this for us, which means the model will have a very difficult time determining whether two words are similar or not. That could result in some pretty drastic performance impacts.

Word Embeddings

Luckily there is a third method that is far superior, word embeddings. This method keeps the order of words intact as well as encodes similar words with very similar labels. It attempts to not only encode the frequency and order of words but the meaning of those words in the sentence. It encodes each word as a dense vector that represents its context in the sentence.

Unlike the previous techniques, word embeddings are learned by looking at many different training examples. You can add what’s called an embedding layer to the beginning of your model, and while your model trains, the embedding layer will learn the correct embeddings for words. You can also use pretrained embedding layers.

This is the technique we will use for our examples, and its implementation will be shown later on.

Recurrent Neural Networks (RNN’s)

Now that we’ve learned a little bit about how we can encode text, it’s time to dive into recurrent neural networks. Up until this point we have been using something called feed-forward neural networks. This simply means that all our data is fed forward (all at once) from left to right through the network. This was fine for the problems we considered before but won’t work very well for processing text. After all, even we (humans) don’t process text all at once. We read word by word from left to right and keep track of the current meaning of the sentence so we can understand the meaning of the next word. Well, this is exactly what a recurrent neural network is designed to do. When we say recurrent neural network, all we really mean is a network that contains a loop. An RNN will process one word at a time while maintaining an internal memory of what it has already seen. This allows it to treat words differently based on their order in a sentence and to slowly build an understanding of the entire input, one word at a time.

This is why we are treating our text data as a sequence! So that we can pass one word at a time to the RNN.

Let’s have a look at what a recurrent layer might look like.

Source: https://colah.github.io/posts/2015-08-Understanding-LSTMs/

Let’s define what all these variables stand for before we get into the explanation.

  • ht: the output at time t
  • xt: the input at time t
  • A: the recurrent layer (the loop)

What this diagram illustrates is that a recurrent layer processes words (or other inputs) one at a time, in combination with the output from the previous iteration. So, as we progress further through the input sequence, we build a more complex understanding of the text as a whole.

What we’ve just looked at is called a simple RNN layer. It can be effective at processing shorter sequences of text for simple problems, but it has many downfalls. One of them is the fact that, as text sequences get longer, it becomes increasingly difficult for the network to understand the text properly.

LSTM

The layer we discussed above is called a SimpleRNN. However, there exist other recurrent layers (layers that contain a loop) that work much better than a simple RNN layer. The one we will talk about here is the LSTM (Long Short-Term Memory) layer. This layer works very similarly to the SimpleRNN layer but adds a way to access inputs from any timestep in the past. Whereas in our simple RNN layer the influence of inputs from earlier timesteps gradually disappeared as we moved further through the input, an LSTM keeps a long-term memory structure that tracks previously seen inputs and when they were seen. This lets the network make use of earlier values at any point in time, which adds to its complexity and allows it to discover more useful relationships between inputs and where they appear in the sequence.

For the purposes of this tutorial we will refrain from going any further into the math or details behind how these layers work.

Sentiment Analysis with an LSTM

And now it’s time to see a recurrent neural network in action. For this example, we are going to do something called sentiment analysis.

The formal definition of this term from Wikipedia is as follows:

the process of computationally identifying and categorizing opinions expressed in a piece of text, especially in order to determine whether the writer’s attitude towards a particular topic, product, etc. is positive, negative, or neutral.

The example we’ll use here is classifying movie reviews as either positive or negative.

This guide is based on the following tensorflow tutorial: https://www.tensorflow.org/tutorials/text/text_classification_rnn

Sentiment Analysis

We’ll start by loading in the IMDB movie review dataset from keras. This dataset contains 25,000 reviews from IMDB, where each one is already preprocessed and labeled as either positive or negative. Each review is encoded as a sequence of integers that represent how common each word is in the entire dataset. For example, a word encoded by the integer 3 means that it is the 3rd most common word in the dataset.
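
A sketch of loading the dataset with tf.keras. The vocabulary size, review length, and batch size below are reasonable choices rather than requirements.

```python
import tensorflow as tf
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence

VOCAB_SIZE = 88584  # assumed size of the IMDB vocabulary we keep
MAXLEN = 250        # every review will be padded/trimmed to this length
BATCH_SIZE = 64

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=VOCAB_SIZE)
```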

More Preprocessing

If we have a look at some of our loaded in reviews, we’ll notice that they are different lengths. This is an issue. We cannot pass different length data into our neural network. Therefore, we must make each review the same length. To do this we will follow the procedure below:

  • If the review is longer than 250 words, trim off the extra words
  • If the review is shorter than 250 words, add the necessary number of 0s to make it exactly 250 words long

Luckily for us keras has a function that can do this for us:
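
The pad_sequences utility handles both cases; by default it pads and truncates at the front of each sequence.

```python
train_data = sequence.pad_sequences(train_data, MAXLEN)
test_data = sequence.pad_sequences(test_data, MAXLEN)
```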

Creating the Model

Now it’s time to create the model. We’ll use a word embedding layer as the first layer in our model and add a LSTM layer afterwards that feeds into a dense node to get our predicted sentiment.
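
A sketch of that architecture. The layer sizes are one reasonable choice and match the 32-dimensional embedding mentioned below.

```python
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 32),       # learn a 32-dimensional vector per word
    tf.keras.layers.LSTM(32),                        # read the review one word at a time
    tf.keras.layers.Dense(1, activation="sigmoid")   # probability that the review is positive
])

model.summary()
```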

32 stands for the output dimension of the vectors generated by the embedding layer. We can change this value if we’d like!

Training

Now it’s time to compile and train the model.
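
Something like the following; the loss, optimizer, epoch count, and validation split are all tunable assumptions rather than the only valid choices.

```python
model.compile(loss="binary_crossentropy",
              optimizer="rmsprop",
              metrics=["acc"])

history = model.fit(train_data, train_labels,
                    epochs=10,
                    validation_split=0.2)
```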

And we’ll evaluate the model on our test data to see how well it performs.
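
For example, using the padded test set from above:

```python
results = model.evaluate(test_data, test_labels)
print(results)  # [loss, accuracy]
```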

So we’re scoring somewhere in the mid-high 80’s. Not bad for a simple recurrent network.

Making Predictions

Now let’s use our network to make predictions on our own reviews.

Since our reviews are encoded, we’ll need to convert any review that we write into that form so the network can understand it. To do that, we’ll load the encodings from the dataset and use them to encode our own data.
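
One way to do this with the dataset’s own word index. This sketch ignores the small index offset Keras applies internally, and the helper name encode_text is my own.

```python
word_index = imdb.get_word_index()  # word -> integer mapping used by the dataset


def encode_text(text):
    tokens = tf.keras.preprocessing.text.text_to_word_sequence(text)
    tokens = [word_index[word] if word in word_index else 0 for word in tokens]
    return sequence.pad_sequences([tokens], MAXLEN)[0]


sample_review = "that movie was just amazing, so amazing"
encoded = encode_text(sample_review)
print(encoded)
```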

While we’re at it, let’s make a decode function.
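
A matching sketch that maps the integers back to words, skipping the padding value.

```python
reverse_word_index = {value: key for (key, value) in word_index.items()}


def decode_integers(integers):
    PAD = 0
    return " ".join(reverse_word_index[num] for num in integers if num != PAD)


print(decode_integers(encoded))
```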

Now it’s time to make a prediction.
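
A small helper that encodes a review, adds a batch dimension, and prints the model’s output; the example reviews are just placeholders.

```python
def predict(text):
    encoded_text = encode_text(text)
    pred = encoded_text.reshape(1, MAXLEN)  # the model expects a batch dimension
    result = model.predict(pred)
    print(result[0])                        # closer to 1 = positive, closer to 0 = negative


predict("That movie was so awesome! I really loved it and would watch it again.")
predict("That movie sucked. I hated it and wouldn't watch it again.")
```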

RNN Play Generator

Now it’s time for one of the coolest examples we’ve seen so far. We are going to use an RNN to generate a play. We will simply show the RNN an example of something we want it to recreate and it will learn how to write a version of it on its own. We’ll do this using a character predictive model that takes a variable-length sequence as input and predicts the next character. We can use the model many times in a row, with the output from the last prediction as the input for the next call, to generate a sequence.

This guide is based on the following: https://www.tensorflow.org/tutorials/text/text_generation

Dataset

For this example, we only need one piece of training data. In fact, we could write our own poem or play and pass that to the network for training if we’d like. However, to make things easy we’ll use an extract from a Shakespeare play.
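
The linked TensorFlow tutorial hosts that extract as a small text file, so one way to fetch it is with keras.utils.get_file:

```python
import tensorflow as tf

path_to_file = tf.keras.utils.get_file(
    "shakespeare.txt",
    "https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt")
```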

Loading Your Own Data

To load your own data, you’ll need to upload a file in Google Colab. Then you’ll need to follow the steps from above but load in this new file instead.
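
In Colab that might look like this; whatever filename you upload simply replaces path_to_file.

```python
from google.colab import files

uploaded = files.upload()                 # opens a file picker in the notebook
path_to_file = list(uploaded.keys())[0]   # use the first uploaded file
```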

Read Contents of File

Let’s look at the contents of the file.
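
For example:

```python
text = open(path_to_file, "rb").read().decode(encoding="utf-8")
print("Length of text: {} characters".format(len(text)))
print(text[:250])  # peek at the first 250 characters
```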

Encoding

Since this text isn’t encoded yet, we’ll need to do that ourselves. We are going to encode each unique character as a different integer.
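
A sketch of that character-level encoding:

```python
import numpy as np

vocab = sorted(set(text))                       # every unique character in the file
char2idx = {u: i for i, u in enumerate(vocab)}  # character -> integer
idx2char = np.array(vocab)                      # integer  -> character


def text_to_int(text):
    return np.array([char2idx[c] for c in text])


text_as_int = text_to_int(text)
print("Text:", text[:13])
print("Encoded:", text_to_int(text[:13]))
```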

And here we will make a function that can convert our numeric values to text.
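
And the reverse direction, so we can read the model’s output later; int_to_text is my own helper name.

```python
def int_to_text(ints):
    try:
        ints = ints.numpy()  # accept a tensor as well as a plain array
    except AttributeError:
        pass
    return "".join(idx2char[ints])


print(int_to_text(text_as_int[:13]))
```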

Creating Training Examples

Remember our task is to feed the model a sequence and have it return to us the next character. This means we need to split our text data from above into many shorter sequences that we can pass to the model as training examples.

The training examples we will prepare will use a sequence of length seq_length as the input and a sequence of length seq_length as the output, where the output sequence is the original sequence shifted one letter to the right. For example:

input: Hell | output: ello

Our first step will be to create a stream of characters from our text data.
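
Using tf.data, each element of this dataset is a single character represented as an integer.

```python
seq_length = 100  # length of each training example

char_dataset = tf.data.Dataset.from_tensor_slices(text_as_int)
```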

Next we can use the batch method to turn this stream of characters into batches of the desired length.
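
We batch in chunks of seq_length + 1 so each chunk can later be split into an input and a target that overlap by all but one character.

```python
sequences = char_dataset.batch(seq_length + 1, drop_remainder=True)
```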

Now we need to use these sequences of length 101 and split them into input and output.
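
A small mapping function takes care of that split:

```python
def split_input_target(chunk):   # e.g. "hello" -> ("hell", "ello")
    input_text = chunk[:-1]
    target_text = chunk[1:]
    return input_text, target_text


dataset = sequences.map(split_input_target)
```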

Finally we need to make training batches.
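
The batch size, buffer size, and layer sizes below follow the linked tutorial and are assumptions you can tune.

```python
BATCH_SIZE = 64
VOCAB_SIZE = len(vocab)   # number of unique characters
EMBEDDING_DIM = 256
RNN_UNITS = 1024
BUFFER_SIZE = 10000       # tf.data shuffles within a buffer of this many elements

data = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
```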

Building the Model

Now it is time to build the model. We will use an embedding layer, an LSTM, and one dense layer that contains a node for each unique character in our training data. The dense layer will give us a probability distribution over all of those nodes.
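
A sketch of that model, wrapped in a function so we can rebuild it later with a different batch size.

```python
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embedding_dim,
                                  batch_input_shape=[batch_size, None]),
        tf.keras.layers.LSTM(rnn_units,
                             return_sequences=True,  # output a prediction at every timestep
                             stateful=True,
                             recurrent_initializer="glorot_uniform"),
        tf.keras.layers.Dense(vocab_size)            # one logit per unique character
    ])


model = build_model(VOCAB_SIZE, EMBEDDING_DIM, RNN_UNITS, BATCH_SIZE)
model.summary()
```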

Creating a Loss Function

Now we are going to create our own loss function for this problem. This is because our model will output a (64, sequence_length, 65) shaped tensor that represents the probability distribution of each character at each timestep for every sequence in the batch.

However, before we do that let’s have a look at a sample input and the output from our untrained model. This is so we can understand what the model is giving us.
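
Feeding one batch through the untrained model:

```python
for input_example_batch, target_example_batch in data.take(1):
    example_batch_predictions = model(input_example_batch)
    print(example_batch_predictions.shape)  # (batch_size, sequence_length, vocab_size)
```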

Notice that, for a single sequence, this is a 2D array of length 100, where each interior array is the prediction for the next character at that time step.

If we want to determine the predicted character, we need to sample the output distribution (pick a value based on its probability).
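
tf.random.categorical does that sampling for us; here we sample one character index per timestep for the first sequence in the batch.

```python
pred = example_batch_predictions[0]                        # predictions for the first sequence
sampled_indices = tf.random.categorical(pred, num_samples=1)
```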

Now we can reshape that array and convert all the integers back to characters to see what was actually predicted.
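
For example:

```python
sampled_indices = np.reshape(sampled_indices, (1, -1))[0]
predicted_chars = int_to_text(sampled_indices)
print(predicted_chars)  # gibberish for now, since the model hasn't been trained yet
```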

So now we need to create a loss function that can compare that output to the expected output and give us some numeric value representing how close the two were.
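
Sparse categorical crossentropy over the logits does exactly that:

```python
def loss(labels, logits):
    return tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)
```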

Compiling the Model

At this point we can think of our problem as a classification problem where the model predicts the probability of each unique letter coming next.
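
So we compile the model with our custom loss; adam is one reasonable optimizer choice here.

```python
model.compile(optimizer="adam", loss=loss)
```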

Creating Checkpoints

Now we are going to set up and configure our model to save checkpoints as it trains. This will allow us to load our model from a checkpoint and continue training it.
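
A sketch using a ModelCheckpoint callback; the directory name is just a convention.

```python
import os

checkpoint_dir = "./training_checkpoints"
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")

checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_prefix,
    save_weights_only=True)
```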

Training

Finally, we will start training the model.
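
The number of epochs is an assumption; more epochs generally give more coherent text at the cost of training time.

```python
history = model.fit(data, epochs=50, callbacks=[checkpoint_callback])
```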

If this is taking a while go to Runtime > Change Runtime Type and choose “GPU” under hardware accelerator.

Loading the Model

We’ll rebuild the model from a checkpoint using a batch_size of 1 so that we can feed one piece of text to the model and have it make a prediction.
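
Reusing the build_model helper sketched above:

```python
model = build_model(VOCAB_SIZE, EMBEDDING_DIM, RNN_UNITS, batch_size=1)
```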

Once the model is finished training, we can find the latest checkpoint that stores the model’s weights using the following line.
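
tf.train.latest_checkpoint finds it for us:

```python
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
model.build(tf.TensorShape([1, None]))
```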

We can load any checkpoint we want by specifying the exact file to load.
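
For example, where "ckpt_10" is a placeholder for whichever checkpoint file you actually have on disk:

```python
checkpoint_num = 10  # placeholder: pick a checkpoint that exists in checkpoint_dir
model.load_weights(os.path.join(checkpoint_dir, "ckpt_" + str(checkpoint_num)))
model.build(tf.TensorShape([1, None]))
```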

Generating Text

Now we can use a helper function, adapted from the TensorFlow tutorial linked above, to generate some text using any starting string we’d like.
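
A sketch of that function; num_generate and temperature are tunable assumptions.

```python
def generate_text(model, start_string, num_generate=800, temperature=1.0):
    # Convert the starting string to integers (vectorize it)
    input_eval = [char2idx[s] for s in start_string]
    input_eval = tf.expand_dims(input_eval, 0)

    text_generated = []
    model.reset_states()

    for _ in range(num_generate):
        predictions = model(input_eval)
        predictions = tf.squeeze(predictions, 0)  # remove the batch dimension

        # Low temperature = more predictable text, high temperature = more surprising text
        predictions = predictions / temperature
        predicted_id = tf.random.categorical(predictions, num_samples=1)[-1, 0].numpy()

        # Feed the predicted character back in as the next input
        input_eval = tf.expand_dims([predicted_id], 0)
        text_generated.append(idx2char[predicted_id])

    return start_string + "".join(text_generated)


print(generate_text(model, start_string="ROMEO: "))
```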

And that’s pretty much it for this tutorial! I highly recommend messing with the model we just created and seeing what you can get it to do!

Sources

  • https://www.tensorflow.org/tutorials/text/text_classification_rnn
  • https://www.tensorflow.org/tutorials/text/text_generation
  • https://colah.github.io/posts/2015-08-Understanding-LSTMs/