Introduction to recurrent neural networks (RNNs)

Neural networks with memory

Tim Sullivan
The Startup
6 min read · May 25, 2020


Introduction

A typical feed-forward neural network maps inputs to outputs with no consideration of previous computations or of where the current input fits in relation to others. The network applies the same function to each input regardless of sequence. This is fine for many applications, but often the context of an input is relevant to the target output. One way to address this problem is to use a recurrent neural network (RNN). An RNN is a network with ‘memory’: it maintains information about previous inputs, allowing its current output to be generated with the past taken into account. RNNs have a wide range of applications, including language modeling, music generation, weather forecasting, and stock market prediction. In this article, we’ll look at the basic RNN structure, build an RNN cell from its components, and then use the RNN module in PyTorch.

RNN structure

An RNN cell takes an input vector 𝑥(𝑡) at some time t and maps it to an output vector 𝑦̂(𝑡), while also maintaining a hidden state ℎ(𝑡) that depends on the previous inputs to the network. At every time step the network computes the output vector along with a new hidden state, each as a function of the current input and the previous hidden state, and passes the new hidden state on to the next RNN cell. (The functions f and g shown below are simply the operations applied to the input and previous hidden state to generate the new hidden state and the output at time t. We’ll go into more detail later.)

These RNN cells can be strung together to form multiple layers of cells and mixed with other NN structures such as feed-forward layers to form the final model.

RNN cell sequence

The actual computation of the RNN cell is shown in the equations below, where 𝑊ℎ, 𝑊𝑥, and 𝑊𝑦 are the weight matrices corresponding to the hidden state, input vector, and output vector respectively, and 𝑏ℎ and 𝑏𝑦 are the bias vectors. The weight and bias operations will be familiar to anyone who has worked with feed-forward neural networks; the difference is the consideration of the hidden state generated by previous computations. Different activation functions can be used for different network applications.

Computation of the hidden state and output
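Written out (a reconstruction of the equations in the figure above, with f and g standing for the chosen activation functions):

ℎ(𝑡) = f(𝑊ℎ ℎ(𝑡−1) + 𝑊𝑥 𝑥(𝑡) + 𝑏ℎ)
𝑦̂(𝑡) = g(𝑊𝑦 ℎ(𝑡) + 𝑏𝑦)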

Here’s one way to construct an RNN cell in PyTorch using tanh and softmax activations. The hidden state output can be used as an input to the next RNN cell.
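A minimal sketch of such a cell (the class and variable names here, such as BasicRNNCell, inputSize, and hiddenSize, are illustrative rather than the exact code from the original gist):

import torch
import torch.nn as nn

class BasicRNNCell(nn.Module):
    """A single RNN cell built explicitly from its weight and bias parameters."""
    def __init__(self, inputSize, hiddenSize, outputSize):
        super().__init__()
        # Weight matrices for the hidden state, input, and output
        self.Wh = nn.Parameter(torch.randn(hiddenSize, hiddenSize) * 0.01)
        self.Wx = nn.Parameter(torch.randn(hiddenSize, inputSize) * 0.01)
        self.Wy = nn.Parameter(torch.randn(outputSize, hiddenSize) * 0.01)
        # Bias vectors for the hidden state and output
        self.bh = nn.Parameter(torch.zeros(hiddenSize))
        self.by = nn.Parameter(torch.zeros(outputSize))

    def forward(self, x, hPrev):
        # New hidden state from the current input and the previous hidden state
        h = torch.tanh(self.Wh @ hPrev + self.Wx @ x + self.bh)
        # Output computed from the new hidden state
        yHat = torch.softmax(self.Wy @ h + self.by, dim=0)
        return yHat, h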

But PyTorch comes with RNNCell and RNN classes that create a single RNN cell or a multilayer RNN network, respectively, allowing you to use RNNs without constructing all of the parameters shown above (note that PyTorch uses separate input and hidden-state bias vectors rather than a single bias vector). The equivalent call for our cell above is just RNNCell(inputSize, hiddenSize, bias=True, nonlinearity='tanh'); RNNCell computes only the new hidden state, so the output layer is handled separately.
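For example, driving an nn.RNNCell over a sequence one time step at a time might look like this (the sizes are arbitrary placeholders):

import torch
import torch.nn as nn

inputSize, hiddenSize = 1, 32
cell = nn.RNNCell(inputSize, hiddenSize, bias=True, nonlinearity='tanh')

x = torch.randn(5, 3, inputSize)   # (sequenceLength, batchSize, features)
h = torch.zeros(3, hiddenSize)     # initial hidden state for the batch
for t in range(x.size(0)):
    h = cell(x[t], h)              # carry the hidden state forward to the next step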

Back-propagation

The network parameters are updated in the same way as in a normal feed-forward network, by taking the gradient of the loss with respect to each parameter in the network, something known as back-propagation through time (BPTT) in RNNs. PyTorch’s standard loss.backward() call accomplishes this for you, but to summarize, the gradient of the loss L with respect to the hidden state at the final time τ is shown below, where o is the output.
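Since the final hidden state influences the loss only through the final output, the chain rule gives (a reconstruction in the notation above):

∂L/∂ℎ(τ) = 𝑊𝑦ᵀ ∂L/∂o(τ)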

For a more detailed explanation see the Deep Learning text.

A simple example

To see the RNN in action, let’s create a basic RNN in PyTorch that will learn to predict the next value in a sine curve given a preceding sequence. First, create the training data.
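Something along these lines (the number of points and the span of the curve are assumptions):

import math
import torch

# Sample a sine curve at evenly spaced points to use as training data
numPoints = 1000
trainData = torch.sin(torch.linspace(0, 8 * math.pi, numPoints))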

Next use PyTorch’s Dataset/Dataloader class to create a dataset that will randomly return a sequence from our training data as well as the next data point in the sequence. The Dataset/Dataloader functions in PyTorch are very powerful ways to let PyTorch handle the feeding of inputs and targets to your network for training. The __getitem__ method below will return a vector of data points in sequence and the target next point y.
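A sketch of such a dataset (the class name SineDataset and the exact slicing are assumptions; the random ordering comes from the DataLoader’s shuffling):

from torch.utils.data import Dataset

class SineDataset(Dataset):
    """Returns (sequence, next value) pairs drawn from a 1-D series."""
    def __init__(self, data, sequenceLength):
        self.data = data
        self.sequenceLength = sequenceLength

    def __len__(self):
        # The last usable start index leaves room for a full sequence plus its target
        return len(self.data) - self.sequenceLength

    def __getitem__(self, idx):
        # x: a window of consecutive points, shaped (sequenceLength, 1) for the RNN
        x = self.data[idx:idx + self.sequenceLength].unsqueeze(-1)
        # y: the single point that immediately follows the window
        y = self.data[idx + self.sequenceLength].unsqueeze(-1)
        return x, y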

We’ll use the following parameters for the network and training.
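For example (these particular values are placeholders rather than the exact settings from the original gist):

# Network and training parameters (illustrative values)
inputSize = 1          # one scalar feature per time step
hiddenSize = 32        # size of the RNN hidden state
sequenceLength = 20    # number of points fed to the network per sample
batchSize = 16
numEpochs = 100
learningRate = 0.01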

Let’s check that the data loader returns what we expect.
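Something like this, assuming the trainData, SineDataset, and parameters defined above:

from torch.utils.data import DataLoader

dataset = SineDataset(trainData, sequenceLength)
trainLoader = DataLoader(dataset, batch_size=batchSize, shuffle=True)

# One batch should be (batchSize, sequenceLength, 1) inputs and (batchSize, 1) targets
x, y = next(iter(trainLoader))
print(x.shape, y.shape)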

Now we’ll create the RNN using PyTorch’s RNN class followed by a linear layer. Since batch_first=True, the inputs and outputs of the RNN have shape (batchSize, sequenceLength, features). Since we’re predicting just a scalar value, our feature size is 1. We’ll use MSELoss as the loss function and an Adam optimizer.
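A sketch of the model, loss, and optimizer (the class name SineRNN is illustrative):

import torch
import torch.nn as nn

class SineRNN(nn.Module):
    def __init__(self, inputSize, hiddenSize):
        super().__init__()
        # batch_first=True so inputs and outputs are (batchSize, sequenceLength, features)
        self.rnn = nn.RNN(inputSize, hiddenSize, batch_first=True)
        # The linear layer maps the final hidden output to a single predicted value
        self.linear = nn.Linear(hiddenSize, 1)

    def forward(self, x, h0):
        out, hn = self.rnn(x, h0)
        # Use the output at the last time step to predict the next point
        return self.linear(out[:, -1, :]), hn

model = SineRNN(inputSize, hiddenSize)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learningRate)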

A quick check that everything is working as expected: we should get a vector of size batchSize x features as the output.
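For instance, with the model and loader above:

x, y = next(iter(trainLoader))
h0 = torch.zeros(1, x.size(0), hiddenSize)   # (numLayers, batchSize, hiddenSize)
pred, hn = model(x, h0)
print(pred.shape)                            # should be (batchSize, 1)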

Now we can train the model and graph the loss for each of the 100 epochs in our training. We initialize the hidden state as an array of zeros at the beginning of each sequence.
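A sketch of that training loop (plotting kept minimal):

import matplotlib.pyplot as plt

losses = []
for epoch in range(numEpochs):
    epochLoss = 0.0
    for x, y in trainLoader:
        # Fresh zero hidden state at the start of each batch of sequences
        h0 = torch.zeros(1, x.size(0), hiddenSize)
        optimizer.zero_grad()
        pred, _ = model(x, h0)
        loss = criterion(pred, y)
        loss.backward()      # back-propagation through time
        optimizer.step()
        epochLoss += loss.item()
    losses.append(epochLoss / len(trainLoader))

plt.plot(losses)
plt.xlabel('Epoch')
plt.ylabel('Training loss')
plt.show()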

Training loss

This is just a simple example, so there’s no test data, but we can take a look at the network’s prediction for one sequence in the training data. First, print the sequence to be fed to the network as well as the correct next element; then call the model on the sequence and see what the predicted output is.
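For example (the particular sequence used in the original post isn’t shown, so the index here is arbitrary and the printed values will differ from those below):

seq, target = dataset[0]
print(seq.squeeze())   # the input sequence
print(target.item())   # the correct next value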

The correct output should be 0.0510. Let’s see how the model does.
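Calling the model on that sequence (after adding a batch dimension) might look like:

with torch.no_grad():
    h0 = torch.zeros(1, 1, hiddenSize)        # (numLayers, batchSize, hiddenSize)
    pred, _ = model(seq.unsqueeze(0), h0)     # input shape (1, sequenceLength, 1)
print(pred.item())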

The model predicted 0.0523, not too bad for a simple network!

This covers the basics of an RNN. Next, we’ll look at Long Short-Term Memory (LSTM) networks and why they’re better suited than vanilla RNNs for certain applications. If you want to run the code yourself, I’ve included the gist below.

If you have any questions, leave them in the comments below.
