Coding a Recurrent Neural Network (RNN) from scratch using Pytorch

Diego Velez
6 min read · Jun 24, 2022


This blog was originally posted on the Solardevs website: https://solardevs.com/blog/coding-a-recurrent-neural-network-rnn-from-scratch-using-pytorch/

In this blog I will show you how to create an RNN layer from scratch using PyTorch.

💡 You can find the complete code for this blog at https://gist.github.com/Dvelezs94/dc34d1947ba6d3eb77c0d70328bfe03f

We won’t be using the native RNN layer; instead, we will create our own. This is helpful to really understand how RNNs work and the internal computations they perform.

For this I will assume you already know what RNNs are, why they are used, and what their limitations are (the ones that LSTM and GRU were designed to address).

Topics we will cover:

  1. Creation of RNN layer
  2. Shapes and tensor dimensions
  3. Training process with batches

RNN vs Feedforward Architecture

Personally, I find it easier to understand RNNs when I compare them to feedforward networks: feedforward is a known concept, so I can build the new ideas on top of previous knowledge. For that reason, I will be comparing the two often.

Unlike feedforward networks, an RNN’s machinery is a bit more complex. Inside a single RNN layer we have 3 weight matrices, as well as 2 input tensors and 2 output tensors.

Fig 1. Feedforward vs RNN number of components
Fig 2. Top: Feedforward Layer architecture. Bottom: RNN Layer architecture

People often say “RNNs are simply feedforward networks with an internal state”, but with this simple diagram we can see it’s not quite that straightforward. The components of a recurrent net are more involved, but don’t worry: I will try to explain how this works, and hopefully by reading the code you will be able to understand it.

RNN Layer Architecture

Recurrent Nets introduce a new concept called the “hidden state”, which is simply another input built from the outputs of previous time steps. But wait, if it is built from previous outputs, how do I get it for the first run? Simple, just initialize it with zeros.

RNNs are fed differently than feedforward networks. Because we are working with sequences, the order in which we input the data matters, which is why we feed the network a single item of the sequence at a time. For example, if the data is a stock price, we input the price for one day at a time. If it is text, we enter a single letter or word each time.

We enter one step at a time because we need to compute the hidden state on each iteration. This hidden state holds the information from previous steps, so when we feed the next step of the sequence, the network combines (sums) the projected current input with the projected previous hidden state (see Fig 2 above).
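To make that loop concrete, here is a minimal, self-contained sketch of the idea. The toy sizes and random weights are my own choices, purely for illustration; the real layer is built below with nn.Linear:

import torch

# Toy example: a sequence of 5 steps, 1 feature per step, 4 hidden units
sequence = torch.rand(5, 1)
W_ih = torch.rand(1, 4)  # "Input Dense" weights
W_hh = torch.rand(4, 4)  # "Hidden Dense" weights

hidden = torch.zeros(1, 4)  # first run: the hidden state starts filled with zeros
for step in sequence:       # feed one item of the sequence at a time
    x = step.reshape(1, 1)
    # the new hidden state sums the projected input with the projected previous hidden state
    hidden = torch.tanh(x @ W_ih + hidden @ W_hh)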

Inputs

Input tensor: this tensor holds a single step of the sequence. If your total sequence is, for example, 100 characters of text, then the input is a single character of text.

Hidden state tensor: this tensor is the hidden state. Remember that for the first run of each entire sequence, this tensor is filled with zeros. Following the example above, if you have 10 sequences of 100 characters each (a text of 1,000 characters in total), then for each sequence you will generate a fresh hidden state filled with zeros.

Weight Matrices

Input Dense: dense matrix that projects the input (just like in a feedforward layer).

Hidden Dense: dense matrix that projects the hidden state input.

Output Dense: dense matrix applied to the result of activation(input_dense + hidden_dense) to produce the output.

Outputs

New hidden state: the new hidden state tensor, computed as activation(input_dense + hidden_dense). You will use it as the hidden state input on the next step of the sequence.

Output: activation(output_dense). This is your prediction vector, the equivalent of a feedforward network’s output prediction vector. (In the code below the final activation is left out, because PyTorch’s CrossEntropyLoss applies it internally.)

RNN Layer Code

import torch
import torch.nn as nn


class RNN(nn.Module):
    """
    Basic RNN block. This represents a single layer of RNN
    """
    def __init__(self, input_size: int, hidden_size: int, output_size: int, batch_size: int = 1) -> None:
        """
        input_size: Number of features of your input vector
        hidden_size: Number of hidden neurons
        output_size: Number of features of your output vector
        batch_size: Number of sequences processed in parallel (used by the training loop)
        """
        super().__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.batch_size = batch_size
        self.i2h = nn.Linear(input_size, hidden_size, bias=False)  # Input Dense
        self.h2h = nn.Linear(hidden_size, hidden_size)             # Hidden Dense
        self.h2o = nn.Linear(hidden_size, output_size)             # Output Dense

    def forward(self, x, hidden_state) -> tuple[torch.Tensor, torch.Tensor]:
        """
        Returns computed output and tanh(i2h + h2h)
        Inputs
        ------
        x: Input vector
        hidden_state: Previous hidden state
        Outputs
        -------
        out: Linear output (no activation here; nn.CrossEntropyLoss applies it internally)
        hidden_state: New hidden state matrix
        """
        x = self.i2h(x)
        hidden_state = self.h2h(hidden_state)
        hidden_state = torch.tanh(x + hidden_state)
        out = self.h2o(hidden_state)
        return out, hidden_state

    def init_zero_hidden(self, batch_size=1) -> torch.Tensor:
        """
        Helper function.
        Returns a hidden state with specified batch size. Defaults to 1
        """
        return torch.zeros(batch_size, self.hidden_size, requires_grad=False)

Shapes and Dimensions

Fig 3. Inputs, weights and outputs shapes

Dimensions resulting from each matrix dot product (yellow indicators)

  1. batch_size x hidden_size
  2. batch_size x hidden_size
  3. batch_size x output_size
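As a quick sanity check of the three shapes above, you can instantiate the layer with some arbitrary sizes (the numbers below are just for illustration) and run a single forward step:

batch_size, input_size, hidden_size, output_size = 64, 1, 256, 30

rnn = RNN(input_size, hidden_size, output_size, batch_size=batch_size)
x = torch.rand(batch_size, input_size)     # one step of the sequence, for every element in the batch
hidden = rnn.init_zero_hidden(batch_size)  # (batch_size, hidden_size), filled with zeros

out, new_hidden = rnn(x, hidden)
print(rnn.i2h(x).shape)   # 1. torch.Size([64, 256]) -> batch_size x hidden_size
print(new_hidden.shape)   # 2. torch.Size([64, 256]) -> batch_size x hidden_size
print(out.shape)          # 3. torch.Size([64, 30])  -> batch_size x output_size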

Training with batches

Feeding a neural network in batches always computes much faster (easily 10x), and RNNs are no exception. Training with batches will not improve the model’s accuracy in any way, though, so if your network doesn’t learn from a single training example at a time, it won’t learn from 10 or 100 either.

The RNN I show as an example is trained on text, one character at a time, so the training function feeds 1 character of the whole text at each step. Doing this with batches saves a ton of time, because several sequences are processed in parallel on every epoch.

Each training epoch goes through the whole text, and within it I iterate over each character. After iterating through a whole sequence, I compute the loss and the gradients, then optimize the parameters with optimizer.step(). It is worth mentioning that gradient clipping is useful for this type of RNN.

After each epoch I also generate text to see how the network is doing and improving.
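Before looking at the training function, here is a rough sketch of how a text can be turned into batches of character indices. The TextDataset class below is my own illustration (its name and details are assumptions), not the exact dataset from the linked gist:

import torch
from torch.utils.data import Dataset, DataLoader

class TextDataset(Dataset):
    """Illustrative dataset: cuts a text into fixed-length sequences of character indices."""
    def __init__(self, text: str, seq_length: int = 25):
        self.chars = sorted(set(text))
        self.char_to_idx = {c: i for i, c in enumerate(self.chars)}
        self.seq_length = seq_length
        # every character becomes its numeric index
        self.data = torch.tensor([self.char_to_idx[c] for c in text], dtype=torch.float)

    def __len__(self):
        return (len(self.data) - 1) // self.seq_length

    def __getitem__(self, idx):
        start = idx * self.seq_length
        x = self.data[start:start + self.seq_length]          # input characters
        y = self.data[start + 1:start + self.seq_length + 1]  # the same sequence shifted by one (targets)
        return x, y

With batches shaped (batch_size, seq_length), the training function below can iterate over each character position.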

def train(model: RNN, data: DataLoader, epochs: int, optimizer: optim.Optimizer, loss_fn: nn.Module) -> None:
    """
    Trains the model for the specified number of epochs
    Inputs
    ------
    model: RNN model to train
    data: Iterable DataLoader
    epochs: Number of epochs to train the model
    optimizer: Optimizer to use for each epoch
    loss_fn: Function to calculate loss
    """
    train_losses = {}
    model.to(device)

    model.train()
    print("=> Starting training")
    for epoch in range(epochs):
        epoch_losses = list()
        for X, Y in data:
            # skip last batch if it doesn't match the batch_size
            if X.shape[0] != model.batch_size:
                continue
            # send tensors to device
            hidden = model.init_zero_hidden(batch_size=model.batch_size)
            X, Y, hidden = X.to(device), Y.to(device), hidden.to(device)
            # clear gradients
            model.zero_grad()
            loss = 0
            for c in range(X.shape[1]):
                out, hidden = model(X[:, c].reshape(X.shape[0], 1), hidden)
                l = loss_fn(out, Y[:, c].long())
                loss += l
            # Compute gradients
            loss.backward()
            # Adjust learnable parameters
            # clip as well to avoid exploding gradients
            nn.utils.clip_grad_norm_(model.parameters(), 3)
            optimizer.step()

            epoch_losses.append(loss.detach().item() / X.shape[1])
        train_losses[epoch] = torch.tensor(epoch_losses).mean()
        print(f'=> epoch: {epoch + 1}, loss: {train_losses[epoch]}')
        # after each epoch generate text
        print(generate_text(model, data.dataset))
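For completeness, this is roughly how everything could be wired together. The hyperparameters are arbitrary, TextDataset is the sketch from the previous section, and the generate_text helper called inside train lives in the full gist, so treat this as a hedged sketch rather than the exact setup:

import torch.optim as optim

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

text = open("input.txt").read()                # any text file works
dataset = TextDataset(text, seq_length=25)
dataloader = DataLoader(dataset, batch_size=64, shuffle=True)

# input_size=1 because we feed one character index per step
model = RNN(input_size=1, hidden_size=256, output_size=len(dataset.chars), batch_size=64)
optimizer = optim.Adam(model.parameters(), lr=0.001)
loss_fn = nn.CrossEntropyLoss()                # expects the raw logits returned by h2o

train(model, dataloader, epochs=10, optimizer=optimizer, loss_fn=loss_fn)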

Again, I recommend checking out the complete code and playing with it. You can feed it any text file, and you will see it improve on each iteration. Play with the parameters and see how it behaves. Also try setting batch_size to 1 so you can see how slow it is compared to 64.

Manually coding this really helps you understand the underlying operations and workflow. It is also very satisfying to watch the RNN learn from the text and generate cool text of its own.
