Time series prediction using a simple RNN

Guru Prasad Natarajan · Mindboard · Mar 19, 2019 · 3 min read

The field’s obsession with image classification tasks means most deep learning tutorials focus on the more complex convolutional neural networks. This is great if you’re into that sort of thing; however, if you’re more interested in data with a time dimension, then recurrent neural networks (RNNs) come in handy.

Now, whilst there are lots of public research papers and articles on LSTMs, pretty much all of them deal with the theoretical workings and the math behind them, and the examples they give don’t really show the predictive, look-ahead power of LSTMs on an actual time series. Again, all great if you’re looking to learn the intricate workings of LSTMs, but not ideal if you just want to get something up and running.

Let’s start with the most basic time series we can think of: a standard sine wave. We’ll create data covering many oscillations of this function for the network to train over.

Creating a sine function

First, we create a function that generates a sine wave, with or without noise. Using this function, we generate a sine wave with no noise. As this sine wave is completely deterministic, we should be able to build a model that perfectly predicts the next value of the wave given the previous values!
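The post doesn’t include the code, but a minimal sketch of such a generator might look like this (the function name, defaults, and seed handling are assumptions, not the author’s code):

```python
import numpy as np

def gen_sine_wave(n_cycles=500, period=10, noise_std=0.0, seed=0):
    """Generate a sine wave sampled once per time step.

    n_cycles  -- number of full oscillations
    period    -- samples per oscillation
    noise_std -- standard deviation of optional Gaussian noise
                 (0.0 gives a fully deterministic wave)
    """
    t = np.arange(n_cycles * period)
    wave = np.sin(2.0 * np.pi * t / period)
    if noise_std > 0:
        rng = np.random.default_rng(seed)
        wave += rng.normal(scale=noise_std, size=wave.shape)
    return wave.astype(np.float32)

wave = gen_sine_wave()  # noise-free, period 10, 500 cycles
```

With `noise_std=0.0` the wave is exactly periodic, which is what makes perfect prediction possible later on.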

Here we generate a period-10 sine wave, repeating itself 500 times, and plot the first few cycles.

Sine wave

The next step is to create training and testing data. Here, the much-debated “length of time series” parameter comes into play: this is the lookback window, and for now we set it to 2.
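A sliding-window split along these lines would produce the training data (the helper name and the 80/20 split are assumptions; the post doesn’t say how the data was divided):

```python
import numpy as np

def make_windows(series, lookback=2):
    """Turn a 1-D series into (samples, lookback, 1) inputs
    and next-step scalar targets, via a sliding window."""
    X = np.stack([series[i:i + lookback]
                  for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., None], y  # add a trailing feature axis for the RNN

series = np.sin(2.0 * np.pi * np.arange(5000) / 10)  # period-10 sine wave
X, y = make_windows(series, lookback=2)

# 80/20 train/test split (an assumption, not the author's setting)
split = int(0.8 * len(X))
X_train, y_train = X[:split], y[:split]
X_test, y_test = X[split:], y[split:]
```

Each input sample is the two previous wave values, and the target is the value that immediately follows.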

Simple RNN model

As our deep learning model, we consider the simplest possible RNN: a single recurrent hidden unit followed by a fully connected layer with a single unit.

The RNN layer contains 3 weights: 1 weight for the input, 1 weight for the recurrent connection on the hidden unit, and 1 weight for the bias.

The fully connected layer contains 2 weights: 1 weight for its input (i.e., the output of the RNN layer) and 1 weight for the bias.

In total, there are only 5 weights in this model.
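Assuming the model was built in Keras (the post doesn’t show the code), the weight count can be verified directly:

```python
from tensorflow import keras

# Simplest possible RNN: one recurrent unit, then one dense unit.
model = keras.Sequential([
    keras.layers.Input(shape=(2, 1)),  # lookback window of 2, one feature
    keras.layers.SimpleRNN(1),         # 1 input + 1 recurrent + 1 bias = 3 weights
    keras.layers.Dense(1),             # 1 input + 1 bias = 2 weights
])
model.compile(optimizer="adam", loss="mse")

print(model.count_params())  # → 5
```

The `input_shape` of `(2, 1)` corresponds to the lookback window of 2 with a single feature per time step.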

Training Result

The validation loss and training loss are exactly the same because our training data is a sine wave with no noise. Both the validation and training data contain identical period-10 sine waves (just with different numbers of cycles). The final validation loss is less than 0.001.
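An end-to-end training sketch under the same assumptions (Keras, lookback of 2, noise-free period-10 wave; the epoch count, batch size, and validation split are guesses, not the author’s settings):

```python
import numpy as np
from tensorflow import keras

# Noise-free period-10 sine wave, 500 cycles, lookback-2 windows.
series = np.sin(2.0 * np.pi * np.arange(5000) / 10).astype("float32")
lookback = 2
X = np.stack([series[i:i + lookback]
              for i in range(len(series) - lookback)])[..., None]
y = series[lookback:]

model = keras.Sequential([
    keras.layers.Input(shape=(lookback, 1)),
    keras.layers.SimpleRNN(1),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# validation_split holds out the last 20% of windows -- still the
# same deterministic wave, so training and validation losses track
# each other closely, as the post observes.
history = model.fit(X, y, epochs=20, batch_size=32,
                    validation_split=0.2, verbose=0)
print(history.history["val_loss"][-1])
```

Because the held-out windows come from the identical noise-free wave, the validation loss mirrors the training loss rather than measuring generalization to new data.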

Actual vs Predicted graph

The plots of the true and predicted sine waves look nearly identical.

Actual vs Predicted

The best way to understand the RNN model is to create a model from scratch. We will be doing that in the upcoming post.

Masala.AI
The Mindboard Data Science Team explores cutting-edge technologies in innovative ways to provide original solutions, including the Masala.AI product line. Masala provides media content rating services such as vRate, a browser extension that detects and blocks mature content with custom sensitivity settings. The vRate browser extension is available for download via the Chrome Web Store. Check out www.masala.ai for more info.
