Experimenting with Neural Networks, JavaScript & the Strava API

An Introduction

Advances in Artificial Intelligence have been in the news quite frequently lately, and they inspired me to start checking out the algorithmic backbone behind many of these breakthroughs. Deep learning is a branch of machine learning built around models made of many layers. An artificial neural network, or ANN, is a model based loosely on how our brains learn. We interact with these algorithms every time we add a photo to Facebook and it recognizes specific faces, or when we ask Siri who that guy was in that movie.

There are three main parts that make up an artificial neural network.

  • The input layer takes data values and passes them into the next (hidden) layer. Each value gets multiplied by a weight and a bias is added (see the small sketch after this list). The first time through, the weights are completely random; they change over time to better fit the training data.
  • The hidden layer takes those inputs and transforms them into something the output layer can use.
  • The output layer collects the predictions made in the hidden layer and produces the final result: the model’s prediction.
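
As a toy illustration of that input-to-hidden step, here's what a single hidden neuron does with four inputs (the numbers below are made up):

```javascript
// One hidden neuron: multiply each input by its weight, add a bias,
// then pass the sum through an activation function (the sigmoid, covered below).
const inputs = [0.5, 0.8, 1, 0.6];      // made-up, already-normalized input values
const weights = [0.4, -0.2, 0.7, 0.1];  // made-up weights for this neuron
const bias = 0.05;

const weightedSum =
  inputs.reduce((total, value, i) => total + value * weights[i], 0) + bias;
// weightedSum then gets activated and handed along toward the output layer.
```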

To really learn how these algorithms work, I wanted to come up with my own prediction model. I'm currently in the beginning stages of training for a triathlon, so I thought it would be awesome to predict how good my run would be based on four factors, or inputs: the time of day I ran, the hours of sleep I got the night before, whether I worked out the day before, and the temperature outside. I realized I could easily pull my running data from Strava's API, so I started there and filled in the other data I would need. Below is a very basic visual representation of the neural network.

[Image: a basic visual of what the network would look like]

To keep it simple for now, I'm using two categories for my outputs: the run was either a bad one (0) or a great one (1). How close the output gets to either of these tells me how good my run is likely to be given the situation I'm in.

Overview of everything coming together

The first step is to normalize the input data so everything is on the same scale. To do this, there is a pretty simple formula that will do the trick.
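
As a sketch, the usual min-max scaling (assuming each input is scaled against the smallest and largest values I've logged for it) looks something like this:

```javascript
// Min-max normalization: scale a value into the 0-1 range based on the
// smallest and largest values seen for that particular input.
function normalize(value, min, max) {
  return (value - min) / (max - min);
}

// e.g. 6 hours of sleep, where my logged sleep ranges from 3 to 9 hours
const sleepNormalized = normalize(6, 3, 9); // 0.5
```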

[Image: the weights between the first input and hidden layer]

To figure out the starting weights for each of the lines going into the next layer, I used a Gaussian (normal) distribution to randomly generate them.
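
A common way to get normally distributed random numbers in JavaScript is the Box-Muller transform. Here's a rough sketch of how the initial weights could be generated (the three hidden neurons in the example are my own choice, not something fixed by the model):

```javascript
// Box-Muller transform: turn two uniform random numbers into one sample
// from a standard normal (Gaussian) distribution.
function randomGaussian() {
  const u1 = 1 - Math.random(); // avoid log(0)
  const u2 = Math.random();
  return Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
}

// Build a matrix (rows x cols) of random Gaussian weights,
// e.g. 3 hidden neurons each connected to the 4 inputs.
function randomWeights(rows, cols) {
  return Array.from({ length: rows }, () =>
    Array.from({ length: cols }, () => randomGaussian())
  );
}

const hiddenWeights = randomWeights(3, 4);    // input -> hidden
const outputWeights = randomWeights(1, 3)[0]; // hidden -> output
```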

I also needed an activation function to transform each layer's output before it moves on to the next layer. For this I used the sigmoid function, which looks like an “s” when graphed. The sigmoid introduces non-linearity into the network and bounds the output between 0 and 1, so it can be interpreted as a probability.
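
The sigmoid itself is a one-liner; its derivative is included here as well, since the back propagation step below relies on it:

```javascript
// Sigmoid activation: squashes any number into the range (0, 1).
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

// Derivative of the sigmoid, used when back propagating errors.
function sigmoidDerivative(x) {
  return sigmoid(x) * (1 - sigmoid(x));
}
```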

The learning part of this comes with the next step, called back propagation. This is where we find out how wrong our predictions are against the training data and adjust the weights accordingly. The first thing we have to do is work out how far off the output is from the expected output. Next, we get the weight changes by multiplying that error by the hidden layer results.
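
Stripped of the matrix bookkeeping, the idea looks roughly like this for the hidden-to-output weights (the function and variable names are mine, not from the actual code):

```javascript
// One simplified back propagation step for a single training example.
// `predicted` is the network's sigmoid output, `expected` is the 0 or 1
// label, and `hiddenResults` holds the hidden layer's activations.
function backPropagate(expected, predicted, hiddenResults, outputWeights, learningRate) {
  // Error scaled by the sigmoid's slope at the output
  // (predicted * (1 - predicted) is the sigmoid derivative here).
  const outputDelta = (expected - predicted) * predicted * (1 - predicted);

  // Nudge each hidden-to-output weight in proportion to how much its
  // hidden neuron contributed to the prediction.
  return outputWeights.map(
    (weight, i) => weight + learningRate * outputDelta * hiddenResults[i]
  );
}
```

The input-to-hidden weights get an analogous update, with the output error pushed back through the hidden layer first.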

This repeats over and over again (I set the iterations to 2,000) until the model is accurate enough to make predictions from a given set of inputs.
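
The training loop itself is just that repetition. A bare-bones version, where train() is a hypothetical stand-in for one forward pass plus one back propagation step and the training data is shortened and made up:

```javascript
// Repeat forward + backward passes over the training data until the
// weights settle. Each example's inputs are already normalized to 0-1.
const trainingData = [
  { inputs: [0.9, 0.2, 1, 0.8], expected: 0 }, // late, little sleep, worked out, hot
  { inputs: [0.2, 0.9, 0, 0.4], expected: 1 }, // early, full sleep, rested, mild
];

const ITERATIONS = 2000;
for (let i = 0; i < ITERATIONS; i++) {
  trainingData.forEach((example) => train(example)); // forward pass + back propagation
}
```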

Adding in some JavaScript

Steven Miller provides an awesome tutorial in JS on how to go about doing this — http://stevenmiller888.github.io/mind-how-to-build-a-neural-network-part-2/. He uses matrices to perform many of the math procedures, so I followed his guide to create my own ANN. Here's an example of the forward propagation.
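
A simplified, loop-based sketch of what that forward pass does (mine, not the matrix-based code from his tutorial), using the sigmoid and the randomly initialized hiddenWeights and outputWeights from above, with biases left out for brevity:

```javascript
// Forward propagation: push the four normalized inputs through the hidden
// layer and then through the single output neuron.
function forwardPropagate(inputs, hiddenWeights, outputWeights) {
  // Weighted sum + sigmoid for each hidden neuron.
  const hiddenResults = hiddenWeights.map((neuronWeights) =>
    sigmoid(
      neuronWeights.reduce((total, weight, i) => total + weight * inputs[i], 0)
    )
  );

  // Weighted sum + sigmoid for the output neuron: a value between 0 and 1.
  return sigmoid(
    outputWeights.reduce((total, weight, i) => total + weight * hiddenResults[i], 0)
  );
}
```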

The data I’m using is just for proof of concept. In the future, I’m looking to do a much better job of tracking my runs, but in the meantime I wanted to test the training data I had against two real-life scenarios.

My first test was a hypothetical situation where I was going for a run super late at night, on about 3 hours of sleep, coming off of a workout from yesterday and it was super hot out. I logged my result to the console and what I got was “First Test — 0.0010355340339641495”. Since this number is super close to 0, the prediction is that, based on this scenario, my run is going to be pretty crappy. As a note, this number will change based on the random initial weights given to the model the first time it runs.

The second test involved a much better running situation, pretty much the opposite of the one above. It’s early in the morning, I got my full eight hours of sleep, I didn’t work out the day before and it’s a mild 60 degrees out. What I got was “Second Test — 0.9999998243122006”, which means this was about to be a Forrest Gump type run.
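
For reference, feeding those two scenarios into the trained network looks roughly like this once the raw values are normalized (the normalized numbers here are approximations for illustration, not the actual values from my data):

```javascript
// Inputs: [time of day, hours of sleep, worked out yesterday, temperature],
// each normalized to the 0-1 range before hitting the network.
const firstTest = forwardPropagate([0.95, 0.35, 1, 0.9], hiddenWeights, outputWeights);
const secondTest = forwardPropagate([0.25, 1, 0, 0.5], hiddenWeights, outputWeights);

console.log('First Test — ' + firstTest);   // near 0: a rough run ahead
console.log('Second Test — ' + secondTest); // near 1: a great run ahead
```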

Conclusion

While this is all just a proof of concept, I think it would be awesome to know what types of situations affect the outcome of my workouts the most, and I plan on building this out in the future. In the meantime, you can check out my very rough code here - https://github.com/jzeoli/strava-neural-network.