Predicting Bitcoin prices using linear regression and gradient descent.

In this article I'm going to show how gradient descent combined with linear regression works, using bitcoin prices and the number of Google searches for bitcoin as data.


Let's assume there is a relationship between how many times bitcoin is searched on Google and its price. If we plot a graph where the x-axis is the number of searches and the y-axis is the price, we can see a linear pattern: when bitcoin gets more searches, the price increases.

The y-axis is the price in US$ and the x-axis is the number of searches.

The point of this article is to show how gradient descent works, not to produce an accurate bitcoin price prediction.

We could draw a line that follows the linear pattern, but in order to have an accurate prediction the line has to be close to the data points.

Drawing the line with the equation:

y = mx + b

Where m is the slope of the line and b is where the line intercepts the y-axis. But how can we find optimal values for them? There is a technique called gradient descent that will help us find them.

What is gradient descent?

Gradient descent is an optimization technique that reduces the value of an error function; here the error function is the sum of squared errors.

Imagine this line was drawn with guessed values for `m` and `b`. The error is the sum of the squared distances from the points to the line, divided by the number of points.
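With N data points (xᵢ, yᵢ), this error can be written as:

Error(m, b) = (1/N) · Σ (yᵢ − (m·xᵢ + b))²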

How can we decrease the error?

We could blindly guess values for m and b and check if the error decreased, but this approach is not practical.
If we calculate the partial derivatives we will know how fast the error is changing, so we can update the values intelligently. Think of it as a ball rolling downhill, where the bottom is where we want to reach and the derivative tells us how to update the values in order to get closer to the bottom.
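For the error above, the partial derivatives with respect to `m` and `b` are:

∂Error/∂m = −(2/N) · Σ xᵢ · (yᵢ − (m·xᵢ + b))

∂Error/∂b = −(2/N) · Σ (yᵢ − (m·xᵢ + b))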

In our case the x-axis of this picture would be `m` or `b`, not w.

Finding `m` and `b` with Python and numpy.

What we are going to do:

  1. Extract Bitcoin prices and number of Google searches.
  2. Match each price with its search date.
  3. Set the hyperparameters: number of epochs, learning rate and initial values for m and b.
  4. Calculate the error to check that our algorithm is learning.
  5. Calculate the gradient: the partial derivative with respect to m and the partial derivative with respect to b.
  6. Multiply the gradient by the learning rate and subtract it from the current m and b.

Extract Bitcoin prices and the number of Google searches, and match each price with its search date.
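A minimal sketch of this step, assuming the prices and search counts live in two CSV files with a shared date column (the file and column names below are hypothetical, not taken from the notebook):

```python
import pandas as pd

# Hypothetical file names and columns: daily bitcoin prices and daily
# Google search counts for "bitcoin", both keyed by date.
prices = pd.read_csv("bitcoin_prices.csv")      # columns: date, price
searches = pd.read_csv("google_searches.csv")   # columns: date, searches

# Match each price with the number of searches on the same date.
data = prices.merge(searches, on="date")

x = data["searches"].values   # number of searches (x-axis)
y = data["price"].values      # price in US$ (y-axis)
```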

Set the hyperparameters: number of epochs, learning rate and initial values for m and b.
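For example (the exact values below are placeholders, not necessarily the ones used in the notebook):

```python
epochs = 1000            # how many gradient descent updates we run
learning_rate = 0.0001   # how big each update step is
m = 0.0                  # initial guess for the slope
b = 0.0                  # initial guess for the intercept
```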

Calculate the error to check that our algorithm is learning.
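A sketch of the error function, assuming `x` and `y` are numpy arrays:

```python
import numpy as np

def compute_error(m, b, x, y):
    """Mean of the squared distances between the points and the line."""
    predictions = m * x + b
    return np.mean((y - predictions) ** 2)
```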

Calculate the gradient: the partial derivative with respect to m and the partial derivative with respect to b, then multiply by the learning rate and subtract the result from the current m and b.
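One update step could look like this (a sketch; the function name is mine, not necessarily the one in the notebook):

```python
import numpy as np

def gradient_step(m, b, x, y, learning_rate):
    """Perform one gradient descent update on m and b."""
    n = len(x)
    predictions = m * x + b
    # Partial derivatives of the error with respect to m and b.
    m_gradient = -(2 / n) * np.sum(x * (y - predictions))
    b_gradient = -(2 / n) * np.sum(y - predictions)
    # Move against the gradient, scaled by the learning rate.
    new_m = m - learning_rate * m_gradient
    new_b = b - learning_rate * b_gradient
    return new_m, new_b
```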

Putting everything together
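Roughly, the training loop ties the pieces above together like this (again a sketch, using the helper functions defined earlier):

```python
def gradient_descent(x, y, m, b, learning_rate, epochs):
    for epoch in range(epochs):
        m, b = gradient_step(m, b, x, y, learning_rate)
        if epoch % 100 == 0:
            # Print the error periodically to check that it is decreasing.
            print(f"epoch {epoch}: error = {compute_error(m, b, x, y):.4f}")
    return m, b

m, b = gradient_descent(x, y, m, b, learning_rate, epochs)
```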

Drawing a line that best fits our data

With optimal values of `m` and `b` we can draw a line that is closer to the points.
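For instance, with matplotlib and the fitted `m` and `b`:

```python
import matplotlib.pyplot as plt

plt.scatter(x, y, label="data")                            # searches vs. price
plt.plot(x, m * x + b, color="red", label="fitted line")   # y = mx + b
plt.xlabel("number of searches")
plt.ylabel("price (US$)")
plt.legend()
plt.show()
```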

Thanks to gradient descent we were able to update the values in the right direction and reduce the error, resulting in a line that fits our data. You can find the complete notebook on GitHub.
