Implementation of Gradient Descent in Python

Deepak Battini
Coinmonks


Every machine learning engineer is always looking to improve their model’s performance. This is where optimization, one of the most important fields in machine learning, comes in. Optimization lets us select the best parameters for the machine learning algorithm or method we are applying to our problem. There are several types of optimization algorithms; perhaps the most popular is Gradient Descent. Many machine learning engineers first encounter Gradient Descent in their introduction to neural networks. In this tutorial, we will show you how to implement Gradient Descent from scratch in Python. But first, what exactly is Gradient Descent?

What is Gradient Descent?

Gradient Descent is an optimization algorithm that helps machine learning models converge to a minimum value through repeated steps. Essentially, gradient descent minimizes a function by finding the input value that produces the lowest output of that function. In machine learning, this function is usually a loss function, which measures how far our model’s predictions are from the actual values. Naturally, we want to reduce this loss, and one way to do so is via Gradient Descent.

A simple Gradient Descent algorithm is as follows:

  1. Obtain a function to minimize F(x)
  2. Initialize a value x from which to start the descent or optimization from
  3. Specify a learning rate…
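The steps above can be sketched in a few lines of Python. As a minimal, hypothetical example, assume the function to minimize is F(x) = x², whose derivative (gradient) is F'(x) = 2x; the function and parameter names below are illustrative, not part of the original article:

```python
def gradient_descent(grad, start, learning_rate=0.1, n_iters=100):
    """Repeatedly step opposite the gradient to descend toward a minimum.

    grad          -- function returning the derivative F'(x) at a point x
    start         -- initial value of x to start the descent from
    learning_rate -- size of each step along the negative gradient
    n_iters       -- number of update steps to perform
    """
    x = start
    for _ in range(n_iters):
        x = x - learning_rate * grad(x)  # update rule: x := x - lr * F'(x)
    return x


# Example: minimize F(x) = x^2, whose gradient is F'(x) = 2x.
# The true minimum is at x = 0.
minimum = gradient_descent(grad=lambda x: 2 * x, start=10.0)
```

Each iteration moves `x` a small step in the direction of steepest decrease; with a sensible learning rate, the estimate approaches the true minimum at x = 0.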
