TDS Archive

An archive of data science, data analytics, data engineering, machine learning, and artificial intelligence writing from the former Towards Data Science Medium publication.


Implementing the Steepest Descent Algorithm in Python from Scratch


Image by author.

Table of contents

  1. Introduction
  2. The steepest descent algorithm
    2.1 The search direction
    2.2 The step size
    2.3 The algorithm
  3. Implementation
    3.1 Constant step size
    3.2 Line search with the Armijo condition
  4. Conclusions

1. Introduction

Optimization is the process of finding the set of variables x that minimizes or maximizes an objective function f(x). Since maximizing a function is equivalent to minimizing its negative, max f(x) = −min (−f(x)), we may focus on minimization problems alone:

min f(x) over x

For our example, let us define a quadratic, multivariable objective function f(x) as follows:

f(x) = 0.5 (x₁ − 4.5)² + 2.5 (x₂ − 2.3)²

Its gradient ∇f(x) is

∇f(x) = [x₁ − 4.5, 5 (x₂ − 2.3)]ᵀ

import numpy as np

def f(x):
    '''Objective function'''
    return 0.5*(x[0] - 4.5)**2 + 2.5*(x[1] - 2.3)**2

def df(x):
    '''Gradient of the objective function'''
    return np.array([x[0] - 4.5, 5*(x[1] - 2.3)])
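As a quick sanity check (not part of the original article), we can verify the analytic gradient against a central finite-difference approximation; the helper name numerical_gradient below is an illustrative choice:

    import numpy as np

    def f(x):
        '''Objective function'''
        return 0.5*(x[0] - 4.5)**2 + 2.5*(x[1] - 2.3)**2

    def df(x):
        '''Gradient of the objective function'''
        return np.array([x[0] - 4.5, 5*(x[1] - 2.3)])

    def numerical_gradient(func, x, h=1e-6):
        '''Central finite differences along each coordinate.'''
        grad = np.zeros_like(x, dtype=float)
        for i in range(len(x)):
            e = np.zeros_like(x, dtype=float)
            e[i] = h
            grad[i] = (func(x + e) - func(x - e)) / (2 * h)
        return grad

    x0 = np.array([1.0, 1.0])
    print(df(x0))                     # analytic gradient: [-3.5, -6.5]
    print(numerical_gradient(f, x0))  # should agree to ~6 digits

If the two disagree, the hand-derived gradient is wrong — a cheap test that catches most derivation slips before running the optimizer.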

As a reference, one may use the scipy.optimize.minimize function from the SciPy library to find the optimum:

from scipy.optimize import minimize

result = minimize(
    f, np.zeros(2), method='trust-constr', jac=df)

result.x
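Sections 3.1 and 3.2 of the table of contents cover the from-scratch implementation. As a sketch of where this is headed (the function names, the constant step size 0.1, the tolerance, and the Armijo constants below are illustrative choices, not the article's definitive values), steepest descent with a constant step size and with Armijo backtracking might look like:

    import numpy as np

    def f(x):
        '''Objective function'''
        return 0.5*(x[0] - 4.5)**2 + 2.5*(x[1] - 2.3)**2

    def df(x):
        '''Gradient of the objective function'''
        return np.array([x[0] - 4.5, 5*(x[1] - 2.3)])

    def steepest_descent(gradient, x0, alpha=0.1, tol=1e-8, max_iter=10_000):
        '''Steepest descent with a constant step size alpha.'''
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = gradient(x)
            if np.linalg.norm(g) < tol:  # stop when the gradient nearly vanishes
                break
            x = x - alpha * g            # move along the negative gradient
        return x

    def armijo_step(func, x, g, d, alpha=1.0, c=1e-4, rho=0.5):
        '''Backtrack until the Armijo sufficient-decrease condition holds.'''
        while func(x + alpha * d) > func(x) + c * alpha * np.dot(g, d):
            alpha *= rho
        return alpha

    def steepest_descent_armijo(func, gradient, x0, tol=1e-8, max_iter=10_000):
        '''Steepest descent where each step size satisfies the Armijo condition.'''
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = gradient(x)
            if np.linalg.norm(g) < tol:
                break
            d = -g                                   # steepest descent direction
            x = x + armijo_step(func, x, g, d) * d
        return x

    print(steepest_descent(df, np.zeros(2)))            # ≈ [4.5, 2.3]
    print(steepest_descent_armijo(f, df, np.zeros(2)))  # ≈ [4.5, 2.3]

Both variants recover the same minimizer as SciPy, x* = (4.5, 2.3); the backtracking version trades a few extra function evaluations per iteration for not having to hand-tune the step size.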

Nicolo Cosimo Albanese