Back-propagation Demystified [Part 1]


Photo by ThisisEngineering RAEng on Unsplash

If you have been reading about deep learning, you have almost certainly come across the term back-propagation at least once. In this article, I explain back-propagation and computational graphs.

I would recommend reading about gradient descent optimization here before proceeding with this article. Now, let's dive right into the explanation.
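(As a quick refresher: gradient descent iteratively updates the network parameters θ via the rule θ ← θ − η ∇_θ L(θ), where L is the loss function and η is the learning rate. Back-propagation is the algorithm that computes the gradient ∇_θ L(θ) efficiently.)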

Introduction

We will first look at the key steps involved in network training, along with terms like forward propagation.

Deep learning network training steps

The overall process of training a deep learning network can be summarized in the following steps; a short code sketch after the list ties them together.

  1. Data exploration and analysis [this is a huge step involving many sub-steps].
  2. Choice of an architecture based on the information obtained in step 1, and building it using a framework such as Chainer, PyTorch, or TensorFlow.
  3. Choice of an appropriate cost function, such as binary cross-entropy or mean squared error, depending on the task at hand.
  4. Choice of an optimizer, such as Stochastic Gradient Descent, Adam, or Adadelta.
  5. Training the model to minimize the selected loss function w.r.t. the network parameters using the training examples.
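
To make steps 2 through 5 concrete, here is a minimal sketch of the pipeline using PyTorch, one of the frameworks mentioned above. The toy architecture, dummy data, and hyperparameters are illustrative placeholders, not recommendations.

    import torch
    import torch.nn as nn

    # Step 2: choose and build an architecture (a toy regression network here).
    model = nn.Sequential(
        nn.Linear(10, 32),
        nn.ReLU(),
        nn.Linear(32, 1),
    )

    # Step 3: choose a cost function appropriate to the task
    # (mean squared error for this toy regression setup).
    loss_fn = nn.MSELoss()

    # Step 4: choose an optimizer.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # Dummy data standing in for the real dataset from step 1.
    x = torch.randn(64, 10)
    y = torch.randn(64, 1)

    # Step 5: train the model to minimize the loss w.r.t. the parameters.
    for epoch in range(100):
        optimizer.zero_grad()      # clear gradients from the previous iteration
        y_pred = model(x)          # forward propagation
        loss = loss_fn(y_pred, y)  # evaluate the cost function
        loss.backward()            # back-propagation: compute the gradients
        optimizer.step()           # gradient descent parameter update

The loss.backward() call is where back-propagation happens: PyTorch traverses the computational graph built during the forward pass and computes the gradient of the loss with respect to every parameter, which optimizer.step() then uses to update the weights.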
