PyTorch: Introduction to Neural Network — Feedforward / MLP

Andrea Eunbee Jang · Published in BiaslyAI · Feb 25, 2019


In the last tutorial, we saw a few examples of building simple regression models in PyTorch. In today’s tutorial, we will build our very first neural network model: the feedforward neural network.

Roadmap for the post

  • Introduction
  • Autograd
  • Single-layer Perceptron
  • So what is the activation function?
  • Feedforward Neural Network
  • Training Example

Introduction

The feedforward neural network is the simplest type of neural network. It extends the perceptron with additional hidden nodes between the input and output layers. In this network, data moves in one direction only, forward from input to output, without any cycles or loops.
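To make that concrete, here is a minimal sketch of such a network as a PyTorch `nn.Module`. The layer sizes and the ReLU activation are illustrative choices of mine, not the exact model built later in the tutorial:

```python
import torch
import torch.nn as nn

class Feedforward(nn.Module):
    """A minimal feedforward (MLP) sketch: input -> hidden -> output."""
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)   # input layer -> hidden layer
        self.relu = nn.ReLU()                           # nonlinear activation between layers
        self.fc2 = nn.Linear(hidden_size, output_size)  # hidden layer -> output layer

    def forward(self, x):
        # Data moves strictly forward through the layers: no cycles or loops.
        return self.fc2(self.relu(self.fc1(x)))

# Illustrative dimensions only.
model = Feedforward(input_size=10, hidden_size=32, output_size=1)
print(model(torch.randn(4, 10)).shape)  # torch.Size([4, 1])
```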

Autograd

Before jumping into building the model, I would like to introduce autograd, PyTorch’s automatic differentiation package. It is a must-have when performing gradient descent to optimize neural network models…
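As a quick illustration of what autograd does (the tensor and values below are my own toy example, not from the tutorial), setting `requires_grad=True` makes PyTorch track operations on a tensor so that gradients can be computed automatically with `backward()`:

```python
import torch

# requires_grad=True tells autograd to record operations on this tensor.
x = torch.tensor([2.0, 3.0], requires_grad=True)

# Build a small computation graph: y = sum(x^2 + 3x)
y = (x ** 2 + 3 * x).sum()

# backward() computes dy/dx for every tensor that requires gradients.
y.backward()

# Analytically, dy/dx = 2x + 3, so the gradient is [7., 9.]
print(x.grad)  # tensor([7., 9.])
```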
