Basic Neural Network using PyTorch

Learner1067 · Published in Analytics Vidhya · 4 min read · Jun 21, 2020


PyTorch as a framework has a lot to offer: with minimal lines of code you can achieve a great deal and build any neural network of your liking. If you know the theory behind forward propagation and back propagation, you will appreciate that writing them from scratch is quite an undertaking. Not so with PyTorch: a few lines of code and you are done.

For forward propagation [I will go into detail and explain]

For back propagation

Let's start with the "hello world" example of machine learning, the Iris dataset. The data can be downloaded from the link below.

As a first step, we will analyse the data graphically with the help of matplotlib. The code used to draw the graphs is included below.

Looking at the graph, it is clear that Setosa can easily be differentiated from the other two categories.
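A minimal sketch of the kind of scatter plot described above. The column names come from scikit-learn's bundled copy of the Iris dataset, which I use here to keep the sketch self-contained; the article's own code loads the data from a downloaded CSV instead.

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the script runs without a display
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

# Load Iris as a pandas DataFrame and attach readable species names
iris = load_iris(as_frame=True)
df = iris.frame
df['species'] = df['target'].map(dict(enumerate(iris.target_names)))

# One scatter series per species: Setosa separates cleanly from the other two
fig, ax = plt.subplots()
for species, group in df.groupby('species'):
    ax.scatter(group['sepal length (cm)'], group['sepal width (cm)'], label=species)
ax.set_xlabel('sepal length (cm)')
ax.set_ylabel('sepal width (cm)')
ax.legend()
fig.savefig('iris_scatter.png')
```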

The code to draw the graph can be downloaded from here.

Below is the code for the neural network. I will walk through it systematically and in detail.

The code can be checked at the link below.

First, define a class extending nn.Module. In the constructor I pass the number of input features, the number of possible outputs (three in the case of Iris), the number of neurons in hidden layer 1 and the number of neurons in hidden layer 2. You can choose a different number of neurons per hidden layer; generally these parameters are selected through hyperparameter optimization. I have chosen the Adam optimizer and cross-entropy as the loss function. Please check the code below; it is self-explanatory, and everything is provided by PyTorch.

In forward propagation, I simply use ReLU as the activation function and pass the output of one layer to the next until it reaches the output layer, which yields the predicted value. It is as simple as it looks, thanks to PyTorch.
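The two paragraphs above can be sketched as a single class. The class name, layer sizes and attribute names here are illustrative assumptions, not the article's exact code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IrisNet(nn.Module):  # hypothetical name; hidden sizes are illustrative
    def __init__(self, in_features=4, h1=8, h2=9, out_features=3):
        super().__init__()
        # Constructor takes input features, two hidden-layer sizes and outputs
        self.fc1 = nn.Linear(in_features, h1)
        self.fc2 = nn.Linear(h1, h2)
        self.out = nn.Linear(h2, out_features)

    def forward(self, x):
        # ReLU after each hidden layer; raw logits at the output layer,
        # since CrossEntropyLoss applies log-softmax internally
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.out(x)

model = IrisNet()
criterion = nn.CrossEntropyLoss()                           # loss function
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)   # Adam, lr 0.01
```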

In the train method I perform back propagation using PyTorch's built-in features. The number of epochs is 100 by default. This is where the magic happens: backward() computes the gradients using the chain rule of differential calculus. The step() method adjusts the weights of the neurons based on the learning rate; remember, a learning rate of 0.01 was used to initialize the optimizer in the constructor. zero_grad() is called in each iteration to stop the gradients from accumulating. The drawGD method simply plots loss against epoch, which is a great help if you want to see how your model's loss improves with each epoch.
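A sketch of what such a training loop looks like, assuming the model, criterion and optimizer from the previous step; the function signatures here are my own, not the article's exact code:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend
import matplotlib.pyplot as plt
import torch

def train(model, criterion, optimizer, X_train, y_train, epochs=100):
    losses = []
    for epoch in range(epochs):
        y_pred = model(X_train)             # forward pass
        loss = criterion(y_pred, y_train)   # compute the loss
        losses.append(loss.item())
        optimizer.zero_grad()   # clear old gradients so they don't accumulate
        loss.backward()         # chain rule: compute gradients for all weights
        optimizer.step()        # adjust the weights using the learning rate
    return losses

def drawGD(losses):
    # Plot loss against epoch to see how training is progressing
    plt.plot(range(len(losses)), losses)
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.savefig('loss_vs_epoch.png')
```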

To print the biases and weights of the model, use the method below.

When you call this method before training, it shows the randomly initialized default values, as shown below.
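One way to sketch such a helper with PyTorch's built-in `named_parameters()`; the function name is my own, and the single-layer stand-in model is only for demonstration:

```python
import torch.nn as nn

def print_parameters(model):
    # named_parameters() yields pairs like ('fc1.weight', tensor), ('fc1.bias', tensor)
    for name, param in model.named_parameters():
        print(name, param.data)

model = nn.Linear(4, 3)  # stand-in model; before training these are random values
print_parameters(model)
```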

Now comes the time to train the model on the data. As shown in the snippet below, I first load the data into a DataFrame using pandas, then perform data cleanup to separate inputs from outputs [please refer to the comments in the code]. I then split the data using scikit-learn's train_test_split method and supply it to the train method of our model. As we have already gone through the train method, it encapsulates back propagation and drawing the graph of loss vs epoch.
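The loading and splitting steps can be sketched as follows. The article loads a downloaded CSV; here scikit-learn's bundled copy of Iris keeps the sketch self-contained, and the split ratio is an assumption:

```python
import torch
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Load the data into a pandas DataFrame
iris = load_iris(as_frame=True)
df = iris.frame

# Data cleanup: separate input features from the integer class labels
X = df.drop(columns='target').values
y = df['target'].values

# Split into train and test sets (80/20 here is an illustrative choice)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Convert to tensors: float32 inputs, int64 labels for CrossEntropyLoss
X_train = torch.tensor(X_train, dtype=torch.float32)
y_train = torch.tensor(y_train, dtype=torch.long)
```

These tensors are then handed to the train method described above.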

The complete code is at the link below.
