PyTorch - A Framework for Deep Learning

Aishwarya Ramaswami
Published in Analytics Vidhya
Oct 8, 2020

Deep learning is a subset of machine learning in which artificial neural networks, algorithms inspired by the human brain, learn from large amounts of data. Every once in a while, a library or framework comes along that provides new insights into the field of deep learning and enables remarkable progress.

What is PyTorch?🤔

PyTorch is an AI framework developed by Facebook. It's a Python-based package that serves as a replacement for NumPy, makes use of the power of GPUs, and provides flexibility as a deep learning development platform.

It is surely a framework worth learning. Here I discuss some of its attributes to get started.

How to Install PyTorch?

First, install Python and the basic libraries needed to work with PyTorch.

Then go to https://pytorch.org/ to get the installation command for PyTorch.

Here, you select your preferred PyTorch build, operating system, package manager, language, and CUDA version. The site then gives you a command to install PyTorch on your machine; run that command in your command prompt or terminal.
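For example, at the time of writing, a typical CPU-only installation via pip looks like the following (the exact command depends on your selections on the site):

pip install torch torchvision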

Tensors

Tensors are multidimensional arrays. PyTorch tensors are similar to NumPy arrays, with the addition that tensors can also be used on a GPU to accelerate computing.

import torch

# initializing a tensor from data
x, y = 1.0, 2.0
a = torch.tensor([x, y])
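Since tensors can live on a GPU, here is a minimal sketch of moving one there (it falls back to the CPU when no CUDA-capable GPU is available):

# pick the GPU if one is available, otherwise the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
b = torch.ones(3, 3).to(device)   # moves the tensor to the selected device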

PyTorch supports a wide variety of tensor operations, much like NumPy.
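A few of these operations, as a quick illustration:

t = torch.ones(2, 2)
u = torch.rand(2, 2)      # uniform random values in [0, 1)
s = t + u                 # element-wise addition
p = t @ u                 # matrix multiplication
r = t.reshape(4)          # reshape to a 1-D tensor of 4 elements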

NumPy - Bridge for Arrays and Tensors

Converting a torch Tensor to a NumPy array and vice versa is a breeze.

Converting a NumPy array to a tensor:

import torch
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
print(b)

Tensor to NumPy:

a = torch.ones(5)
b = a.numpy()

It is as simple as that. Note that on the CPU, the tensor and the NumPy array share the same underlying memory, so modifying one also modifies the other.

PyTorch Modules

Autograd Module

PyTorch provides the autograd package, which offers automatic differentiation for all operations on tensors. It is a define-by-run framework, which means that your backprop is defined by how your code is run, and every single iteration can be different. It works by recording all the operations we perform and replaying them backward to compute gradients.

x = torch.tensor([5], dtype=torch.float32, requires_grad=True)
y = torch.tensor([6], dtype=torch.float32, requires_grad=True)
z = ((x**2)*y) + (x*y)

# using autograd to compute gradients
total = torch.sum(z)
total.backward()
print(x.grad, y.grad)   # dz/dx = 2xy + y = 66, dz/dy = x**2 + x = 30

Optim Module

Instead of manually updating the weights of the model as we have been doing, we use the optim package to define an Optimizer that will update the weights for us.

from torch import optim

# Adam optimizer
optimizer_adam = optim.Adam(model.parameters(), lr=learning_rate)

# SGD optimizer
optimizer_sgd = optim.SGD(model.parameters(), lr=learning_rate)

Above are examples of creating the Adam and SGD optimizers. PyTorch includes many commonly used optimizers, saving us the time of writing them from scratch; a typical update step is sketched after the list below.

Some of the optimizers are,

  • SGD
  • Adam
  • Adagrad
  • AdamW
  • Adamax
  • Adadelta
  • ASGD, etc.
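Whichever optimizer you pick, a single gradient-update step follows the same pattern. A minimal sketch, assuming a model, a loss function loss_fn, and a batch of inputs and targets are already defined:

optimizer = optim.SGD(model.parameters(), lr=0.01)

optimizer.zero_grad()                       # clear gradients from the previous step
loss = loss_fn(model(inputs), targets)      # forward pass and loss computation
loss.backward()                             # compute gradients via autograd
optimizer.step()                            # update the weights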

nn Module — to define a Network

The above modules help us define computational graphs as we build our model network. To develop a more complex neural network, the nn module is used.

PyTorch has a standard way for you to create your own models. The entire definition should live inside a class that inherits from nn.Module. Inside this class, only two methods must be implemented: __init__ and forward.

import torch.nn as nn

class nnet(nn.Module):
    def __init__(self):
        super(nnet, self).__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(10, 32),
            nn.ReLU(),
            nn.Linear(32, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 2),   # input size matches the 32 features from the previous layer
            nn.LogSoftmax(1)
        )

    def forward(self, X):
        outs = self.net(X)
        return outs
  • As in other Python classes, the __init__ method is used to define the class attributes and populate any values you want upon instantiation. In the PyTorch context, you should always call the super() method to initialize the parent class.
  • The forward function computes output tensors from input tensors. We can then use our new module by constructing an instance and calling it like a function, passing tensors containing input data, as shown below.
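Putting it together, a quick usage sketch with dummy data (the batch size of 4 is arbitrary):

model = nnet()
X = torch.randn(4, 10)   # batch of 4 samples with 10 features each
out = model(X)           # calls forward() under the hood
print(out.shape)         # torch.Size([4, 2])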

Also, there are pre-trained models available in the torchvision package that can simply be imported and used.
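For instance, loading a pre-trained ResNet-18 (the weights are downloaded on first use):

from torchvision import models

resnet = models.resnet18(pretrained=True)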

Loading Data in PyTorch

Dataset and DataLoader are the tools in PyTorch that define how to access your data.

from torch.utils.data import Dataset, DataLoader
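A minimal custom Dataset sketch, assuming your samples and labels are already held in two tensors (the names MyDataset, samples, and labels are illustrative):

class MyDataset(Dataset):
    def __init__(self, samples, labels):
        self.samples = samples
        self.labels = labels

    def __len__(self):
        return len(self.samples)          # number of samples in the dataset

    def __getitem__(self, idx):
        return self.samples[idx], self.labels[idx]

# wrap the dataset in a DataLoader to get shuffled mini-batches
loader = DataLoader(MyDataset(torch.randn(100, 10), torch.zeros(100)),
                    batch_size=16, shuffle=True)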

The torchvision package consists of popular datasets, model architectures, and common image transformations for computer vision. To install torchvision,

pip install torchvision
  • torchvision.datasets consists of many datasets such as MNIST, CIFAR10, etc.
  • torchvision.models consists of many pre-trained models that can be imported and used.
  • torchvision.transforms consists of many functions to build more complex transformation pipelines (see the example after this list).
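As an example of these pieces working together, here is a sketch of loading MNIST with a simple transform pipeline (the dataset is downloaded into ./data on first run):

from torchvision import datasets, transforms

transform = transforms.Compose([transforms.ToTensor()])
train_set = datasets.MNIST(root="./data", train=True, download=True,
                           transform=transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)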

In this article, we have seen the basics of PyTorch. I would recommend applying PyTorch to a dataset and practicing with it to understand its functionality better.
