Training a Deep Learning Model with GPU and PyTorch

Nandan Pandey
Published in Analytics Vidhya · Jun 23, 2020

Welcome! Hope you are doing well. Today we shall discuss GPUs and how to use one to train your deep learning model. We all know that training deep learning models on a CPU (Central Processing Unit) is very time consuming, and we face this situation very often when working in the deep learning field.

So, without wasting any time, let’s move on to our topic.

What is a GPU?

The CPU (central processing unit) has been called the brains of a PC; the GPU (graphics processing unit), its soul. The GPU relies on parallel computing, which is what makes it so powerful.

CPU vs GPU

I think you now have the basic idea of a GPU, so let’s move on to training models on one.

What you will learn

Here I will not explain how to pre-process data or how to train a deep learning model, but rather the important points about using a GPU with your data and model in PyTorch, a deep learning framework.

If you want to read the whole notebook, you can visit here.

Prerequisites

This tutorial assumes that you have trained at least one model with PyTorch.

If not, then visit here.

Approach

The following steps should be followed to use a GPU:

i) Check whether you have a GPU:
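The check itself is a one-liner. A minimal sketch (the original notebook code is not reproduced here):

```python
import torch

# True if a CUDA-capable GPU (and a matching driver) is visible to PyTorch
print(torch.cuda.is_available())
```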

It returned True, which means a GPU is available.

Here is another blog that elaborates on the basic functions related to CUDA, i.e. the GPU.

ii) The next step is to create a utility function that picks the GPU as the working device if it is available, and falls back to the CPU otherwise, so that anyone can run this program.
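A minimal sketch of such a utility, assuming the name get_default_device (the actual function in the notebook may differ):

```python
import torch

def get_default_device():
    """Pick the GPU if one is available, otherwise fall back to the CPU."""
    if torch.cuda.is_available():
        return torch.device('cuda')
    return torch.device('cpu')

device = get_default_device()
```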

iii) Now create a function that will move data or a model to the GPU.
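Something along these lines works, since both tensors and nn.Module models expose a .to() method (the helper name to_device is an assumption, not taken verbatim from the notebook):

```python
def to_device(data, device):
    """Move a tensor, a model, or a list/tuple of tensors to the chosen device."""
    if isinstance(data, (list, tuple)):
        return [to_device(x, device) for x in data]
    return data.to(device, non_blocking=True)
```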

iv) Now let’s move the DataLoaders to the GPU.
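A common pattern, sketched below under the assumed class name DeviceDataLoader, is a thin wrapper that moves each batch to the device as it is yielded:

```python
class DeviceDataLoader:
    """Wrap a DataLoader so that every batch is moved to the device on the fly."""
    def __init__(self, dl, device):
        self.dl = dl
        self.device = device

    def __iter__(self):
        for batch in self.dl:
            yield to_device(batch, self.device)

    def __len__(self):
        return len(self.dl)

# Wrap the existing loaders so batches land on the GPU automatically
train_loader = DeviceDataLoader(train_loader, device)
val_loader = DeviceDataLoader(val_loader, device)
test_loader = DeviceDataLoader(test_loader, device)
```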

where train_loader, val_loader and test_loader are defined as follows:
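The exact datasets and batch size live in the notebook; the construction looks roughly like this (train_ds, val_ds, test_ds and the batch size are illustrative assumptions):

```python
from torch.utils.data import DataLoader

batch_size = 128  # illustrative value

train_loader = DataLoader(train_ds, batch_size, shuffle=True, num_workers=4, pin_memory=True)
val_loader = DataLoader(val_ds, batch_size * 2, num_workers=4, pin_memory=True)
test_loader = DataLoader(test_ds, batch_size * 2, num_workers=4, pin_memory=True)
```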

Here, DataLoader is PyTorch’s built-in class used to load data in batches.

v) Create the model

I have inherited from the ImageClassificationBase class, as it contains some functions that will be used via this model’s object, and then created a custom model. This model is a multi-layer logistic regression classifier for the CIFAR10 dataset.
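A sketch of what such a model can look like (the layer sizes here are illustrative, not the notebook’s exact ones; CIFAR10 images are 3×32×32 with 10 classes):

```python
import torch.nn as nn

class CIFAR10Model(ImageClassificationBase):
    """Multi-layer, fully connected classifier for CIFAR10."""
    def __init__(self):
        super().__init__()
        self.network = nn.Sequential(
            nn.Flatten(),                  # 3 x 32 x 32 -> 3072
            nn.Linear(3 * 32 * 32, 256),
            nn.ReLU(),
            nn.Linear(256, 128),
            nn.ReLU(),
            nn.Linear(128, 10),            # 10 CIFAR10 classes
        )

    def forward(self, xb):
        return self.network(xb)
```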

The ImageClassificationBase class is as below:
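The original class is in the notebook; a representative sketch of what it typically contains (the accuracy helper is an assumption):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def accuracy(outputs, labels):
    _, preds = torch.max(outputs, dim=1)
    return torch.tensor(torch.sum(preds == labels).item() / len(preds))

class ImageClassificationBase(nn.Module):
    def training_step(self, batch):
        images, labels = batch
        out = self(images)                    # forward pass
        return F.cross_entropy(out, labels)   # training loss

    def validation_step(self, batch):
        images, labels = batch
        out = self(images)
        loss = F.cross_entropy(out, labels)
        return {'val_loss': loss.detach(), 'val_acc': accuracy(out, labels)}

    def validation_epoch_end(self, outputs):
        losses = torch.stack([x['val_loss'] for x in outputs]).mean()
        accs = torch.stack([x['val_acc'] for x in outputs]).mean()
        return {'val_loss': losses.item(), 'val_acc': accs.item()}
```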

vi) Move model to GPU

CIFAR10Model is defined as a class. Here I instantiated that class, which is our custom model, and moved it to the GPU.
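In code this is just instantiation plus the to_device helper from step iii (a sketch):

```python
model = CIFAR10Model()
to_device(model, device)   # moves all of the model's parameters to the GPU
```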

vii) Prediction: Since this model has been trained on the GPU, and all of the data is also on the GPU (whether it is training/validation data or the test data we want to predict on), the predictions will be on the GPU as well. To move a prediction to the CPU, call the cpu() method on it if needed.
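A minimal sketch of what that looks like:

```python
# Batches from the wrapped test_loader are already on the GPU
images, labels = next(iter(test_loader))

with torch.no_grad():
    outputs = model(images)
    preds = torch.argmax(outputs, dim=1)   # predicted class per image, on the GPU

preds_cpu = preds.cpu()                    # move predictions back to the CPU
print(preds_cpu.numpy()[:10])
```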

That’s all.

Points to take away

i) Check whether you have a GPU; if there is one, pick it, otherwise pick the CPU. Follow the remaining steps if a GPU is available.

ii) Wrap the DataLoaders so that all of the data is moved to the GPU batch by batch.

iii) Move the model to the GPU and train it.

iv) Now make predictions and move those predictions to CPU if required.

I hope it is now clear how to use a GPU for model training with PyTorch.

Get Notebook

If you want to get the notebook, you can visit here; the link was also given above during the tutorial.

Connect with me

If you want to ping me, you can do so on LinkedIn.
