Some Basic torch functions that are used frequently in Notebooks

Nandan Pandey
4 min readMay 25, 2020

This post is part of an optional assignment from freeCodeCamp and jovian.ml. Thanks to Aakash N S for advising us to write a blog post as an assignment; although it was optional, his encouragement gave me the interest and motivation to start writing.

Now, on to the topic:

I was working in a Kaggle kernel with one GPU and multiple CPUs. Keep this in mind as you read, because some of the functions covered here relate to that setup.

1. Find the number of GPUs available

import torch

torch.cuda.device_count()

(On my single-GPU Kaggle kernel, this returns 1.)

2. Find the name of the GPU being used


Here I pass 0 as the device index to torch.cuda.get_device_name(), because I have only one GPU and the count starts from zero. The call returns the name of the CUDA GPU being used.
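A minimal sketch of the lookup, guarded with torch.cuda.is_available() so it also runs on CPU-only machines:

```python
import torch

# get_device_name(0) returns the model name of the GPU at index 0.
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
else:
    print("No CUDA GPU available")
```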

3. Moving tensors from CPU to GPU

When we allocate a tensor, it is placed on the CPU by default; you can confirm this by checking its device attribute, which reports cpu.

To move a tensor from the CPU to the GPU, use the Tensor.to() method. The same method can also convert a tensor's data type.

Both operations — converting the data type and moving the tensor from CPU to GPU — can be done in a single line of code.


For example, a tensor whose data type was float32 can be converted to float64 and, in the same call, moved to cuda:0, i.e. the GPU whose index is zero.

If you have multiple GPUs, you can use cuda:1 (the second GPU), and so on.
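A short sketch of both steps, with the device chosen via a fallback so the code also runs without a GPU:

```python
import torch

t = torch.tensor([1.0, 2.0, 3.0])
print(t.dtype, t.device)   # torch.float32 cpu -- the defaults

# Fall back to the CPU when no GPU is available.
device = "cuda:0" if torch.cuda.is_available() else "cpu"

# .to() can change the device and the dtype in a single call.
t2 = t.to(device, torch.float64)
print(t2.dtype, t2.device)
```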

4. torch.randn()

This method generates random numbers from the standard normal distribution, which has mean zero and variance one.
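A quick sketch that checks this empirically — for a large sample, the sample mean and standard deviation should be close to 0 and 1:

```python
import torch

torch.manual_seed(0)            # fixed seed for reproducibility
x = torch.randn(10000)          # 10,000 samples from N(0, 1)
print(x.mean().item())          # close to 0
print(x.std().item())           # close to 1
print(torch.randn(2, 3).shape)  # randn also takes a shape: torch.Size([2, 3])
```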

5. torch.argmax()


argmax() returns the index of the maximum element of the tensor. When no dim argument is given, the index refers to the flattened tensor.

It also accepts a dim argument.

When we pass dim=1, it works row-wise, returning the index of the maximum within each row.

When we pass dim=0, it works column-wise, returning the index of the maximum within each column.

argmin() works the same way, returning the index of the minimum element.
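A small sketch of all three cases on a 2×3 tensor:

```python
import torch

t = torch.tensor([[1., 9., 3.],
                  [7., 2., 8.]])

print(torch.argmax(t))         # tensor(1): index into the flattened tensor
print(torch.argmax(t, dim=1))  # tensor([1, 2]): max index within each row
print(torch.argmax(t, dim=0))  # tensor([1, 0, 1]): max index within each column
print(torch.argmin(t))         # tensor(0): flattened index of the minimum
```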

6. torch.argsort()

It returns the indices that would sort the tensor's elements: indexing the tensor with this result yields the elements in ascending order, which is the default.


To get them in descending order, pass descending=True as an argument.

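A minimal sketch of both orders, including using the returned indices to sort the tensor:

```python
import torch

t = torch.tensor([3., 1., 2.])

idx = torch.argsort(t)                    # ascending by default
print(idx)       # tensor([1, 2, 0])
print(t[idx])    # tensor([1., 2., 3.]) -- gathering with idx sorts t

print(torch.argsort(t, descending=True))  # tensor([0, 2, 1])
```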

7. tensor.eq()

It is used to check element-wise equality of tensors.


The following comparison functions work the same way:

ge() means greater than or equal to
gt() means greater than
le() means less than or equal to
lt() means less than
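A short sketch of a few of these element-wise comparisons; each returns a boolean tensor of the same shape:

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([3, 2, 1])

print(a.eq(b))  # tensor([False,  True, False])
print(a.ge(b))  # tensor([False,  True,  True])
print(a.lt(b))  # tensor([ True, False, False])
```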

8. tensor.equal()

It checks whether two tensors have the same size and the same elements, returning True only when both conditions hold.

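A minimal sketch — note that equal elements are not enough if the shapes differ:

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([1, 2, 3])
c = torch.tensor([[1, 2, 3]])  # same elements, different shape

print(a.equal(b))  # True
print(a.equal(c))  # False -- shapes differ
```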

9. torch.clamp()

This method clips tensor values to a given range: values below the minimum are set to the minimum, and values above the maximum are set to the maximum.

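A small sketch of clipping to the range [-1, 1]:

```python
import torch

t = torch.tensor([-2.0, -0.5, 0.5, 2.0])

# Values below min become min; values above max become max.
print(torch.clamp(t, min=-1.0, max=1.0))  # tensor([-1.0000, -0.5000,  0.5000,  1.0000])
```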

Of course, there are many more functions beyond these, and we shall discuss most of them in upcoming posts. By the way, this is my first post, and from today there will be regular posts. Next we will discuss serialization, parallelization, and locally disabling gradient computation in PyTorch.

Resources:

Kaggle Notebook : https://www.kaggle.com/awadhi123/pytorch-basics

Connect with me:

Linkedin : https://www.linkedin.com/in/nandan-pandey/

Thanks!
