PyTorch🔦101, Part-1: Torch Tensors

somuSan · Published in Analytics Vidhya · May 27, 2020 · 4 min read

Cover photo from Unsplash by Kevin Ku

PyTorch is an amazing Python library that plays an important role in building Deep Learning models. Like TensorFlow, it enjoys strong community support. There are several strengths of this Facebook AI product that I would like to share:

  1. This library is syntactically very close to Python, which it treats as its primary programming language. You will find many syntactic similarities with general-purpose libraries like NumPy. This brings us to the next point: it is easy to learn.
  2. Easy to learn. Since the syntax is very similar to Python, it is easier for a developer to pick up. The documentation is well organized, so it is very easy to find what you are looking for, and PyTorch has a vast community.
  3. PyTorch has a very useful feature known as data parallelism. Using this feature, PyTorch can distribute computational work among multiple CPU or GPU cores. It allows us to wrap any module in torch.nn.DataParallel, which handles the parallel processing over the batch dimension (see the sketch after this list).

And the list goes on.
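For illustration, here is a minimal sketch of wrapping a module in torch.nn.DataParallel. The model and batch shapes are arbitrary placeholders, not taken from the original notebook:

```python
import torch
import torch.nn as nn

# A tiny placeholder model; any nn.Module works the same way
model = nn.Linear(10, 2)

# DataParallel scatters each input batch across the available GPUs,
# runs the module on each chunk, and gathers the outputs; with a single
# device it simply falls back to running the module as-is.
model = nn.DataParallel(model)

x = torch.rand(32, 10)   # a batch of 32 samples with 10 features each
out = model(x)           # processed in parallel over the batch dimension
print(out.shape)         # torch.Size([32, 2])
```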

This tutorial covers different PyTorch tensor functions that are essential when you are using PyTorch to perform Deep Learning tasks. A PyTorch model (like an ANN or a CNN) or operation cannot process an array that is not a torch tensor. So we need a few functions to convert such arrays (here I am mainly considering NumPy arrays), and there are also some fantastically simple functions for mathematical operations and other deep learning utilities. Enough talking, let's get the ball rolling with the table of contents.

Table of Contents:

  1. torch.Tensor()
  2. torch.ones()
  3. torch.zeros()
  4. torch.rand()
  5. torch.manual_seed()
  6. Move torch tensors from CPU to GPU and vice versa
  7. Size conversion of torch tensors
  8. Mathematical implementations

Importing the required libraries:
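The import cell from the notebook is not embedded here; a minimal version of the imports used throughout this post (and assumed by the snippets below) would be:

```python
import numpy as np
import torch
```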

torch.Tensor()

This is a PyTorch function that helps us create PyTorch tensors (arrays). We just need to pass a NumPy array or a list to torch.Tensor() and, boom, your PyTorch tensor is ready. Here I first created a list (cell 6) and then a NumPy array (cell 7); after that, I converted both the list and the NumPy array into torch tensors.
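The notebook cells themselves are not shown here, but a minimal sketch of what they do (with placeholder values) looks like this:

```python
# Roughly cell 6: a plain Python list
my_list = [1, 2, 3, 4]

# Roughly cell 7: a NumPy array
my_array = np.array([1, 2, 3, 4])

# Convert both into torch tensors (note: torch.Tensor() returns float tensors)
tensor_from_list = torch.Tensor(my_list)
tensor_from_array = torch.Tensor(my_array)

print(tensor_from_list)    # tensor([1., 2., 3., 4.])
print(tensor_from_array)   # tensor([1., 2., 3., 4.])
```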

torch.from_numpy() is another way to convert a NumPy array to a torch tensor; we just need to pass the NumPy array to that function to make it work.
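A quick sketch of the same idea:

```python
np_array = np.array([5, 6, 7])

# from_numpy() keeps the original dtype and shares memory with the NumPy array
tensor_from_np = torch.from_numpy(np_array)
print(tensor_from_np)   # tensor([5, 6, 7])
```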

We have covered almost all aspects of creating torch tensors from NumPy arrays and lists. What if we need to create a NumPy array from a torch tensor? It is very simple: we just call .numpy() on the torch tensor.
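For example:

```python
t = torch.ones(3)

# .numpy() gives back a NumPy view of the tensor's data
back_to_numpy = t.numpy()
print(back_to_numpy)         # [1. 1. 1.]
print(type(back_to_numpy))   # <class 'numpy.ndarray'>
```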

torch.ones()

This function creates an array of ones; we only need to provide the array size (like 3x3 or 2x2) to create one. It is similar to the NumPy function np.ones(), and as you can see below, the syntax of torch.ones() is also similar.
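A minimal side-by-side sketch (sizes chosen arbitrarily):

```python
a = torch.ones(3, 3)    # 3x3 torch tensor filled with ones
b = np.ones((3, 3))     # the NumPy counterpart

print(a)
print(b)
```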

So, in PyTorch we have a function that is similar not only in functionality but also in syntax, and it is not the only PyTorch function that resembles a NumPy one; there are others too.

torch.zeros():

This function creates an array made of zeros, and we just need to pass the size of the array. It is also similar to the NumPy function np.zeros().
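For example (size chosen arbitrarily):

```python
z = torch.zeros(2, 2)   # 2x2 torch tensor filled with zeros
print(z)                # tensor([[0., 0.],
                        #         [0., 0.]])
```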

torch.rand():

This function creates an array filled with random numbers, and we just need to pass the size of the array. The NumPy function np.random.rand() does a similar job.
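A minimal sketch:

```python
r = torch.rand(2, 3)            # 2x3 tensor of uniform random numbers in [0, 1)
r_np = np.random.rand(2, 3)     # the NumPy counterpart

print(r)
print(r_np)
```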

torch.manual_seed():

Seeding is a very important element while coding deep learning architectures; changing the SEED can change the results or the output of a deep neural network. In NumPy we use numpy.random.seed() for seeding, but in PyTorch we have torch.manual_seed(), which applies the SEED. Here we just need to pass a number to torch.manual_seed(), and this will prevent the output from changing again and again.

Observe that when we pass the same number for seeding and create the same array each time, we get the same output every time (look at cells 17 and 18). But when we change the seed number, the output changes (cell 19).
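The exact seed values in the notebook may differ; here is a minimal sketch of the behaviour (42 and 7 are arbitrary choices):

```python
# Same seed + same call -> identical output on every run
torch.manual_seed(42)
print(torch.rand(2, 2))

torch.manual_seed(42)
print(torch.rand(2, 2))   # same tensor as above

# A different seed -> different output
torch.manual_seed(7)
print(torch.rand(2, 2))
```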

Reference:

  1. https://pytorch.org/docs/stable/tensors.html
  2. https://heartbeat.fritz.ai/10-reasons-why-pytorch-is-the-deep-learning-framework-of-future-6788bd6b5cc2
  3. Notebook Link — https://jovian.ml/soumya997-sarkar/torch-basics

Thank you for reading 👩‍💻👨‍💻
