GETTING STARTED WITH TENSORS USING PYTORCH

A beginner’s guide to deep learning.

Aditya Chakraborty
The Startup
5 min read · May 28, 2020


What is PyTorch?

PyTorch is an open source machine learning library based on the Torch library, used for applications such as computer vision and natural language processing, primarily developed by Facebook’s AI Research lab.

It is both a deep learning framework and a scientific computing package. The scientific computing aspect of PyTorch is primarily a result of PyTorch’s tensor library and its associated tensor operations.

Source: Wikipedia and Deeplizard

What are tensors?

Tensors are essentially n-dimensional arrays. They are fundamental to deep learning because neural networks are built on them: they are the basic data structure used in all kinds of neural networks. Every phase of a neural network is represented via tensors, from the inputs, through their transformations, to the final output. Other data structures like numbers, arrays, vectors, and matrices can all be represented as tensors.


Below are five functions to illustrate some basic creation and usage of tensors using the PyTorch package.

1. torch.rand()

2. torch.linspace()

3. torch.reshape()

4. torch.take()

5. torch.cat()

Importing torch first is important, as all of the methods we are going to use are provided by the torch module.
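Assuming a standard PyTorch installation, the setup is just the import:

```python
# Every method used in this article lives under the torch namespace.
import torch

print(torch.__version__)  # confirm the installation works
```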

This generates a rank-1 tensor with size 4. All of the values in the tensor are random values drawn from the interval [0, 1). (“Rank” denotes the number of dimensions, or axes, of a tensor, while “size” or “shape” gives the number of elements along each of those dimensions. In this example there is just one axis, so the rank is 1; that axis holds 4 elements, giving size 4.)
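The original code cell is not reproduced here; a minimal sketch of this example:

```python
import torch

# A rank-1 tensor of 4 random values drawn uniformly from [0, 1).
t = torch.rand(4)
print(t)        # values vary from run to run
print(t.shape)  # torch.Size([4])
```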

This generates a rank 2 tensor of shape 3x4.
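A sketch of the rank-2 case:

```python
import torch

# A rank-2 tensor: 3 rows and 4 columns of random values in [0, 1).
t = torch.rand(3, 4)
print(t.shape)  # torch.Size([3, 4])
```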

Such an operation is not possible because a tensor cannot be created with a negative dimension. Running this code would throw a RuntimeError.
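The failing call is not shown in the original; one hypothetical way to reproduce the error is passing a negative dimension:

```python
import torch

# A tensor cannot have a negative dimension, so this raises a RuntimeError.
try:
    torch.rand(3, -4)
except RuntimeError as e:
    print("RuntimeError:", e)
```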

So torch.rand() is a very handy function that can be used to quickly generate tensors of any shape, especially when we don’t want to deal with large numbers as tensor elements.

This creates a tensor of 10 evenly spaced points over the interval [1, 10], with both endpoints included.
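A sketch of such a call:

```python
import torch

# 10 evenly spaced points from 1 to 10, endpoints included.
t = torch.linspace(1, 10, steps=10)
print(t)  # tensor([ 1.,  2.,  3., ...,  9., 10.])
```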

This creates 10 evenly spaced points between 6 and 8, inclusive.
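Sketched the same way:

```python
import torch

# 10 evenly spaced points between 6 and 8, inclusive.
t = torch.linspace(6, 8, steps=10)
print(t[0].item(), t[-1].item())  # 6.0 8.0
```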

This example is the same as example 2 but with a negative step count, which is not allowed. That is why it throws a RuntimeError.
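One hypothetical way to trigger that error:

```python
import torch

# The number of steps must be non-negative; -10 raises a RuntimeError.
try:
    torch.linspace(6, 8, steps=-10)
except RuntimeError as e:
    print("RuntimeError:", e)
```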

The linspace() method can thus be used to create one-dimensional tensors of any size within a given interval.

Using the torch.rand() method illustrated earlier, we create a rank-2 tensor of size 4x6, named 'x'. This tensor 'x' is then reshaped into different dimensions, i.e., 3x8. Note that the elements are not changed in the process; it is just the shape (the dimensions, or axes) that changes, hence the method name 'reshape'. To keep the number of elements the same, the dimensions passed in the shape parameter must multiply to the total number of elements (24 in this case).
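A sketch of the reshape described above:

```python
import torch

x = torch.rand(4, 6)          # rank-2 tensor, 4 * 6 = 24 elements
y = torch.reshape(x, (3, 8))  # same 24 elements, new shape
print(y.shape)  # torch.Size([3, 8])
```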

torch.reshape() makes our lives easier by accepting a single dimension of -1, in which case that dimension is inferred from the remaining dimensions and the number of elements in the input. This is one of the most useful tricks and saves a lot of time when using this function.
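For example:

```python
import torch

x = torch.rand(4, 6)
# -1 tells reshape to infer that dimension: 24 elements / 2 rows = 12 columns.
y = torch.reshape(x, (2, -1))
print(y.shape)  # torch.Size([2, 12])
```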

This throws a TypeError because the shape must be passed as a single sequence of ints, such as a tuple or a list, rather than in any other form. It would have worked as something like torch.reshape(x, (12, 2)).
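The failing cell is not reproduced in the original; one hypothetical call that produces a TypeError is passing the dimensions as separate arguments instead of one sequence:

```python
import torch

x = torch.rand(4, 6)
try:
    torch.reshape(x, 12, 2)   # shape must be one tuple/list, not separate ints
except TypeError as e:
    print("TypeError:", e)

print(torch.reshape(x, (12, 2)).shape)  # torch.Size([12, 2]) -- the working form
```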

torch.reshape() is one of the most important methods while dealing with Convolutional Neural Networks (CNNs).

Note: The flatten() method can be considered a special case of the reshape() method.
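A quick check of that equivalence:

```python
import torch

x = torch.rand(4, 6)
# flatten() collapses all dimensions into one, just like reshape(-1).
print(torch.equal(x.flatten(), x.reshape(-1)))  # True
```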

This returns a new tensor with the elements of the input at indices 0, 2 and 5, which are 5, 7 and 10 respectively. The important thing to note is that the input tensor is treated as if it were viewed as a 1-D tensor, and the result takes the same shape as the indices. Also, the index argument is itself created with another tensor-creation method, torch.tensor(), which builds a multi-dimensional matrix containing elements of a single data type.

This simply returns the second-last element of the input tensor, 9, in the form of a new output tensor.
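A sketch reproducing both take() examples above (the input tensor is reconstructed from the values quoted in the text):

```python
import torch

x = torch.tensor([[5, 6, 7],
                  [8, 9, 10]])

# Indices address the tensor as if it were flattened to 1-D.
print(torch.take(x, torch.tensor([0, 2, 5])))  # tensor([ 5,  7, 10])

# Index 4 is the second-last of the 6 flattened elements.
print(torch.take(x, torch.tensor([4])))        # tensor([9])
```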

The above code throws a TypeError because the index argument of the method must itself be a tensor of integer indices (a LongTensor); providing the indices in any other form, such as a plain Python list, is rejected.
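The original failing cell is not shown; one hypothetical call that produces a TypeError is passing the indices as a plain list:

```python
import torch

x = torch.tensor([[5, 6, 7],
                  [8, 9, 10]])
try:
    torch.take(x, [0, 2, 5])   # index must be a tensor, not a Python list
except TypeError as e:
    print("TypeError:", e)
```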

In conclusion, torch.take() is very useful for extracting elements from a tensor without worrying about specifying the exact multi-dimensional positions of those elements. The fact that this method treats the input tensor as if it were viewed as a 1-D tensor, and that the result takes the same shape as the indices, makes our work much easier.

Here we are concatenating the same input tensor ‘x’ three times over dimension 0, i.e., the first dimension.
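A sketch (the shape of ‘x’ is an assumption, since the original cell is not shown):

```python
import torch

x = torch.rand(2, 3)
z = torch.cat((x, x, x), dim=0)  # stacks the rows of x three times
print(z.shape)  # torch.Size([6, 3])
```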

Here we created another tensor ‘y’ with the same dimensions as tensor ‘x’, and then concatenated x and y. The resulting tensor is the concatenated version of the x and y tensors over dimension 1, i.e., the second dimension.
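Sketched the same way:

```python
import torch

x = torch.rand(2, 3)
y = torch.rand(2, 3)          # same dimensions as x
z = torch.cat((x, y), dim=1)  # joined side by side along the second dimension
print(z.shape)  # torch.Size([2, 6])
```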

On executing this, an error shows up stating that concatenation can only be done with tensors whose shapes match in every dimension except the one being concatenated over.
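One hypothetical way to reproduce that error:

```python
import torch

try:
    # Shapes differ in a non-concatenated dimension (3 vs 4 columns).
    torch.cat((torch.rand(2, 3), torch.rand(2, 4)), dim=0)
except RuntimeError as e:
    print("RuntimeError:", e)
```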

Note: torch.stack() is closely related to torch.cat(), but instead of joining tensors along an existing dimension it joins them along a brand-new dimension (the first, by default), so all the input tensors must have the same shape. That is why it is called stacking of tensors.
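A small sketch contrasting the two:

```python
import torch

x = torch.rand(2, 3)
y = torch.rand(2, 3)
z = torch.stack((x, y))  # joins along a new first dimension
print(z.shape)  # torch.Size([2, 2, 3]), vs. cat's torch.Size([4, 3]) on dim 0
```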

It can thus be inferred that torch.cat() is extremely important when dealing with CNNs, because CNN architectures often concatenate many tensors together.

Conclusion

So we have covered some pretty good ground for getting started with PyTorch and tensors. We’ve seen:

  1. What is PyTorch?
  2. What are tensors?
  3. Five basic functions to create and use tensors with the PyTorch package.

Tensors are at the heart of programming neural networks and of studying deep learning as a whole. That is why this guide is a good starting point for beginners about to jump into the world of neural network programming.

Reference Links

Below are the references for this article:

One amazing resource for people who want to gain more insight into the world of deep learning and neural networks is this book by Hyatt Saleh: https://www.amazon.com/Applied-Deep-Learning-PyTorch-Demystify-ebook/dp/B07MMLWN3Q
