Tensor Operations using PyTorch — 1

shaistha fathima
6 min read · Aug 24, 2019


Summary: Playing around with the code is the fastest way of learning. In this blog post we will learn to perform different tensor operations using PyTorch like reshaping, squeezing and unsqueezing, concatenating and flattening with code examples. For more posts like these on Machine Learning Basics follow me here or on Twitter — Shaistha Fathima.

Hey guys! This is my third post in the Introduction to “Tensors” series. If you have not seen my previous posts, please do have a look at them before you begin reading this one!

In the previous posts, we learnt what tensors are and saw practical examples, and we looked into the ways of creating tensors with different PyTorch functions and when to use each of them. In this post, we will dive deeper into the different types of operations that can be performed on tensors in PyTorch.

Let’s begin…

The most awaited operation: “reshaping” of tensors! I bet you were all curious about this one, since I mentioned it in the first post. (Ah… you do remember me mentioning something like that, right!)

1. Reshaping of tensors

As you may remember… if YOU remember! By reshaping we mean rearranging the rows and columns. There are two ways of doing it: (1) reshaping without changing the rank of the tensor, and (2) reshaping with a change of tensor rank.

Quick recap: the “rank” of a tensor is the number of dimensions (indexes) of the tensor (matrix).

Before we start, let’s revise and find a few things about the tensor in use:

(a) Size or shape of the tensor

(b) Rank of the tensor

Use the len() function on the shape or size to get it.

(c) Finding the total number of elements within the tensor

  • One way is to take the product of the rows and columns, i.e., the product of t.shape .
  • The other method is to use the numel() function to find it directly.
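The points above can be sketched as follows. This is a minimal sketch assuming a small, hypothetical 3 × 4 tensor t; the post’s own tensor also has 12 elements:

```python
import torch

# A hypothetical 3 x 4 tensor (any 2-D tensor works the same way)
t = torch.tensor([[1., 1., 1., 1.],
                  [2., 2., 2., 2.],
                  [3., 3., 3., 3.]])

# (a) Size or shape of the tensor
print(t.size())      # torch.Size([3, 4])
print(t.shape)       # torch.Size([3, 4])

# (b) Rank = number of dimensions, via len() on the shape
print(len(t.shape))  # 2

# (c) Total number of elements: product of the shape, or numel() directly
print(torch.tensor(t.shape).prod())  # tensor(12)
print(t.numel())                     # 12
```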

(a) Reshaping without changing the rank of the tensor

On reshaping, the number of elements always remains the SAME! Only the shape changes, i.e., the number of rows and columns might change.

Notice the number of elements always remains the same, i.e., 12.
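A sketch of rank-preserving reshapes, again assuming a hypothetical 3 × 4 tensor with 12 elements:

```python
import torch

t = torch.tensor([[1., 1., 1., 1.],
                  [2., 2., 2., 2.],
                  [3., 3., 3., 3.]])  # shape [3, 4], 12 elements

# All of these keep rank 2 -- only the rows/columns change
print(t.reshape(1, 12))
print(t.reshape(2, 6))
print(t.reshape(4, 3))
print(t.reshape(6, 2))
print(t.reshape(12, 1))

# The number of elements never changes
print(t.reshape(2, 6).numel())  # 12
```

Any pair of factors of 12 works; asking for a shape whose product is not 12 raises an error.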

(b) Reshaping with the change of tensor rank

Note: these operations do not change the tensor “t” itself, i.e., the original values of tensor “t” remain unchanged.

So, what is the use of the above? You may store the result in another variable and use it from there.

The rank has changed from 2 to 3.

t.reshape(2, 2, 3) means splitting the tensor into two new matrices of size 2 × 3 each!
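A sketch of a rank-changing reshape, using the same hypothetical 3 × 4 tensor:

```python
import torch

t = torch.tensor([[1., 1., 1., 1.],
                  [2., 2., 2., 2.],
                  [3., 3., 3., 3.]])  # rank 2

t3 = t.reshape(2, 2, 3)  # rank 3: two matrices of size 2 x 3 each
print(t3.shape)          # torch.Size([2, 2, 3])
print(len(t3.shape))     # rank is now 3

print(t.shape)           # still torch.Size([3, 4]) -- t itself is unchanged
```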

That’s it guys! Now you know how to reshape a tensor. Let’s move on to…

2. Squeezing and Un-squeezing of tensors

Squeezing of tensors:

When we say squeezing of tensors, we mean removing all the dimensions of size 1 from the tensor: if the shape of the tensor is [1, 12], then after squeezing it the shape changes to [12]. Here is an example:

Here is another example,

See how the 1’s have been removed from the shape!
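Both examples can be sketched like this, assuming a hypothetical 12-element tensor:

```python
import torch

t = torch.ones(3, 4)

# Reshape to [1, 12], then squeeze out the dimension of size 1
r = t.reshape(1, 12)
print(r.shape)            # torch.Size([1, 12])
print(r.squeeze().shape)  # torch.Size([12])

# Another example: squeeze() removes ALL size-1 dimensions at once
s = t.reshape(1, 1, 12)
print(s.squeeze().shape)  # torch.Size([12])
```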

Unsqueezing of tensors:

Un-squeezing does the opposite: it adds a dimension of size 1 at a given position, which can put the tensor back to its original condition with 1’s in the shape.

Now for the other example,

As you have noticed, unsqueeze() works on one dimension (dim) at a time!
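A sketch of unsqueeze(), which inserts one size-1 dimension at the position you give it:

```python
import torch

t = torch.ones(12)  # shape [12]

# unsqueeze adds a size-1 dimension at the given position, one dim at a time
print(t.unsqueeze(dim=0).shape)  # torch.Size([1, 12])
print(t.unsqueeze(dim=1).shape)  # torch.Size([12, 1])
```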

3. Concatenating a tensor

Concatenation is a fancy word for “combining”, meaning combining two tensors into one!

There are two ways of doing it:

1. Row-wise concatenation

2. Column-wise concatenation

I don’t think any explanation of the data is needed, it’s pretty simple!
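Both ways can be sketched with torch.cat() and two small, hypothetical 2 × 2 tensors:

```python
import torch

t1 = torch.tensor([[1, 2],
                   [3, 4]])
t2 = torch.tensor([[5, 6],
                   [7, 8]])

# 1. Row-wise concatenation (along dim 0): stack the rows
print(torch.cat((t1, t2), dim=0))
# tensor([[1, 2],
#         [3, 4],
#         [5, 6],
#         [7, 8]])

# 2. Column-wise concatenation (along dim 1): extend the rows
print(torch.cat((t1, t2), dim=1))
# tensor([[1, 2, 5, 6],
#         [3, 4, 7, 8]])
```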

4. Flattening of tensors

Flattening a tensor means collapsing its dimensions, similar to pressing the contents of, say, 3 jars into 1 big jar! Or maybe into 2 jars, depending on the content.

(a) Using flatten()

flattened into one dimension, i.e., along axis 0
flattened starting from axis 1
the original form, along axis 2
starting along axis 0 and ending with axis 1
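The four cases above can be sketched on a hypothetical rank-3 tensor of shape [2, 2, 3]:

```python
import torch

t = torch.ones(2, 2, 3)  # rank-3 tensor, 12 elements

# Flattened into one dimension, i.e., along axis 0
print(t.flatten().shape)                        # torch.Size([12])

# Flattened starting from axis 1
print(t.flatten(start_dim=1).shape)             # torch.Size([2, 6])

# Starting from axis 2 there is nothing left to merge: original form
print(t.flatten(start_dim=2).shape)             # torch.Size([2, 2, 3])

# Starting along axis 0 and ending with axis 1
print(t.flatten(start_dim=0, end_dim=1).shape)  # torch.Size([4, 3])
```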

(b) Using reshape()

( c) Using view()
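Both reshape() and view() can flatten a tensor by passing -1, which tells PyTorch to infer that dimension; a minimal sketch on the same hypothetical [2, 2, 3] tensor:

```python
import torch

t = torch.ones(2, 2, 3)

# (b) reshape with -1: PyTorch infers the single remaining dimension
print(t.reshape(-1).shape)  # torch.Size([12])

# (c) view(-1) does the same on a contiguous tensor
print(t.view(-1).shape)     # torch.Size([12])
```

The difference: view() never copies data and therefore only works on contiguous tensors, while reshape() returns a view when it can and copies otherwise.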

There are more ways of doing it, but the best ones are flatten() or reshape(). You may want to have a look at this question on Stack Overflow for a better understanding.

So, what’s the use of flattening?! It’s usually used in image flattening! And why would you want to flatten an image? Hmm… good question!

The reason we do this is that we’re going to need to insert this (flattened) data into an artificial neural network later on. This looks something like this:

Just to understand the image better, know that the process of building a Convolutional Neural Network usually involves four major steps: convolution, pooling, flattening, and full connection. I won’t go too deep into it now; just know that in order to pass the data through a neural network, we want it in a simplified format, i.e., in one row or one column.

Conclusion

With this, you now know how to perform basic operations like reshaping, squeezing and unsqueezing, concatenating, and flattening of tensors.

In the fourth and last tutorial of this series, we will work on some other basic operations performed on PyTorch tensors, such as arithmetic operations, broadcasting of tensors, comparison operations, etc.

You may always ping me or comment below for any doubts. Stay tuned!

For the complete code from this tutorial, you may look here.


shaistha fathima

ML Privacy and Security Enthusiast | Research Scientist @openminedorg | Computer Vision | Twitter @shaistha24