Basics of PyTorch tensor manipulation

Helik Thacker · Published in The Startup · May 30, 2020

PyTorch is a deep learning framework that is very popular among researchers. Other popular frameworks include TensorFlow and MXNet.

In this article, we will focus mainly on tensor manipulation in PyTorch, along with some other useful functions. The first question that pops up is: what is a tensor? It is similar to a NumPy array and is used for numerical computation. There are many ways to create a tensor in PyTorch, described in https://pytorch.org/docs/stable/tensors
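
For example, a tensor can be built from a Python list, from a fill value, or from random data. A minimal sketch (the shapes here are arbitrary, chosen only for illustration):

    import torch

    # A few of the many ways to create a tensor
    a = torch.tensor([[1.0, 2.0], [3.0, 4.0]])  # from a nested Python list
    b = torch.zeros(2, 3)                       # a 2x3 tensor filled with zeros
    c = torch.randn(2, 4, 1)                    # values drawn from a standard normal
    print(a.shape, b.shape, c.shape)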

So now we have a tensor. What next? In deep learning we often need a tensor to have a particular shape; if it does not, the operation will throw an error. PyTorch provides functions to manipulate the dimensions of a tensor so that it fits.

We will look at three functions that manipulate tensor dimensions: tensor.expand(), tensor.view(), and tensor.reshape(). We will also look at tensor.contiguous() and tensor.acos().

1. tensor.expand()

tensor.expand() returns a new view of the self tensor with the singleton dimensions (size 1 along an axis) expanded to a larger size. It will not work if we try to expand a non-singleton dimension to a different size.
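
A minimal sketch of this behavior, using a tensor whose last dimension is a singleton (the shapes match the example discussed next):

    import torch

    x = torch.randn(2, 4, 1)   # the last dimension has size 1
    y = x.expand(2, 4, 16)     # expand the singleton axis to size 16
    print(y.shape)             # torch.Size([2, 4, 16])

    z = x.expand(-1, -1, 16)   # -1 keeps that dimension's size as-is
    print(z.shape)             # torch.Size([2, 4, 16])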

We have changed the tensor from size [2, 4, 1] to [2, 4, 16]. When we pass -1 instead of a size, we do not change the size of that dimension. We can also expand a tensor to a larger number of dimensions; the new dimensions are added at the front. Expanding a tensor does not allocate new memory on its own; it just creates a new view of the existing tensor. Copying the expanded tensor to a new or existing tensor, however, does allocate memory.
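
A sketch of these two points, assuming a small 1-D tensor: the new dimension appears at the front, and the expanded view shares memory with the original until it is copied:

    import torch

    a = torch.tensor([1, 2, 3, 4])        # shape [4]
    b = a.expand(3, 4)                    # the new dimension is added at the front
    print(b.shape)                        # torch.Size([3, 4])
    print(b.data_ptr() == a.data_ptr())   # True: the view allocated no new memory
    c = b.clone()                         # copying the expanded view allocates memory
    print(c.data_ptr() == a.data_ptr())   # False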

2. tensor.contiguous()

After some operations, such as permute or transpose, the tensor loses its contiguity. Some operations are slower when the tensor is not contiguous. We can use tensor.contiguous() to return a tensor with the same data as self but laid out contiguously in memory. If the self tensor is already contiguous, it simply returns the self tensor.
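
A minimal sketch of this, using a transpose to produce a non-contiguous tensor and checking with tensor.is_contiguous():

    import torch

    x = torch.randn(3, 4)
    y = x.t()                  # transpose returns a non-contiguous view
    print(x.is_contiguous())   # True
    print(y.is_contiguous())   # False
    z = y.contiguous()         # copies the data into a contiguous layout
    print(z.is_contiguous())   # True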

The above example shows that y, which is the transpose of x, is not contiguous (checked with the tensor.is_contiguous() function). We can convert it to a contiguous-in-memory form using tensor.contiguous(). This function won't be used that often, but it is good to know about such functions.

3. tensor.view()

tensor.view() returns a new tensor with the same data as the self tensor, but with a possibly different shape. View does not work on non-contiguous tensors. In such cases, we have to call tensor.contiguous() first and then call tensor.view(), or use tensor.reshape() instead.
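
A minimal sketch of view(), including the -1 shorthand discussed next:

    import torch

    x = torch.arange(12)       # shape [12]
    y = x.view(3, 4)           # shape [3, 4]; the element count must match
    z = x.view(2, -1)          # -1 lets PyTorch infer the size: shape [2, 6]
    print(y.shape, z.shape)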

We can see that the tensor dimensions have been changed; the total number of elements must remain the same. We can also pass -1 for one dimension, in which case its size is inferred automatically.

4. tensor.reshape()

tensor.reshape() returns a view whenever possible, but unlike tensor.view() it also works on non-contiguous tensors by copying the data. In other words, tensor.reshape() behaves like tensor.view() when a view is possible, and like calling tensor.contiguous() followed by tensor.view() when it is not.
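
A minimal sketch contrasting the two on a non-contiguous tensor:

    import torch

    x = torch.randn(3, 4)
    y = x.t()                  # a non-contiguous view of shape [4, 3]
    # y.view(12) would raise a RuntimeError because y is not contiguous
    z = y.reshape(12)          # reshape copies the data when a view is impossible
    print(z.shape)             # torch.Size([12])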

We have discussed three functions that can be used to manipulate tensor dimensions. There are other functions for changing tensor size as well.

5. tensor.acos()

This function takes the element-wise inverse cosine, also called the arc cosine.

This is one of the many trigonometric functions available in PyTorch. When an input value is less than -1 or greater than 1, it returns Not a Number (NaN).
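
A minimal sketch, including one out-of-range input that produces NaN:

    import torch

    x = torch.tensor([-1.0, 0.0, 0.5, 2.0])
    print(torch.acos(x))       # tensor([3.1416, 1.5708, 1.0472, nan])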

Conclusion

We have discussed some of the tensor functions that PyTorch provides, focusing mainly on functions that manipulate tensor size. We also looked at one trigonometric function. Many more functions can be found in the official PyTorch documentation.

References

Official PyTorch documentation: https://pytorch.org/docs/stable/tensors.html

Shital Shah's answer on PyTorch contiguity: https://stackoverflow.com/questions/48915810/pytorch-contiguous

Difference between view, reshape, and permute: https://discuss.pytorch.org/t/difference-between-view-reshape-and-permute/54157/2

Difference between reshape and permute: https://discuss.pytorch.org/t/difference-between-2-reshaping-operations-reshape-vs-permute/30749/4

Alex Riley's answer on contiguous and non-contiguous arrays: https://stackoverflow.com/questions/26998223/what-is-the-difference-between-contiguous-and-non-contiguous-arrays/26999092#26999092

Difference between tensor.reshape() and tensor.view(): https://stackoverflow.com/questions/49643225/whats-the-difference-between-reshape-and-view-in-pytorch

About leaf and non-leaf tensors: https://discuss.pytorch.org/t/valueerror-cant-optimize-a-non-leaf-tensor/21751

Soumith Chintala's talk about PyTorch: https://www.youtube.com/watch?v=LAMwEJZqesU
