PyTorch Zero to Hero (Reshape, Flatten, and Concatenate Tensors) ~ 2

Abhishek Selokar
Jun 6, 2024


This blog delves into PyTorch tensor operations, including reshaping, flattening, concatenating, squeezing, and unsqueezing. It provides practical implementations and examples to help you understand and manipulate tensor dimensions effectively.


A link to the first part of this series can be found here.

Tensor Shape

Consider the tensor t defined below: a 3 x 4 matrix with rank 2, where 3 represents the number of rows and 4 represents the number of columns.

Rank denotes the number of dimensions present within the tensor (the number of indices needed to refer to an element of it).

Let’s dig into these terms in detail.

import torch

t = torch.tensor([
    [1, 2, 1, 4],
    [3, 7, 1, 8],
    [5, 0, 1, 1]
])

print(t)
## Output
tensor([[1, 2, 1, 4],
        [3, 7, 1, 8],
        [5, 0, 1, 1]])

Shape of the Tensor

  1. Tensor.size()
  2. Tensor.shape
print("Shape of t using tensor.size:",t.size())

print("Shape of t using tensor.shape:",t.shape)
## Output

Shape of t using tensor.size: torch.size([3,4])

Shape of t using tensor.shape: torch.size([3,4])

Note:

Tensor.size(dim=None) → torch.Size or int

Returns the size of the self tensor. If dim is not specified, the returned value is a torch.Size, a subclass of tuple. If dim is specified, returns an int holding the size of that dimension. — Source

print("size of 1st dimension:",t.size(dim=0))

print("size of 2nd dimension:",t.size(dim=1))
## output

size of 1st dimension: 3
size of 2nd dimension: 4

In PyTorch, size and shape mean the same thing.

Finding the rank of the tensor is as easy as finding the tensor’s shape.


print("Rank of the tensor t is", len(t.shape))

## Output
Rank of the tensor t is 2
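
PyTorch also exposes the rank directly: Tensor.dim() (and its alias Tensor.ndim) returns the number of dimensions without going through the shape.

print("Rank of the tensor t is", t.dim())
print("Rank of the tensor t is", t.ndim)

## Output
Rank of the tensor t is 2
Rank of the tensor t is 2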

Finding the number of elements in the tensor

t.numel() returns the total number of elements in the input tensor.


print("Number of elements in the tensor t is", t.numel())

## Output
Number of elements in the tensor t is 12
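
The element count is simply the product of the sizes along every dimension, which you can verify directly:

import math

# 3 * 4 = 12, the same value that t.numel() returns
print(math.prod(t.shape))

## Output
12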

Reshaping A Tensor

Reshaping changes only the shape of the tensor, not the underlying data within it. The only requirement is that the new shape contains the same total number of elements.

print(t.reshape([1, 12]))

## Output
tensor([[1, 2, 1, 4, 3, 7, 1, 8, 5, 0, 1, 1]])

print(t.reshape([2, 6]))

## Output
tensor([[1, 2, 1, 4, 3, 7],
        [1, 8, 5, 0, 1, 1]])

print(t.reshape([3, 4]))

## Output
tensor([[1, 2, 1, 4],
        [3, 7, 1, 8],
        [5, 0, 1, 1]])
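
Since the total number of elements is fixed at 12, you can also leave one dimension as -1 and PyTorch will infer it from the others:

print(t.reshape(-1, 6))

## Output
tensor([[1, 2, 1, 4, 3, 7],
        [1, 8, 5, 0, 1, 1]])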

The same holds true for higher-dimensional tensors, i.e. tensors with rank > 2, where the familiar notion of rows and columns no longer applies.

print(t.reshape(4, 3, 1))

## Output
tensor([[[1],
         [2],
         [1]],

        [[4],
         [3],
         [7]],

        [[1],
         [8],
         [5]],

        [[0],
         [1],
         [1]]])

print(t.reshape(6, 1, 2))

## Output
tensor([[[1, 2]],

        [[1, 4]],

        [[3, 7]],

        [[1, 8]],

        [[5, 0]],

        [[1, 1]]])

print(t.reshape(3, 2, 2))

## Output
tensor([[[1, 2],
         [1, 4]],

        [[3, 7],
         [1, 8]],

        [[5, 0],
         [1, 1]]])
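
One detail worth knowing: when possible (for example, for a contiguous tensor like t), reshape returns a view that shares storage with the original rather than a copy. A quick check confirms that only the shape changes, not the data:

r = t.reshape(3, 2, 2)

# Both tensors point at the same underlying storage
print(t.data_ptr() == r.data_ptr())

## Output
True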

Adjusting Tensor Shape Using Squeeze and Unsqueeze

Squeeze removes dimensions of size 1: it returns a tensor with all (or the specified) size-1 dimensions of the input removed. It is used to simplify a tensor by dropping unnecessary dimensions.

t_reshaped = t.reshape(1, 12)

print("Previous tensor:", t_reshaped)
print("Previous tensor shape:", t_reshaped.shape)

print()

t_squeezed = t_reshaped.squeeze()
print("Squeezed tensor:", t_squeezed)
print("Squeezed tensor shape:", t_squeezed.shape)

## Output

Previous tensor: tensor([[1, 2, 1, 4, 3, 7, 1, 8, 5, 0, 1, 1]])
Previous tensor shape: torch.Size([1, 12])

Squeezed tensor: tensor([1, 2, 1, 4, 3, 7, 1, 8, 5, 0, 1, 1])
Squeezed tensor shape: torch.Size([12])
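squeeze also accepts a dim argument, which removes that dimension only if it has size 1 and leaves any other size-1 dimensions untouched:

x = t.reshape(1, 12, 1)

print(x.squeeze(dim=0).shape)  # removes only the first dimension
print(x.squeeze(dim=2).shape)  # removes only the last dimension

## Output
torch.Size([12, 1])
torch.Size([1, 12])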

Unsqueeze adds a dimension of size 1 at a specified position. It is useful when you need to add a dimension so tensors fit together for operations like concatenation, or to match the input shape a function or model expects.

t_squeezed = t_reshaped.squeeze()

print("Previous tensor:", t_squeezed)
print("Previous tensor shape:", t_squeezed.shape)

print()

x_unsqueezed = t_squeezed.unsqueeze(dim=0)
print("Unsqueezed tensor:", x_unsqueezed)
print("Unsqueezed tensor shape:", x_unsqueezed.shape)
## Output

Previous tensor: tensor([1, 2, 1, 4, 3, 7, 1, 8, 5, 0, 1, 1])
Previous tensor shape: torch.Size([12])

Unsqueezed tensor: tensor([[1, 2, 1, 4, 3, 7, 1, 8, 5, 0, 1, 1]])
Unsqueezed tensor shape: torch.Size([1, 12])
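
The position matters: unsqueezing the same tensor at dim=1 produces a column instead of a row.

print(t_squeezed.unsqueeze(dim=1).shape)

## Output
torch.Size([12, 1])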

Flatten the Tensor

Flattening reshapes the input into a one-dimensional tensor. The order of the elements is unchanged. If start_dim or end_dim are passed, only the dimensions from start_dim through end_dim are flattened (demonstrated below).


print("t: ",t)
print("flattend tensor t:",torch.flatten(t))
## output

t: tensor([[1, 2, 1, 4],
[3, 7, 1, 8],
[5, 0, 1, 1]])
flattend tensor t: tensor([1, 2, 1, 4, 3, 7, 1, 8, 5, 0, 1, 1])
Flattening a zero-dimensional (scalar) tensor is a special case: it returns a one-dimensional tensor with a single element.

t_new = torch.tensor(1)
print("dimension of t_new:", t_new.dim())

t_new_flattened = torch.flatten(t_new)
print("flattened t_new:", t_new_flattened)

print("dimension of t_new_flattened:", t_new_flattened.dim())

## Output
dimension of t_new: 0
flattened t_new: tensor([1])
dimension of t_new_flattened: 1
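
And here is the start_dim behavior mentioned above: flattening a 3-dimensional tensor from dim 1 onward collapses everything except the first dimension, a common step before feeding a batch into a fully connected layer.

x = torch.arange(24).reshape(2, 3, 4)

# Keep dim 0 (the batch dimension), flatten dims 1 and 2 into one
print(torch.flatten(x, start_dim=1).shape)

## Output
torch.Size([2, 12])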

Concatenating Tensors

Tensors can be combined using the torch.cat() function. The shape of the resulting tensor depends on the shapes of the input tensors and on the dimension along which they are joined.

Consider two tensors, t1 and t2 as follows:

t1 = torch.tensor([
    [1, 2],
    [2, 4]
])
t2 = torch.tensor([
    [2, 1],
    [8, 4]
])

print("shape of combined tensor t1",t1.shape)

print("shape of combined tensor t1",t2.shape)

## Output

shape of combined tensor t1 torch.Size([2, 2])
shape of combined tensor t1 torch.Size([2, 2])

To combine t1 and t2 along axis 0, that is, row-wise, pass dim=0:

torch.cat((t1, t2), dim=0)
## Output
tensor([[1, 2],
        [2, 4],
        [2, 1],
        [8, 4]])

To combine t1 and t2 along axis 1, that is, column-wise, pass dim=1:

torch.cat((t1, t2), dim=1)
## Output
tensor([[1, 2, 2, 1],
        [2, 4, 8, 4]])

As expected, the shape of the resulting tensor changes after concatenation:

print("shape of combined tensor at dim= 0",torch.cat((t1,t2),dim=0).shape)

print("shape of combined tensor at dim= 1",torch.cat((t1,t2),dim=1).shape)

## Output

shape of combined tensor at dim= 0 torch.Size([4, 2])
shape of combined tensor at dim= 1 torch.Size([2, 4])
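
A closely related operation is torch.stack, which joins tensors along a new dimension instead of an existing one, so the rank of the result increases by one:

# t1 and t2, each of shape [2, 2], stacked into a single [2, 2, 2] tensor
print(torch.stack((t1, t2)).shape)

## Output
torch.Size([2, 2, 2])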

I hope you found this useful. Let’s meet again in the next blog of this series.

Interested in learning about RAG and its implementation? Do check out the following blog.
