Every Index-Based Operation You’ll Ever Need in PyTorch

Parth Batra
Emulation Nerd
4 min read · Jun 7, 2020

Index-based operations are very useful when working with machine learning frameworks. This post covers all of the index-based operations in PyTorch.

There are two kinds of operations in PyTorch: in-place operations and out-of-place operations. If you have been programming in Python for some time, you have certainly used both, though perhaps without knowing the terminology.

The Tutorials Point tutorial gives an apt definition of an in-place operation:

An in-place operation directly changes the content of a given tensor (vector, matrix, or higher-dimensional array) without making a copy.

In PyTorch, every operation that modifies a tensor in place has an _ suffix: for example, both add() and add_() exist. Some operations come in only one flavor, though — .narrow() has only an out-of-place version, while .fill_() has only an in-place one.

torch.Tensor provides the following nine index-based functions.

  • index_add_
  • index_add
  • index_copy_
  • index_copy
  • index_fill_
  • index_fill
  • index_put_
  • index_put
  • index_select

index_add_

index_add_(dim, index, tensor) → Tensor

  • dim (int) — dimension along which to add
  • index (LongTensor) — indices of tensor to choose from
  • tensor (Tensor) — the tensor containing values to add

Accumulates the elements of ‘tensor’ into ‘x’ by adding them at the indices, in the order given in ‘index’.

Here, we create a simple tensor x of shape (5, 5) consisting only of ones, and another tensor of shape (3, 5) whose every row is [1, 2, 3, 4, 5]. The index tensor is [0, 4, 2], so those particular rows of x (since dim=0) are added to, in that order.
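A minimal sketch of that setup (the variable names x, t, and index are illustrative):

```python
import torch

# x is a 5x5 tensor of ones; t is a 3x5 tensor whose every row is [1, 2, 3, 4, 5]
x = torch.ones(5, 5)
t = torch.tensor([[1., 2., 3., 4., 5.]]).repeat(3, 1)
index = torch.tensor([0, 4, 2])

# Add the rows of t into rows 0, 4, and 2 of x, in that order
x.index_add_(0, index, t)
print(x)
# Rows 0, 2, and 4 are now [2., 3., 4., 5., 6.]; rows 1 and 3 stay all ones
```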
Here, our index is [0, 0, 0]: this raises no error and returns a matrix in which only the first row differs from ones, because all three rows of ‘tensor’ were accumulated into it. So we can also add multiple values to the same index of x.
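A sketch of the duplicate-index case, with the same illustrative setup:

```python
import torch

x = torch.ones(5, 5)
t = torch.tensor([[1., 2., 3., 4., 5.]]).repeat(3, 1)

# All three rows of t accumulate into row 0 of x: 1 + 3 * [1, 2, 3, 4, 5]
x.index_add_(0, torch.tensor([0, 0, 0]), t)
print(x[0])  # tensor([ 4.,  7., 10., 13., 16.])
```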
Here, the function breaks with a RuntimeError when we switch to dim=1 in the example above: the length of index must equal tensor.shape[dim].

The corrected version looks like this:
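A sketch of what a corrected dim=1 call could look like (the shapes and values here are illustrative):

```python
import torch

x = torch.ones(5, 5)
index = torch.tensor([0, 4, 2])

# For dim=1, index must have length t.shape[1], and t.shape[0] must match x.shape[0]
t = torch.tensor([[1., 2., 3.]]).repeat(5, 1)   # shape (5, 3)

# Adds t's three columns into columns 0, 4, and 2 of x
x.index_add_(1, index, t)
print(x)
```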

To avoid errors, just remember: The dim’th dimension of tensor must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised.

‘index_add(dim, index, tensor) → Tensor’ is the out-of-place version of index_add_.

index_copy_

index_copy_(dim, index, tensor) → Tensor

  • dim (int) — dimension along which to index
  • index (LongTensor) — indices of tensor to select from
  • tensor (Tensor) — the tensor containing values to copy

Copies the elements of ‘tensor’ into the ‘x’ by selecting the indices in the order given in ‘index’.

This function works the same as index_add_(), but instead of adding to the corresponding rows, it replaces them.
As with add, we can use the same index multiple times, but each occurrence overwrites the previous one, so the final contents at that index come from the last value written. Also as with add, we can copy along other dimensions.
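A minimal sketch of index_copy_ on the same kind of setup (variable names and values are illustrative):

```python
import torch

x = torch.ones(5, 5)
t = torch.tensor([[ 1.,  2.,  3.,  4.,  5.],
                  [ 6.,  7.,  8.,  9., 10.],
                  [11., 12., 13., 14., 15.]])

# Rows 0, 4, and 2 of x are replaced (not added to) by the rows of t
x.index_copy_(0, torch.tensor([0, 4, 2]), t)
print(x[4])  # tensor([ 6.,  7.,  8.,  9., 10.])

# With a repeated index, later writes overwrite earlier ones, so row 0
# typically ends up as t's last row (the PyTorch docs caution that
# duplicate indices make the result nondeterministic)
y = torch.ones(5, 5)
y.index_copy_(0, torch.tensor([0, 0, 0]), t)
print(y[0])
```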

Why this error? Same reason as for index_add_: the dim’th dimension of ‘tensor’ does not equal the length of ‘index’.

This function can be used to copy some indices to a tensor taking values from another tensor. To avoid the errors, remember:

The dim’th dimension of tensor must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised.

‘index_copy(dim, index, tensor) → Tensor’ is the out-of-place version of index_copy_.

index_fill_

index_fill_(dim, index, value) → Tensor

  • dim (int) — dimension along which to index
  • index (LongTensor) — indices of the self tensor to fill in
  • value (float) — the value to fill with

Fills the elements of the ‘x’ with value ‘val’ by selecting the indices in the order given in ‘index’.

We can change the value of dim to fill along any dimension of our tensor. Also, if you give the same index multiple times, the returned tensor simply keeps the latest fill.
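A small sketch, assuming we fill the columns of an illustrative 5×5 tensor:

```python
import torch

x = torch.arange(25, dtype=torch.float).reshape(5, 5)

# dim=1 selects columns: fill columns 0 and 2 with -1
x.index_fill_(1, torch.tensor([0, 2]), -1.0)
print(x[0])  # tensor([-1.,  1., -1.,  3.,  4.])
```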

Be careful to keep the indices in range.

This function is quite useful for replacing the values at certain indices of a tensor. It is also quite forgiving and doesn’t require much error handling beyond the data types and the index range.

‘index_fill(dim, index, value) → Tensor’ is the out-of-place version of index_fill_

index_put_

index_put_(indices, values, accumulate=False) → Tensor

  • indices (tuple of LongTensor) — tensors used to index into self
  • values (Tensor) — tensor of the same dtype as self
  • accumulate (bool) — whether to accumulate into self (default: False)

Puts the values from ‘values’ into the self tensor at the positions specified in ‘indices’ (a tuple of tensors).

Equivalent to tensor[indices] = values.
If accumulate is True, the elements in values are added to self.
If accumulate is False (the default), values simply overwrites the existing entries, and the behavior is undefined if indices contain duplicate elements.
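A minimal sketch of both modes (the tensors here are illustrative):

```python
import torch

x = torch.zeros(3, 3)
rows = torch.tensor([0, 1, 2])
cols = torch.tensor([2, 1, 0])
values = torch.tensor([1., 2., 3.])

# Equivalent to x[rows, cols] = values
x.index_put_((rows, cols), values)
print(x[0, 2])  # tensor(1.)

# With accumulate=True, the values are added instead of overwritten
x.index_put_((rows, cols), values, accumulate=True)
print(x[0, 2])  # tensor(2.)
```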

This function can be very useful for updating values in loops, e.g. updating a weight matrix during training.

‘index_put(indices, values, accumulate=False) → Tensor’ is the out-of-place version of index_put_.

index_select

torch.index_select(input, dim, index, out=None) → Tensor

  • input (Tensor) — the input tensor.
  • dim (int) — the dimension in which we index
  • index (LongTensor) — the 1-D tensor containing the indices to index
  • out (Tensor, optional) — the output tensor

Returns a new tensor which indexes ‘x’ along dimension ‘dim’ using the entries in ‘index’.

Here the tensor is indexed along the other dimension, dim=1.
Take care that the indices stay in range for the size of x.
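A short sketch selecting rows and then columns of an illustrative tensor:

```python
import torch

x = torch.arange(25).reshape(5, 5)

# Select rows 0, 4, and 2 (dim=0), then columns 0 and 2 (dim=1)
rows = torch.index_select(x, 0, torch.tensor([0, 4, 2]))
cols = torch.index_select(x, 1, torch.tensor([0, 2]))
print(rows.shape)  # torch.Size([3, 5])
print(cols.shape)  # torch.Size([5, 2])
```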

The dim’th dimension of the returned tensor has the same size as the length of ‘index’; the other dimensions have the same size as in the input.

This function is useful for storing a large number of values in a single matrix, reducing the total number of variables, while still letting us pull out the values of interest as needed.

Conclusion

This article covered all of the index-based operations on torch.Tensor in PyTorch, with examples of how they work, where they break, and why. After reading it, you should have a good grasp of these operations and, hopefully, places to use them.



I am a multi-disciplinary engineer pursuing an M.S. in Mathematics at BITS Pilani, interested in AI and data science.