Every Index-Based Operation You'll Ever Need in PyTorch
Index-based operations are very useful when working with machine learning frameworks. This blog explains index-based operations in PyTorch.
There are two types of operations in PyTorch: in-place operations and out-of-place operations. If you have been programming in Python for some time, you have certainly used both, even if you were not familiar with the terminology.
The Tutorials Point tutorial has an apt definition of an in-place operation:
An in-place operation is an operation that directly changes the content of a given tensor (vector, matrix) without making a copy.
In PyTorch, every tensor operation that works in place has an _ suffix, e.g. we have both add() and add_(). However, some operations come in only one flavor: .narrow has only an out-of-place version, and .fill_ has only an in-place version.
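As a quick illustration of the difference, here is a minimal sketch using add() and add_() (the values are chosen arbitrarily):

```python
import torch

x = torch.tensor([1., 2., 3.])

y = x.add(1)   # out-of-place: returns a new tensor, x is unchanged
x.add_(1)      # in-place: modifies x itself

print(y)  # tensor([2., 3., 4.])
print(x)  # tensor([2., 3., 4.])
```

After the out-of-place call, x still held its original values; only the in-place call changed it.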
torch.Tensor provides the following nine index-based functions.
- index_add_
- index_add
- index_copy_
- index_copy
- index_fill_
- index_fill
- index_put_
- index_put
- index_select
index_add_
index_add_(dim, index, tensor) → Tensor
- dim(int): dimension along which to add
- index(Long Tensor): indices of tensor to choose from
- tensor(Tensor): tensor containing values to add
Accumulates the elements of tensor into self by adding to the indices in the order given in index.
To avoid errors, just remember: the dim'th dimension of tensor must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised.
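A minimal sketch of index_add_ in action (the values are chosen for illustration):

```python
import torch

x = torch.ones(5, 3)                 # "self": the tensor we accumulate into
t = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.],
                  [7., 8., 9.]])
index = torch.tensor([0, 4, 2])      # row i of t is added to row index[i] of x

x.index_add_(0, index, t)
print(x[0])  # tensor([2., 3., 4.])  -> 1 + [1, 2, 3]
print(x[1])  # tensor([1., 1., 1.])  -> untouched
```

Note that index has length 3, matching the size of t along dim 0, and both tensors have 3 columns, matching x along the other dimension.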
‘index_add(dim, index, tensor) → Tensor’ is the out-of-place version of index_add_.
index_copy_
index_copy_(dim, index, tensor) → Tensor
- dim (int) — dimension along which to index
- index (LongTensor) — indices of tensor to select from
- tensor (Tensor) — the tensor containing values to copy
Copies the elements of tensor into self by selecting the indices in the order given in index.
This function can be used to copy values from one tensor into selected indices of another. To avoid errors, the same rule as for index_add_ applies: the dim'th dimension of tensor must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised.
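A minimal sketch of index_copy_ (the values are chosen for illustration):

```python
import torch

x = torch.zeros(5, 3)
t = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.],
                  [7., 8., 9.]])
index = torch.tensor([0, 4, 2])      # row i of t is copied to row index[i] of x

x.index_copy_(0, index, t)
# rows 0, 4 and 2 of x now hold the rows of t; rows 1 and 3 stay zero
```

Unlike index_add_, the selected rows are overwritten rather than accumulated into.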
‘index_copy(dim, index, tensor) → Tensor’ is the out-of-place version of index_copy_.
index_fill_
index_fill_(dim, index, val) → Tensor
- dim (int) — dimension along which to index
- index (LongTensor) — indices of self tensor to fill in
- val (float) — the value to fill with
Fills the elements of self with the value val by selecting the indices in the order given in index.
Be careful to keep the indices within range.
This function is quite useful for replacing the values at certain indices of a tensor. It is also quite forgiving and requires little error handling beyond checking data types and index ranges.
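A minimal sketch of index_fill_, here filling whole columns (dim=1) with a constant:

```python
import torch

x = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.],
                  [7., 8., 9.]])
index = torch.tensor([0, 2])         # columns to fill

x.index_fill_(1, index, -1)          # fill columns 0 and 2 with -1
# x is now [[-1, 2, -1], [-1, 5, -1], [-1, 8, -1]]
```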
‘index_fill(dim, index, value) → Tensor’ is the out-of-place version of index_fill_
index_put_
index_put_(indices, value, accumulate=False) → Tensor
- indices (tuple of LongTensor) — tensors used to index into self.
- value (Tensor) — tensor of the same dtype as self.
- accumulate (bool) — whether to accumulate into self (Default = False)
Puts values from the tensor ‘value’ into self using the indices specified in ‘indices’ (a tuple of tensors).
This function can be very useful for updating values in loops, e.g. updating a weight matrix during training.
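A minimal sketch of index_put_, showing both the overwrite and accumulate behaviors (the coordinates are chosen for illustration):

```python
import torch

x = torch.zeros(3, 3)
indices = (torch.tensor([0, 1, 2]),   # row index for each value
           torch.tensor([2, 1, 0]))   # column index for each value
values = torch.tensor([1., 2., 3.])

x.index_put_(indices, values)                    # x[0, 2] = 1, x[1, 1] = 2, x[2, 0] = 3
x.index_put_(indices, values, accumulate=True)   # adds on top: now 2, 4, 6
```

With accumulate=False (the default) each call overwrites the targeted elements; with accumulate=True the values are added to whatever is already there.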
‘index_put(indices, value, accumulate=False) → Tensor’ is the out-of-place version of index_put_
index_select
torch.index_select(input, dim, index, out=None) → Tensor
- input (Tensor) — the input tensor.
- dim (int) — the dimension in which we index
- index (LongTensor) — the 1-D tensor containing the indices to index
- out (Tensor, optional) — the output tensor
Returns a new tensor which indexes ‘input’ along dimension ‘dim’ using the entries in ‘index’.
The dim'th dimension of the result has the same size as the length of ‘index’; the other dimensions have the same size as in ‘input’.
This function is useful when many values are stored in a single tensor: instead of keeping separate variables, we can select just the values we need by index.
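A minimal sketch of index_select, picking rows and columns out of the same tensor:

```python
import torch

x = torch.tensor([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])

rows = torch.index_select(x, 0, torch.tensor([0, 2]))  # pick rows 0 and 2
cols = torch.index_select(x, 1, torch.tensor([1]))     # pick column 1
# rows is [[1, 2, 3], [7, 8, 9]]; cols is [[2], [5], [8]]
```

Note that the result is a new tensor; x itself is unchanged.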
Conclusion
This blog covered the index-based operations on torch.Tensor in PyTorch, with examples, explanations, and the cases where they break. After reading it, you should have a good grasp of these operations and, hopefully, find places to use them.
Reference Links
Links to references and other interesting articles about tensors:
- Official documentation for torch.Tensor: https://pytorch.org/docs/stable/tensors.html
- Tutorials Point, in-place operators: https://www.tutorialspoint.com/inplace-operator-in-python