5 PyTorch Functions for Reduction Operations

7at · Published in The Startup · Dec 3, 2020

This tutorial describes 5 of the most commonly used PyTorch reduction functions in machine learning.

If you are a beginner in the field of machine learning, this blog post is for you. Let me introduce you to what is, I would say, a beginner-friendly and Pythonic machine learning library.

What really is PyTorch?

PyTorch is an open-source library for machine learning, developed by Facebook AI. It has applications in natural language processing and computer vision.

Why is it recommended for beginners?

It is a replacement for NumPy that can use the power of GPUs, and it is also very Pythonic. If you are familiar with the Python programming language, you can easily write code using PyTorch.

Functions

So let’s dive in and learn some of the reduction operations that PyTorch supports on tensors (a tensor is simply the term PyTorch uses for multi-dimensional arrays).

  • count_nonzero
  • argmax
  • unique
  • nansum
  • std

First, you have to install and import PyTorch on your system.

On Linux/Binder, install it using the following command.

!pip install numpy torch==1.7.0+cpu torchvision==0.8.1+cpu torchaudio==0.7.0 -f https://download.pytorch.org/whl/torch_stable.html

If you are working on Windows, use this command.

!pip install numpy torch==1.7.0+cpu torchvision==0.8.1+cpu torchaudio==0.7.0 -f https://download.pytorch.org/whl/torch_stable.html

For macOS users, the installation command is as follows:

!pip install numpy torch torchvision torchaudio

Next, import it:

import torch

Function 1 — torch.count_nonzero

This function is used when we need to count the non-zero elements of a tensor. It takes two arguments: the input tensor and dim, where the latter is optional. dim is an int or a Python tuple of ints.

In the first, simple example, we just give the input tensor as an argument, and the function returns the total number of non-zero entries in the whole tensor.
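
A minimal sketch of such a call; the tensor values below are my own and not those from the original notebook:

import torch

t = torch.tensor([[0., 1.],
                  [2., 0.]])
torch.count_nonzero(t)  # tensor(2): two non-zero entries in the whole tensor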

Now, in this example, we also pass the optional dim argument. For a 2-D tensor it can be 0 or 1: dim=0 counts the non-zero entries down each column, and dim=1 counts them across each row.
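
For illustration, with another made-up 2×2 tensor:

import torch

t = torch.tensor([[0., 1.],
                  [2., 3.]])
torch.count_nonzero(t, dim=0)  # tensor([1, 2]): counts down each column
torch.count_nonzero(t, dim=1)  # tensor([1, 2]): counts across each row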

Since our tensor has shape 2×2, it only has two dimensions (0 and 1). We passed dim=2, asking it to count along a third dimension that does not exist, and that is why it gave us an error.
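
Something like the following reproduces that failure (the exact error message may differ between PyTorch versions):

import torch

t = torch.tensor([[0., 1.],
                  [2., 3.]])
try:
    torch.count_nonzero(t, dim=2)  # a 2-D tensor only has dims 0 and 1
except Exception as e:
    print(e)  # dimension out of range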

Data science demands that we know the data we are working with. Say we are given a CSV file: we have entities in the rows and attributes/features in the columns. While understanding and manipulating the data, count_nonzero is one of the most common functions you will use, because sometimes you need to drop the entities that hold null (zero) values, and this function helps you find them.

Function 2 — torch.argmax

The argmax function returns the index of the maximum value of the whole tensor. It takes three arguments: an input tensor, a dimension, and keepdim. The input is a PyTorch tensor; dim is an int specifying the dimension along which to return the index; and keepdim is a bool, set to True if we want to retain the reduced dimension in the output tensor.

If there are multiple maximal values, then the indices of the first maximal value are returned.

Here we returned the index of the maximum value of the entire (flattened) tensor instead of computing it along rows or columns.
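
A quick sketch with example values of my own:

import torch

t = torch.tensor([[1., 5.],
                  [3., 2.]])
torch.argmax(t)  # tensor(1): index of 5. in the flattened tensor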

Here we added the second, optional argument dim with a value of 1, which returns, for each row, the index of its maximum value across the columns. At the end we also added the optional keepdim argument, which controls whether the output tensor keeps the reduced dimension; it is set to True here to show its effect.
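
The same call with dim and keepdim might look like this:

import torch

t = torch.tensor([[1., 5.],
                  [3., 2.]])
torch.argmax(t, dim=1, keepdim=True)
# tensor([[1],
#         [0]])  -> per-row index of the maximum, with the reduced dim kept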

As before, our 2×2 tensor has only two dimensions, so passing dim=2 gave us the same dimension-out-of-range error as in the previous function.
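
For instance (same made-up tensor as before):

import torch

try:
    torch.argmax(torch.tensor([[1., 5.], [3., 2.]]), dim=2)
except Exception as e:
    print(e)  # dimension out of range, just like count_nonzero above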

This function is useful, for example, when we compute predictions from a model and want to see which entry scored the highest; we can use argmax there to pick out the predictions that are closest to our target values.

Function 3 — torch.unique

Returns the unique values from the input tensor. Its output may also include two optional tensors, inverse_indices and counts.

The parameters of this function are: the input tensor; sorted, a bool controlling whether the output tensor is sorted; return_inverse, a bool controlling whether to also return the indices of where the elements in the original input ended up in the returned unique list; return_counts, a bool controlling whether to return the count of each unique element; and dim, the dimension along which to apply the function.

Here all the unique elements are returned in one tensor, and the count of each unique element is returned in a separate tensor.
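
A small sketch, again with made-up values:

import torch

t = torch.tensor([[1, 3],
                  [3, 2]])
values, counts = torch.unique(t, return_counts=True)
# values: tensor([1, 2, 3])
# counts: tensor([1, 1, 2])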

With return_inverse=True there is an additional returned tensor (of the same shape as the input) holding the indices of where the elements of the original input map to in the output; otherwise, the function only returns a single tensor.
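
For example, reusing the same made-up tensor:

import torch

t = torch.tensor([[1, 3],
                  [3, 2]])
values, inverse = torch.unique(t, return_inverse=True)
# values:  tensor([1, 2, 3])
# inverse: tensor([[0, 2],
#                  [2, 1]])  -> same shape as the input, mapping each element to its position in values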

Once again our tensor has shape 2×2 and therefore only two dimensions, so passing dim=2 gave us the same dimension-out-of-range error as in the functions above.
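
A sketch of that failure (the error wording varies by version):

import torch

try:
    torch.unique(torch.tensor([[1, 3], [3, 2]]), dim=2)
except Exception as e:
    print(e)  # dimension out of range once more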

torch.unique can be useful when we want to extract the unique values from a large input tensor.

Function 4 — torch.nansum

It returns the sum of the values, treating NaN (not a number) entries as zero. The parameters of this function are: the input tensor; dim, an int giving the dimension to reduce; and keepdim, a bool controlling whether the output tensor retains the reduced dimension.

This is the output for the flattened tensor.
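
Illustrated with invented values:

import torch

t = torch.tensor([[1., 2.],
                  [float('nan'), 4.]])
torch.nansum(t)  # tensor(7.): the NaN is treated as zero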

Here we included the dimension as well.
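
For example, with the same invented tensor:

import torch

t = torch.tensor([[1., 2.],
                  [float('nan'), 4.]])
torch.nansum(t, dim=0)  # tensor([1., 6.]): column sums, NaN counted as zero
torch.nansum(t, dim=1)  # tensor([3., 4.]): row sums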

Here we have not converted the NaN entry to a floating-point datatype, so it gives an error.
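
One plausible reconstruction of the failure described here is trying to force the NaN into an integer tensor, which PyTorch rejects because NaN only exists for floating-point dtypes:

import torch

try:
    # constructing an integer tensor from a NaN value should fail
    t = torch.tensor([[1, 2], [float('nan'), 4]], dtype=torch.int64)
    torch.nansum(t)
except Exception as e:
    print(e)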

Sometimes we are given a dataset that has NaN values for some features. That can be troublesome when we want to add up a whole row or column and run into a non-numeric entry. We can use nansum there to return the sum of all the values, treating the NaNs as zeros.

Function 5 — torch.std

This function returns the standard deviation of all the values in the input tensor. Its parameters are the input tensor and unbiased, a bool controlling whether the unbiased (Bessel-corrected) estimator is used. We can also compute the standard deviation along a given dimension.

This is a simple example with just one argument.
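
A sketch with example values of my own:

import torch

t = torch.tensor([[1., 2.],
                  [3., 4.]])
torch.std(t)  # tensor(1.2910): sample standard deviation of all four values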

Here we also include the dim argument.
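
For instance, with the same example tensor:

import torch

t = torch.tensor([[1., 2.],
                  [3., 4.]])
torch.std(t, dim=0)  # tensor([1.4142, 1.4142]): std of each column
torch.std(t, dim=1)  # tensor([0.7071, 0.7071]): std of each row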

If we give it a tensor of integer type, it gives us an error, because the standard deviation is a floating-point value.
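
Something along these lines (the wording of the error varies by version):

import torch

try:
    torch.std(torch.tensor([[1, 2], [3, 4]]))  # integer dtype
except Exception as e:
    print(e)  # std expects a floating-point (or complex) input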

Std stands for standard deviation. It is a mathematical function used to measure the dispersion of values around their mean.

Conclusion

This post covers a complete explanation of the 5 functions, since some of us find it difficult to read a library's documentation. For each function, I have given two working examples and one non-working example so that its concept and usage become clear to you.

References

Many thanks to the Jovian platform for notebook support.
