PyTorch: Tensor Operations

Preeti Sharma
Published in Simply Dev
6 min read · Jun 4, 2020

What’s PyTorch?

PyTorch is an open-source machine learning library based on the Torch library, used for applications such as computer vision and natural language processing, primarily developed by Facebook’s AI Research lab. It is free and open-source software released under the Modified BSD license.

Since PyTorch is an open-source machine learning library, it contains various sub-libraries for different domains. For ease of learning, we focus here on tensors.

A PyTorch Tensor is fundamentally equivalent to a NumPy array: it knows nothing about deep learning, computational graphs, or gradients, and is just a generic n-dimensional array to be used for arbitrary numeric computation.

The biggest difference between a NumPy array and a PyTorch Tensor is that a PyTorch Tensor can run on either the CPU or the GPU. To run operations on the GPU, simply cast the Tensor to a CUDA datatype.

Tensors support operations and functionality that make it easier to understand the computations happening in the array. But first:

How is a tensor different from other n-dimensional data types?

Here we use the PyTorch library's torch.tensor() function, which accepts vectors, matrices, 3-D arrays, and NumPy arrays as input, lets us apply mathematics to them, and converts them into tensor form.
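As a quick illustration, here is a minimal sketch (the specific values are only examples) of torch.tensor() converting different kinds of input:

```python
import torch
import numpy as np

# torch.tensor() accepts Python lists (vectors), nested lists
# (matrices), and NumPy arrays, and copies them into tensors
vector = torch.tensor([1.0, 2.0, 3.0])
matrix = torch.tensor([[1, 2], [3, 4]])
from_numpy = torch.tensor(np.array([5.0, 6.0]))

print(vector)
print(matrix)
print(from_numpy)
```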

Functions we can use on a torch.tensor() matrix:

A short introduction to PyTorch and the chosen functions:

  • Function 1: math functions such as torch.rand(), torch.abs_(), and torch.allclose()
  • Function 2: torch.as_strided (layout functions)
  • Function 3: functions that deal with individual elements instead of whole clusters
  • Function 4: functions that deal with subtensors, such as storage_offset()
  • Function 5: symeig (eigenvalue functions)

Function 1: some of the math functions we use here to play with the tensor inputs.

1. tensor = torch.rand((no. of rows, no. of columns))

2. tensor.abs_() and tensor.add_(other, alpha=1)

3. torch.allclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False)

First, import all the required libraries.
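For the examples below, only torch is needed (plus NumPy when converting arrays):

```python
import torch
import numpy as np
```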

Example 1:

First, we use rand() to fill a tensor of the given dimensions with random values. Then we create a new tensor with new_tensor(), setting requires_grad to False, and multiply it with the original tensor so that the resulting tensor z takes the dimensions of the previous tensor. We can check the shape of tensor z with the .shape attribute, and permute its dimensions with .permute().
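A minimal sketch of these steps (the shapes and values are only illustrative):

```python
import torch

# rand() fills a tensor of the given shape with values from [0, 1)
x = torch.rand((2, 3))

# new_tensor() copies data into a fresh tensor;
# requires_grad=False means no gradients are tracked for it
y = x.new_tensor([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]], requires_grad=False)

# Element-wise multiplication: z keeps the shape of its operands
z = x * y
print(z.shape)                 # torch.Size([2, 3])

# permute() reorders the dimensions by index
print(z.permute(1, 0).shape)   # torch.Size([3, 2])
```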

Example 2:

Here we use the abs_() function to make every tensor value positive in place, and then .add_() to add a number to each item of the tensor, with the alpha value set to 1.
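A short sketch of both in-place operations:

```python
import torch

t = torch.tensor([-1.5, 2.0, -3.0])

# abs_() replaces each element with its absolute value, in place
t.abs_()
print(t)                 # tensor([1.5000, 2.0000, 3.0000])

# add_(other, alpha=1) adds alpha * other to each element, in place
t.add_(10, alpha=1)
print(t)                 # tensor([11.5000, 12.0000, 13.0000])
```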

Example 3:

The allclose() parameters are:

  • input (Tensor): first tensor to compare
  • other (Tensor): second tensor to compare
  • rtol (float, optional): relative tolerance. Default: 1e-05
  • atol (float, optional): absolute tolerance. Default: 1e-08
  • equal_nan (bool, optional): if True, two NaNs are considered equal. Default: False
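For example, a minimal sketch:

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([1.0, 2.0, 3.0 + 1e-9])

# True: the difference is within atol + rtol * |b|
print(torch.allclose(a, b))                      # True

# With the tolerances tightened, the same tensors no longer match
print(torch.allclose(a, b, rtol=0, atol=1e-12))  # False
```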

The argsort function outputs the indices that would sort the elements of a tensor. The asin_ function provides support for the inverse sine function in PyTorch: it expects inputs in the range [-1, 1] and gives the output in radians, returning nan for any input outside [-1, 1]. The input is a tensor, and if it contains more than one element, the inverse sine is computed element-wise.
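A small sketch of both:

```python
import torch

t = torch.tensor([0.5, -0.3, 0.9])

# argsort returns the indices that would sort the tensor
print(torch.argsort(t))   # tensor([1, 0, 2])

# asin_ computes the element-wise inverse sine in place, in radians;
# values outside [-1, 1] would produce nan
t.asin_()
print(t)
```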

OK, now that we understand this much, let's move on to the next function.

Function 2: torch.as_strided, torch.bincount, torch.diag_embed

  1. torch.as_strided creates a view of an existing torch.Tensor input with the specified size, stride, and storage_offset (see the sketch just after this list).
  2. torch.bincount counts the occurrences of each value in a 1-D tensor of non-negative integers; an optional weights tensor of the same shape contributes its corresponding weight to each bin instead of 1.
  3. torch.diag_embed builds a tensor in which the input values are placed along a diagonal.
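Here is a minimal sketch of as_strided (the values are only illustrative):

```python
import torch

x = torch.arange(9.0)   # storage: [0., 1., ..., 8.]

# A 2x2 view starting at storage offset 1; each row skips 3
# storage elements, each column skips 1
view = torch.as_strided(x, size=(2, 2), stride=(3, 1), storage_offset=1)
print(view)   # tensor([[1., 2.],
              #         [4., 5.]])
```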

Example 1:

Here the bernoulli_() function fills each location of self with an independent sample from Bernoulli(p). self can have an integral data type.

(Image: a sample of randomly generated Bernoulli tensors.)
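A quick sketch:

```python
import torch

# bernoulli_(p) fills the tensor in place with independent samples
# from Bernoulli(p), i.e. 1 with probability p and 0 otherwise
t = torch.empty(3, 3)
t.bernoulli_(0.5)
print(t)   # a random 3x3 matrix of zeros and ones
```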

Example 2:


The bincount() function takes a 1-D input tensor of non-negative integers and counts how many times each value occurs. The optional weights tensor, with the same shape as the input, adds its corresponding weight to each bin instead of 1, and minlength sets a minimum number of bins in the output.
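A minimal sketch:

```python
import torch

values = torch.tensor([0, 1, 1, 3])
weights = torch.tensor([0.5, 1.0, 2.0, 4.0])

# Without weights: the count of each integer value
print(torch.bincount(values))           # tensor([1, 2, 0, 1])

# With weights: the sum of the weights that fall into each bin
print(torch.bincount(values, weights))  # tensor([0.5000, 3.0000, 0.0000, 4.0000])
```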

Example 3:

diag_embed creates a tensor whose diagonals of certain 2-D planes (determined by dim1 and dim2) are filled by the input. To make it easy to create batched diagonal matrices, the 2-D planes formed by the last two dimensions of the returned tensor are chosen by default.

The argument offset controls which diagonal to consider:

If offset = 0, it is the main diagonal.

If offset > 0, it is above the main diagonal.

If offset < 0, it is below the main diagonal.

The size of the new matrix is calculated so that the specified diagonal has the size of the last input dimension. Note that for an offset other than 0, the order of dim1 and dim2 matters: exchanging them is equivalent to changing the sign of offset.
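For instance:

```python
import torch

d = torch.tensor([1.0, 2.0, 3.0])

# Default: values land on the main diagonal of the last two dims
print(torch.diag_embed(d))
# tensor([[1., 0., 0.],
#         [0., 2., 0.],
#         [0., 0., 3.]])

# offset=1 places the values just above the main diagonal,
# so the result grows to 4x4
print(torch.diag_embed(d, offset=1))
```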

Function 3: torch.erfinv, torch.split, Tensor.sparse_mask

erfinv computes the inverse error function of each input element. As the name suggests, the split function splits a tensor into chunks, which can then be processed individually. sparse_mask takes the input tensor and filters it by the indices of a sparse mask, returning a sparse tensor.
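A short sketch of the first two:

```python
import torch

t = torch.tensor([0.0, 0.5, -0.9])

# erfinv: the element-wise inverse error function
print(torch.erfinv(t))

# split: divide a tensor into chunks of (up to) the given size
x = torch.arange(6)
print(torch.split(x, 2))  # (tensor([0, 1]), tensor([2, 3]), tensor([4, 5]))
```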


sparse_mask returns a new SparseTensor with values from the Tensor input filtered by the indices of mask; the values of mask itself are ignored. input and mask must have the same shape.

Parameters:

input (Tensor): an input Tensor

mask (SparseTensor): a SparseTensor whose indices are used to filter input
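A minimal sketch, assuming a COO sparse mask:

```python
import torch

dense = torch.randn(3, 3)

# Build a sparse mask with two non-zero positions: (0, 1) and (2, 0)
indices = torch.tensor([[0, 2],
                        [1, 0]])
values = torch.tensor([1.0, 1.0])
mask = torch.sparse_coo_tensor(indices, values, (3, 3)).coalesce()

# sparse_mask keeps only the dense values at the mask's indices
print(dense.sparse_mask(mask))
```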

Function 4:

  1. storage_offset(): returns the tensor's offset into its underlying storage, which is how PyTorch deals with subtensors.
  2. stride(): returns the step needed in storage to move one element along each dimension.
  3. sum(): sums up the elements of the matrix.
(Image: the whole process of PyTorch dealing with subtensors.)

Example 1:
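A minimal sketch of storage_offset():

```python
import torch

x = torch.arange(10)

# A slice is a view into the same storage; storage_offset()
# reports where the view begins inside that storage
y = x[3:]
print(x.storage_offset())  # 0
print(y.storage_offset())  # 3
```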

Example 2:
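A minimal sketch of stride():

```python
import torch

m = torch.zeros(3, 4)

# stride() gives the number of storage elements to skip to move
# one step along each dimension
print(m.stride())      # (4, 1)

# Transposing swaps the strides without copying any data
print(m.t().stride())  # (1, 4)
```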

Example 3:
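A minimal sketch of sum():

```python
import torch

m = torch.tensor([[1.0, 2.0],
                  [3.0, 4.0]])

print(m.sum())       # tensor(10.) -- sum of all elements
print(m.sum(dim=0))  # tensor([4., 6.]) -- sum down each column
```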

Function 5: functions that compute eigenvalues and eigenvectors

Here, torch.symeig computes the eigenvalues (and optionally the eigenvectors) of a real symmetric matrix; the upper argument controls whether the upper or lower triangular portion of the matrix is used.
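A minimal sketch; note that torch.symeig was deprecated after this article was written, and recent PyTorch releases use torch.linalg.eigh instead:

```python
import torch

# A real symmetric matrix
a = torch.tensor([[2.0, 1.0],
                  [1.0, 2.0]])

# In PyTorch 1.x: eigenvalues, eigenvectors = torch.symeig(a, eigenvectors=True)
# In recent releases, torch.linalg.eigh is the replacement
eigenvalues, eigenvectors = torch.linalg.eigh(a)
print(eigenvalues)   # tensor([1., 3.])
print(eigenvectors)
```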

Thank you
