Part 1 — Tensor

Nelson Punch
Software-Dev-Explore
4 min read · Apr 13, 2024
Photo by rashid khreiss on Unsplash

Introduction

This article covers the basics of tensor creation, reshaping, and copying in PyTorch. In addition, we will see how to compute tensors on both CPU and GPU, and finally how to use tensors and NumPy arrays interchangeably.

Tensor manipulation

Tensor

In general, a tensor is an array with three or more dimensions, though in PyTorch the term covers arrays of any number of dimensions.

With PyTorch we can create tensors easily.

We can also check a tensor's size or shape.
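As a minimal sketch of what creation and shape inspection look like (the values here are illustrative):

```python
import torch

# Create a tensor from a nested Python list
t = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.]])

print(t.shape)   # torch.Size([2, 3])
print(t.size())  # size() and the .shape attribute are equivalent
```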

  • scalar: a tensor with a single value; it has 0 dimensions.
  • vector: a tensor with 1 dimension (1D).
  • matrix: a tensor with 2 dimensions (2D).
  • tensor: a tensor with 3 or more dimensions (3D).

They are all types of tensor, just with different numbers of dimensions. To interpret a tensor's shape, take torch.Size([2, 3, 4]) as an example: the tensor has 2 elements in the first dimension, 3 in the second, and 4 in the third. From this we know the number of dimensions is 3, and we also know how many elements each dimension holds. Notice that a scalar has no dimensions.
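The four kinds above can be sketched like this (the specific sizes are just examples):

```python
import torch

scalar = torch.tensor(7.)            # 0 dimensions
vector = torch.tensor([1., 2., 3.])  # 1 dimension
matrix = torch.ones(2, 3)            # 2 dimensions, all values 1.
tensor = torch.ones(2, 3, 4)         # 3 dimensions

print(scalar.ndim, vector.ndim, matrix.ndim, tensor.ndim)  # 0 1 2 3
print(tensor.shape)  # torch.Size([2, 3, 4])
```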

Reshape

We can reshape a tensor in PyTorch, and there are 2 methods we can use.

  • view(): returns a tensor with the desired shape that shares the underlying data with the original tensor. In other words, it does not copy the data.
  • reshape(): may or may not create a copy, depending on whether the data can be viewed without copying.

The recommended way to reshape a tensor is view().

Here matrix is a tensor filled with 1. for every value; it comes from the previous section.

We can see that matrix is a 2x3 tensor and it has been reshaped into a 1x6 tensor. We then change the value at [0, 1] (first row, second element) of the 1x6 tensor to 2., and the original matrix changes as well: the two tensors share the same underlying data.
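A minimal sketch of that behaviour, assuming matrix was created with torch.ones(2, 3):

```python
import torch

matrix = torch.ones(2, 3)  # 2x3 tensor filled with 1.
flat = matrix.view(1, 6)   # reshape to 1x6; shares the same memory

flat[0, 1] = 2.            # change the second element of the 1x6 view
print(matrix)              # matrix[0, 1] is now 2. as well
```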

Copy

There are 2 ways to copy a tensor.

  • new_tensor(): copies the data into a new tensor.
  • clone(): copies the data; the clone stays in the computation graph.

We can see that changes to the new tensor do not affect the original tensor.
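A sketch of copying with new_tensor() (note PyTorch may emit a warning recommending clone().detach() instead, which the next section covers):

```python
import torch

original = torch.ones(2, 3)
copy = original.new_tensor(original)  # copies the data into a new tensor

copy[0, 0] = 9.
print(original[0, 0].item())  # 1.0 -- the copy is independent
```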

We do the same thing again, except this time we use clone() and detach().

detach() tells PyTorch to remove the tensor from the computation graph, so gradients no longer flow through it.
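The same copy, sketched with clone() and detach():

```python
import torch

original = torch.ones(2, 3, requires_grad=True)
copy = original.clone().detach()  # copy the data and leave the graph

copy[0, 0] = 9.                   # detached, so in-place edits are fine
print(original[0, 0].item())      # 1.0 -- the original is untouched
print(copy.requires_grad)         # False
```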

GPU, CPU CUDA

If your computer has an NVIDIA dedicated graphics card, you may be able to put tensors onto the GPU for computation. If no GPU is available, all tensors stay on the CPU.

CUDA

Compute Unified Device Architecture is a parallel computing platform and application programming interface that allows software to use certain types of graphics processing units for accelerated general-purpose processing, an approach called general-purpose computing on GPUs (GPGPU).

This parallelism allows tensors to be computed much faster than on a CPU.

Convert NumPy to Tensor

NumPy is used very often in machine learning and deep learning, so converting NumPy data to tensors and back is important.

There are 2 ways to convert a NumPy array to a tensor.

  • as_tensor(): shares the underlying data with the original NumPy array (when the dtype and device allow; otherwise it copies).
  • from_numpy(): always shares the underlying data with the original NumPy array.

To convert a tensor back to NumPy:

  • numpy(): converts a tensor back to a NumPy array.

CPU

The following code works only on the CPU.

The tensor changes when the original NumPy array changes, because from_numpy() and as_tensor() share memory.

The tensor does not change even when we change the original NumPy array, because torch.tensor() makes a copy.

To change the tensor's data type: here it changes from float64 to float32.

To convert the tensor back to a NumPy array.
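The CPU behaviour described above can be sketched as follows (the array values are illustrative):

```python
import numpy as np
import torch

arr = np.ones(3)

shared = torch.from_numpy(arr)  # shares memory with arr
copied = torch.tensor(arr)      # makes an independent copy

arr[0] = 5.
print(shared[0].item())  # 5.0 -- follows the NumPy array
print(copied[0].item())  # 1.0 -- unaffected

# Change dtype: NumPy defaults to float64; a common tensor dtype is float32
shared32 = shared.to(torch.float32)
print(shared32.dtype)    # torch.float32

back = shared32.numpy()  # back to NumPy
print(type(back))        # <class 'numpy.ndarray'>
```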

GPU

The way to put a tensor onto the GPU is to use to().

Always put the tensor on the proper device, either CPU or GPU, during tensor creation. This ensures there are no issues when sharing your code with others.

Let's set up the device and look at the device information.

Create a tensor from a NumPy array and put it on the GPU.
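A sketch of device setup and on-device creation; this assumes a CUDA build of PyTorch and falls back to the CPU when no GPU is available:

```python
import numpy as np
import torch

# Pick the GPU if available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)

arr = np.ones(3)
t = torch.tensor(arr, device=device)  # created directly on the chosen device
# equivalently: t = torch.from_numpy(arr).to(device)
print(t.device)
```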

From GPU back to NumPy. Remember that NumPy works only on the CPU, not the GPU.

Always call cpu() first and then numpy(), even if no GPU is available. This ensures there are no issues when sharing your code with others.
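A minimal sketch of that round trip, written so it runs with or without a GPU:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
t = torch.ones(3, device=device)

# cpu() is a no-op when the tensor is already on the CPU,
# and moves it off the GPU when it is not -- safe either way
arr = t.cpu().numpy()
print(type(arr))  # <class 'numpy.ndarray'>
```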
