Why do we need tensor objects over NumPy arrays for building neural networks?

Dipanwita Mallick
Published in IWriteAsILearn · May 18, 2021

When searching for the optimal weights, we vary each weight by a small amount and measure its impact on the overall loss value. The key thing to note is that the loss computed for one weight's update does not depend on the updates of the other weights in the same iteration. Hence, the process can be sped up by computing the weight updates in parallel across many cores. GPUs have far more cores than CPUs, and tensor objects can run on GPUs, so we use tensors instead of NumPy arrays.

Here is an example comparing processing speed for matrix multiplication using tensor objects and NumPy arrays.
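The original Colab cells are not reproduced here, so the snippet below is only a minimal sketch of the GPU run; the matrix shapes, variable names, and use of time.time() for timing are illustrative assumptions.

```python
import time
import torch

# Sketch only: shapes and names are assumptions, not the article's original code.
x = torch.rand(6400, 6400)
y = torch.rand(6400, 6400)

# Move both operands to the GPU (requires a CUDA runtime, e.g. a Colab GPU).
x_gpu = x.to('cuda')
y_gpu = y.to('cuda')

start = time.time()
z = torch.matmul(x_gpu, y_gpu)
torch.cuda.synchronize()  # wait for the GPU kernel to finish before stopping the clock
print(f"GPU tensor matmul: {time.time() - start:.6f} s")
```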

The computation above ran on a GPU device in the Google Colab environment.

Now placing the same x and y on the CPU.
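Continuing the sketch above with the same assumed tensors, the CPU version could look like this:

```python
# Same (assumed) x and y, this time kept on the CPU.
x_cpu = x.to('cpu')
y_cpu = y.to('cpu')

start = time.time()
z = torch.matmul(x_cpu, y_cpu)
print(f"CPU tensor matmul: {time.time() - start:.6f} s")
```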

And finally, the same implementation using NumPy arrays:
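A corresponding NumPy sketch, again with assumed shapes (np.matmul always runs on the CPU):

```python
import numpy as np

# NumPy version of the same multiplication (sketch; shapes are illustrative).
x_np = np.random.rand(6400, 6400).astype(np.float32)
y_np = np.random.rand(6400, 6400).astype(np.float32)

start = time.time()
z = np.matmul(x_np, y_np)
print(f"NumPy matmul: {time.time() - start:.6f} s")
```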

Note how the GPU computation is approximately 30 times faster than the same tensor operation on the CPU, and almost 60 times faster than the NumPy implementation.

So, which one would you prefer: GPU + tensor, CPU + tensor, or CPU + NumPy?

Thank you for reading. :)

