Image by Pexels from Pixabay


Faster and Memory-Efficient PyTorch models using AMP and Tensor Cores

By adding just a few lines of code


Did you know that the backpropagation algorithm was introduced in a 1986 Nature paper by Rumelhart, Hinton, and Williams?

Also, ConvNets were first presented by Yann LeCun in 1998 for digit classification with the LeNet architecture. It was only later, in 2012, that AlexNet popularized ConvNets by stacking multiple convolution layers and training on GPUs to achieve state of the art on ImageNet.

So why did these ideas become famous only recently, and not decades ago?

It is only with the vast computing resources that have become available in recent years that we have been able to experiment with Deep Learning and utilize it to its full potential.

But are we using our computing resources well enough? Can we do better?

This post is about utilizing Tensor Cores and Automatic Mixed Precision for faster training of Deep Learning Networks.
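To set the stage, here is a minimal sketch of what an AMP training loop looks like in PyTorch, using `torch.autocast` and `torch.cuda.amp.GradScaler`. The model, data, and hyperparameters are made-up placeholders; the snippet disables mixed precision when no CUDA GPU is present so it runs anywhere:

```python
import torch
import torch.nn as nn

# Hypothetical toy model and data, just to illustrate the AMP loop shape.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# GradScaler rescales the loss to avoid FP16 gradient underflow;
# with enabled=False (no GPU) it becomes a transparent pass-through.
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(32, 128, device=device)
y = torch.randint(0, 10, (32,), device=device)

for _ in range(3):
    optimizer.zero_grad()
    # autocast runs eligible ops (e.g. matmuls) in half precision,
    # which is what lets Tensor Cores kick in on supporting GPUs.
    with torch.autocast(device_type=device, enabled=(device == "cuda")):
        loss = loss_fn(model(x), y)
    scaler.scale(loss).backward()  # backward pass on the scaled loss
    scaler.step(optimizer)         # unscales grads, then optimizer.step()
    scaler.update()                # adjusts the scale factor for next step

print(loss.item())
```

The pattern is exactly the "few lines of code" the title promises: wrap the forward pass in `autocast`, and route `backward`/`step` through the scaler.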

What are Tensor Cores?

As per the NVIDIA site:

NVIDIA Turing and Volta GPUs are powered by Tensor Cores, a revolutionary technology that delivers groundbreaking AI performance. Tensor Cores can accelerate large matrix operations, which are at the heart of AI, and perform mixed-precision matrix…
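As a quick illustration of the kind of operation the quote describes, here is a half-precision matrix multiply in PyTorch. FP16 is what engages Tensor Cores on a CUDA GPU; the BF16 fallback on CPU is only there so the sketch runs on any machine:

```python
import torch

# Tensor Cores multiply half-precision matrices (accumulating in FP32).
# Use FP16 on a CUDA GPU; fall back to BF16 on CPU purely for portability.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.bfloat16

a = torch.randn(256, 512, device=device, dtype=dtype)
b = torch.randn(512, 128, device=device, dtype=dtype)
c = a @ b  # a large matmul: the kind of op Tensor Cores accelerate

print(c.shape, c.dtype)
```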



Published in Towards Data Science


Rahul Agarwal
