Understanding GPU Training And Inference — THG Hosting Blog

Kelly Kirkham
Published in THG Hosting
3 min read · Mar 12, 2020

The idea of artificial intelligence (AI) has long excited tech pros and laymen alike. The broadest form of the concept has been a sci-fi trope for almost as long as the genre has existed. Books, TV shows, and movies have all explored the possibility of artificial beings that can feel, act, and learn like humans.

While such things aren’t possible yet, AI on a more limited scale has become a reality in recent years, allowing applications and programs to perform specific tasks like image classification, video analysis, and facial recognition as well as (or even better than) humans. What’s made this possible is a combination of GPU training and inference in a process known as ‘deep learning’.

Deep learning: GPU training and inference

Deep learning is how neural networks are ‘taught’ to perform their prescribed tasks. The process is complex, but at its simplest it can be split into two main phases: the training phase and the inference phase.

The training phase resembles a human’s school days. It’s when the neural network is taught everything it needs to be able to do its job further down the road. Like a good education, the training phase takes time and a lot of resources. Inference is the name given to a neural network’s capability to apply what it’s learned to its job. Once it’s been trained, the network can be streamlined to perform its task more efficiently. It doesn’t take as much time or as many resources as the training phase.

How GPU training and inference power AI

GPU training involves feeding a training dataset to the network. Individual neurons in each layer assign a weighting to each input, which determines how much that input contributes to the layer’s output. For image recognition, each layer may be tasked with looking for specific elements of an image. Once the image has passed through every layer, the weightings are combined to produce a final output: for instance, does the image feature a weapon or not? An algorithm (the loss function) then tells the network whether its answer was right or wrong.
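To make that concrete, here is a minimal sketch of what a forward pass and that “right or wrong” signal can look like in practice. It uses PyTorch purely as an illustration (the article doesn’t tie itself to any framework), and the network shape, image size, and “weapon” label are invented for the example.

```python
import torch
import torch.nn as nn

# A tiny hypothetical classifier: each Linear layer applies learned weightings
# to its inputs, and the final layer combines them into one score,
# e.g. "weapon or not".
model = nn.Sequential(
    nn.Linear(64 * 64, 128),  # layer 1: weights every input pixel
    nn.ReLU(),
    nn.Linear(128, 32),       # layer 2: weights the features from layer 1
    nn.ReLU(),
    nn.Linear(32, 1),         # output layer: combines everything into one score
)

image = torch.rand(1, 64 * 64)   # a fake 64x64 greyscale image, flattened
label = torch.tensor([[1.0]])    # ground truth for this example: 1.0 = "weapon present"

score = model(image)                          # forward pass through every layer
loss = nn.BCEWithLogitsLoss()(score, label)   # the "right or wrong" feedback signal
print(loss.item())
```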

The network isn’t told why it was right or wrong. That means the only way it can improve at the trained task is through repetition. A network needs to process as large a dataset as possible, over and over, to reach the point where it consistently gives the right answer. The parallel processing power of GPUs is what makes this compute-intensive process practical.
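Putting the repetition and the GPU together, a training loop might look something like the sketch below. Again, this is an illustrative PyTorch example: the dataset is synthetic, and the batch size, learning rate, and epoch count are arbitrary choices for the sake of the sketch.

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(4096, 128), nn.ReLU(), nn.Linear(128, 1)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

# Synthetic stand-in for a large labelled training dataset
images = torch.rand(10_000, 4096)
labels = (torch.rand(10_000, 1) > 0.5).float()
loader = DataLoader(TensorDataset(images, labels), batch_size=256, shuffle=True)

# Repetition: the same data is shown to the network over many epochs,
# with each batch processed in parallel on the GPU.
for epoch in range(10):
    for batch_images, batch_labels in loader:
        batch_images = batch_images.to(device)
        batch_labels = batch_labels.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(batch_images), batch_labels)
        loss.backward()    # the error signal adjusts every weighting
        optimizer.step()
```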

GPU inference

At this point, you have a ‘fully trained’ neural network. Unfortunately, it’s a massive model that consumes a huge amount of computing power, so it can’t yet be put to practical, real-world use. It needs to be slimmed down and streamlined so it can apply its training to smaller batches of data it has never seen and deliver results quickly. That is the process of inferencing.
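Once trained, applying the network to data it has never seen is comparatively cheap. The sketch below shows that idea in PyTorch: gradients are switched off and the model simply produces an answer. The checkpoint file name and the model shape are hypothetical, carried over from the training sketch above.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Rebuild the (hypothetically) trained network and load its saved weights
model = nn.Sequential(nn.Linear(4096, 128), nn.ReLU(), nn.Linear(128, 1)).to(device)
model.load_state_dict(torch.load("trained_weapon_classifier.pt"))  # hypothetical checkpoint

model.eval()              # inference mode: the network only applies what it learned
with torch.no_grad():     # no gradient bookkeeping, so far less memory and compute
    new_image = torch.rand(1, 4096, device=device)   # a small batch it has never seen
    probability = torch.sigmoid(model(new_image))
    print(f"weapon probability: {probability.item():.2f}")
```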

A good analogy for inferencing is the compression of a digital image. Compression reduces the size of an image without noticeably changing how it looks to the human eye; inferencing, in the same way, simplifies and shrinks a neural network without hurting how well it performs its set task. It often involves removing parts of the network that aren’t activated after the training phase. In other cases, it might mean fusing several layers of the network into a single computational step. Either way, the result is a fast model or application that keeps an exceptional level of accuracy.
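Both techniques mentioned here, pruning away weights that contribute little and fusing adjacent layers, have off-the-shelf counterparts in common frameworks. The sketch below shows one way they might look in PyTorch; the exact module paths vary between versions, and the small network itself is invented purely for illustration.

```python
import torch
import torch.nn as nn
from torch.nn.utils import prune
from torch.ao.quantization import fuse_modules

# Hypothetical small convolutional network, just for illustration
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3)
        self.bn = nn.BatchNorm2d(16)
        self.relu = nn.ReLU()
        self.fc = nn.Linear(16 * 62 * 62, 1)

    def forward(self, x):
        x = self.relu(self.bn(self.conv(x)))
        return self.fc(x.flatten(1))

model = TinyNet()

# "Removing parts that aren't activated": zero out the 30% of weights
# with the smallest magnitude in the fully connected layer.
prune.l1_unstructured(model.fc, name="weight", amount=0.3)
prune.remove(model.fc, "weight")   # make the pruning permanent

# "Fusing layers into a single computational step":
# merge conv + batch-norm + ReLU into one fused operation.
model.eval()
fused = fuse_modules(model, [["conv", "bn", "relu"]])
```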

GPU training and inference combine to produce complex neural networks that deliver extraordinary AI processes. They may not pass the Turing Test, but they were science fiction rather than fact only a few years ago…

Originally published at https://thghosting.com on March 12, 2020.
