Crux of the Graphics Processing Unit (GPU)

Shachi Kaul
Published in Analytics Vidhya
3 min read · Dec 16, 2019


Howdy readers!

Image from https://www.hpcwire.com/2018/03/27/nvidia-riding-high-as-gpu-workloads-and-capabilities-soar/

Presenting this blog about a smart microprocessor, the Graphics Processing Unit (GPU). It is fair to say that AI and ML have been among the hottest arenas in recent years, and this rapid advancement has been accompanied by many innovative programming technologies. When Deep Learning was introduced, multi-core CPUs often weren't enough, and training sometimes took days to run. Hence another microprocessor, the GPU, which was earlier used mainly for gaming, also began to be used for Deep Learning models.

What is GPU Computing?

  • Graphics Processing Unit (GPU) is a microprocessor with hundreds of cores.
  • The GPU acts as a co-processor to the CPU, offloading heavy computations and thereby speeding up overall performance and delivering faster results.
  • It is a parallel-architecture processor that follows a parallel programming model, which makes it possible to run Deep Learning models efficiently and effectively.
  • GPU manufacturers include NVIDIA, AMD, ASUS, Intel, etc. The topmost is NVIDIA, which manufactures GPUs for heavy computation and introduced its core technology called CUDA, along with the CUDA cores that power it.
  • CUDA and NVIDIA GPUs dominate the market.
  • NVIDIA GPU lines: GeForce, Quadro, Tesla.
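The many-core, parallel model described above can be illustrated on an ordinary CPU with NumPy, whose vectorized operations apply one instruction to many data elements at once, much as a GPU applies one kernel across thousands of cores. This is only an analogy sketch, not actual GPU code:

```python
import numpy as np

# A "kernel": one scalar operation applied element-wise.
def scale_and_shift(x, a=2.0, b=1.0):
    return a * x + b

data = np.arange(1_000_000, dtype=np.float64)

# Serial style: a Python loop applies the kernel one element at a time.
serial = np.array([scale_and_shift(v) for v in data[:5]])

# Data-parallel style: one vectorized call processes the whole array,
# the same way a GPU launches one kernel over thousands of threads.
parallel = scale_and_shift(data)

print(parallel[:5])  # matches the serial result on the first 5 elements
```

The vectorized call gives the same answer as the loop; the point is that the same operation is expressed once and applied to all elements, which is exactly the shape of work a GPU accelerates.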

What is CUDA?

  • NVIDIA's GPUs perform parallel computations using their CUDA cores. CUDA is a parallel computing platform that lets intensive applications run across these hundreds of cores.
  • Those hundreds of cores make the GPU superior to the CPU in terms of bandwidth and computing time for parallel workloads.
  • Many Deep Learning frameworks such as TensorFlow, Torch, CNTK, H2O.ai, Keras, Theano and PyTorch rely on CUDA for GPU support.
  • GPUs are heavily utilized for Deep Learning models and heavy Machine Learning models that require intensive computation. To know more about how to use a GPU with Keras and TensorFlow, visit GPU On Keras and Tensorflow.
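As a concrete illustration, frameworks built on CUDA usually expose a one-line check for GPU availability. A minimal sketch of a hypothetical helper, using PyTorch's real `torch.cuda.is_available()` API and falling back to the CPU when PyTorch or a CUDA device is absent:

```python
def pick_device():
    """Return "cuda" when a CUDA-capable GPU is usable, else "cpu"."""
    try:
        import torch  # optional dependency; absent on CPU-only machines
    except ImportError:
        return "cpu"
    return "cuda" if torch.cuda.is_available() else "cpu"

print(pick_device())
```

The helper name `pick_device` is an assumption for this sketch; the underlying availability check is what TensorFlow and PyTorch each provide in their own API.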

So, does this mean we should stop using the CPU and go for the GPU? Well, a warrior is still a warrior. Both the GPU and the CPU are necessary for specific requirements. Hence, choose your side wisely as per your need.

Is the GPU superior to the CPU?

Still confused about whose side to choose? Again, it depends on your workload. Both the CPU and the GPU are useful in their own way.

From a Deep Learning perspective, it is best to use a GPU for heavy computations, where a large dataset may take much longer to run on a CPU. But for smaller datasets the CPU is the better choice, since small data cannot take full advantage of the GPU. Also, the GPU's higher latency is offset by its higher bandwidth (it can carry huge amounts of data) and its hundreds of cores computing in parallel. This is what makes the GPU a lot faster than the CPU on such workloads.

Though the GPU seems faster than the CPU, a huge amount of time can be consumed transferring large volumes of data from CPU to GPU, and this cost depends on the architecture.
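This trade-off between transfer overhead and parallel speedup can be sketched with a toy cost model. All the numbers below are illustrative assumptions, not measured figures for any real hardware:

```python
def cpu_time(n, per_item=1e-6, cores=8):
    # CPU: data is already in host memory, so only compute time counts.
    return n * per_item / cores

def gpu_time(n, per_item=1e-6, cores=2048,
             launch_latency=1e-3, copy_per_item=5e-9):
    # GPU: pay a fixed kernel-launch/latency cost plus a per-item
    # host-to-device copy before the (much more parallel) compute.
    return launch_latency + n * copy_per_item + n * per_item / cores

small, large = 1_000, 100_000_000
print(cpu_time(small) < gpu_time(small))   # small data: overhead dominates, CPU wins
print(gpu_time(large) < cpu_time(large))   # large data: parallelism dominates, GPU wins
```

With these assumed constants, the fixed launch and copy costs swamp the tiny job, while the 2048-way parallelism pays off handsomely on the big one, which is exactly the rule of thumb stated above.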

NVIDIA itself provides several command-line utilities to monitor and keep track of your GPUs.
A few are listed below:

  1. nvidia-smi: Monitors your GPU. Understand this utility in detail in the explained output of nvidia-smi.

  2. nvidia-smi -a: Similar to the above, except the information is displayed in full detail.

  3. watch -n 1 nvidia-smi: Monitors your GPU every second, refreshing the nvidia-smi output each second.

  4. watch -n 1 free -m: Tracks the system's memory usage (in megabytes) every second while the model is running; note that free reports host RAM, not GPU memory.
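These commands can also be scripted. A minimal sketch that shells out to nvidia-smi's real `--query-gpu` interface, returning None when the tool is not installed (e.g. on a machine without an NVIDIA GPU); the helper name is an assumption for this sketch:

```python
import shutil
import subprocess

def gpu_memory_used():
    """Return a list of per-GPU used-memory strings, or None if
    nvidia-smi is not available on this machine."""
    if shutil.which("nvidia-smi") is None:
        return None
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip().splitlines()

print(gpu_memory_used())
```

Wrapping the CLI this way lets a training script log GPU memory alongside its own metrics instead of eyeballing a separate watch window.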

This article is worth reading for more detailed information on these utilities:

https://www.microway.com/hpc-tech-tips/nvidia-smi_control-your-gpus/

Happy reading!

You can get in touch via LinkedIn.




Data Scientist by profession and a keen learner. Fascinated by photography, and scribbles other non-tech stuff too @shachi2flyyourthoughts.wordpress.com