Intel® Distribution of OpenVINO™ toolkit: Optimized Deep Learning

Tamal Acharya
5 min read · Mar 27, 2022


OpenVINO toolkit: where to begin? Let's see. Ah, yes. The day I went to Intel's office in Bengaluru for a seminar on the OpenVINO toolkit and had the best lunch I'd had in a long time. That was the day I interacted with many fellow AI and ML enthusiasts, took photos, participated in the quizzes, and had loads of fun. Made some new friends. Good times!

Since that day, a lot has changed: the toolkit has received major upgrades that make it much easier for developers, AI enthusiasts, and practitioners to use.

What is OpenVINO?

OpenVINO (Open Visual Inference and Neural Network Optimization) is a toolkit that lets you run state-of-the-art deep learning models across various Intel hardware devices, such as Intel CPUs (Xeon, Core, and Atom), Intel integrated GPUs (HD Graphics and Iris), VPUs (Movidius Neural Compute Stick 2), and Intel FPGAs (Vision Accelerator and Programmable Acceleration Card), with just a few lines of code.
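To make the "few lines of code" claim concrete, here is a minimal sketch of inference with the 2022.1 Python API; the model path and input shape are placeholders for your own model in OpenVINO IR format.

```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")         # placeholder: IR file (.xml + .bin)
compiled = core.compile_model(model, "CPU")  # device name: "CPU", "GPU", "MYRIAD", ...

# Dummy input; replace with real preprocessed data matching your model's shape.
input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([input_tensor])[compiled.output(0)]
print(result.shape)
```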

From Intel’s OpenVINO website:

“The latest version (2022.1) of the Intel® Distribution of OpenVINO™ toolkit makes it easier for developers everywhere to start creating. This is the biggest upgrade since the original launch of the toolkit and offers more deep-learning models, device portability, and higher inferencing performance with fewer code changes. Get started quickly with pretrained models from the Open Model Zoo that are optimized for inference. And since the toolkit is compatible with the most popular frameworks, there is minimal disruption and maximum performance.

The OpenVINO toolkit makes it simple to adopt and maintain your code. Open Model Zoo provides optimized, pretrained models and Model Optimizer API parameters make it easier to convert your model and prepare it for inferencing. The runtime (inference engine) allows you to tune for performance by compiling the optimized network and managing inference operations on specific devices. It also auto-optimizes through device discovery, load balancing, and inferencing parallelism across CPU, GPU, and more.”

(Source: https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html and https://www.intel.com/content/www/us/en/developer/videos/part-1-intel-distribution-of-openvino-toolkit-overview.html )
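As a rough illustration of that auto-optimization, the 2022.1 runtime exposes an "AUTO" device and performance hints. A hedged sketch (the model path is a placeholder):

```python
from openvino.runtime import Core

core = Core()
print(core.available_devices)         # device discovery, e.g. ['CPU', 'GPU']

model = core.read_model("model.xml")  # placeholder IR path
# Let the runtime choose a device and tune for throughput.
compiled = core.compile_model(model, "AUTO", {"PERFORMANCE_HINT": "THROUGHPUT"})
```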

OpenVINO has a zoo of models (not the animal zoo!) that you can use to build state-of-the-art (SOTA) applications such as computer vision, pose estimation, CT-scan live inference, action recognition, and other cool stuff! Check out their GitHub repo for examples and notebooks: https://github.com/openvinotoolkit/openvino_notebooks

OpenVINO has a lot of pre-trained models in the model zoo for several purposes, such as:

  • Object Detection
  • Object Recognition
  • Segmentation
  • OCR
  • Pose Estimation

Frameworks and Workflows:

Frameworks such as PyTorch, TensorFlow, Caffe, MXNet, ONNX, and Kaldi are supported by OpenVINO. The Intel distribution of OpenVINO also bundles the well-known computer vision library OpenCV. The OpenVINO toolkit includes the Deep Learning Deployment Toolkit (DLDT) and the Open Model Zoo as its main components.
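For instance, here is a hedged sketch of bringing a PyTorch model into OpenVINO by exporting it to ONNX, which the runtime can read directly; the model choice and file paths are illustrative.

```python
import torch
import torchvision
from openvino.runtime import Core

# Export a PyTorch model to ONNX (resnet18 is just an example).
model = torchvision.models.resnet18(pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "resnet18.onnx")

# OpenVINO can load the ONNX file without a separate conversion step.
core = Core()
ov_model = core.read_model("resnet18.onnx")
compiled = core.compile_model(ov_model, "CPU")
```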

(Figure: OpenVINO frameworks and supported hardware)

(Source: https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html )

(Deployment Workflow)
(Full Workflow Steps)

(Source: https://medium.com/@rojinebrahimi/introduction-to-openvino-toolkit-and-oneapi-155c07739f7e )

(Inference using a heterogeneous plugin. Source: Introduction to OpenVINO.)
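Building on the figure above, a hedged sketch of the heterogeneous plugin: the HETERO device string assigns supported layers to the first device and falls back to the next (paths and device names are placeholders for your setup).

```python
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # placeholder IR path
# Run layers on the GPU where supported, falling back to the CPU otherwise.
compiled = core.compile_model(model, "HETERO:GPU,CPU")
```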

The OpenVINO toolkit supports BERT quantization, so you can run BERT models for your NLP tasks. Cool, right?

Several BERT variants are included in the OpenVINO™ toolkit's Open Model Zoo.

You can find an example conversation with the question-answering demo, which uses the Wikipedia entry for Bert, the Muppet character from Sesame Street.
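A hedged sketch of what running such a model looks like; the model path, input layout, and output interpretation below are assumptions for illustration, not the demo's exact code.

```python
import numpy as np
from openvino.runtime import Core

core = Core()
# "bert-qa.xml" is a placeholder for a BERT question-answering IR,
# e.g. one of the bert-*-squad models from the Open Model Zoo.
compiled = core.compile_model(core.read_model("bert-qa.xml"), "CPU")

# BERT IRs typically take token IDs, attention mask, and segment IDs
# from a WordPiece tokenizer; dummy zero arrays stand in for them here.
seq_len = 384
dummy = np.zeros((1, seq_len), dtype=np.int64)
outputs = compiled([dummy, dummy, dummy])  # assumes three inputs in this order
# For QA models, the outputs are usually start and end logits per token.
```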

The OpenVINO toolkit can also be used with the Neural Compute Stick 2 (NCS2). You can accelerate your OpenVINO™ code by running your model on this device.
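Targeting the NCS2 is just a device-name change; a minimal sketch, assuming the stick is plugged in and the model path is a placeholder:

```python
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # placeholder IR path
# "MYRIAD" is the device name for the Neural Compute Stick 2.
compiled = core.compile_model(model, "MYRIAD")
```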

What is oneAPI?

oneAPI is an Intel-led programming model that simplifies programming across CPUs, GPUs, and other accelerators; it uses a language called DPC++ (Data Parallel C++, built on C++ and SYCL) for parallelism. It makes code reusable across the CPU and an accelerator (e.g., a GPU) while using a single source language.

(Figure: oneAPI concept)

OpenVINO is part of this oneAPI family, and the family really makes it cool to have everything under one roof (pretty much the same concept as a mall!). To reach high performance, a mixture of SVMS (scalar, vector, matrix, and spatial) architectures has to be deployed across CPUs, GPUs, AI accelerators, and FPGAs, which gets complicated; oneAPI reduces the complexity of maintaining separate codebases with different languages, tools, and workflows.

Conclusions:

OpenVINO accelerates AI workloads, including computer vision, audio and speech, language, and recommendation systems. It speeds up time to market via a library of functions and pre-optimized kernels, and it includes optimized calls for OpenCV, OpenCL™ kernels, and other industry tools and libraries.

With loads of pretrained models, OpenVINO accelerates the development and deployment of models and also helps with edge computing and building intelligence at the edge.

So what are you waiting for? Install the latest OpenVINO toolkit, start developing and deploying models, and make inference easier, more accessible, and more understandable.

Additional References:

Intel® Distribution of OpenVINO™ toolkit Resources

  1. Intel® Distribution of OpenVINO™ Toolkit Overview

Intel® Distribution of OpenVINO™ toolkit Training Modules

  1. Intel® Distribution of OpenVINO™ Toolkit

Article References

  1. Introduction To OpenVINO Toolkit And OneAPI
  2. Intel OpenVINO with OpenCV
  3. Introducing a Training Add-on for OpenVINO™ toolkit
  4. What’s New in the OpenVINO™ Model Server
  5. Introduction to Intel OpenVINO Toolkit
  6. Natural Language Processing using BERT and OpenVINO™ toolkit
  7. Convert TensorFlow model and run it with OpenVINO™ Toolkit
  8. OpenVINO™ Toolkit with Neural Compute Stick 2 on Linux
  9. Convert a PyTorch Model to ONNX and OpenVINO™ IR


Tamal Acharya

AI professional and practitioner, AI/ML enthusiast. Part-time researcher in AI, AGI, and quantum computing, especially QML and QNN.