How to convert TensorFlow model and run it with OpenVINO™ Toolkit

A very simple guide for every TensorFlow Developer wanting to start the OpenVINO journey

Adrian Boguszewski
OpenVINO-toolkit
Feb 22, 2022


Author: Adrian Boguszewski — AI Software Evangelist, Intel

Note: This article was created with OpenVINO 2022.1. If you want to know how to use the newer OpenVINO API, please check this notebook.

To run a network with the OpenVINO™ Toolkit, you first need to convert it to the Intermediate Representation (IR). For that you need Model Optimizer, a command-line tool from the Developer Package of the OpenVINO™ Toolkit. The easiest way to get it is from PyPI:

TensorFlow models are supported directly by Model Optimizer, so the next step is to run the following command in the terminal:

This converts the v3-small_224_1.0_float.pb model for a single 224x224 RGB image. Of course, you can specify more parameters, such as pre-processing steps or the desired model precision (FP32 or FP16):

With these options, your model will normalize all pixels to the [-1,1] value range, and inference will be performed in FP16. After running the command, you should see output like the one below, listing all explicit and implicit parameters: the path to the model, input shape, chosen precision, channel reversal, mean and scale values, conversion parameters, and more:

SUCCESS at the end indicates everything was converted successfully. You should get an IR, which consists of two files: .xml and .bin. Now you're ready to load this network into the Inference Engine and run inference. The code below assumes your model is an ImageNet classifier.

And it works! You get a class for the image (like the one below: flat-coated retriever). You can try it yourself with this demo.
