Intel OpenVINO: Model Optimizer

Surya Prabhakaran · Published in Analytics Vidhya · Jul 23, 2020

In my previous article, I discussed the basics and workflow of the OpenVINO toolkit. In this article, we will be exploring:-

  • What is Model Optimizer?
  • Configuring the Model Optimizer
  • Converting ONNX model to Intermediate Representation
  • Converting Caffe model to Intermediate Representation
  • Converting TensorFlow model to Intermediate Representation

What is Model Optimizer?

Model Optimizer is one of the two main components of the OpenVINO toolkit. Its main purpose is to convert a model into an Intermediate Representation (IR). The Intermediate Representation (IR) of a model consists of a .xml file and a .bin file. You need both files to run inference.

  • .xml -> Contains the model architecture and other important metadata.
  • .bin -> Contains the weights and biases of the model in a binary format.

Intermediate Representations (IRs) are the OpenVINO Toolkit’s standard structure and naming for neural network architectures. A “Conv2D” layer in TensorFlow, “Convolution” layer in Caffe or “Conv” layer in ONNX are all converted into a “Convolution” layer in an IR. You can find more in-depth data on each of the Intermediate Representation layers themselves here.
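To make this concrete, here is a minimal sketch (using the OpenVINO 2020.x Inference Engine Python API; the file names "model.xml" and "model.bin" are placeholders) of how the two IR files are consumed together at inference time:

from openvino.inference_engine import IECore

# Both IR files are needed: the .xml describes the topology, the .bin holds the weights
ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")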

Frameworks supported by OpenVINO:-

  • TensorFlow
  • Caffe
  • MXNet
  • ONNX (PyTorch and Apple ML)
  • Kaldi

Configuring the Model Optimizer

To use the Model Optimizer, you first need to configure it. Configuring the Model Optimizer is pretty straightforward and can be done in the Command Prompt/Terminal.

To configure the Model Optimizer, follow these steps (type the commands in Command Prompt/Terminal):-

  1. Go to the OpenVINO directory:-

For Linux:- cd /opt/intel/openvino

For Windows:- cd "C:\Program Files (x86)\IntelSWTools\openvino"

I have used the default installation directory in the above commands; if your installation directory is different, then navigate to the appropriate directory.

2. Go to the install_prerequisites directory:-

cd deployment_tools/model_optimizer/install_prerequisites

3. Run the install_prerequisites file:-

For Windows:- install_prerequisites.bat

For Linux:- install_prerequisites.sh

If you want to configure the Model Optimizer for a particular framework only, then run the corresponding command instead:-

TensorFlow:-

Windows:- install_prerequisites_tf.bat

Linux:- install_prerequisites_tf.sh

Caffe:-

Windows:- install_prerequisites_caffe.bat

Linux:- install_prerequisites_caffe.sh

MXNet:-

Windows:- install_prerequisites_mxnet.bat

Linux:- install_prerequisites_mxnet.sh

ONNX:-

Windows:- install_prerequisites_onnx.bat

Linux:- install_prerequisites_onnx.sh

Kaldi:-

Windows:- install_prerequisites_kaldi.bat

Linux:- install_prerequisites_kaldi.sh

Converting to Intermediate Representation

After successfully configuring the Model Optimizer, we are now ready to use it. In this article, I will show you how to convert ONNX, Caffe and TensorFlow models to an Intermediate Representation. The conversion of ONNX and Caffe models is pretty straightforward, but the conversion of a TensorFlow model is a little bit tricky.

Converting ONNX model

OpenVINO does not directly support PyTorch; rather, a PyTorch model is first converted to the ONNX format, and then the ONNX model is converted to an Intermediate Representation by the Model Optimizer.
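As a rough sketch of that first step (assuming a pretrained torchvision GoogLeNet, i.e. Inception V1, and its 224x224 input size; substitute your own model and input shape), the PyTorch-to-ONNX export looks like this:

import torch
import torchvision

# Hypothetical example model; substitute your own trained PyTorch model
model = torchvision.models.googlenet(pretrained=True)
model.eval()

# A dummy input with the shape the network expects (batch, channels, height, width)
dummy_input = torch.randn(1, 3, 224, 224)

# Export to ONNX; the resulting model.onnx can then be fed to the Model Optimizer
torch.onnx.export(model, dummy_input, "model.onnx")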

I will be downloading and converting “Inception_V1”. You can find other models from this link.

After downloading “Inception_V1”, unzip the file and extract it to your desired location. Inside the “inception_v1” directory, you will find the “model.onnx” file. We need to feed that file to the Model Optimizer.

Follow the steps:-

  1. Open Command Prompt/Terminal and change your current working directory to the location where you have your “model.onnx” file
  2. Run the following command:-
python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model model.onnx
  • --input_model → Takes the model we want to convert.

The above command was run on Linux with the default installation directory; if your installation directory is different, then use the appropriate path to “mo.py”:

python <installation_directory>/openvino/deployment_tools/model_optimizer/mo.py --input_model model.onnx
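If you want to control where the IR files are written and what they are named, the Model Optimizer also accepts optional flags such as --output_dir and --model_name (the directory and model name below are just illustrative placeholders):

python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model model.onnx --output_dir ./ir --model_name inception_v1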

After successfully running the command, you will receive the location of the “.xml” and “.bin” files.

Converting Caffe Model

The process of converting a Caffe model is pretty easy and analogous to that of an ONNX model. The difference is that, for Caffe models, the Model Optimizer takes some additional arguments specific to Caffe. You can find more details in the documentation.

I will be downloading and converting the SqueezeNet V1.1 model.

Follow the steps:-

  1. Open Command Prompt/Terminal and change your current working directory to the location where you have your “squeezenet_v1.1.caffemodel” file
  2. Run the following command:-
python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model squeezenet_v1.1.caffemodel --input_proto deploy.prototxt
  • --input_model → Takes the model we want to convert.
  • --input_proto → Takes the file (deploy.prototxt) which contains the topology structure and layer attributes.

If the file names of the “.caffemodel” and “.prototxt” files are the same, then the “--input_proto” argument is not required.
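Beyond --input_proto, the Model Optimizer also accepts general arguments that are often handy when converting Caffe models, such as --input_shape and --mean_values. The values below are purely illustrative placeholders (the SqueezeNet V1.1 conversion does not strictly require them), shown only to illustrate the syntax:

python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model squeezenet_v1.1.caffemodel --input_proto deploy.prototxt --input_shape [1,3,227,227] --mean_values [104,117,123]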

After successfully running the command, you will receive the location of the “.xml” and “.bin” files.

Converting a TensorFlow Model

The TensorFlow models in the open model zoo are available in frozen and unfrozen formats. Some models in TensorFlow may already be frozen for you. You can either freeze your model yourself or use the separate instructions in the documentation to convert a non-frozen model.

You can use the following code to freeze an unfrozen model.

import tensorflow as tf
from tensorflow.python.framework import graph_io

frozen = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ["name_of_the_output_node"])
graph_io.write_graph(frozen, './', 'inference_graph.pb', as_text=False)
  • sess → is the instance of the TensorFlow Session object where the network topology is defined.
  • [“name_of_the_output_node”] → is the list of output node names in the graph; the “frozen” graph will include only those nodes from the original “sess.graph_def” that are directly or indirectly used to compute the given output nodes (see the snippet after this list for one way to find these names).
  • ./ → is the directory where the inference graph file should be generated.
  • inference_graph.pb → is the name of the generated inference graph file.
  • as_text → specifies whether the generated file should be in human-readable text format or binary.
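If you are not sure which name to pass as the output node, one quick way to inspect the graph (a minimal sketch, assuming the same “sess” as above) is to print all of the node names and pick out the final layer(s) of interest:

# List every node name in the graph so you can identify the output node(s)
for node in sess.graph_def.node:
    print(node.name)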

I will be downloading and converting the Faster R-CNN Inception V2 COCO model. You can find other models from this link.

After downloading “Faster R-CNN Inception V2 COCO”, unzip the file and extract it to your desired location. Inside the “faster_rcnn_inception_v2_coco_2018_01_28” directory, you will find the “frozen_inference_graph.pb” file. We need to feed that file to the Model Optimizer.

Follow the steps:-

  1. Open Command Prompt/Terminal and change your current working directory to the location where you have your “frozen_inference_graph.pb” file
  2. Run the following command:-
python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config pipeline.config --reverse_input_channels --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json

The above command was run on Linux with the default installation directory; if your installation directory is different, then use the appropriate path to “mo.py”.

  • --input_model → Takes the model we want to convert.
  • --tensorflow_object_detection_api_pipeline_config → path to the pipeline configuration file (pipeline.config) that was used to generate the model with the TensorFlow Object Detection API.
  • --reverse_input_channels → TF model zoo models are trained on RGB (Red Green Blue) images, while OpenCV usually loads images as BGR (Blue Green Red), so this flag reverses the input channel order.
  • --tensorflow_use_custom_operations_config → use the configuration file with custom operation descriptions (here, the one for Faster R-CNN topologies).

After successfully running the command, you will receive the location of the “.xml” and “.bin” files.

Thank you so much for reading this article. I hope by now you have a proper understanding of the Model Optimizer.
