🤗 OpenVINO Quickstart 🤗

wrannaman · Published in SugarKubes · Mar 24, 2019
SugarKubes object detection on OpenVINO! CPU inference in ~50 ms


What is OpenVINO?

OpenVINO is to Intel what CUDA is to Nvidia, namely hardware acceleration. Capisce?

OpenVINO is Intel’s CPU-accelerated deep learning inference toolkit. Essentially, you get to use the integrated GPUs inside certain Intel CPUs (as well as the Movidius chip, the Movidius USB stick, or actual Intel GPUs).

Learn more about OpenVINO here. Or don’t, and just pull the repo, you script kiddie; come on, you know you want to…

TL;DR: OpenVINO gets you GPU-like inference speeds on certain Intel CPUs; it’s pretty sweet. Just make sure you’re on a supported CPU, otherwise the acceleration won’t work.
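If you want a quick sanity check before building anything, you can peek at your CPU’s feature flags. This is a rough heuristic I’m sketching here, not an official compatibility test: the exact instruction sets each OpenVINO release uses vary, and the flag names below are as they appear in Linux’s /proc/cpuinfo.

```python
# Heuristic check for CPU instruction sets OpenVINO's CPU plugin can exploit.
# Linux only; flag names are as reported in /proc/cpuinfo.

def cpu_flags(cpuinfo_text):
    """Extract the feature-flag set from the contents of /proc/cpuinfo."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def acceleration_hints(flags):
    """Report which of the relevant instruction sets are present."""
    interesting = ["sse4_2", "avx2", "avx512f"]
    return {f: (f in flags) for f in interesting}

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as fh:
            print(acceleration_hints(cpu_flags(fh.read())))
    except FileNotFoundError:
        print("No /proc/cpuinfo on this platform")
```

If none of those flags show up, don’t expect much of a speedup.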

Openvino Base Image

Building OpenVINO takes a while, so we built a simple base image for you to pull and use immediately, or to build upon and extend. Also, take a look at the repo for a surprise gift! 🤸‍♂️🤸‍♂️

The base image is up on Docker Hub, so just:

docker pull sugarkubes/openvino:latest

We use this base image as the foundation for other Dockerfiles; see Dockerfile.quick for an example.

Let’s walk through a brief example of how to use this base image.

You need to have your models in the OpenVINO format, which is a .bin (weights) and .xml (topology) pair. It’s kind of a pain to get them into this format using Intel’s model converter, but the documentation to do that is here.
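For the curious, the conversion for a TensorFlow detection model boils down to one Model Optimizer invocation. The sketch below just assembles that command; every path in it is illustrative (the mo_tf.py location matches a typical 2019-era OpenVINO install, but yours may differ), and SSD-style models usually need extra flags, so check Intel’s docs for your particular model.

```python
# Sketch: assemble an OpenVINO Model Optimizer (mo_tf.py) invocation for a
# TensorFlow object-detection model. All paths are illustrative placeholders;
# consult the Model Optimizer docs for the exact flags your model needs.
import subprocess

DEFAULT_MO = "/opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py"

def mo_tf_command(frozen_graph, pipeline_config, output_dir, mo_script=DEFAULT_MO):
    """Build the argv list; running it produces the .bin/.xml pair in output_dir."""
    return [
        "python3", mo_script,
        "--input_model", frozen_graph,
        "--tensorflow_object_detection_api_pipeline_config", pipeline_config,
        "--output_dir", output_dir,
    ]

def convert(frozen_graph, pipeline_config, output_dir):
    """Actually run the conversion (requires an OpenVINO install)."""
    subprocess.run(mo_tf_command(frozen_graph, pipeline_config, output_dir),
                   check=True)
```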

Pull the base image and add a few packages.

FROM sugarkubes/openvino:latest as RELEASE
RUN apt-get update && apt-get install -y \
wget \
unzip \
libglib2.0-0 \
libsm6 \
libxrender1 \
libxext6 \
vim

Packaging the model

Fortunately, I already converted a model for you guys and gals, so feel free to skip this section and just grab the model (code is below).

Once you have a converted model, zip the model into a folder. Make sure the following structure is in place once unzipped.

<model-name>/<version-number>/<model-name>.(bin, xml)

So, for example, our *ssd_mobilenet_v2_oid_v4_2018_12_12* model has one folder inside named *1*. This *1* is the version number. Inside the *1* folder are two files:

ssd_mobilenet_v2_oid_v4_2018_12_12/1/ssd_mobilenet_v2_oid_v4_2018_12_12.bin
ssd_mobilenet_v2_oid_v4_2018_12_12/1/ssd_mobilenet_v2_oid_v4_2018_12_12.xml
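If you’re packaging your own converted model, the layout above can be produced with a few lines of stdlib Python. This is a minimal sketch; the model name and version here are just placeholders for whatever you converted.

```python
# Arrange a converted .bin/.xml pair into the <model-name>/<version>/<model-name>.{bin,xml}
# layout the base image expects, then zip it up for distribution.
import os
import shutil
import zipfile

def package_model(bin_path, xml_path, model_name, version="1", out_dir="."):
    """Copy the pair into the versioned layout and return the path to the zip."""
    model_dir = os.path.join(out_dir, model_name, version)
    os.makedirs(model_dir, exist_ok=True)
    shutil.copy(bin_path, os.path.join(model_dir, model_name + ".bin"))
    shutil.copy(xml_path, os.path.join(model_dir, model_name + ".xml"))

    zip_path = os.path.join(out_dir, model_name + ".zip")
    with zipfile.ZipFile(zip_path, "w") as zf:
        # Store entries relative to out_dir so unzipping recreates
        # <model-name>/<version>/... at the extraction root.
        for root, _dirs, files in os.walk(os.path.join(out_dir, model_name)):
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, os.path.relpath(full, out_dir))
    return zip_path
```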

Since all this is done for you, just grab the models. The OpenVINO base image expects them under */opt/ml/ssd_mobilenet_v2_oid_v4_2018_12_12/1/ssd_mobilenet_v2_oid_v4_2018_12_12.bin*.

Pull the model

RUN wget -P /opt/ml https://s3.us-west-1.wasabisys.com/public.sugarkubes/repos/sugar-cv/intel-object-detection/ssd_mobilenet_v2_oid_v4_2018_12_12.zip
RUN cd /opt/ml && unzip ssd_mobilenet_v2_oid_v4_2018_12_12.zip

Adjust configs

Now go into the model_configuration_file.json included in this repo. For new models, make sure you change the name; here it’s already filled in for this SSD model.

{
  "model_config_list": [
    {
      "config": {
        "name": "ssd_mobilenet_v2_oid_v4_2018_12_12",
        "base_path": "/opt/ml/ssd_mobilenet_v2_oid_v4_2018_12_12",
        "batch_size": "auto",
        "model_version_policy": {"all": {}}
      }
    }
  ]
}

OpenVINO Model Server can load several models at the same time, as well as several versions of the same model. Just add another object with the same structure to the array to serve multiple models.

{
  "model_config_list": [
    {
      "config": {
        "name": "model1",
        "base_path": "/opt/ml/model1",
        "batch_size": "auto",
        "model_version_policy": {"all": {}}
      }
    },
    {
      "config": {
        "name": "model2",
        "base_path": "/opt/ml/model2",
        "batch_size": "auto",
        "model_version_policy": {"all": {}}
      }
    }
  ]
}

Add a simple API

Finally, copy all the code from the repo into the Docker container, including our configs and api.py, to run the Python server.

COPY . /var/sugar/
RUN . .venv/bin/activate && \
pip3 install -r /var/sugar/requirements.txt
# In this directory, modify the model_configuration_file.json to refer to your models. See README
RUN mv /var/sugar/model_configuration_file.json /opt/ml/config.json
# Start script, if that's how you want to do things
RUN chmod +x /var/sugar/start.sh
EXPOSE 9090
CMD ["/var/sugar/start.sh"]

Now you should be able to run the image!

Wrapping up

Build it!

docker build \
-f Dockerfile.quick \
-t registry.sugarkubes.io/sugar-cv/intel-object-detection:latest .

Run it!

docker run --rm -dti \
  -p 9090:9090 \
  registry.sugarkubes.io/sugar-cv/intel-object-detection:latest

Call It

curl -X POST \
  http://0.0.0.0:9090/predict \
  -H 'Content-Type: application/json' \
  -d '{ "url": "https://s3.us-west-1.wasabisys.com/public.sugarkubes/repos/sugar-cv/object-detection/friends.jpg" }'
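If you’d rather call it from Python, the stdlib equivalent of that curl command looks something like this. It assumes the container is up locally on port 9090 and that the /predict route takes a JSON body with a "url" key, exactly as in the curl example.

```python
# Stdlib equivalent of the curl call: POST a JSON body with a "url" key
# to the /predict endpoint and decode the JSON response.
import json
import urllib.request

ENDPOINT = "http://0.0.0.0:9090/predict"

def build_request(image_url, endpoint=ENDPOINT):
    """Build the POST request the /predict route expects."""
    body = json.dumps({"url": image_url}).encode("utf-8")
    return urllib.request.Request(
        endpoint, data=body, headers={"Content-Type": "application/json"})

def predict(image_url):
    """Send the request and return the parsed JSON response (server must be running)."""
    with urllib.request.urlopen(build_request(image_url)) as resp:
        return json.loads(resp.read())
```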

Free SugarKube!

SugarKubes Object Detection

Inside this repo is all the code and a Dockerfile needed to run the intel-object-detection SugarKube.

It is an SSD model trained on Open Images v4 and can detect 601 classes with ~50 ms inference times. As with all SugarKubes, it has a simple, well-documented API and is ready to use!

List of 600 objects

Visit http://0.0.0.0:9090/tester/index.html to test the object detection API.

# Example output
# (x1, y1) is the top left of the bounding box
# (x2, y2) is the lower right of the bounding box
{
  "objects": [
    ["Woman",      "0.65",  673, 188,  832, 730],  // [label, confidence, x1, y1, x2, y2]
    ["Woman",      "0.49", 1012, 128, 1192, 800],
    ["Woman",      "0.41",  512, 173,  671, 728],
    ["Man",        "0.63",  356, 155,  526, 721],
    ["Man",        "0.62",  204, 171,  376, 716],
    ["Man",        "0.62",  831, 100, 1025, 737],
    ["Man",        "0.54",   40, 189,  226, 697],
    ["Human face", "0.44", 1064, 158, 1120, 229],
    ["Jeans",      "0.46", 1035, 399, 1176, 746]
    // ... truncated for your sanity, but there are more ...
  ],
  "image_size": [1200, 800]
}
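Each entry in "objects" is a flat [label, confidence, x1, y1, x2, y2] array, with the confidence as a string. A small helper like the one below (my own convenience sketch, not part of the repo) turns that into something friendlier to work with:

```python
# Turn the raw [label, confidence, x1, y1, x2, y2] arrays from the /predict
# response into dicts, keeping only detections above a confidence threshold.
def parse_detections(response, min_confidence=0.5):
    out = []
    for label, conf, x1, y1, x2, y2 in response["objects"]:
        conf = float(conf)  # confidence arrives as a string
        if conf >= min_confidence:
            out.append({
                "label": label,
                "confidence": conf,
                "box": (x1, y1, x2, y2),
                "width": x2 - x1,
                "height": y2 - y1,
            })
    return out
```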

Join our mailing list for updates, free code, and more!

SugarKubes is a container marketplace. Want to start running AI at the edge? Need some sweet machine learning models that work out of the box? Check us out at https://sugarkubes.io.
