Witness the power of Intel® iGPU with Azure IoT Edge for Linux on Windows (EFLOW) & OpenVINO™ Toolkit

Ragesh Hajela
Published in OpenVINO-toolkit
May 16, 2022

Authors: Narasimha Lakamsani, Devang Aggarwal, Sesh Seshagiri, Jian Sun

In a previous blog, we talked about how developers like yourself can use Windows Subsystem for Linux (WSL 2), a product that lets you run a Linux environment directly on Windows, to leverage the power of the Intel® iGPU on your laptop for inference in your applications.

In this blog, we will talk about a similar product, Azure IoT Edge for Linux on Windows (EFLOW). EFLOW and WSL are similar in that they both use the same underlying technology and run the same type of containerized workloads. However, they are designed for different audiences and use cases: WSL is meant for building, testing, and running Linux code and applications, whereas EFLOW is used to deploy Azure IoT Edge modules and Docker images running in containers.

In fact, you can use WSL for development and bundle your developed application in an Azure IoT Edge container to deploy it in EFLOW.
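As a sketch of that workflow, suppose you built and tested a binary called `my_app` under WSL 2 (a hypothetical name used here for illustration); packaging it for EFLOW could look like this minimal module Dockerfile:

```dockerfile
# Hypothetical example: my_app was built and tested under WSL 2,
# then copied into an image that EFLOW can run as an IoT Edge module.
FROM ubuntu:20.04
COPY ./my_app /app/my_app
ENTRYPOINT ["/app/my_app"]
```

The resulting image is pushed to a container registry and referenced in the IoT Edge deployment manifest like any other module.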

How EFLOW Works

Azure IoT Edge for Linux on Windows allows you to run containerized Linux workloads inside a curated Linux virtual machine running on Windows. EFLOW is primarily meant for running production services in IoT deployments, including processes that run around the clock for years.

Unleash the power of Intel® iGPU with the OpenVINO™ Toolkit

Intel’s Integrated Graphics Processing Unit (iGPU) sits on the same die as your CPU, and you can use this added horsepower to accelerate the AI inference workloads on your machine. Intel graphics processing units come in two flavors: integrated (built into the CPU) and discrete (a stand-alone graphics card, branded as Intel® Iris® Xe MAX).

To unleash the full potential of Intel hardware for AI workloads, you can use the OpenVINO™ toolkit, available as open source here: https://github.com/openvinotoolkit/openvino

Now, you can access the Intel® iGPU through Azure IoT Edge for Linux on Windows (EFLOW) with the help of the OpenVINO™ Toolkit.

System Setup for EFLOW

  1. Our system (other versions should also work): Windows 10/11 Insider Preview, Windows 10 IoT Enterprise LTSC 21H2 (Build 19044)
  2. Enable Hyper-V: Open PowerShell with administrator privileges, run the following command, and press “y” to reboot and complete the operation.
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

3. Install the Intel® Graphics Driver.

4. Install WSL: Open PowerShell with administrator privileges, run the following command, then restart the system to finish the installation.

wsl --install

EFLOW Installation

Download EFLOW

Open PowerShell with administrator privileges and run the following commands:

$msiPath = $([io.Path]::Combine($env:TEMP,'AzureIoTEdge.msi'))
$ProgressPreference = 'SilentlyContinue'
Invoke-WebRequest "https://aka.ms/AzEflowMSI" -OutFile $msiPath

Install EFLOW

Start-Process -Wait msiexec -ArgumentList "/i","$([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi'))","/qn"

EFLOW Deployment

Deploy EFLOW

Open PowerShell with administrator privileges and run the following commands:

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force
Import-Module AzureEFLOW
$cpu_count = 4
$memory = 4096
$hard_disk = 30
$gpu_name = (Get-WmiObject win32_VideoController | where {$_.name -like "Intel(R)*"}).caption
Deploy-Eflow -acceptEula yes -acceptOptionalTelemetry no -headless -cpuCount $cpu_count -memoryInMB $memory -vmDiskSize $hard_disk -gpuName $gpu_name -gpuPassthroughType ParaVirtualization -gpuCount 1

SSH into EFLOW VM

Connect-EflowVM

Ensure that GPU-PV is enabled

  • Within the EFLOW VM, run the following command:
ls -al /dev/dxg
  • The expected output should be similar to:
crw-rw-rw- 1 root root 10, 60  Sep  8 06:20 /dev/dxg
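If you script this check, the key signals in the `ls` output are the leading `c` (character device) and the `10, 60` major/minor pair. A small stdlib-only Python sketch (the sample line below is illustrative):

```python
def is_char_device(ls_line: str) -> bool:
    # In `ls -l` output, the first character of the mode string gives the
    # file type; 'c' marks a character device, which /dev/dxg should be.
    return ls_line.strip().startswith("c")

sample = "crw-rw-rw- 1 root root 10, 60 Sep  8 06:20 /dev/dxg"
print(is_char_device(sample))  # True
```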

Deploy a Docker container with OpenVINO™ Runtime and Run Benchmarking Using the Intel® iGPU

Create the Docker File

mkdir -p openvino_docker_dir && cd openvino_docker_dir
vi Dockerfile

Use the sample Dockerfile listed below to update your file

FROM ubuntu:20.04

RUN rm /bin/sh && ln -s /bin/bash /bin/sh

ENV DEBIAN_FRONTEND noninteractive
ENV INTEL_OPENVINO_DIR /opt/intel/openvino_2021
ENV MO_DIR ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer

RUN apt update && apt install -y \
    libgtk-3-0 \
    python3 \
    python3-dev \
    python3-setuptools \
    python3-pip \
    git \
    wget \
    libgl1 \
    build-essential \
    cmake \
    vim \
    apt-utils \
    clinfo \
    gnupg gpg-agent \
    software-properties-common \
    nfs-common

# Install the Open Model Zoo Accuracy Checker
WORKDIR /root
RUN git clone https://github.com/openvinotoolkit/open_model_zoo.git
WORKDIR /root/open_model_zoo
RUN git checkout 2021.4.2
WORKDIR /root/open_model_zoo/tools/accuracy_checker

RUN python3 -m pip install virtualenv && python3 -m virtualenv -p `which python3` checker && \
    source checker/bin/activate
RUN pip3 install imageio~=2.3.0
RUN python3 -m pip install torch==1.8.2+cpu torchvision==0.9.2+cpu torchaudio==0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
RUN python3 -m pip install --upgrade pip && \
    python3 -m pip install opencv-python

# Dataset for the accuracy checker
RUN wget https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz && tar xvf cifar-10-python.tar.gz -C sample && \
    sed -i 's/CPU/GPU/g' sample/sample_config.yml

# OpenVINO installation as documented at https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_apt.html
RUN pip3 install openvino defusedxml

RUN unset no_proxy && \
    wget https://apt.repos.intel.com/openvino/2021/GPG-PUB-KEY-INTEL-OPENVINO-2021 && \
    apt-key add GPG-PUB-KEY-INTEL-OPENVINO-2021 && \
    echo "deb https://apt.repos.intel.com/openvino/2021 all main" | tee /etc/apt/sources.list.d/intel-openvino-2021.list && \
    apt update && \
    apt install -y intel-openvino-runtime-ubuntu20-2021.4.752 \
        intel-openvino-ie-samples-2021.4.752 \
        intel-openvino-model-optimizer-2021.4.752 \
        intel-openvino-pot-2021.4.752 \
        intel-openvino-opencv-generic-2021.4.752 \
        intel-openvino-opencv-lib-ubuntu-focal-2021.4.752

# Install the Model Optimizer dependencies
WORKDIR ${MO_DIR}/install_prerequisites
RUN unset no_proxy && \
    source ${INTEL_OPENVINO_DIR}/bin/setupvars.sh && \
    sed -i 's/sudo -E//g' install_prerequisites.sh && \
    ./install_prerequisites_tf2.sh && \
    ./install_prerequisites_caffe.sh && \
    ./install_prerequisites_onnx.sh && \
    ./install_prerequisites_mxnet.sh

WORKDIR /root/open_model_zoo/tools/accuracy_checker
RUN pip3 install -r requirements.in && sync && python3 setup.py install

# Environment Setup
ENV PATH=/opt/intel/openvino_2021.4.752/deployment_tools/inference_engine/lib:$PATH
RUN echo "source /opt/intel/openvino_2021/bin/setupvars.sh" | tee -a /root/.bashrc
ENV LD_LIBRARY_PATH=/opt/intel/openvino_2021.4.752/deployment_tools/inference_engine/lib/intel64:/usr/local/lib/python3.8/dist-packages/openvino/libs:/opt/intel/openvino_2021.4.752/deployment_tools/inference_engine/external/tbb/lib/:/opt/intel/openvino_2021.4.752/deployment_tools/inference_engine/lib:/opt/intel/openvino_2021.4.752/deployment_tools/ngraph/lib/
ENV PYTHONPATH=/opt/intel/openvino_2021.4.752/deployment_tools/inference_engine/lib/intel64/:/opt/intel/openvino_2021.4.752/python/python3.6/openvino/inference_engine:~/open_model_zoo/tools/accuracy_checker:$PYTHONPATH

# Install the Intel OpenCL compute runtime and dependencies
# Release info: https://github.com/intel/compute-runtime/releases/tag/21.40.21182
WORKDIR /root/neo
RUN wget https://github.com/intel/compute-runtime/releases/download/22.05.22297/intel-gmmlib_22.0.0_amd64.deb && \
    wget https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.9933/intel-igc-core_1.0.9933_amd64.deb && \
    wget https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.9933/intel-igc-opencl_1.0.9933_amd64.deb && \
    wget https://github.com/intel/compute-runtime/releases/download/22.05.22297/intel-opencl-icd_22.05.22297_amd64.deb && \
    wget https://github.com/intel/compute-runtime/releases/download/22.05.22297/intel-level-zero-gpu_1.3.22297_amd64.deb
RUN dpkg -i /root/neo/*.deb

Build the Docker Image

  • Within the directory where the Dockerfile is located, run the following command:
sudo docker build -t ovbuild .

Launch the Docker Container with Access to the GPU-PV Functionality

sudo docker images

Use the output to find the <IMAGE ID> of the image you built

sudo docker run -it --device /dev/dxg:/dev/dxg -v /usr/lib/wsl:/usr/lib/wsl <IMAGE ID> bash

Run OpenVINO™ Benchmarking Using the iGPU

Within the deployed container, run the following commands:

cd /opt/intel/openvino_2021/inference_engine/samples/cpp && ./build_samples.sh
cd ~/inference_engine_cpp_samples_build/intel64/Release
wget https://download.01.org/opencv/2021/openvinotoolkit/2021.2/open_model_zoo/models_bin/3/face-detection-retail-0004/FP32/face-detection-retail-0004.xml
wget https://download.01.org/opencv/2021/openvinotoolkit/2021.2/open_model_zoo/models_bin/3/face-detection-retail-0004/FP32/face-detection-retail-0004.bin
./benchmark_app -m face-detection-retail-0004.xml -d GPU

The output should look something like the image shown below, with a clear indication of iGPU usage in Task Manager.
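If you want to compare runs on different devices programmatically, you can scrape the throughput figure from benchmark_app's summary. A stdlib-only sketch; the exact output format can vary between OpenVINO versions, and the sample text below is illustrative:

```python
import re

def parse_throughput(output: str):
    # benchmark_app ends with a summary line such as "Throughput: 243.52 FPS";
    # return the FPS value, or None if no such line is found.
    m = re.search(r"Throughput:\s*([\d.]+)\s*FPS", output)
    return float(m.group(1)) if m else None

sample = "Count: 1000 iterations\nDuration: 4105.52 ms\nThroughput: 243.52 FPS"
print(parse_throughput(sample))  # 243.52
```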

Azure IoT Hub Deployment

Learn how to deploy Azure IoT Edge modules from Azure cloud to your Windows device here.

Conclusion

We hope that with this blog, we were able to show you just how easy it is to:

  • Set up and deploy your application or Docker container in a Linux VM using EFLOW on a Windows device.
  • Offload your AI workloads by harnessing the power of the iGPU.
  • Get a significant performance boost by simply using the OpenVINO™ Toolkit.

Learn More

Notices & Disclaimers

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.

Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.

Your costs and results may vary.

Intel technologies may require enabled hardware, software or service activation.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

This post was originally published on Intel.com.
