Deep Learning Inference in Java with OpenVINO™ Runtime

Rajat · Published in OpenVINO-toolkit · Aug 29, 2023

As AI applications become increasingly ubiquitous, the demand for enabling a wide range of devices, from low-end, resource-constrained platforms to high-end CPUs and GPUs, to deliver these applications to end users is on the rise. Deep learning models are generally computationally expensive to train, requiring significant computing power and large amounts of data. However, the increasing availability of state-of-the-art pre-trained models means that for most practical applications you can easily find and customize a pre-trained model, saving the time and resources needed to train a model from scratch.

Once a model is prepared, the challenge becomes deploying it efficiently for inference and minimizing computational overhead for the best performance. Here, tools like OpenVINO allow you to optimize, deploy, and run inference for a variety of applications, including computer vision, natural language processing, and more. OpenVINO is a powerful open-source toolkit from Intel for optimizing and deploying deep neural networks for inference on a range of hardware platforms, including CPUs, GPUs, VPUs, and FPGAs. It is also optimized out of the box to run deep learning algorithms efficiently on Intel hardware.

In addition to the C and Python bindings offered in the official distribution of OpenVINO, the OpenVINO Java extra module provides an API that lets Java developers leverage the capabilities of OpenVINO within their applications. This article guides you through the process of setting up OpenVINO for Java development, using the Open Model Zoo, and running a sample application for face detection. The following steps are outlined:

1. Build the OpenVINO Java module from the source files
2. Use the OpenVINO Development Tools to download models from the Open Model Zoo
3. Import the project into IntelliJ IDEA
4. Run the Face Detection sample application

The provided Java samples are simple console applications that demonstrate the usage of the OpenVINO Java API. The Face Detection sample application loads an input image and uses a pre-trained face-detection network to detect faces and predict their bounding boxes. The image is output with bounding boxes drawn around the detected faces.
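At its core, the sample follows the standard OpenVINO inference flow: read the model, compile it for a target device, create an inference request, and run it on the preprocessed image. The sketch below is illustrative only; it assumes the Java bindings mirror the C++ API 2.0 naming (read_model, compile_model, create_infer_request), so refer to the face_detection_java_sample sources for the exact, working code.

import org.intel.openvino.CompiledModel;
import org.intel.openvino.Core;
import org.intel.openvino.InferRequest;
import org.intel.openvino.Model;

public class FaceDetectionSketch {
    public static void main(String[] args) {
        // inference_engine_java_api.dll and the OpenVINO runtime DLLs must be on the library path
        Core core = new Core();                                    // entry point to the OpenVINO Runtime
        Model model = core.read_model(args[0]);                    // path to the face-detection .xml file
        CompiledModel compiledModel = core.compile_model(model, "CPU");
        InferRequest request = compiledModel.create_infer_request();
        // Read and preprocess the input image (e.g. with OpenCV), wrap it in a Tensor,
        // then call request.set_input_tensor(...), request.infer(), and read the detections
        // from request.get_output_tensor().
    }
}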

Setup

To run the sample application, we need to install OpenCV and OpenVINO. Download the OpenCV installer .exe for version 4.6.0 from SourceForge and unpack the self-extracting archive to C:\lib. This will install the OpenCV components to C:\lib\opencv.

To set up OpenVINO, you can either build from the source files available on GitHub or download one of the pre-built binaries from the OpenVINO package downloads page. The following steps outline the process of building the Java module for the 2023.0.1 release of the OpenVINO toolkit on Windows x86 64-bit machines.

Prerequisites

  • OpenJDK 8
  • CMake 3.13 or higher
  • Microsoft Visual Studio 2019 or higher, version 16.3 or later
  • Windows 10 x86 64-bit or higher
  • Git for Windows

Note: Validated on

  • OpenJDK 1.8.0_382
  • CMake 3.27.1
  • Microsoft Visual Studio Community 2022 version 17.6.5
  • Windows 11 64-bit

Steps

  • Download the zip file from the package downloads page and extract the archive to C:\lib such that the OpenVINO components are installed to C:\lib\w_openvino_toolkit_windows_2023.0.1.11005.fa1c41994f3_x86_64
  • Clone the OpenVINO Extra Modules repository from GitHub, which contains the source files for the Java bindings module with the latest changes
git clone https://github.com/openvinotoolkit/openvino_contrib.git -b master
cd openvino_contrib/modules/java_api
  • Set up the OpenVINO environment variables by running the following batch script in Command Prompt. Please note that “setupvars.bat” works correctly only for Command Prompt, not for PowerShell.
C:\lib\w_openvino_toolkit_windows_2023.0.1.11005.fa1c41994f3_x86_64\setupvars.bat
  • Next, create the build directory and run cmake to fetch project dependencies and generate a Visual Studio solution.
mkdir build && cd build
cmake -G "Visual Studio 17 2022" ..

Then run the following command to build from the command line.

cmake --build . --config Release --verbose 

The built library files are placed in the .\Release folder. Copy the inference_engine_java_api.dll file to C:\lib\w_openvino_toolkit_windows_2023.0.1.11005.fa1c41994f3_x86_64\runtime\bin\intel64\Release and delete the build directory with the following commands to complete the setup process.

copy Release\inference_engine_java_api.dll C:\lib\w_openvino_toolkit_windows_2023.0.1.11005.fa1c41994f3_x86_64\runtime\bin\intel64\Release
cd ..
rd /s build

Preparing the Model

The sample application uses a pre-trained face-detection network, either the face-detection-retail-0004 or the face-detection-adas-0001 model, to detect faces in images and predict their bounding boxes. These models are available in the Open Model Zoo, a repository of free, pre-trained deep learning models and demo applications licensed under the Apache License, Version 2.0.

To facilitate usage of these models, the OpenVINO toolkit provides automation scripts that let you download, convert, and optimize them in preparation for inference. These utilities are available as part of the OpenVINO Development Tools, distributed as a Python package via PyPI.

When installing through pip, we can also pass an optional extras parameter to install OpenVINO-validated versions of deep learning frameworks. One or more of the following frameworks, separated by commas, can be passed: caffe, kaldi, mxnet, onnx, pytorch, tensorflow, tensorflow2. The input models used in this example are based on the Caffe framework.
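For example, to install OpenVINO-validated versions of both the Caffe and ONNX frameworks along with the development tools (using the release version pinned later in this article), combine the extras inside the brackets:

pip install openvino-dev[caffe,onnx]==2023.0.1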

Note that to use the development tools, you need to have Python 3.7 or higher installed.

1. Create and activate a new Python virtual environment to avoid dependency conflicts
python -m venv openvino_env
openvino_env\Scripts\activate

2. Ensure that pip is installed and upgraded to the latest version

python -m pip install --upgrade pip

3. Using pip, install the developer tools package version 2023.0.1

pip install openvino-dev[caffe]==2023.0.1

4. To download the face-detection-adas-0001 model using the Model Downloader tool omz_downloader, execute the following command

omz_downloader --name face-detection-adas-0001 --precisions=FP32

Generally, pretrained models are available in multiple floating-point precisions with performance-accuracy trade-offs. You can select one or more of the available model precisions, separated by commas, using the optional --precisions parameter.
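For example, to fetch both the FP32 and FP16 variants of the face-detection model in a single call:

omz_downloader --name face-detection-adas-0001 --precisions FP32,FP16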

When running on an integrated GPU device, prefer FP16 models, since Intel integrated GPUs support FP16 computation natively. Models compressed to FP16 occupy about half the space of the original model and generally run faster, with a minor drop in accuracy that is negligible for most models.

Import the project into IntelliJ IDEA

Now that we have installed the required dependencies and downloaded the face-detection model, we can set up the development environment using IntelliJ IDEA, a popular IDE with several integrated tools that help speed up development. You can download the installer for IntelliJ from its downloads page. The following steps describe the process of importing the project into IntelliJ and running the sample application.

Prerequisites

  • IntelliJ IDEA version 2023.1 or higher

Steps

  • Before getting started, make sure the Gradle plugin is installed and enabled: navigate to Settings > Plugins, search for “gradle”, and check that it is enabled
  • Select File > Open and locate the Java API module directory at <openvino_contrib>\modules\java_api. Import this directory into IntelliJ.
  • Once the project is imported, go to Settings > Project Structure (Ctrl + Alt + Shift + S). Under the Project tab in the Project Structure dialog, click on the SDK dropdown. If you have a local OpenJDK 8 installation, select Add SDK > JDK and locate the install directory. Otherwise, select Add SDK > Download JDK. In the Download JDK dialog, select version 1.8 and click Download. With the JDK set up, save the project settings by clicking Apply.
  • Open the Run/Debug Configurations dropdown and select Edit Configurations. Click on Add New Configuration from the dialog box and select Gradle from the dropdown menu.
  • Give the new configuration a name: “FaceDetectionJavaSample”. In the Tasks and Arguments input box, enter the following, replacing <path_to_model> with the path to the downloaded model’s .xml file and <path_to_image> with the path to the input image
:samples:face_detection_java_sample:run --args='-m <path_to_model> -i <path_to_image>' -Pbuild_java_samples=true
  • Set the OpenVINO and OpenCV environment variables by selecting Edit environment variables and adding the following environment variables:
    INTEL_OPENVINO_DIR=C:\lib\w_openvino_toolkit_windows_2023.0.1.11005.fa1c41994f3_x86_64
    OpenCV_DIR=C:\lib\opencv\build
  • Next, we need to add the libraries to the system PATH variable. To do this in IntelliJ, ensure that the Include system environment variables option is checked, scroll to the Path variable, and append the following directories to its value
    C:\lib\w_openvino_toolkit_windows_2023.0.1.11005.fa1c41994f3_x86_64\runtime\3rdparty\tbb\bin;C:\lib\w_openvino_toolkit_windows_2023.0.1.11005.fa1c41994f3_x86_64\runtime\bin\intel64\Release;C:\lib\opencv\build\java\x64;
  • Click OK to save the configuration.

Face Detection Sample

To run the sample application, select the saved configuration “FaceDetectionJavaSample” from the Run/Debug Configurations dropdown and click the Run button in the top right corner. Alternatively, click the Debug button to run in debug mode.

The application reads the model and input image paths as command-line parameters. The model is loaded on a device for inference (a device in this context is the CPU, an Intel GPU, or other hardware used to run inference). When inference is done, the application displays the source image in a new window with the detected faces enclosed in rectangles. The confidence values and bounding-box coordinates are also written to the standard output stream.
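These face-detection models produce an SSD-style detection output of shape [1, 1, N, 7], where each row holds [image_id, label, confidence, x_min, y_min, x_max, y_max] with coordinates normalized to the [0, 1] range. The fragment below is a hedged sketch of how such an output can be decoded; the Tensor.data() accessor and the 0.5 confidence threshold are assumptions for illustration, and the shipped sample remains the authoritative reference.

import org.intel.openvino.InferRequest;
import org.intel.openvino.Tensor;

class DetectionPrinter {
    // Prints detections from a completed inference request; imageWidth/imageHeight are the source image dimensions.
    static void printDetections(InferRequest request, int imageWidth, int imageHeight) {
        Tensor output = request.get_output_tensor();
        float[] detections = output.data();                   // flattened [1, 1, N, 7] detection tensor (assumed accessor)
        final int DETECTION_SIZE = 7;                         // [image_id, label, conf, x_min, y_min, x_max, y_max]
        for (int i = 0; i < detections.length / DETECTION_SIZE; i++) {
            float confidence = detections[i * DETECTION_SIZE + 2];
            if (confidence < 0.5f) continue;                  // illustrative confidence threshold
            // Coordinates are normalized; scale them by the source image size.
            float xMin = detections[i * DETECTION_SIZE + 3] * imageWidth;
            float yMin = detections[i * DETECTION_SIZE + 4] * imageHeight;
            float xMax = detections[i * DETECTION_SIZE + 5] * imageWidth;
            float yMax = detections[i * DETECTION_SIZE + 6] * imageHeight;
            System.out.printf("Face: confidence %.2f, box (%.0f, %.0f) - (%.0f, %.0f)%n",
                    confidence, xMin, yMin, xMax, yMax);
        }
    }
}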

Additional Resources
