How to run TensorFlow in C++. Machine Learning models in production

Ildar Idrisov, PhD
Published in vicuesoft-techblog · Jun 26, 2019

As you all know, the final stage of any software system development is deploying the solution to a production environment. As usual, during deployment you can face a lot of tricky issues, even more so when you work on a Machine Learning project. Here I want to share the experience with TensorFlow C++ that we gained while creating our Content Detection Solution at ViCue Soft.

Below I will show you step-by-step instructions for running a TensorFlow model as a simple C++ application on Windows 10 x64 with an NVIDIA GTX 1050 graphics card.

Are you ready? Let's go!

Step 1. Set the GPU environment

Our current solution runs on an NVIDIA GPU (GTX 1050), which increased its performance up to three times compared with the old CPU-based solution (i7-7700HQ).

If you are going to use a CPU, you can skip this part.

1.1. Driver installation

To install the driver, you need to download it from the official NVIDIA website. Please follow the link:

A. Choose your video card model and operating system:

B. Download the driver:

C. Install the driver:

The installation process is fairly straightforward, so I will not describe it in detail.

1.2. CUDA installation

Next, you need to install the CUDA package. At the time of writing, CUDA 10.1 had already been released, but I have not tested this version, so correct operation with it is not guaranteed. In my projects I used CUDA 9.0, so download that package from the official website. To get an earlier version, follow the link:

Run the downloaded package and select Custom (Advanced) installation. Be sure to select the CUDA components and deselect the driver installation, because we have already installed a newer driver separately. Other components can be installed as you wish; for our task they are not needed.

1.3. cuDNN installation

Please follow the link:

Select the cuDNN version that matches the CUDA version installed earlier. I used cuDNN v7.4.1 (Nov 8, 2018) for CUDA 9.0. Next, choose the cuDNN library for Windows 10. To download it, you will need to create an account on the NVIDIA website.

The downloaded package should be unzipped and all its folders and files copied into the CUDA installation directory. The default directory is "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0".

Restart the computer and check that the installation is correct. To do this, enter the following command in the console:

> nvcc --version

As a result, we should get the following:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_21:08:32_Central_Daylight_Time_2017
Cuda compilation tools, release 9.0, V9.0.176

Step 2. Prepare TensorFlow C++ Library

I used TensorFlow version 1.13.1.

2.1. Prebuilt package

If you don’t need a specific TensorFlow build, you can just download the prebuilt binaries. To do this, follow the link:

Select Windows GPU only and download the archive.

Or just follow the link:

https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-gpu-windows-x86_64-1.13.1.zip

After unpacking, we get the necessary files:

  • c_api.h
  • LICENSE
  • tensorflow.dll

2.2. TensorFlow building

If you are not looking for the easy way and want to build TensorFlow yourself, a few more steps are needed.

First you need to install Bazel; without it we will not be able to build TensorFlow. The most important thing is to choose the right version of Bazel, which may differ for your specific configuration. Version 0.20 works for the steps described here.

To do this, download Bazel and add its location to the PATH environment variable.

Note: wget is used here for convenience; you can download Bazel in any way you like.

> mkdir c:\bazel
> cd c:\bazel
> wget https://github.com/bazelbuild/bazel/releases/download/0.20.0/bazel-0.20.0-windows-x86_64.exe
> rename c:\bazel\bazel-0.20.0-windows-x86_64.exe bazel.exe
> set PATH=%PATH%;c:\bazel

All versions of Bazel can be found here:

Next, download TensorFlow from GitHub into any folder.

> d:
> git clone https://github.com/tensorflow/tensorflow.git
> cd tensorflow

Switch to the branch with version 1.13:

> git checkout r1.13

And run the configuration using the Python script. During the process you will be asked questions; for most of them you can accept the default answer by simply pressing Enter:

> python3 configure.py
You have bazel 0.20.0 installed.
Please specify the location of python. [Default is C:\Program Files\Python36\python.exe]:
Found possible Python library paths:
  C:\Program Files\Python36\lib\site-packages
Please input the desired Python library path to use. Default is [C:\Program Files\Python36\lib\site-packages]
Do you wish to build TensorFlow with XLA JIT support? [y/N]:
No XLA JIT support will be enabled for TensorFlow.
Do you wish to build TensorFlow with ROCm support? [y/N]:
No ROCm support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: Y
CUDA support will be enabled for TensorFlow.
Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 9.0]:
Please specify the location where CUDA 9.0 toolkit is installed. Refer to README.md for more details. [Default is C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0]:
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7]:
Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0]:
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.5,7.0]: 6.1
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is /arch:AVX]:
Would you like to override eigen strong inline for some C++ compilation to reduce the compilation time? [Y/n]:

Next, run the TensorFlow build:

> bazel build --config=monolithic --config=opt --config=cuda //tensorflow/tools/lib_package:libtensorflow

The TensorFlow library will be built and packed into an archive, which must be unpacked, for example with 7-Zip:

> cd d:\tensorflow\bazel-bin\tensorflow\tools\lib_package
> 7z e libtensorflow.tar.gz && 7z x libtensorflow.tar

After unpacking, we get the required library and the header with the API description.

Note: Bazel generates the output library with a .so extension, but it is in fact a Windows dynamic library, so you need to rename it to .dll.

Step 3. Run C++ Sample

Now let's try to run TensorFlow in C++ and call the function that returns the version. Since we have only the dynamic library tensorflow.dll without an import library, we will load it at runtime. For this we will write a small loader.

Let's include the necessary headers for the Windows API and the TensorFlow API, plus <iostream> for output:

#include <Windows.h>
#include <iostream>
#include <c_api.h>

Describe the prototype of the TensorFlow function:

typedef const char* (__cdecl *TF_Version)(void);

Next, load the TensorFlow dynamic library. It should be placed in the same directory as your project's executable.

HMODULE handle = LoadLibrary("tensorflow.dll");

Call the function:

TF_Version get_version = (TF_Version)GetProcAddress(handle, "TF_Version");
std::cout << "Version of TensorFlow: " << get_version() << std::endl;

And do not forget to free resources.

FreeLibrary(handle);
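For reference, here are the fragments above assembled into a single program with basic error checking. This is a sketch rather than production code: LoadLibraryA is used so the narrow string literal works regardless of the project's UNICODE setting, and failures are reported instead of crashing on a null handle.

```cpp
#include <Windows.h>
#include <iostream>

// Signature of the TF_Version function exported by tensorflow.dll.
typedef const char* (__cdecl *TF_VersionFn)(void);

int main() {
    // tensorflow.dll must be next to the executable or on the DLL search path.
    HMODULE handle = LoadLibraryA("tensorflow.dll");
    if (!handle) {
        std::cerr << "Failed to load tensorflow.dll, error " << GetLastError() << "\n";
        return 1;
    }

    auto get_version = (TF_VersionFn)GetProcAddress(handle, "TF_Version");
    if (!get_version) {
        std::cerr << "TF_Version not found in tensorflow.dll\n";
        FreeLibrary(handle);
        return 1;
    }

    std::cout << "Version of TensorFlow: " << get_version() << std::endl;

    FreeLibrary(handle);
    return 0;
}
```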

Compile the code and run it. After initialization, at the end of the output you will see:

...
Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3009 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050, pci bus id: 0000:01:00.0, compute capability: 6.1)
Version of TensorFlow: 1.13.1

Do you see it? Great! Congratulations! The environment is set up, the library is ready, and the application runs well. Now you are ready to go deeper into ML using TensorFlow in C++.
