TensorFlow C++ API for Android

Leeor Langer
3 min read · Dec 30, 2018


Let’s say you want to develop a mobile app that includes deep learning functionality, where deep learning is only one part of the software stack. Google’s TensorFlow deep learning framework is ideal for such usage. It is written in C++ and exposes a C++ API, yet surprisingly there is no example of C++ usage on Android… Google currently only supports a Java API on Android, via JNI (libtensorflow_inference.so).

We found a seemingly innocent comment regarding TensorFlow C++ on Android here. Pete Warden from the TensorFlow team points to the benchmark tool as an example of cross-platform usage (it builds for both Linux PCs and Android). So we refactored the benchmark tool, removed all the unnecessary code, and left only an API. The API consists of two functions, Init and Run. They are declared in the header file, and the code is compiled as a dynamic library for the ARM architecture (an .so file).
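For reference, a header for such a two-function API might look roughly like the sketch below. The signatures are illustrative assumptions, not the repo’s exact declarations; check the actual header file before linking against it:

    // tensorflow_inference.h — illustrative sketch, not the actual header.
    #pragma once
    #include <string>
    #include <vector>

    // Loads the frozen graph and prepares a session.
    // Returns an opaque handle to the model instance, or nullptr on failure.
    // Opaque handles make it possible to hold multiple model instances at once.
    void* Init(const std::string& model_path,
               const std::vector<std::string>& input_names,
               const std::vector<std::string>& output_names);

    // Feeds one flattened float buffer per input node through the graph
    // and fills one output buffer per requested output node.
    bool Run(void* model_handle,
             const std::vector<std::vector<float>>& inputs,
             std::vector<std::vector<float>>* outputs);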

The difficult part in creating such a library is the build tooling. In particular, Bazel is not well documented outside of Google (more on the subject), so it takes some effort to understand how to compile and link such a library, on top of writing the code itself. The implementation supports diverse usage scenarios, including multiple inputs, multiple outputs, multiple model instances, and logging.

To use the API, simply link against libTensorflowInference.so from your C++ code. To modify and recompile the library itself, do the following:

  1. Clone the TensorFlow source and check out release 1.9.
  2. Edit the WORKSPACE file in the TensorFlow root to include your SDK and NDK details (see the sketch after this list, and the example WORKSPACE file in our repo).
  3. Copy the tfwld directory into tensorflow/tensorflow/tools.
  4. Run the Bazel build command with the appropriate flags (such as "arm64-v8a").
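The SDK/NDK entries from step 2 look roughly like this; the paths, API levels, and build-tools version below are placeholders, so substitute your local installation:

    # Illustrative WORKSPACE entries — paths and versions are placeholders.
    android_sdk_repository(
        name = "androidsdk",
        api_level = 23,
        build_tools_version = "26.0.1",
        path = "/path/to/Android/Sdk",
    )

    android_ndk_repository(
        name = "androidndk",
        api_level = 21,
        path = "/path/to/Android/Sdk/ndk-bundle",
    )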

Run the following command line invocation (Step 4):

bazel build -c opt --copt="-fPIC" --cxxopt='-std=c++11' --crosstool_top=//external:android/crosstool --cpu=arm64-v8a --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --config=monolithic tensorflow/tools/tfwld:libTensorflowInference.so

The BUILD file is kind of tricky; let’s go over it:
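A rough sketch of its structure is shown below. The target names, source files, and the version-script path are illustrative; the repo holds the actual file:

    # Illustrative BUILD sketch — names and paths are placeholders.
    cc_binary(
        name = "libTensorflowInference.so",
        srcs = [
            "tensorflow_inference.cc",
            "tensorflow_inference.h",
        ],
        copts = ["-fPIC"],
        linkopts = [
            "-shared",
            # Strip everything except the exported API symbols.
            "-Wl,--version-script=tensorflow/tools/tfwld/version_script.lds",
        ],
        linkshared = 1,
        deps = select({
            # On Android, link only the trimmed-down core.
            "//tensorflow:android": [
                "//tensorflow/core:android_tensorflow_lib",
            ],
            # Elsewhere, pull in the full core.
            "//conditions:default": [
                "//tensorflow/core:tensorflow",
            ],
        }),
    )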

The idea is to compile an “internal” library that packages only the components necessary for Android; on Windows it also packs in parts of the core functionality (which apparently is not supported on current ARM architectures). We wrap this library with our own interface for simple inference usage, exposing only the two functions. Note that in order to strip all unnecessary symbols, we use "-Wl,--version-script=…"; change this to your local path. Also note the use of "-shared" and the "-fPIC" compiler flag. Together these produce a shared library (instead of an executable).

So at the end of the day, your C++ code will look something like the following example:
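Roughly like this, that is; the snippet below assumes the illustrative signatures sketched earlier, so treat the node names, tensor shapes, and file paths as placeholders:

    // Illustrative usage — signatures, node names, and paths are placeholders.
    #include <string>
    #include <vector>
    #include "tensorflow_inference.h"

    int main() {
        // Load the frozen graph once; each handle is a separate model instance.
        void* model = Init("/sdcard/model.pb",
                           {"input:0"},     // input node names
                           {"output:0"});   // output node names
        if (model == nullptr) return 1;

        // One flattened float buffer per input node (e.g. a 224x224x3 image).
        std::vector<std::vector<float>> inputs(
            1, std::vector<float>(224 * 224 * 3, 0.0f));
        std::vector<std::vector<float>> outputs;

        // Run inference; outputs receives one buffer per output node.
        Run(model, inputs, &outputs);
        return 0;
    }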

We are Wearable Devices, a startup company that develops hardware and software solutions for interacting with computers. Our vision is to make interaction with and control of computers as natural and intuitive as real-life experiences. We imagine a future in which the human hand becomes a universal input device, interacting with digital devices through simple gestures.

If you are a developer, check out www.getmudra.com. You can find the code for this example on GitHub. You can also read my previous Medium post on Mudra, Pushing the Boundaries of Deep Learning: Making Sense of Biopotentials.
