DeepLab Image Segmentation on Android with TFLite — Part 2

TFLite Model Conversion and Deployment in an Android App

B M Abir
Chowagiken Tech Blog
4 min read · Aug 18, 2020

--

In Part 1 of this series, we learned how to train a DeepLab-v3 model on the PASCAL VOC dataset and export it as a frozen_inference_graph.pb file with an input size of 257x257. In this part, we will convert this frozen graph into a TFLite model that can be used in an Android app for image segmentation.

1. Install Prerequisites

  • TensorFlow v2.2.0
  • NumPy

In a Python 3.6.8 environment, install them with pip.

Now we will use the trained .pb model from Part 1. Alternatively, you can download the same model using this bash script: download the script and run it from your project root.

This downloads the zip file containing the model and extracts it in the project root.

2. Load Model for Conversion

Now let’s create a Python script named convert.py and import the required packages.

import required packages
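A minimal sketch of those imports, assuming only the two prerequisites listed above:

```python
import numpy as np
import tensorflow as tf

# The article targets TensorFlow v2.2.0; in TF 2.x, the frozen-graph
# converter used below lives under the tf.compat.v1 namespace.
print(tf.__version__)
```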

Then specify the path to the model we downloaded in the last step.

We will use the TensorFlow Lite Converter to convert our frozen GraphDef model, frozen_inference_graph_257.pb, into a TFLite model.

Let’s load the TensorFlow model into the converter using the following function

Load TensorFlow model into TFLiteConverter
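A sketch of that loading step, assuming the layer names and input shape described below (sub_2 as input, ResizeBilinear_2 as output, shape [1, 257, 257, 3]):

```python
import tensorflow as tf

def load_converter(graph_def_file="frozen_inference_graph_257.pb"):
    # from_frozen_graph is only available under tf.compat.v1 in TF 2.x
    return tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
        graph_def_file=graph_def_file,
        input_arrays=["sub_2"],              # input layer name of our model
        output_arrays=["ResizeBilinear_2"],  # output layer name of our model
        input_shapes={"sub_2": [1, 257, 257, 3]},
    )
```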

The input_arrays is set to sub_2, as that is the name of the input layer of the model we trained. For the conversion to work properly, we have to specify both the input layer name and the output layer name, which is ResizeBilinear_2 for our model. We can also inspect our model’s input and output layer names using the Netron model viewer. (If you want to know why they must be specified, this issue on the TensorFlow repo discusses it at length.)

The input_shapes of our model is set to [1,257,257,3], where 1 is the batch size of the input, 257x257 is the input image size, and 3 is the number of color channels in the input image. The 257x257 input size is chosen because the TFLite Android interpreter currently throws a buffer-overflow error if a larger image is given as input for inference.

TensorFlow frozen_inference_graph_257.pb model input layer

3. Convert and Save Model

After the model is loaded, we can convert it and save it as deeplabv3_mnv2_custom_257.tflite

convert and save the model as tflite

During conversion, we kept the default parameters for post-training optimization and quantization. If you want to learn more, please go through the following docs:

After the converted model is saved, use Netron to verify that the input and output layer names and the model’s input_shapes are correct.

Our Converted Model for TFLite

Now our model is ready to be deployed in an Android app for inference. [converted model link]

The following notebook contains the whole conversion code [alternate github gist link], along with Python inference and visualization code for testing the model.
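For reference, a minimal inference sketch along those lines, assuming the input image has already been preprocessed into a float32 array of shape (257, 257, 3) with the same normalization used during training:

```python
import numpy as np
import tensorflow as tf

def run_segmentation(model_path, image):
    # Load the converted .tflite model into the TFLite interpreter
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    # Add the batch dimension expected by input_shapes [1, 257, 257, 3]
    interpreter.set_tensor(inp["index"], image[np.newaxis, ...].astype(np.float32))
    interpreter.invoke()
    logits = interpreter.get_tensor(out["index"])  # (1, 257, 257, num_classes)
    return np.argmax(logits[0], axis=-1)           # per-pixel class ids
```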

4. Build Android App

To deploy the model for inference quickly, we will use an existing code base that the TensorFlow team provides at the following link:

Step 4.1 Clone the TensorFlow examples source code

Clone the TensorFlow examples GitHub repository to your computer to get the demo application.

Step 4.2 Import the sample app to Android Studio

Open the TensorFlow source code in Android Studio. To do this, open Android Studio and select Import Projects (Gradle, Eclipse ADT, etc.), setting the folder to examples/lite/examples/image_segmentation/android

Step 4.3 Run the Android app

Connect your Android device to the computer and be sure to approve any ADB permission prompts that appear on the phone. Select Run -> Run app, then choose your device as the deployment target. This will install the app on the device.

To test the app, open the app called TFL Image Segmentation on your device. Re-installing the app may require uninstalling the previous installation first.

Now that we have verified that the app runs properly with its default model, we will add our custom trained and converted model to the app.

5. Deploy Our Trained and Converted Model to the App

Put the converted deeplabv3_mnv2_custom_257.tflite model file into the following directory: [app_root]/app/src/main/assets

The default model in the app is named deeplabv3_mnv2_dm10_257.tflite; since our converted model has a different file name, the model name referenced in the app has to be changed to match ours.

If your model was trained on different object classes, change the class label text and the number of classes accordingly. Afterward, build the application and test-run it.

Image Segmentation Model, inference Result

Our app can now capture images with the camera and show the inference result on the screen with class labels and their respective color-coded masked regions.

References

  1. https://github.com/tensorflow/models/tree/5d36f19bd3556606e6d294d5690cc7e96679b929/research/deeplab
  2. https://www.jianshu.com/p/dcca31142b99
  3. https://github.com/tensorflow/tensorflow/issues/23747#issuecomment-562964513
  4. https://github.com/tensorflow/models/tree/master/research/deeplab/g3doc
