TensorFlow Lite Model Deployment!

Maheshwar Ligade
Published in techwasti
3 min read · Dec 16, 2019

Here you go: Introduction Story of TensorFlow Lite

In the above article, we introduced TensorFlow Lite: what it is, what its purpose is, and what it is not.

In this article, we will dig deeper into the steps involved in deploying a TensorFlow Lite model.

The diagram above shows the deployment flow of a TensorFlow Lite model on edge devices.

Let us go through the steps from the top of the diagram.

At a very high level, the diagram breaks down into two pieces of functionality: the first step is the converter, and the second is the interpreter, which runs inference on the model.

1. Train Model:-

Train your model using TensorFlow. You can train it with a high-level API such as Keras or with the low-level API, or you may already have a legacy TensorFlow model. You can develop your own model or use one of TensorFlow's built-in models.

If you have a model built with another framework, you can convert it to TensorFlow using ONNX and use it. Once the model is ready, you have to save it. Depending on the API, the model can be saved in different formats such as HDF5, SavedModel, or FrozenGraphDef.
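For instance, a tf.keras model can be saved in either the SavedModel or the HDF5 format. A minimal sketch (the layer sizes and file paths here are illustrative assumptions, not from the article):

import tensorflow as tf

# build a small model (layer sizes are placeholders); training with model.fit() is omitted here
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# SavedModel format: writes a directory
model.save("/tmp/my_saved_model")

# HDF5 format: writes a single .h5 file
model.save("/tmp/my_model.h5")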

2. Convert Model:-

In this step, we use the TensorFlow Lite converter to convert the TensorFlow model into the TensorFlow Lite FlatBuffer format.

FlatBuffers is a data serialization format optimized for performance; the resulting TensorFlow Lite FlatBuffer is also known as a TF Lite model. The TensorFlow Lite converter takes a TensorFlow model and generates a TensorFlow Lite FlatBuffer file (.tflite). The converter supports SavedModel directories, tf.keras models, and concrete functions. After conversion, our TFLite model is ready.

You can convert a model using the Python API or the command-line tool. The CLI supports only basic conversions.

Python API example:-

# export_dir is the path where your TF model is saved.
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_model = converter.convert()
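The converter returns the FlatBuffer as bytes, which you can then write out to a .tflite file (the file name below is an illustrative assumption):

# write the converted model to disk
with open("/tmp/model.tflite", "wb") as f:
    f.write(tflite_model)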

CLI example

bazel run //tensorflow/lite/python:tflite_convert -- \
--saved_model_dir=/tmp/mobilenet_saved_model \
--output_file=/tmp/mobilenet.tflite
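If TensorFlow is installed via pip, the same conversion can also be done with the tflite_convert tool instead of Bazel (assuming the tool is on your PATH):

tflite_convert \
  --saved_model_dir=/tmp/mobilenet_saved_model \
  --output_file=/tmp/mobilenet.tflite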

3. Deploy Model:-

Now our model is ready and we have the '.tflite' file. We can deploy it to IoT devices, embedded devices, or mobile devices.

4. Run Inference:-

To perform inference with a TensorFlow Lite model, you must run it through an interpreter. The TensorFlow Lite interpreter serves the model on the device; it provides a wide range of interfaces, supports a wide range of devices, and is designed to be lean and fast. We can run models locally using the TensorFlow Lite interpreter. Once the model is loaded onto a device such as an embedded board or an Android or iOS device, we can take inferences from it.

Inference generally goes through the steps below (a minimal Python sketch follows the list).

a. Loading a model:- You must load the .tflite model file into memory.

b. Transforming data:- Raw input data generally does not match the input format expected by the model, so you need to transform it, for example by resizing an image or normalizing pixel values.

c. Running inference:- Execute inference over transformed data.

d. Interpreting output:- When you receive results from the model inference, you must interpret the tensors in a meaningful way that’s useful in your application.
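A minimal sketch of these four steps using the Python tf.lite.Interpreter (the model path and the random input are illustrative assumptions):

import numpy as np
import tensorflow as tf

# a. Load the .tflite model into memory.
interpreter = tf.lite.Interpreter(model_path="/tmp/liteexample.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# b. Transform the input to the shape and dtype the model expects.
input_data = np.random.random_sample(input_details[0]["shape"]).astype(np.float32)

# c. Run inference over the transformed data.
interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()

# d. Interpret the output tensor in an application-specific way.
output_data = interpreter.get_tensor(output_details[0]["index"])
print(output_data.shape)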

Example:-

In this example, we will use the pre-trained MobileNetV2 model with ImageNet weights.

# import the statements
import tensorflow as tf
import pathlib

# load the MobileNet model via tf.keras
model = tf.keras.applications.MobileNetV2(weights="imagenet", input_shape=(224, 224, 3))

# convert the model using the TF Lite converter
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# conversion done, now save the model
tflite_model_file = pathlib.Path("/tmp/liteexample.tflite")
tflite_model_file.write_bytes(tflite_model)

PreTrained Models

For more such stories

Let’s connect on Stackoverflow, LinkedIn, Facebook & Twitter.
