Deploying Object Detection Model with TensorFlow Serving — Part 1
Object detection models are among the most sophisticated deep learning models, capable of localizing and classifying objects in real time in both images and videos. But what good is a model if it cannot be used in production?
Thanks to the wonderful folks at TensorFlow, we have TensorFlow Serving, which is capable of serving our models in production. There are some really good articles on TensorFlow Serving to get you started; this article focuses on how to serve object detection models specifically with TF Serving. It is motivated (in my opinion) by the absence of a good online resource that explains how to create a production-ready object detection model, set up a TF Serving environment using Docker, serve the model, and write a client-side script to access it. In the spirit of not reinventing the wheel, I have borrowed from the resources of the Object Detection API for this tutorial. This assumes you have cloned the Object Detection API from TensorFlow; if not, do the following:
```shell
# Clone the tensorflow/models repo, which contains the Object Detection API
git clone https://github.com/tensorflow/models.git
```
This is a multi-part tutorial, with this first part focusing on creating a production-ready model for TF Serving.
Create production-ready model for TF-Serving
Assuming you have trained your object detection model using TensorFlow, you will have the following four files saved to disk: the model.ckpt.data-00000-of-00001, model.ckpt.index and model.ckpt.meta files, plus the checkpoint file that records the most recent checkpoint paths.
These files can be used for inference directly, or we can use the freeze_graph.py script to convert the model into a frozen graph consisting of the model architecture and weights in a single file. This is useful for testing on your local machine, but not suitable for a production environment.
To create a model ready for serving, we will tweak the exporter.py file available in the Object Detection API GitHub repository. The original script in the repo does not save the variables that are required for serving. Use the following exporter.py script instead of the original TensorFlow one.
The following changes have been made to the above exporter.py:
1. Change to the _write_saved_model method. This is required because the original Python script does not save the variables that are needed to serve the model. Instead of using the frozen_graph_def, we now use the trained_checkpoint_prefix, which holds the weights of the model as variables (thanks to this GitHub issue).
2. Change the calling function to pass trained_checkpoint_prefix instead of frozen_graph_def, as follows:
3. Comment out code that saves files to disk that are not required during serving.
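The modified _write_saved_model boils down to restoring the checkpoint into a live session and exporting it with SavedModelBuilder. A minimal sketch of that idea follows (TF 1.x API assumed; the function signature and signature-def key here are illustrative, not copied from the original exporter.py):

```python
def write_saved_model(saved_model_path, trained_checkpoint_prefix,
                      inputs, outputs):
    """Sketch: export a servable SavedModel from a checkpoint prefix
    instead of a frozen graph def. Assumes the detection graph has
    already been built, with `inputs` a placeholder tensor and
    `outputs` a dict of name -> output tensor."""
    import tensorflow as tf  # TF 1.x API assumed

    saver = tf.train.Saver()
    with tf.Session() as sess:
        # Restore the trained weights as variables (the key difference
        # from the original script, which used frozen_graph_def).
        saver.restore(sess, trained_checkpoint_prefix)

        builder = tf.saved_model.builder.SavedModelBuilder(saved_model_path)

        tensor_info_inputs = {
            'inputs': tf.saved_model.utils.build_tensor_info(inputs)}
        tensor_info_outputs = {
            name: tf.saved_model.utils.build_tensor_info(t)
            for name, t in outputs.items()}

        signature = tf.saved_model.signature_def_utils.build_signature_def(
            inputs=tensor_info_inputs,
            outputs=tensor_info_outputs,
            method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)

        # Save both the graph and the variables, so TF Serving can load them.
        builder.add_meta_graph_and_variables(
            sess,
            [tf.saved_model.tag_constants.SERVING],
            signature_def_map={
                tf.saved_model.signature_constants
                .DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
        builder.save()
```

The essential point is add_meta_graph_and_variables: unlike writing a frozen graph, it persists the variables directory that TF Serving needs.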
Now you are all set to create a model that can be used for serving. The following code can help you achieve that:
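A sketch of such an export script, assuming the Object Detection API (with the modified exporter.py from above) is on your PYTHONPATH; the function name export_serving_model and the example arguments are placeholders, not from the original:

```python
def export_serving_model(config_path, input_checkpoint,
                         model_version_id, export_base_dir):
    """Sketch: export a trained detection checkpoint as a versioned,
    servable SavedModel. All argument values are placeholders."""
    import os
    from object_detection import exporter
    from object_detection.utils import config_util

    # Build a dictionary from the pipeline configuration file,
    # then a protocol buffer object from that dictionary.
    configs = config_util.get_configs_from_pipeline_file(config_path)
    pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)

    # Versioned output directory, e.g. <export_base_dir>/1/
    # (assumption: version handling done here rather than inside exporter)
    output_dir = os.path.join(export_base_dir, str(model_version_id))

    exporter.export_inference_graph(
        input_type='image_tensor',
        pipeline_config=pipeline_proto,
        trained_checkpoint_prefix=input_checkpoint,
        output_directory=output_dir)
```

For example, export_serving_model('training/pipeline.config', 'training/model.ckpt-50000', 1, 'serving_model') should leave the servable files under serving_model/1/.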
Explanation of the above code,
- Each object detection model has a configuration which needs to be passed to export_model.py. It contains information about the model architecture. For more information, refer to this link.
- The get_configs_from_pipeline_file method creates a dictionary from the configuration file, and the create_pipeline_proto_from_configs method creates a protocol buffer object from that dictionary.
- input_checkpoint is the path to the model.ckpt file of the trained model.
- model_version_id is an integer identifying the current version of the model. It is required by TF Serving for model versioning.
- object_detection.exporter will save the model in the following format:
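The exported directory should look roughly like this (the exact variables file names depend on how the weights are sharded):

```
1/
├── saved_model.pb
└── variables/
    ├── variables.data-00000-of-00001
    └── variables.index
```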
Here, 1/ is the model version, saved_model.pb contains the model architecture, and the variables directory holds the weights of the model. This model is now ready to be served.
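As a side note on why the numeric version directory matters: by default, TF Serving loads the highest-numbered version directory under the model base path. A pure-Python sketch of that selection logic (latest_version_dir is an illustrative helper, not part of TF Serving):

```python
import os
import tempfile

def latest_version_dir(model_base_path):
    """Return the subdirectory with the highest numeric name,
    mimicking TF Serving's default 'latest version' policy."""
    versions = [d for d in os.listdir(model_base_path) if d.isdigit()]
    if not versions:
        raise ValueError('no numeric version directories found')
    return os.path.join(model_base_path, max(versions, key=int))

# Example: a base path containing versions 1, 2 and 10
base = tempfile.mkdtemp()
for v in ('1', '2', '10'):
    os.makedirs(os.path.join(base, v))
print(latest_version_dir(base))  # ends with '10', not '2'
```

Note the key=int: a plain string max() would pick '2' over '10', which is why version directories must be treated numerically.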
In the next part, I will describe how to create your own TF-Serving environment using Docker and serve the above-created model.
About the author: Gaurav is a data science manager at EY’s Innovation Advisory in Dublin, Ireland. His interests include building scalable machine learning systems for computer vision applications. Find more at gauravkaila.com