Easiest way to serve TensorFlow models in production using Docker/Kubernetes

Swapnil Pote
Nov 16, 2020 · 6 min read

One of the easiest ways to move your model from experiment to production is TF Serving with Docker. In this blog post, we are going to explore how to build your own Docker image of TF Serving, which we will push to a Docker registry like Docker Hub.

Why should one build their own Docker image? Let’s explore the pros and cons of this method…
Cons
1. Hmmm… hard to think of one. Maybe you like installing the same stuff again and again on each server of your pipeline?
2. You might have to invest in a paid account on Docker Hub or another platform if you want to keep your images private.
3. You don’t know Docker yet… OK, it’s 2020, are you sure about that?

Pros
1. No need to install any specific version of TF Serving on any machine, whether a local development box or a cloud server.
2. It’s easy to maintain different versions of the same model in one place, like Docker Hub.
3. We can use these images in a Kubernetes cluster.

Here, I am not going to discuss the benefits of TF Serving; that deserves a separate blog post altogether…
So stay tuned for it. There, I will also show you how to put preprocessing into TF Serving along with your model.

Steps involved in building your own image:
1. Install Docker
2. Save the TensorFlow (Keras) model in TF Serving format
3. Write the configuration in a Dockerfile to build the image
4. Build your image
5. Push the image to Docker Hub

Now it’s time for the details.
1. Install Docker

Use the official Docker installation guide (https://docs.docker.com/engine/install/) OR watch this Docker video (https://www.youtube.com/watch?v=3c-iBn73dDE).
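Once installed, a quick way to verify that Docker is working on your machine:

docker --version
docker run hello-world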

2. Save the TensorFlow (Keras) model in TF Serving format
Now it’s time to save your model… Don’t worry, the training process remains the same. Once you are satisfied with the model’s performance, just use the following lines of code to store the model in TF Serving format.

import tensorflow as tf

# Bump this number for every new export so older versions stay available
version = 1
export_path = f"./tf_serving/{version}/"

# Save the trained Keras model in the SavedModel format that TF Serving expects
tf.keras.models.save_model(model_object,
                           export_path,
                           overwrite=True,
                           include_optimizer=True,
                           save_format=None,
                           signatures=None,
                           options=None)

Keep in mind that you will accumulate different versions of the model over time; that’s why you should always bump the “version” number in the export path, which lets you fall back to a previous version of the model when required.
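If you want to sanity-check the export before building anything, the saved_model_cli tool that ships with TensorFlow can print the serving signature of the folder we just saved (assuming version 1 as above):

saved_model_cli show --dir tf_serving/1 --tag_set serve --signature_def serving_default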

3. Write the configuration in a Dockerfile to build the image
The time has finally arrived to use a Dockerfile to build our own Docker image from the just-saved model, and host it on Docker Hub or another Docker registry.

Before writing the configuration inside the Dockerfile, let’s look at the other way the official TF Serving Docker image is commonly used.

docker run -d -p 8501:8501 --mount type=bind,source="/mnt/c/Users/Scooby Doo/Projects/malaria_detection/tf_serving",target=/models/malaria -e MODEL_NAME=malaria -t tensorflow/serving

So what’s the problem with this approach?
1. It doesn’t translate to a Kubernetes setup, which (as far as I know) needs a proper self-contained image rather than a host folder attached to the stock tensorflow/serving image.
2. You also have to ship the TF Serving files separately and then attach them to the stock image, and somehow maintain those files on every host. That becomes a bigger challenge as the project grows and requires more and more models and their respective TF Serving files.

Now let’s move to our solution:
Write a Dockerfile that builds our own Docker image without maintaining the TF Serving files separately.

# Pull latest image of TensorFlow Serving
FROM tensorflow/serving:latest

COPY . /

# Expose ports
# gRPC
EXPOSE 8500
# REST
EXPOSE 8501

ENTRYPOINT ["/usr/bin/tf_serving_entrypoint.sh"]
CMD ["--model_name=malaria", "--model_base_path=/tf_serving"]

Now it’s time to decode this file so it is easy to repeat for every new project.

FROM tensorflow/serving:latest

This line sets the base image from which our brand new image will be built. The base image already contains all the dependencies required to run TF Serving models on any machine.
See, you don’t need to install anything…

COPY . /

This copies the tf_serving folder, with all of its files and subfolders, from the build context into the new image we are building.

# Expose ports
# gRPC
EXPOSE 8500
# REST
EXPOSE 8501

The EXPOSE instruction declares which ports the container listens on for communication with the outside world; the actual mapping happens through a single runtime flag when the container is started. We will see that flag when we run this Docker image, so don’t worry about it for now…
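As a quick preview (using the image name we will build in step 4), publishing the REST port on a different host port is a single flag at runtime:

# Map container port 8501 (REST) to host port 9000
docker run -d -p 9000:8501 swapnilpote/malaria:1.0
# The REST API is now reachable at http://localhost:9000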

ENTRYPOINT ["/usr/bin/tf_serving_entrypoint.sh"]
CMD ["--model_name=malaria", "--model_base_path=/tf_serving"]

Moving on to the final stage of our Dockerfile.
Before that, let’s understand the differences between ENTRYPOINT, CMD and RUN:

  • ENTRYPOINT configures a container that will run as an executable.
  • CMD sets the default command and/or parameters, which can be overwritten from the command line when the Docker container runs.
  • RUN executes command(s) in a new layer and creates a new image. E.g., it is often used for installing software packages.
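This distinction matters for us: since our flags live in CMD rather than ENTRYPOINT, they can be replaced at docker run time without rebuilding the image. For example (--enable_batching is a standard tensorflow_model_server flag, shown purely as an illustration):

docker run -d -p 8501:8501 swapnilpote/malaria:1.0 --model_name=malaria --model_base_path=/tf_serving --enable_batching=true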

Now, what’s inside the tf_serving_entrypoint.sh file? Basically, this file contains the commands needed to start the TF Serving model server. When you follow this approach to building an image, you don’t need to touch this file at all…
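For reference, at the time of writing the script baked into the official image looks roughly like this (treat it as an illustration and check the image you actually pull):

#!/bin/bash
tensorflow_model_server --port=8500 --rest_api_port=8501 \
  --model_name=${MODEL_NAME} --model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME} \
  "$@"

The "$@" at the end forwards our CMD arguments to tensorflow_model_server, which is why overriding “--model_name” and “--model_base_path” through CMD works.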

If you do need to change the startup behaviour, apply the changes through the CMD instruction, as we have done here. Our two modifications in this case are:

  1. Always give the model a unique name via the “--model_name” parameter.
  2. Since we copied the entire tf_serving folder into the new image, we set “--model_base_path” to “/tf_serving”. Be careful here: whichever folder holds the exported TF Serving files on your local machine, that folder name is what you must pass as the parameter value.

4. Build your image
Now the time has come to get our own image… Hmmm… but how?

Using the terminal of your OS, navigate into the folder that contains both the Dockerfile above and the tf_serving folder. Make sure the file and the folder sit side by side at the same level.
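Assuming you exported version 1 as in step 2, the layout should look roughly like this (“your_project” is a placeholder; a SavedModel may also contain an assets/ folder):

your_project/
├── Dockerfile
└── tf_serving/
    └── 1/
        ├── saved_model.pb
        └── variables/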

docker build -t swapnilpote/malaria:1.0 .

Once you run the above command, it will create your own Docker image, named malaria with version tag 1.0.
Let’s unpack this command…

“docker build” builds the image, as you might have guessed, and the trailing “.” tells Docker to use the current folder as the build context. The “-t” parameter (or its long form “--tag=”) gives a specific name to your image, in our case “malaria:1.0”.

OK… but what’s the point of prefixing “swapnilpote/” to imagename:versiontag? As mentioned, we are going to push this image to Docker Hub, and the image name has to indicate where it will be pushed. Always replace “swapnilpote/” with your own Docker Hub username.
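If you have already built the image under a different name, there is no need to rebuild; docker tag gives the same image an additional name (“yourusername” is a placeholder):

docker tag swapnilpote/malaria:1.0 yourusername/malaria:1.0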

To list the Docker images on your local machine, type the following command.

docker images

If your image appears in that list, it has been created successfully.

5. Push the image to Docker Hub
If you don’t have an account on Docker Hub, visit https://hub.docker.com/ to create one. Then open a terminal window again to authenticate and push the image.

docker login

Type your username and password. Once you are authenticated, you will see a “Login Succeeded” message. Time to push…

docker push swapnilpote/malaria:1.0

You will see the image being pushed in the terminal. Once it is done, check your Docker Hub profile to confirm the image is present.

Now everything is done…
But wait, how do we use this image, or sanity-check it, before putting it inside a Kubernetes cluster?

Type this command on any machine to start a container from the Docker image and begin using it locally or in a cloud environment. You don’t need Kubernetes to use this image in production; that choice is dictated by your system architecture.

docker run -d -p 8501:8501 --name=malaria swapnilpote/malaria:1.0

If you watched the video mentioned in step 1, you should already have a pretty clear idea of what this command does.
Still, if anyone needs clarification, mention it in the comments and I will edit this last part of the blog post to clear up any doubts.
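To quickly verify the running container, here is a minimal sketch of a prediction request using TF Serving’s standard REST API. The input shape depends on your model’s serving signature, so the dummy 128x128 RGB payload below is only an assumption:

import json

import numpy as np
import requests

# TF Serving exposes each model at /v1/models/<model_name>
BASE_URL = "http://localhost:8501/v1/models/malaria"

# Check that the model is loaded and ready to serve
print(requests.get(BASE_URL).json())

# Build a dummy batch of one image; the shape here is an assumption
# and must match your model's serving signature
batch = np.random.rand(1, 128, 128, 3).tolist()

# Send the batch to the default serving signature's predict endpoint
response = requests.post(f"{BASE_URL}:predict", data=json.dumps({"instances": batch}))
print(response.json())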

For the final step, making predictions against the Docker container or a Kubernetes pod, check this file:
https://github.com/swapnilpote/malaria_detection/blob/master/run.py

You can find the complete code for this blog post in this GitHub repo (https://github.com/swapnilpote/malaria_detection).

Kindly post your thoughts and suggestions in the comments. Stay tuned for more such blog posts on Machine Learning from development to production, with the relevant best practices.
