CATS vs DOGS + FLASK + DOCKER + KUBERNETES

Ever wondered how to build a machine learning web app and then deploy it, scale it, and manage it for everyone in the cloud? You have come to the right place. In this post I’ll go through the basic concepts of containerizing your ML web app and deploying it on Google Cloud using Kubernetes Engine.

You can find the complete code here. For a quick demo, visit http://130.211.229.36/ (upload only JPG images).

Prerequisites: an understanding of Docker.

First, what is Kubernetes? Kubernetes is an open-source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user’s declared intentions. Using the concepts of labels and pods, it groups the containers that make up an application into logical units for easy management and discovery.

Kubernetes Architecture

An overview of a Kubernetes cluster with a master node and worker nodes. All cluster activities are controlled from the master node, which runs the Kubernetes API server. kubectl is a command-line interface for running commands against Kubernetes clusters.

Each node can be labeled and given tags. Your containerized app runs inside a pod on a worker node; the deployment is controlled from the master node.

A NODE

A node running many pods, each with its own pod IP address.

Pods — these are the basic unit of the architecture, and a pod usually contains one or more containers. Each pod in Kubernetes is assigned a unique pod IP address within the cluster and can be managed manually through the Kubernetes API. A pod can define a volume, such as a local disk directory or a network disk, and expose it to the containers in the pod.
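As a concrete illustration, here is a minimal pod manifest that defines a volume and exposes it to its container. The names and the image here are placeholders, not from this project:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo
spec:
  volumes:
    - name: scratch
      emptyDir: {}            # pod-local scratch disk, shared by the pod's containers
  containers:
    - name: web
      image: nginx:alpine
      ports:
        - containerPort: 80   # the pod IP plus this port reaches the container
      volumeMounts:
        - name: scratch
          mountPath: /data    # the volume exposed inside the container
```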

And finally comes exposing your app using a Service. When a worker node dies, the Pods running on that node are also lost. A ReplicaSet can then dynamically drive the cluster back to the desired state by creating new Pods to keep your application running. The Service itself is defined in a service.yaml file.
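For reference, a LoadBalancer Service like the one created later in this post could be written as a service.yaml roughly like this. The name, selector, and ports are assumptions chosen to match the commands used in this post:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: image-classifier
spec:
  type: LoadBalancer        # provisions an external load balancer with a public IP
  selector:
    app: image-classifier   # routes traffic to pods carrying this label
  ports:
    - port: 80              # port exposed by the load balancer
      targetPort: 3000      # port the Flask app listens on inside the pod
```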

For better understanding the kubernetes infrastructure I recommend a video which explains all the concepts in a simplified way link .

Let’s begin our web app.

I trained my neural network model, saved its architecture to JSON, and saved the weights to an h5 file. I wrote use_model.py to load the trained model from the JSON and to predict on a new image.
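I haven’t reproduced the full use_model.py here, but its core looks roughly like the sketch below. model_from_json and load_weights are the standard Keras calls for this; the file names and the label_from_score helper are illustrative assumptions:

```python
import numpy as np


def load_trained_model(json_path="model.json", weights_path="model.h5"):
    """Rebuild the network from its JSON architecture and load the saved weights."""
    # Imported lazily so the pure helpers below work even without Keras installed.
    from tensorflow.keras.models import model_from_json

    with open(json_path) as f:
        model = model_from_json(f.read())
    model.load_weights(weights_path)
    return model


def label_from_score(score, threshold=0.5):
    """Map the model's sigmoid output to the label the web app returns."""
    return "You are a DOG" if score >= threshold else "You are a CAT"


def predict_image(model, image_array):
    """image_array: a preprocessed array of shape (height, width, channels)."""
    # Add a batch dimension, run the model, and read the single sigmoid score.
    score = float(model.predict(np.expand_dims(image_array, axis=0))[0][0])
    return label_from_score(score)
```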

I used Flask to create the web app. The application is simple: it takes an image, predicts cat or dog using use_model.py, and returns either “You are a DOG” or “You are a CAT”. The app.py goes like this (I changed host to 0.0.0.0 when creating the container):
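The actual app.py is in the repo; a stripped-down sketch of the same idea is below. The route names are assumptions, and the classify function stands in for the call into use_model.py:

```python
from flask import Flask, request

app = Flask(__name__)


def classify(image_bytes):
    # Placeholder for the use_model.py prediction call.
    # In the real app, this preprocesses the upload and runs the Keras model.
    return "You are a CAT"


@app.route("/", methods=["GET"])
def index():
    return "Upload a JPG image to /predict"


@app.route("/predict", methods=["POST"])
def predict():
    uploaded = request.files["image"]
    return classify(uploaded.read())


if __name__ == "__main__":
    # host="0.0.0.0" so the app is reachable from outside the container.
    app.run(host="0.0.0.0", port=3000)
```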

Then the most important part: write the Dockerfile so that you can build the Docker image.

RUN commands: apt-get update && install python3 …

COPY your current directory into the image, then pip install -r requirements.txt.

EXPOSE the port where your app.py is serving.

Then run the command python3 app.py. CMD is always appended to ENTRYPOINT to give you the final command to run.
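Putting those steps together, a Dockerfile along these lines would work. The base image, file names, and port are assumptions; adjust them to your project:

```dockerfile
FROM ubuntu:18.04

# Install Python 3 and pip
RUN apt-get update && apt-get install -y python3 python3-pip

# Copy the project into the image and install its dependencies
WORKDIR /app
COPY . /app
RUN pip3 install -r requirements.txt

# Port where app.py serves
EXPOSE 3000

# CMD is appended to ENTRYPOINT (if any) to form the final command
CMD ["python3", "app.py"]
```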

You can build the container locally and test your containerized Flask app (run these commands in your project directory):

docker build -t image_classifier:latest .
docker run -it -p 5000:3000 image_classifier

This will run your app.py, and with port forwarding in place you can access the web app in your browser at http://localhost:5000.

Now comes the most anticipated part.

Create an account at cloud.google.com and enable billing to be able to access Kubernetes Engine. Navigate to Kubernetes Engine and click the Activate Cloud Shell button at the top of the console window. You will get a console at the bottom where you can run commands; it comes preinstalled with gcloud, docker, and kubectl. Once in the console:

git clone <your_project>
cd your_project

Set the PROJECT_ID environment variable in your shell by retrieving the pre-configured project ID on gcloud, by running the command below:

export PROJECT_ID="$(gcloud config get-value project -q)"

The value of PROJECT_ID will be used to tag the container image for pushing it to your private Container Registry.

Now you can build the container image:

docker build -t gcr.io/${PROJECT_ID}/<image_name>:1.0.0 .
docker push gcr.io/${PROJECT_ID}/<image_name>:1.0.0

It’ll take some time to build; after the build you can verify with the docker images command. Now you can create your cluster:

Building the container image

Creating a container cluster:

Now that the container image is stored in a registry, you need to create a container cluster to run the container image. A cluster consists of a pool of Compute Engine VM instances running Kubernetes.

gcloud container clusters create <any_name> --zone=us-central1-f --num-nodes=2

It’ll take a while to complete; after completion you can verify with the gcloud compute instances list command.

Deploying your application:

kubectl run <some name> --image=gcr.io/${PROJECT_ID}/<image_name>:1.0.0 --port 3000

Use the kubectl get pods command to see the Pod created by the Deployment.

Exposing your application to the Internet:

kubectl expose deployment <some name> --type=LoadBalancer --port 80 --target-port 3000

The kubectl expose command above creates a Service resource, which provides networking and IP support to your application’s Pods. The --port flag specifies the port number configured on the load balancer, and the --target-port flag specifies the port number used by the Pod created by the kubectl run command in the previous step.

kubectl get service

The output will give you an external IP (in the EXTERNAL-IP column):

Get your external IP

Once you’ve determined the external IP address for your application, copy the IP address. Point your browser to this URL (such as http://130.211.229.36) to check if your application is accessible.

On visiting external IP

NOTE: 1. Wherever I have used <>, feel free to substitute your desired names. 2. I have also written YAML files in my GitHub if you are using DigitalOcean or any other cloud platform.

I hope this blog was helpful. For any doubts and queries, comment below.