Step-by-step guide to deploying a Python application using Kubernetes

Somansh Kumar
Published in Akeo · Dec 27, 2019

As a developer, you probably want to run your application on Kubernetes but may not know how to do it. So, we decided to solve this problem for you and wrote this blog on how to deploy a Python Flask web application using Kubernetes and automate the deployment through Jenkins.

Flask is a web framework written in Python. The Flask framework provides you with the tools, libraries, and technologies to build a web application, whether that is a few web pages, a blog, or a simple web service. The framework does not require any external tools or libraries, which keeps it lightweight. However, Flask does support extensions that can add features to the web application.

Before beginning the deployment, ensure that you have a sample Python Flask application ready. Sample Nginx and uWSGI configuration files are also required to host the application on Kubernetes.
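If you do not have one yet, a minimal Flask application is enough to follow along. The sketch below assumes a single file named sample.py exposing a callable named app, which matches the module = sample:app entry used in the uwsgi.ini file later in this guide.

# sample.py - a minimal Flask application (illustrative example for this guide)
from flask import Flask

# "app" is the WSGI callable that uWSGI loads via "module = sample:app"
app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from Flask on Kubernetes!"

if __name__ == "__main__":
    # Local development only; inside the container the app is served by uWSGI and Nginx
    app.run(host="0.0.0.0", port=5000)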

Note: We assume that you have basic knowledge of how to set up Docker, Kubernetes for container orchestration, and Jenkins. (If not, you can refer to our blog How to setup Dockers, Kubernetes and Jenkins.)

Step 1: Create a simple Dockerfile

  • In order to host and orchestrate our application on Kubernetes, we have to build a Docker image that builds and runs the application from this repo.
  • Create a Dockerfile with the following contents and place it in the root location of the project repository.
## Dockerfile
# Pulling the base Python image from Docker Hub
FROM python:3.6
# Create the app directory as the working directory
WORKDIR /app
# Copy code from the root of the repo to the working directory
COPY . /app
# Install system packages and Python dependencies to build and serve the application
RUN apt-get update && \
    apt-get install -y python3-dev && \
    apt-get install -y build-essential && \
    pip install -r requirements.txt && \
    apt-get install -y nginx
# Copy the nginx conf file to the nginx configuration path
COPY nginx.conf /etc/nginx
# Give executable permission to the startup script and set it as the container command
RUN chmod 755 start.sh
CMD ["./start.sh"]

Step 2. Sample Nginx configuration file

Now that the Dockerfile is ready, we need to write the modifications in nginx.conf to serve the application.

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
    use epoll;
}

http {
    ## Basic Settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ## SSL Settings
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ## Logging Settings
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name localhost;
        root /var/www/html;

        location / {
            include uwsgi_params;
            uwsgi_pass unix:/tmp/uwsgi.socket;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
            proxy_set_header X-Forwarded-Proto https;
        }
    }
}

Step 3. Sample uwsgi.ini configuration file

[uwsgi]
#WSGI module and Callable app
module = sample:app
#User ID for nginx config
uid = www-data
#Group ID for nginx config
gid = www-data
#Serve as Master
master = true
#processes = Number of Process
processes = 3
#Socket path of WSGI
socket = /tmp/uwsgi.socket
#Modified Permissions
chmod-sock = 664
#Graceful reloading
lazy = true
#Auto cleanup the socket
vacuum = true
#For expected process signals at startup
die-on-term = true

Step 4. Creating the requirements file to install the required Python packages and the start.sh file to serve our application

We’ve got all the configuration files ready; now we can create the requirements file.

Requirements file

Flask==1.0.2
uWSGI==2.0.17.1

start.sh file configuration

#!/usr/bin/env bash
service nginx start
uwsgi --ini uwsgi.ini

We can build the image with the following command.

docker build -t <image_name> .
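Before moving on to Kubernetes, you can optionally verify the image locally. This is a quick sanity check, assuming the image name used above; inside the container Nginx listens on port 80, which is mapped to port 8080 on the host here.

# Run the image locally and map container port 80 to host port 8080 (optional sanity check)
docker run -d -p 8080:80 <image_name>
# The application should now respond on http://localhost:8080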

After this, we will create a Kubernetes template that uses this image, so that Kubernetes can host the application without any downtime.

Step 5: Creating the Kubernetes YAML file

  • Create a Kubernetes yaml file (python-flask.yaml) with the following contents and place it in the root location of the project repository.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-python
  labels:
    app: sample-python
spec:
  selector:
    matchLabels:
      app: sample-python
  template:
    metadata:
      labels:
        app: sample-python
    spec:
      containers:
      - image: ip_address_of_the_machine:5000/sample-python-flask:latest
        name: python-flask
        imagePullPolicy: Always
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: sample-python-svc
spec:
  ports:
  - name: "sample-python"
    targetPort: 80
    port: 80
    nodePort: 30275
    protocol: TCP
  selector:
    app: sample-python
  type: NodePort

Let’s discuss the Kubernetes terminology from the above snippet:

  • apiVersion: which version of the Kubernetes API is being used to create this object.
  • kind: what kind of object is being created.
  • metadata: data that helps uniquely identify the object, including a name string, UID, and optional namespace.
  • spec: the desired state for the object.
  • The selector field defines how the Deployment finds the pods to manage; it matches the label defined in the template (app: sample-python).
  • The template.spec defines the Docker image used to create the pod. Replace ip_address_of_the_machine:5000 with your private Docker registry details or your Docker Hub repository.
  • The containerPort defines the port on which the application runs inside the pod.

The Kubernetes Service section: the Kubernetes Service is responsible for enabling network access to a set of pods.

  • port: the port on which the Service is exposed; traffic is forwarded to the targetPort, the port the application listens on inside the pod.
  • nodePort: the port bound on the base machine (the cluster node), making the Service reachable from outside the cluster.
  • protocol: TCP, the default protocol for network communication.
  • selector: matches the pods created by the Deployment above.
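If you want to deploy the template manually (without Jenkins), you can apply it with kubectl and check that the Deployment and Service came up; the commands below assume the file name python-flask.yaml from Step 5.

# Apply the Deployment and Service defined in python-flask.yaml
kubectl apply -f python-flask.yaml
# Verify that the pod and the NodePort service are running
kubectl get pods -l app=sample-python
kubectl get svc sample-python-svc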

Post all the configurations, our repository will contain the application code along with the Dockerfile, nginx.conf, uwsgi.ini, requirements.txt, start.sh, and python-flask.yaml files.

Step 6: Jenkins configuration for automated deployment

  • Creating the Jenkins job:
  • Open the Jenkins URL and click on “New Item”.
  • Provide a name for the job and select the freestyle job type.
  • Provide the Git repository URL and credentials for cloning the project, and also specify the branch.
  • Select “Execute Shell” from the “Add build step” dropdown.
  • Provide the build steps as shown in the sketch below.
  • Save the job and execute it.
  • This will create the Docker image and push it to the private Docker registry, and the kubectl apply command will configure a pod with the python-flask application running inside it. The purpose of pushing the Docker image to the registry is that the image can then be used on any other Ubuntu machine to run the application.
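The following is a minimal sketch of what the “Execute Shell” build step could contain. The image name, registry address, and YAML file name are assumptions based on the examples in this guide, so adjust them to your setup.

# Build the Docker image from the repository root
docker build -t ip_address_of_the_machine:5000/sample-python-flask:latest .
# Push the image to the private Docker registry
docker push ip_address_of_the_machine:5000/sample-python-flask:latest
# Create or update the pod and service on Kubernetes
kubectl apply -f python-flask.yaml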

Note: You can push images to a dockerhub.com account too; you just need to replace the private Docker registry with your Docker Hub repository.

docker push <registry_name.dockerhub.com/sample-python-flask:latest>

Access the sample application at the URL http://ip_address_of_machine:30275
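For example, you can run a quick check from the command line, assuming the node’s IP address is reachable from your machine and substituting it for the placeholder:

curl http://ip_address_of_machine:30275/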

Here we have walked you through how to deploy a Python Flask web application using Kubernetes and automate the deployment through Jenkins. Hopefully, you have gained enough knowledge to do the same and will take it to the next level by launching your own app with Kubernetes.
