Microservices Running in Kubernetes Powered by Jenkins — Part 2

David Curran
Aug 31, 2018 · 6 min read

In Part 1 I showed how I utilised Jenkins to test some code, build it into a Docker image, push that image to Docker Hub and finally run the container to push data into MongoDB. Now it's time to consume that data somehow.

Consume the data

I decided to create an “API server” to fetch data from MongoDB and return it as a JSON object. I built it with Django simply because I have experience with it and didn't want to spend ages building an API from scratch when building an API is not the purpose of the project. I'll only be using a tiny fraction of what Django has to offer, but speed of build was the prime reason here. To that end the actual API application is very small and simple; here is the entirety of views.py:

import os
import json

from pymongo import MongoClient
from django.http import JsonResponse

version = "v0.2"


def mongodata(request):
    # Pull every document from the "data" collection and return it as JSON
    response = {}
    response['status'] = 200
    response['version'] = version
    client = MongoClient(os.getenv('MONGO_CLIENT'), int(os.getenv('MONGO_PORT')))
    db = client['data']
    db.authenticate(os.getenv('MONGO_USER'), os.getenv('MONGO_PASS'))
    collection = db['data']
    # Exclude the internal _id field so the result is JSON serialisable
    response['data'] = list(collection.find({}, {'_id': 0}))
    print(type(response))
    return JsonResponse(response)


def statusHandler(request):
    # Simple health check endpoint
    response = {}
    response['status'] = 200
    response['version'] = version
    response['message'] = "API is running"
    return JsonResponse(response)

Again I've used environment variables for the connection details, and these have the same security flaws as previously mentioned, although this time I am using a read-only user to connect to MongoDB. There are two available URLs: /status and /data. /status calls statusHandler(), which just returns that the API is running along with a 200 status and a version number. The main function is mongodata(), which does a little more: it connects to the MongoDB instance, pulls all documents from the “data” collection and returns them in response['data'] along with a 200 status and the version.
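
The URL routing is just as small. Below is a minimal sketch of what urls.py could look like, assuming Django 2.x and that the views above live in an app called api (both assumptions on my part; the real routing is in the repository linked at the end):

from django.urls import path

from api import views  # "api" is an assumed app name

urlpatterns = [
    path('status/', views.statusHandler),  # GET /status/ returns the health check
    path('data/', views.mongodata),        # GET /data/ returns the MongoDB documents
]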

Microservice the API

Now there is an API available to pull the data that was generated in Part 1. It just returns some fairly simple JSON data with no method to view it in a user-friendly way, and it isn't yet a microservice as I defined previously. Very similar steps were followed to turn this into a container as in Part 1: use Jenkins to clone the Git repo, build the Docker image and push that image to Docker Hub. There are no tests run this time due to the small size and scope of this codebase.

This should look familiar

I’ve used the same Kubeconfig but this time a different YAML file.

Same but different

The deploy.yml file specifies everything I want for a deployment, as this time we do want features such as a ReplicaSet so that if a pod goes away, it comes back again. It also specifies a service exposing port 8000 on the pods via the worker's external IP.

apiVersion: v1
kind: Service
metadata:
  name: django-app
spec:
  ports:
  - port: 8000
  selector:
    app: django-app
  clusterIP: 10.103.65.76
  externalIPs:
  - 10.0.0.52
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-app
spec:
  selector:
    matchLabels:
      app: django-app
  replicas: 1
  template:
    metadata:
      labels:
        app: django-app
    spec:
      containers:
      - name: django-app
        image: schizoid90/data-django-app
        ports:
        - containerPort: 8000
        env:
        - name: MONGO_CLIENT
          value: ${MONGO_CLIENT}
        - name: MONGO_PORT
          value: ${MONGO_PORT}
        - name: MONGO_USER
          value: ${MONGO_USER}
        - name: MONGO_PASS
          value: ${MONGO_PASS}

As before, the ${X} variables are replaced by Jenkins when I run the deploy job. Now if I forward a port of my choice to port 8000 on the worker, I can access the API:

curl -k "http://213.39.39.178:2035/status/"
{"status": 200, "version": "v0.2", "message": "API is running"}

Display the data

Now we can access and consume the API, but I wanted to show how this can be used to display the data in a pretty way. For that I decided to use a simple website running under Nginx with Bootstrap and jQuery, again because I have experience with these; I like DataTables for quickly creating pretty tables and jQuery for easily fetching data with Ajax.

Like with MongoDB I used an out-of-the-box image, “nginx:1.14-alpine”, and created a service to access port 80 on the pods via port 8085 on the host. Obviously I couldn't make do with the default Nginx page if I want to display my own data, so I created a persistent volume and an associated claim that the Nginx deployment could use with its containers. Below are the configurations for this:

service.yml

apiVersion: v1
kind: Service
metadata:
  name: web-http
spec:
  ports:
  - protocol: TCP
    port: 8085
    targetPort: 80
  selector:
    app: web
  externalIPs:
  - 10.0.0.52

vol.yml

kind: PersistentVolume
apiVersion: v1
metadata:
  name: nginx-vol
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10G
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/kubevols/nginx"

claim.yml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

deploy.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web
spec:
  selector:
    matchLabels:
      app: web
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx:1.14-alpine
        name: web
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: web-storage
          mountPath: /usr/share/nginx/html
      volumes:
      - name: web-storage
        persistentVolumeClaim:
          claimName: nginx-claim

The deploy.yml file sets up the deployment as usual and, at the end, specifies which claim to use as persistent storage. Now all I had to do was place my web files into /mnt/kubevols/nginx and the site would work.

/mnt/kubevols is an extra disk attached to the VM I am using as my worker node. This could just as easily be a Samba share, GlusterFS, NFS or some other form of shared storage in situations where there is more than one worker node.

As with everything in this project, the web files are simple. The index.html just sets up the table, JavaScript and CSS; index.js uses Ajax to get the data from the API container and then displays it in the table.

Full Web Code

Some extra config

Something to note about CORS: a properly configured web server does not allow Cross-Origin Resource Sharing by default, so the browser blocks Ajax requests made against an external URL with the following error:

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at

In order to allow jQuery to get the data from my API node I installed django-cors-headers in the django-app image by adding it to requirements.txt, and added the following to settings.py:

INSTALLED_APPS = (
    ...
    'corsheaders',
    ...
)

MIDDLEWARE = [
    'corsheaders.middleware.CorsMiddleware',
    ...
]

CORS_ORIGIN_ALLOW_ALL = True

The last line allows all hosts to query the server; this was done to allow the jsfiddle above to work correctly, but in a production environment it would be prudent to only allow known hosts/IPs with the CORS_ORIGIN_REGEX_WHITELIST or CORS_ORIGIN_WHITELIST options. More information can be found on the GitHub page linked to above.
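
For example, a more locked-down settings.py might look something like the sketch below; the origin is hypothetical (the web service exposed on the worker's IP), and the exact whitelist format depends on the django-cors-headers version in use:

# Restrict CORS to known origins instead of allowing everything
CORS_ORIGIN_ALLOW_ALL = False

# Hypothetical origin: the Nginx web service on the worker node.
# Newer versions of django-cors-headers expect a scheme here, older
# versions take a bare host, so check the docs for the installed version.
CORS_ORIGIN_WHITELIST = [
    'http://10.0.0.52:8085',
]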

Conclusion

So there we have it: a simple demonstration of “microservices” in action, running on the distributed container platform Kubernetes, with builds and deployments controlled by the automation platform Jenkins.

As I have said several times, this is a simple, small-scale project that showcases some of the features that can be used to create and manage larger scale applications. The main aim was to build a tool that creates and stores some data, exposes that data through a REST API and consumes the data through a web interface.

One of the huge advantages of microservices not touched upon here is the use of “versioned services”. By that I mean keeping multiple versions of the same service running and directing front ends and other consumers to a specific version, to keep backwards compatibility and allow for easier rollback. Perhaps something to look into as I keep learning. As well as this, I used a pre-existing Jenkins VM that I had left over from previously playing around with it; it's entirely feasible that this would be part of the “microservices” model and be deployed as a container as well.


If you're interested you can see the code in my Bitbucket account and the resulting Docker images in my Docker Hub account:

Code

Images
