How to set up a CI/CD workflow for Node.js apps with Jenkins and Kubernetes

by Anas El Barkani

Introduction

Continuous Integration and Continuous Delivery are two of the practices that shape the DevOps philosophy today. In essence, they mean implementing development and integration workflows in which developers commit their changes to a central repository frequently, ensuring that every commit is functional and ready to be deployed to production.

These workflows use automation engines adapted to the tasks. In this tutorial I will explain how to set up a CI/CD workflow for a Node.js application hosted on Kubernetes. For the automation part I will use Jenkins, arguably one of the most popular CI/CD tools today.

I will assume that you already have a Kubernetes cluster with Helm and its Tiller server installed. I will also assume that you are familiar with Git and that you have a repository. In this article we will be using Bitbucket, but you can use any other Git platform, such as GitHub or GitLab.

Workflow and architecture

We’ll use the following workflow:

Notice that we will use Kubernetes namespaces as deployment environments. In order to simplify this tutorial I will use a 2-tier deployment architecture with integration and production environments, but you can extend the system to a 3-tier or 4-tier architecture easily.

To create these namespaces, run the following kubectl commands:

kubectl create namespace myapp-integration
kubectl create namespace myapp-production

Check that the namespaces have been created:

kubectl get namespaces

Setting up the workflow

The application we are going to launch will consist of two parts:

  • Nginx reverse proxy
  • Node.js app

The application code is hosted in a Git repository with the following directory structure:

/
   -src/
     index.js
   -tests/
     integration-tests.sh
     production-tests.sh
   -deploy/
     nginx-reverseproxy.yaml
     nodejs.yaml
   Jenkinsfile

Let’s look at these files in detail.

The src directory contains a basic Node.js application which creates an HTTP server on port 8080 and returns a message depending on the visited path:

  • “This is homepage” when visiting “/”
  • “Welcome to dir1, how can I help you ?” when visiting “/dir1”
  • “The information about person with id 1 is X” when visiting “/dir2/person/1”

// index.js
var http = require('http');
var url = require('url');

var server = http.createServer(function(req, res) {
  var page = url.parse(req.url).pathname;
  console.log(page);
  res.writeHead(200, {"Content-Type": "text/plain"});

  if (page == '/') {
    res.write('This is homepage');
  }
  else if (page == '/dir1') {
    res.write('Welcome to dir1, how can I help you ?');
  }
  else if (page == '/dir2/person/1') {
    res.write('The information about person with id 1 is X');
  }
  res.end();
});

server.listen(8080);
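Before deploying anything to Kubernetes, you can sanity-check the routing logic locally. The sketch below is a standalone demo (not part of the repository): it copies a trimmed-down version of the handler into a temporary directory, starts it, queries one route with curl, and shuts it down again. It assumes node and curl are available and that port 8080 is free.

```shell
# Sanity-check the routing logic locally (standalone demo, not part of the repo)
if command -v node >/dev/null 2>&1; then
  TMP=$(mktemp -d)
  cat > "$TMP/index.js" <<'EOF'
var http = require('http');
var url = require('url');
http.createServer(function (req, res) {
  var page = url.parse(req.url).pathname;
  res.writeHead(200, {"Content-Type": "text/plain"});
  if (page == '/') res.write('This is homepage');
  else if (page == '/dir1') res.write('Welcome to dir1, how can I help you ?');
  res.end();
}).listen(8080);
EOF
  node "$TMP/index.js" &
  APP_PID=$!
  sleep 1                                  # let the server bind port 8080
  RES=$(curl -s http://127.0.0.1:8080/dir1)
  echo "$RES"                              # Welcome to dir1, how can I help you ?
  kill "$APP_PID"
  rm -rf "$TMP"
else
  echo "node not found; skipping"
fi
```

The same kind of quick check is what the pipeline automates later with the test scripts.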

The tests directory contains two scripts that verify the application is up and running. For this tutorial, and for the sake of simplicity, the same tests are performed in the integration and production environments, but you can (and should) have different tests depending on the environment.

In this case we will use curl to test the three paths on the Node.js app. If the server returns the right response, then the test succeeds. Otherwise, the script will exit with an error.

integration-tests.sh:

#!/bin/bash
# integration-tests.sh
echo "Starting integration tests..."

echo "Testing root path..."
res1=$(curl -s http://$1/)
if [ "$res1" != "This is homepage" ]; then
  echo "Path / test failed. Aborting..."
  exit 1
fi

echo "Testing path /dir1 ..."
res2=$(curl -s http://$1/dir1)
if [ "$res2" != "Welcome to dir1, how can I help you ?" ]; then
  echo "Path /dir1 test failed. Aborting..."
  exit 1
fi

echo "Testing path /dir2/person/1 ..."
res3=$(curl -s http://$1/dir2/person/1)
if [ "$res3" != "The information about person with id 1 is X" ]; then
  echo "Path /dir2/person/1 test failed. Aborting..."
  exit 1
fi

echo "Integration tests succeeded."

production-tests.sh:

#!/bin/bash
# production-tests.sh
echo "Starting production tests..."

echo "Testing root path..."
res1=$(curl -s http://$1/)
if [ "$res1" != "This is homepage" ]; then
  echo "Path / test failed. Aborting..."
  exit 1
fi

echo "Testing path /dir1 ..."
res2=$(curl -s http://$1/dir1)
if [ "$res2" != "Welcome to dir1, how can I help you ?" ]; then
  echo "Path /dir1 test failed. Aborting..."
  exit 1
fi

echo "Testing path /dir2/person/1 ..."
res3=$(curl -s http://$1/dir2/person/1)
if [ "$res3" != "The information about person with id 1 is X" ]; then
  echo "Path /dir2/person/1 test failed. Aborting..."
  exit 1
fi

echo "Production tests succeeded."
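Since the two scripts are currently identical, a possible refactor is a single parameterized script. The sketch below is hypothetical (there is no smoke-tests.sh in the repository); it would be invoked as ./tests/smoke-tests.sh <host>:

```shell
#!/bin/bash
# smoke-tests.sh -- hypothetical merge of integration-tests.sh and production-tests.sh
HOST="${1:-}"

# Fetch a path and compare the response body against the expected string
check_path() {
  local path="$1" expected="$2"
  echo "Testing path ${path} ..."
  local actual
  actual=$(curl -s "http://${HOST}${path}")
  if [ "$actual" != "$expected" ]; then
    echo "Path ${path} test failed. Aborting..."
    exit 1
  fi
}

# Run the checks only when a host was supplied
if [ -n "$HOST" ]; then
  echo "Starting smoke tests against ${HOST} ..."
  check_path "/"              "This is homepage"
  check_path "/dir1"          "Welcome to dir1, how can I help you ?"
  check_path "/dir2/person/1" "The information about person with id 1 is X"
  echo "Smoke tests succeeded."
fi
```

The Jenkinsfile could then call the same script from both stages, passing each environment's IP address.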

The deploy directory contains all the YAML files necessary to deploy the app on Kubernetes. The file below contains the Kubernetes resources for the Nginx reverse proxy.

#nginx-reverseproxy.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-reverseproxy-service
spec:
  selector:
    app: nginx-reverseproxy
  type: LoadBalancer #LB to expose the service and get an external IP address
  ports:
  - name: http
    port: 80
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nginx-reverseproxy
  name: nginx-reverseproxy-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-reverseproxy
    spec:
      containers:
      - image: nginx:1.13
        name: kubecont-nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: config-volume
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: config-volume
        configMap:
          name: nginx-reverseproxy-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-reverseproxy-config
data:
  default.conf: |-
    server {
      server_name yourhostname.com;
      listen 80;

      #deny access to .htaccess files, if Apache's document root
      #concurs with nginx's one
      location ~ /\.ht {
        deny all;
      }

      location / {
        proxy_pass http://nodejs-service:8080; #this is the service described in nodejs.yaml
      }
    }

And finally, here’s the YAML for launching Node.js on Kubernetes. Note that the ConfigMap referenced by nodejs-deployment is created dynamically during the pipeline execution, as I will explain below.

# nodejs.yaml
apiVersion: v1
kind: Service
metadata:
  name: nodejs-service
spec:
  selector:
    app: nodejs
  ports:
  - name: http
    port: 8080
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nodejs
  name: nodejs-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nodejs
    spec:
      containers:
      - image: node:9.11
        name: kubecont-nodejs
        command: ["node", "/usr/src/app/index.js"]
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: app-volume
          mountPath: /usr/src/app
      volumes:
      - name: app-volume
        configMap:
          name: nodejs-app

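The dynamic ConfigMap creation mentioned above can be reproduced by hand. The sketch below mirrors the commands the pipeline runs for the integration namespace: --dry-run renders the manifest to deploy/cm.yaml without touching the cluster, and the subsequent apply creates everything at once. It must be run from the repository root, and it skips itself if kubectl or the src/ directory is absent:

```shell
# Render the nodejs-app ConfigMap manifest from src/ without touching the
# cluster (--dry-run), then apply it together with the other manifests.
# Run from the repository root; skipped when kubectl or src/ is missing.
if command -v kubectl >/dev/null 2>&1 && [ -d src ]; then
  kubectl create cm nodejs-app --from-file=src/ \
    --namespace=myapp-integration -o=yaml --dry-run > deploy/cm.yaml
  kubectl apply -f deploy/ --namespace=myapp-integration
else
  echo "kubectl or src/ not found; skipping"
fi
```

Because the ConfigMap is regenerated on every run, any change to src/ is picked up by the next deployment automatically.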
We also have a Jenkinsfile which describes the CI/CD workflow in Jenkins. The workflow consists of three stages:

  • Preparation stage: kubectl is installed and the app repository is cloned
  • Integration stage: a ConfigMap is created out of the Node.js app, and the Kubernetes resources are created. Then the application is tested and finally the environment is cleaned
  • Production stage: the same steps as in the integration stage are performed, except for cleaning since the Kubernetes resources should be kept in production.

So this is the Jenkinsfile:

//Jenkinsfile
node {
    stage('Preparation') {
        //Installing kubectl in the Jenkins agent
        sh 'curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl'
        sh 'chmod +x ./kubectl && mv kubectl /usr/local/sbin'
        //Clone git repository
        git url: 'https://bitbucket.org/advatys/jenkins-pipeline.git'
    }
    stage('Integration') {
        withKubeConfig([credentialsId: 'jenkins-deployer-credentials', serverUrl: 'https://104.155.31.202']) {
            sh 'kubectl create cm nodejs-app --from-file=src/ --namespace=myapp-integration -o=yaml --dry-run > deploy/cm.yaml'
            sh 'kubectl apply -f deploy/ --namespace=myapp-integration'
            try {
                //Gathering Node.js app's external IP address
                def ip = ''
                def count = 0
                def countLimit = 10

                //Waiting loop for IP address provisioning
                println("Waiting for IP address")
                while (ip == '' && count < countLimit) {
                    sleep 30
                    ip = sh script: 'kubectl get svc --namespace=myapp-integration -o jsonpath="{.items[?(@.metadata.name==\'nginx-reverseproxy-service\')].status.loadBalancer.ingress[*].ip}"', returnStdout: true
                    ip = ip.trim()
                    count++
                }

                if (ip == '') {
                    error("Not able to get the IP address. Aborting...")
                }
                else {
                    //Executing tests
                    sh "chmod +x tests/integration-tests.sh && ./tests/integration-tests.sh ${ip}"

                    //Cleaning the integration environment
                    println("Cleaning integration environment...")
                    sh 'kubectl delete -f deploy --namespace=myapp-integration'
                    println("Integration stage finished.")
                }
            }
            catch (Exception e) {
                println("Integration stage failed.")
                println("Cleaning integration environment...")
                sh 'kubectl delete -f deploy --namespace=myapp-integration'
                error("Exiting...")
            }
        }
    }
    stage('Production') {
        withKubeConfig([credentialsId: 'jenkins-deployer-credentials', serverUrl: 'https://104.155.31.202']) {
            sh 'kubectl create cm nodejs-app --from-file=src/ --namespace=myapp-production -o=yaml --dry-run > deploy/cm.yaml'
            sh 'kubectl apply -f deploy/ --namespace=myapp-production'

            //Gathering Node.js app's external IP address
            def ip = ''
            def count = 0
            def countLimit = 10

            //Waiting loop for IP address provisioning
            println("Waiting for IP address")
            while (ip == '' && count < countLimit) {
                sleep 30
                ip = sh script: 'kubectl get svc --namespace=myapp-production -o jsonpath="{.items[?(@.metadata.name==\'nginx-reverseproxy-service\')].status.loadBalancer.ingress[*].ip}"', returnStdout: true
                ip = ip.trim()
                count++
            }

            if (ip == '') {
                error("Not able to get the IP address. Aborting...")
            }
            else {
                //Executing tests
                sh "chmod +x tests/production-tests.sh && ./tests/production-tests.sh ${ip}"
            }
        }
    }
}

Install Jenkins

To install Jenkins we will use the Helm chart available in the official stable repository. To deploy Jenkins on Kubernetes with the necessary plugins, use the following command:

helm install --name my-jenkins-deployment stable/jenkins --version 0.16.1 --values jenkins-params.yaml

Where jenkins-params.yaml is as follows:

#jenkins-params.yaml
Master:
  Image: jenkins/jenkins
  ImageTag: 2.121
  ServiceType: LoadBalancer
  ServicePort: 80
  AdminPassword: admin_313
  InstallPlugins:
    - kubernetes:1.5.2
    - workflow-aggregator:2.5
    - workflow-job:2.21
    - credentials-binding:1.16
    - git:3.9.0
    - kubernetes-cli:1.0.0
    - custom-tools-plugin:0.5
    - bitbucket:1.1.7
rbac:
  install: true
  apiVersion: v1
Agent:
  Image: jenkins/jnlp-slave
  ImageTag: 3.19-1
  volumes:
    - type: EmptyDir
      mountPath: /usr/local/sbin

Once you run the command, Helm prints the instructions for retrieving the admin password. In our case it was the following command:

printf $(kubectl get secret --namespace default my-jenkins-deployment -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo

If everything is OK, you should see the Helm release deployed:

helm ls

After a while (approximately 30 seconds to 1 minute), run the command below to get an external IP address where you can access your Jenkins instance.

kubectl get services

In this case it’s 35.189.215.166. Navigate to this IP address in your browser and log in as admin with the password you got previously (don’t forget to change these credentials :)).

Configuring kubectl in Jenkins for Continuous Deployment

Now, let’s configure the Kubernetes credentials so Jenkins can deploy to our Kubernetes cluster.

We have to create a ServiceAccount in Kubernetes that will be used by Jenkins for deployment.

kubectl create sa jenkins-deployer
kubectl create clusterrolebinding jenkins-deployer-role --clusterrole=cluster-admin --serviceaccount=default:jenkins-deployer

Then run the command:

kubectl get secrets

You have to select the secret starting with “jenkins-deployer” and get the token associated with it:

kubectl describe secret jenkins-deployer-token-jvdmf
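Instead of copying the token out of the describe output, you can extract and decode it directly. The sketch below is a convenience, not a required step; the secret name (jenkins-deployer-token-jvdmf here) differs in every cluster, and the guard makes the snippet a no-op when kubectl is unavailable:

```shell
# Print the decoded ServiceAccount token; substitute your own secret name.
# (kubectl describe shows the token already decoded; jsonpath returns the
# raw base64 value, hence the base64 --decode.)
if command -v kubectl >/dev/null 2>&1; then
  kubectl get secret jenkins-deployer-token-jvdmf \
    -o jsonpath="{.data.token}" | base64 --decode
else
  echo "kubectl not found; skipping"
fi
```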

Go to Credentials in the left menu of the main page, then choose System, and Add domain. You can add the name of your company for example. Then click on Add credentials in the left menu.

Fill in the form as follows:

  • Kind: Secret text
  • Scope: Global
  • Secret: the token copied from jenkins-deployer-token-jvdmf (long string)
  • ID: jenkins-deployer-credentials (same as indicated in the function withKubeConfig in the Jenkinsfile)

Creating Jenkins Job

Go to the main page of Jenkins, click on New Item in the left menu. Then indicate a Job name and select Pipeline as Job type.

In the next screen, check the “Build when a change is pushed to Bitbucket” option. This will be used to automate pipeline triggering as explained later (although this feature is optional).

Finally, go to the Pipeline section and configure it as follows:

  • Definition: Pipeline script from SCM
  • SCM: Git
  • Repositories: Repository URL: your repository URL

And that’s it, just save your settings.

Launching the workflow

To launch the workflow select the recently created pipeline and click on “Build now” in the left menu. The pipeline will start in a few seconds.

It is also possible to trigger the workflow automatically when a user commits a change to the git repository. This is a recommended practice according to CI/CD principles.

In order to do that, you have to set up a webhook in your git repository. For Bitbucket, follow the instructions in the Bitbucket webhooks documentation. Notice that the URL you have to indicate in Bitbucket is JENKINS_URL/bitbucket-hook/ (the trailing slash matters).

In our case it is:

http://35.189.215.166/bitbucket-hook/

Conclusion

I have demonstrated a simple CI/CD workflow with Jenkins and Kubernetes. The main benefit of this stack is flexibility, since it allows you to implement practically any type of workflow. The workflow can be extended or made more complex depending on your development needs. In any case, the process will be far more efficient than a manual one.

I hope you find this post useful. Please comment and ask questions — I’d love to get your feedback.


Containerum.com is your source of knowledge on Kubernetes and Docker.
