Scaling Jenkins with Kubernetes

Gaurav Vashishth
7 min read · Sep 21, 2018


In this article, I will discuss how to scale Jenkins with Kubernetes: which components are required, and how to fit those components together into a complete, scalable solution.

Note: I will use AWS as the example and use its terminology, but the concepts can easily be applied to other cloud vendors too. A basic understanding of Kubernetes is required: what a pod, deployment, service, and ingress are, plus the basic commands. This article will give you a fair idea but won't go very deep into every step; I recommend reading the official documentation for a deeper understanding.

Jenkins has been a popular choice for CI/CD, and it has become a great tool for automating deployments across different environments. With a modern microservices-based architecture, different teams with frequent commit cycles need to test code in different environments before raising a pull request, so we need Jenkins to work as fast as possible. Below are a few important components we need to consider before we start designing the solution on top of Kubernetes:

  1. Setting up Jenkins in kubernetes cluster
  2. Jenkins access from outside the cluster
  3. Configure kubernetes plugin in Jenkins
  4. Pod scheduling in kubernetes cluster
  5. Capacity and cost management

Step 1: Setting up Jenkins in kubernetes cluster

Before starting, we should have a Kubernetes cluster running in a separate VPC. A separate VPC is not mandatory, but we can keep all DevOps tools that are common to the different environments running in their own VPC and then use VPC peering connections to allow access between them. Below is a reference diagram.

To set up Jenkins inside Kubernetes, create a jenkins-deploy.yaml file with the content below:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-master
    spec:
      containers:
      - name: jenkins-leader
        image: jenkins
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
        - name: docker-sock-volume
          mountPath: /var/run/docker.sock
        resources:
          requests:
            memory: "1024Mi"
            cpu: "0.5"
          limits:
            memory: "1024Mi"
            cpu: "0.5"
        ports:
        - name: http-port
          containerPort: 8080
        - name: jnlp-port
          containerPort: 50000
      volumes:
      - name: jenkins-home
        emptyDir: {}
      - name: docker-sock-volume
        hostPath:
          path: /var/run/docker.sock
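
One caveat with the deployment above: the emptyDir volume means the Jenkins home directory (jobs, plugins, credentials) is lost whenever the pod is rescheduled. For anything beyond a trial run, you would likely want to back jenkins-home with a PersistentVolumeClaim instead; a minimal sketch, with the claim name and size as assumptions to adjust for your storage class:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-home-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```

In the deployment, the jenkins-home volume would then use persistentVolumeClaim with claimName: jenkins-home-pvc in place of emptyDir: {}.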

Now expose Jenkins as a service by creating another file, jenkins-svc.yaml, with the content below:

apiVersion: v1
kind: Service
metadata:
  name: jenkins-master-svc
  labels:
    app: jenkins-master
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  - port: 50000
    targetPort: 50000
    protocol: TCP
    name: slave
  selector:
    app: jenkins-master

Now apply these to the Kubernetes cluster with the following commands:

kubectl create -f jenkins-deploy.yaml
kubectl create -f jenkins-svc.yaml

We now have Jenkins running inside the cluster. You can access it using the kubectl proxy command, but since we also need to access Jenkins from outside the cluster, let's set that up.

Step 2: Jenkins Access from outside

To set up Jenkins access from outside the cluster, we could define the service type as LoadBalancer in the Jenkins service file; this would spin up an ELB instance in the cloud, and you could access Jenkins via the ELB's address. The problem with this approach is that if you want to expose some other service from the cluster and follow the same approach, you will end up with another ELB instance, and that increases cost. To avoid this, Kubernetes supports a feature named ingress.

Ingress: This is a collection of rules by which outside traffic can reach the services deployed in Kubernetes. To support ingress, we also need an ingress controller; we will be using the nginx-ingress controller, which is supported by NGINX. Below is a sample file which can be deployed in Kubernetes as a deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        name: ingress-nginx
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        - name: https
          containerPort: 443
          protocol: TCP
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend

Now expose it as a service. We will use the type LoadBalancer, which means it will spin up an ELB in AWS. The ELB endpoint is what we can use for outside access:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https

Now we need to define some rules so the ingress controller can decide which service to call. Before defining rules, we need to create a sub-domain mapped to the ELB endpoint; let's say we mapped jenkins.yourcompany.com to it. Now let's write up the ingress and use this domain as the host name:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/cors-allow-headers: Authorization, origin, accept
    nginx.ingress.kubernetes.io/cors-allow-methods: GET, OPTIONS
    nginx.ingress.kubernetes.io/enable-cors: "true"
spec:
  rules:
  - host: jenkins.yourcompany.com
    http:
      paths:
      - backend:
          serviceName: jenkins-master-svc
          servicePort: 80

With this ingress in place, whenever a request comes to jenkins.yourcompany.com, it goes to the ELB first; the ELB sends it to the NGINX controller, which reads the ingress and sends the traffic to the jenkins-master-svc service. You can define more ingresses and map them to your services, so a single ELB can manage the traffic to all services hosted in the Kubernetes cluster.
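
To illustrate the single-ELB point, an ingress for a second, hypothetical service (both the host and service names below are made up) would simply add another rule:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sonarqube-ingress
spec:
  rules:
  - host: sonar.yourcompany.com
    http:
      paths:
      - backend:
          serviceName: sonarqube-svc
          servicePort: 80
```

The nginx controller watches every ingress in the cluster and routes by host header, so both sub-domains can point at the same ELB.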

Step 3: Configuring the Kubernetes plugin

You should now be able to access Jenkins via your sub-domain. Initially, set it up as you normally would, then configure the Kubernetes plugin.

The Kubernetes plugin documentation has all the information on how to set up this plugin in Jenkins. Since Jenkins has been installed inside the Kubernetes cluster, we can reach the Kubernetes API at https://kubernetes.default.svc.cluster.local. If you install Jenkins outside the Kubernetes cluster, then the proper endpoint has to be defined. In the configuration, only three things need to be filled in: the Kubernetes URL, the Jenkins URL, and the credentials for Kubernetes. We don't need to set up a pod template, as we will create them dynamically in the next step.
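
For the Kubernetes credentials, one common approach is a dedicated service account whose token Jenkins uses. A sketch of the manifests (the names and the broad cluster-admin grant are for illustration only; scope the role down for real use):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: default
```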

Step 4: Pod scheduling in cluster

We need to plan for scaling so Jenkins can handle the number of jobs, which keeps increasing over time. We can plan for:

a. Vertical scaling: adding more cores and memory to the Jenkins master

b. Horizontal scaling: adding more slave nodes which coordinate with the master and run the jobs

While both approaches do solve the scaling issue, cost also increases with them. This is where Kubernetes helps, by doing on-demand horizontal scaling of Jenkins. In Kubernetes, we set up Jenkins in master-slave mode, where each job can be assigned to run in a specific agent. The agent in our case is a pod running on a slave node. When a job needs to run, it creates its pod, executes the job inside it, and once done, the pod gets terminated. This solves the problem of on-demand scaling; below is an example of how to set this up.

For defining pipelines, Jenkins supports two types of syntax:

a. Scripted

b. Declarative

Declarative syntax is the improved version and should be preferred for defining pipelines. In the plugin setup, we only needed to add the Kubernetes and Jenkins endpoints; the rest we configure in the pipeline itself, specifying what kind of pod the job will execute in. In most cases you will want your own slave image rather than a public one, so assuming you have hosted that image in a registry, below is what can be used.

You can create a shared library with all the common functions used in your pipelines. For example, the function below returns the content of the YAML file used to run a pod on the Kubernetes cluster:

def call() {
    agent = """
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: jenkins-slave
spec:
  containers:
  - name: jenkins-slave
    image: xxx.com/jenkins:slave
    workingDir: /home/jenkins
    volumeMounts:
    - name: docker-sock-volume
      mountPath: /var/run/docker.sock
    command:
    - cat
    tty: true
  volumes:
  - name: docker-sock-volume
    hostPath:
      path: /var/run/docker.sock
"""
    return agent
}

Then use these functions in the pipeline. Since the pipeline below calls getAgent(), the function above would live in vars/getAgent.groovy within the shared library. Below is a sample pipeline which runs a pod in the Kubernetes cluster based on the custom Jenkins slave image; all the defined steps will be executed in that container:

pipeline {
    agent {
        kubernetes {
            label 'jenkins-slave'
            defaultContainer 'jenkins-slave'
            yaml getAgent()
        }
    }
    stages {
        stage('stage1') {
            steps {
                // Define custom steps as per requirement
            }
        }
    }
}

Step 5: Capacity and cost management

So far, we have Jenkins installed on Kubernetes, and each Jenkins job creates a container, runs its code in it, and terminates it. The other important aspect we need to plan for: containers need nodes to run on, and we need a system where nodes are created on demand and removed when not in use. This is where the cluster autoscaler is helpful.

The purpose of the cluster autoscaler is to watch for events where a pod has failed to start due to insufficient resources, and to add a node to the cluster so the pod can run. It also monitors for nodes which don't have any pods running on them, so those nodes can be removed from the cluster. This solves the problem of on-demand scale-out and scale-in very well; all we need to do is configure it in our cluster. In the configuration, we define the min and max node counts, so cluster scale-out stays within limits and we always have a minimum number of nodes ready to execute jobs faster. We can also use spot instances instead of on-demand nodes, which reduces cost further.
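
For reference, the min and max bounds are passed to the autoscaler as container arguments in its own deployment; on AWS they look roughly like this (the ASG name is a placeholder, and the exact flags vary between autoscaler versions):

```yaml
containers:
- name: cluster-autoscaler
  image: k8s.gcr.io/cluster-autoscaler:v1.2.2
  command:
  - ./cluster-autoscaler
  - --cloud-provider=aws
  # min:max:auto-scaling-group-name
  - --nodes=2:10:jenkins-workers-asg
```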

So, this is it: we have a scalable Jenkins cluster in place. With each trigger of a Jenkins job, a pod gets created in the Kubernetes cluster and gets destroyed when the job is done. Scaling of the cluster is handled by the autoscaler, and ingress is used to expose Jenkins outside the cluster. One part we haven't covered is the use of Helm, the package manager for Kubernetes; once we get comfortable with these concepts, we should be deploying to Kubernetes with Helm charts only. More on this later.

Any questions or suggestions are most welcome!
