DevOps Project — CI/CD -4

In this blog, we are going to build and deploy artifacts on Kubernetes.

jabir ahammed
7 min read · Aug 10, 2023

In the previous blog, we completed the entire CI/CD pipeline and were able to deploy the application on a Docker container successfully.

By now we have a similar kind of environment. But instead of a Docker host, we are going to deploy our application on a Kubernetes cluster.

Link to the last blog: https://medium.com/@ahammed.jabirp/devops-project-ci-cd-3-9fedd470e3cc

GitHub: https://github.com/jabir000/hello-world.git

However, if that Docker container goes down, there is no way to recover it automatically.

To overcome this problem, we can use Docker's native orchestration service, Docker Swarm, or a container orchestration platform like Kubernetes.

But Kubernetes offers a lot of advantages in comparison, which is why we are not going to deploy our application as a plain Docker container.

Instead, we are going to deploy it as a pod in our Kubernetes environment.

We already have the environment from the previous blogs in place.

Now we need to set up our Kubernetes environment on AWS; everything else is already in place, so once the cluster is ready we can deploy.

Setup CI/CD with GitHub, Jenkins, Maven, Ansible and Kubernetes

1- Setup Kubernetes (EKS)

2- Write pod, service and deployment manifest files

3- Integrate Kubernetes with Ansible

4- Ansible playbooks to create deployment and service

5- CI/CD job to build code on Ansible and deploy it on Kubernetes Cluster

To get started, the first step is to set up a Bootstrap server.

For that we need to set up an EC2 instance.

Connect to its terminal and install the AWS CLI, kubectl and eksctl:

# AWS CLI v2
aws --version
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version

# kubectl
curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.27.1/2023-04-19/bin/linux/amd64/kubectl
chmod +x kubectl
mv kubectl /usr/local/bin
echo $PATH

# eksctl
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
cd /tmp
ll
mv eksctl /usr/local/bin
eksctl version

Now we need to create an IAM Role in AWS.

AWS -> Search for IAM -> Dashboard -> Access management -> Roles -> Create Role -> Select AWS service -> Use case : EC2 -> Next -> Add the following policies : AmazonEC2FullAccess , AWSCloudFormationFullAccess , IAMFullAccess , AdministratorAccess -> Next -> Give a Role Name -> Create Role
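
For reference, a rough CLI equivalent of these console steps is sketched below; the role name eks-bootstrap-role is just a placeholder for whatever name you choose:

# Assumption: role name "eks-bootstrap-role"; trust policy allows EC2 to assume the role
aws iam create-role --role-name eks-bootstrap-role \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
# Attach the same managed policies as in the console
aws iam attach-role-policy --role-name eks-bootstrap-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam attach-role-policy --role-name eks-bootstrap-role --policy-arn arn:aws:iam::aws:policy/AWSCloudFormationFullAccess
aws iam attach-role-policy --role-name eks-bootstrap-role --policy-arn arn:aws:iam::aws:policy/IAMFullAccess
aws iam attach-role-policy --role-name eks-bootstrap-role --policy-arn arn:aws:iam::aws:policy/AdministratorAccess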

The Role is created successfully.

Next we need to attach this Role to our Bootstrap EC2 Instance.

AWS -> EC2 -> Select the Bootstrap server instance -> Actions -> Security -> Modify IAM Role -> Choose the Role that we created -> Save
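
The same attachment can be scripted if you prefer; the instance profile name and the instance ID below are placeholders:

# Assumption: when scripting, an instance profile has to be created explicitly and the role added to it
aws iam create-instance-profile --instance-profile-name eks-bootstrap-profile
aws iam add-role-to-instance-profile --instance-profile-name eks-bootstrap-profile --role-name eks-bootstrap-role
aws ec2 associate-iam-instance-profile --instance-id <bootstrap-instance-id> \
  --iam-instance-profile Name=eks-bootstrap-profile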

Next we need to create the Cluster.

For that, go to the terminal.

eksctl create cluster --name cluster1  \
--region < Bootstrap server Region > \
--node-type t2.small

This process takes around 20 to 25 minutes to complete.

Now the Cluster is Ready.

*Delete the cluster when the activity is completed*
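
For reference, the cleanup later can be done with eksctl, using the same cluster name and region as above:

eksctl delete cluster --name cluster1 --region < Bootstrap server Region >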

eksctl writes the cluster credentials into the kubeconfig file on this server; you can verify it with:

cat /root/.kube/config
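
As an optional sanity check, the following should list the cluster and its worker nodes once provisioning has finished:

eksctl get cluster --region < Bootstrap server Region >
kubectl get nodes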

Now let's create the manifest files.

vim regapp-deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jabirdocker-regapp
  labels:
    app: regapp

spec:
  replicas: 3
  selector:
    matchLabels:
      app: regapp

  template:
    metadata:
      labels:
        app: regapp
    spec:
      containers:
      - name: regapp
        image: jabirdocker/regapp
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1

vim regapp-service.yml

apiVersion: v1
kind: Service
metadata:
  name: jabirdocker-service
  labels:
    app: regapp
spec:
  selector:
    app: regapp

  ports:
  - port: 8080
    targetPort: 8080

  type: LoadBalancer
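
Optionally, both manifests can be validated client-side before we hand them to Ansible; this catches indentation and basic schema mistakes early (assumes a reasonably recent kubectl that supports --dry-run=client):

kubectl apply --dry-run=client -f regapp-deployment.yml
kubectl apply --dry-run=client -f regapp-service.yml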

Now let's integrate Kubernetes with Ansible.

Go to the Bootstrap server terminal.

useradd ansadmin
passwd ansadmin
visudo
( ansadmin ALL=(ALL) NOPASSWD: ALL )
vim /etc/ssh/sshd_config
( PasswordAuthentication yes )
systemctl restart sshd
passwd root

Go to the Ansible server terminal.

sudo su - ansadmin
cd /opt/docker
ll -a
mv regapp.yml create_image_regapp.yml
mv deploy_regapp.yml docker_deployment.yml
vim hosts
[ansible]
< ansible private ip >

[kubernetes]
< kubernetes private ip >
ssh-copy-id root@< kubernetes private ip >
ansible -i hosts all -a uptime
cd /opt/docker
ll
vim kube_deploy.yml
---
- hosts: kubernetes
  user: root

  tasks:
  - name: deploy regapp on kubernetes
    command: kubectl apply -f regapp-deployment.yml
  - name: create service for regapp
    command: kubectl apply -f regapp-service.yml
  - name: update deployment with new pods if image updated in docker hub
    command: kubectl rollout restart deployment.apps/jabirdocker-regapp
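
Optionally, the playbook can be syntax-checked and run once manually from the Ansible server before wiring it into Jenkins:

ansible-playbook -i hosts kube_deploy.yml --syntax-check
ansible-playbook -i hosts kube_deploy.yml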

Go to Jenkins.

Jenkins -> New item -> give name: RegApp_CD_job -> freestyle project -> OK

Jenkins -> Dashboard -> RegApp_CD_job -> General : give description -> Post-build Actions -> Add post-build action -> Send build artifacts over SSH -> name : ansible-server (select the one that you have added) -> Exec Command : enter the following command

ansible-playbook -i /opt/docker/hosts /opt/docker/kube_deploy.yml

Now we need to create the CI job.

Jenkins -> New item -> name : RegApp_CI_JOB -> Copy from : copyartifactsontoansible -> OK

Jenkins -> Dashboard -> RegApp_CI_JOB -> General : give description -> Post-build Actions -> Exec command : modify the existing commands as follows:

ansible-playbook /opt/docker/create_image_regapp.yml

Apply -> Save

Now disable Poll SCM on the job copyartifactsontoansible.

Next we need to integrate the CI job with the CD job, so that whenever the CI job is successful it triggers the CD job.

Jenkins -> Dashboard -> RegApp_CI_JOB -> Configure -> Add post-build actions -> Build other projects -> projects to build -> RegApp_CD_job -> Select Trigger only if build is stable -> Apply -> Save

Now if we make a change in the source code and push it to the GitHub repository, it will trigger the Jenkins CI job, which builds the code, creates a Docker image and pushes it to Docker Hub.

The CI job will then trigger the CD job, which pulls the latest image from Docker Hub and performs the deployment on the Kubernetes cluster.

Now let's modify the source code.

Open Git Bash.

cd hello-world/webapp/src/main/webapp/
vim index.jsp

make some changes and save it

git status
git add .
git status
git commit -m "change name"
git push origin master

Now we can see the CI job is triggered.

It triggers the CD job as well.

Let's check Docker Hub.

The latest image has been pushed to Docker Hub.

Go to the Kubernetes cluster terminal.

kubectl get all
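
The external hostname of the LoadBalancer service can also be read directly with kubectl instead of the console step below (the jsonpath expression is just a convenience):

kubectl get svc jabirdocker-service
kubectl get svc jabirdocker-service -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'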

Go to the AWS Console -> Load Balancers -> you will see a load balancer created -> copy the DNS name from there -> load it in the browser (dns:8080/webapp)

In this blog, we configured our Jenkins jobs in such a way that when somebody modifies the code, it automatically builds the code, creates the image, deploys it as pods on the Kubernetes cluster, and we can access those changes from the browser.

LinkedIn: www.linkedin.com/in/jabir-ahammed

THANK YOU !
