DevOps Capstone Project 2 — How to Automate the CI/CD Pipeline Without Modifying the Docker Containers
Project Requirements are as given below:
Description of the Project & What to Do
You are hired as a DevOps Engineer at Analytics Pvt Ltd, a product-based organization that uses Docker for its containerization needs. The final product received a lot of traction in the first few weeks after launch. With the increasing demand, the organization now needs a platform for automating deployment, scaling, and operation of application containers across clusters of hosts. As the DevOps Engineer, you need to implement a DevOps lifecycle that satisfies all the requirements without any change to the Docker containers in the testing environment.
Until now, this organization followed a monolithic architecture with just 2 developers. The product is available at: https://github.com/hshar/website.git
Following are the specifications of the lifecycle:
1. A Git workflow should be implemented. Since the company follows a monolithic development architecture, you need to take care of version control. Releases should happen only on the 25th of every month.
2. A code build should be triggered once commits are made to the master branch.
3. The code should be containerized with the help of a Dockerfile. The Docker image should be rebuilt on every push to GitHub. Create a custom Docker image using a Dockerfile.
4. As per the requirement in the production server, you need to use a Kubernetes cluster, and the containerized code from Docker Hub should be deployed with 2 replicas. Create a NodePort service and configure it on port 30008.
5. Create a Jenkins Pipeline script to accomplish the above task.
6. For configuration management of the infrastructure, deploy configurations to the servers to install the necessary software.
7. Using Terraform, accomplish the task of infrastructure creation in the AWS cloud provider.
Architectural Advice:
Software to be installed on the respective machines using configuration management:
Worker1: Jenkins
Worker2: Docker, Kubernetes
Worker3: Java, Docker, Kubernetes
Worker4: Docker, Kubernetes
Proposed Solution:
Note: The solution starts from the last (seventh) point and works back to the second point.
Check the GitHub repository for this assignment to copy the commands & code: https://github.com/visaltyagi/DEVOPS-PROJECT2.git
1. Create an Instance Manually on AWS
Step 1: Go to the “Services” section & search “EC2” here. Put the cursor over “EC2” & click on “Instances” here.
Step 2: Click on “Launch Instances”.
Step 3: Choose “Name” as “Machine-1 (Main)” in the “Name and tags” section.
Step 4: Choose “AMI” as “ubuntu”.
Step 5: Choose the “Instance type” as “t2.medium”.
Step 6: Choose the “key pair (name) — required” as “Demo”.
Step 7: In the “Network Settings”, choose “Select existing security group” under “Firewall (security groups)” & choose “default” in “Common security groups”.
Step 8: Click on “Launch Instance” in the “Summary” section.
Step 9: The “Instance” will be successfully launched; click on the instance ID hyperlink (i-0c9c9e04596b415b3).
Step 10: The Instance “[Machine-1 (Main)]” will be in the “Running” State.
Step 11: Select the Instance & Click on “Connect”.
Step 12: Again, click on “Connect”.
Step 13: First, update the machine using the below-given command:
sudo apt-get update
Step 14: The machine [Machine-1 (Main)] will be successfully updated.
2. Install Terraform on Instance [Machine-1 (Main)]
Step 1: First, we will install the “gnupg”, “software-properties-common” & “curl” packages to verify HashiCorp’s GPG signature & add HashiCorp’s Debian package repository.
Paste the below-given command & press “enter” from the keyboard.
sudo apt-get install -y gnupg software-properties-common
Step 2: Install the “Hashicorp GPG Key”. Paste the below-given script & press “enter” from the keyboard.
wget -O- https://apt.releases.hashicorp.com/gpg | \
gpg --dearmor | \
sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null
Step 3: Verify the Key’s Fingerprint using the below-given command:
gpg --no-default-keyring \
--keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg \
--fingerprint
Step 4: Add the official HashiCorp repository to your system. The lsb_release -cs command finds the distribution release codename for your current system, such as “buster”, “groovy”, or “sid”.
Paste the below-given command & press “enter” from the keyboard.
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
sudo tee /etc/apt/sources.list.d/hashicorp.list
Step 5: Again, update the machine using the below-given command:
sudo apt-get update
Step 6: Now, install “Terraform” using the below-given command:
sudo apt-get install terraform -y
Step 7: Check whether “Terraform” has been successfully installed or not. Use the below-given command :
terraform -help
It will display the “Terraform Commands”.
Step 8: Check the “Terraform Version” using the below-given command:
terraform --version
It will show the “Terraform Version”.
3. Run “Terraform Script” to Create Other Three Instances
Step 1: Create a “main.tf” file using the below-given command:
nano main.tf
Step 2: Paste the below-given terraform script in the file:
provider "aws" {
  region = "us-west-2"
}

resource "aws_instance" "Kubernetes_Master" {
  ami           = "ami-0aff18ec83b712f05"
  instance_type = "t2.medium"
  subnet_id     = "subnet-013d8f5f0f962f5fc"
  key_name      = "Demo"

  tags = {
    Name = "Machine-3"
  }
}

resource "aws_instance" "Kubernetes_Slave1" {
  ami           = "ami-0aff18ec83b712f05"
  instance_type = "t2.micro"
  subnet_id     = "subnet-013d8f5f0f962f5fc"
  key_name      = "Demo"

  tags = {
    Name = "Machine-2"
  }
}

resource "aws_instance" "Kubernetes_Slave2" {
  ami           = "ami-0aff18ec83b712f05"
  instance_type = "t2.micro"
  subnet_id     = "subnet-013d8f5f0f962f5fc"
  key_name      = "Demo"

  tags = {
    Name = "Machine-4"
  }
}
Note: Here, you must use the subnet ID of the subnet where your “Machine-1 (Main)” instance was created.
Press “CTRL+X” to exit, “Y” to save, then “enter” from the keyboard to completely exit.
Step 3: Run the below-given command for initializing the “Terraform”:
terraform init
It will initialize the “Terraform” & install the needed plugins.
Step 4: Now, run the below-given command before executing the “Terraform Script”:
aws configure
Put the “AWS Access Key” & “AWS Secret Key” here, along with the region name.
The default output format will be “json”.
Step 5: Now, use the below-given command to add the “plan”:
terraform plan
The plan will be generated successfully.
Step 6: Now, use the below-given command to apply the changes:
terraform apply
Type “yes” to continue.
The instance will be successfully created.
Step 7: All the machines will be successfully created.
Step 8: Now, we will rename these created machines:
Machine-2 as Machine-2(KBSV1) — KBSV1 means Kubernetes Slave1
Machine-3 as Machine-3(KBM) — KBM means Kubernetes Master
Machine-4 as Machine-4 (KBSV2) — KBSV2 means Kubernetes Slave2
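As an optional addition (not in the original script), output blocks can be appended to “main.tf” so Terraform prints each instance’s private IP after apply — those IPs are needed later for the Ansible hosts file and for kubeadm init. A minimal sketch, assuming the resource names defined above:

```terraform
# Hypothetical additions to main.tf: print private IPs after `terraform apply`
output "master_private_ip" {
  value = aws_instance.Kubernetes_Master.private_ip
}

output "slave1_private_ip" {
  value = aws_instance.Kubernetes_Slave1.private_ip
}

output "slave2_private_ip" {
  value = aws_instance.Kubernetes_Slave2.private_ip
}
```

After `terraform apply`, the values appear in the Outputs section, and `terraform output master_private_ip` prints a single value.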
4. Install “Ansible” on Machine 1 (main)
Step 1: Go to the “Machine-1” & first update the machine using the below-given command:
sudo apt-get update
Step 2: Run the below-given command:
sudo apt install software-properties-common
These packages may already be installed here.
Step 3: Now, add the repository for installing “Ansible” using the below-given command:
sudo apt-add-repository --yes --update ppa:ansible/ansible
Step 4: Install the Ansible using the below-given command:
sudo apt-get install ansible
Type “Y” to continue the “Ansible Installation”.
Step 5: Type the below-given command to check the “Ansible” version:
ansible --version
If the version is shown here, it means “Ansible” has been successfully installed.
Step 6: Go to the .ssh directory using the below-given command:
cd .ssh/
Type the below-given command to create the public & private keys:
ssh-keygen
Press “enter” from the keyboard three times.
The “Public” & “Private” Keys will be successfully generated.
Step 7: Type the below-given command to copy the “id_ed25519.pub” content:
sudo cat id_ed25519.pub
Copy this content using the right-click.
Step 8: Now, we will paste this key content into all three machines individually. First, go to the “.ssh” directory on each of the three machines using the below-given command:
cd .ssh/
Step 9: Open the “authorized_keys” in all three machines using the below-given command:
sudo nano authorized_keys
Step 10: Paste the “id_ed25519.pub” content into all three machines.
Do “CTRL+X” to exit & press “Y” to save the file. Press “enter” from the keyboard to completely exit from all three authorized_keys files.
5. Paste the Private IP Addresses of Slaves in the hosts File
Step 1: Exit from the .ssh/ directory using the below-given command:
cd ..
Step 2: Go to the “hosts” file in “Ansible” using the below-given command:
sudo nano /etc/ansible/hosts
Step 3: Paste the slaves & master private IP Addresses here:
[master]
Machine3 ansible_host=10.0.6.181
[slaves]
Machine2 ansible_host=10.0.2.158
Machine4 ansible_host=10.0.6.49
Do “CTRL+X” to exit & Press “Y” to save the file. Press “enter” from the keyboard to completely exit.
Step 4: Now, we will ping all machines using the below-given command:
ansible -m ping all
Type “yes” three times and all the machines will be successfully connected.
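Note that this setup relies on Ansible connecting as the same user that holds the SSH key (ubuntu). If the ping fails with a permission error, a hedged variant of the same /etc/ansible/hosts file pins the remote user explicitly (IPs as above):

```ini
[master]
Machine3 ansible_host=10.0.6.181 ansible_user=ubuntu

[slaves]
Machine2 ansible_host=10.0.2.158 ansible_user=ubuntu
Machine4 ansible_host=10.0.6.49 ansible_user=ubuntu
```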
6. Create Three Scripts for Installing Required Tools on Machines
Worker 1: Jenkins & Java
Worker 2: Docker & Kubernetes
Worker 3: Java, Docker & Kubernetes
Worker 4: Docker & Kubernetes
Step 1: First, create the script file to install “Java” & “Jenkins” Over “Machine-1”. Use the below-given command to create the “script1.sh” file:
nano script1.sh
Step 2: Paste the below-given scripts here to Install the “Java” & “Jenkins”.
sudo apt-get update
sudo apt-get install openjdk-17-jre-headless -y
sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
https://pkg.jenkins.io/debian/jenkins.io-2023.key
echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
https://pkg.jenkins.io/debian binary/ | sudo tee \
/etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install jenkins -y
Do “CTRL+X” to exit from the file & Save the file using “CTRL+S”. Exit from the file by pressing the “enter” from the keyboard.
Step 3: Now, create the script file to install “Java”, “Docker”, & “Kubernetes” Over “Machine-3”. Use the below-given command to create the “script2.sh” file:
nano script2.sh
Step 4: Paste the following scripts here to Install “Java”, “Docker” & “Kubernetes”.
sudo apt-get update
sudo apt-get install openjdk-17-jre-headless -y
sudo apt-get install docker.io -y
sudo systemctl enable --now docker
sudo swapoff -a
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo systemctl enable --now kubelet
Do “CTRL+X” to exit from the file & Save the file using “CTRL+S”. Exit from the file by pressing the “enter” from the keyboard.
Step 5: Again, create the script file to install “Docker” & “Kubernetes” over “Machine-2” & “Machine-4”. Use the below-given command to create the “script3.sh” file:
nano script3.sh
Step 6: Paste the following scripts here to Install the “Docker” & “Kubernetes”.
sudo apt-get update
sudo apt-get install docker.io -y
sudo systemctl enable --now docker
sudo swapoff -a
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo systemctl enable --now kubelet
Do “CTRL+X” to exit from the file & Save the file using “CTRL+S”. Exit from the file by pressing the “enter” from the keyboard.
7. Create a Playbook to Run These Scripts and Install the Required Tools
Step 1: Create a playbook using the below-given command:
nano play.yaml
Step 2: Paste the below-given script to run all the script files here:
---
- name: Install Jenkins & Java on Machine-1
  become: true
  hosts: localhost
  tasks:
    - name: Run script1
      script: script1.sh

- name: Install Java, Docker & Kubernetes on Machine-3 (master)
  become: true
  hosts: master
  tasks:
    - name: Run script2
      script: script2.sh

- name: Install Docker & Kubernetes on Machine-2 & Machine-4 (slaves)
  become: true
  hosts: slaves
  tasks:
    - name: Run script3
      script: script3.sh
Do “CTRL+X” to exit from the file & Save the file using “CTRL+S”. Exit from the file by pressing the “enter” from the keyboard.
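As an aside, the script module re-runs the shell scripts in full on every play. Ansible’s built-in apt module would make the installs idempotent; a minimal sketch for the slaves play, using the same package names as script3.sh:

```yaml
# Hypothetical idempotent variant of the slaves play using built-in modules
- name: Install Docker on slaves (idempotent)
  become: true
  hosts: slaves
  tasks:
    - name: Install docker.io
      ansible.builtin.apt:
        name: docker.io
        state: present
        update_cache: true
    - name: Enable and start the Docker service
      ansible.builtin.systemd:
        name: docker
        enabled: true
        state: started
```

On a second run, these tasks report “ok” instead of re-executing, which makes playbook runs faster and safer.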
Step 3: Check the syntax using the below-given command:
ansible-playbook play.yaml --syntax-check
The syntax is “OK” here. No Problem.
Step 4: Now, we will run the “play.yaml” using the below-given command:
ansible-playbook play.yaml
The scripts will start executing.
8. Configure the Kubernetes Cluster on Machine-3
Step 1: Go to the “Machine-3 (KBM)” & paste the below-given command to initialize the “kubeadm”.
sudo kubeadm init --apiserver-advertise-address=10.0.6.181
Press “enter” from the keyboard.
Step 2: Copy the join command from the output & run it on Machine-2 & Machine-4 one by one with “sudo”:
sudo kubeadm join 10.0.6.181:6443 --token deah3l.63iy82w2wemxob0i \
--discovery-token-ca-cert-hash sha256:fcef755283501fdbe7e734dd6f52c657737b8300d93abadbcbe1cfb0f18d5fe5
Both the Machines 2 & 4 successfully joined the nodes.
Step 3: Paste the below-given commands in the “Master” node:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Step 4: Install the “Calico Network” to run the cluster using the below-given command:
curl https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/calico.yaml -O
Step 5: Run the below-given command:
ls
Your “calico.yaml” file will be shown.
Step 6: Run the below-given command:
kubectl apply -f calico.yaml
Step 7: All the deployments have been successfully created. Now, run the below-given command to get all the nodes:
kubectl get no
9. Configure Jenkins Setup Properly Here on Machine-1
Step 1: Open the “Machine-1 (Main) Public IP Address” with port 8080 in the browser address bar. The unlock page shows a file path; copy it from there:
/var/lib/jenkins/secrets/initialAdminPassword
Step 2: Go to the “Machine-1 (Main)” & paste the below-given command with “sudo cat”.
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
A token will be given. Copy this token from here.
Step 3: Paste the token in the “Administrator Password” section & click on “Continue”.
Step 4: Click on “Install Suggested Plugins”.
Step 5: The plugins installation will be automatically started.
Step 6: Create the user by filling the below-given details:
Username: — admin
Password: — admin
Confirm Password: — admin
Full name: — admin
E-mail address: admin@admin.com
Click on “Save and Continue”.
Step 7: Click on “Save and Finish”.
Step 8: Click on “Start using Jenkins”.
Step 9: The “Jenkins Dashboard” will be set up successfully.
10. Add “Kubernetes Master (Machine-3)” as a Node Here
Step 1: Click on “Set up an agent”.
Step 2: Choose “Node name” as “Kubernetes Master” & “Type” as “Permanent Agent”.
Step 3: Choose the following options here:
Description: — Kubernetes Master
Remote root directory: — /home/ubuntu/jenkins
Label:- Kubernetes-Master
Launch Method: — Launch agents via SSH
Host: — 10.0.6.181 (Master Private IP Address)
Click on “Add” in “Credentials”.
Click on the “Jenkins”.
Step 4: Choose “kind” as “SSH Username with private key” with the following fields:
ID & Description: — pwdless
Step 5: Choose “Username” as “ubuntu”.
Choose “Enter directly” & click on “Add”.
Paste the “Demo.pem” key content here. (Used the key pair during the instance creation)
Step 6: Click on “Add”.
Step 7: Choose “Host Key Verification Strategy” as “Non verifying Verification Strategy”.
Click on “Save”.
Step 8: Your “Kubernetes Master” has been successfully added as a “Node”.
11. Create the DockerHub Credentials for Jenkins Pipeline Creation
Step 1: Go to the “Manage Jenkins”.
Step 2: Click on “Credentials”.
Step 3: Click on “Global>Add Credentials”.
Step 4: Put the DockerHub username & password here.
Step 5: The “DockerHub” Credential has been successfully created.
12. Fork the Repository in the GitHub Account
Step 1: Click on “Fork> Create a new fork”.
Step 2: Choose “Repository Name” as “website”, while “Description” as “For Capstone Project 2 Devops”.
Click on “Create fork”.
Step 3: The repository will be successfully forked.
13. Create a Dockerfile in the Given GitHub Repository
Step 1: Click on the “+ Create new file”.
Step 2: Paste this content here:
FROM ubuntu/apache2
COPY . /var/www/html
Put the file name as “Dockerfile”.
Step 3: Click on the “Commit Changes”.
Step 4: Again, click on the “Commit Changes”.
Step 5: The “Dockerfile” will be successfully created.
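The two-line Dockerfile works because the ubuntu/apache2 base image already starts Apache and serves files from /var/www/html. A slightly more explicit variant (the EXPOSE line is an assumption added here for documentation; it does not change behavior):

```dockerfile
FROM ubuntu/apache2
# Copy the website source into Apache's document root
COPY . /var/www/html
# Document the port Apache listens on (informational only)
EXPOSE 80
```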
14. Create a Pipeline to Automate the Tasks
Note: Replace our DockerHub credentials & GitHub repository URL with your own DockerHub credentials & forked repository; otherwise, the build will not succeed.
Step 1: Click on “Create a job”.
Step 2: Choose “Item Name” as the “Testpipeline” with the “Pipeline” option.
Step 3: Go to the “Pipeline” section & choose the “Hello World” script here.
Step 4: Now, we will use the below-given script to check whether the pipeline script is working properly or not.
pipeline {
    agent none
    environment {
        DOCKERHUB_CREDENTIALS = credentials('2d637aab-be8f-43b1-b1ed-f01bc5bb095e')
    }
    stages {
        stage('Hello') {
            steps {
                echo 'Hello World'
            }
        }
    }
}
Click on “Save”.
Step 5: Click on “Build Now”.
Step 6: The sample “Hello World” build will run successfully. Click on “#3”.
Go to the “Console Output”.
Step 7: Again, paste the below-given script in the “Pipeline” section. Click on “Save”.
pipeline {
    agent none
    environment {
        DOCKERHUB_CREDENTIALS = credentials('2d637aab-be8f-43b1-b1ed-f01bc5bb095e')
    }
    stages {
        stage('Hello') {
            steps {
                echo 'Hello World'
            }
        }
        stage('Git') {
            agent {
                label 'Kubernetes-Master'
            }
            steps {
                git 'https://github.com/visaltyagi/website.git'
            }
        }
    }
}
Step 8: Click on the “Build Now”. Click on “#5”.
Go to the “Console Output”.
The Build will be successfully created.
Step 9: The “GitHub Repository Content” will be automatically fetched on the “Kubernetes Master (Machine-3)”.
Verify this using the below-given command:
cd /home/ubuntu/jenkins/workspace/Testpipeline
Run the below-given command to check the files present here:
ls
You will find all the files present here.
Step 10: Now, we will push the “Docker Hub Image” using the pipeline code.
pipeline {
    agent none
    environment {
        DOCKERHUB_CREDENTIALS = credentials('2d637aab-be8f-43b1-b1ed-f01bc5bb095e')
    }
    stages {
        stage('Hello') {
            steps {
                echo 'Hello World'
            }
        }
        stage('Git') {
            agent {
                label 'Kubernetes-Master'
            }
            steps {
                git 'https://github.com/visaltyagi/website.git'
            }
        }
        stage('Docker') {
            agent {
                label 'Kubernetes-Master'
            }
            steps {
                sh 'sudo docker build /home/ubuntu/jenkins/workspace/Testpipeline -t visaltyagi12/project2'
                sh 'sudo echo $DOCKERHUB_CREDENTIALS_PSW | sudo docker login -u $DOCKERHUB_CREDENTIALS_USR --password-stdin'
                sh 'sudo docker push visaltyagi12/project2'
            }
        }
    }
}
Click on “Save”.
Step 11: Click on “Build Now”. The build will be successfully created & click on “#6”
Click on “Console Output”.
The Docker image will be successfully pushed to the “Docker Hub” account.
Step 12: Login into your DockerHub Account & you will notice that “visaltyagi12/project2” will be successfully pushed to the DockerHub account.
Step 13: Now, we will create the “deployment.yaml” & “service.yaml” file to deploy the website using the “Kubernetes” tool.
Go to the “GitHub” account & click on “Add file”.
Click on “Create new file”.
Step 14: Paste the below-given content here:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: visaltyagi12/project2:latest
          ports:
            - containerPort: 80
Put the name as “deployment.yaml” & click on “Commit changes”.
Step 15: Again, click on the “Commit Changes”.
Step 16: Create a “service.yaml” file for deploying the website over node port 30008.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30008
Step 17: Again, click on “Commit changes” & both YAML files will be shown in the GitHub repository.
Step 18: Go to the “Configure” in the “Jenkins” & paste the below-given code to create the Kubernetes Deployment for deploying the application:
pipeline {
    agent none
    environment {
        DOCKERHUB_CREDENTIALS = credentials('588da552-478b-4a8f-9edf-a4de0ff29435')
    }
    stages {
        stage('Hello') {
            steps {
                echo 'Hello World'
            }
        }
        stage('Git') {
            agent {
                label 'Kubernetes-Master'
            }
            steps {
                git 'https://github.com/visaltyagi/website.git'
            }
        }
        stage('Docker') {
            agent {
                label 'Kubernetes-Master'
            }
            steps {
                sh 'sudo docker build /home/ubuntu/jenkins/workspace/Testpipeline -t visaltyagi12/project2'
                sh 'sudo echo $DOCKERHUB_CREDENTIALS_PSW | sudo docker login -u $DOCKERHUB_CREDENTIALS_USR --password-stdin'
                sh 'sudo docker push visaltyagi12/project2'
            }
        }
        stage('K8s') {
            agent {
                label 'Kubernetes-Master'
            }
            steps {
                sh 'kubectl apply -f deployment.yaml'
                sh 'kubectl apply -f service.yaml'
            }
        }
    }
}
Click on “Save”.
Step 19: Again, click on “Build Now”. Now, the build will be successfully created & the website has been successfully deployed over Slaves through Kubernetes Architecture.
Click on “#7”.
Click on “Console Output”.
Step 20: Paste both the Slaves’ IP one by one in the browser address bar & the website will be successfully deployed through “Kubernetes”.
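Optionally, a stage-level post block could be added to the Docker stage so the agent logs out of Docker Hub after every build, leaving no stored registry credentials behind. A hedged sketch of that stage only (same commands as in the pipeline above):

```groovy
// Hypothetical variant of the Docker stage with a cleanup post block
stage('Docker') {
    agent { label 'Kubernetes-Master' }
    steps {
        sh 'sudo docker build /home/ubuntu/jenkins/workspace/Testpipeline -t visaltyagi12/project2'
        sh 'sudo echo $DOCKERHUB_CREDENTIALS_PSW | sudo docker login -u $DOCKERHUB_CREDENTIALS_USR --password-stdin'
        sh 'sudo docker push visaltyagi12/project2'
    }
    post {
        always {
            // Remove the stored registry credentials from the agent
            sh 'sudo docker logout'
        }
    }
}
```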
15. Automate the Pipeline Using GitHub Webhooks
Step 1: Go to the “GitHub Repository (website)” & click on the “Settings”.
Step 2: Click on the “Webhooks”.
Step 3: Click on “Add webhook”.
Step 4: Choose “Payload URL” as http://35.94.249.224:8080/github-webhook/ and leave the other fields as they are.
Click on “Add webhook”.
Step 5: The Webhook has been successfully created.
Step 6: Go to the “Testpipeline” & choose “GitHub hook trigger for GITScm polling” in “Build Triggers”.
Click on “Save”.
16. Do the Changes & Test the Pipeline
Step 1: Go to “Configure” & add this line to the pipeline code, in the “K8s” stage before the apply commands:
sh 'kubectl delete deploy nginx-deployment'
Save the pipeline.
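For reference, the delete line goes inside the K8s stage, before the apply commands; the `|| true` suffix is a hedged addition (not in the original) so that the first run, when no deployment exists yet, does not fail the build:

```groovy
// K8s stage with the delete line in place
stage('K8s') {
    agent { label 'Kubernetes-Master' }
    steps {
        // Remove the old deployment first; '|| true' tolerates a missing deployment
        sh 'kubectl delete deploy nginx-deployment || true'
        sh 'kubectl apply -f deployment.yaml'
        sh 'kubectl apply -f service.yaml'
    }
}
```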
Step 2: Now, go to the “index.html” file in the “GitHub Repository”.
Change “Title” from “DevOps Project 2” to “DevOps Capstone Project 2”.
Click on “Commit Changes”.
Step 3: Again, click on the “Commit Changes”.
Step 4: An automatic build will be created successfully. Click on “#8”.
Step 5: Click on the “Console Output”. The pipeline will be shown to you.
Step 6: Go to the browser & refresh the IP addresses of “Machine-2” & “Machine-4” again. You will notice that the title now shows “DevOps Capstone Project 2”.
Find the DevOps Capstone Project 1 Solution Here:
Implementing a DevOps Lifecycle on a Website using Docker & Jenkins Only: DevOps Capstone Project 1