Setting up a CI/CD Pipeline Process with Jenkins and Docker in AWS

Ulises Magana · Published in Cloud Native Daily · May 9, 2023

Part 2: Monitoring Made Easy: Enhancing CI/CD with Splunk and Jenkins Integration

In today’s fast-paced development environment, it is crucial to have a robust Continuous Integration and Continuous Delivery (CI/CD) pipeline to quickly and efficiently deploy code changes to production. In this article, we will explore how to set up a CI/CD pipeline using Jenkins and Docker by building a simple Flask application, testing it, and deploying it to Docker Hub.

Objectives

  • Containerize a simple Flask application using Docker to make it more portable and scalable.
  • Use Git to manage the source code of the application to make it easier to collaborate and version-control changes.
  • Implement Infrastructure as Code for an automated build, test, and deployment process using a Jenkins pipeline script.
  • Ensure Continuous Integration of the application by configuring Jenkins to automatically build and test it every time code changes are made.
  • Implement Continuous Delivery/Deployment by configuring Jenkins to automatically deploy the application to a Docker registry when the build and test phases are successful.

Key concepts and tools to be applied

Infrastructure as Code: Firstly, we will leverage Jenkins pipeline scripting to automate the build, test, and deployment process of our application. This pipeline script can be version-controlled and treated as code, making it easier to manage and reproduce the pipeline.

Continuous Integration: It is implemented using Jenkins, which will automatically build and test our application every time code changes are made. This will help identify issues early in the development process, allowing for faster feedback and remediation.

Continuous Delivery/Deployment: We will also use Jenkins to automatically deploy our application to a server or Docker registry when the build and test phases are successful, ensuring that the latest version of our application is always available to end-users. This ensures Continuous Delivery/Deployment and reduces the time-to-market for our application.

Continuous Testing: The Flask application is tested inside its Docker container on every pipeline run, helping to identify any issues early in the development cycle.

Webhook Triggers: We will set up webhook triggers to automatically trigger pipeline builds whenever changes are pushed to the Git repository. This ensures that our pipeline is always up-to-date with the latest changes, and our application is continuously built and tested.

Docker: To make our application more portable and scalable, we will containerize it using Docker. Docker allows us to package our application and its dependencies into a single, portable unit that can be run consistently on any platform.

Git: We will use Git to manage the source code of our application, making it easier to collaborate and version-control changes.

Pre-requisites

  • AWS account and familiarity with AWS EC2.
  • Docker Hub account.
  • Familiarity with Git version control and experience with command line interface.
  • GitHub account.
  • Understanding of the Flask application framework and Docker containerization technology.

Project Structure

app/__init__.py — This file marks the app directory as a Python package; it sits alongside app.py, which defines the Flask application object, and keeping the directory as a package makes the application importable (for example, by the tests). In this project, it is not necessary to include any content in this file.

app/app.py

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')

This code defines a simple Flask application that has one route, the root route ‘/’. When a user navigates to the root route, the hello_world() function is executed, which returns a string "Hello, World!".

  • The first two lines of the code import the Flask class and create a new Flask application instance.
  • The @app.route('/') decorator defines the URL path to match for the following function.
  • The hello_world() function is decorated with @app.route('/'), which means that it will be called when the user navigates to the root URL path.
  • The final if __name__ == '__main__': block ensures that the app.run() function only runs if the script is executed directly, rather than being imported by another script.
  • The app.run() function starts the Flask application and makes it available on the network on the IP address 0.0.0.0 and the default port 5000. The debug=True argument enables debug mode, which allows the server to reload itself on code changes and provides helpful debugging messages.
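
Before containerizing anything, you can sanity-check the application locally. A minimal sketch, assuming a local Python 3 environment with pip and curl available:

# Install the only dependency and start the development server
pip install Flask==2.1.0
python app/app.py

# In a second terminal, hit the root route
curl http://localhost:5000/
# Expected response: Hello, World!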

Dockerfile

FROM python:3.9-alpine

WORKDIR /flask_app

COPY requirements.txt .

RUN pip install --no-cache-dir -r requirements.txt

RUN pip install pytest

COPY app/ .

COPY tests/ app/tests/

CMD [ "python", "app.py" ]

This Dockerfile is used to create a Docker image for the Flask application. It uses the python:3.9-alpine image as a base and sets the working directory to /flask_app. The requirements.txt file is copied into the image and the necessary dependencies are installed. Please note that the --no-cache-dir flag in line 4 tells pip not to use its cache during the installation process, which helps reduce the size of the Docker image.

Then, pytest is installed for testing purposes. The application code is copied into the image along with the test directory. Finally, the command is set to run the Flask application by running app.py using the python command.
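
To verify the image before wiring it into Jenkins, you can build and run it by hand. A quick sketch, assuming Docker is installed locally and reusing the my-flask-app tag from the Jenkinsfile:

# Build the image from the repository root
docker build -t my-flask-app .

# Run the container and publish Flask's default port 5000
docker run --rm -p 5000:5000 my-flask-app

# From another terminal, confirm the app responds
curl http://localhost:5000/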

Jenkinsfile

pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                sh 'docker build -t my-flask-app .'
                sh 'docker tag my-flask-app $DOCKER_BFLASK_IMAGE'
            }
        }
        stage('Test') {
            steps {
                sh 'docker run my-flask-app python -m pytest app/tests/'
            }
        }
        stage('Deploy') {
            steps {
                withCredentials([usernamePassword(credentialsId: "${DOCKER_REGISTRY_CREDS}", passwordVariable: 'DOCKER_PASSWORD', usernameVariable: 'DOCKER_USERNAME')]) {
                    sh "echo \$DOCKER_PASSWORD | docker login -u \$DOCKER_USERNAME --password-stdin docker.io"
                    sh 'docker push $DOCKER_BFLASK_IMAGE'
                }
            }
        }
    }
    post {
        always {
            sh 'docker logout'
        }
    }
}

This is a Jenkinsfile for a CI/CD pipeline that builds, tests, and deploys a Flask application using Docker. It is a declarative pipeline that specifies three stages: Build, Test, and Deploy.

  • The Build stage uses the Dockerfile to build a Docker image for the Flask application, tagging it with a specified name.
  • The Test stage runs the application tests using the pytest framework within a Docker container.
  • The Deploy stage pushes the Docker image to a specified registry after logging in with the Docker credentials.
  • The post section of the pipeline ensures that Docker is always logged out after the pipeline is complete, for security reasons.

Overall, this Jenkinsfile automates the entire process of building, testing, and deploying the Flask application, ensuring that changes to the application are automatically tested and deployed, making the development process more efficient and reliable.
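
If the Deploy stage ever fails, it can help to replay its commands by hand on the Jenkins host. A rough sketch, where your-dockerhub-username/flask-app is a hypothetical value standing in for DOCKER_BFLASK_IMAGE:

# Log in to Docker Hub (you will be prompted for your access token)
docker login -u your-dockerhub-username docker.io

# Tag and push the image, mirroring the Build and Deploy stages
docker tag my-flask-app your-dockerhub-username/flask-app
docker push your-dockerhub-username/flask-app

# Always log out afterwards, as the post section does
docker logout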

requirements.txt

Flask==2.1.0

Lists all the dependencies (in this case only Flask version 2.1.0) required by the Flask application.

tests/test_app.py

from app import app

def test_index():
    client = app.test_client()
    response = client.get('/')
    assert response.status_code == 200

It tests the Flask application's behavior by importing the app instance from the app package and creating a test client from it. It then simulates a GET request to the root endpoint '/' using the test client and asserts that the response's HTTP status code equals 200. Overall, it checks whether the server can serve requests on the root endpoint without any errors.
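
The same test run can be reproduced by hand, exactly as the pipeline's Test stage does, which is handy when a build fails. Assuming the my-flask-app image has already been built:

# Run pytest inside the container, mirroring the Test stage
docker run --rm my-flask-app python -m pytest app/tests/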

Walkthrough

Before starting the project, clone the following repository: https://github.com/UlisesME/CI-CD-Jenkins onto your local computer or onto the EC2 instance you will use, and create your own repository on GitHub.

  1. Create an EC2 instance.
  • AMI: Ubuntu Server 22.04 LTS (free tier eligible)
  • Instance type: t2.micro
  • Key pair: Select or create your own
  • Network settings: Create your own security group allowing SSH traffic from Anywhere for the purpose of this project. Leave everything else as the default settings. After launching your instance, go to your security groups and edit the one associated with your EC2 instance to add an inbound rule for port 8080.
Security group configuration
  • Configure storage: 1x8 GiB gp2
  • Launch the instance and then connect to it with SSH.

2. To ensure that traffic can reach the Jenkins server, use the Linux firewall to allow incoming traffic on port 8080. This is needed because, even though the security group we created earlier has this port open, the firewall on the instance itself may still block traffic to it.

sudo ufw allow 22
sudo ufw allow 8080
sudo ufw enable

Note: We also allow incoming traffic on port 22 because enabling UFW with the ufw enable command blocks access to all ports that have not been explicitly allowed, and losing port 22 would drop your SSH connection.
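
To confirm the firewall rules took effect, a quick check:

# Should list 22 and 8080 as allowed
sudo ufw status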

3. Install Jenkins with the following script.

#!/bin/bash
# This script installs Jenkins on an Ubuntu server

# Update package lists
sudo apt-get update

# Install Java 11
sudo apt-get install -y openjdk-11-jdk

# Download Jenkins key and add it to system
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null

# Add Jenkins to system package source list
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null

# Update package lists again to include Jenkins and install it
sudo apt-get update
sudo apt-get install -y jenkins

# Verify Jenkins is installed and working
sudo systemctl status jenkins

Note: If you have trouble installing Jenkins for this project due to expired Jenkins signing keys, check the references section of this article for a link to the new signing keys website in case the keys are updated again.

Jenkins running successfully

4. Configure Jenkins and install the Docker plugin to run our pipeline stages in isolated containers. This ensures that our application is built and tested in a clean and consistent environment.

  • Enter your EC2 public IP address and port 8080 in a new browser tab to start the configuration. Jenkins will first ask you to unlock it with the initial admin password (the command for retrieving it is shown after this list).
Getting started with Jenkins
  • Install the suggested plugins.
  • Create your first admin user as you desire.
  • Instance configuration: Leave the Jenkins URL shown on your screen.
  • Click on Manage Jenkins -> Manage Plugins -> Available Plugins and install the Docker plugin without restarting.
Jenkins Plugin Manager
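
The setup wizard first asks you to unlock Jenkins. The initial admin password can be read from the instance; for a package-based install on Ubuntu it lives in the standard location below:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword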

5. Install Docker with the following script.

#!/bin/bash

# Update package list
sudo apt-get update

# Install necessary packages to use HTTPS repositories
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common -y

# Add Docker GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# Add Docker repository to sources list
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Update package list for the addition to be recognized
sudo apt-get update

# Install Docker CE
sudo apt-get install docker-ce -y

# Add current user to the docker group to run Docker commands without sudo
sudo usermod -aG docker ${USER}

# Apply the new group membership (this opens a new login shell; exit it, or log out and back in, before continuing)
su - ${USER}

# Verify Docker is installed and working
sudo systemctl status docker
Docker running successfully

  • Add the jenkins user to the docker group and restart the Jenkins server.

sudo usermod -aG docker jenkins

# Restart so that the Jenkins server runs with the new group membership
sudo systemctl restart jenkins
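
To confirm that the jenkins user can actually talk to the Docker daemon before running the pipeline, a quick check:

# Should print a (possibly empty) container list rather than a permission error
sudo -u jenkins docker ps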

6. Configure your Docker Hub credentials.

  • Create an account in Docker Hub or log in.
  • Go to ‘Account Settings’ under your account name and then ‘Security’ as shown below to create a token:
Docker Hub Account Settings
  • Make sure to save your access token since it is only shown once, when it is generated (you can verify it works with the commands after this list).
Generating a new access token in Docker Hub
  • In your Jenkins server go to Manage Jenkins -> Credentials -> System -> Global credentials -> + Add credentials. Enter your Docker Hub username, the token you created and a brief description of the credentials.
Adding Docker Hub credentials into Jenkins
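
Before storing the token in Jenkins, you can sanity-check it from any machine with Docker installed. The placeholders below are hypothetical; substitute your own username and token:

# Confirm the access token works, then log out again
echo "<your-access-token>" | docker login -u your-dockerhub-username --password-stdin docker.io
docker logout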

7. Create the environment variables used in the Jenkinsfile for the Docker image and for the Docker Hub credentials.

  • Go to Manage Jenkins -> Configure System and under ‘Global properties’ add them as shown in the picture below.
  • Name: DOCKER_BFLASK_IMAGE
  • Value: [your-dockerhub-username/repository-name]
  • Name: DOCKER_REGISTRY_CREDS
  • Value: the ID that was generated when you added the Docker Hub credentials to Jenkins.
Docker environment credentials

8. Create a new Jenkins job.

  • In your Dashboard go to New Item -> Pipeline and enter the name of your project.
Jenkins’ projects
  • Select 'GitHub project' and add your GitHub repository.
Jenkins pipeline configuration
  • In the Pipeline section, under SCM select ‘Git’ and enter your repository again.
  • Note: If it is a private repository, you will need to provide your GitHub credentials.
Jenkins SCM configuration
Jenkins SCM configuration
  • Build the pipeline.
Successful Build
  • Verify that the image was pushed to your Docker Hub repository, as shown below.
Docker Hub Flask repository

9. Finally, add a webhook to your pipeline to trigger a build whenever there is a change in your Github repository.

  • Go to Settings in your GitHub repository and then to 'Webhooks'.
  • Payload URL: http://[EC2-IP-Address]:8080/github-webhook/
  • Content type: application/json
  • Trigger only on the push event
Github settings
  • Go back to your pipeline configuration and, under Build Triggers, check the 'GitHub hook trigger for GITScm polling' box shown below:
Jenkins pipeline configuration
  • Note: In the background, GitHub makes an HTTP POST request to the Jenkins server; for the server to accept it, the security groups need to be updated to whitelist GitHub's public IP addresses. Therefore, retrieve GitHub's list of IP addresses from https://api.github.com/meta (see the snippet after this list).
Github’s list of IP addresses
  • Create a new security group and add GitHub's IP ranges as inbound rules for port 8080.
Github Webhooks security group
  • Associate the security group to your EC2 instance.
Adding the new security group
Associated security groups
  • Change the message returned by the hello_world() function in app.py to whatever you want.
  • Add, commit and push your changes.
  • Go to your Jenkins pipeline and see how a new build is triggered.
  • Note: Understanding webhooks helps you ensure that the latest changes are tested, built, and deployed as quickly as possible. This reduces the time between development and deployment, making the development process faster and more efficient.
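
For reference, the webhook source ranges mentioned in the note above can be pulled straight from the GitHub meta API. A small sketch, assuming jq is installed (sudo apt-get install -y jq):

# List the IP ranges GitHub uses to deliver webhooks
curl -s https://api.github.com/meta | jq '.hooks'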

Project Summary

As a DevOps engineer, it is crucial to understand the concepts and best practices presented in this project. By containerizing a Flask application with Docker, using Git for version control, and automating the build, test, and deployment process with Jenkins, this project demonstrates how to optimize software development and deployment workflows.

Understanding the benefits and nuances of containerization, version control, and automation is a fundamental skill for any DevOps engineer. The use of Docker allows for easier application portability and scalability, while Git provides a reliable and efficient way to manage changes to the application source code. Additionally, automating the build, test, and deployment process with Jenkins helps ensure that the latest version of the application is always available to end-users, minimizing the time-to-market for the application.

By grasping the concepts presented in this project, you can develop more efficient and streamlined development workflows, ultimately leading to faster and more reliable application deployment.

Congratulations if you made it this far!

Connect with me on

LinkedIn: https://www.linkedin.com/in/ulises-maga%C3%B1a/

Github: https://github.com/UlisesME

References

