Creating a Full CI/CD Pipeline on AWS with Jenkins, Slack, and GitHub: Part 1 — Project Setup and Dockerization

Rapidcode Technologies
Nov 23, 2023


Hey, folks! Welcome to this cool tutorial where I’ll walk you through building a slick CI/CD Pipeline with Jenkins on AWS. Here’s the game plan:

Part 1 (Right now) → We’ll kick things off by setting up our project. We’ll download a Web App to test our infrastructure and pipeline. We’ll also create and test some Dockerfiles for the project and upload it all to GitHub.

Part 2 (here) → We’ll get Slack in on the action. We’ll create a Bot for Jenkins to keep us posted on how the pipeline’s doing.

Part 3 (here) → It’s time to build the AWS Infrastructure with Terraform. We’ll whip up some EC2 instances, set up SSH keys, create the network infrastructure, and lay the foundation for IAM roles.

Part 4 (here) → We’re not done with AWS yet. In this step, we’ll create S3 buckets and ECR repositories, and finish defining the IAM roles with the right policies.

Part 5 (here) → We’ll fine-tune our Jenkins and Web App instances by making sure the user data is just right.

Part 6 (here) → We’ll put the icing on the cake by implementing the pipeline in a Jenkinsfile. We’ll run the pipeline and see everything come together smoothly. Then, we’ll wrap things up with some final thoughts.

Let’s get started!

INTRODUCTION

Goal

Imagine we’re working on a cool Web App, and we want to make our development and release process super smooth, following the principles of DevOps.

So, here’s the plan: Every time we make changes and push them to our GitHub repository, a CI/CD Pipeline kicks in. This pipeline is like a well-oiled machine, running various tests, checking for security issues, and saving valuable data along the way.

We save the test results in an S3 bucket on AWS for future reference and get instant feedback via Slack. Plus, we store our software components as Docker images in the AWS Elastic Container Registry.

To make all this magic happen, we’re using Terraform, which lets us define our infrastructure programmatically, just like regular code. It’s like turning our entire setup into a smart, automated, code-driven system. Cool, right?

Consideration

The pipeline we’re building here focuses on just one branch. In larger and more advanced projects, you’d typically use multiple branches like Test, Staging, and Production to manage different stages of development.

But the good news is, the techniques we’re using are the building blocks for these more complex setups. With just a few tweaks, you can expand and create multiple branches to suit the needs of your project. It’s all about scaling up when you’re ready.

What we will create

The web app will be served from an EC2 instance, specifically from a Docker container whose image is pulled at boot time from a dedicated AWS Elastic Container Registry repository.

The Jenkins server will be hosted on its own EC2 instance and will be accessible from the outside world on Jenkins’ default port (8080).

Each of these two instances will have its own Elastic Network Interface in its own subnet.

A route table will allow internal communication and let the Internet Gateway correctly route traffic from external users through the VPC to the instances.

Instances will have only the policies necessary to perform their duties, without access to unnecessary AWS services (for obvious security reasons). Since the Jenkins setup scripts will be quite lengthy, we will upload them to an S3 bucket, and in the EC2 user data we will pull them down and run them (this avoids the 16 KB size limit of EC2 user data).
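The user-data trick above can be sketched as a short bootstrap script. The bucket and script names below are hypothetical placeholders, and the instance’s IAM role must allow s3:GetObject on the bucket:

```shell
#!/bin/bash
# Hypothetical EC2 user-data bootstrap: keep the inline user data tiny and
# pull the real (lengthy) setup script from S3, dodging the 16 KB limit.
# Bucket and key names below are placeholders.
set -e
aws s3 cp s3://my-setup-bucket/jenkins-setup.sh /tmp/jenkins-setup.sh
chmod +x /tmp/jenkins-setup.sh
/tmp/jenkins-setup.sh
```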

We will also upload the GitHub SSH keys to AWS Secrets Manager. Let’s see what we are going to build using Terraform:

In our Jenkins pipeline, we’re going to create a series of stages inspired by a well-structured pipeline from the BlueOcean plugin of Jenkins. Here’s what they are:

Pipeline stages specified in the Jenkinsfile we are going to create

  • Setup → This step initializes the variables needed by the pipeline and logs in to the AWS Elastic Container Registry.
  • Build Test Image → This step builds and pushes the docker image for the unit/integration tests.
  • Run Unit Tests → This step runs the unit tests and produces a report which will be uploaded to an S3 bucket. It also sends a Slack message telling the channel the tests’ results.
  • Run Integration Tests → This step runs the integration tests and produces a report which will be uploaded to an S3 bucket. It also sends a Slack Message telling the channel the tests’ results.
  • Build Staging Image → This step builds and pushes the staging image, namely a copy of the production one, which will be used for Load Balancing and Security checks.
  • Run Load Balancing tests / Security checks → This step runs some load balancing tests and performs security checks on the Staging Image. It saves reports which are uploaded to an S3 bucket and it also sends a Slack message telling the channel that these tests have been run.
  • Deploy to Fixed Server → This step builds and pushes the production image and then reboots the EC2 instance hosting the Web App (this instance will be constructed such that it will pull down the new ‘release’ image and run it at each boot).
  • Clean Up → Since we have already pushed the images to the AWS ECR in the previous steps, we can (and we must) remove the old images in the local machine to avoid stacking them up and cluttering the storage. The last uploaded images will be kept, while the older ones will be discarded. This step also clears the config.json file (which otherwise would store the credentials for the remote AWS ECR).
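As a rough sketch of what the Clean Up stage will do (the helper name, image names, and docker calls here are placeholders for what we’ll actually write in the Jenkinsfile in Part 6):

```shell
#!/bin/sh
# Hypothetical Clean Up helper: remove locally built images that were already
# pushed to ECR, then drop the cached registry credentials.
cleanup_images() {
    # remove the given local images; ignore errors if they are already gone
    docker image rm -f "$@" 2>/dev/null || true
    # config.json caches the ECR login token; clear it after the run
    rm -f "$HOME/.docker/config.json"
}
```

In the real pipeline we would call something like `cleanup_images "$REPO:$GIT_HASH"` at the end of every run.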

COMPLETED PROJECT

The completed project is available on GitHub.

A Note for Future Enhancement

There are loads of ways to make this project even cooler. Here are some ideas:

  1. Zero Downtime: Right now, when a successful pipeline completes, the Web App’s hosting EC2 instance reboots. That means some downtime. To keep things running smoothly in the real world, you could explore Blue/Green deployment or consider using Elastic Beanstalk or other solutions to avoid downtime.
  2. Staging Image Setup: Currently, the Staging Image runs as a Docker container on the Jenkins server. It works fine for us, but in a larger setup, it’s smarter to duplicate the necessary infrastructure, deploy the Staging image there, run tests, and then tear it down.
  3. Branch Logic: While our Jenkins Pipeline kicks off on any branch push, it’s a good idea to define more branches and tweak the CI/CD logic to fit your project’s needs better.
  4. Scaling Up: Our infrastructure might struggle with high web traffic. To handle it like a champ, you can think about Vertical Scaling (upgrading to a more powerful instance) or Horizontal Scaling (adding more instances to share the load). You can do this by setting up an Auto Scaling Group and a Load Balancer or by using Elastic Beanstalk. Node.js also offers options for distributing the load among worker nodes.

Setup and Run locally

1. Create a directory

mkdir project1 && cd project1

2. Clone a simple web app project.

git clone https://github.com/rapidcode-technologies-private-limited/simple-nodejs-web-app.git simple-web-app

Remove the .git folder inside (rm -rf simple-web-app/.git), as if we were starting the project anew.

If you want to run the application and see what it provides, we first need to install the dependencies.

3. Install dependencies

Go to the server directory and install the dependencies with the following command:

npm i

4. Run application

After installing the dependencies, run the following command to start the application:

npm run watch

Your application is now running on port 8000, and you can access it by opening the following URL in your web browser:

http://localhost:8000/

To see the simulated users, visit:

http://localhost:8000/users

Now is a good time to test our application. It contains two test suites: a unit test that checks whether our code works properly, and an integration test that checks connectivity to our database by fetching users’ information.

There is no real database like MySQL; instead, a file at ./simple-web-app/server/src/routes/db.json simulates the database.

5. Unit testing

Type the following command:

npm run test:unit

This command uses mochawesome to save the results at server/mochawesome-report/mochawesome.html.

6. Integration testing

Type the following command:

npm run test:integration

This command uses mochawesome to save the results at server/mochawesome-report/mochawesome.html.

7. Load balancing test

For completeness, let’s also try to run the load balancing test.

This test sends a burst of requests to the web app to simulate traffic load.

First, run the application with npm run watch, then open another terminal and run the load test with the following command:

npm run test:load

8. Let's create a repo on GitHub and push the code for further CI/CD

  • Create a repo on GitHub named nodejs-web-app.
  • Rename your local simple-nodejs-web-app directory to nodejs-web-app.
  • Initialize the nodejs-web-app repo with the following command:
git init

Then create a .gitignore file to avoid committing node_modules and other unnecessary files.

You can use the following command for that:

npx gitignore node

Then open the .gitignore file and add the following line at the end:

mochawesome-report

This new line tells git not to commit the mochawesome-report folder to the repository.

Now add all the files to the staging area with the git add . command,
then commit and push to the GitHub repo.

git commit -m "First commit"

Also, rename the branch to main:

git branch -M main
  • Add the GitHub URL as the origin remote:
git remote add origin git@github.com:rapidcode-technologies-private-limited/nodejs-web-app.git

The connection string above will only work if you have set up an SSH key on GitHub; otherwise it will throw an authentication error.

  • Push the code with the following command:
git push -u origin main

Technically, the -u flag adds a tracking reference to the upstream server you are pushing to.

What is important here is that this lets you do a git pull without supplying any more arguments. For example, once you do a git push -u origin main, you can later call git pull and git will know that you actually meant git pull origin main.
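You can see the tracking reference that -u sets up for yourself with a throwaway pair of repositories (the temporary paths below are purely illustrative):

```shell
#!/bin/sh
# Demonstrate 'git push -u' in throwaway repos: after the push, the local
# 'main' branch tracks 'origin/main', so a bare 'git pull' knows what to do.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"            # stand-in for GitHub
git init -q "$tmp/work" && cd "$tmp/work"
git config user.email you@example.com
git config user.name "You"
git commit -q --allow-empty -m "First commit"
git branch -M main
git remote add origin "$tmp/remote.git"
git push -q -u origin main
git rev-parse --abbrev-ref '@{u}'               # prints origin/main
```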

Finally, your code is pushed to your remote repository.


So all is good: we have tested and run our application, and now we are going to create and set up the Dockerfiles.

Docker

Docker is like a superhero for keeping things consistent across different platforms. We’re going to create test, staging, and production images, and they’ll live in the AWS Elastic Container Registry. These images get a cool Git commit hash tag, so you can easily match them up with the right code.
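To make the commit-hash tagging concrete, here is a tiny demo in a throwaway repo; the image name nodejs-web-app is just our repository name, and the build command is only echoed rather than run:

```shell
#!/bin/sh
# Derive a Docker image tag from the current Git commit hash, so every image
# maps back to the exact code that produced it. Runs in a throwaway repo.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email ci@example.com
git config user.name "CI"
git commit -q --allow-empty -m "init"
GIT_HASH=$(git rev-parse --short HEAD)     # e.g. a1b2c3d
IMAGE_TAG="nodejs-web-app:$GIT_HASH"       # hypothetical repository name
echo "$IMAGE_TAG"
# the pipeline would then run: docker image build -t "$IMAGE_TAG" .
```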

Let's start by creating the Dockerfiles for our web application. We are going to create two: one for testing our code, and a second for the staging/production environment.

The testing Dockerfile is for running the unit and integration tests that ensure our code works fine. This Dockerfile is simple and for testing purposes only, so it is not optimized.

Let's create it. 🚀🐳

1. Test image

Create a new file in the root directory ( nodejs-web-app/ ) called Dockerfile.test (it is not inside /server ):


And put there the following code:

FROM node:lts-alpine@sha256:b2da3316acdc2bec442190a1fe10dc094e7ba4121d029cb32075ff59bb27390a

COPY . /opt/app

WORKDIR /opt/app/server

RUN npm i
  • The first line: FROM node:lts-alpine@sha256:..... will pull down the node:lts-alpine docker image from Docker Hub.
  • The second line COPY . /opt/app will just copy everything in the current directory to the /opt/app folder inside the container.
  • With the third line WORKDIR /opt/app/server we set the working directory to be that of the server so that we are ready for the dependencies installation.
  • The last line RUN npm i installs all the modules required to run the tests.

Since we do not need every file/folder of the project in the Docker image, we can add a .dockerignore file specifying which files and folders Docker should ignore:

Create a .dockerignore file in the root directory and paste the following content:

Dockerfile.test
node_modules
.gitignore
.git

Now let's create a Docker image by following the command:

docker image build -t hello:world -f Dockerfile.test .

This will build a Docker image with the tag hello:world. The -f option specifies the Dockerfile name, and . is the build context (the current directory).

You can list the images with the docker image ls command.

Create a container by following the command:

docker container run -d -i --name testing-ctr hello:world

-d → Detached, meaning that the container needs to run in the background;

-i → Interactive; keeps STDIN open so the container stays ‘active’ in the background (together with -d) instead of exiting immediately.

After the container starts, the command above returns the container ID.

Now it’s time to run the test cases in our web app container.

Go inside the container and run the tests:

docker container exec -it b64e /bin/sh

b64e is just the first few characters of the container ID (a unique prefix is enough); use your own container’s ID here.

You will then get a shell prompt inside the container.

Now type the following command inside the container to run the tests:

npm run test:unit

If the tests pass, that is proof our code is working fine.

COOL COOL …..👏

2. Staging / Production Image

Let’s now create the Dockerfile in the root directory nodejs-web-app (not inside /server) that will build the staging/production image.

Paste the following code:

FROM node:lts-alpine@sha256:b2da3316acdc2bec442190a1fe10dc094e7ba4121d029cb32075ff59bb27390a

COPY --chown=node:node . /opt/app

WORKDIR /opt/app/server

RUN npm i && \
chmod 775 -R ./node_modules/ && \
npm run build && \
npm prune --production && \
mv -f dist node_modules package.json package-lock.json /tmp && \
rm -f -R * && \
mv -f /tmp/* . && \
rm -f -R /tmp

ENV NODE_ENV production

EXPOSE 8000

USER node

CMD ["node", "./dist/bundle.js"]

Let’s analyze it:

The first line is the same as before, we pull down the node:lts-alpine docker image with a specific sha256 to have consistent builds.

COPY --chown=node:node . /opt/app → Copies everything in the current directory . to /opt/app inside the container, but assigns ownership to the node user, which the base image provides for security reasons. We’ll run the server as this low-privilege user, so if the server gets compromised, the attacker still wouldn’t have much power (unless privilege-escalation vectors exist);

We set the working directory WORKDIR to /opt/app/server inside the container;

RUN → Executes the following chain of commands: npm i installs all the dependencies; chmod 775 -R ./node_modules allows us to prune some modules afterward; npm run build uses Webpack to build our compact staging/production bundle; npm prune --production removes all the ‘devDependencies’. The last four commands (mv ... rm ... mv ... rm) delete everything in the working directory except the dist and node_modules folders and the package.json and package-lock.json files. If we had bash available, these four commands could be combined into one: bash -O extglob -c 'rm -r !("dist"|"node_modules"|"package.json"|"package-lock.json")' ;
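The extglob trick mentioned above can be tried outside the image in a throwaway directory (it needs bash, which the alpine base image does not ship by default; the file names below are just stand-ins for the build leftovers):

```shell
#!/bin/sh
# Recreate the cleanup step: delete everything except dist, node_modules,
# package.json and package-lock.json, using bash's extglob negation pattern.
set -e
tmp=$(mktemp -d) && cd "$tmp"
mkdir dist node_modules
touch package.json package-lock.json webpack.config.js README.md
bash -O extglob -c 'rm -r !("dist"|"node_modules"|"package.json"|"package-lock.json")'
ls    # only the four survivors remain
```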

ENV NODE_ENV production → Sets the environment variable NODE_ENV to production, which enables a lot of optimizations/good practices for a Node.js project in production (see https://expressjs.com/en/advanced/best-practice-performance.html);

EXPOSE 8000 → Declares that the container listens on port 8000 (the actual mapping to the host is done at run time with -p);

USER node → Will set the user to use when running the image;

CMD ["node", "./dist/bundle.js"] → Will start the server which is Webpacked in the dist folder in the bundle.js file.

Let’s add Dockerfile to the .dockerignore:

Dockerfile
Dockerfile.test
node_modules
.gitignore
.git

Create the Docker image with the following command:

docker image build -t hello:prod .

Now run the container, mapping the inside port 8000 to our port 8000 :

docker container run -d -i --name production-ctr -p 8000:8000 hello:prod

This command returns the full Docker container ID, for example:

f68a1e2ab222ea6aa579c6cd50da7b0de473222a6fd7d2a9d420f9a0c6cb32f9

Navigating to http://localhost:8000 we should see the home page of our Web App.

Cool cool!

You can also run a test inside this container. Let’s go inside it and perform a load test:

docker container exec -it f68a /bin/sh

Once inside, we can ‘remove + install’ loadtest and run some load balancing tests (we need to make sure that loadtest is not in the ‘devDependencies’; otherwise it will not be installed, since we are in ‘production’ mode).

COOL COOL!

Now we are ready to add, commit, and push these changes to our remote repository.

Make sure you are in the nodejs-web-app directory:

git add .
git commit -m "Added Dockerfiles for building test and staging/production images"
git push -u origin main

If we head over to GitHub, you’ll find our shiny new files in the repository.

That wraps up this first part! We’ve set up our simple-nodejs-web-app, grabbed the server, and crafted Dockerfiles to whip up those fantastic test and staging/production images. Plus, we’ve made sure our GitHub repo is all set to push our local changes to the remote one. Why? Because in the future, we’re going to set up the Webhook that will kickstart our Jenkins Pipeline.

In the next step, we’re diving into the world of Slack. Stay tuned, and we’ll catch you there!

Cheers to a smoother development journey! 🚀📦🎉

