How I got nominated for excellence in Udacity’s Cloud Devops Capstone Project

Sagarnil Das
13 min read · Jul 13, 2020


I am a Data Scientist by profession. I am always working with algorithms, solving new use cases, reading research papers and writing code. Don't get me wrong, I am simply in love with this field. But I always wanted to understand the infrastructure side of an application: how do we set up an application, scale it, monitor it, the whole nine yards? Once again, I resorted to one of my favorite learning platforms, Udacity. I found out about their AWS Cloud DevOps Engineering Nanodegree program and decided to enroll.

I was feeling good about this decision, and most of the lessons were really great, apart from some parts where I felt the instructional material was haphazard and erroneous. However, one of the biggest problems I faced came when I started working on the capstone project: there was a major disconnect between what you were required to achieve in this project and what the previous lessons had taught. I also found out in the forums that numerous students were having the same problem and really didn't have a structured idea of how to complete the project. But before I go into the details of this project and how I did it, let me show you something. This is what the reviewer wrote in his review of my capstone project. :D

You can find the complete project here.

The compliment I received from the reviewer :D

On his suggestion to write a blog post, and since this is a pretty prevalent problem amongst students, I thought my own research, which took hours, might help some of you who are taking this excellent Nanodegree to complete your capstone project successfully, in a step-by-step fashion. So let's jump right in!

Aim of the Capstone project

In this project, we were required to apply the skills and knowledge developed throughout the Cloud DevOps Nanodegree program to deploy a complete web application on AWS, dockerized and running inside a Kubernetes cluster. These include:

  • Working in AWS
  • Using Jenkins to implement Continuous Integration and Continuous Deployment
  • Building pipelines
  • Working with Ansible and CloudFormation to deploy clusters
  • Building Kubernetes clusters
  • Building Docker containers in pipelines

Apart from the basic project rubric, we were told that to make our project stand out, we could do quite a few things, including but not limited to:

  • Add linting to the pipeline to check your Dockerfile for errors.
  • Add a security scan to the pipeline to detect vulnerabilities.
  • Add a rolling or a blue-green deployment to your pipeline. To learn about rolling deployments click here, and to learn about blue-green deployments, click here.
  • Perform additional post-deployment testing in the pipeline.

This is what I did as my basic project submission:

I developed a CI/CD pipeline for a microservices application with rolling deployment. I took a Node.js app and dockerized it. Finally, I deployed it in a Kubernetes cluster in AWS with 3 worker nodes and a load-balancing instance. I used Jenkins for CI/CD.

And these are the things I did as additional steps (which, I guess, earned me such a glowing review! :D)

  • Adding security scanning as part of the continuous integration process.
  • Integrating Slack with Jenkins to send automated messages describing the results of the pipeline runs.
  • Performing additional post-deployment tests: whether the app is running successfully, and whether it rolled out successfully.
  • Pruning the Docker environment.
  • Mapping a custom Route53 domain to the LoadBalancer DNS name.

The app I will deploy is the Bcrypt app (https://github.com/felladrin/bcrypt-sandbox), used for encryption and decryption. It is written in Node.js.

Step-by-step guidance for completing this project

Step 1: Setting up the Jenkins instance. I used my local Ubuntu machine as the Jenkins instance, and I added extra plugins to it such as Blue Ocean, Aqua MicroScanner and Pipeline: AWS Steps. I also installed the Slack integration plugins and set up global credentials (AWS credentials, Dockerhub credentials, etc.) along with a specific workspace to send automated messages at different stages of the pipeline. Let's look at the sub-steps one by one.

a) Install Jenkins — You can find the documentation here.

b) Log in to your Jenkins instance. After you have unlocked Jenkins, create your first admin user and log in.

Jenkins Login Screen

c) Jenkins home screen. Once you have logged in, you will see a home screen like this:

Jenkins Home Screen

d) Install necessary plugins. We will install a number of plugins to make our lives easier: the Blue Ocean plugin for easy creation of pipelines, the Aqua MicroScanner plugin for security checks, the Pipeline: AWS Steps plugin to add Jenkins pipeline steps that interact with the AWS API, the Slack Upload plugin to upload files generated during the build process to Slack, and the Slack Notification plugin for receiving Slack notifications about the pipelines we run. To do this, go to Manage Jenkins > Manage Plugins > Available and install the plugins mentioned above. For Blue Ocean you will find multiple plugins; install all of them just to be on the safe side, as this will be one of the major plugins we use. After the plugins are downloaded and installed, restart Jenkins.

e) Configure system. After restarting Jenkins, go to Manage Jenkins > Configure System. Here we will do the following:

  • Configure Slack messages like this:
Jenkins Slack Message settings
  • Configure the Aqua MicroScanner token. Go to https://microscanner.aquasec.com/signup and register. You will receive a token in your email; copy it and paste it into the Aqua MicroScanner section.
Jenkins Aqua Microscanner Token
  • Create a Slack workspace for receiving Jenkins notifications and add the Jenkins app to it. Now, in Jenkins, add the workspace name, click the Add button to add your Slack credentials, and set the default channel where the notifications should arrive.
Jenkins Slack credentials
  • Add the Slack token. Add the Jenkins app to Slack; here is a very good article on how to do that: https://www.baeldung.com/jenkins-slack-integration. Once you have done so, you will receive a Slack bot token. Paste it into Jenkins in the following section:
Slack Bot Token

f) Configure credentials. Go to Manage Jenkins > Manage Credentials and add all the credentials you will need later: your AWS user credentials (access key ID and secret access key) and your Dockerhub credentials (for pushing the image to and pulling it from Dockerhub). You already set up the Slack credentials in the last step; if you haven't, do it here.

Jenkins Global Credentials for AWS, dockerhub and slack

Congratulations! Your Jenkins instance is up and ready to go and this concludes step 1.

Step 2: Create an AWS Elastic Kubernetes Service Cluster

We can do this step either via a CloudFormation script or with eksctl.

a) If you wish to use AWS CloudFormation, you can go here in my repo and use these files as templates. Go to AWS CloudFormation, create a new stack, and upload the file as the template; the stack will then be created.

b) An even easier way is to use eksctl, the official CLI tool for Amazon EKS. The equivalent command to deploy the same stack via eksctl is:

eksctl create cluster --name capstoneclustersagarnil --version 1.16 --nodegroup-name standard-workers --node-type t2.medium --nodes 3 --nodes-min 1 --nodes-max 4 --node-ami auto --region ap-south-1

eksctl does the major heavy lifting here. It creates an EKS cluster, deploys a node group of 3 t2.medium workers, and sets up an autoscaling group with a minimum of 1 node and a maximum of 4 nodes. It also creates the necessary VPC, the public and private subnets, the internet gateway, and the route tables.

Now that our infrastructure is up, we can focus on the app and getting it deployed. Clone or download this repo into your project folder: https://github.com/felladrin/bcrypt-sandbox, and rename the folder to 'app'.

Step 3: Creating a Dockerfile

In your project folder, where the 'app' folder is located, create a Dockerfile along the lines of the sketch shown after the list below.

Here is what this Dockerfile does:

  • The first line pulls the Node.js base image.
  • Inside the container, it sets the WORKDIR to /bcrypt.
  • It copies all the contents of the app/ folder into the /bcrypt/ folder.
  • It then runs the commands needed to install the dependencies, update the app, and build, run and serve it.
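
Here is a minimal sketch of such a Dockerfile. The base image tag, the npm scripts, and the port are my assumptions based on the description above, not necessarily the exact file I used:

# A minimal sketch; base image tag, npm scripts and port are assumptions.
# Node.js base image (the "first line" mentioned above)
FROM node:10
# Create /bcrypt as the working directory inside the container
WORKDIR /bcrypt
# Copy the contents of the app/ folder into /bcrypt/
COPY app/ /bcrypt/
# Install dependencies, update the app and build it (assumed npm scripts)
RUN npm install && npm update && npm run build
# Port the app is served on (see the curl check later)
EXPOSE 9080
# Run and serve the app (assumed npm script)
CMD ["npm", "run", "serve"]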

Step 4: Create a yaml file for deploying the app in the EKS cluster

The file named "capstone-k8s.yaml" contains the code to deploy the dockerized image of the app to the EKS cluster we created in step 2. It creates the deployment with the desired number of replicas, the Elastic Load Balancer (exposed as a Kubernetes Service), and opens the ports the app needs to run.
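
As a rough idea, the manifest could look something like the sketch below. The image name, replica count, and ports are assumptions; the Service of type LoadBalancer is what provisions the Elastic Load Balancer:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: capstone-app-sagarnil
spec:
  replicas: 3                      # one pod per worker node (assumption)
  selector:
    matchLabels:
      app: capstone-app-sagarnil
  template:
    metadata:
      labels:
        app: capstone-app-sagarnil
    spec:
      containers:
        - name: capstone-app-sagarnil
          image: sagarnil/capstone-app-sagarnil:latest  # hypothetical Dockerhub image name
          ports:
            - containerPort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: capstone-app-sagarnil      # queried later with kubectl get service/capstone-app-sagarnil
spec:
  type: LoadBalancer               # provisions the Elastic Load Balancer
  selector:
    app: capstone-app-sagarnil
  ports:
    - port: 9080                   # the port we curl later
      targetPort: 9080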

Step 5: Create a JenkinsFile

This is one of the most important files in this project: Jenkins uses it to run the whole CI/CD pipeline. Let's try to understand what is happening in this file, part by part.

a) Sending an automated Slack message at the start of the pipeline and linting the Dockerfile

For this part, you need to install hadolint on your Jenkins instance. Here we declare a pipeline and tell Jenkins that it will consist of multiple stages. In the first stage, we send a customized message to the Slack channel we set up earlier, announcing that a job has started. In the next stage, we check our Dockerfile with hadolint for syntax errors and print an appropriate message.
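
A sketch of how these first two stages might look in the Jenkinsfile; the message text is illustrative:

pipeline {
    agent any
    stages {
        stage('Send slack message: job started') {
            steps {
                // slackSend is provided by the Slack Notification plugin
                slackSend(color: 'good', message: "Job started: ${env.JOB_NAME} build #${env.BUILD_NUMBER}")
            }
        }
        stage('Lint Dockerfile') {
            steps {
                // hadolint must be installed on the Jenkins instance;
                // a non-zero exit code fails the stage
                sh 'hadolint Dockerfile'
            }
        }
        // ... the remaining stages described below go here
    }
}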

b) Using Aqua MicroScanner to check for vulnerabilities

In this stage, we use the Aqua MicroScanner plugin we downloaded to run a security scan against our image and check for any vulnerabilities.
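
The plugin exposes an aquaMicroscanner pipeline step; a sketch of the stage, with an illustrative image name, might look like this:

stage('Scan image with Aqua MicroScanner') {
    steps {
        // aquaMicroscanner is provided by the Aqua MicroScanner plugin;
        // it fails the build if disallowed vulnerabilities are found
        aquaMicroscanner(imageName: 'node:10',
                         notCompliesCmd: 'exit 1',
                         onDisallowed: 'fail',
                         outputFormat: 'html')
    }
}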

c) Building, tagging and pushing the Docker image to Dockerhub

In this step, we first run a shell script to build our Docker image, naming it capstone-app-sagarnil. We then log in to Dockerhub with the credentials we saved in the Jenkins credential manager earlier. Finally, we tag the image and push it to Dockerhub.
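
A sketch of this stage, assuming the Dockerhub credentials were saved in Jenkins under the ID 'dockerhub' (the ID and the image tag are illustrative):

stage('Build, tag and push docker image') {
    steps {
        withCredentials([usernamePassword(credentialsId: 'dockerhub',
                                          usernameVariable: 'DOCKER_USER',
                                          passwordVariable: 'DOCKER_PASS')]) {
            sh 'docker build -t capstone-app-sagarnil .'
            // --password-stdin keeps the password out of the process list
            sh 'echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin'
            sh 'docker tag capstone-app-sagarnil "$DOCKER_USER/capstone-app-sagarnil:latest"'
            sh 'docker push "$DOCKER_USER/capstone-app-sagarnil:latest"'
        }
    }
}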

d) Deploying the dockerized image of the app in the Kubernetes cluster

This is a major step, where we deploy our dockerized image into the EKS cluster we built in step 2. First we log in to AWS with the credentials we saved in the Jenkins credential manager, and then do the following (a sketch follows the list):

  • Update the kubeconfig file with our EKS cluster name.
  • Point Kubernetes to the right context. (You can find the ARN if you click and select your EKS cluster in the console.)
  • Apply the capstone-k8s.yaml file with kubectl to build the resources discussed in Step 4.
  • Check which nodes are up.
  • Check if the deployment succeeded.
  • Check if the pods are up.
  • Check if the service is up.
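
Here is a sketch of that stage, assuming the AWS credentials were saved in Jenkins under the ID 'aws-creds' (illustrative) and with the cluster ARN left as a placeholder:

stage('Deploy to EKS cluster') {
    steps {
        // withAWS is provided by the Pipeline: AWS Steps plugin
        withAWS(credentials: 'aws-creds', region: 'ap-south-1') {
            sh 'aws eks update-kubeconfig --name capstoneclustersagarnil'
            sh 'kubectl config use-context <your-eks-cluster-arn>'
            sh 'kubectl apply -f capstone-k8s.yaml'
            sh 'kubectl get nodes'
            sh 'kubectl get deployments'
            sh 'kubectl get pods'
            sh 'kubectl get service/capstone-app-sagarnil'
        }
    }
}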

e) Check if the app is up

Here we curl the address of the Elastic Load Balancer; if you get a response, you were successful. You can get this address from the previous command:

sh "kubectl get service/capstone-app-sagarnil"

f) Check if the app rollout is successful

With kubectl's rollout status command, we check whether the app rolled out correctly.
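
A sketch, assuming the deployment is named capstone-app-sagarnil as in the manifest above:

stage('Check rollout status') {
    steps {
        // blocks until the rollout completes and fails the stage
        // if the rollout does not succeed
        sh 'kubectl rollout status deployment/capstone-app-sagarnil'
    }
}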

g) Prune unnecessary Docker resources

Clean up the Docker system and you are done!
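
The stage itself can be as small as this sketch:

stage('Prune docker system') {
    steps {
        // -f skips the interactive confirmation prompt
        sh 'docker system prune -f'
    }
}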

Step 6: Push your files to a GitHub repo

Now that you have your JenkinsFile ready, your folder structure should look somewhat like this:

File Structure after creating the JenkinsFile

I have some extra files here, but you really don't have to worry about them; they won't be used by Jenkins anyway. I created them to test the app locally before automating everything with Jenkins. As long as you have the app folder, the capstone-k8s.yaml file, the Dockerfile and the Jenkinsfile, you're good to go!

Now create a new GitHub repository and push all these files to it.

Files pushed to github

Step 7: Run the pipeline from the Blue Ocean plugin in Jenkins

In Jenkins, click on "Open Blue Ocean", then click on "New Pipeline". You should arrive at a screen like this:

Blue Ocean new pipeline

Click on GitHub. If you are using it for the first time, you have to allow Jenkins to access your GitHub repos. Then select the repository you pushed your files to in Step 6 and click "Create Pipeline".

Blue Ocean select repository

Once you click on "Create Pipeline", the pipeline will start: Jenkins automatically looks for the Jenkinsfile and executes all the commands in it. Here are some sample screenshots from a run of my pipeline.

a) Sending a Slack message at the start of the pipeline

Sending a Slack message at the very beginning of the pipeline

b) Hadolinting Dockerfile

I tried both scenarios: first with an erroneous Dockerfile, and then after correcting it.

Pipeline Failed
Pipeline Succeeded

c) Run Aqua MicroScanner

Run the security scanner on the Docker image via the Aqua MicroScanner plugin.

Aqua Microscanner

d) Build and push the Docker image and deploy it in the Kubernetes cluster

  • Build and push the Docker image to Dockerhub.
  • The next step is a multi-step process: we create the Kubernetes deployment from the Docker image and the load balancer service that fronts the three worker nodes we deployed earlier, via the capstone-k8s.yaml file. The load balancer gets deployed as a service with an external IP; this is where we will find our app, at port 9080. The following screenshots show the deployment step and the autoscaled worker node instances up and running.
Deployment in Kubernetes cluster
Kubernetes Worker nodes up and running

e) Checking if the app is up

As post-deployment steps, we first check if the app is up and running by curling the DNS name of the load balancer, followed by the port number, 9080. We also send a Slack message if the app is up.

Checking if app is up with a curl

f) Check if the rollout of the app was successful

Next we check whether the app rolled out successfully.

Checking for rollout

g) Prune the Docker system. Finally, we prune the Docker environment to delete any unused resources.

Step 8: Check if the app is up

Whew, that was a long journey! But I hope you found it worth it. Just a few steps are left. Check if your app is up and running at the DNS name of the load balancer, at port 9080.

App up and running in the DNS of the LoadBalancer

Step 9: Check if you received the necessary messages in your Slack channel

Check your Slack channel for all the messages, including pipeline start, pipeline end, pipeline success and pipeline failure.

Slack messages received

Step 10: Reroute your DNS LoadBalancer with a custom domain name by Route53

This is an additional step you don't really have to do; many of the steps I took went beyond the basic project rubric. Note that this one costs more money than the rest of the resources. Here, I purchased a custom domain called darkarosh.net and mapped the DNS address of the load balancer to a custom subdomain, bcrypt.darkarosh.net. So instead of that long DNS name, I can just open bcrypt.darkarosh.net:9080 and reach the app.

Creation of a custom Record Set for our app

As a final step, we registered a domain in Route53 and, in the hosted zone, created a new record set with an alias for the DNS name of the load balancer, so people can reach the app easily without having to remember a huge DNS name.
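
For reference, the same record set can be created from the AWS CLI with a change batch roughly like the sketch below. The hosted zone IDs and the ELB DNS name are placeholders; an alias A record needs the load balancer's own hosted zone ID, which you can read off the ELB console:

aws route53 change-resource-record-sets \
  --hosted-zone-id <your-route53-hosted-zone-id> \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "bcrypt.darkarosh.net",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "<elb-hosted-zone-id>",
          "DNSName": "<your-elb-dns-name>",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'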

Final app up and running in our custom domain

That's it, folks! It was a long journey, but in the end, when you see your own app running on a distributed cluster, there's really nothing more satisfying. Especially for me: coming from a completely different background (Data Science), completing this project, with all the background research and figuring things out, gave me a really amazing sense of achievement. I hope this article has been of some help to you. If you have any questions about any of the steps, please feel free to comment. I will definitely get back to you.

Cheers and stay well!

