Advanced DevSecOps Multiple Microservices Kubernetes Project Using AWS EKS, Jenkins, and ArgoCD

Soham Valsangkar
8 min read · May 22, 2024


Project Introduction: In this project, we leverage the power of AWS EKS, Jenkins, and ArgoCD to build, test, secure, and deploy multiple microservices. Our goal is to demonstrate a comprehensive DevSecOps pipeline that integrates state-of-the-art tools and practices for continuous integration and continuous deployment (CI/CD) within a microservices architecture.

Project Overview:

  1. Jenkins Server Configuration: Install and configure the essential tools on the Jenkins server, including Jenkins itself, Docker, SonarQube, kubectl, the AWS CLI, and Trivy.
  2. SonarQube Integration: Integrate SonarQube for code quality analysis in the DevSecOps pipeline.
  3. Docker Script: Create a script that builds the Docker images for all microservices and pushes them to their ECR repositories.
  4. Release Script: Create a script that updates the Kubernetes manifest file for every microservice and pushes the changes to the Git repo.
  5. Jenkins Pipelines: Create Jenkins pipelines for deploying code to the EKS cluster.
  6. EKS Cluster Deployment: Using the eksctl command, we will create an EKS cluster, a managed Kubernetes cluster on AWS, to deploy this project.
  7. ECR Repositories: Create an ECR repository for every service in the src folder and make sure to keep the images private.
  8. ArgoCD Installation: Install and set up ArgoCD for continuous delivery and GitOps.
  9. Jenkins Pipeline: The Jenkins pipeline for deploying microservices to the EKS cluster automates code checkout, code quality analysis, security scans, Docker image building, and Kubernetes deployment.
  10. ArgoCD Application Deployment: Use ArgoCD to deploy all the microservices to the EKS cluster.

Prerequisites:

Install and Configure Jenkins, Docker, AWS CLI, Trivy, kubectl, and eksctl

Step 1: Configure Jenkins Server:

Install Jenkins and log in to the Jenkins server.

We will store some credentials in Jenkins for this project

We will need the AWS_ACCOUNT_ID; save it as secret text.

We will also need GitHub credentials

Step 2: Install and Configure SonarQube Using the Docker Image

docker run -itd -p 9000:9000 sonarqube:latest

Run this command to start the SonarQube server as a Docker container.

Go to localhost:9000 and log in with username admin and password admin; you will be prompted to set a new password. We will create a project called Multiple-Microservices-Deployment.

Click on Administration, then Security, and select Users.

Click on Tokens and generate a new token. Copy this token somewhere safe.

Now create a webhook for the Quality Gate.

Click on Administration, then Configuration, and select Webhooks.

We will need to add the webhook URL as:

http://jenkins-server-ip:8080/sonarqube-webhook/

I have used the private IP of my Jenkins server here.

Now we need to create the project Multiple-Microservices-Deployment.

Click on Locally and use the existing token.

Select Linux as the OS and Other as the build technology.

After performing the above steps, you will get the analysis command, which you can use in the Jenkins pipeline.
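The generated command will look roughly like the sketch below; the project key matches the project we created, while the host URL and token are placeholders for your own values:

# Rough shape of the generated analysis command (host URL and token are placeholders)
sonar-scanner \
  -Dsonar.projectKey=Multiple-Microservices-Deployment \
  -Dsonar.sources=. \
  -Dsonar.host.url=http://<sonarqube-server-ip>:9000 \
  -Dsonar.login=<sonarqube-token>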

We will need to create a secret text credential in Jenkins for the SonarQube token.

Then head to Manage Jenkins > System and add the SonarQube server configuration there.

GitHub Repo: You can clone this repo for the source code:

https://github.com/SohamV1/Multiple-Microservices-Deployment.git

Step 3: Create a Script for Building Docker Images and Pushing Them to ECR:

#!/bin/bash

set -euo pipefail

# Directory this script lives in (Scripts/), used to locate ../src
SCRIPTDIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
echo "${SCRIPTDIR}"

log() { echo "$1" >&2; }

# These are passed in by the Jenkins pipeline as environment variables
TAG="${BUILD_NUMBER}"
AWS_DEFAULT_REGION="${AWS_DEFAULT_REGION}"
REPO_PREFIX="${REPO_PREFIX}"
echo "${TAG}"
echo "${REPO_PREFIX}"

# Loop over every microservice directory under src/
while IFS= read -r -d '' dir; do
  svcname="$(basename "${dir}")"
  if [[ ${svcname} == .* ]]; then
    echo "Skipping hidden directory: ${svcname}"
    continue
  fi
  builddir="${dir}"
  image="${REPO_PREFIX}${svcname}:${TAG}"
  (
    # cartservice keeps its Dockerfile one level deeper, in its src/ subfolder
    if [[ ${svcname} == "cartservice" ]]; then
      builddir="${dir}/src"
    fi
    cd "${builddir}"
    # Clean up unused images and containers so the Jenkins node does not run out of space
    docker system prune -f
    docker container prune -f
    log "Building and pushing: ${image}"
    # Authenticate Docker against the private ECR registry, then build, tag, and push
    aws ecr get-login-password --region "${AWS_DEFAULT_REGION}" | docker login --username AWS --password-stdin "${REPO_PREFIX}"
    docker build -t "${svcname}" .
    docker tag "${svcname}" "${image}"
    docker push "${image}"
  )
done < <(find "${SCRIPTDIR}/../src" -mindepth 1 -maxdepth 1 -type d -print0)

log "Successfully built and pushed all the images"

I have saved this as Scripts/make-docker.sh with the source code.

In this script, we enter the directory of each microservice, build its Docker image, tag the image with that microservice's ECR repository, and push it.
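To give a rough idea of how the Jenkins pipeline would call this script, here is a hypothetical invocation; the values are placeholders, and REPO_PREFIX is assumed to end with a trailing slash because the script concatenates it directly with the service name:

# Hypothetical invocation from the repo root (all values are placeholders)
export BUILD_NUMBER=42
export AWS_DEFAULT_REGION=us-east-1
export REPO_PREFIX="<AWS_ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/"
bash Scripts/make-docker.sh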

Step 4: Create a script to edit the Kubernetes manifest for each microservice and push the changes to GitHub

#!/bin/bash

set -euo pipefail

# Directory this script lives in (Scripts/), used to locate ../src and the manifests
SCRIPTDIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
echo "${SCRIPTDIR}"

log() { echo "$1" >&2; }

# These are passed in by the Jenkins pipeline as environment variables / credentials
REPO_PREFIX="${REPO_PREFIX}"
GITHUB_TOKEN="${GITHUB_TOKEN}"
GIT_USER_NAME="${GIT_USER_NAME}"
GIT_REPO_NAME="${GIT_REPO}"
TAG="${BUILD_NUMBER}"

edit_k8s() {
  for dir in "${SCRIPTDIR}"/../src/*/; do
    svcname="$(basename "${dir}")"
    if [[ ${svcname} == .* ]]; then
      echo "Skipping hidden directory: ${svcname}"
      continue
    fi
    image="${REPO_PREFIX}${svcname}:${TAG}"
    echo "${image}"
    file="${SCRIPTDIR}/../kubernetes-manifests/${svcname}.yaml"
    # Point the service's manifest at the image that was just pushed for this build
    sed -i "s|image:.*${svcname}.*|image: ${image}|g" "${file}"
  done
  cd "${SCRIPTDIR}/.."
  git add kubernetes-manifests/
  git commit -m "updates manifest files to ${TAG} version"
  # Push using the GitHub token so Jenkins can authenticate non-interactively
  git push "https://${GITHUB_TOKEN}@github.com/${GIT_USER_NAME}/${GIT_REPO_NAME}" HEAD:master
}

edit_k8s

I have saved this as Scripts/make-release.sh with the source code.

This script updates the image and tag (using the Jenkins build number) in each microservice's Kubernetes manifest and pushes the changes to GitHub.
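A similar hypothetical invocation for this script, with placeholder values and the GITHUB_TOKEN coming from the Jenkins credentials store:

# Hypothetical invocation from the repo root (all values are placeholders)
export BUILD_NUMBER=42
export REPO_PREFIX="<AWS_ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/"
export GITHUB_TOKEN="<github-personal-access-token>"
export GIT_USER_NAME="SohamV1"
export GIT_REPO="Multiple-Microservices-Deployment"
bash Scripts/make-release.sh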

Step 5: Create ECR Repos

We will need to create an ECR repository for each microservice, since each microservice maintains its own image.

Create an ECR repo for each folder in src and keep these repos private, as is the industry standard; a small scripted alternative is shown below.
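Instead of clicking through the console for every service, a loop like this sketch can create them all; it assumes the AWS CLI is configured and that us-east-1 is the target region (ECR repositories are private by default):

# Create one ECR repository per microservice folder (region is an assumption)
for dir in src/*/; do
  svc="$(basename "${dir}")"
  aws ecr create-repository --repository-name "${svc}" --region us-east-1
done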

Step 6: Create a Jenkins Pipeline

Check that all the necessary tools are installed with the following commands

jenkins --version
docker --version
trivy --version
aws --version
kubectl version --client
eksctl --version

We need to install the following plugins from Dashboard > Manage Jenkins > Available plugins

Docker
Docker Commons
Docker Pipeline
Docker API
docker-build-step
OWASP Dependency-Check
SonarQube Scanner

We will need to store some credentials as follows:

We will need to store the AWS Account ID

You will need to store GitHub credentials as well

You can copy the Jenkinsfile from

https://github.com/SohamV1/Multiple-Microservices-Deployment/blob/master/Jenkinsfile

You can paste this as a pipeline script

Step 7: Create an EKS cluster using eksctl

Create an EKS cluster using the below commands.

eksctl create cluster --name Multiple-Microservices-Cluster --region us-east-1 --node-type t2.medium --nodes-min 2 --nodes-max 2
aws eks update-kubeconfig --region us-east-1 --name Multiple-Microservices-Cluster
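Once the cluster is up, a quick sanity check confirms that kubectl points at the new cluster and the worker nodes are ready:

# Verify the current kubeconfig context and the worker nodes
kubectl config current-context
kubectl get nodes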

As the ECR repos are private, the cluster will need an image pull secret to pull the images from them.

kubectl create secret generic ecr-registry-secret \
--from-file=.dockerconfigjson=${HOME}/.docker/config.json \
--type=kubernetes.io/dockerconfigjson
kubectl get secrets
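Note that this copies whatever credentials are currently in ~/.docker/config.json, and ECR tokens expire after about 12 hours. An alternative sketch (account ID and region are placeholders) builds the secret from a freshly issued ECR token:

# Alternative: create the pull secret directly from a fresh ECR token (placeholders below)
kubectl create secret docker-registry ecr-registry-secret \
  --docker-server=<AWS_ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com \
  --docker-username=AWS \
  --docker-password="$(aws ecr get-login-password --region us-east-1)"

Either way, the deployment manifests must reference this secret name under imagePullSecrets for the private images to be pulled.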

Step 8: Install & Configure ArgoCD

Create a separate namespace for ArgoCD and apply the ArgoCD manifests to install it.

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.7/manifests/install.yaml

All pods must be running; to validate, run the command below.

kubectl get pods -n argocd

Now, expose the ArgoCD server as a LoadBalancer using the command below.

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

To access ArgoCD, copy the LoadBalancer DNS name and open it in your browser.

Now, we need to get the password for our ArgoCD server to perform the deployment.

To do that, we have a prerequisite: jq. Install it with the command below.

sudo apt install jq -y

Get the ArgoCD server address and initial admin password using the following commands:

export ARGOCD_SERVER=$(kubectl get svc argocd-server -n argocd -o json | jq --raw-output '.status.loadBalancer.ingress[0].hostname')
export ARGO_PWD=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)
echo $ARGO_PWD

Running the echo command above prints the ArgoCD admin password.

Log in to ArgoCD using the load balancer DNS, the username admin, and the password printed above.
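If you prefer the CLI over the web UI, here is a login sketch using the variables we exported above (the --insecure flag is only needed because the default install serves a self-signed certificate):

# Log in with the ArgoCD CLI using the values exported earlier
argocd login "${ARGOCD_SERVER}" --username admin --password "${ARGO_PWD}" --insecure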

Now connect the Git repo.

Create the application in ArgoCD.

Use Directory as the source type and select the kubernetes-manifests folder.
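The same application setup can also be scripted with the ArgoCD CLI; this sketch assumes the application name multiple-microservices and the default namespace as the deployment target:

# Create the ArgoCD application pointing at the kubernetes-manifests directory
argocd app create multiple-microservices \
  --repo https://github.com/SohamV1/Multiple-Microservices-Deployment.git \
  --path kubernetes-manifests \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default \
  --sync-policy automated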

Once the application is completely synced and healthy, a load balancer for the frontend external service will be created.

If you open the load balancer DNS of the frontend external service in a browser, you can see the application working.

The application will look like this

In this microservices project, gRPC facilitates efficient communication between services like emailservice, checkoutservice, recommendationservice, paymentservice, and shippingservice. Each service uses gRPC to expose methods that other services can call remotely, ensuring fast and reliable inter-service communication.

Here we have used multiple microservices written in different languages, which is a major advantage of the microservices approach.

I have used the Google GKE microservices demo for the source code; you can check out the full repository here:

https://github.com/GoogleCloudPlatform/microservices-demo?tab=readme-ov-file

Finally, we have managed to deploy multiple microservices using EKS, Jenkins, and ArgoCD.

Thanks for reading my article!!


Soham Valsangkar

Enthusiastic DevOps Engineer, AWS Certified Solutions Architect Associate