Microservices with CI/CD, GitOps, and Service Mesh

Minn · Dec 17, 2023

A simple experimental project containing multiple services that construct a microservices architecture.

In this article, we are going to deploy a simple microservices application on a Kubernetes cluster using GitOps practices, build a Jenkins CI Pipeline, and implement Istio Service Mesh and observability on our cluster.

CAUTION
This mini project simulates an environment, and its architecture may lack some best practices. It is not recommended for production use.

Architecture Overview

Prerequisites

  • AWS account
  • kubectl cli
  • helm cli
  • terraform cli
  • aws cli with credentials configured in us-east-1
  • Docker Engine
  • Fundamental knowledge of Git, Kubernetes, and CI/CD will definitely help

Technologies

  • AWS
  • Python
  • Docker
  • Kubernetes
  • Terraform
  • ArgoCD
  • Jenkins
  • Istio
  • Prometheus
  • Grafana
  • Kiali

DISCLAIMER:
This mini project is built as an active-recall exercise for my learning on microservices. It is intended for experimental and educational purposes, and I may have misinterpreted some crucial details. Feel free to highlight any important points I missed.

1. Create a Kubernetes cluster

Since we are architecting containerized microservices, we need infrastructure to deploy them on. You can choose any Kubernetes cluster, either local or cloud-based, such as Amazon EKS.

In this project, I am using Kubernetes in Docker (kind) for our local development cluster.

Here is a walkthrough of my cluster configuration:

….
extraPortMappings:
- containerPort: 30000
  hostPort: 80
  listenAddress: "127.0.0.1"
  protocol: TCP
- containerPort: 31000
  hostPort: 443
  listenAddress: "127.0.0.1"
  protocol: TCP
- containerPort: 32000
  hostPort: 15021
  listenAddress: "127.0.0.1"
  protocol: TCP

If you are using something like Minikube, you may have to edit its config.json file to achieve a similar mapping.

In this configuration, we map host ports used by our Istio service mesh to specific NodePorts in our Kubernetes cluster that will expose our services. The following is a brief explanation; you can always read more on the Istio website.

  • Istio-ingress-gateway: HTTP(80), HTTPS(443)
  • Istiod: 15021

Since this is intended only for local development, I used localhost (127.0.0.1) as the listening address. If you plan to expose services externally, consider using a different address, such as 192.168.1.100, instead of localhost.
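
For completeness, a full kind configuration with these mappings might look roughly like the following (a sketch assuming a single control-plane node; adjust it to your own setup):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 80
    listenAddress: "127.0.0.1"
    protocol: TCP
  - containerPort: 31000
    hostPort: 443
    listenAddress: "127.0.0.1"
    protocol: TCP
  - containerPort: 32000
    hostPort: 15021
    listenAddress: "127.0.0.1"
    protocol: TCP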

Why Istio when we already have kube-proxy?

This is what communication between pods typically looks like.

As you can see here, if the Hello pod wants to communicate with the Happy pod, say with a simple curl, the traffic first goes through kube-proxy. Based on its routing tables and rules, kube-proxy then routes the traffic to the backend pod, the Happy pod in this case.

Indeed, things work well with kube-proxy. However, we cannot always settle for “if it works, it works”. Imagine an e-commerce cluster with hundreds of pods communicating with each other. The payment step loads forever… We know something has crashed! But where do we trace and fix the problem? kube-proxy has no built-in observability. “If it crashes, it crashes”, and the non-stop calls to your support team will follow.

A service mesh comes in handy for solving such observability issues. One of the best-known service meshes is Istio. If we implement Istio in a similar scenario, this is how it looks.

In this re-architecture, each pod has two containers: an application container and an Istio proxy, also known as a sidecar. The Istiod control plane is responsible for injecting these sidecars whenever a pod is launched in a namespace labelled “istio-injection=enabled”.

Now, whenever Hello wants to communicate with Happy, the traffic is sent via the sidecar proxy. The proxy knows the routing configuration and, based on that, routes the traffic to the appropriate pod (through its own sidecar). In this way, the traffic is delivered to the desired backend, and if something fails along the way, we can trace back and identify the point of failure. This is what makes a service mesh different from plain kube-proxy.
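
To make that injection concrete, enabling the sidecars is just a matter of a namespace label. A minimal namespace manifest with injection enabled might look like this (a sketch; the actual namespace file in my manifests repository may differ slightly):

apiVersion: v1
kind: Namespace
metadata:
  name: prod
  labels:
    istio-injection: enabled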

Enough of theory. It’s time to dive straight into the project!

Create kind cluster

To create a kind cluster, we first need the kind CLI installed. I'm on Windows and, since I have the Chocolatey package manager, I will install it via:

choco install kind

If you are on macOS, you can use something like Homebrew.

Now, create the cluster as specified in our configuration file.

kind create cluster --config kind-config.yaml --name hello-happy

This will take 2–10 minutes depending on your system's performance.
Once the cluster is created, you can check its status with:

kubectl get nodes

Use the ‘--context kind-hello-happy’ flag if you have other clusters configured on your system (kind prefixes context names with ‘kind-’).

2. Deploy ArgoCD

For the GitOps practices used in this project, we will use a tool called ArgoCD, a declarative GitOps continuous delivery tool for Kubernetes. You can explore more on its website.

We will need to create a namespace where ArgoCD server will be deployed.

kubectl create namespace argocd

After that, let's apply the ArgoCD installation manifest to the created namespace.

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

This will take 1–5 minutes. You can check the status of ArgoCD pods with:

kubectl -n argocd get pods

Once every pod shows Running, we can access the UI by forwarding the server port to port 8080 on our host. Open a new terminal, since the port-forward command keeps running in the foreground.

kubectl port-forward svc/argocd-server -n argocd 8080:443

You can now access the UI at localhost:8080.
The username is admin, and you can retrieve the initial password via:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

3. Install Istio

For implementing the service mesh in this project, we will use Istio. There are different ways to install Istio, and I find Helm the most convenient.

Add the Istio repository to your local Helm setup.

helm repo add istio https://istio-release.storage.googleapis.com/charts

First, we create a namespace for Istio and deploy Istio's base components.

helm install istio-base istio/base -n istio-system --create-namespace

Then, we deploy the Istiod control plane and its components into that namespace.

helm install istiod istio/istiod -n istio-system --wait --timeout=10m

4. Deploy the apps

OK, now we have Istio and ArgoCD on our cluster. It's time to kick off our deployments alongside the service mesh.

Connect repository

First, we need to connect our repository to ArgoCD so that it can monitor, sync, and deploy according to the changes.
In ArgoCD, under Settings > Repositories, click Connect Repo and fill in your repository details.

If your repository is private, you will need to fill in a username and password. However, since GitHub no longer supports password authentication, we will use a Personal Access Token instead.

You can create a token in your GitHub profile settings under Developer settings > Personal access tokens > Tokens (classic) > Generate new token.
For simplicity, we will check every scope for this token (in a real setup, grant only the scopes you need). The token is displayed only once after creation, so store it somewhere safe. We will need it for our Jenkins pipeline as well.

Create applications

We will create two applications, for the prod and dev environments, each watching the respective branch of our repository.
In the UI, go to Applications and create a new app.

For prod,

  • Name the app something meaningful like ‘hello-happy-app-prod’.
  • Sync policy will be Automatic; keep the other settings at their defaults.
  • Fill in your connected repository details.
  • Revision is the branch you want the app to monitor.
  • Path is where your Kubernetes manifests are located in the repository.
  • For the destination namespace, type prod. Don't worry: my manifests already include a namespace definition file.

For dev,

  • The steps are the same as for prod. However, make sure you check ‘AUTO-CREATE NAMESPACE’, as this branch doesn't contain a namespace file (and it also highlights a handy capability of ArgoCD). A declarative Application manifest equivalent to these UI steps is sketched below.
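
For reference, the same prod app can also be defined declaratively instead of through the UI. A rough sketch of such an ArgoCD Application manifest (the repository URL, branch, and path are placeholders you would replace with your own) could be:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: hello-happy-app-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-username>/<manifests-repo>.git
    targetRevision: main        # the branch this app monitors
    path: <path-to-prod-manifests>
  destination:
    server: https://kubernetes.default.svc
    namespace: prod
  syncPolicy:
    automated: {}               # same as choosing the Automatic sync policy in the UI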

After creation, the applications will start syncing with the repository and deploy the manifests to the desired namespaces. You can expand each app and watch the deployment process as well.

App Health will show Healthy as soon as the deployment succeeds and the state of the real infrastructure matches that in GitHub.

You can also check the pods' health with:

kubectl -n prod get pods

kubectl -n dev get pods

In the prod namespace, you will see each pod shows 2/2. This is because the prod namespace is intended to use the Istio service mesh and is labelled with ‘istio-injection=enabled’. Consequently, any pod created in this namespace gets an additional sidecar-proxy container. You can verify this with:

kubectl -n prod describe pods

It will show something like this.

5. Install monitoring tools

OK, now our prod pods have sidecar proxies installed. How can we ensure that our pods can communicate? We can use something simple like curl. But remember, there is also the default kube-proxy on our cluster.

What if our pods are using that instead of the service mesh?
The easiest and most dangerous way to verify this would be to delete kube-proxy. DON'T DO THIS. Components outside the service mesh would probably fail.

As of now, there is good communication between our Hello and Happy pods. You can check this with:

kubectl -n prod exec <happy_pod_name> -- curl hello-world-service

However, we need to verify which component is actually routing that traffic.

There are various tools for collecting and displaying metrics from a Kubernetes cluster. One popular pair is Prometheus and Grafana, and we will use them in this project.

Install Prometheus via Helm repository

Prometheus is readily available on Helm. Let’s add the repository first.

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

We will deploy our Prometheus and Grafana pods in a separate namespace; let's call it ‘monitoring’. The following command creates the namespace and deploys the Prometheus pods in one go.

helm install prometheus prometheus-community/prometheus -n monitoring --create-namespace

This will take around 1–5 minutes; wait for every pod to reach the Running state. You can check with:

kubectl -n monitoring get pods

Alternatively, you can also check the Prometheus status with the following command:

kubectl wait pods --for=condition=Ready -l app=prometheus -n monitoring --timeout=120s

Install Grafana via Helm repository

After Prometheus is up and running, we can install Grafana.
How do Prometheus and Grafana work as a pair? In simple words, Prometheus collects metrics from our cluster, and Grafana uses those metrics to build monitoring dashboards.

Another question: how does Grafana know it can use Prometheus as a data source? When deploying Grafana, we specify a values file for it. The file declares which data source to use for dashboards, the login credentials, and so on.

Here's the relevant part of the Grafana values file.

datasources:
- name: Prometheus
  type: prometheus
  url: http://prometheus-server.monitoring.svc.cluster.local
  access: proxy
  isDefault: true
adminUser: admin
adminPassword: admin

Similar to Prometheus, Grafana is also available as a Helm chart.

helm repo add grafana https://grafana.github.io/helm-charts

Now, let’s deploy Grafana into the same namespace as Prometheus.
Note: It’s better to deploy Grafana only after all of the Prometheus pods are running.

helm install grafana grafana/grafana -n monitoring -f grafana-value.yaml

Once the Grafana pod shows Running, we can access the UI on port 8081 of localhost via port forwarding.

kubectl port-forward svc/grafana 8081:80 -n monitoring

We have to use a port other than 8080, since that one is already taken by the ArgoCD server. The login credentials are ‘admin’ for both the username and password (as set in the values file above). Changing the password is optional; feel free to skip it.

Go to Dashboards > Create Dashboard > Import Dashboard.
Since we want to monitor the service mesh, we will import the Istio Workload Dashboard. Copy its ID and paste it accordingly. Your final configuration should look something like this.

As of now, we won't see any data since the pods are not communicating yet. We can simulate traffic simply by curling from the Happy pod to the Hello service. Remember the architecture above: applications don't talk to each other's pods directly. They go through Services, so to ping or curl another pod, we address its Service.
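
For context, hello-world-service is presumably a regular ClusterIP Service sitting in front of the Hello pods. A minimal sketch might look like the following (the selector label and container port are assumptions on my part; the service port matches the VirtualService destination we will define later):

apiVersion: v1
kind: Service
metadata:
  name: hello-world-service
  namespace: prod
spec:
  selector:
    app: hello-world        # assumed pod label; must match your Deployment
  ports:
  - port: 80                # the port other pods (and the VirtualService) talk to
    targetPort: 5000        # assumed Flask container port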

kubectl -n prod get pods

kubectl -n prod exec <happy_world_pod> -- curl hello-world-service

After 3–4 executions, you will start seeing the traffic in the Grafana dashboard: happy-world-app (outbound) or hello-world-app (inbound).

Remember our doubt about whether the pods communicate via the service mesh or kube-proxy? It's now obvious that the traffic is routed through the service mesh. How? The imported dashboard is built on Istio sidecar metrics, so the data itself says it all!

6. Install and Configure Ingress Service

As of now, everything is working pretty well. But how do we serve our application to users?

Isn't it convenient enough to serve it through a node IP and service port? Remember: ‘PODS ARE EPHEMERAL!’. Pod IPs and endpoints change whenever pods get deleted, restarted, or scaled.

For that reason, we will serve our application through a stable entry point. No matter what happens at the backend, users will always access the application through a single address, 127.0.0.1 in this case.

Create Istio-Gateway Service

To accept the ingress traffic from users, we will need a gateway. Let’s create one.

helm install istio-gateway istio/gateway -n istio-system

Check the status of the gateway service after creating it.

The service type shows LoadBalancer, but we don't have any load balancer since we are using a local kind cluster. If our cluster were cloud-based, a load balancer would be created automatically by the cloud provider. Moreover, the randomly assigned NodePorts differ from the ones we mapped when we created the cluster. So, let's patch this gateway service to match the cluster.
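
The istio-gw-config.yaml patch file isn't reproduced in the article, but as a rough sketch, based on the kind port mappings we chose earlier, it could look like this (the port names assume the defaults of the istio/gateway chart, so double-check them against your actual service):

spec:
  type: NodePort
  ports:
  - name: status-port
    port: 15021
    protocol: TCP
    nodePort: 32000
  - name: http2
    port: 80
    protocol: TCP
    nodePort: 30000
  - name: https
    port: 443
    protocol: TCP
    nodePort: 31000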

kubectl patch -n istio-system svc istio-gateway --patch-file istio-gw-config.yaml

Query the status again and voila, we get what we want now.

Create a Gateway Object and VirtualService

OK, we created a gateway service that will be the entry point for external traffic. How do we route that traffic to our destination pods? To define such routing rules, we need to create a custom Istio resource called a Gateway object. Let's apply it right now!

kubectl apply -f gateway.yaml -n prod
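
The gateway.yaml above isn't shown in full here; a minimal sketch of such a Gateway resource might be the following (the resource name is illustrative, and the selector must match the labels on your gateway pods, which you can check with kubectl get pods -n istio-system --show-labels):

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: hello-happy-gateway
  namespace: prod
spec:
  selector:
    istio: gateway          # assumed label on the istio-gateway pods; verify on your cluster
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"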

Since our pods are in the prod namespace, we deploy the Gateway object into the same namespace.
The Gateway object itself does not directly route traffic from the ingress gateway to the services; it relies on something called a VirtualService. In simple terms, we can see a VirtualService as a routing table where paths, headers, and other layer-7 routing rules are handled.

In this project, we want to access our apps based on the path used in the request. This is how we can define such a routing rule (the relevant http section of the VirtualService):

http:
- name: 'route-service-a'
  match:
  - uri:
      prefix: /hello
  rewrite:
    uri: "/"
  route:
  - destination:
      port:
        number: 80
      host: hello-world-service
- name: 'route-service-b'
  match:
  - uri:
      prefix: /happy
  rewrite:
    uri: "/"
  route:
  - destination:
      port:
        number: 81
      host: happy-world-service

Additionally, we rewrite the path to the root (/) because our applications serve at that path, not at ‘/hello’ as in the request.

Check the status of the freshly deployed components.

kubectl -n prod get gw,svc

Now, the applications are served at 127.0.0.1:80, and you can reach each one via the ‘/happy’ and ‘/hello’ paths. Moreover, I included a simple API in my Hello application that accepts traffic on ‘/api/hello’.

# In the Hello app (excerpt); app_hello is the Flask application instance
@app_hello.route('/api/hello', methods=['GET'])
def get_hello():
    return jsonify(message="Hello from the Hello microservice 2.0!")

We can retrieve (GET) that message from our Happy app via ‘/happy/api/hello’. Here's how it's done.

# In the Happy app (excerpt); it calls the Hello app through its Kubernetes Service name
@app_happy.route('/api/hello', methods=['GET'])
def get_happy():
    hello_response = requests.get('http://hello-world-service/api/hello')
    hello_message = hello_response.json().get('message', 'Error getting hello message')
    return jsonify(message=f"{hello_message} And Happy Final from the Happy microservice! Let's get stuff deployed")

Here's a sample of how it looks.

Sorry for the project being quite extensive :)) We have a pretty long way to go. Maybe take a short break and let’s finish this after!

7. Implement Observability

We already have Grafana for our service mesh. Why do we need another monitoring tool?

Because… we need a more specific view of the traffic flow in our service mesh. Grafana is a great tool, but it doesn't focus much on service mesh topology; it provides a broad view of the cluster workloads instead.

For such Istio-specific observability, Kiali fits perfectly. It helps engineers understand how services communicate and how traffic is routed between them.

Let's install the Kiali server in the istio-system namespace via Helm.

helm install kiali-server kiali-server --repo https://kiali.org/helm-charts --set auth.strategy="anonymous" --set external_services.prometheus.url="http://prometheus-server.monitoring" -n istio-system

We will skip authentication with the ‘anonymous’ strategy for simplicity.

Once the Kiali server is up and running, you can access it on port 8082 of localhost by forwarding its port.

kubectl port-forward svc/kiali 8082:20001 -n istio-system

As you can see, it provides insights into our service mesh. We can now easily identify which microservice is failing, how the traffic is routed, how the service mesh components are communicating, and much more. And that's why we need Kiali despite having Grafana ;))

As with the gateway routing we created earlier, the istio-gateway handles the external traffic and, based on the paths, routes it to the respective service. Moreover, the Happy app also communicates with the Hello service via the ‘/api/hello’ HTTP path.

You can explore Kiali further at your own convenience. The more you use it, the more you will see how powerful it is when it comes to service meshes.

8. Create Jenkins CI pipeline

Finally, the last part :))

So far, we have

  • created a cluster
  • deployed the microservices
  • implemented a service mesh
  • implemented GitOps for deployments

Now, we need something that builds and updates the artifacts seamlessly. We will use Jenkins for such CI/CD purposes.

Launch Jenkins Server

I already have a Terraform configuration that launches a Jenkins server on a ‘t2.micro’ instance type.

Let's create it! If you prefer the manual way, I have also provided the user data shell script.

terraform init

terraform validate

terraform plan

terraform apply

Access the Jenkins UI at <PUBLIC_IP>:8080. Set up the server per instructions and install the following plugins:

  • Docker Pipeline
  • CloudBees Docker Build and Publish

Configure for trigger and push

We will also need credentials stored in Jenkins: one to push our Docker image and one to update the manifests repository. Store your GitHub Personal Access Token as Secret text and your Docker credentials as Username with password.

For Jenkins to be triggered by push events on the GitHub repository that contains the application code, we need to create a webhook on that repository.

Go to your GitHub repository settings and create one with the following payload URL format.
JENKINS_URL/github-webhook/

It should show success if you follow the exact format.

If the format is correct but the request still fails, ensure that the Jenkins URL configured in Jenkins' settings matches the actual URL you use to access Jenkins.

Create a pipeline

New Item > Pipeline

  • Check the ‘GitHub hook trigger for GITScm polling’ option.
  • Specify the following details and ensure the path to the Jenkinsfile is correct.

Click on Save after all the details are filled correctly.

Test the pipeline

Now, try editing something simple and push your changes to GitHub.

git add -A

git commit -m "Test CI/CD"

git push origin main

The pipeline should now be triggered. Wait until the build finishes successfully.

P.S. My pipeline failed on the first try because I had a previous test commit with the exact same message :))

The pipeline will also edit our k8s manifests in the manifests repository and open a pull request. Approve the pull request and merge it to main as well.

Soon after, you will see the ArgoCD apps become OutOfSync and start pulling the updated manifests from the repository. The default polling interval is about 3 minutes, but you can also hard-refresh the apps.

Conclusion

Alright, thanks for reading and sticking with the project. We built a microservices architecture with CI/CD, GitOps, a service mesh, and observability. It was a long journey, but let's embrace the learning.

I'm a novice and still learning, so some parts may be technically off. Feel free to comment below with suggestions. I would really appreciate it!

The code and configurations for this project are available in my GitHub repositories. Edit the values and you can start recreating the project!
