From Microservices to the Cloud: Orchestrating and Automating Deployments with Kubernetes

Armaghan Shakir
6 min read · Jan 4, 2024


In this article, I share my journey of developing the microservices for my application, orchestrating them on Azure Kubernetes Service (AKS), and automating their deployment with GitHub Actions.

GitHub Repository: geetu040/pixa

1. Storage Units

The following storage units were created in the Azure resource group:

User Database

Image Storage

Image Storage Logs

  • Azure Service: Azure Storage Account Blob Service
  • Used by Microservice: usage-monitor-service
  • Image Storage generates a log on every event; these logs are automatically stored here via Azure Diagnostic settings

Application Logs

  • Azure Service: Azure Storage Account File Service
  • Used by Microservice: all
  • This is the persistent storage that is mounted in each container in the pod to store application logs

2. Developing Microservices

I created 5 microservices in Python using FastAPI:

  • Each microservice has a specific role, is independent of the other microservices, and runs on a different port.
  • Each service is connected to a storage unit, except for the controller service.
  • controller-service serves the HTML pages to the client and also works as a back end that processes client requests. It does not communicate with any storage unit; instead, it forwards each client request to the appropriate service.
  • Logs for each microservice are maintained in a mounted storage unit, so any crash or failure is backed by a detailed error log.
  • JWT authentication is used in controller-service to maintain user sessions.
  • Threading is used for parallelism where the controller needs to communicate with multiple services.
  • Every endpoint in each microservice returns a JSONResponse.
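The JWT session handling can be sketched with a minimal HMAC-signed token. This is only an illustration of the idea; the secret value and claim names here are placeholders, and the real controller-service would typically use a JWT library rather than hand-rolling the signing:

```python
import base64
import hashlib
import hmac
import json
import time
from typing import Optional

# Illustrative secret; the real value comes from the JWT token secrets in the ConfigMap.
SECRET = b"example-jwt-secret"

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def create_token(username: str, ttl_seconds: int = 3600) -> str:
    """Sign a JWT-style token: header.payload.signature (HS256)."""
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps({"sub": username, "exp": time.time() + ttl_seconds}).encode())
    signature = _b64(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{signature}"

def verify_token(token: str) -> Optional[dict]:
    """Return the payload if the signature is valid and the token unexpired, else None."""
    try:
        header, payload, signature = token.split(".")
    except ValueError:
        return None
    expected = _b64(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(signature, expected):
        return None
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    return claims if claims["exp"] > time.time() else None
```

The controller attaches the token to the user's session (e.g. as a cookie) and verifies it on each request before forwarding to the other services.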
microservices
├── auth-service: port 5000
│ ├── config.py
│ ├── Dockerfile
│ ├── main.py
│ ├── README.md
│ ├── requirements.txt
│ └── service.py
├── controller-service: port 80
│ ├── config.py
│ ├── Dockerfile
│ ├── main.py
│ ├── README.md
│ ├── requirements.txt
│ ├── service.py
│ ├── static
│ └── utils.py
├── storage-account-service: port 5001
│ ├── config.py
│ ├── Dockerfile
│ ├── main.py
│ ├── README.md
│ ├── requirements.txt
│ └── service.py
├── storage-monitor-service: port 8000
│ ├── config.py
│ ├── Dockerfile
│ ├── main.py
│ ├── README.md
│ ├── requirements.txt
│ └── service.py
├── usage-monitor-service: port 8001
│ ├── config.py
│ ├── Dockerfile
│ ├── main.py
│ ├── README.md
│ ├── requirements.txt
│ └── service.py
└── README.md
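The threaded fan-out in the controller can be sketched as follows: query several services in parallel and merge their replies into one JSON response. The fetcher here is a stub standing in for HTTP calls to the other services (e.g. localhost:8000 and localhost:8001); the real endpoints wrap the result in FastAPI's JSONResponse:

```python
import json
from concurrent.futures import ThreadPoolExecutor

# Placeholder fetcher; in the real controller this is an HTTP call to a
# sibling container on localhost (see the per-service ports above).
def fetch(service_url: str) -> dict:
    return {"service": service_url, "status": "ok"}

def dashboard(urls: list) -> str:
    """Call all services in parallel and return one merged JSON document."""
    with ThreadPoolExecutor(max_workers=len(urls)) as pool:
        results = list(pool.map(fetch, urls))
    return json.dumps({"services": results})
```

Because the containers share a pod, these calls go over localhost rather than through the cluster network.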

3. Pushing Images to Github Container Registry

The next step is to build a Docker image for each microservice and push it to a remote container registry; in this case, the GitHub Container Registry (GHCR).

docker build -t usage-monitor-service:latest .
docker tag usage-monitor-service:latest ghcr.io/geetu040/usage-monitor-service:latest
docker push ghcr.io/geetu040/usage-monitor-service:latest

4. Kubernetes

The following manifest files configure the complete Kubernetes architecture on Azure:

manifests
├── configmap.yaml
├── deployment.yaml
├── persistentvolumeclaim.yaml
├── service.yaml
└── storageclass.yaml

1. configmap.yaml

  • It contains all the environment variables and secrets for:
  • Azure Database for MySQL flexible server
  • Azure Storage Account File Service
  • Azure Storage Account Blob Service
  • JWT Token Secrets
  • These config values are exposed to the containers as environment variables
kind: ConfigMap
data:
  PIX_DB_HOST: 'your-db-host-placeholder'
  PIX_DB_DATABASE: 'your-db-name-placeholder'
  PIX_DB_USER: 'your-db-user-placeholder'
  PIX_DB_PASSWORD: 'your-db-password-placeholder'
...
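Inside a container, a service can then read these ConfigMap-provided values straight from the environment. A minimal sketch (the variable names follow the snippet above; the fallback defaults are placeholders for local development):

```python
import os

def load_db_config() -> dict:
    """Read database settings injected by the ConfigMap as environment variables."""
    return {
        "host": os.environ.get("PIX_DB_HOST", "localhost"),
        "database": os.environ.get("PIX_DB_DATABASE", "pixa"),
        "user": os.environ.get("PIX_DB_USER", "root"),
        "password": os.environ.get("PIX_DB_PASSWORD", ""),
    }
```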

2. storageclass.yaml

  • It creates a share in Azure Storage Account File Service, providing a persistent volume
  • This volume is accessible from Azure Portal and can be mounted in containers and VMs
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pixa-sc
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000
  - gid=1000
  - mfsymlinks
  - cache=strict

3. persistentvolumeclaim.yaml

  • This creates a claim on the persistent storage provisioned through the StorageClass
  • This claim is referenced in deployments to mount the underlying storage into containers
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pixa-pvc
spec:
  accessModes:
    - ReadWriteMany # ReadWriteOnce, ReadOnlyMany or ReadWriteMany
  # storageClassName: hostpath
  storageClassName: pixa-sc
  resources:
    requests:
      storage: 1Gi

4. deployment.yaml

  • It creates 3 replicas of the pod, providing a backup if one pod fails and enabling load balancing
  • Each pod runs 5 containers, which listen on different ports and communicate over localhost

Pod Definition

  • This part of the manifest defines the containers whose images are pulled from the GitHub Container Registry into the pod
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: pixa-controller-service
          image: ghcr.io/geetu040/pixa-controller-service
          ports:
            - containerPort: 80
          ...
        - name: pixa-auth-service
          image: ghcr.io/geetu040/pixa-auth-service
          ports:
            - containerPort: 5000
          ...
        - name: pixa-storage-account-service
          image: ghcr.io/geetu040/pixa-storage-account-service
          ports:
            - containerPort: 5001
          ...
        - name: pixa-storage-monitor-service
          image: ghcr.io/geetu040/pixa-storage-monitor-service
          ports:
            - containerPort: 8000
          ...
        - name: pixa-usage-monitor-service
          image: ghcr.io/geetu040/pixa-usage-monitor-service
          ports:
            - containerPort: 8001
          ...

Rollback

  • If one replica fails, the 2 other replicas keep processing the incoming traffic while the failed replica restarts.
apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  replicas: 3

Using ConfigMap

kind: Deployment
...
containers:
  - name: pixa-controller-service
    image: ghcr.io/geetu040/pixa-controller-service
    envFrom:
      - configMapRef:
          name: pixa-config

Mounting Persistent Volume for Logs

kind: Deployment
...
volumes:
  - name: volume
    persistentVolumeClaim:
      claimName: pixa-pvc
containers:
  - name: pixa-controller-service
    image: ghcr.io/geetu040/pixa-controller-service
    ...
    volumeMounts:
      - mountPath: "/mnt"
        name: volume
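With the volume mounted at /mnt, each service can append its log files there. A sketch of the logging setup (the PIXA_LOG_DIR variable is a hypothetical override so the same code works outside the cluster; by default it writes to the mount path above):

```python
import logging
import os

def make_logger(service_name: str) -> logging.Logger:
    """Write service logs to the mounted persistent volume (default /mnt)."""
    log_dir = os.environ.get("PIXA_LOG_DIR", "/mnt")
    logger = logging.getLogger(service_name)
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(os.path.join(log_dir, f"{service_name}.log"))
    handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger
```

Because the volume is backed by Azure Files, these logs survive pod restarts and are also browsable from the Azure Portal.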

5. service.yaml

  • It maps the node’s public IP address to the deployment
  • It balances load between the replicas of the deployment
apiVersion: v1
kind: Service
metadata:
  name: pixa-service
spec:
  selector:
    app: pixa-deploy
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

5. Github Actions: CI/CD

Everything, from building the Docker images to pushing them to the GitHub Container Registry to deploying them on Azure Kubernetes Service, is automated using GitHub Actions. The process involves the following steps.

Push Images to GHCR

  1. Save GitHub credentials in repository secrets
  2. Log in to the GitHub Container Registry
  3. Build the Docker images
  4. Push the Docker images to the GitHub Container Registry
github-container-registry:
  runs-on: ubuntu-latest
  steps:
    - name: Checkout Repository
      uses: actions/checkout@v2
    - name: Login to GitHub Container Registry
      run: echo "${{ secrets.TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
    - name: pixa-auth-service
      run: |
        docker build -t ghcr.io/geetu040/pixa-auth-service:latest microservices/auth-service/
        docker push ghcr.io/geetu040/pixa-auth-service:latest
...

Deploy to AKS

  1. Save Azure credentials in repository secrets
  2. Set up kubelogin for non-interactive login
  3. Get the K8s context
  4. Deploy the manifests
azure-kubernetes-services:
  runs-on: ubuntu-latest
  needs: [github-container-registry]
  steps:
    - uses: actions/checkout@v3
    - name: Azure login
      uses: azure/login@v1.4.6
      with:
        creds: '${{ secrets.AZURE_CREDENTIALS }}'
    - name: Set up kubelogin for non-interactive login
      uses: azure/use-kubelogin@v1
      with:
        kubelogin-version: "v0.0.25"
    - name: Get K8s context
      uses: azure/aks-set-context@v3
      with:
        resource-group: pixa-resource
        cluster-name: pixa-cluster
        admin: "false"
        use-kubelogin: "true"
    - name: Deploys application
      uses: Azure/k8s-deploy@v4
      with:
        action: deploy
        manifests: |
          manifests/storageclass.yaml
          manifests/service.yaml
          manifests/persistentvolumeclaim.yaml
          manifests/deployment.yaml

6. Application Demo

The user interface and routing are very basic, as the project focuses more on cloud computing and continuous deployment, but here is a quick demo of what the final application looks like:

  • Login page
  • Dashboard with storage usage, bandwidth usage, and options to list, upload, and delete images.

7. Load Testing

I used Postman to load test my application.

Configuration

  • Virtual Users: 100
  • Duration: 5 minutes
  • Ramp Up: 3 minutes
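Postman drives this scenario from its UI, but the same pattern can be sketched in Python with a thread pool. The request function below is a stub standing in for an HTTP call to the deployed service's public endpoint:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_load_test(request_fn, virtual_users: int, total_requests: int) -> dict:
    """Fire total_requests requests across virtual_users threads and collect stats."""
    results = []
    def worker(_):
        start = time.perf_counter()
        ok = request_fn()  # stand-in for e.g. an HTTP GET against the service
        results.append((time.perf_counter() - start, ok))
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        list(pool.map(worker, range(total_requests)))
    times = [t for t, _ in results]
    return {
        "requests": len(results),
        "errors": sum(1 for _, ok in results if not ok),
        "avg_latency_s": sum(times) / len(times),
    }
```

A dedicated tool (Postman, Locust, k6) adds the ramp-up schedule and richer reporting on top of this basic fan-out.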

Results

8. Links

GitHub Repository: geetu040/pixa

Azure Documentation

Youtube freeCodeCamp: Docker Containers and Kubernetes Fundamentals
