Deploying Containerized Microservices on Azure with Kubernetes: A Step-by-Step Guide to Deploying Your Node.js App on Microsoft Azure

M Yasir Ghaffar
11 min read · Jun 24, 2024


INTRODUCTION

Welcome to your comprehensive guide on deploying microservices with Azure Kubernetes Service (AKS). Designed for developers, students, and tech enthusiasts, this guide will take you from the basics of your local development environment to the complexities of managing a robust microservices architecture on Azure.

In this guide, you will learn how to set up essential tools like Docker and the Azure CLI, containerize your applications, and integrate with Azure services like the Azure Container Registry (ACR) and AKS. We’ll explore best practices for deploying, scaling, and updating your microservices, as well as implementing security measures and monitoring to ensure your applications are secure and perform optimally.

By the end of this journey, you’ll have a solid understanding of microservices deployments on Azure and practical experience in deploying your applications to a live cloud environment. Let’s embark on this path together and equip you with the skills needed to thrive in cloud computing.

PREREQUISITES AND REQUIREMENTS (for Windows Users)

  • Operating System and Hardware Requirements: Ensure your PC runs Windows 10 64-bit (Pro, Enterprise, or Education) with Build 16299 or later. Your system should have a 64-bit processor with SLAT capabilities, at least 4GB of RAM, and hardware virtualization enabled in the BIOS settings.
  • Windows Subsystem for Linux (WSL2): Install WSL2 by following the official guidelines on the Microsoft website. This subsystem is essential for running Linux-based applications directly on Windows.
  • Docker Desktop for Windows: Enable WSL2 and install Docker Desktop by adhering to the instructions in the Docker official documentation. Docker will help you containerize and manage your applications.
  • Azure Portal and Azure CLI: Set up your Azure Portal account and install the Azure CLI to interact with Azure services directly from your command line.
  • Git: Install Git to manage your source code versions efficiently. This tool is vital for tracking changes and collaborating on software projects.

TRANSITION YOUR MONOLITHIC NODE.JS APPLICATION INTO MICROSERVICES

  • Define Service Boundaries: Identify logical divisions within your application based on business capabilities.
  • Modularize the Codebase: Break your monolithic app into modules that could potentially be converted into standalone services.
  • Isolate the Data Layer: Each service should own its data model and database to ensure loose coupling and independent scalability.
  • Develop and Test Services: Rebuild interactions through lightweight APIs and ensure robust testing for each service (see the sketch after this list).
  • Implement Continuous Deployment: Use tools like Jenkins or GitHub Actions to automate the building, testing, and deployment of each microservice.
  • Monitor Services Independently: Set up monitoring for individual services using tools like Prometheus and Grafana to keep track of their health and performance.
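
To make the "lightweight APIs" point concrete, here is a minimal sketch of what one extracted service might look like, assuming Express; the service name, route, and port are illustrative:

// orders-service/server.js - a minimal standalone microservice (illustrative)
const express = require('express');
const app = express();

app.use(express.json());

// Each service owns its own routes and, behind them, its own data store
app.get('/orders/:id', (req, res) => {
  res.json({ id: req.params.id, status: 'pending' });
});

// A health endpoint is useful later for Kubernetes liveness/readiness probes
app.get('/healthz', (req, res) => res.sendStatus(200));

const port = process.env.PORT || 3002;
app.listen(port, () => console.log(`orders service listening on ${port}`));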

CONTAINERIZE YOUR MICROSERVICES WITH DOCKER

Containers are lightweight, standalone packages that encapsulate everything needed to run a software application, including the code, runtime environment, libraries, and system settings. Created using Docker files, containers provide a consistent and isolated environment for applications, ensuring that they work uniformly across different computing environments.
Here’s how to containerize each of your microservices using Docker:

Step 1. Create a Dockerfile

The Dockerfile is a text document that contains the commands a user could call on the command line to assemble an image. For a Node.js application, the following sample Dockerfile is provided, along with comments that guide you through each step:

# Use an official Node.js runtime as a parent image
# (node:14 is shown here; prefer a current LTS tag for new projects)
FROM node:14

# Set the working directory inside the container.
# All later COPY and RUN instructions are relative to this path;
# adjust it if your project layout differs.
WORKDIR /usr/src/app

# Copy package.json and package-lock.json for npm install
# This is done before copying your application to cache
# the npm install step separately
COPY package*.json ./

# Install dependencies
# If building your code for production, you can use npm ci instead
# Make sure your package.json file has a start script
# i.e. "start": "node server.js"
RUN npm install

# Copy the rest of your application's code
COPY . .

# Document that the application listens on port 3000.
# EXPOSE does not publish the port by itself; map it with -p at runtime.
EXPOSE 3000

# Set environment variables. Hard-coding values like this is NOT recommended.
# For sensitive data, use Docker secrets or pass them securely at runtime.
ENV NODE_ENV=production API_KEY=your_api_key ANOTHER_KEY=another_key

# Define the command to run your app using CMD
# Here we use the npm start script
CMD ["npm", "start"]
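
Since COPY . . copies everything in the build context into the image, it is worth adding a .dockerignore file next to the Dockerfile so local artifacts such as node_modules and .env files stay out of the image. A minimal example to adapt to your project:

node_modules
npm-debug.log
.git
.env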

Step 2. Building and Running the Containers Locally

Once you have a Dockerfile in each service directory, you can build the Docker images by running this for each service:

docker build -t yourservicename .

The -t flag tags the image as yourservicename so you can identify it later.
Now run them:

docker run -p 3000:3000 -d yourservicename

-p 3000:3000 maps port 3000 on the host (left) to port 3000 in the container (right), so you can access the app at http://localhost:3000 on your machine. The -d flag runs the container in detached mode.
This confirms that your container images build and run locally. You can view the status of all built images and running containers in Docker Desktop.
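
You can also check from the command line: docker images lists the images you have built, and docker ps lists the containers currently running.

docker images

docker ps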

Next, we'll cover the steps needed to push your locally built Docker images to ACR, and explain why these steps are critical for deploying microservices on Azure.

AZURE CONTAINER REGISTRY (ACR)

Azure Container Registry is a managed Docker registry service based on the open-source Docker Registry 2.0. It provides a secure, scalable, and reliable container registry for storing and managing private Docker container images.
Pushing local container images to Azure Container Registry (ACR) centralizes and secures your deployment process. ACR serves as a central repository that simplifies image management, supports version control for easy rollback, and integrates with CI/CD pipelines for automated deployments. This setup enhances security with fine-grained access control and automatic vulnerability scanning. By using ACR, you ensure that deployments are efficient and scalable, pulling images quickly from a reliable, centralized location, rather than directly from individual development machines. This method is essential for maintaining consistent, secure, and swift application deployments across various environments.

Step 1: Setting Up ACR

Install the Azure CLI and log in with your Microsoft Azure account.

az login

Step 2: Create a Resource Group:

Resource groups in Azure are collections that hold related resources for an Azure solution, allowing you to manage and organize them as a single unit. If you don't have a resource group, create one with:

az group create --name myResourceGroup --location eastus
  • Replace myResourceGroup and eastus with your preferred resource group name and location.

Step 3: Create an ACR Instance:

az acr create --resource-group myResourceGroup --name MyACR --sku Basic --admin-enabled true

Replace MyACR with a globally unique name for your registry. The --admin-enabled true flag enables admin access to the registry. Adjust these parameters accordingly.

After creating your ACR, log in to it.

az acr login --name MyACR

Step 4: Pushing Images to ACR

Push each of your local images to ACR.
Before pushing, tag each Docker image with your ACR login server name. Note that the login server is always lowercase (for example myacr.azurecr.io), and Docker requires lowercase repository names:

docker tag my-nodejs-app myacr.azurecr.io/my-nodejs-app:v1
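
If you are unsure of the exact login server name, you can query it (Azure reports it in lowercase):

az acr show --name MyACR --query loginServer --output tsv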

Then push the tagged images of your containers:

docker push myacr.azurecr.io/my-nodejs-app:v1
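
To confirm the image is now in the registry, list its repositories:

az acr repository list --name MyACR --output table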

AZURE KUBERNETES SERVICE (AKS)

Azure Kubernetes Service (AKS) is a managed Kubernetes service that simplifies the deployment, management, and scaling of containerized applications using Kubernetes. It provides an integrated environment for running and scaling applications from container images stored in Azure Container Registry (ACR). With AKS, you can automate container operations, easily scale applications, and manage the complexities of container orchestration, making it an essential tool for deploying modern applications efficiently and reliably on the cloud.
Here’s how to create an AKS cluster:

Step 1: Creating an AKS Cluster

Create the cluster in your resource group and attach it to your ACR:

az aks create --resource-group myResourceGroup --name MyAKSCluster --node-count 3 --generate-ssh-keys --attach-acr MyACR

Make sure you attach the cluster to the correct resource group and ACR; otherwise you may see image-pull errors (such as ImagePullBackOff) in later steps.
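
If you created the cluster earlier without --attach-acr, you can grant it pull access to the registry afterwards:

az aks update --resource-group myResourceGroup --name MyAKSCluster --attach-acr MyACR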

Step 2: Configure kubectl to Use Your AKS Cluster

kubectl is a command-line tool that allows you to run commands against Kubernetes clusters.

az aks get-credentials --resource-group myResourceGroup --name MyAKSCluster
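
To verify that kubectl now points at your AKS cluster, list its nodes; you should see the three nodes created above in a Ready state:

kubectl get nodes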

CREATING KUBERNETES MANIFESTS

To deploy your microservices on AKS, you will create Kubernetes manifest files. These files describe your application’s structure — like which images to use, the desired number of container replicas, network settings, and more.

Step 1: Create Deployment Manifests for each service:

A deployment manifest tells Kubernetes how to create and update instances of your application. Here is a basic example for your Node.js application (example-deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nodejs-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-nodejs-app
  template:
    metadata:
      labels:
        app: my-nodejs-app
    spec:
      containers:
      - name: my-nodejs-app
        image: myacr.azurecr.io/my-nodejs-app:v1
        ports:
        - containerPort: 3000

Explanation:

  • apiVersion: Specifies the API version of Kubernetes used.
  • kind: Specifies the kind of object you want to create, in this case, a Deployment.
  • metadata: Data that helps uniquely identify the object, including a name string.
  • spec: Specific information about how the deployment is configured.
  • replicas: Specifies the desired number of instances.
  • selector: Defines how the deployment finds the pods it manages.
  • template: Describes the pods that will be created.
  • image: Specifies the Docker image to use for the pod.
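
One addition worth making early is resource requests and limits, which tell the Kubernetes scheduler how much CPU and memory each container needs and cap what it may consume. A sketch extending the containers section of the deployment above; the numbers are placeholders to tune for your app:

      containers:
      - name: my-nodejs-app
        image: myacr.azurecr.io/my-nodejs-app:v1
        ports:
        - containerPort: 3000
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
          limits:
            cpu: "500m"
            memory: "256Mi"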

Step 2: Create Service Manifests for each service

A service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which to access them — sometimes called a micro-service.

apiVersion: v1
kind: Service
metadata:
  name: my-nodejs-app-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 3000
  selector:
    app: my-nodejs-app

Explanation:

  • type: LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer.
  • port: The port that the service exposes (outside the cluster).
  • targetPort: The port on the container to route to.
  • selector: Connects the service to specific deployments.

A few things to keep in mind while setting up your deployments and services (then continue to Step 3):

Setting up Internal vs. External Ports:

Ensure that the port (external) and targetPort (internal) in your service manifest are set correctly. targetPort is the port the container listens on, while port is the port the Kubernetes Service exposes.
Example: if your Node.js app listens on port 3000 and you want to expose it externally on port 80:

apiVersion: v1
kind: Service
metadata:
  name: nodejs-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 3000
  selector:
    app: nodejs-app

Choosing the Right Service Type

  • LoadBalancer vs. ClusterIP: Use LoadBalancer for services that need to be accessible from outside the cluster. For internal communication, use ClusterIP.
  • LoadBalancer: Automatically creates an external IP to interact with the service.
  • ClusterIP: Exposes the service on a cluster-internal IP, making it only reachable from within the cluster.
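
For example, an internal-only service for a hypothetical orders microservice could look like this (ClusterIP is the default type, spelled out here for clarity):

apiVersion: v1
kind: Service
metadata:
  name: orders-service
spec:
  type: ClusterIP
  ports:
  - port: 3002
    targetPort: 3002
  selector:
    app: orders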

Environment Variables and Service Discovery

When one service in your cluster needs to communicate with another, use Kubernetes DNS for service discovery rather than hardcoding IPs.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:1.0
        env:
        - name: OTHER_SERVICE_URL
          value: "http://otherservice:3002"

Kubernetes cluster DNS resolves the hostname otherservice to the ClusterIP Service of that name in the same namespace, so no IP address needs to be hardcoded.

Step 3: Apply your deployments and services:

Apply the deployment:

kubectl apply -f example-deployment.yaml

Apply the service:

kubectl apply -f service.yaml

Your services are now deployed and running.
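
To confirm the rollout finished and to find the external IP assigned by the load balancer (it may show as <pending> for a minute or two while Azure provisions it), run:

kubectl rollout status deployment/my-nodejs-app

kubectl get service my-nodejs-app-service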

MONITORING AND LOGGING:

You may want to check a few things to confirm that your application is healthy.
Here are some useful commands:

View pods:

kubectl get pods

View more pod details:

kubectl describe pods

View All Services:

kubectl get services

Monitoring with Kubernetes Dashboard

Access the Kubernetes dashboard (deploy it first if it is not already set up):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

Start the proxy and access the dashboard via the following URL. You might need to create a token or use a kubeconfig file for authentication.

kubectl proxy
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
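
One common way to obtain a login token, adapted from the dashboard's sample docs, is to create an admin service account and issue a token for it. This requires kubectl 1.24 or newer for the create token subcommand; cluster-admin is used here for brevity, so scope the role down in real clusters:

kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin

kubectl -n kubernetes-dashboard create token dashboard-admin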

View Logs for a Specific Pod:

kubectl logs <pod-name>

Streaming Logs:

kubectl logs -f <pod-name>
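
If a pod runs more than one container, or has restarted, these variants are useful:

kubectl logs <pod-name> -c <container-name>

kubectl logs --previous <pod-name>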

CONCLUSION:

In this guide, we’ve explored the shift from monolithic to microservices architecture using Docker, Azure Container Registry (ACR), and Azure Kubernetes Service (AKS). Starting with setting up essential tools like Docker, Azure CLI, and kubectl on a Windows environment, we containerized a Node.js application, uploaded the Docker images to ACR, and deployed them as microservices on AKS.

We focused on best practices for Kubernetes deployments and services, discussing how to use LoadBalancer for external access and ClusterIP for internal communications within the cluster.

By the end of this guide, we’ve built a scalable, flexible system capable of meeting modern software demands. This setup not only streamlines development and scaling but also enhances reliability and reduces system downtime, providing a robust foundation for managing enterprise applications on Azure.

WHAT TO DO NEXT?

Here are some next steps to consider that can enhance your system, expand your skills, and optimize the application lifecycle management:

1. Continuous Integration/Continuous Deployment (CI/CD)

  • Automate Your Workflow: Set up CI/CD pipelines to automate testing, building, and deploying your applications. Tools like Jenkins, Azure DevOps, GitHub Actions, or GitLab CI can help streamline these processes.
  • Deploy Updates Automatically: Ensure that changes to your codebase in source control trigger automated workflows that test, build, and deploy the updates to your live environment without manual intervention.
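
As a concrete starting point, here is a minimal GitHub Actions workflow sketch that builds the image, pushes it to ACR, and rolls it out on AKS. It assumes a service principal's credentials are stored in an AZURE_CREDENTIALS repository secret and reuses the resource names from earlier in this guide; treat it as a skeleton, not a production pipeline:

name: deploy
on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Authenticate to Azure with the service principal stored as a secret
      - uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      # Build the image, tag it with the commit SHA, and push it to ACR
      - run: |
          az acr login --name MyACR
          docker build -t myacr.azurecr.io/my-nodejs-app:${{ github.sha }} .
          docker push myacr.azurecr.io/my-nodejs-app:${{ github.sha }}

      # Point kubectl at the cluster and roll the deployment to the new tag
      - run: |
          az aks get-credentials --resource-group myResourceGroup --name MyAKSCluster
          kubectl set image deployment/my-nodejs-app my-nodejs-app=myacr.azurecr.io/my-nodejs-app:${{ github.sha }}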

2. Advanced Kubernetes Features

  • Explore Kubernetes Operators: Operators are software extensions to Kubernetes that make use of custom resources to manage applications and their components. Operators can automate complex tasks such as managing stateful applications like databases or handling cluster configurations.
  • Implement Service Meshes: Consider using a service mesh like Istio or Linkerd to manage service-to-service communications, observability, traffic management, and security within your cluster.

3. Security Enhancements

  • Implement Stronger Security Practices: This could include setting up network policies to control pod communication, using Role-Based Access Control (RBAC) for resource authorization, and integrating more robust security scanning tools.
  • Secrets Management: Use Kubernetes Secrets or integrate with external secrets management tools like HashiCorp Vault to manage and inject secrets securely into your applications.
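
As a minimal sketch of the Kubernetes-native approach, create a secret from the command line and reference it from a container's env instead of baking the value into the image; the names api-credentials and API_KEY are illustrative. This replaces the insecure ENV API_KEY=... line shown in the Dockerfile earlier:

kubectl create secret generic api-credentials --from-literal=API_KEY=your_api_key

Then, in the container spec of your deployment:

        env:
        - name: API_KEY
          valueFrom:
            secretKeyRef:
              name: api-credentials
              key: API_KEY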

4. Cost Management and Optimization

  • Monitor Resource Usage and Costs: Tools like Azure Cost Management can help you monitor and control Azure spending. Kubernetes resource quotas and limits can help manage the resource usage.
  • Implement Autoscaling: Beyond horizontal pod autoscaling, consider cluster autoscaling to dynamically adjust the number of nodes in your cluster based on the needs.
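
For example, a horizontal pod autoscaler for the deployment used throughout this guide can be created with one command. AKS includes the metrics server this depends on; the thresholds below are placeholders, and the deployment's containers must declare CPU requests for the percentage target to work:

kubectl autoscale deployment my-nodejs-app --cpu-percent=70 --min=2 --max=10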

5. Performance Tuning and Optimization

  • Fine-Tune Resource Allocation: Adjust the CPU and memory requests and limits based on the actual usage patterns of your applications to optimize resource utilization and application performance.
  • Load Testing: Regularly perform load testing to understand how your applications behave under stress and identify bottlenecks.

6. Disaster Recovery Planning

  • Back Up Cluster Data: Ensure that your cluster data, including databases and persistent volumes, is backed up regularly. Consider using tools like Velero for backup and recovery.
  • Create a Disaster Recovery Plan: Document and periodically test recovery procedures to ensure you can quickly recover your Kubernetes environment and applications in case of a disaster.

7. Learning and Development

  • Stay Updated with Kubernetes and Azure Developments: The landscape of cloud-native technologies evolves rapidly. Keep learning about new features and best practices by following the Kubernetes and Azure communities.
  • Experiment with New Technologies: Use separate development or staging environments to experiment with new features, tools, or architectural changes without affecting your production environment.

8. Documentation and Knowledge Sharing

  • Document Your Architecture and Processes: Ensure that your team and new members can easily understand your setup and operations by maintaining detailed and up-to-date documentation.
  • Share Knowledge: Consider presenting your learnings and experiences in blogs, talks, or internal training sessions to help others and foster a culture of knowledge sharing.
