K8s pt.3 / Deployment setup (AKS)

Before you start reading this article and guide, I'd like to outline the technical preparation and setup you'll need first.

https://rafay.co/wp-content/uploads/2019/11/k8saks1-1.jpg

Please be aware that additional charges may be incurred while preparing the basic setup. All of the deployment steps can be followed with a basic Azure account, but using the portal services will add costs.

The fundamental concepts of Kubernetes remain consistent across cloud platforms, so if you simply want to grasp the structure or the overall idea, this article should still give you a clear understanding.

Blueprint

Platform : Azure Portal
What we'll use : Istio (instead of Nginx), Azure Container Registry (ACR), Azure Kubernetes Service (AKS), and a domain provider
Process : FE / BE Application -> Containerize w/ Docker -> push image to Registry (ACR) -> pull Image from ACR -> K8s Deployment with YAML -> DNS Setup

Prep.1 — Application

The first thing to check is the application itself. We need to make sure it builds and containerizes without problems.

1. Containerize your Application w/ Docker

Whether it serves as the frontend or the backend, we need a containerized application. Place the Dockerfile in your application's root directory and build it first to confirm that containerizing the application works.

It doesn't matter whether you're using Node.js, Next.js, or any other kind of application; just check that your app has been successfully containerized with Docker (we are not focusing on the application side this time). The Dockerfile below is just an example.

Dockerfile (example — Node.js)

# Stage 1: Build the application
# Filename : Dockerfile

FROM node:18 as build
WORKDIR /var/app
COPY package.json package-lock.json ./

# use --force command if there's any problem while installing pkgs.
RUN npm install -g pm2 --force
RUN npm ci
COPY . .
RUN npm run build
COPY ecosystem/development/ecosystem.config.js ./ecosystem/development/

# Stage 2: Create the final image with only necessary files
FROM node:18-alpine
WORKDIR /var/app
COPY --from=build /var/app/package.json /var/app/package-lock.json ./
RUN npm install -g pm2 && npm ci --force
COPY --from=build /var/app/dist ./dist
# Copy the ecosystem.config.js from the build stage
COPY --from=build /var/app/ecosystem/development/ecosystem.config.js ./ecosystem/development/

EXPOSE 4040
CMD ["pm2-runtime", "start", "ecosystem/development/ecosystem.config.js", "--env"]

Once you've successfully containerized the application, make sure it handles client requests correctly. You can do a quick check by browsing to localhost:3000 or localhost:4040 (whichever port you exposed), just like running the app locally. Easy, right?
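If you prefer the command line, a quick sketch of that check could look like this (the image name and port are just the example values from the Dockerfile above; swap in your own):

# Build the image from the Dockerfile in your project root
sudo docker build -t liebertar-backend:latest .

# Run the container and map the exposed port (4040 in the example Dockerfile) to your machine
sudo docker run --rm -p 4040:4040 liebertar-backend:latest

# From another terminal or a browser, hit the app
curl http://localhost:4040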

Prep.2 — Set up for Azure

If your application side is ready (you can use any sample application), let's go into the basic setup steps for Azure.

1. Install Azure CLI

Since we need to access Azure from our local machine (your computer), we need to install the Azure CLI first. Go to the link below, follow the steps, and try logging into Azure from your local machine.

https://learn.microsoft.com/ko-kr/cli/azure/install-azure-cli

#after installing it, try below.
az login
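
Once the login succeeds, you can check which subscription the CLI is using and, if needed, switch to another one (the subscription name below is a placeholder):

# Show the subscription currently in use
az account show --output table

# Optional: switch to a different subscription
az account set --subscription "<yourSubscriptionNameOrId>"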

2. Create Azure Container Registry (ACR)

After installing the CLI and logging into Azure, the first thing to do is create a resource group for each service.

I hope the term ‘Service’ doesn’t sound too complex.

In most applications, the app itself is not divided into individual services. What you've worked on probably follows a similar pattern, since we've only just started learning about K8s.

Splitting the application by service only makes sense when it's already structured for expanding in many directions. But keep in mind that this is the ideal approach for managing Kubernetes, and it's worth envisioning your application this way for future scalability.

Sorry for the small talk. Let’s continue.

Before we create the ACR, we need to create a resource group; the ACR will live inside that resource group.

I would recommend grouping all the necessary resources under one resource group. If your application is not divided into individual services, just create one resource group for now. Splitting resources across multiple resource groups, even for a single service, may lead to challenges in communication between VM resources and might require additional steps to set up gateways or privacy settings.

# pick whichever location you prefer, e.g. eastus or westeurope
az group create --name yourResourceGroupName --location eastus

# Refer to the following page
https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-cli
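
To confirm the resource group was created (the name is the placeholder from the command above):

az group show --name yourResourceGroupName --output table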

To create the Azure Container Registry:

az acr create --resource-group myResourceGroupName --name myContainerregistryName --sku Basic

If you have successfully created the registry, you should receive the following response from the Azure CLI:

{
  "adminUserEnabled": false,
  "creationDate": "2019-01-08T22:32:13.175925+00:00",
  "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.ContainerRegistry/registries/mycontainerregistry",
  "location": "eastus",
  "loginServer": "mycontainerregistry.azurecr.io",
  "name": "mycontainerregistry",
  "provisioningState": "Succeeded",
  "resourceGroup": "myResourceGroup",
  "sku": {
    "name": "Basic",
    "tier": "Basic"
  },
  "status": null,
  "storageAccount": null,
  "tags": {},
  "type": "Microsoft.ContainerRegistry/registries"
}
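
You can also look the registry up afterwards and grab its login server name, which you'll need when tagging images in the next step (the registry name is the placeholder from the create command):

# Print the registry's login server, e.g. mycontainerregistry.azurecr.io
az acr show --name myContainerregistryName --query loginServer --output tsv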

3. Push Image to ACR

We’ve created a Resource Group and Container Registry. The next step is to verify if everything is working correctly.

Now, let's push the application image (the image we already built with Docker in Prep.1) to the Azure Container Registry.

sudo docker tag <sampleAppName>:<tag> <yourRegistryName>.azurecr.io/<sampleAppName>:<tag>
sudo docker push <yourRegistryName>.azurecr.io/<sampleAppName>:<tag>

# Example:

sudo docker tag liebertar-backend:latest liebertarregistry.azurecr.io/liebertar-backend:latest
sudo docker push liebertarregistry.azurecr.io/liebertar-backend:latest

If you encounter an “Auth Failed” response from the CLI, it means the Docker login to the Azure Container Registry (ACR) failed. This usually happens because Docker needs to authenticate against the registry before it can push.

In that case, try the command below, using the ACR access key (admin password). You can create or find it under the registry resource in the Azure portal.

echo "your-private-key-to-access-acr" | docker login <yourRegistry>.azurecr.io -u <yourRegistryName> --password-stdin

# Example:

echo "r28Vep+KfJiX1TgdfsdfasdfZffFbOd/LLcK+cPO/4fdfau5FC" | docker login libertarregistry.azurecr.io -u liebertarregistry --password-stdin

--password-stdin : this flag tells Docker to read the password from stdin (standard input), which is why we used the echo command to pipe it in.
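
As an alternative, since you're already logged in with az login, you can let the Azure CLI handle the Docker credentials for you. Either way, once the push succeeds you can confirm the image is in the registry (the names below are the example values from above):

# Log Docker into the registry via the Azure CLI
az acr login --name liebertarregistry

# List repositories and tags to confirm the pushed image is there
az acr repository list --name liebertarregistry --output table
az acr repository show-tags --name liebertarregistry --repository liebertar-backend --output table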

4. Create Cluster

Now, let’s create a cluster using AKS. I suggest configuring it with three nodes, as shown in the command below.

az aks create --resource-group yourResourceGroup --name yourAKSClusterName --enable-managed-identity --node-count 3 --generate-ssh-keys

The first node will handle the front-end (FE) application and Istio settings. The second node is designated for the default MongoDB pod and the back-end (BE) application. Lastly, the third node will be dedicated to running a MongoDB backup pod.

[Diagram: the node structure we'll build, edited by Liebertar]

This backup pod will sync with the default MongoDB pod. We’ll address additional backup strategies, such as using cronjobs and snapshots, later on.
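
Once the cluster is up, pull its credentials to your local machine and confirm the three nodes are ready (the group and cluster names are the placeholders from the create command above):

# Merge the cluster credentials into your local kubeconfig
az aks get-credentials --resource-group yourResourceGroup --name yourAKSClusterName

# You should see three nodes in the Ready state
kubectl get nodes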

For now, let’s set up Node #1.

5. Connecting Cluster to Registry: Integrating for Image Pull

az aks update \
  --resource-group <yourResourceGroupName> \
  --name <yourClusterName> \
  --attach-acr <yourRegistryName>

This command updates the AKS cluster, establishing a connection to the specified ACR. This ensures that the cluster can pull container images from the registry seamlessly.
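
If you want to double-check the link, recent versions of the Azure CLI also include an az aks check-acr command that validates the cluster can pull from the registry (availability depends on your CLI version):

az aks check-acr --resource-group <yourResourceGroupName> --name <yourClusterName> --acr <yourRegistryName>.azurecr.io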

Next Steps

Deploying, Load Balancers, and HTTPS Configuration w/ Istio.

As we conclude the current discussion, the next steps involve the practical deployment of your applications.

(1) We’ll be setting up LoadBalancer services for effective traffic distribution, utilizing YAML deployment files for front-end (FE) and back-end (BE) applications.

(2) Additionally, we'll explore how to secure your communication by obtaining HTTPS (TLS) certificates and connecting your domain through Istio's ingress gateway.

(While you can explore options with Nginx, Istio is emerging as a more efficient and comprehensive solution, providing advanced features for managing and securing traffic in Kubernetes environments).

01.27.24 — Fin.
