Setting up Servers on Kubernetes via Azure

Natalie P · Published in coding spaghetti · 10 min read · Nov 2, 2023

I’m completely new to Kubernetes. A friend suggested it after I burned through my startup AWS credits too quickly. Seeing other startup founders just pull through and get stuff out the door motivated me to crush my self-limiting belief that I’m not technical enough, so I decided to tackle this. Probably the wrong choice, since it took me a little too long to get stuff out the door, but at least I’ve come out the other side more confident.

Microsoft for Startups recently launched, and I was able to easily get $5k in credits along with access to other perks like GitHub Enterprise, OpenAI…

Once you’ve created an Azure account, create a resource group. Resource groups let you group related resources so you can deploy, update, and delete them together and manage their permissions as a unit.

I divided mine based on environment: prod and dev. If you have a larger project with multiple subprojects, you may want to group things differently.

Naming Resources

Since there are so many resources you can create, I knew I wouldn’t be able to distinguish or remember what each thing was, so I used a naming convention: {name}-{resource-group}-{name of resource}

One consideration: some resources only accept alphanumeric names (no hyphens), so if you want perfect consistency you may prefer to name everything in camelCase.

ex: scoop-dev-rg

Benefits:

  • Some resources get auto-generated by Azure when you create a resource; when something didn’t follow my naming convention, I knew I didn’t create it. Adding a name was the differentiator; you can also choose something like a company name or project name.
  • For resource-group I used my environment: prod vs dev. That way, when I saw a bunch of resources, I knew which ones worked together and which didn’t.
  • The name of the resource let me recognize what I was creating. While Azure has icons to represent this, I honestly can’t remember which one represents which; having the resource name made this so much easier. I usually avoid shorthand because I have a terrible memory, but feel free to use it, e.g. ‘resource group’ might be ‘rg’.
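To make the convention concrete, here is a tiny shell sketch that composes the names used throughout this post (scoop and the dev environment are just my examples):

```shell
# Compose resource names from the convention {name}-{environment}-{resource}
name="scoop"   # project/company differentiator
env="dev"      # environment: dev or prod
rg_name="${name}-${env}-rg"            # resource group
cluster_name="${name}-${env}-cluster"  # AKS cluster
ip_name="${name}-${env}-backend-ip"    # static public IP
echo "$rg_name $cluster_name $ip_name"
```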

Creating the Resource Group in Azure

Search for resource group in the search bar if you don’t already see it on the main screen.

Hit the create button on the top left.

Choose the subscription you want your group to charge to.

Name your resource group (hyphens are fine here). For example, I created scoop-dev-rg and scoop-prod-rg.

Choose your Region

I didn’t need to change any of the other settings, so feel free to just hit Review + Create and then Create (or Next until you reach the end, then Create). Note: creating resource groups is free, so don’t be shy about hitting the create button.

You’ll be brought to the resource group dashboard, where you can see all of your resource groups along with your new resource group!

When you click in you’ll be able to see all the resources you assign to this group. At the moment this will be empty.

Creating the Kubernetes Cluster in Azure

Similarly, search for and select Kubernetes services.

Click Create on the top right and select Create a Kubernetes cluster.

Select the resource group the cluster belongs to.

Select a cluster preset configuration. This affects cost; if you’re just playing around, use the Dev/Test preset.

Name your cluster. Following my convention, mine was scoop-dev-cluster.

I chose the defaults for the rest, hit the Review + Create button, and then Create.

Note: Azure has moved away from service principals as a way to manage permissions to resources and now uses managed identities:

  1. Service Principal: This is the older method where you manually create and manage the Azure AD application and associated service principal that the AKS cluster uses.
  2. Managed Identity: This is a newer and recommended method where Azure manages the identity for you, providing an identity for the AKS cluster automatically in the background.

To use the terminal:

You can download tooling (the Azure CLI) so you can access Azure from your own terminal with the service principal we’re about to create, or you can use their cloud terminal.

I just use the cloud terminal for simplicity, which can be accessed at https://portal.azure.com/#cloudshell/ or by clicking the terminal icon at the top right of your Azure screen.

You will need to log in from the terminal: az login

You will get instructions:

A web browser has been opened at https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize. Please continue the login in the web browser. If no web browser is available or if the web browser fails to open, use device code flow with `az login --use-device-code`.

Follow the instructions that open up on the browser.

Run az account set --subscription {subscription-id} to make sure your account is set to the subscription you want charged. If you only have one, it will be set to the default one.

Setting up Network Contributor role

The Network Contributor role in Azure is a built-in role that grants permissions to manage networking resources, but not access to them. This role is typically assigned to users who need to manage network resources, which can include virtual networks, subnets, network interfaces, and several other network-related services.
https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#network-contributor

# Service principal with the Contributor role on the resource group
RESOURCE_GROUP=scoop-dev-rg
SUBSCRIPTION_ID=$(az account show --query id -o tsv) # only if you set your subscription before; otherwise put it in manually
az ad sp create-for-rbac --name facets-github-service-principle \
  --role contributor \
  --scopes /subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP \
  --json-auth

# Grant the cluster's managed identity the Network Contributor role on the resource group
CLUSTER_NAME=scoop-dev-cluster
RESOURCE_GROUP=scoop-dev-rg
SUBSCRIPTION_ID=$(az account show --query id -o tsv)
MANAGED_IDENTITY_PRINCIPAL_ID=$(az aks show --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --query identity.principalId -o tsv)
az role assignment create --assignee $MANAGED_IDENTITY_PRINCIPAL_ID \
  --role "Network Contributor" \
  --scope /subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP

# Grant the managed identity pull/push rights on the container registry
RESOURCE_GROUP=scoop-dev-rg
CLUSTER_NAME=scoop-dev-cluster
REGISTRY_NAME=scoopDevContainerRegistry
RESOURCE_ID=$(az acr show --name $REGISTRY_NAME --query id --output tsv)
MANAGED_IDENTITY_PRINCIPAL_ID=$(az aks show --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --query identity.principalId -o tsv)

# Create the role assignments
az role assignment create --assignee $MANAGED_IDENTITY_PRINCIPAL_ID --scope $RESOURCE_ID --role AcrPull
az role assignment create --assignee $MANAGED_IDENTITY_PRINCIPAL_ID --scope $RESOURCE_ID --role AcrPush
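The --scope argument in these commands is just an ARM resource ID path. A quick sketch of how it is built (the subscription GUID here is a placeholder):

```shell
# Build the role-assignment scope for a resource group
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"  # placeholder GUID
RESOURCE_GROUP="scoop-dev-rg"
SCOPE="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RESOURCE_GROUP}"
echo "$SCOPE"
```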

Azure Login Credentials

NAME=myApp
RESOURCE_GROUP=scoop-dev-rg
SUBSCRIPTION_ID=$(az account show --query id -o tsv) # only if you set your subscription before; otherwise put it in manually

az ad sp create-for-rbac --name $NAME --role contributor \
  --scopes /subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP \
  --json-auth

The output should look like this:

{
  "clientId": "<GUID>",
  "clientSecret": "<STRING>",
  "subscriptionId": "<GUID>",
  "tenantId": "<GUID>",
  "resourceManagerEndpointUrl": "<URL>"
  (...)
}

Save the full JSON object into a GitHub Actions secret. I created a secret named AZURE_CREDENTIALS_DEV.
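Before pasting the JSON into GitHub, you can sanity-check that it contains the keys azure/login expects. This is just a sketch with a placeholder object, not real credentials:

```shell
# Placeholder credentials object with the same shape as the az CLI output
CREDS='{"clientId":"<GUID>","clientSecret":"<STRING>","subscriptionId":"<GUID>","tenantId":"<GUID>"}'
RESULT=$(python3 -c '
import json, sys
creds = json.loads(sys.argv[1])
required = {"clientId", "clientSecret", "subscriptionId", "tenantId"}
missing = sorted(required - creds.keys())
print("ok" if not missing else "missing: " + ", ".join(missing))
' "$CREDS")
echo "$RESULT"
```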

Assign it the Network Contributor role:

CLIENT_ID=<enter clientId value>
RESOURCE_GROUP=scoop-dev-rg
SUBSCRIPTION_ID=$(az account show --query id -o tsv)
az role assignment create --assignee $CLIENT_ID \
  --role "Network Contributor" \
  --scope /subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP

Public IP addresses

If you want the service to always deploy to the same IP address, you will need to create a static IP for your service. Store it as a GitHub Actions variable (the workflow later reads it via vars.AZURE_PUBLIC_IP_DEV).

I named mine scoop-dev-backend-ip; it will show up in the DNS name.

You can use the command line to create a static public IP address as well. (Depending on your load balancer SKU you may also need to pass --sku Standard; check the az network public-ip create docs.)

NAME=scoop-dev-backend-ip
RESOURCE_GROUP=scoop-dev-rg

az network public-ip create \
  --resource-group $RESOURCE_GROUP \
  --name $NAME \
  --allocation-method Static

You will use it later as a setting in your k8.yml:

type: LoadBalancer
loadBalancerIP: ${PUBLIC_IP}

Please note: if you need HTTPS, you will need to set up an ingress controller. The service type will then be ClusterIP instead of LoadBalancer, and additional settings for an SSL certificate will be needed, which is not covered in this tutorial.

Docker Repo

We’ll be using Docker to create an image (a blueprint of your setup) of the service you want to deploy to Kubernetes.

We will need:

  • an Azure container registry to store the images
  • Docker credentials to use with GitHub Actions to enable automatic deployment of the images to the registry

Go to Container registries, press the Create button, and create a registry.

When the registry is done building, go to Access keys in the registry’s menu. Enable the Admin user to get the login credentials you will need for azure/docker-login@v1.

You will need to store the login server, username, and one of the passwords in GitHub environment secrets for the repo you plan to deploy. They will eventually be added to a GitHub Actions yml like this:

- uses: azure/docker-login@v1
  with:
    login-server: ${{ vars.AZURE_CONTAINER_REGISTRY_LOGIN_SERVER_DEV }}
    username: ${{ vars.AZURE_CONTAINER_REGISTRY_USERNAME_DEV }}
    password: ${{ secrets.AZURE_CONTAINER_REGISTRY_PASSWORD_DEV }}

Under your repository’s settings for the codebase you want to serve with Kubernetes, add your variables and secrets. Once secrets are saved you can’t read them again, whereas variables remain viewable. Note: store things you don’t want visible, such as passwords, as secrets. Port numbers, URLs, and usernames can be stored as variables and accessed via ${{ vars.VARIABLE_NAME }}; secrets are accessed via ${{ secrets.SECRET_NAME }}.

Auto deployment via Github Actions

First, store your environment variables in GitHub secrets and variables.

Under .github/workflows create a yaml file with deployment instructions.

Cluster Namespace

Note: if you aren’t using default as the namespace to deploy your services to, make sure to create it:

NAMESPACE=scoop-services-namespace
kubectl create namespace $NAMESPACE
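Namespace names must be valid DNS-1123 labels: lowercase alphanumerics and hyphens, starting and ending with an alphanumeric, at most 63 characters. The regex below is my own sketch of that rule, for a quick local check before running kubectl:

```shell
NAMESPACE=scoop-services-namespace
# DNS-1123 label: ^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$
if echo "$NAMESPACE" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$'; then
  VALID=yes
else
  VALID=no
fi
echo "$NAMESPACE valid: $VALID"
```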

Github Workflow


name: Build and deploy Node.js app to scoopDevContainerRegistry

on:
  push:
    branches:
      - workflow

env:
  NODE_VERSION: '16' # Define node version here to maintain consistency across jobs
  PYTHON_VERSION: '3.x' # Define python version here to maintain consistency across jobs
  DB_TYPE: ${{ vars.DB_TYPE_DEV }}
  DB_DATABASE: ${{ vars.DB_DATABASE_DEV }}
  DB_HOST: ${{ vars.DB_HOST_DEV }}
  DB_USERNAME: ${{ vars.DB_USERNAME_DEV }}
  DB_PORT: ${{ vars.DB_PORT_DEV }}
  DB_PASSWORD: ${{ secrets.DB_PASSWORD_DEV }}
  DB_SSL_CA_CERT: ${{ secrets.DB_SSL_CA_CERT_DEV }}
  JWT_SECRET: ${{ secrets.JWT_SECRET_DEV }}
  JWT_EXPIRES_IN: ${{ secrets.JWT_EXPIRES_IN_DEV }}
  PORT: ${{ vars.PORT_DEV }}
  FILESERVICE_PORT: ${{ vars.FILESERVICE_PORT_DEV }}
  BE_FILE_SERVER_URL: ${{ vars.BE_FILE_SERVER_URL_DEV }}
  REGISTRY_FOLDER: backend # folder name, making it easier to find all docker images from the same repo
  AZURE_CONTAINER_REGISTRY_LOGIN_SERVER: ${{ vars.AZURE_CONTAINER_REGISTRY_LOGIN_SERVER_DEV }}
  AZURE_CONTAINER_REGISTRY_USERNAME: ${{ vars.AZURE_CONTAINER_REGISTRY_USERNAME_DEV }}
  AZURE_CONTAINER_REGISTRY_PASSWORD: ${{ secrets.AZURE_CONTAINER_REGISTRY_PASSWORD_DEV }}
  AZURE_CREDENTIALS: ${{ secrets.AZURE_CREDENTIALS_DEV }}
  AZURE_PUBLIC_IP: ${{ vars.AZURE_PUBLIC_IP_DEV }}
  NAMESPACE: scoop-services-namespace
  RESOURCE_GROUP: scoop-dev-rg
  CLUSTER_NAME: scoop-dev-cluster

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
        with:
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: ${{ env.NODE_VERSION }}

      - name: Login to Docker Registry
        uses: docker/login-action@v1
        with:
          registry: ${{ env.AZURE_CONTAINER_REGISTRY_LOGIN_SERVER }}
          username: ${{ env.AZURE_CONTAINER_REGISTRY_USERNAME }}
          password: ${{ env.AZURE_CONTAINER_REGISTRY_PASSWORD }}

      - name: Build and push Docker images
        uses: docker/build-push-action@v2
        with:
          context: .
          file: Dockerfile
          build-args: |
            DB_TYPE=${{ env.DB_TYPE }}
            DB_DATABASE=${{ env.DB_DATABASE }}
            DB_HOST=${{ env.DB_HOST }}
            DB_USERNAME=${{ env.DB_USERNAME }}
            DB_PORT=${{ env.DB_PORT }}
            DB_PASSWORD=${{ env.DB_PASSWORD }}
            DB_SSL_CA_CERT=${{ env.DB_SSL_CA_CERT }}
            JWT_SECRET=${{ env.JWT_SECRET }}
            JWT_EXPIRES_IN=${{ env.JWT_EXPIRES_IN }}
            PORT=${{ env.PORT }}
            FILESERVICE_PORT=${{ env.FILESERVICE_PORT }}
            BE_FILE_SERVER_URL=${{ env.BE_FILE_SERVER_URL }}
          push: true
          tags: |
            ${{ env.AZURE_CONTAINER_REGISTRY_LOGIN_SERVER }}/${{ env.REGISTRY_FOLDER }}:${{ github.sha }}
            ${{ env.AZURE_CONTAINER_REGISTRY_LOGIN_SERVER }}/${{ env.REGISTRY_FOLDER }}:latest

  deploy:
    runs-on: ubuntu-latest
    needs: build
    permissions:
      contents: read
      id-token: write
      deployments: write

    steps:
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: ${{ env.NODE_VERSION }}

      - name: Set up Python and Azure CLI
        uses: actions/setup-python@v2
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Login to Azure
        uses: azure/login@v1
        with:
          creds: ${{ env.AZURE_CREDENTIALS }}

      - uses: azure/docker-login@v1
        with:
          login-server: ${{ env.AZURE_CONTAINER_REGISTRY_LOGIN_SERVER }}
          username: ${{ env.AZURE_CONTAINER_REGISTRY_USERNAME }}
          password: ${{ env.AZURE_CONTAINER_REGISTRY_PASSWORD }}

      - name: Setup kubectl
        uses: azure/setup-kubectl@v3
        with:
          version: 'latest'

      - name: Set up AKS context
        uses: azure/aks-set-context@v3
        with:
          resource-group: ${{ env.RESOURCE_GROUP }}
          cluster-name: ${{ env.CLUSTER_NAME }}

      - name: Checkout repository
        uses: actions/checkout@v3
        with:
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Substitute environment variables in k8s config
        run: |
          export PORT=${{ env.PORT }}
          export IMAGE_TAG=${{ github.sha }}
          export CONTAINER_REGISTRY=${{ env.AZURE_CONTAINER_REGISTRY_LOGIN_SERVER }}
          export PUBLIC_IP=${{ env.AZURE_PUBLIC_IP }}
          export REGISTRY_FOLDER=${{ env.REGISTRY_FOLDER }}
          envsubst < k8/k8.yml.template > k8/k8.yml

      - name: Echo contents of k8.yml
        run: cat k8/k8.yml

      - name: Deploy to AKS
        id: deploy-aks
        uses: Azure/k8s-deploy@v4
        with:
          namespace: ${{ env.NAMESPACE }}
          images: ${{ env.AZURE_CONTAINER_REGISTRY_LOGIN_SERVER }}/${{ env.REGISTRY_FOLDER }}:${{ github.sha }}
          manifests: |
            k8/k8.yml
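The envsubst step above simply replaces ${VAR} references in the template with exported values. The same expansion can be sketched with a plain here-document (the values are made-up examples):

```shell
# Example values standing in for the real workflow variables
PORT=3000
IMAGE_TAG=abc123
CONTAINER_REGISTRY=scoopdevcontainerregistry.azurecr.io
# An unquoted here-document expands ${VAR} the same way envsubst does
RENDERED=$(cat <<EOF
image: ${CONTAINER_REGISTRY}/backend:${IMAGE_TAG}
containerPort: ${PORT}
EOF
)
echo "$RENDERED"
```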

k8.yml.template

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
  labels:
    app: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: ${CONTAINER_REGISTRY}/${REGISTRY_FOLDER}:${IMAGE_TAG}
          imagePullPolicy: Always
          ports:
            - containerPort: ${PORT}

---
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend
  ports:
    - name: backend-port
      protocol: TCP
      port: ${PORT}
      targetPort: ${PORT}
  type: LoadBalancer
  loadBalancerIP: ${PUBLIC_IP}
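One gotcha with envsubst: any ${VAR} you forget to export is silently replaced with an empty string. A quick way to list which placeholders a template expects (template text inlined here for the example):

```shell
# Extract the unique ${VAR} placeholders from a template
TEMPLATE='image: ${CONTAINER_REGISTRY}/${REGISTRY_FOLDER}:${IMAGE_TAG}
containerPort: ${PORT}
loadBalancerIP: ${PUBLIC_IP}'
VARS=$(echo "$TEMPLATE" | grep -oE '\$\{[A-Z_]+\}' | sort -u | tr -d '${}')
echo "$VARS"
```

Comparing this list against the export lines in the workflow step catches silent empty substitutions.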

Dockerfile

# Specify the base image with Node.js version
FROM node:16.15.0

# Set the working directory in the container
WORKDIR /app

# Copy the package.json and yarn.lock files to the working directory
COPY package*.json yarn.lock ./

# Install dependencies
# (--production skips devDependencies; drop the flag if `yarn build` below needs them)
RUN yarn install --production

# Copy the rest of the application files to the working directory
COPY . .

# Display the contents of the build context
RUN ls -a

# Build the application
ARG DB_TYPE
ARG DB_DATABASE
ARG DB_HOST
ARG DB_USERNAME
ARG DB_PORT
ARG DB_PASSWORD
ARG DB_SSL_CA_CERT
ARG JWT_SECRET
ARG JWT_EXPIRES_IN
ARG PORT
ARG BE_FILE_SERVER_URL

# Set environment variables
ENV DB_TYPE=$DB_TYPE
ENV DB_DATABASE=$DB_DATABASE
ENV DB_HOST=$DB_HOST
ENV DB_USERNAME=$DB_USERNAME
ENV DB_PORT=$DB_PORT
ENV DB_SSL_CA_CERT=$DB_SSL_CA_CERT
ENV DB_PASSWORD=$DB_PASSWORD
ENV JWT_SECRET=$JWT_SECRET
ENV JWT_EXPIRES_IN=$JWT_EXPIRES_IN
ENV PORT=$PORT
ENV BE_FILE_SERVER_URL=$BE_FILE_SERVER_URL

# Build the application
RUN yarn build

# Expose a port if your application needs to listen on a specific port
EXPOSE $PORT

# Define the command to start your Node.js application
CMD [ "yarn", "start:prod" ]
