Running Docker containers in an Azure Kubernetes Service cluster

What are we going to learn?

  • How to create a Docker image?
  • How to run commands inside Docker containers or make them interactive?
  • How to store container data in Docker volumes?
  • How to make Docker containers communicate securely with each other?
  • How to push a Docker image to Azure Container Registry (ACR)?
  • How to deploy a container in an AKS cluster?

How to create a Docker image?

To create a Docker image, we need a Dockerfile.

A Dockerfile contains the instructions for building a Docker image.

FROM microsoft/dotnet:aspnetcore-runtime
WORKDIR /app
COPY ./out .
ENTRYPOINT ["dotnet", "samplewebapp.dll"]

To publish the .dll in Release mode:

dotnet publish -c Release -o out

To build a Docker image:

  • -t : lets you give the image a name. Here, we have named it samplewebapp.
  • . : means the Dockerfile exists in the current directory.
docker build -t samplewebapp .

To list the images.

docker image ls

To run a Docker image in a container:

  • -d : runs the container in detached mode, so your console won’t be occupied.
  • -p : maps port 8080 on localhost to port 80 inside the container.
docker run -d -p 8080:80 --name myapp samplewebapp
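A quick way to confirm the port mapping works (assuming the app responds over HTTP on port 80 inside the container) is to hit it from the host:

curl http://localhost:8080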

To check the running containers.

docker ps

To check all the containers.

docker ps -a

To stop the running container.

docker stop myapp

To remove the container forcefully:

  • -f : forces the removal of a running container.

docker rm -f myapp

Let’s create a multi-stage Dockerfile, multi-stage.Dockerfile, for an ASP.NET Core web app.

FROM microsoft/dotnet:sdk AS build-env
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM microsoft/dotnet:aspnetcore-runtime
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "samplewebapp.dll"]

To build an image from the above Dockerfile, we run the following command:

docker build -t samplewebapp:v2 -f multi-stage.Dockerfile .
  • -f : explicitly gives the file name when it is not named “Dockerfile”.
  • v2 : tags the samplewebapp image with version v2.

Now, to run this image in a container:

docker run -d -p 8080:80 --name samplewebappv2 -e StageSetting=Multi-Stage samplewebapp:v2

You can check the logs of the above running container with the following command:

docker logs samplewebappv2
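You can also confirm that the environment variable was set inside the container (a quick sanity check; printenv is available in the Debian-based .NET images):

docker exec samplewebappv2 printenv StageSetting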

You can find the sample code in my GitHub repo.

How to run commands inside Docker containers or make them interactive?

Docker provides a command through which we can make a container interactive. Let’s learn about it by pulling the redis image.

  • -it : this flag makes the container session interactive.
// pull the image if it does not exist & run it in a container.
docker run -d -p 6379:6379 --name redis1 redis
// check the logs.
docker logs redis1
// run the commands inside the container.
docker exec -it redis1 sh
// Now we are inside the container, so we can run commands in it.
// list the files inside the container.
#ls -al
// connect to the redis-cli inside the container on port 6379.
#redis-cli
// test it by typing ping (it will return PONG).
#ping
// set the name variable to some value.
#set name John
// get the value back in the name variable.
#get name
// exit from redis-cli.
#exit
// exit from container shell.
#exit
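As a shortcut, you can also run a single command inside the container without opening a shell first. For example:

docker exec -it redis1 redis-cli ping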

How to store container data in Docker volumes?

Storing container data

  • Containers are disposable.
  • Keep data in docker volumes.
  • Mount a volume.

Let’s use the postgres image to store data in the container.

docker run -d -p 5432:5432 -v postgres-data:/var/lib/postgresql/data --name postgres1 postgres

The -v flag mounts the named volume postgres-data at /var/lib/postgresql/data inside the container.
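You can list the volumes Docker knows about and inspect where the named volume lives (on Linux hosts, the Mountpoint shown is a directory on the host’s filesystem):

docker volume ls
docker volume inspect postgres-data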

Now run the above container in the interactive mode.

docker exec -it postgres1 sh
// create the db inside the container.
#createdb -U postgres mydb
// connect to the db.
#psql -U postgres mydb
// create a table.
#CREATE TABLE people (id int, name varchar(20));
// insert into the table.
#INSERT INTO people (id, name) VALUES(2, 'Steph');
// exit from the db.
#\q
// exit from the container.
#exit

The data is mounted successfully on the docker volume postgres-data.
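To see why this matters, here is a quick sketch: even if we remove the container, the data survives in the volume, and a new container mounting the same volume sees it (assuming the commands above were run).

// remove the old container.
docker rm -f postgres1
// start a fresh container against the same volume.
docker run -d -p 5432:5432 -v postgres-data:/var/lib/postgresql/data --name postgres2 postgres
// the table created earlier is still there.
docker exec -it postgres2 psql -U postgres mydb -c "SELECT * FROM people;"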

How to make Docker containers communicate securely with each other?

Docker also provides commands to make containers communicate with each other.

Here,

  • --rm : this flag removes the container redis2 once the shell is exited.
  • sh : opens a shell inside the container.
  • --link : links the new container redis2 to the existing container redis1 so the two can communicate.
// run the containers in linked mode.
docker run -it --rm --link redis1:redis --name redis2 redis sh
// opens the redis-cli.
#redis-cli -h redis
// now, get the name variable's value (it returns the value that was set in the container redis1).
#get name
// exit from the redis-cli.
#exit
// exit from the container.
#exit
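Note that --link is a legacy Docker feature. A user-defined bridge network is the currently recommended way to achieve the same connectivity; here is a minimal sketch (container names resolve through Docker’s built-in DNS):

// create a network & attach the existing redis1 container to it.
docker network create redis-net
docker network connect redis-net redis1
// run a second container on the same network.
docker run -it --rm --network redis-net --name redis2 redis sh
// inside redis2, reach redis1 by its container name.
#redis-cli -h redis1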

How to push a Docker image to Azure Container Registry (ACR)?

We have multiple ways of persisting the image: it can be pushed to Docker Hub, a JFrog registry, or, in our case, an ACR.

Let’s push our image to Azure container registry.

We can create an ACR using the Azure CLI, the Azure portal, or PowerShell.

Let’s do it through the Azure CLI.

Prerequisites:

  • Install the Azure CLI.

You can check whether it was installed successfully by running the command:

az --version

Log in to your Azure account.

az login

If you have multiple subscriptions listed, select the appropriate subscription.

az account set --subscription "YOUR_SUBSCRIPTION_NAME"

Before we create the ACR, we need a resource group.

az group create -n "RESOURCE_GROUP_NAME" -l "LOCATION"

After the resource group is created successfully, we create the ACR. Note that the --sku parameter is required; Basic is the cheapest tier.

az acr create -g "RESOURCE_GROUP_NAME" -n "CONTAINER_REGISTRY_NAME" --sku Basic

Finally, we have created a private registry in our subscription. Now we are in a position to push the local Docker image to the registry.

For that, we need to log in to the ACR.

az acr login -n "CONTAINER_REGISTRY_NAME"

To get the login server name of the container registry:

$loginServer = az acr show -n "CONTAINER_REGISTRY_NAME" --query loginServer --output tsv

Now, the $loginServer variable has the server name in it.

Run docker image ls again to list all the images & copy the image name from there. Then tag the image with the login server name:

docker tag IMAGE_NAME:TAG_NAME $loginServer/IMAGE_NAME:TAG_NAME

Finally, we have arrived at the step to push the docker image to the ACR.

docker push $loginServer/IMAGE_NAME:tag
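For example, with the samplewebapp:v2 image we built earlier (assuming $loginServer was populated as above):

docker tag samplewebapp:v2 $loginServer/samplewebapp:v2
docker push $loginServer/samplewebapp:v2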

Now, verify the ACR repository list contains the name of the docker image we pushed in the last step.

az acr repository list -n "CONTAINER_REGISTRY_NAME" -o table

Of course, we can push multiple versions of an image into ACR by tagging them appropriately.

We can check the tags of a particular image pushed into ACR with the following command:

az acr repository show-tags -n "CONTAINER_REGISTRY_NAME" --repository "IMAGE_NAME" -o table

To delete an image with a specific tag from the ACR:

az acr repository delete -n "CONTAINER_REGISTRY_NAME" --image IMAGE_NAME:TAG_NAME

How to deploy a container in an AKS cluster?

We have different options for where we can deploy the container.

Here, we are going to discuss one of them: AKS.

Before we jump forward, we should know what an orchestrator is.

An orchestrator manages a cluster of worker nodes, each of which is able to run containers, while we provide a description of the application in a declarative format such as a YAML file.

Then it becomes the orchestrator’s responsibility to choose the node on which to run each container.

It provides:

  • Health monitoring
  • Self-healing
  • Upgrades
  • Scaling
  • Resource constraints
  • Networking
  • Service discovery
  • Ingress

Kubernetes basics

A production-grade container orchestration system.

A few terminologies under it:

  • Cluster: Master nodes schedule containers. Worker nodes run containers.
  • kubectl: Command-line tooling to manage your cluster & the resources in it.
  • Pod: Made up of one or more containers. It is the smallest unit a cluster schedules. Pods might be created on one worker node, then disposed of & re-created on another node.
  • ReplicaSet: Defines how many instances of a pod should be running in your cluster.
  • Deployment: Deploys a ReplicaSet. Runs your code on Kubernetes.
  • Service: A pod might be running a web API, but pods are not fixed; they might keep changing. Therefore, we delegate that responsibility & communicate with the service instead of the pod. It also addresses load balancing.
  • Namespace: Allows different microservices to be isolated. A cluster could also host different applications, which can be separated by namespace.
  • YAML: Provides declarative deployments.
  • Helm: Package manager for Kubernetes.

Let’s create a Kubernetes cluster from the Azure portal, where you will be asked to fill in the required info.

Give the cluster name, Kubernetes version & DNS name prefix.

For now, we can keep the node count at 3, which means 3 VMs of size Standard DS2 v2.

Click next to move to the Authentication tab.

Here, we need a service principal in place for the cluster infrastructure. If we don’t have an existing service principal, we can let the portal create a default one.

We can keep role-based access control (RBAC) for Kubernetes disabled for now.

Click on next to view the networking screen.

Keep HTTP application routing set to No and the network configuration set to Basic.

Click next to move to the monitoring screen, where you can enable container monitoring to monitor & get diagnostics for the containers in the cluster.

Click next to reach the Review + Create screen.

You can also automate this cluster setup by downloading the deployment template.

When the deployment is done, you can visit the created cluster.

You can monitor containers & view logs, as we enabled these options while creating the cluster.

You can click on Scale to edit the number of nodes in the cluster.

You can check the kubectl version.

kubectl version --short

Install the Kubernetes CLI (kubectl) through the Azure CLI with the following command.

az aks install-cli

Update your kubeconfig by getting the credentials from Azure.

az aks get-credentials -g "RESOURCE_GROUP_NAME" -n "CLUSTER_NAME"

Get the nodes from the cluster.

kubectl get nodes

You can also scale out the cluster from the Azure CLI.

az aks scale -g "RESOURCE_GROUP_NAME" -n "CLUSTER_NAME" --node-count 3

Look at this sample-app.yml file, which deploys the services of the Docker example voting app on the Kubernetes cluster.

# redis
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis
  name: redis
spec:
  clusterIP: None
  ports:
  - name: redis-service
    port: 6379
    targetPort: 6379
  selector:
    app: redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:alpine
        ports:
        - containerPort: 6379
          name: redis

# db
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: db
  name: db
spec:
  clusterIP: None
  ports:
  - name: db
    port: 5432
    targetPort: 5432
  selector:
    app: db
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  labels:
    app: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:9.4
        env:
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: postgres
        ports:
        - containerPort: 5432
          name: db
        volumeMounts:
        - name: db-data
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: db-data
        persistentVolumeClaim:
          claimName: postgres-pv-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

# result
---
apiVersion: v1
kind: Service
metadata:
  name: result
  labels:
    app: result
spec:
  type: LoadBalancer
  ports:
  - port: 5001
    targetPort: 80
    name: result-service
  selector:
    app: result
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: result
  labels:
    app: result
spec:
  replicas: 1
  selector:
    matchLabels:
      app: result
  template:
    metadata:
      labels:
        app: result
    spec:
      containers:
      - name: result
        image: dockersamples/examplevotingapp_result:before
        ports:
        - containerPort: 80
          name: result

# vote
---
apiVersion: v1
kind: Service
metadata:
  name: vote
  labels:
    apps: vote
spec:
  type: LoadBalancer
  ports:
  - port: 5000
    targetPort: 80
    name: vote-service
  selector:
    app: vote
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vote
  labels:
    app: vote
spec:
  replicas: 2
  selector:
    matchLabels:
      app: vote
  template:
    metadata:
      labels:
        app: vote
    spec:
      containers:
      - name: vote
        image: dockersamples/examplevotingapp_vote:before
        ports:
        - containerPort: 80
          name: vote

# worker
---
apiVersion: v1
kind: Service
metadata:
  labels:
    apps: worker
  name: worker
spec:
  clusterIP: None
  selector:
    app: worker
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: worker
  name: worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - image: dockersamples/examplevotingapp_worker
        name: worker
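To create all of these resources in the cluster, apply the manifest with kubectl (assuming the file is saved as sample-app.yml):

kubectl apply -f sample-app.yml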

To check the status of one of the above deployments, for example vote:

kubectl get deployments vote
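Since the vote and result services are of type LoadBalancer, Azure provisions public IPs for them. You can watch for the external IP to be assigned and then browse to it on the service port (5000 for vote, 5001 for result):

kubectl get service vote --watch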

To see the Kubernetes dashboard, we run the following command.

az aks browse -g "RESOURCE_GROUP_NAME" -n "CLUSTER_NAME"

Browse to http://127.0.0.1:8001/ to see the Kubernetes dashboard.

You can explore all the options available from the left side.

That’s all for this article. Thanks for reading!
