Kubernetes: Deploying Containers in a Cluster
Motivation
Every time we write an application, we eventually need to deploy it so that people can use it. After deployment, the chaos begins. We need to keep a constant eye on the following aspects.
- Continuous monitoring of the system so that it stays live.
- What if one of our servers is not working as expected? How do we take it down and boot another?
- What if we get more traffic? The servers now need to scale.
- How do we make sure there is no downtime for our services?
These are too many what-ifs to handle, right? Most of the time we just boot up a VM, host our application there, and open a port to the world so that people can reach it. But this setup comes with vulnerabilities, scaling issues, and downtime.
Let's look at something that handles all of this. One such production-grade system is Kubernetes.
Kubernetes
Kubernetes is a container orchestration platform that helps us deploy containerized applications onto cloud infrastructure.
Note the two key terms, orchestration and containerized application. Let's discuss them.
So what's the buzz about containerized applications? Think of a container as a carrier that holds something (in this case, our application). What if we packaged our application, already built with all of its dependencies, into an image so that any machine could run it? That accelerates the delivery timeline and creates a clear abstraction between application and infrastructure.
As for orchestration, think of the person standing in front of a group of musicians in an orchestra. The conductor doesn't play any music but directs the musicians by indicating which notes to play. Kubernetes plays exactly that role here.
To make it clearer: Kubernetes does not build any application. It manages pre-built applications so that the deployed containers can communicate among themselves, and it helps us automate, scale, and manage our deployments.
To get started with Kubernetes, we need to clear up a few keywords and understand their meanings.
- Node: A physical or virtual machine in the cluster.
- Cluster: A collection of interconnected nodes, where one is the cluster master and the others are workers. When we send a command to the cluster, it actually goes to the cluster master, which then communicates with the workers.
- Pod: As we saw earlier, in the case of Docker the container is the lowest logical unit; for Kubernetes, it's a Pod. A pod is capable of running multiple containers inside itself. The pod is not an architectural unit so much as a group that contains one or more containers scheduled together.
- Deployment: A deployment represents a set of multiple identical pods. It has a template that defines the structure of a pod. Similar to a pod, it's not an architectural unit either; it groups pods into a logical unit for management and discovery.
- Service: After deploying a pod into a cluster, we cannot access it publicly or internally by any IP or hostname, as it's not exposed. A service exposes pods so that they can be accessed internally or publicly via an IP or hostname. In effect, a service groups the endpoints of a set of pods into a single resource. Services come in different types, e.g. ClusterIP, NodePort, and LoadBalancer (Ingress is a related but separate resource that routes external traffic to services).
- Secret: Our API keys and other server-specific configuration that should stay private can be stored, with encoded values, in a resource named Secret that can be deployed to the cluster.
- ConfigMap: The same idea as a Secret, but the data here is not encoded.
So to summarize: a containerized application is deployed on a node inside a cluster, abstracted by pods that are grouped together in deployments, and it can communicate with other pods and the public internet via a service. All of this is managed by a system called Kubernetes.
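To make the keywords concrete, here is a minimal pod manifest as a sketch; the names and image are hypothetical placeholders, not taken from the project used later in this tutorial:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # hypothetical pod name
  labels:
    pod: application        # a label a service could select on
spec:
  containers:
    - name: demo-container  # hypothetical container name
      image: nginx:alpine   # any pre-built image from a registry
      ports:
        - containerPort: 80
```

In practice we rarely create bare pods like this; we let a deployment create and manage them.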
Prerequisites
To get up and running with Kubernetes, we need the following:
- A cluster up and running on our physical computer or in the cloud. Locally this can be done with minikube. For this tutorial I will be using Google Kubernetes Engine in the cloud.
- A tool named kubectl. The installation instructions are here.
- Our Docker image, ready and published to a public container registry, as Kubernetes won't build or publish any image for us. One more thing to keep in mind: we will be using GCR (Google Container Registry) because, by default, GKE can pull from it. We could use Docker Hub too, but in that case we would need to set imagePullSecrets in the deployment files, which is out of scope for this article.
Steps
- Open a terminal and run
kubectl version
The output should look like this
- Create a cluster in Google Cloud. Make sure the Kubernetes Engine API is enabled; if not, you can enable it from APIs & Services.
What we did here is initialize a cluster with 3 nodes (physical machines in the cloud) and name it demo-cluster.
In addition, I have also enabled autoscaling. You can find it in More Options, under Machine Configuration in the Node Pool configurations. It might take several minutes to boot.
- Make sure you have installed the Google Cloud SDK from here, and log in from the terminal using
gcloud auth login
- Now connect to your cluster using this
gcloud container clusters get-credentials demo-cluster --zone us-central1-a --project nomadic-mesh
- Double check that you are in your desired cluster using
kubectl config current-context
The output should look like this; in the case of minikube it will show minikube only.
- Now make sure you build your Docker image and push it to Google Container Registry. Run
gcloud auth configure-docker
to set up Docker authentication with the gcloud SDK.
- Build the image using
docker build . -t gcr.io/nomadic-mesh/docker-image-demo
Here gcr.io is the registry domain, nomadic-mesh is the project name, and docker-image-demo is the image name.
- Push the Docker image using
docker push gcr.io/nomadic-mesh/docker-image-demo
- We are using this coding repo from GitHub. Just check out the before-k8s tag into a new branch named dev using
git checkout -b dev before-k8s
- We will create a folder named k8s and put our Kubernetes files there, named deployment.yaml, service.yaml, and configmap.yaml. We will explain each of them below.
- Let's get into the k8s/configmap.yaml file.
Line 1 is the Kubernetes API version. Line 2 is the type of Kubernetes resource. Lines 3-4 are the metadata of the resource, where the name is application-configmap. Lines 5-6 are the data block, which can contain many key-value pairs; in this case there is only one, with key APP_NAME and value My app name.
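Based on the line-by-line description above, a reconstruction of what k8s/configmap.yaml would look like:

```yaml
apiVersion: v1                 # line 1: kubernetes api version
kind: ConfigMap                # line 2: type of resource
metadata:                      # lines 3-4: metadata of the resource
  name: application-configmap
data:                          # lines 5-6: key-value data block
  APP_NAME: My app name
```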
- Now get to the k8s/deployment.yaml file.
Line 1 is the API version. Line 2 is the type of Kubernetes resource. Lines 3-4 hold the metadata. Lines 5-27 are the specification of the deployment. We said earlier that a deployment should have a template: lines 10-27 are the template definition. Lines 11-14 hold the metadata of the template, where name and labels are declared. You can have multiple key-value pairs as labels; just keep in mind that the pair pod: application will be used for discovering this deployment from a service and for management. Lines 7-9 describe the key-value matches by which this deployment will be discovered: here we declared that wherever pod: application appears in the selector field of a service, this deployment will be discovered. Lines 15-27 are the specification of the template. We will have a list of containers, so the first container specification is
- name: application-deployment-container
  image: gcr.io/nomadic-mesh/docker-image-demo
  imagePullPolicy: Always
  ports:
    - containerPort: 8000
  env:
    - name: APP_NAME
      valueFrom:
        configMapKeyRef:
          key: APP_NAME
          name: application-configmap
So the name of the container is application-deployment-container. The image that will run in the container is gcr.io/nomadic-mesh/docker-image-demo, and we will always pull the image by setting imagePullPolicy: Always. Our app is hosted on port 8000 according to our Dockerfile, so we expose the application port by setting containerPort: 8000 on line 21. In our application we have also used an environment variable named APP_NAME, which has to be supplied as a list under the env key. Lines 23-27 do that, and the variable's value comes from a configmap named application-configmap with key APP_NAME. Notice carefully that this configmap name is the same name we used under the metadata of configmap.yaml. Line 6 sets the number of replicas of the pod we will be running under this deployment.
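Putting the pieces together, a reconstruction of the full k8s/deployment.yaml could look like the following; the metadata names and the replica count are assumptions, since they are not stated in the text:

```yaml
apiVersion: apps/v1                # line 1: api version
kind: Deployment                   # line 2: type of resource
metadata:                          # lines 3-4: metadata
  name: application-deployment     # assumed name
spec:                              # lines 5-27: deployment spec
  replicas: 2                      # line 6: assumed replica count
  selector:                        # lines 7-9: discovery labels
    matchLabels:
      pod: application
  template:                        # lines 10-27: pod template
    metadata:                      # lines 11-14: template metadata
      name: application-pod        # assumed name
      labels:
        pod: application
    spec:                          # lines 15-27: template spec
      containers:
        - name: application-deployment-container
          image: gcr.io/nomadic-mesh/docker-image-demo
          imagePullPolicy: Always
          ports:
            - containerPort: 8000  # line 21
          env:
            - name: APP_NAME       # lines 23-27
              valueFrom:
                configMapKeyRef:
                  key: APP_NAME
                  name: application-configmap
```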
- Now get the final k8s/service.yaml file.
Lines 1, 2, and 3-4 are the same as before. Lines 5-11 are the specification of the service, where we used pod: application as our selector for deployment discovery. We can forward multiple ports under the ports keyword; here we forward only one combination. We set 8000 as the targetPort, which is our application port, and 80 as the port on the service side. We used the type LoadBalancer, as this gives us a real external IP. Among the service types, LoadBalancer provides a real IP, while NodePort and ClusterIP don't (an Ingress, a separate resource, can also expose services externally).
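A reconstruction of what k8s/service.yaml would look like, matching the description above; the metadata name is an assumption:

```yaml
apiVersion: v1                 # line 1
kind: Service                  # line 2
metadata:                      # lines 3-4
  name: application-service    # assumed name
spec:                          # lines 5-11: service spec
  selector:
    pod: application           # matches the deployment's pod label
  ports:
    - port: 80                 # port exposed by the service
      targetPort: 8000         # application port in the container
  type: LoadBalancer           # provisions a real external IP
```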
Moment of Truth
Now that all of this is explained, let's deploy it to our cluster. You can do it in two ways: deploy file by file, or deploy the whole folder containing the .yaml files. Kubernetes automatically figures out what to deploy.
- Make sure you are in the correct cluster.
kubectl config current-context
- Deploy the files one after another
kubectl apply -f k8s/configmap.yaml
kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/service.yaml
- If all the files are in the same folder, you can run
kubectl apply -f k8s/
Wait some time to let the load balancer boot up; you will get a real IP on it (kubectl get service will show it once it is assigned).
Just go to that address and you should see your application there.
Last Words
So I tried to demonstrate how one can deploy a Docker image simply using Kubernetes. This is a very basic tutorial that can help you understand how to write .yaml files for Kubernetes and deploy them. You can do many fascinating architectural configurations with Kubernetes.
You can find all the code in the coding repo under the after-k8s tag.
If you have any questions or need any help with your ongoing project, drop a comment here. I would be happy to help.