Trying out Kubernetes!

Juan Tejeria
The Appraisal Lane Developers
3 min read · Aug 7, 2017

Docker and its containers have been around for some time now, and we’ve seen different cloud providers come up with services that let you scale and manage your containerized applications. Although these services are indeed great, they are not free, and in some scenarios we might just want to have our own hosted solution.

Kubernetes is exactly this! It’s an open-source container orchestration system: it provides the functionality to scale, deploy and manage your containerized applications. Designed and implemented by Google, it was later donated to the Cloud Native Computing Foundation.

Kubernetes not only supports Docker but also other container runtimes, although here we will be focusing only on Docker.

So, how does Kubernetes work?

Understanding Kubernetes starts with understanding its main components: Pods, Labels, Controllers and Services.

A Pod is the basic unit in Kubernetes and consists of a group of containers. All containers within a Pod share the same environment and are controlled as a single application.
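To make this a bit more concrete, here is a minimal Pod manifest (just a sketch; the names, labels and images are made up for illustration) grouping two containers that share the same environment:

# pod-example.yml — a hypothetical Pod grouping two containers
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
  labels:
    app: app-name
spec:
  containers:
    - name: web
      image: nginx:1.7.9
      ports:
        - containerPort: 80
    - name: sidecar
      image: busybox
      command: ["sh", "-c", "while true; do sleep 3600; done"]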

Labels are key/value pairs attached to objects such as Pods; Controllers and Services use them to select the Pods they act on. Controllers run a reconciliation loop whose objective is to bring the cluster to a desired state.
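For example, assuming a deployment called app-name like the one we create in the hands-on section below, asking for more replicas is just declaring a new desired state and letting the controllers converge on it:

# Declare that we now want 3 replicas of the application
kubectl scale deployment app-name --replicas=3
# Watch the controller create (or remove) pods until the actual state matches the desired one
kubectl get pods -w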

Last but not least, we have the Services. A Kubernetes Service is a set of pods that work together. On top of grouping pods, Services provide service discovery and request routing. There are different ways to expose Services, but we won’t get into them right now.
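To give a rough idea of how Services and Labels fit together, a Service is usually defined with a label selector that picks out the pods it routes traffic to (again a sketch, reusing the hypothetical app: app-name label from above):

# service-example.yml — a hypothetical Service selecting pods by label
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  selector:
    app: app-name      # route traffic to pods carrying this label
  ports:
    - port: 80         # port exposed by the Service
      targetPort: 8081 # port the containers listen on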

Let’s try and understand a bit more about Kubernetes with its architecture:

Kubernetes Architecture

We can see a master node, which is the main controlling unit of the cluster: it handles the workload and communication across the system. The Controller Manager runs the DaemonSet and Replication controllers, which are in charge of the lifecycle of pods. The master also contains a Scheduler that tracks resource utilization on each node and decides where new pods should run.

Each of the Kubernetes nodes runs a Kubelet process, which is in charge of the running state of the node, and a Kube-Proxy, an implementation of a network proxy and load balancer.
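If you are curious, on many setups (Kops and kubeadm included) you can actually see these pieces, since the control-plane components run as pods in the kube-system namespace:

# Control-plane components (scheduler, controller manager, etc.) usually show up here
kubectl get pods --namespace=kube-system
# Kubelet and proxy details for a node appear in its description
# (node-name is a placeholder; pick one from kubectl get nodes)
kubectl describe node node-name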

Hands On

Let’s now see what Kubernetes has to offer.

First of all, we will need to start a Kubernetes cluster. You can do this manually, or you can get some help from a tool like Kops!
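As a rough sketch of the Kops route on AWS (the bucket name, cluster name and zone below are placeholders for your own values):

# Kops keeps the cluster state in an S3 bucket
export KOPS_STATE_STORE=s3://my-kops-state-bucket
# Generate the cluster configuration
kops create cluster --name=k8s.example.com --zones=us-east-1a
# Create the cloud resources and bring the cluster up
kops update cluster k8s.example.com --yes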

Once you have a Kubernetes cluster up and running, deploying your containers and exposing them is really simple. We can do this in two steps:

kubectl run app-name --image=nginx:1.7.9 --port=8081
kubectl expose deployment app-name --type=LoadBalancer --port=80 --target-port=8081 --name=service-name

This will create the necessary components within the cluster to get an nginx service running on port 8081, accessed through a Load Balancer on port 80. The same thing can be done in a single step by defining the deployment in a YAML file, as shown in the Kubernetes documentation. Once we have the file defined:

kubectl create -f file_name.yml
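For reference, a file_name.yml for the same nginx setup could look roughly like this (a sketch that mirrors the names and ports used in the commands above; older clusters may need extensions/v1beta1 instead of apps/v1 for the Deployment):

# file_name.yml — hypothetical manifest combining the deployment and its service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-name
  template:
    metadata:
      labels:
        app: app-name
    spec:
      containers:
        - name: app-name
          image: nginx:1.7.9
          ports:
            - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  type: LoadBalancer
  selector:
    app: app-name
  ports:
    - port: 80
      targetPort: 8081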

Additional helpful commands:

# List the nodes of the cluster
kubectl get nodes
# Describe the different components (nodes, pods, services, ...)
kubectl describe nodes
# Rolling update a deployment with a new image version
kubectl set image deployments/app-name app-name=nginx:1.9.0
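A few more commands that come in handy while playing around (assuming the same app-name deployment and service-name service from above; pod-name is a placeholder):

# Tail the logs of a pod
kubectl logs -f pod-name
# Check how a rolling update is progressing
kubectl rollout status deployment/app-name
# Roll back to the previous version if something goes wrong
kubectl rollout undo deployment/app-name
# Clean everything up
kubectl delete deployment app-name
kubectl delete service service-name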

Conclusion

Kubernetes has a lot to offer and is there for us to use, for free. That said, plenty of things that the different cloud providers normally solve with their container services will need to be handled on your own. Even though setting up a production-ready cluster with Kubernetes might take some time, it can be lots of fun!
