Kubernetes — an overview

Daniel Pereira
Just Another Dev Blog
Feb 22, 2017

Kubernetes is an open source system for automating deployment, scaling, and management of containerized applications. It was started by Google in 2014 and later donated to the Cloud Native Computing Foundation.

With Kubernetes we can quickly deploy our applications and scale them according to our needs, without having to stop anything in the process. It is designed to be portable, extensible and self-healing, making the system easier to manage for the people who administer it.

The system is built around a few key concepts:

  • Nodes
  • Pods
  • Deployments
  • Services

This post gives a brief introduction to these concepts, to help you understand how Kubernetes works.

Nodes

Nodes, previously known as minions, are the worker machines in Kubernetes. A node can be either a physical or a virtual machine. When reading the status of a Node we can get the following information (a trimmed example follows the list):

  • Addresses: here we can obtain the hostname and the internal and external IP addresses of the Node.
  • Condition: there are two conditions: OutOfDisk and Ready. OutOfDisk will be true when there is insufficient space to add new pods. Ready tells whether the Node is healthy and able to accept pods. If the node controller cannot reach the Node for 40 seconds, the Ready condition is set to Unknown.
  • Capacity: the resources available in the node, such as CPU, memory and the maximum number of pods allowed.
  • Info: general information about the node, such as program versions and OS name.
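
For illustration, here is a trimmed sketch of what that status block can look like when we inspect a node. All values below are made up, and the exact fields vary with the Kubernetes version:

    status:
      addresses:
      - type: Hostname
        address: worker-1
      - type: InternalIP
        address: 10.0.15.5
      conditions:
      - type: Ready
        status: "True"         # healthy and able to accept pods
      - type: OutOfDisk
        status: "False"        # there is still room for new pods
      capacity:
        cpu: "2"
        memory: 4047892Ki
        pods: "110"
      nodeInfo:
        osImage: Ubuntu 16.04.1 LTS
        kubeletVersion: v1.5.3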

Unlike some cloud services such as ECS, creating a node in Kubernetes doesn't actually create the machine: we must create it in our desired environment (e.g. AWS EC2, on-premises) and pass the proper configuration to the system. When we ask Kubernetes to create a node, it just creates a representation it can use to manage the resources available. Kubernetes will perform health checks to verify that a node is healthy; if so, the node is available to receive Pods.
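
As a sketch, assuming a machine reachable as node-1 has already been provisioned, the representation we would register could look like this (in practice the kubelet usually self-registers the node for us):

    apiVersion: v1
    kind: Node
    metadata:
      name: node-1                # hypothetical; must match the machine Kubernetes should manage
      labels:
        environment: production   # labels are optional metadata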

Nodes are managed by Node Controllers. It's their responsibility to assign a CIDR block to a node when it is registered, which ensures the proper IP range is set. They also keep the list of nodes in sync with the list of machines actually available: when a node is deemed unhealthy, the Controller checks whether the machine is unhealthy as well. If that is the case, the node is removed from the list of available nodes.

Pods

Pods are composed of a group of one or more containers, the shared storage for them, and their options. Kubernetes supports several container runtimes, with Docker being the most common one.

Every container inside a pod shares the same IP address. As expected in this case, they can reach each other through localhost. This is valid only for containers running in the same pod, though: if we have two containers located in two different pods, they will have different IPs and cannot reach each other using localhost.
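
A minimal sketch of a pod with two containers that reach each other through localhost (the image choices here are just examples):

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-cache
    spec:
      containers:
      - name: web
        image: nginx            # can reach the cache at localhost:6379
        ports:
        - containerPort: 80
      - name: cache
        image: redis            # shares the pod's IP and port space with web
        ports:
        - containerPort: 6379

Because both containers share the pod's network namespace, they also share its port space, so they cannot both bind the same port.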

Pods should not be treated as persistent entities. In case of a failure (e.g. a node failure) they will be destroyed, which means we shouldn't use them as a place to hold vital information. Data that needs to outlive the pod should be stored in volumes. Since pods are intended to be ephemeral, it is important to understand their lifecycle.
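
As an illustration, a container mounts a volume like this; for data that must outlive the pod itself, the volume would be backed by something external such as a PersistentVolumeClaim (the claim name app-data below is hypothetical and assumed to already exist):

    apiVersion: v1
    kind: Pod
    metadata:
      name: db
    spec:
      containers:
      - name: postgres
        image: postgres
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data   # data written here lands in the volume
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: app-data    # hypothetical pre-existing claim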

Each pod has a phase, which can be described as a high-level summary of where the pod currently is in its lifecycle. A phase can take the following values (a small example follows the list):

  • Pending: the pod has been accepted by Kubernetes, but one or more of its container images are still being created or downloaded.
  • Running: all of the containers have been created and the pod is bound to a node.
  • Succeeded: all containers in the pod have terminated successfully.
  • Failed: the pod has terminated and at least one of its containers failed.
  • Unknown: it was not possible to obtain the status of the pod.
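
To see these phases in practice, here is a minimal sketch of a pod that runs a single command and exits: it stays Pending while the image is pulled, moves to Running while the command executes, and ends in Succeeded (or Failed, if the command exits with a non-zero status):

    apiVersion: v1
    kind: Pod
    metadata:
      name: one-shot
    spec:
      restartPolicy: Never      # let the pod finish instead of restarting it
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "echo done"]   # exits 0, so the pod ends as Succeeded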

Deployments

Deployments are used to manage and update Pods. We can use them to bring up new Pods, change the image version of a container and even roll back to a previous state if something goes wrong. When creating a Deployment we define a desired state, and Kubernetes will keep our environment in that desired state.

Imagine that we want to make sure we always have 3 pods running our web server. This is achievable by creating a Deployment that sets the replicas property to 3. When Kubernetes runs this Deployment it creates 3 Pods with the given configuration of our web server. If for some reason one of the pods is destroyed, Kubernetes will automatically bring up a new one, so our desired state of 3 replicas is maintained even when problems occur.
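
A minimal sketch of such a Deployment, assuming an nginx web server (all names and labels here are illustrative):

    apiVersion: apps/v1          # older clusters used extensions/v1beta1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3                # the desired state: always 3 pods
      selector:
        matchLabels:
          app: web
      template:                  # the pod template Kubernetes replicates
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.11    # bumping this tag triggers a rolling update
            ports:
            - containerPort: 80

Changing the image tag in this file and applying it again is enough to roll out a new version.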

We can monitor the status of our deployment to see if everything is going according to what is expected. When looking at a Deployment's status, the following information is available (illustrative output follows the list):

  • Desired: how many pods were defined in the desired state. When the deployment is finished the number of current pods should be equal to this column.
  • Current: indicates the total replicas the Deployment manages.
  • Up-to-date: how many pods have the latest template. For instance, if we change the container's image version and roll out the Deployment again, a pod only counts as up-to-date once it has been recreated from the new template.
  • Available: how many pods are in the Ready status.
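
For illustration, these columns map onto what kubectl get deployments printed at the time; the numbers below are made up and the exact output format is version-dependent:

    NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    web       3         3         3            3           5m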

Services

Imagine that we have two services: ServiceA and ServiceB. ServiceA needs to communicate with ServiceB. Since Pods are ephemeral, we cannot use their addresses as ServiceB's interface: if a Pod ends up being terminated, the reference to it is no longer valid and the environment stops working properly. We need something that can act as a stable interface and will not be destroyed. In Kubernetes we achieve this with Services.

A Kubernetes Service consists of a set of Pods and a policy that defines how to access them. Services use label selectors to determine which subset of pods they target. We can think of selectors as something like tags in AWS EC2, which let us select a set of instances based on the given information.
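
A minimal sketch of a Service fronting the web pods from the Deployment sketch above, using the label app: web as its selector:

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web           # targets every pod carrying this label
      ports:
      - port: 80           # the port the service listens on
        targetPort: 80     # the container port traffic is forwarded to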

When publishing Kubernetes Services we can define how they are going to be exposed. For instance, a backend service is usually only accessible inside the local network, while a frontend service needs to be available outside the cluster. The possible types of Service are listed below:

  • ClusterIP: the service is going to be exposed inside the cluster, with a local IP, and will not be reachable outside the cluster. This is the default option.
  • NodePort: exposes the service on the given port of each node, using the node's IP. For instance, if a node's IP is 10.0.15.5 and the NodePort is 4567, we can reach the service at 10.0.15.5:4567 (see the sketch after this list).
  • LoadBalancer: exposes the service using a cloud provider’s load balancer.
  • ExternalName: maps the service to the DNS name configured in this property (e.g. mydomain.com).
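
As a sketch, here is the same service exposed outside the cluster via a NodePort. Note that by default Kubernetes only accepts node ports in the 30000–32767 range, so the 4567 in the example above is purely illustrative:

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: NodePort       # the default would be ClusterIP
      selector:
        app: web
      ports:
      - port: 80           # cluster-internal port
        targetPort: 80     # container port
        nodePort: 30567    # reachable at <node-ip>:30567 on every node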
