Container Management With Kubernetes

Bouwe Ceunen · Published in Axons · Nov 23, 2018

A container ship cruising across the ocean, carrying precious cargo containers. All of these containers need to fit precisely onto the ship: tighter container placement means less fuel and more added value. The ship is your cloud infrastructure provider, like AWS, the containers are your Docker containers that harbor your applications, and the management system that loads those containers onto the ship is Kubernetes. Like everything in life, the more efficiently something can be done, the less money you have to spend.

Kubernetes is Greek for helmsman, which is why the Kubernetes logo is a helm (steering wheel). Photo by garrett parker on Unsplash.

Kubernetes Core Elements

  • Namespaces

Namespaces help you maintain a clear overview of all your Deployments, Pods, etc. They are used to divide chunks of your cluster from one another: you can, for example, create separate namespaces for your monitoring, your logging and your applications.
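
A namespace itself is one of the simplest manifests you can apply; as a minimal sketch (the name below is just an example):

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring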

  • Nodes

Nodes are the underlying infrastructure of your cluster; on AWS, for example, they are EC2 instances. Every node runs the kubelet, which makes sure everything on that node runs smoothly within your Kubernetes cluster. You can create a cluster from a wide range of instance types; it really doesn't matter which you choose, as long as they have enough memory and CPU to fit your needs and the needs of Kubernetes itself.
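
You can inspect your nodes and their capacity with kubectl; the node name below is purely illustrative:

kubectl get nodes -o wide
kubectl describe node ip-10-0-1-42.eu-west-1.compute.internal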

  • Pods

Next up are Pods, which run on nodes. No rocket science there: the Pod is the smallest schedulable entity in a Kubernetes cluster. It consists of one or more Docker containers, depending on how many containers you want to schedule together. Pods are created by, for example, Jobs, Stateful Sets, etc.
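
As a minimal sketch, a Pod with a single container could look like this (name and image are just examples):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:1.15
    ports:
    - containerPort: 80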

  • Jobs

Jobs are meant to run to completion; they are used to run workloads that will not run indefinitely. Examples are clearing expired access tokens, transforming some data, etc. Next in line are Cronjobs, which schedule Jobs at specified intervals.
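
A minimal sketch of such a Job, assuming a hypothetical image that clears expired tokens, could look like this:

apiVersion: batch/v1
kind: Job
metadata:
  name: remove-expired-tokens
spec:
  backoffLimit: 3
  template:
    spec:
      containers:
      - name: remove-expired-tokens
        image: mycompany/token-cleaner:1.0   # hypothetical image
      restartPolicy: Never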

  • Cronjobs

Cronjobs schedule Jobs at specified intervals, which can range from every minute to every Sunday at noon. They spin up Jobs, which in turn spin up Pods. The schedule of these Jobs is configured with the same cron syntax as other cron-based jobs.
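
A sketch of a Cronjob that runs the Job above every night at 3 AM; note that on clusters from around this time the apiVersion is batch/v1beta1, while recent versions use batch/v1:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: remove-expired-tokens
spec:
  schedule: "0 3 * * *"                      # standard cron syntax: every day at 3 AM
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: remove-expired-tokens
            image: mycompany/token-cleaner:1.0   # hypothetical image
          restartPolicy: Never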

  • Replica Sets

Yet another level up the chain are Replica Sets, which ensure that a specified number of Pods is always running. Replica Sets are controlled by Deployments.

  • Deployments

A little more complicated are Deployments. Deployments manage Replica Sets and keep track of old Replica Sets, so it is always possible to roll back to an earlier Replica Set with another Docker image and version of your application.
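
A minimal Deployment sketch (names and image are hypothetical); the replicas field is what the underlying Replica Set enforces, and changing the Pod template, e.g. the image tag, creates a new Replica Set:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                 # the Replica Set keeps 3 Pods running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: mycompany/my-app:1.0   # hypothetical image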

  • Replication Controllers

Replication Controllers are deprecated in favor of Deployments and Replica Sets. With Replication Controllers you don't have Replica Sets, so rolling updates (which spin up a new Replica Set, slowly take Pods down from the old one and boot up Pods from the new one) and rollbacks are a lot more difficult.

  • Daemon Sets

Daemon Sets are deployed on each node. So if you have, for example, 5 nodes and a Daemon Set, you can rest assured that your application is deployed on each node exactly once. This is interesting for networking Pods or for Pods that gather logs from each node, such as Fluentd.
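
A sketch of a Daemon Set running a log collector on every node; the Fluentd image tag is illustrative:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.3    # illustrative image tag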

  • Stateful Sets

With Stateful Sets you can attach separate AWS EBS volumes. Any data you gather while your application is running in a Pod is lost when that Pod goes down, unless you write it to a separate volume. You can claim a persistent volume (on AWS these are EBS volumes) with a Persistent Volume Claim, which is explained next.
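
A Stateful Set skeleton that mounts such a volume could look like the sketch below (the Kafka image and mount path are just examples); the volumeClaimTemplates part that actually requests the EBS volume is shown under the next bullet:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
  namespace: kafka
spec:
  serviceName: kafka
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: kafka
        image: confluentinc/cp-kafka:5.0.0   # illustrative image
        volumeMounts:
        - name: data                         # matches the claim template below
          mountPath: /var/lib/kafka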

  • Persistent Volume Claims

Persistent Volume Claims are exactly what they say: claims on persistent volumes. These claims are backed by Storage Classes; you request a claim that has a certain Storage Class, for example in the volumeClaimTemplates of a Stateful Set as shown below.

volumeClaimTemplates:
- metadata:
    name: data
  spec:
    storageClassName: ssd
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 1Gi
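
You can also request a volume with a standalone Persistent Volume Claim outside a Stateful Set; a minimal sketch, reusing the 'ssd' Storage Class defined in the next bullet:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  namespace: kafka
spec:
  storageClassName: ssd
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
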
  • Storage Classes

On AWS these Storage Classes can be, for example, gp2, io1, etc. This makes it easy to create several Storage Classes based on the needs and performance expectations of your EBS volumes.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  • Services

The functionality of Services is to link your Pods and give them a DNS-resolvable endpoint. If you have an application with 5 Pods and 1 Service, all requests to this Service will automatically be distributed across those Pods.

apiVersion: v1
kind: Service
metadata:
  name: argo-ui
  namespace: argo
spec:
  ports:
  - port: 80
    targetPort: 8001
  selector:
    app: argo-ui

There are several port fields, including port and targetPort. port is the port of the Service, i.e. the port you send your requests to, and targetPort is the port your application listens on inside the Pod. For example, you can do a GET call to the following URL to reach the Argo UI.

http://argo-ui.argo:80

Internal DNS in Kubernetes is very easy; it always has the following format:

http://<service_name>.<namespace_name>:<service_port>
  • (Cluster)Roles

Roles in Kubernetes make it easy to implement RBAC in your cluster. There are 2 kinds: Roles and ClusterRoles. Roles apply only in the namespace where they were created, while ClusterRoles span all namespaces.
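
As a sketch, a namespaced Role that only allows reading Pods could look like this (name and namespace are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: kafka
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]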

  • (Cluster)RoleBindings

Rolebindings and clusterrolebindings bind (cluster)roles to certain namespaces. So if you want to grant a specific user access to a specific namespace, you can create a rolebinding that binds the 'admin' role within that namespace. With Dex, you can then set up a way to log in to your Kubernetes cluster with OpenID Connect, for example with your Google account.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: admin
  namespace: kafka
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: bouwe.ceunen@gmail.com

Kubectl Commands

Kubectl is used to control everything in your Kubernetes cluster. If no namespace is given, it will use the 'default' namespace. You can get the Kubernetes version, get clusterroles, roles, pods, etc. It is also possible to open up a bash shell inside a Pod (if the Pod's image has bash) and execute commands in the Pod itself. You can of course also create, edit and delete everything.

kubectl version
kubectl get clusterrole admin -o yaml
kubectl get roles --all-namespaces
kubectl get pods --all-namespaces
kubectl create namespace monitoring
kubectl apply -f deployment.yml -n monitoring
kubectl delete -f statefulset.yml -n kafka
kubectl exec -it kafka-0 -n kafka -- /bin/bash
kubectl edit rolebinding admin -n elk-stack
kubectl delete job remove-expired-tokens

Now that you know the basics of Kubernetes, don’t hesitate to read my other Medium post on setting up a Kubernetes cluster on AWS and lessons learned!
