Published in Geek Culture
A Crash Course in Kubernetes

Automated container deployment, scaling, and management

Containers for the masses


This article is aimed at developers with a basic knowledge of Java and Docker, who are looking to understand how Kubernetes fits into contemporary software development.

We will aim to cover containerisation, container management and the basics of Kubernetes. We will finish with a worked example using AWS’ Elastic Kubernetes Service (EKS).


In Software Engineering we have code, and we need somewhere to run it. This can be our local machines, a server, or somewhere else entirely. However, we need to make sure wherever we run it, it runs in the same way. This might mean using the same language versions, the same OS, the same dependencies installed on a machine, etc.

Containers aim to fix this problem. They allow us to declare all of these facets, including the application, in an image. We pass the notion of the image around, providing a consistent environment wherever it is run.

We no longer worry about running the application itself, but instead about running the image.

The container framework we have chosen to solve this issue is Docker. There is an expectation that you have a basic knowledge of how it works, but even if you do not, hopefully you’ll still be able to get the gist of how Kubernetes functions.

To clarify the use of containers, let's explore the diagram below:

Traditional Deployment vs Virtualised Deployment vs Container Deployment

The above contrasts the traditional, virtualised and container-based deployments. In the traditional instance we manually maintain the OS to provide a runtime for all of the apps running on it. Each of the apps would be competing for resources on the same server.

In the virtualised deployment we isolate the applications using virtual machines. However, these each require their own OS to be maintained and a Hypervisor layer (software that creates and runs virtual machines, responsible for sharing and distributing the physical machine’s resources with the VMs).

Containers are deployed in a more lightweight way than VMs as they have more relaxed isolation properties (they share the host OS rather than each requiring their own), and are portable, efficient and consistent.

Now we have established the use of containers, what is container management?

Container management is how we create, deploy, organise and scale our containers. Imagine we would like to host a website and decide to use containers.

We define a container with our site on, and would like to deploy it on the cloud. We want more than one instance of the container running and we want to distribute traffic between these instances, as well as take them out of circulation if something goes wrong. This is all done through container management.

We have introduced the relevant concepts, so now let's examine Kubernetes itself. Kubernetes is an open-source container orchestration system originally developed by Google.

Let’s break down the system into its component parts, explaining their purpose along the way.

Kubernetes Clusters

These collect and coordinate a group of highly available computers (known as Nodes). We deploy a container to the cluster, and the cluster decides how best to position that container on any given node. The logic of organising the cluster is the responsibility of the Control Plane, which communicates with the nodes via an API and the Kubelet service running on each node. Additionally, each node runs a container runtime (such as Docker) to manage the containers themselves.

A Kubernetes Cluster

Kubernetes Deployments

If we have a cluster organised we will want to put something onto it. This is where we make a Deployment Configuration. This contains the image for your container and the number of instances you would like to run. The control plane uses it to update or create these instances on the nodes. It also monitors the deployed containers, replacing them if they have issues.
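As an illustrative sketch (the names, image and replica count here are hypothetical, not from a real project), a minimal Deployment Configuration might look like the following:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment        # illustrative name
spec:
  replicas: 3                    # number of instances we would like running
  selector:
    matchLabels:
      app: my-app
  template:                      # the pod template used to create instances
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0.0    # the container image to deploy
          ports:
            - containerPort: 80
```

Applying a file like this asks the control plane to keep three instances of the image running, replacing any that fail.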

We can organise all of this using kubectl, the command-line tool for Kubernetes.

A Kubernetes Deployment

Pods and Nodes

When we deploy containers they run in Pods. Pods group containers and shared resources; containers in the same pod share an IP address and port space.

Introducing Pods
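To illustrate, a hypothetical Pod manifest grouping two containers (which would share the pod's IP address and port space) might be sketched as:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod               # illustrative name
spec:
  containers:
    - name: web                  # serves the application on port 80
      image: my-app:1.0.0
      ports:
        - containerPort: 80
    - name: sidecar              # a second container sharing the pod's network
      image: busybox:1.36
      command: ["sh", "-c", "sleep infinity"]
```

In practice, pods are usually created for us by a Deployment rather than defined directly like this.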

By default, pods are only visible to other pods and services within the cluster. If we want to reach them from outside the cluster we need to use a proxy.


Pods are logically grouped, and their access policy defined, by a Service constructed in a YAML file. The set of pods targeted by a service is usually defined by a LabelSelector. We can use services to do things like create a load balancer, and they also allow pods to die and be replaced without disrupting the application, since traffic is only routed to matching pods.

Services deal with pods using labels and selectors. Labels can be used to do things like add version tags or classify an object.
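As a sketch (the names and label values here are hypothetical), a Service selecting pods by label might be defined like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: LoadBalancer         # expose the pods via a load balancer
  selector:
    app: my-app              # the LabelSelector: target pods carrying this label
  ports:
    - port: 80               # port the service listens on
      targetPort: 80         # port the pod containers listen on
```

Any pod labelled `app: my-app` is picked up automatically; pods that die simply drop out of the set.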

Kubernetes Services and Selectors

This brings us to the concept of a Replica Set. Replica Sets are used to maintain a stable number of pods running at any one time, and contain a Pod Template for creating new pods whenever necessary.
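For illustration, a standalone ReplicaSet (normally managed for us by a Deployment; the names here are hypothetical) could be sketched as:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app-replicaset
spec:
  replicas: 3                # keep three pods running at all times
  selector:
    matchLabels:
      app: my-app
  template:                  # the Pod Template used to create replacement pods
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0.0
```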

Let’s introduce a couple more concepts and clarify all of the above in an example using AWS EKS.

Worked Example in AWS EKS

In this short example we’re going to deploy an example React application to EKS. Most of the necessary code can be found in the repository here.

EKS is useful as it allows us to host our nodes in AWS, and handles patching, node provisioning, and updates. We can spread nodes across availability zones for higher reliability, and AWS offers services like managed node groups and Fargate which help with resource scaling.

The first thing we need to do is create the React Application and build a Docker image for it. The Docker image is then used to create containers, which we will run on EKS.

We create the app using:

npx create-react-app my-app

Let’s dip into the src/App.js file and make some changes.

function App() {
  return (
    <div className="App">
      <header className="App-header">
        <img src={logo} className="App-logo" alt="logo" />
        <p>This is our example AWS EKS application!</p>
      </header>
    </div>
  );
}

Running the React app with npm start we get the below screen. This is what we’re hoping for when we try and reach our application running on EKS:

Let’s host this on EKS!

Now we need to build a Docker image. Going right back to the initial reason we introduced containers, our image defines what we need an environment to look like in order to run our code.

We can also leverage Docker to do multi-stage builds. As well as a consistent environment to run our code, we also need a consistent environment in which to build it.

# An environment to build our code
# This is the base image. We want to build our own
# image on top of this.
FROM node:15.14.0-alpine3.10 as build
# We set the container working directory to my-app
WORKDIR /my-app
# Next we add the node modules to the path on the image
ENV PATH /my-app/node_modules/.bin:$PATH
# We copy over our package.json and package-lock.json
COPY my-app/package.json ./
COPY my-app/package-lock.json ./
# We install our dependencies
RUN npm ci
# Copy over all of the files from our local machine and
# run the build command
COPY my-app/. ./
RUN npm run build
# An environment to run our code
# This is the base image we want to use for our production
# build
FROM nginx:stable-alpine
# Copy the built files onto our image
COPY --from=build /my-app/build /usr/share/nginx/html
# Expose port 80 to be accessed via HTTP
EXPOSE 80
# Start up the NGINX server
CMD ["nginx", "-g", "daemon off;"]

This is the Dockerfile we will use to build the image. We can wrap it in a docker-compose file to make running it easier.

version: '3.7'
services:
  react-eks-app:
    container_name: react-eks-app
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '80:80'

Once this is done we can bring up and build the container using docker-compose up --build.

The next stages assume you have the AWS CLI installed and up to date, kubectl installed and an AWS account with the necessary permissions. We will be following the instructions here in case you need any help setting these up.

We will also be going over the steps at a reasonably high level. Diving into details is beyond the scope of a Medium article. The next few sections are covered more completely in this article on creating a cluster.

Creating a VPC and IAM Roles

The first thing we need is a VPC to deploy our cluster to, for which we use a CloudFormation stack provided by AWS. We also need an IAM role that we can assign to our cluster that gives it the necessary EKS permissions.

Create an EKS Cluster

This is done through the AWS console. We give the cluster a name and the previously defined role, and link it to the VPC that has just been made.

Once this step is done we should be able to see it in the console:

Our React EKS Cluster is now available!

Configure our Computer to Connect to the Cluster

We will need our local machine to be able to connect to the cluster as well, which requires a kubeconfig file (a file for configuring Kubernetes access with the kubectl tool). We can generate one using the AWS CLI:

aws eks update-kubeconfig --region <region-code> --name <cluster-name>

Once this is done we can run the kubectl get svc command and receive a response similar to the below:

An example service response

Create an IAM OIDC Provider

This is used to give our Kubernetes service accounts access to AWS resources and full instructions are provided in the previous article.

Create Nodes

Now we have a cluster and somewhere to put our nodes (the VPC) we can start adding them to our architecture! We will be using AWS Fargate, which is designed for EKS and ECS. It abstracts away the need to worry about servers and instead allows us to focus on the application.

We create a Fargate profile with the necessary permissions; this allows us to spin up resources as our containers require them. However, it is worth noting at this step that if we want to use only Fargate (including for the automatically set-up components of a cluster such as CoreDNS, and later for our load balancer), we need some extra setup here.

Take a Breath and Review

So far we have set up a VPC to deploy nodes to, a cluster and the necessary permissions. However, we have not deployed anything to the cluster, nor have we provided a way to access the services running there. In the next steps we will do both of the above, with more detailed information in this article on deploying AWS load balancer controllers and this article on deploying application load balancers.

Install an AWS Load Balancer Controller

This manages AWS Elastic Load Balancers for a Kubernetes cluster. We will use it to create an application load balancer to route traffic to our React application.

Once these steps have been completed we should be able to run the command kubectl get deployment -n kube-system aws-load-balancer-controller and see something like the below:

AWS Load Balancer Controller service

Create an Image Repository and Add Image

Now we want to add the image we created in the previous step to an image repository, so it can be pulled down and run as a container. For this we will be using ECR, the AWS image repository. Image repositories store images, which can then be pulled and run as containers elsewhere.

Creating an ECR repository is very straightforward. In the AWS Console navigate to the us-west-2 region, go to ECR and find the create button. All we need to do is enter the name of the repository we would like to use.

Once this is done we need to tag the image we built previously in order to connect it to the new repository:

docker tag <image-id> <aws_account_id>.dkr.ecr.<region>.amazonaws.com/<repository-name>:<tag>

We now need to authenticate Docker against our private repository:

aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com

Having authenticated, we can push our image.

docker push <aws_account_id>.dkr.ecr.<region>.amazonaws.com/<repository-name>:<tag>

We should then be able to see the image in ECR:

Viewing our image in ECR

Deploy the Image to our EKS Cluster

Finally, we need to run the image as containers on our cluster. To begin, we create a Fargate profile, which allows us to deploy nodes for our application.

eksctl create fargateprofile --cluster <my-cluster> --region <region-code> --name <alb-sample-app> --namespace <namespace>

The next step is to create a YAML file referencing our image. The whole file can be seen here, but the core components are:

  1. One deployment to run our app. This is what takes the image we previously pushed to our repository and deploys it on a node.
  2. One NodePort service to open a port on every node of our cluster. We are running our application over HTTP, so we use port 80. Kubernetes then routes incoming traffic on the NodePort to our application, irrespective of which node it is on.
  3. One application load balancer ingress. This is what takes requests from the outside world and directs them to our containers.
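The whole manifest is linked above; as a rough sketch, its three components might fit together like this (the `react-eks` namespace and ingress name follow the kubectl commands below, but the other names and the image URI are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment                 # 1. runs our app from the pushed image
metadata:
  name: deployment-react-eks
  namespace: react-eks
spec:
  replicas: 2
  selector:
    matchLabels:
      app: react-eks
  template:
    metadata:
      labels:
        app: react-eks
    spec:
      containers:
        - name: react-eks
          image: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/<repository-name>:<tag>
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service                    # 2. NodePort service opening port 80
metadata:
  name: service-react-eks
  namespace: react-eks
spec:
  type: NodePort
  selector:
    app: react-eks
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress                    # 3. application load balancer ingress
metadata:
  name: ingress-react-eks
  namespace: react-eks
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # required for Fargate pods
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-react-eks
                port:
                  number: 80
```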

We can use this file to update our cluster using the below command:

kubectl apply -f <file name>.yaml

Once this is complete we can run the below.

kubectl get ingress/ingress-react-eks -n react-eks

This will give us a response similar to the image below, but with a populated address.

An example of an ingress response

Copying the web address into our browser's address bar, we can see our application being served up to us!

The same site on EKS!


In conclusion, we have covered the motivation for, and the realisation of, containerisation and container management, culminating in a worked example using AWS EKS.



James Collerton

Senior Software Engineer at Spotify, Ex-Principal Engineer at the BBC