Deploying an Application with Kubernetes

David Seybold
Published in Design and Tech.Co · Mar 4, 2019

Building fault-tolerant, highly available applications is difficult. How do you manage deployments? Load balancing? What about scaling your infrastructure? There are a lot of moving parts, which complicates things. Container runtimes like Docker and rkt evolved to make it easier to ship the same deployable unit to a number of different machines and have it run consistently on all of them. This makes deploying applications a little easier, but how can we manage these containers? One awesome tool for managing containers is Kubernetes.

Kubernetes is an open-source container orchestration tool originally developed by Google. To quote the Kubernetes website:

Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.

It is a great tool that significantly lowers the amount of effort needed to run a collection of containers. Not only that, but it’s really fun to say: Koob-er-net-eez.

In this tutorial I will show you how to start a Kubernetes cluster, and deploy a small application on it. By the end of this article I hope that you will have at least a cursory understanding of how to deploy an application using Kubernetes.

Setup Minikube

As we are just learning about Kubernetes and how it is used to deploy applications, we have no need for full-fledged cloud infrastructure running on AWS or Google Cloud Platform. That would introduce another layer of complexity that would only serve to confuse us further. Luckily, Kubernetes provides a tool called Minikube that we can use to run a single-node Kubernetes cluster right on our local machine.

To install Minikube on your local machine, the Kubernetes docs have some excellent instructions here. Minikube runs the cluster in a VM, so we will also need a hypervisor. By default it uses VirtualBox, which you are more than welcome to use. As I am using a Mac, I prefer hyperkit, and if you are following along on a Mac I recommend you use it too. Follow these instructions to install the hyperkit hypervisor.

We will also need to install kubectl, which is the command-line tool for Kubernetes. It allows us to create and manage our Kubernetes infrastructure right from the command line, and we will be using it throughout this tutorial. Follow the instructions here to install it for your operating system.
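
If you are on a Mac and already use Homebrew, the installs usually boil down to a couple of commands (a rough sketch only; the linked instructions are the authoritative source, and the hyperkit driver has its own install steps):

$ brew install kubectl
$ brew install minikube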

Once you have installed all the dependencies you can start the Minikube cluster. If you want to use VirtualBox, just leave off the --vm-driver option.

$ minikube start --vm-driver=hyperkit

This may take a little while to complete as it must first download the minikube ISO file. Set your computer down, go get some coffee, and come back later. Luckily, this will only happen the first time you run it. Once it is finished you now have a functioning Kubernetes cluster to experiment with.
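
Before moving on, you can sanity-check that everything is up with a couple of standard commands (the exact output varies by version):

$ minikube status
$ kubectl cluster-info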

Basic Kubernetes Objects

At its most basic level, there are four objects that Kubernetes uses when managing containers.

  • Pod: Smallest deployable object in Kubernetes. Represents a running process. Holds one or more tightly coupled containers (a bare-bones example follows this list).
  • Service: Abstraction of multiple pods. Uses a label to select which pods it will route traffic to.
  • Volume: Allows data persistence for a pod.
  • Namespace: Virtual cluster. Allows multiple virtual clusters to run on one physical cluster.
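
To make the Pod a bit more concrete, here is roughly what a bare-bones Pod manifest looks like (just an illustration with a generic nginx image; in the rest of this tutorial we will let a Deployment create the pods for us rather than defining them by hand):

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example
spec:
  containers:
    - name: example
      image: nginx:1.15    # any container image works here
      ports:
        - containerPort: 80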

Create Your First Deployment

A Deployment in Kubernetes is a higher-level abstraction built on top of the basic Kubernetes objects. It describes the desired state for one part of your application: we can configure the containers that will be a part of it, the number of replicas that exist at one time, and the labels that are applied to each pod. Deployments can be created directly on the command line or defined in a .yaml file. We will use a .yaml file as it is reusable and is what you will find in the “real” world.

For this tutorial I have created a Docker image you can use to experiment with. Feel free to substitute your own image in its place. Here is a .yaml file for the user-service deployment.
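
A minimal version of that file looks roughly like this (the exact image tag davidseyboldblog/user-service:latest is an assumption, modeled on the todo-service image mentioned at the end of this article; substitute whatever image you are using):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service-deployment
  labels:
    app: user-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: davidseyboldblog/user-service:latest   # assumed image name
          ports:
            - containerPort: 12000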

Here we are creating a Deployment named user-service-deployment. The first few lines define the API version to use and the kind of object (a Deployment) that the file describes. We also add some metadata for the Deployment definition. A name is required for all Kubernetes objects and must be unique within a namespace. I am not going to talk about Namespaces within Kubernetes here; just be aware that we are using the default one right now. The spec.replicas attribute determines how many instances of the application are running at one time. The spec.selector attribute tells the Deployment controller how to find the pods that should be considered replicas, and that same label is applied to the pods via spec.template.metadata.labels. The final section tells the Deployment what container image to use and what port on the container to open. The user-service image contains a RESTful API exposed on port 12000, so that is the port we open here.

To create the Deployment on our Minikube cluster run this command:

$ kubectl create -f user-deployment.yaml

To check and see if it worked we can run this:

$ kubectl get deployments

This outputs a list of the Deployments on the cluster.

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
user-service-deployment   1/1     1            1           21s

Another way of monitoring the objects on your Minikube cluster is to use the Minikube dashboard. This can be accessed by running minikube dashboard. From there you can view the status of all the objects in a user-friendly way.

Minikube Dashboard

Go ahead and play around with the replicas to see what happens when you update the Deployment. After editing the Deployment file run:

$ kubectl apply -f user-deployment.yaml
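
As an aside, you can also change the replica count without editing the file at all by using kubectl's scale subcommand (the apply-based workflow above keeps your .yaml file as the source of truth, so prefer it outside of quick experiments):

$ kubectl scale deployment user-service-deployment --replicas=3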

NOTE: The user-service application uses a SQLite database stored inside the Docker container. Running more than one replica of it at one time will result in inconsistent responses, because the Kubernetes Service will route traffic between the replicas indiscriminately. This is obviously not a production-grade application and is useful for a proof of concept only.

Create a Service

Now that we have a Deployment we need a way for other apps inside the cluster to be able to call it. When a pod is created it is given an IP address, but pods are ephemeral. They can fail, and Kubernetes will spin up another one with a different IP address, so it would be impossible for any other application to know which IP to use. Kubernetes solves this problem with Services. A Service is an abstraction over the pods of one application. Using labels defined on the pods, it selects which pods are associated with its configured application and routes traffic to them accordingly. Other applications inside the cluster can call the application that a Service abstracts by using the Service name instead of an IP address.

Here is an example Service.yaml file for the user application.
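
A minimal version looks roughly like this (the Service port of 80 is an assumption; only the targetPort of 12000 is dictated by the application):

apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service
  ports:
    - protocol: TCP
      port: 80           # port the Service listens on inside the cluster (assumed)
      targetPort: 12000  # port the container actually exposes

Once this exists, other pods in the cluster can call the application at http://user-service instead of chasing pod IPs.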

Here we define the name of the Service in the metadata, the selector it uses to find the pods to route traffic to, and the port and targetPort that traffic is routed through. As the application exposes port 12000, we want the targetPort to point to that port; the port that the Service itself receives traffic on can be anything you like.

We will add this to the Minikube cluster in the same way we added the Deployment.

$ kubectl create -f user-service.yaml

You can then see the Service inside the dashboard or by running kubectl get services.

Create an Ingress

We now have a functioning application inside our cluster that can be accessed by other applications inside the cluster. For some applications this is all that needs to be done as there is no need for them to be accessible outside the cluster. But what if you want to expose one or many of your apps outside the cluster? Kubernetes offers a number of ways to do this, one of which is an Ingress. An Ingress allows you to run a software load balancer to route traffic to your different Services from outside the cluster.

To set up an Ingress we will first create some mandatory components. For an Ingress to work, at a minimum it needs an Ingress controller. The Ingress controller is responsible for fulfilling the requests that get sent to the Ingress; the Ingress definition that we will write further on only tells the controller how to route those requests. Kubernetes currently supports and maintains GCE and nginx Ingress controllers; however, there are a large number of community-maintained ones that can be used as well. I will show you how to set up an Ingress with an nginx controller, but for a list of some of the other possible controllers check out this link.

Kubernetes provides a definition for the mandatory components needed for setting up an nginx Ingress controller. To launch them in your cluster run the following command:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

Then we need to enable Ingress for Minikube by running this command:

$ minikube addons enable ingress
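
You can check that the controller started correctly by listing the pods in its namespace (ingress-nginx is the namespace the mandatory manifest creates; adjust if your setup places it elsewhere, such as kube-system):

$ kubectl get pods -n ingress-nginx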

Finally, we are ready to define our Ingress.
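
A sketch that matches the routing used in the rest of this article looks roughly like this (the servicePort of 80 assumes the Service sketch above, and depending on how the application's routes are defined you may also need an nginx rewrite annotation):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
spec:
  rules:
    - http:
        paths:
          - path: /user-service
            backend:
              serviceName: user-service
              servicePort: 80   # must match the port on the Service

Save it to a file, for example user-ingress.yaml, and create it the same way as the other objects:

$ kubectl create -f user-ingress.yaml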

In the Ingress we define a service name and port to send the traffic that matches the path to. Once it is created, the user-service application is exposed outside the cluster. To get the IP address for the Ingress, you can go to the dashboard and select Ingress from the side bar, or run

$ kubectl get ingress.extensions/ingress-nginx
NAME            HOSTS   ADDRESS        PORTS   AGE
ingress-nginx   *       192.168.64.6   80      1m

Using cURL you can test one of the endpoints.

$ curl -kLX POST \
    http://192.168.64.6/user-service/user \
    -d '{
      "name": "FirstName LastName",
      "email": "email@example.com",
      "phone": "1231231234"
    }'

For information about the API and the endpoints it offers, see the README.

Now You Try

At this point you should have a fully functional application running on your Minikube Kubernetes cluster. It can be accessed inside the cluster using the user-service host and outside the cluster by routing through the Ingress. I have created another application that you can try to set up on your own using a similar process. You will need to modify the existing Ingress to expose it, and create a new Service and a new Deployment. The Docker image is located at davidseyboldblog/todo-service:latest and is a simple API that lets you manage a todo list for a user. When you are done, you can compare what you have to the finished files in this repo.
