From Zero to Kubernetes: The fast track

Pavel Mička
Published in Elevēo-Techblog
Mar 25, 2019 · 10 min read

Historically, Enterprise Java development was known (and feared) for its steep learning curve. Dozens of lines of XML were needed just to deploy a simple application or to configure an application server. With the rise of DevOps, this hassle is merely the beginning of a long and painful process: the developer (or DevOps engineer, if you wish) is responsible not only for configuration but also for running the application. This has historically required either the creation of curator scripts that ensure the application is running (and restart it when it is not), or manual intervention in case of application failure.

Modern trends to the rescue

Fortunately for us, the times of laborious configuration are over. Now it is possible to utilize tools which automate most tasks that were previously time-intensive or required active monitoring. In this tutorial we demonstrate how to create a simple application in Java (Spring Boot), containerize it, and deploy it in a Kubernetes cluster. Our application, even though simple in functionality, will have resiliency built in and will recover automatically in the event of failure. Based on the example provided today, the whole application can easily be created and deployed into a testing cluster in under 30 minutes, without much configuration.

About this tutorial

This tutorial is intended for Java programmers with basic knowledge of server-side development who are interested in running their code in a Kubernetes cluster. Even though we will use Gradle for building the project, knowledge of this build system is not required; a developer with Maven-only experience will find the build code very familiar.

Prerequisites

Before we start, there are some prerequisites that you will need in your toolbox in addition to your favorite IDE and JDK. Those are:

  • Docker
  • Gradle
  • Helm
  • Kubernetes

We will be using these dependencies when building our solution (Gradle) and during the testing of our deployment (Docker, Helm, Kubernetes). If you already have these, skip ahead to the section The Application.

Docker & Kubernetes

If you are running Mac or Windows, the most convenient way to get both Docker and Kubernetes is to install Docker Desktop.

https://www.docker.com will force you to register and log in in order to get Docker Desktop. To bypass this requirement, see this GitHub issue: https://github.com/docker/docker.github.io/issues/6910.

Once installed, enable Kubernetes in the Settings of Docker Desktop:

Linux

For Linux users the installation process will be a bit more painful — you will need to install Docker CE using the package manager of your distribution. For Kubernetes experiments try Minikube.

Helm

Helm is a package manager for Kubernetes. It is to Kubernetes what yum is to CentOS or apt-get is to Debian. We will be creating a Helm chart in this tutorial, which is essentially a package/deployment descriptor containing information about which container should be installed into the Kubernetes cluster, how many instances should be running, which ports are to be exposed, and so on.

Helm logo

The installation process of Helm binary depends on your system, but generally there are two ways: use a package manager or download the binary manually (and link it to your path). Both approaches are described in these docs.

Gradle

We will use Gradle as our build tool, or more precisely the Gradle Wrapper, a simple script that downloads the correct Gradle version for us during the build.

The Application

Now that we have our toolbox ready, we can proceed with the application itself. As this is an entry level article, we will not do anything fancy; a simple Hello World will be enough. As we are on the fast track, we will use Spring Initializr, which will generate the scaffold for us. On the web page choose Gradle project and Java 11, and select Spring Web and Spring Boot Actuator as dependencies. Then download the project.

If your IDE (such as IntelliJ Idea) supports Initializr out-of-the-box, feel free to use this feature.

Spring Initializr

Once you open the project in your IDE, your build.gradle file should appear as follows:
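For reference, a freshly generated build.gradle looks roughly like this; the exact plugin versions, group, and artifact name depend on when you generate the project and what you enter into Initializr (the names below are illustrative):

```groovy
plugins {
    // Spring Boot plugin provides bootRun, bootJar etc.
    id 'org.springframework.boot' version '2.1.3.RELEASE'
    id 'java'
}

apply plugin: 'io.spring.dependency-management'

group = 'com.example'
version = '1.0.0'
sourceCompatibility = '11'

repositories {
    mavenCentral()
}

dependencies {
    // Web starter: embedded Tomcat + Spring MVC
    implementation 'org.springframework.boot:spring-boot-starter-web'
    // Actuator: operational endpoints such as /actuator/health
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
}
```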

And the application class Initializr created for us:
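It is a plain Spring Boot entry point; the package and class name (assumed here) will match whatever you entered into Initializr:

```java
package com.example.helloworldkubernetes;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Enables component scanning and auto-configuration
@SpringBootApplication
public class HelloWorldKubernetesApplication {

    public static void main(String[] args) {
        SpringApplication.run(HelloWorldKubernetesApplication.class, args);
    }
}
```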

Let's extend it a bit, to serve the /greeting endpoint:
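One way to do that (class name and message are illustrative) is to add a simple @RestController:

```java
package com.example.helloworldkubernetes;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

    // Responds to GET /greeting with a plain-text message
    @GetMapping("/greeting")
    public String greeting() {
        return "Hello, Kubernetes!";
    }
}
```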

And that’s it.

Now when we run the application — either directly from the IDE, using java -jar build/libs/hello-world-kubernetes-1.0.0.jar, or via ./gradlew bootRun — it will start on port 8080 (the default port in Spring Boot). When we access (GET) localhost:8080/greeting, we will retrieve our message. When we access (GET) localhost:8080/actuator/health, we will retrieve the following message with status 200:
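The default Actuator health response is minimal:

```json
{"status":"UP"}
```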

This health endpoint is important for us, as we will use it as a liveness probe in Kubernetes. In simple words: if the application is unhealthy (the endpoint is not accessible or the status code is outside the range 200 to 399), Kubernetes will kill the container and spin up a new instance.

Containerization

As you may have already noticed, the application is not containerized by default. To do so, we need to alter our build.gradle a bit (see the docker section of the code listing):
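Assuming the com.bmuschko.docker-spring-boot-application plugin (which provides the dockerBuildImage task used below), the additions look roughly like this; the plugin version, base image name, and exact DSL are assumptions and vary between plugin versions:

```groovy
plugins {
    // ...plugins generated by Initializr stay as they were...
    id 'com.bmuschko.docker-spring-boot-application' version '4.5.0'
}

docker {
    springBootApplication {
        // Slim JDK 11 image on Alpine Linux keeps the image small
        baseImage = 'adoptopenjdk/openjdk11:alpine-slim'
        // Project name + version, e.g. hello-world-kubernetes:1.0.0
        tag = "hello-world-kubernetes:${project.version}"
    }
}
```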

In the plugins section we have added the docker-spring-boot-application plugin. The other change is the addition of the docker section (extension), in which we have specified the baseImage and a tag. In this case the baseImage is OpenJDK 11 slim with Alpine Linux ("slim" means that parts of the JDK distribution that are generally not necessary for cloud deployment are removed in order to make the image smaller). The tag, a user-friendly name of the image, is composed of the name of our project and its version.

Now we can execute ./gradlew dockerBuildImage.

Once finished, we can list the locally present images by executing docker images -a and start the container using docker run -p 8080:8080 hello-world-kubernetes:1.0.0 (-p 8080:8080 maps the inner port 8080 to port 8080 on our localhost). Once we verify that everything works as expected, we can list the running containers using docker ps and stop it using docker kill {containerId}.

Behind the scenes

The plugin did some magic for us, so let's dive into what happened behind the scenes. If we take a look into the build/docker directory that the plugin generated for us, we will find a file named Dockerfile and a couple of directories: classes, libs, and resources. These directories contain exactly what you would expect based on their names: our application classes, jar dependencies and static resources. The important part for us is the Dockerfile, which tells the Docker binary how to construct our container:
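The generated Dockerfile is short; it looks roughly like this (the exact base image and main class name depend on your build configuration and are assumed here):

```dockerfile
FROM adoptopenjdk/openjdk11:alpine-slim
# Run all subsequent commands relative to /app
WORKDIR /app
# Copy dependencies, static resources and compiled classes as separate layers
COPY libs libs/
COPY resources resources/
COPY classes classes/
# Command executed when the container starts
ENTRYPOINT ["java", "-cp", "/app/resources:/app/classes:/app/libs/*", "com.example.helloworldkubernetes.HelloWorldKubernetesApplication"]
# Document the port our Spring Boot application listens on
EXPOSE 8080
```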

As you can see from the code, first we declare that we are extending the base image (OpenJDK 11). The WORKDIR instruction tells Docker that we want to execute all commands relative to the /app directory in the container. Then there are instructions to copy all the libs, resources and classes into the /app directory inside the container. Perhaps the most important instruction, ENTRYPOINT, tells Docker which command it should execute in the container once it is started. And lastly, we tell Docker that the container should expose port 8080.

You may have noticed that the plugin uses an expanded version of our project (libs, resources, classes), while our original approach was to use a fat jar (everything bundled into a single jar file). This is intentional, as Docker is able to share layers across multiple images, where a layer roughly corresponds to an instruction in the Dockerfile.

This behavior is useful during development because, generally speaking, we do not change the libraries as often as we change our application code. The layering mechanism makes sure that all of our images share common libraries and resources, hence they only need to be stored on disk once. This saves a significant amount of resources, as the dependencies of a project tend to be quite bulky.

Helm

Now that we have created a Docker container and uncovered how it is built, our last step is to make the application executable on a Kubernetes cluster. Kubernetes itself is declarative, which means that we only need to specify the qualities of our deployment: run 1 instance, require this amount of resources (CPU, RAM), use rolling deployment, expose this port, and so on. Once the application is deployed, Kubernetes will make sure that it always runs in accordance with these requirements.

A deployment descriptor containing the requirements for our particular application can be written as a Helm chart. To make things easy, we will use the scaffolding command helm create hello-world-kubernetes-chart. Now we must customize the scaffold to support our application. A good place to start is the values.yaml file, which contains variables that we can tweak.
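The command generates a standard chart skeleton; at the time of writing it looked approximately like this:

```
hello-world-kubernetes-chart/
├── Chart.yaml          # chart metadata (name, version)
├── values.yaml         # tweakable variables
├── charts/             # subchart dependencies (empty for now)
└── templates/
    ├── NOTES.txt       # usage notes printed after install
    ├── _helpers.tpl    # template helpers
    ├── deployment.yaml # how to run the container
    ├── service.yaml    # internal load balancer
    └── ingress.yaml    # external routing (disabled by default)
```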

values.yaml

First we will notice the replicaCount variable, which is set to 1. This means that Kubernetes will always ensure that one instance of our application is running; if we intentionally kill the instance, Kubernetes will automatically spin up a new one. For this demonstration we will leave the value at 1. The thing we do want to change is the image, as it defaults to installing an nginx web server. To prevent this, we alter the repository and tag in the following way:
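With our image name and tag filled in, the top of values.yaml becomes roughly the following (the scaffold's default pullPolicy of IfNotPresent is what we want for a locally built image, since there is no remote registry to pull from):

```yaml
replicaCount: 1

image:
  repository: hello-world-kubernetes
  tag: 1.0.0
  pullPolicy: IfNotPresent
```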

Rest of the values.yaml file

We will not alter anything else in the file, but let's look at the purpose of some of the other variables, just to be clear about the options available.

  • The service section (line 15) is closely related to replicaCount and to the high availability of our application. We can view it as an internal load balancer with a DNS name managed by Kubernetes. By accessing our application through this DNS name, we abstract ourselves from the number of instances running; Kubernetes will make sure that each request is routed to some healthy instance. And if an instance fails, it will be evicted from the routing table automatically.
  • Ingress routes external traffic into our cluster. Simply put, its role is to act as a reverse proxy. We will not use it for our testing service, so we keep it disabled.
  • Resources are a thing to keep in mind, as we can use them to instruct Kubernetes to limit the maximum amount of RAM and CPU available to the application (useful under memory pressure), or to schedule (install) our application only on a node that has a defined amount of memory available (and reserve it).
  • NodeSelector, tolerations and affinity are used to schedule (or not schedule) the application on some particular node. This is useful when you have specialized hardware and want to make sure that an application uses it.

deployment.yaml

The second file we will change is deployment.yaml, which describes how Kubernetes should handle the installation of our application. The lines we are most interested in are the following:

Specifically, we want to change containerPort to 8080, as that is the port of our Spring Boot application. Next, we need to change the path of the livenessProbe and readinessProbe to /actuator/health. Liveness means that the application is running, but it may not yet be in a state to accept requests (for example, it is still warming up caches). The readiness probe indicates that the application is ready to accept requests. This should be the end result:
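The container section of templates/deployment.yaml should then look roughly like this (surrounding template boilerplate omitted):

```yaml
ports:
  - name: http
    containerPort: 8080   # the port our Spring Boot app listens on
    protocol: TCP
livenessProbe:
  httpGet:
    path: /actuator/health   # restart the container if this fails
    port: http
readinessProbe:
  httpGet:
    path: /actuator/health   # route traffic only when this succeeds
    port: http
```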

NOTES.txt

The last section is purely aesthetic. The NOTES.txt file contains instructions that will be printed out once the chart is installed, including how to set up port forwarding. By default the chart expects that our container exposes port 80, but in our case we expose port 8080, so let's fix it:

Original NOTES.txt content
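From memory, the relevant part of the scaffolded NOTES.txt looks roughly like this (the label selector is an assumption); note the hard-coded port 80 on the last line:

```
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods -l "app.kubernetes.io/name=hello-world-kubernetes-chart" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80
```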

The proper command (on line 4) is kubectl port-forward $POD_NAME 8080:8080.

Running the application

Now that all configuration steps have been completed, it is time for our moment of truth! Let's run helm install hello-world-kubernetes-chart. We can immediately harvest the fruits of our labor, as the correct notes are (hopefully) printed out:

We will follow these instructions to port-forward traffic to our instance. Our application is now containerized and running inside Kubernetes and the endpoint on localhost:8080/greeting returns the expected result.
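A sample session might look like the following (the pod name is illustrative, and the exact output format depends on your kubectl version):

```
$ kubectl get pods
NAME                                            READY   STATUS    RESTARTS   AGE
hello-world-kubernetes-chart-6f7d8c9b4d-x2lkq   1/1     Running   0          1m

$ kubectl port-forward hello-world-kubernetes-chart-6f7d8c9b4d-x2lkq 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080

# in another terminal
$ curl localhost:8080/greeting
```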

For inquisitive readers

Helm

Two Helm commands to start with are helm list (to see our Helm deployments) and helm delete {name} (to get rid of those we no longer need). There are plenty of stable Helm charts available for popular applications in this repo. These stable charts cover the whole spectrum of applications, from infrastructure components such as Redis, PostgreSQL and RabbitMQ to top-level applications like GitLab or WordPress. All of this software can be installed simply by running helm install {chart_name}.

Application failure resiliency

If you wonder about the declarative properties of Kubernetes, the easiest way to verify them is to delete the pod (the container running our application). First, list the available pods using the command kubectl get pods. Then execute kubectl delete pods {pod_name}. Try to list the pods again; you will see that Kubernetes reacted immediately by spawning a new instance. This behavior is driven by the deployment configuration (see our Helm chart above), which states that one replica must always be running.
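The experiment looks roughly like this (pod names are illustrative):

```
$ kubectl get pods
NAME                                            READY   STATUS    RESTARTS   AGE
hello-world-kubernetes-chart-6f7d8c9b4d-x2lkq   1/1     Running   0          5m

$ kubectl delete pods hello-world-kubernetes-chart-6f7d8c9b4d-x2lkq
pod "hello-world-kubernetes-chart-6f7d8c9b4d-x2lkq" deleted

$ kubectl get pods
NAME                                            READY   STATUS              RESTARTS   AGE
hello-world-kubernetes-chart-6f7d8c9b4d-9qwzp   0/1     ContainerCreating   0          2s
```

Note that the replacement pod has a different generated suffix: it is a brand new instance, created by the deployment to satisfy replicaCount: 1.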

Conclusion

In this tutorial we have demonstrated how to easily containerize a Spring Boot application and how to run it on a Kubernetes cluster. Furthermore, we are now familiarized with basic usage of Kubernetes, which — together with a convenient Helm Chart — provides us simple access to advanced deployment features such as automatic recovery, replication, load balancing or resource management. Isn’t it amazing how all of this can be achieved using only a few lines of code?

Sources

Source code for this tutorial can be found in this GitHub repository.
