Deploy a Spring Boot Application Into Kubernetes
Hello folks,
In this article, we will deploy a simple Spring Boot-based application inside a K8s cluster.
To get the most out of this article, you should have a basic understanding of the following:
- Docker, because we will be using it as the runtime to containerize the app.
- A K8s cluster (standalone or Minikube) running on your local machine, or one from a cloud provider such as https://www.linode.com/.
- A basic understanding of Java and Spring Boot.
Prepare the simple app:
First things first, let's prepare the application. We will expose a greeting endpoint that we can later consume by calling the path /hello.

To keep it simple, I implemented the endpoint directly in the entry point of the application, which is not a good practice: it breaks separation of concerns and the Single Responsibility Principle (yes, you're right, one of the SOLID principles).
So let's move to the next step.
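The application code itself isn't reproduced here; a minimal sketch of such an endpoint, assuming standard Spring Web annotations and a hypothetical package and class name, could look like this:

```java
package com.example.demo; // hypothetical package name

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// The endpoint lives directly in the entry-point class, as mentioned above;
// convenient for a demo, but it violates separation of concerns.
@SpringBootApplication
@RestController
public class DemoSimpleApiApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoSimpleApiApplication.class, args);
    }

    // GET /hello returns a plain-text greeting.
    @GetMapping("/hello")
    public String hello() {
        return "Hello from the simple app!";
    }
}
```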
Prepare the image specification:
To keep it easy, I didn't do a multi-stage Docker build, as the aim of this article isn't Docker but Kubernetes. Below is the image specification (aka the Dockerfile).

Since I'm using Java 11, I based my app on the liberica-openjdk-debian image, which has more than 500K downloads on Docker Hub.
I'm exposing my application on port 8090, copying the jar file that we got after the build to app.jar in the container, and running it with the java -jar command.
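The Dockerfile itself isn't shown here; based on the description above (Liberica OpenJDK 11 base image, port 8090, app.jar), it would look roughly like this. The exact jar filename under target/ is an assumption:

```dockerfile
# Base image: Liberica OpenJDK (Debian variant), Java 11
FROM bellsoft/liberica-openjdk-debian:11

# Document the port the app listens on
EXPOSE 8090

# Copy the jar produced by the build (filename is an assumption)
COPY target/demo-simple-api-0.0.1-SNAPSHOT.jar app.jar

# Run the application
ENTRYPOINT ["java", "-jar", "/app.jar"]
```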
Build and tag the image:
From the same directory as the Dockerfile, use the command below to build and tag the image:
docker build -t simple-app:1 .
Make sure that your image has been built successfully:

A good practice is to run the container and check that the application is running successfully inside it before taking a step further and deploying it to a K8s cluster (yes, the fail-safe approach), so let's do it.
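For example, mapping the container's port 8090 to the same port on the host:

```shell
# Run the image we just built, publishing the app's port to the host
docker run -p 8090:8090 simple-app:1
```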

Here we go, the application is running correctly inside the container. Now let's consume our greeting endpoint:
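Assuming the port mapping above, a simple curl call does the job:

```shell
# Call the greeting endpoint on the mapped host port
curl http://localhost:8090/hello
```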

At this point we are sure that our image is ready to be deployed by an orchestrator, in our case K8s.
Prepare the deployment specification:
The best way to create a deployment in K8s is to prepare a YAML file that describes the desired state in which our application should be running.
Let's have a look at it:
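The original YAML isn't reproduced here; a minimal Deployment matching the fields explained below could look like this. The names, label values, and replica count are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-app           # name of the deployment (assumed)
  labels:
    app: simple-app          # label used to match the pods
spec:
  replicas: 2                # run two pods of the same application
  selector:
    matchLabels:
      app: simple-app
  template:
    metadata:
      labels:
        app: simple-app
    spec:
      containers:
        - name: simple-app
          image: simple-app:1      # the image we built and tagged earlier
          ports:
            - containerPort: 8090  # the port exposed in the Dockerfile
```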

apiVersion: the version of the K8s API that we will be consuming to create our deployment.
kind: the kind of K8s object that this specification describes.
metadata: information about the app, such as its name and labels (very important information that we will be exploring later on).
replicas: how many pods we need to run for the same application.
containers: the container's specification, such as the name, the image, and the exposed port.
Run the deployment:
To create a deployment inside a K8s cluster, use the command below:
kubectl create -f k8s-deployment.yaml
This command makes a POST request to the API server to create the deployment with the desired state that we specified in the YAML file.
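You can then verify that the deployment and its replicas are up, for example with:

```shell
# Show the deployment and how many of its replicas are ready
kubectl get deployments
```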


Here we go, our deployment was created successfully.
Check that the application is running correctly inside the pods:
First, we need to fetch the names of our running pods, which we can do using:
kubectl get pods
Second, use a pod's identifier to fetch its logs using:
kubectl logs ${POD_ID}

Now our application is running smoothly inside the K8s cluster, but we have a challenge: how can we consume our endpoint from two running instances of the same application with different IP addresses? Yes, you are right: using a load balancer.
In the next article, I will talk about how a K8s Service plays the role of a load balancer and the mechanism used to discover instances of the same application running inside a K8s cluster.
Here’s the Github repository : https://github.com/khairaneMurad/demo-simple-api
I hope you enjoyed reading this article and stay tuned for the next one !