Debugging your Spring Boot application inside your Kubernetes cluster

In my previous article, I described the initial steps in creating a Spring Boot prototype that can be used with a Kubernetes cluster. In this article I am extending this prototype to allow it to be run and debugged inside a local Kind Kubernetes cluster.

Martin Hodges
12 min read · Apr 19, 2024
(Figure: scenarios for using the template)

The first article set up the prototype to work standalone (#1 above) and connected to a database hosted inside our Kubernetes cluster (#2). In this article, we update the prototype to work inside Kubernetes (#3) in a way that allows us to debug our application in our IDE.

I have assumed that you have followed the previous steps and now have the Spring Boot application running under both the standalone and connected profiles. This also means you have a local Kind Kubernetes cluster set up with Grafana monitoring, a Postgres database and a Vault secrets manager.

You can find the code and configuration files for this article in GitHub.

What next?

In this article we are going to set up the application so it can be run and debugged within your local Kind Kubernetes cluster.

As well as enabling remote debugging, we also fix the database credentials by using static credentials from Vault. This avoids problems caused by password rotation during long debug sessions while still introducing the Vault integration.

Port mapping

In this article we will use all of the ports we have exposed in our Kind cluster configuration, as shown in the diagram above.
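If you followed the earlier articles, your Kind cluster configuration will already contain host port mappings along these lines (a sketch only; your file and the exact ports, particularly the Grafana port, may differ):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30000   # application NodePort
        hostPort: 30000
      - containerPort: 30500   # remote debug NodePort
        hostPort: 30500
      - containerPort: 31300   # Grafana NodePort
        hostPort: 31300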

k8s-debug profile

We now add this profile to our prototype. In this profile properties file (application-k8s-debug.yml), we move our application into the cluster itself. As we will be debugging our application within the cluster, we will:

  1. Connect to our cluster database via an internal service address
  2. Obtain the database passwords from Vault as static secrets
  3. Expose the debug port to our JVM in the cluster
  4. Enable logging for use with Grafana and Loki

Database connection details

First, we need to change the datasource to access the internal service exposed by our Postgres database cluster (db-cluster-rw.pg.svc). There are a few things to note about this:

  1. We do not specify the full cluster domain (for example, .cluster.local), allowing the same address to be used in a cluster with a different domain
  2. We use the -rw service as this represents the instance that accepts read and write commands
  3. We refer to pg as the namespace the database is deployed in
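Before moving on, you can check that this service exists (assuming the database cluster was created as db-cluster in the pg namespace, as in the previous article):

kubectl get svc db-cluster-rw -n pg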

Database credentials

As we are still in active development with the k8s-debug profile, we will simply use a key-value pair secret to supply our application with a fixed database username (app-user) and password (app-secret). This avoids any problem of rotations during debugging. We will get these from Vault to include that integration.

In our properties file, we will take the username and password from the environment variables STATIC_DB_USERNAME and STATIC_DB_PASSWORD. Here is the snippet:

src/main/resources/application-k8s-debug.yml

spring:
  datasource:
    url: jdbc:postgresql://db-cluster-rw.pg.svc:5432/myapp?currentSchema=myapp
    username: ${STATIC_DB_USERNAME}
    password: ${STATIC_DB_PASSWORD}
...

We now have to set these environment variables in our deployment file. We will do this from the static-db-credentials Kubernetes secret in the default namespace. Here is the snippet from the file:

k8s/k8s-debug-deployment.yml

...
env:
  ...
  - name: STATIC_DB_USERNAME
    valueFrom:
      secretKeyRef:
        name: static-db-credentials
        key: username
  - name: STATIC_DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: static-db-credentials
        key: password

Creating the secret from Vault

I will now demonstrate how we can pull a secret from Vault and add it into our cluster as a Kubernetes secret. Whilst we are doing this here for our database credentials, it is a useful technique for anything else you might want to inject from Vault into a Spring Boot application.

First we will create a secret in Vault as a key-value pair secret. Remember that the Vault KV engine allows multiple properties to be stored within a single KV secret; in our case, the username and the password.

Get a command line within the vault-0 pod, login to Vault, enable a key-value secrets engine and then create the secret:

kubectl exec -it vault-0 -n vault -- sh
vault login <root token>
vault secrets enable -path=spring-boot-k8s-template kv-v2
vault kv put -mount=spring-boot-k8s-template db username=app-user password=app-secret
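Still inside the vault-0 shell, you can read the secret back to confirm it was stored correctly:

vault kv get -mount=spring-boot-k8s-template db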

The secrets engine we are using is KV version 2 (kv-v2), which provides versioned secrets. We have mounted it at spring-boot-k8s-template.

Now we have our secret at spring-boot-k8s-template/db with fields of username and password.

Next, we need to extract these into a Kubernetes Secret. We will do this using ExternalSecret resources, with the External Secrets Operator (ESO) managing a SecretStore. This store fetches a secret from Vault and injects it into our cluster as a Kubernetes Secret. From there we can use the secret in any way a Kubernetes Secret can be used. For example, we can inject it into our containers as environment variables.

The ESO we will use is deployed via Helm. Let’s add the required repository to our local Helm:

helm repo add external-secrets https://charts.external-secrets.io
helm repo update

We will install the ESO into its own namespace which we will create now:

kubectl create namespace eso
helm install external-secrets external-secrets/external-secrets -n eso
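Check that the operator pods have started before continuing:

kubectl get pods -n eso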

We now create a SecretStore, which represents a supplier of external secrets, ie: our Vault. We do that with the following file:

k8s/secret-store.yml

apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-backend
  namespace: default
spec:
  provider:
    vault:
      server: "http://vault-active.vault:8200"
      path: "spring-boot-k8s-template"
      version: "v2"
      auth:
        # points to a secret that contains a vault token
        tokenSecretRef:
          name: "vault-token"
          key: "token"

Note that we have deployed our SecretStore to the default namespace, which is where our Spring Boot prototype will be run. If you intend to run your application in a different namespace you will need to change it here too.

Also note that we have specified our mount point as path: "spring-boot-k8s-template" to match the name we used earlier when enabling our secrets engine.

Before we can use the SecretStore, we have to set up the token it will use (ie: the token key within the vault-token secret).

To create a static token for this profile, get a command line on the Vault pod and use a root access token to create the new token.

kubectl exec -it vault-0 -n vault -- sh
vault token create -period 0

Having logged in with a root token, the new token will be created and displayed in a result that looks something like:

Key                  Value
---                  -----
token                hvs.Lw3Kv46jTCT2OMXFXjHWettD
token_accessor       phSvwm5lrGqQvclXS5qKdC1H
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]

This token can then be saved as a Kubernetes secret (replace the < > fields with the appropriate value):

kubectl create secret generic vault-token --from-literal=token='<token from above>'

Note that Kubernetes takes care of the base64 encoding when creating the secret with --from-literal.

Now create the SecretStore with:

kubectl apply -f k8s/secret-store.yml

And check it has started:

kubectl get secretstore
NAME            AGE   STATUS   CAPABILITIES   READY
vault-backend 2s Valid ReadWrite True

Note that the status is Valid (it could log in to Vault) and that ready is True.

Now we create an ExternalSecret that builds the static-db-credentials secret (also in the default namespace), containing our username and password from Vault. Create the following file:

k8s/external-secrets.yml

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: myapp-db-username
  namespace: default
spec:
  refreshInterval: "15s"
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: static-db-credentials
    creationPolicy: Owner
  data:
    - secretKey: username
      remoteRef:
        key: spring-boot-k8s-template/db
        property: username
    - secretKey: password
      remoteRef:
        key: spring-boot-k8s-template/db
        property: password

And apply this:

kubectl apply -f k8s/external-secrets.yml

And now check that the secret has been created:

kubectl get secret static-db-credentials -o jsonpath={.data}

The result should be a base64 encoded username (app-user) and password (app-secret) for the database.
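If you want to see the decoded values, you can pipe each field through base64:

kubectl get secret static-db-credentials -o jsonpath={.data.username} | base64 -d; echo
kubectl get secret static-db-credentials -o jsonpath={.data.password} | base64 -d; echo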

Creating our image file

To run our application in our cluster, we need to create a Docker image.

To do that, we need an executable JAR file.

We include the following snippet in our build file:

build.gradle

jar {
    manifest {
        attributes "Main-Class": "com.requillion_solutions.sb_k8s_template.MainApplication"
    }
}

This now allows us to build the jar file through our IDE. In IntelliJ, if you do not have the Gradle tool icon shown in your project, you can open it with:

View -> Tool Windows -> Gradle

From there, expand Tasks and then expand build. You will then see a list of options; double-click the build option.

This will create an executable JAR file in the build/libs folder within your project folder.
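If you prefer the command line, and your project includes the Gradle wrapper, the same result can be achieved from the project folder with:

./gradlew clean build

Either way, the JAR ends up in build/libs, which is where the Dockerfile below expects to find it.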

The next step is to create a Docker image from the JAR file. For this we need a Dockerfile that will define what will go in our image. As we want to remote debug this version of our template, we create an image specifically for this profile:

Docker/Dockerfile.k8s.debug

FROM openjdk:17.0.2-slim-buster
RUN addgroup --system spring && useradd --system -g spring spring
USER spring:spring
ARG JAR_FILE=build/libs/*.jar
COPY ${JAR_FILE} app.jar
EXPOSE 8000 8080 8081
ENTRYPOINT ["java","-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000","-jar","/app.jar"]

A few things to note:

  • Running it on a slim-buster image allows us to get a command line within the pod and use some of the tools we are used to
  • To avoid running as root, we create a new user (spring)
  • We copy *.jar so we do not depend on the version information in the JAR filename
  • We enable remote debugging (-agentlib…)

We do not set the Spring profile here as it is better to set it through an environment variable in the deployment file.

Here is the snippet to set the Spring profile in the Kubernetes deployment manifest:

k8s/k8s-debug-deployment.yml

...
env:
  # Note that the following environment variable is converted to a
  # property override called spring.profiles.active when read by Spring
  - name: SPRING_PROFILES_ACTIVE
    value: k8s-debug
...

Within the project folder, build the Docker image with:

docker build -t sb-k8s-template:01 -f Docker/Dockerfile.k8s.debug .
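You can confirm the image was built with:

docker image ls sb-k8s-template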

Whilst these are manual tasks, they can be automated by your CI/CD pipeline for deployment into a non-local environment.

You are now ready for the next step. This involves loading the image into your cluster and then deploying it.

If you are using the Kind cluster I mentioned earlier, you can load it onto the nodes without the need to push it to a repository. Do this with:

kind load docker-image sb-k8s-template:01
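If you gave your Kind cluster a non-default name, tell kind which cluster to load the image into:

kind load docker-image sb-k8s-template:01 --name <your cluster name>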

We are now ready for the last step where we deploy a pod with this image to our cluster and add NodePort services to access it.

Deployment file

This is the last step in deploying and running our application in our Kubernetes cluster. So, let’s bring it all together in our deployment file:

k8s/k8s-debug-deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sb-k8s-template
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sb-k8s-template
  template:
    metadata:
      labels:
        app: sb-k8s-template
    spec:
      containers:
        - name: sb-k8s-template
          image: sb-k8s-template:01
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            # Note that the following environment variable is converted to a
            # property override called spring.profiles.active when read by Spring
            - name: SPRING_PROFILES_ACTIVE
              value: k8s-debug
            - name: STATIC_DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: static-db-credentials
                  key: username
            - name: STATIC_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: static-db-credentials
                  key: password
---
apiVersion: v1
kind: Service
metadata:
  name: sb-k8s-svc
  namespace: default
spec:
  selector:
    app: sb-k8s-template
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30000
---
apiVersion: v1
kind: Service
metadata:
  name: sb-k8s-debug-svc
  namespace: default
spec:
  selector:
    app: sb-k8s-template
  type: NodePort
  ports:
    - port: 8000
      targetPort: 8000
      nodePort: 30500

Things to understand about this deployment file:

  • It deploys to the default namespace
  • It only requests 1 replica
  • It pulls the Docker image we created above (sb-k8s-template:01)
  • It only pulls the image if it is not already present on the node (imagePullPolicy: IfNotPresent), as you should have loaded it directly
  • It uses the environment variables to set up the database username and password
  • It also deploys two NodePort services, exposing our application on port 30000 and our debug port on 30500

Now deploy it:

kubectl apply -f k8s/k8s-debug-deployment.yml

Check the pod starts up correctly with:

kubectl get pods

You should see something like:

NAME                               READY   STATUS    RESTARTS   AGE
sb-k8s-template-7c8fd874dc-p5r4n   1/1     Running   0          9m55s
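You can also check the application log to confirm that Spring Boot started with the k8s-debug profile and that the JDWP agent is listening on port 8000:

kubectl logs deployment/sb-k8s-template

Look for the line Listening for transport dt_socket at address: 8000 near the top of the log.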

Congratulations! You now have your Spring Boot application running inside your local Kubernetes cluster, connected to your Postgres database.

Logging

Before we leave this article, there are two more things to look at: remote debugging and logging.

Let’s look at logging first.

If you have been following along, you should have Loki and Grafana installed in your cluster. Because of the way the Promtail log collectors work at the Node level, no further configuration is required.

Let’s check if they now see your Spring Boot application.

To access the Grafana console, go to:

http://localhost:31300

You should be taken to a login screen. You may remember that to get the login credentials, you need to retrieve them from the Kubernetes secrets with:

kubectl get secret loki-grafana -n monitoring -o jsonpath={.data.admin-user} | base64 -d; echo
kubectl get secret loki-grafana -n monitoring -o jsonpath={.data.admin-password} | base64 -d; echo

Login with the username and password and go to Explore under the main menu. Select the source to be Loki.

Add in a filter that looks for an app label of sb-k8s-template. If you know your way around Grafana, you could use this filter code:

{app="sb-k8s-template"} |= ``

Click Run Query and you should see your Spring Boot logs appear!

If you fetch all fishes with:

curl localhost:30000/api/v1/fishes

You should see the request logged if you refresh the Grafana query. You will also see the SQL executed by Hibernate.

Whilst Grafana does not have the equivalent of tail, you can request that it refreshes its query every 5 or 10 seconds.

Remote Debugging

The last part of this profile is to debug your application whilst it is running in the cluster. I use IntelliJ and will provide the instructions here for that IDE. Eclipse is also capable of remote debugging.

We have already enabled remote debugging on the JVM with the additional parameters in the Dockerfile ENTRYPOINT line. We have also exposed the debug port with a NodePort service in our deployment manifest. That port is then passed through to our development machine through the Kind configuration.

This gives us the way into our application.

Note that this should never be done in a production environment. All the remote debugging configuration should be removed.

I am assuming that you have created a project in IntelliJ based on this Spring Boot prototype that we are developing.

Go to the main menu Run -> Edit Configurations….

In the Run/Debug Configurations dialogue box that pops up, click the + in the top left to add a new configuration. Select the Remote JVM Debug option.

Give this configuration a Name, such as sb-k8s-template-remote. Select Attach to remote JVM as the Debugger mode. Enter a Host of localhost and a Port of 30500.

The dialogue shows suggested JVM command line arguments, but we can ignore these: our NodePort service translates the port, so those arguments would not be correct for our scenario (the correct arguments are already set in the Dockerfile).

Click OK.

Now we can debug our application running in the Kubernetes cluster. From the main menu, select Run -> Debug…. Click on the name you gave (eg: sb-k8s-template-remote) and the debugger should connect to your application and tell you that it has connected.

You are now able to place a breakpoint, say on the get-all-fishes controller endpoint, trigger a request and see the program stop at that breakpoint.

Congratulations, you are now debugging your Spring Boot application inside your local Kubernetes cluster!

Trying it out

The simple application provided by the prototype has a number of endpoints that you can exercise with curl or a tool such as Postman (a curl example follows the list). These endpoints are:

  • GET localhost:30000/api/v1/fishes … get all fishes
  • GET localhost:30000/api/v1/fishes/{fishID} … get specified fish
  • POST localhost:30000/api/v1/fishes … create fish
  • GET localhost:30000/api/v1/fish-tanks … get all fish tanks
  • GET localhost:30000/api/v1/fish-tanks/{fishTankID} … get specified fish tank
  • POST localhost:30000/api/v1/fish-tanks … create fish tank
  • PUT localhost:30000/api/v1/fish-tanks/{fishTankID}/fishes/{fishID} … put specified fish in the specified fish tank
  • DELETE localhost:30000/api/v1/fish-tanks/{fishTankID}/fishes/{fishID} … remove the specified fish from the specified fish tank
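As an example, here is how you might create a fish and then read it back with curl. The JSON body is purely illustrative; use whatever fields the DTOs in your prototype actually define:

curl -X POST localhost:30000/api/v1/fishes \
  -H "Content-Type: application/json" \
  -d '{"name": "Nemo"}'

curl localhost:30000/api/v1/fishes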

If you create these in Postman, it is worth creating environments that match your profiles and defining a variable called port in each environment. Then replace 30000 with {{port}} in the URLs above. Now you can use the same collections to test each profile by simply switching environments.

Remember, even though we are running this in our local Kubernetes cluster, it requires very little to be changed to run it in any cluster. We shall see this in my next article on preparing our application for production.

Summary

In this article, we extended the Spring Boot prototype that we started in the previous article. By adding in Docker and Kubernetes files, we were able to build a Docker image and deploy it to our Kubernetes cluster.

Once we had done this, we were able to monitor it using Grafana and Loki and we could also debug it remotely whilst it ran in our cluster.

We also used Vault to provide secrets to our application, which we used to access our database.

This gets us to a point where we can start exploring more complex integrations within our Kubernetes cluster.

I hope you enjoyed this article and learned at least one thing from it.

If you found this article of interest, please give me a clap as that helps me identify what people find useful and what future articles I should write. If you have any suggestions, please add them as notes or responses.
