Spring Boot CI/CD on Kubernetes using Terraform, Ansible and GitHub: Part 8
Part 8: Adding a Spring Boot Application to your cluster
This is part of a series of articles that creates a project to implement automated provisioning of cloud infrastructure in order to deploy a Spring Boot application to a Kubernetes cluster using CI/CD. In this part we install the Spring Boot Application to the cluster.
Follow from the start — Introduction
In this article we are going to add the Spring Boot application you created in the last article to your Kubernetes cluster.
You can find the files described in this article in the k8s folder of this repository:
https://github.com/MartinHodges/Quick-Queue-Application/tree/part8
Kubernetes Manifest Files
Before we start, we need to understand Kubernetes manifest files, or just manifests.
Kubernetes is configured using declarative manifests in which you tell Kubernetes what you want your cluster to look like.
This is not a list of instructions to carry out but a description of your target state. Given this description, Kubernetes will work out what it needs to do to make your current cluster state match your target state.
When describing the target state, you describe each of the objects within your cluster. As objects are created, updated and deleted by way of the RESTful Kubernetes API, these objects are also called resources (the resource type is called a kind in a Kubernetes manifest file).
There are many types of resource, such as these common ones you will come across (and some you will already have come across in this series):
- Deployment
- Namespace
- Service
- Ingress
- Persistent Volume
- Persistent Volume Claim
- Config Map
- Secret
There is a great explanation of these resources and more here. For now, we will just focus on the ones we need to deploy our Spring Boot application.
A manifest is a YAML file that defines one or more of the resources you want in your cluster. By applying the file to the cluster using kubectl, Kubernetes will then reconfigure the cluster to meet your requirements, eg:
kubectl apply -f <your manifest>.yml
You can then look at the state and configuration of a resource with:
kubectl describe <resource type> <resource id> -n <your namespace>
Note the use of the namespace. All resources are created within a namespace; if you do not include one, default is assumed (although you can change this if you wish). Even now, having worked with Kubernetes, I sometimes wonder where my resource has disappeared to, only to realise I have not specified the namespace I should be looking in (-n <namespace>).
Sometimes you cannot remember the namespace you used. In this case use the -A option to include all namespaces. You may be surprised how many resources you actually have once you start using your cluster!
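For example, to list Pods across every namespace, or to change which namespace your kubectl context treats as the default (the namespace name is whatever you chose):
kubectl get pods -A
kubectl config set-context --current --namespace=<your namespace>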
You can lump all resources into a single manifest, with each resource defined as a separate YAML document (documents are demarcated by --- and ... lines), although I prefer to use separate manifest files for separate resources.
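For illustration, a combined manifest holding two resources (both purely made up for this example) would look like this:
apiVersion: v1
kind: Namespace
metadata:
  name: example-ns
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
  namespace: example-ns
data:
  greeting: hello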
In large deployments, separate files for every resource might not work for you and you may end up with a lot of manifests with dependencies between them. When you get to this stage, you may wish to consider creating Helm charts, which effectively package up manifest files into a single deployable unit. Whilst we have previously installed and used Helm, using it for our own application is beyond the scope of this introductory series.
Docker Image
Kubernetes runs your application as an image within a container within one or more Pods within one or more nodes. In our case, we are using Docker containers with the Docker image you created and uploaded to Docker Hub in the previous article.
Once you have access to your Docker image in Docker Hub, you can create the manifests you need to load it into your cluster and run it.
Deployment Manifest
When deploying a Spring Boot application to a cluster, there are two resources we need to create, a Deployment and a Service. We will create the Deployment first.
Log in to your master node and create deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: qqapp
  name: qqapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: qqapp
  template:
    metadata:
      labels:
        app: qqapp
    spec:
      containers:
        - name: qqapp
          image: <docker hub username>/qq_app:latest
          ports:
            - containerPort: 9191
          imagePullPolicy: IfNotPresent
          env:
            - name: DB_HOST
              value: postgres-postgresql.postgres.svc.cluster.local
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: db-user-pass
                  key: username
            - name: DB_PW
              valueFrom:
                secretKeyRef:
                  name: db-user-pass
                  key: password
      imagePullSecrets:
        - name: my-registry-secret
As normal, I will break this down and explain what is happening.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: qqapp
  name: qqapp
The first line of the deployment manifest says which version of the Kubernetes API we are expecting. Whilst you may see other values in older tutorials and examples, many have been deprecated and apps/v1 is a stable version that you should use.
The second line says that this resource is a Deployment.
Names
The metadata section shown above is associated with this resource (the Deployment in this case). The name (qqapp) refers to the specific Deployment when you use clients, such as kubectl. Names are unique within a namespace for a given resource type but not across namespaces. A resource only has one name.
Names are used to create sub-domains within the cluster. For this reason, they are limited to:
- contain no more than 253 characters
- contain only lowercase alphanumeric characters, ‘-’ or ‘.’
- start with an alphanumeric character
- end with an alphanumeric character
Note, however, that some resources for various reasons require the name to be limited in length to 63 characters. Where names are also used as URL path segments, they cannot include . or .. or / or %.
Labels
Unlike names, which refer to a particular instance of a resource, such as this Deployment, labels are used to identify a group of resources that can be operated on together. Labels may be used by the user when using clients such as kubectl but may also be used to identify dependencies between resources, as we shall see as we look at the rest of the deployment.yml file.
The labels are a set of key-value pairs and a resource may have a number of them, but any given key must only appear once for a given resource. In the example above, only one key-value pair is provided: app: qqapp.
For example, labels might include:
app: qqapp
environment: test
tier: back-end
The key (eg: app, environment) can have an optional prefix, separated with a /, eg: my-app/app: qqapp. The prefix and name have different constraints:
Prefix:
- contain no more than 253 characters
- contain only lowercase alphanumeric characters, ‘-’ or ‘.’
- start with an alphanumeric character
- end with an alphanumeric character
Name:
- contain no more than 63 characters
- contain lower or uppercase alphanumeric characters, ’_’, ‘-’ or ‘.’
- start with an alphanumeric character
- end with an alphanumeric character
In general, for names and label keys, if you stick to lowercase letters and numbers, hyphens (-) and full stops (.), you should be fine.
Annotations
We do not use annotations in this example, but an annotations section allows you to attach your own arbitrary, non-identifying key-value pairs to a resource. Kubernetes does not use them to select or group resources.
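As an illustration only (we do not add annotations to the Quick Queue manifests, and these keys and values are made up), annotations sit alongside labels in the metadata section:
metadata:
  name: qqapp
  labels:
    app: qqapp
  annotations:
    build/commit: "abc1234"
    contact: "platform-team@example.com"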
Continuing with the deployment.yml file
spec:
  replicas: 1
  selector:
    matchLabels:
      app: qqapp
In this section, there is a specification of a ReplicaSet resource. A ReplicaSet makes sure the number of Pods running your application matches the number of replicas requested (in this case 1).
It determines how many are running right now based on the selector. In this example the selector uses matchLabels. This selector will look for any Pods with labels that match all of the labels provided. In this example, we only have one.
As well as matchLabels, there is also matchExpressions, which provides a greater range of options for selecting resources. In this example we only use matchLabels.
You should be aware that matchLabels and matchExpressions are the newer form and only apply to some resource types. Older resource types only have the selector field itself followed directly by the key-value pairs to match.
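For reference only (we stick to matchLabels in this series, and the tier label here is hypothetical), a matchExpressions selector might look like this:
selector:
  matchExpressions:
    - key: app
      operator: In
      values:
        - qqapp
    - key: tier
      operator: NotIn
      values:
        - front-end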
In the example, our Deployment will create one ReplicaSet which will ensure one and only one instance of our Pod is scheduled at any point in time.
So, if there is not 1 instance of our Pod, how does the ReplicaSet know what to create? Enter the next section.
template:
  metadata:
    labels:
      app: qqapp
  spec:
In this section, we define the template of the Pod that the ReplicaSet uses to create our Pod. It is sometimes referred to as a Pod template.
The first thing to note is that it has a metadata section that will add these labels to each Pod created from this template. You can now see how the ReplicaSet knows how many Pods are running because its selector matches the labels given to the Pods by this template.
Following the metadata section, there is then a specification (spec) of the Pod to create. This is the pointy end of the stick and things get a little more complex, so I am breaking it down into a separate section. Suffice to say, this is where you define how your application gets created within a Pod.
containers:
  - name: qqapp
    image: <docker hub username>/qq_app:latest
    ports:
      - containerPort: 9191
    imagePullPolicy: IfNotPresent
First we see that the Pod consists of one (or more) containers. Whilst Pods generally only consist of one container, there are cases where a single Pod may consist of multiple containers (eg: sidecar helpers). In our example, we only have one, which we have named, you guessed it, qqapp.
We then tell Kubernetes where to fetch the image from when it creates it. In this case <docker hub username>/qq_app:latest. Of course, you will put your own username in here.
Briefly skipping ports, we can then see the imagePullPolicy. This tells Kubernetes what to do when it creates the container.
- Always — always pull the image each and every time (this can delay the start up of the container)
- IfNotPresent — only pull the image if it does not already exist on the node (this may mean that updates are not loaded as expected if the same version tag, eg: latest, is always used; see the sketch after this list)
- Never — only use the image already on the node and never download it
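One way to avoid the IfNotPresent caveat is to give every build a unique version tag rather than reusing latest. A minimal sketch, assuming a hypothetical 1.0.1 tag pushed to Docker Hub:
containers:
  - name: qqapp
    # a tag the node has not seen before will always be pulled
    image: <docker hub username>/qq_app:1.0.1
    imagePullPolicy: IfNotPresent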
Back to the ports. Here we tell the container to make the application port 9191 available to the cluster on the Pod's IP address. Note that this does not make it available outside of the cluster network.
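Once the Pod is running (we get there later in this article), you can see the Pod IP address on which this port is exposed with:
kubectl get pods -o wide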
env:
  - name: DB_HOST
    value: postgres-postgresql.postgres.svc.cluster.local
  - name: DB_USER
    valueFrom:
      secretKeyRef:
        name: db-user-pass
        key: username
  - name: DB_PW
    valueFrom:
      secretKeyRef:
        name: db-user-pass
        key: password
In this section, the container is given a set of environment variables. These environment variables allow your application to be configured based on the details of your Kubernetes cluster. You may remember running your application in a container in the previous article, like this:
docker run -d -p 9000:9000 -p 9191:9191 -e DB_HOST=host.docker.internal -e DB_USER=postgres -e DB_PW=<password> <dockerhub username>/qq_app:latest
In this command, you set the environment variables for DB_HOST, DB_USER and DB_PW. The env section of the Pod template sets these environment variables based on your cluster.
DB_HOST
This is the address of the postgres database. The address must be accessible to your application. It so happens that when you create the database, it creates a Service which can be accessed by the internal URL: postgres-postgresql.postgres.svc.cluster.local.
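The name follows the standard Service DNS pattern of <service>.<namespace>.svc.cluster.local. If your postgres release or namespace differs from the one used earlier in the series, you can confirm the Service name with:
kubectl get svc -n postgres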
DB_USER
This is the database user the application should use. The value is extracted from the username key within the Kubernetes secret called db-user-pass. The secret was created in part 6 of the series.
DB_PW
Like DB_USER, DB_PW is extracted from the password key of the db-user-pass secret and used with the username to access the database.
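If you no longer have that secret, a sketch of how it could be recreated is below; the placeholder values must match the credentials your postgres database was set up with in part 6:
kubectl create secret generic db-user-pass \
  --from-literal=username=<db username> \
  --from-literal=password=<db password>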
imagePullSecrets:
  - name: my-registry-secret
Finally, this tells Kubernetes which secret it should use to log in to your Docker Hub account in order to pull the images it needs. This is a special kind of secret (created with kubectl create secret docker-registry, as shown below) used to access a container registry such as Docker Hub.
You can set this secret up from the command line on the master node using:
kubectl create secret docker-registry my-registry-secret \
  --docker-username=<DOCKER_USER> \
  --docker-password=<DOCKER_PASSWORD> \
  --docker-email=<DOCKER_EMAIL>
Replace DOCKER_USER, DOCKER_PASSWORD and DOCKER_EMAIL with your own details. Note that you may want to ensure that these details are not added to your shell history, for example by setting them within a script file (in zsh, with the HIST_IGNORE_SPACE option set, you can also prefix the command with a space so it is not recorded).
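For example, a throwaway script along these lines keeps the values out of your history (the file name is arbitrary, and it should not be committed to version control as it holds credentials):
# create-registry-secret.sh (hypothetical helper script)
DOCKER_USER=<your docker hub username>
DOCKER_PASSWORD=<your docker hub password>
DOCKER_EMAIL=<your docker hub email>

kubectl create secret docker-registry my-registry-secret \
  --docker-username="$DOCKER_USER" \
  --docker-password="$DOCKER_PASSWORD" \
  --docker-email="$DOCKER_EMAIL"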
We now have a deployment file that can deploy your application to your cluster using the following on your master node:
kubectl apply -f deployment.yml
This applies your deployment manifest to your cluster and the ReplicaSet created will schedule the creation of a Pod, with a container, with your docker image with your application.
It should return deployment.apps/qqapp created
You can then check with:
kubectl get pods
This should return the Pod running your application in the default namespace:
NAME                     READY   STATUS    RESTARTS   AGE
qqapp-7c7f88cdcc-8ftkn   1/1     Running   0          9s
Your pod name (qqapp-7c7f88cdcc-8ftkn
) will be different to this one. If it does not show Running or shows multiple restarts, then you may have a problem. You can use either of the following commands to investigate further:
kubectl describe pods
kubectl logs <pod name>
You now have your Spring Boot application running in a Pod in your cluster but, whilst the container makes the port available to the internal network, you do not have access to it from your development machine. Port forwarding could be an option but each time the Pod is restarted, its IP address will change. We cover this problem in the next article.
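If you want a quick look before then, one stop-gap is kubectl port-forward, run on the master node, which forwards the Pod's port to localhost on that node (it is fragile for the reasons above, and you will need to rerun it whenever the Pod is replaced):
kubectl port-forward <pod name> 9191:9191
You can then curl http://localhost:9191/... from another terminal on the master node against whatever endpoints your application exposes.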
Summary
In this article we created a Kubernetes deployment manifest that deployed our Spring Boot application into our cluster. Applying this manifest resulted in the creation of a Pod with a container running our application's Docker image.
We used Kubernetes Secrets and Service DNS records to connect our application to the postgres database we deployed earlier in the series.
Finally we looked at the status and logs of our application.
Next we will create a Kubernetes Service to access our application from our development machine.
Previous — Creating a Spring Boot application to add to your cluster
Next — Accessing a Spring Boot application using a Kubernetes service