Spring Boot CI/CD on Kubernetes using Terraform, Ansible and GitHub: Part 9
Part 9: Accessing a Spring Boot application using a Kubernetes Service
This is part of a series of articles that creates a project to implement automated provisioning of cloud infrastructure in order to deploy a Spring Boot application to a Kubernetes cluster using CI/CD. In this part we create a Service to expose the Spring Boot Application to the outside world.
Follow from the start — Introduction
In the previous article we deployed our Spring Boot application to our Kubernetes cluster. Although the application is accessible from within the cluster, it is not accessible externally.
You can find the files described in this article in the k8s folder of this repository:
https://github.com/MartinHodges/Quick-Queue-Application/tree/part9
Kubernetes Services
Pods are ephemeral. This means that they do not last forever. If the application crashes, the container runs out of resources or the deployment changes, the Pod is destroyed and recreated (rescheduled). When this happens, the new Pod is given a new IP address internally.
Not only does this mean that you cannot rely on a Pod's IP address, but the ReplicaSet may also create multiple Pods. When the Deployment is upgraded, it can destroy Pods and create new ones.
This means that, if you want to connect to your application, it is not clear how that can be done with Pods popping up and disappearing.
The answer is a Kubernetes Service. A Kubernetes Service provides access to an application by forwarding a request to a Pod that is running that application. When Pods are created and destroyed, Kubernetes adjusts routing to ensure that the Service can still pass requests to the application, regardless of where the Pods are. Kubernetes ensures that the Service itself has a fixed IP address within the cluster, even if the Service is restarted.
Service manifest
Like all Kubernetes resources, the Service is created through a manifest file. For this application, create service.yml on the master node:
apiVersion: v1
kind: Service
metadata:
  name: qqapp
  labels:
    app: qqapp
spec:
  selector:
    app: qqapp
  type: NodePort
  ports:
  - port: 9191
    targetPort: 9191
    protocol: TCP
    nodePort: 30191
  externalIPs:
  - <master public IP address>
As before, let's break it down into sections.
apiVersion: v1
kind: Service
metadata:
  name: qqapp
  labels:
    app: qqapp
We require version v1 of the Kubernetes API and are creating a Service resource. The metadata provides a name for the Service (qqapp) as well as a label of the same value.
spec:
  selector:
    app: qqapp
  type: NodePort
This section starts the specification of the Service. The first thing specified is the selector that determines which Pods will be connected through this Service. Unlike ReplicaSets, there is no matchLabels or matchExpressions; only labels can be used to select Pods. Any Pod with matching labels will be passed requests through this Service.
There are several types of Service:

- ClusterIP — provides a fixed IP on the internal cluster network, allowing other applications within the cluster to reach the application
- NodePort — exposes the application as a port on each node of the cluster
- LoadBalancer — used with cloud providers that support Kubernetes services; it requests the cloud provider to set up an external load balancer for the application (Binary Lane does not offer this feature)
- ExternalName — acts as a proxy for services defined outside the cluster
In this case we are using the NodePort type to map the application's port to each node.
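For comparison, if the application only needed to be reachable from inside the cluster, the same Service could be declared with the ClusterIP type. This is a sketch, not part of this project; the qqapp-internal name is hypothetical, and the nodePort and externalIPs fields are dropped because they only apply to a NodePort Service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: qqapp-internal   # hypothetical name, to avoid clashing with the NodePort Service
spec:
  selector:
    app: qqapp
  type: ClusterIP        # the default type if none is given
  ports:
  - port: 9191           # cluster-internal port
    targetPort: 9191     # container port the application listens on
    protocol: TCP
```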
ports:
- port: 9191
  targetPort: 9191
  protocol: TCP
  nodePort: 30191
This tells the NodePort Service how to configure the various ports:

- targetPort is the port on the application itself
- port is the port that the Service will be available on inside the cluster
- nodePort is the port that the Service will be made available on, on each host node (by default, Kubernetes requires this to be in the range 30000–32767)
- protocol is the protocol that the Service will route to the specified application port
externalIPs:
- <master public IP address>
If one or more of your hosts are multi-homed and have access to the Internet via one of their network connections, you can make the service available on the Internet by defining an external IP address for the Service.
In this example, we are using the master node to host our service.
Creating the Service
Now that you have a manifest file to define your Service, you can add it to your cluster with:
kubectl apply -f service.yml
This should give you the response service/qqapp created. Any error is likely to be a syntax error in the YAML file.
Accessing a service externally
You can now see the details of your Service with:
kubectl get services
You are likely to see something like this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14d
qqapp        NodePort    10.5.223.74   112.13.32.65   9191:30191/TCP   86s
The first is the Kubernetes API service itself. The second is the service you just created. If you connected it to an external IP address, you should now be able to go to your development machine and enter:
curl http://<master ip address>:30191/api/v1/queues
You should receive back a reply of: []
If you have any problems, ensure that your request matches the nodePort of the Service and that the targetPort matches the port that the Spring Boot Tomcat application server is listening on.
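If the ports do not line up, check where the application's listening port is set. This project assumes 9191; in a typical Spring Boot application that would be configured in application.properties (a sketch, assuming the default embedded Tomcat setup):

```properties
# src/main/resources/application.properties
server.port=9191
```

Whatever value is set here must match the targetPort in the Service manifest.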
You can also access your Service within your cluster using its cluster IP address and port.
If you want more information about your service, use:
kubectl describe service qqapp
You can now test your application with a tool like Postman or curl.
Accessing a service internally
You can also access your Service from within the cluster using the DNS entry that Kubernetes sets up for each Service.
First get the name of your application Pod:
kubectl get pods
Now use that name to get an interactive shell into your Pod:
kubectl exec -it <pod name> -- /bin/bash
Now install curl:
apt update
apt install curl -y
Now access your service via its local DNS name:
curl qqapp.default.svc.cluster.local:9191/api/v1/queues
You should now get a response of [].
Services (and, in some configurations, Pods) are given DNS entries across the cluster. These entries are served by the cluster DNS server (typically CoreDNS), which is updated whenever a Service is created or removed; each Pod's /etc/resolv.conf file is configured to point at the cluster DNS so that new entries resolve immediately.
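Because each Pod's resolv.conf also lists search domains such as default.svc.cluster.local, shorter forms of the Service name resolve too. As a sketch, run from the shell opened inside the Pod above, all of these should reach the same Service from the default namespace:

```sh
curl qqapp:9191/api/v1/queues
curl qqapp.default:9191/api/v1/queues
curl qqapp.default.svc.cluster.local:9191/api/v1/queues
```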
Endpoint Slices
For a Service backed by very many Pods, tracking every endpoint in a single object becomes slow and resource intensive to update. Kubernetes therefore introduces EndpointSlices: each slice contains a subset of the endpoints (Pod IP and port pairs) backing a Service, so when Pods change, only the affected slice needs to be updated, making updates quicker and cheaper. When just playing with Kubernetes, you will not need more than one slice.
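You can see the slices Kubernetes has created for this Service with a label filter (a sketch to run against your cluster; kubernetes.io/service-name is the standard label linking a slice to its Service):

```sh
kubectl get endpointslices -l kubernetes.io/service-name=qqapp
```

For this small deployment you should see a single slice listing the qqapp Pod endpoints.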
Congratulations, you now have a Spring Boot application and its database deployed to a Kubernetes cluster which you can access from the Internet.
Summary
In this article we created a Kubernetes Service manifest that we deployed in order to provide access to our Spring Boot application.
We saw how we could configure the Service to provide access externally as well as internally via a Kubernetes DNS entry.
Next we will automate the Continuous Integration/Continuous Deployment (CI/CD) pipeline, starting with the CI process.