Load balance microservices using Kubernetes' Minikube

Learn how to deploy and balance applications with Minikube

Yogesh More
Globant
4 min read · Nov 7, 2022


Photo by Chun Kit Soo on Unsplash

In this article, I will walk you through how to deploy and load balance a Spring Boot application using Minikube (a local Kubernetes environment).

For this work, you'll need the following:

  • Spring Boot version: 2.7.3
  • Minikube version: v1.25.1
  • Kubectl: v1.22.5
  • Skaffold: v1.39.1

Let's get started…

We will create a sample microservice using Spring Boot and deploy it on the local Kubernetes cluster. To start the local Kubernetes cluster, first start Minikube using the following command.

minikube start
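
To confirm that the local cluster is up before going further, you can run the standard Minikube and kubectl checks (not specific to this project); both should report a running node:

minikube status
kubectl get nodes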

Now let's create a simple Spring Boot application with one REST GET end-point that returns a message containing the pod name. The pod name tells us which instance served the request.

Using Spring Initializr (start.spring.io), create a project and name it "order-service".
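
If you prefer the command line over the web UI, the project can also be generated from the Spring Initializr API. The parameters below (Maven project, the web dependency, Java 11) are my assumptions to match the rest of this article, so adjust them as needed:

curl https://start.spring.io/starter.zip -d type=maven-project -d dependencies=web -d name=order-service -d artifactId=order-service -d javaVersion=11 -o order-service.zip
unzip order-service.zip -d order-service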

Add the following end-point code to the order-service project.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.core.env.Environment;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class OrderServiceApplication {
    @Autowired
    Environment env;
    @GetMapping("/status")
    public String status() {
        // HOSTNAME => name of the pod serving the request
        return "Status - returned by Pod - " + env.getProperty("HOSTNAME");
    }

    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }
}

Now let's create a Dockerfile to build the container image.

FROM openjdk:11.0.7-jre-slim
VOLUME /tmp
ADD target/order-*.jar app.jar
ENV JAVA_OPTS=""
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]

Next, create a deployment file "order-service.yaml" (placed under a k8s folder, which is where the Skaffold config below expects it) containing a LoadBalancer Service and a Deployment definition, as shown below:

apiVersion: "v1"
kind: "List"
items:
- apiVersion: "v1"
  kind: "Service"
  metadata:
    annotations: {}
    labels: {}
    name: "order-service"
  spec:
    selector:
      app: "order-service"
    type: LoadBalancer
    ports:
    - name: http
      port: 8080
      targetPort: 8080
      nodePort: 32000
      protocol: TCP
- apiVersion: "apps/v1"
  kind: "Deployment"
  metadata:
    labels:
      app: "order-service"
      version: "1.0.0"
    name: "order-service"
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: "order-service"
        version: "1.0.0"
    template:
      metadata:
        labels:
          app: "order-service"
          version: "1.0.0"
      spec:
        containers:
        - image: "order-service:1.0.0"
          imagePullPolicy: "Never"
          name: "order-service"
          resources:
            limits:
              cpu: 500m
            requests:
              cpu: 200m
          ports:
          - protocol: TCP
            containerPort: 8080

You can deploy directly with the kubectl apply -f k8s/order-service.yaml command, or let Skaffold build and deploy for you by creating a "skaffold.yaml" file as shown below:

apiVersion: skaffold/v2beta11
kind: Config
metadata:
  name: order-service
build:
  artifacts:
  - image: order-service
deploy:
  kubectl:
    manifests:
    - k8s/order-service.yaml
portForward:
- resourceType: deployment
  resourceName: order-service
  port: 8080
  localPort: 8080

Open the command prompt, navigate to the project directory, and run the following command.

skaffold run

After a successful run, verify that three instances of order-service are running:

POD status

Check the status of the newly created service:

Service status
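
The screenshots above show the output of the standard kubectl listing commands; if you are following along without the images, run:

kubectl get pods
kubectl get svc order-service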

As we can see, the service "order-service" is of type LoadBalancer, and its external IP is still pending. To get an external IP assigned and make the service reachable, we need to create a bridge between Minikube and our machine using the minikube tunnel command.

Open a new command prompt and run the minikube tunnel command.

Tunnel command to bridge local and Minikube network

Note: To keep the bridge on, don't close the tunnel window.
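
For reference, the only command needed in that second window is the one below; minikube tunnel runs in the foreground and may ask for elevated privileges because it creates network routes:

minikube tunnel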

Now rerun the get services command (kubectl get svc) and verify that an IP is assigned:

Load Balancer Service IP Assignment

Now open Postman, create a GET request, and change the keep-alive configuration as shown below.

Click on headers → hidden:

Toggle hidden headers

Uncheck the Connection field as shown in the below diagram. This ensures that each request opens a new connection, so we can see the effect of load balancing. (Kubernetes service load balancing effectively picks a backend pod at random, so you may occasionally see the same pod serve consecutive requests.)

Important: toggle the keep-alive (Connection) header

IMPORTANT: browsers keep connections alive by default (typically around 60 seconds; you can verify the headers in the browser's network tab), so if you are testing from a browser, wait for the connection to expire before hitting the end-point again to see the effect of load balancing.

Now let's hit the end-point as shown below and verify load balancing is working:

Observe the last word of the response: a different instance serves each request, confirming that load balancing is working.
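
If you prefer the command line to Postman, a quick curl loop works just as well, because curl opens a fresh connection for each request. Replace <EXTERNAL-IP> with the address reported by kubectl get svc (this loop is just an illustrative sketch):

for i in 1 2 3 4 5; do curl http://<EXTERNAL-IP>:8080/status; echo; done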

Conclusion

We have successfully deployed a microservice on Minikube and, through the LoadBalancer service, received responses from different instances.

#Minikube #Springboot #LoadBalancer
