Distributed Caching Pattern for Microservices with Redis on Kubernetes
Caching is a key part of any production-grade deployment of services. It improves system performance by sitting as a middle layer between an application and the persistence system where the actual data is kept.
When we look at the possible caching implementations in a microservice architecture, several patterns are available, including:
- Embedded Cache
- Embedded Distributed Cache
- Client Server Cache
- Cloud Cache
- Sidecar Cache
- Distributed Cache
In this article my intention is to discuss the Distributed Cache pattern and how we can deploy a distributed cache solution using Redis, both as a single node and in a highly available setup.
As an initial step, let's look at what a distributed cache means — the quote below has been extracted from .
A distributed cache is a system that pools together the random-access memory (RAM) of multiple networked computers into a single in-memory data store used as a data cache to provide fast access to data.
The diagram below depicts a typical use case for a distributed cache. We are going to implement the same use case using Spring Boot, MySQL, and Redis on top of Kubernetes.
What we are going to do here!
- Setting up a Single Node Redis Deployment
- Setting up a HA Redis Deployment (Cluster Disabled, Using Sentinel for High Availability)
- Prepare a sample Spring Boot Microservice to test the Caching Between Application and the MySQL
Setting up a Single Node Redis Deployment
The diagram below depicts which components are needed and how they interconnect when we are doing a Single Node Redis Deployment in Kubernetes.
Spring Boot Application — Application created to test the Caching.
Redis Service — This is the Kubernetes Service for Redis, used as the entry point from the Spring Boot application.
Redis — This is the Redis Pod, which runs as a StatefulSet in Kubernetes.
NFS Storage Provisioner — This provisions Persistent Volumes dynamically through Persistent Volume Claims. It is really only needed when we go with the Redis cluster, but I have used it with the single node as well so the configuration can be reused. I'm using the nfs-subdir-external-provisioner here, because the default NFS provisioner is no longer maintained, and in my testing its Pod did not start properly in Kubernetes (minikube version: v1.22.0).
External NFS Server — Used for the persistent storage.
Minikube — For local testing purposes we can use Minikube to set up the Kubernetes cluster.
Note: I'm not going to explain setting up Minikube here; refer  if you are getting started with it.
Step-1: Preparing the Storage Class.
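A minimal StorageClass for the NFS subdir provisioner could look like the following sketch. The class name `nfs-storage` is an assumption; the provisioner string must match the `PROVISIONER_NAME` configured in the provisioner Deployment.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage                 # assumed name; referenced later by the PVCs
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner  # must match PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"          # do not keep an archived copy when a PVC is deleted
```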
Step-2: Preparing the Role and Access Control.
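The provisioner needs permission to manage PersistentVolumes and watch PersistentVolumeClaims. A condensed RBAC sketch (the full file from the nfs-subdir-external-provisioner project also includes a leader-election Role) might look like this:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: <namespace>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: <namespace>
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
```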
Step-3: Preparing the Deployment File.
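The provisioner Deployment points at the external NFS server. The `<nfs-server-ip>` and `<nfs-export-path>` placeholders must be replaced with your NFS server's address and exported directory:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: <namespace>
spec:
  replicas: 1                       # raised to 2 later for the HA setup
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner  # matches the StorageClass
            - name: NFS_SERVER
              value: <nfs-server-ip>
            - name: NFS_PATH
              value: <nfs-export-path>
      volumes:
        - name: nfs-client-root
          nfs:
            server: <nfs-server-ip>
            path: <nfs-export-path>
```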
Step-4: Executing the Script and check the Deployment.
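Assuming the manifests above were saved under the file names below (adjust to match your own), they can be applied and checked like this:

```
kubectl apply -f storage-class.yaml
kubectl apply -f rbac.yaml -n <namespace>
kubectl apply -f nfs-provisioner-deployment.yaml -n <namespace>

kubectl get pods -n <namespace>
```

The nfs-client-provisioner Pod should reach the Running state before continuing.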
Step-5: Preparing the redis-config.yaml
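A minimal redis-config sketch delivers a `redis.conf` through a ConfigMap; the settings shown are assumptions for this setup:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
  namespace: <namespace>
data:
  redis.conf: |
    dir /data
    port 6379
    appendonly yes        # AOF persistence on the NFS-backed volume
```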
Step-6: Preparing the Redis Deployment File
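A sketch of the Redis StatefulSet together with its headless Service could look like the following. The storage class name refers back to the StorageClass step; sizes and image tag are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: <namespace>
spec:
  clusterIP: None                   # headless: gives each Pod a stable DNS name
  ports:
    - port: 6379
      name: redis
  selector:
    app: redis
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
  namespace: <namespace>
spec:
  serviceName: redis
  replicas: 1                       # raised to 3 later for the HA setup
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:6.2
          command: ["redis-server", "/etc/redis/redis.conf"]
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: config
              mountPath: /etc/redis
            - name: data
              mountPath: /data
      volumes:
        - name: config
          configMap:
            name: redis-config
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: nfs-storage   # assumed name from the StorageClass step
        resources:
          requests:
            storage: 1Gi
```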
Step-7: Executing the Scripts for Redis Deployment
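With file names matching your own manifests, the deployment boils down to:

```
kubectl apply -f redis-config.yaml -n <namespace>
kubectl apply -f redis-statefulset.yaml -n <namespace>
```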
Step-8: Verify the Initial Setup
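A quick sanity check is to confirm the Pod is running and that Redis answers a ping (redis-cli replies PONG when the server is up):

```
kubectl get pods -n <namespace>
kubectl exec -it redis-0 -n <namespace> -- redis-cli ping   # expect: PONG
```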
Now we are done with the single node setup; we will verify it through the Spring Boot application in Section 3.
Setting up a HA Redis Deployment
When it comes to a highly available setup, there are two options: cluster enabled and cluster disabled. Each has its pros and cons; for more information refer to the “comparing cluster options” link.
Here I'm going to set up the cluster disabled mode, where high availability is achieved using Sentinel. Sentinel monitors the nodes and, if for example the master node fails, promotes one of the replicas to master to keep the service available.
In this setup there will be 3 Redis nodes and 3 Sentinel nodes, along with 2 NFS provisioner replicas, to achieve high availability.
Step-1: Deploy the NFS Provisioner with the replicas increased to 2.
Step-2: Use the same Redis Config file and deploy it.
Step-3: Use the same Redis Deployment file and update the replicas to 3 and deploy the file.
Step-4: Prepare the Sentinel Deployment File.
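A Sentinel sketch needs two pieces: a configuration telling Sentinel which master to monitor (with a quorum of 2 out of 3 Sentinels), and a StatefulSet running `redis-sentinel`. The master address below assumes the headless `redis` Service from the earlier steps. Note that Sentinel rewrites its own configuration file at runtime, so real deployments usually copy the file from the ConfigMap to a writable volume via an initContainer; that is omitted here for brevity.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sentinel-config
  namespace: <namespace>
data:
  sentinel.conf: |
    port 5000
    sentinel monitor mymaster redis-0.redis.<namespace>.svc.cluster.local 6379 2
    sentinel down-after-milliseconds mymaster 5000
    sentinel failover-timeout mymaster 60000
    sentinel parallel-syncs mymaster 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sentinel
  namespace: <namespace>
spec:
  serviceName: sentinel
  replicas: 3
  selector:
    matchLabels:
      app: sentinel
  template:
    metadata:
      labels:
        app: sentinel
    spec:
      containers:
        - name: sentinel
          image: redis:6.2
          command: ["redis-sentinel", "/etc/redis/sentinel.conf"]
          ports:
            - containerPort: 5000
          volumeMounts:
            - name: config
              mountPath: /etc/redis
      volumes:
        - name: config
          configMap:
            name: sentinel-config
```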
Note: Need to update the <namespace> tag accordingly.
Step-5: Now execute the below commands to deploy the artifacts.
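Assuming a file name of sentinel-deployment.yaml (adjust to your own), the commands are:

```
kubectl apply -f sentinel-deployment.yaml -n <namespace>
kubectl get pods -n <namespace>
kubectl logs -f sentinel-0 -n <namespace>   # look for +monitor and +slave entries
```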
There will be 3 Redis Pods and 3 Sentinel Pods running. You can verify the Redis nodes using the same approach as before, and to verify the Sentinel nodes you can check the logs using kubectl logs -f <podname> -n <namespace>.
Prepare a sample Spring Boot Microservice to test the Caching Between Application and the MySQL
Step-1: Implement the Spring Boot project to use the Redis cache. My sample implementation can be found at “redis-sample”. Clone it, go to the project folder, and execute the command below to build the image.
Highlights in the Application Code:
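The core of the integration is Spring's cache abstraction backed by Redis: `@EnableCaching` on the application class plus `@Cacheable` on the read method. The sketch below is illustrative only; the class and method names (`ProductService`, `Product`, and so on) are hypothetical and may differ from the actual sample repository, and it assumes the `spring-boot-starter-data-redis` and `spring-boot-starter-data-jpa` dependencies.

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

// Hypothetical service class; the sample repo's actual names may differ.
@Service
public class ProductService {

    private final ProductRepository repository;  // Spring Data JPA repository over MySQL

    public ProductService(ProductRepository repository) {
        this.repository = repository;
    }

    // On the first call the method body runs and the result is written to
    // Redis under the "products" cache; later calls with the same id are
    // answered from Redis without touching MySQL.
    @Cacheable(value = "products", key = "#id")
    public Product findById(Long id) {
        return repository.findById(id).orElseThrow();
    }
}
```

Remember to add `@EnableCaching` next to `@SpringBootApplication` on the main class, otherwise the annotation is silently ignored.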
application-dev.yaml properties file.
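A sketch of the dev profile properties, assuming a MySQL Service named `mysql` and the `redis` Service from the earlier steps (credentials and database name are placeholders):

```yaml
spring:
  datasource:
    url: jdbc:mysql://mysql:3306/sampledb   # 'mysql' is the assumed MySQL Service name
    username: <db-user>
    password: <db-password>
  redis:
    host: redis          # the Redis Kubernetes Service created earlier
    port: 6379
  cache:
    type: redis
```

For the Sentinel-based HA setup, `spring.redis.host`/`port` would be replaced with `spring.redis.sentinel.master` and `spring.redis.sentinel.nodes` pointing at the Sentinel Pods.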
To build the Image, execute the below command from the project root folder.
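Building against minikube's Docker daemon makes the image visible to the cluster without a registry push. The image name `redis-sample` is an assumption; use whatever your Deployment references:

```
eval $(minikube docker-env)        # point docker at minikube's daemon
docker build -t redis-sample:latest .
```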
After a successful build you can find the image as below:
Step-2: Now that the image is ready, we need to deploy it to the minikube cluster.
Prepare the below Deployment File:
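A sketch of the application Deployment and Service; the image name, port, and profile are assumptions matching the build step above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-sample
  namespace: <namespace>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-sample
  template:
    metadata:
      labels:
        app: redis-sample
    spec:
      containers:
        - name: redis-sample
          image: redis-sample:latest
          imagePullPolicy: Never     # use the image built inside minikube's daemon
          ports:
            - containerPort: 8080
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: dev
---
apiVersion: v1
kind: Service
metadata:
  name: redis-sample
  namespace: <namespace>
spec:
  type: NodePort
  selector:
    app: redis-sample
  ports:
    - port: 8080
      targetPort: 8080
```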
Deploy using the command below and verify that all the Pods are in the ready state.
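With a file name of app-deployment.yaml (adjust to your own):

```
kubectl apply -f app-deployment.yaml -n <namespace>
kubectl get pods -n <namespace>
```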
Step-3: Verifying the Caching.
Using the minikube ssh command, get into the minikube cluster and execute the curl command below.
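The endpoint path below is hypothetical; substitute the actual REST path exposed by the sample application, and take the Service's cluster IP from `kubectl get svc -n <namespace>`:

```
minikube ssh
# inside the node:
curl http://<service-cluster-ip>:8080/products/1   # hypothetical endpoint path
```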
Also Check the application logs:
You can see that the first call produces logs showing the records being fetched from the backend database. When you execute it a second time, the database is not hit, but you still get the response back.
To verify that this goes through our Redis cache, execute the command below to get into the Redis Pod.
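Using redis-cli inside the Pod, the cached entries can be listed (the key pattern depends on the cache name configured in the application, e.g. "products" in the sketch earlier; KEYS * is fine for a test instance but should be avoided on production data sets):

```
kubectl exec -it redis-0 -n <namespace> -- redis-cli
127.0.0.1:6379> KEYS *
```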
You can see our execution records are available there. This confirms our integration works as expected.
That's it! The application can further be extended with cache update and eviction options as well.