Event-Driven Approach and Database Caching

Doruk Su · Published in Finartz · 6 min read · Nov 26, 2020

This article is about how to design and implement your microservices so that they communicate with other microservices using asynchronous messages. The aim of the article is to increase the performance of microservices by caching data, and to make this caching infrastructure flexible by using an event-driven architecture.

I will explain the subject with an example. Suppose we have two services named Customer and Address. As required by the business rules, the customer service retrieves a customer's address information from the address service. Considering that address information is needed often, it becomes very costly for the customer service to call the address service constantly, and for the address service to read this information from the database each time. On top of that, address information rarely changes, and most of the reads from the address service are done by the primary key of the customer record.

If we cache the address data on reads, avoiding the cost of accessing the database on every call, we significantly improve the response time of customer service calls.

However, there is an important issue to consider here. When address data changes via an update or delete in the address service, we want the customer service to recognize that there has been a state change in the address service, because the data in the cache may no longer be valid.

Now let’s look at two different structures we can use to invalidate the cache. The first is for the services to communicate with each other in a synchronous manner; I call this the traditional synchronous request-response model. When address information changes, the address service can update the address information in the cache via REST endpoints provided by the customer service. You can see a drawing of this structure in the picture below.

The scenario in the picture is as follows:

1-) A customer service client makes a call for some business operation

2-) The business flow needs address information, so the customer service first checks the cache for the address data

3-) If the address data is not in the cache, the customer service calls the address service to retrieve it

4-) Address data may be updated or deleted via calls to the address service

5-) When address data changes, the address service calls a customer service endpoint to invalidate the related data in the cache, or writes directly to the customer service cache to update it

Some problems and disadvantages exist with this type of architecture. First, the services are tightly coupled, and the coupling introduces fragility between them: if the customer service is down or running slowly, the address service is also affected, because it now communicates directly with the customer service. Second, the coupling creates brittleness between the services: if a customer service endpoint changes, the address service has to change as well.

Another approach is to use asynchronous messaging to communicate state changes between services. With an asynchronous approach, we put a queue between the customer and address services. This queue will be used by the address service to publish a message whenever the state of any address data changes. You can see a drawing of this architecture in the picture below.

The scenario in this picture is the same as the previous one, except for two differences.

1-) Every time address data changes, the address service publishes a message to the queue

2-) The customer service is monitoring the queue for messages

When a message arrives on the queue, the customer service processes it and invalidates the related address data in the cache. This approach provides four benefits: loose coupling, durability, scalability, and flexibility. I will explain these terms later in the article. Now let’s create our sample project and apply the asynchronous architecture. I am going to use the following technologies in this sample project:

  • Spring Boot
  • Kafka
  • Redis
  • PostgreSQL

We are going to use Kafka for the queue and Redis for the cache mechanism. I chose them because I am familiar with these technologies; you can use another message broker such as RabbitMQ instead of Kafka.

First of all, let’s start by creating the address service which is our producer service.

<project>
...
  <parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.4.4.RELEASE</version>
    <relativePath/>
  </parent>
  <artifactId>address-service</artifactId>
  <groupId>com.doruk</groupId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-dependencies</artifactId>
        <version>Camden.SR5</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-stream</artifactId>
    </dependency>
    <dependency>
      <groupId>postgresql</groupId>
      <artifactId>postgresql</artifactId>
      <version>9.1-901.jdbc4</version>
    </dependency>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-stream-kafka</artifactId>
    </dependency>
    <dependency>
      <groupId>org.projectlombok</groupId>
      <artifactId>lombok</artifactId>
      <version>1.18.16</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>
...
</project>

The address service has simple CRUD operations: it creates, updates, reads, and deletes Address entities in the Postgres database.

Writing the message producer in the address service

Every time address data is added, updated, or deleted, the address service will publish a message to a Kafka topic indicating that an address-changed event occurred. The published message will include the id of the Address entity as well as the action that occurred.
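The event payload only needs to carry the entity id and the action. A minimal sketch of such a payload class; the class name, field names, and action strings are my assumptions, not taken from the original repository:

```java
// Hypothetical event payload: carries which action occurred and the
// primary key of the affected Address entity.
public class AddressChangeModel {
    public static final String ACTION_CREATED = "CREATED";
    public static final String ACTION_UPDATED = "UPDATED";
    public static final String ACTION_DELETED = "DELETED";

    private String action;   // which CRUD action occurred
    private Long addressId;  // primary key of the Address entity

    public AddressChangeModel() { }  // no-arg constructor for JSON deserialization

    public AddressChangeModel(String action, Long addressId) {
        this.action = action;
        this.addressId = addressId;
    }

    public String getAction() { return action; }
    public void setAction(String action) { this.action = action; }
    public Long getAddressId() { return addressId; }
    public void setAddressId(Long addressId) { this.addressId = addressId; }
}
```

Keeping the payload down to an id and an action keeps consumers decoupled: they only learn *that* something changed, and can fetch fresh data themselves if they need it.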

Now we need to bind the address service to the message broker, and we can do this by annotating the main class with the @EnableBinding annotation.
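A minimal sketch of that binding, assuming the main class is named AddressServiceApplication (the Source interface is the standard one shipped with Spring Cloud Stream):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;

// @EnableBinding(Source.class) tells Spring Cloud Stream to wire this
// application to the message broker through the Source output channel.
@SpringBootApplication
@EnableBinding(Source.class)
public class AddressServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(AddressServiceApplication.class, args);
    }
}
```

The actual Kafka topic that the output channel maps to is configured in the application properties (spring.cloud.stream.bindings.output.destination), not in code.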

The Source interface is a Spring Cloud-defined interface that exposes a single method called output(), which means the service only needs to publish messages. Publishing of the message event occurs in the publishAddressChange method. Every method in the service that changes address data must call publishAddressChange.
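The publishing method could be sketched like this; the bean name SimpleSourceBean and the AddressChangeModel payload class are assumptions for illustration:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.stereotype.Component;

// Hypothetical publisher sketch: every CRUD method in the service that
// changes address data calls publishAddressChange after the change.
@Component
public class SimpleSourceBean {

    private final Source source;

    @Autowired
    public SimpleSourceBean(Source source) {
        this.source = source;
    }

    public void publishAddressChange(String action, Long addressId) {
        // Wrap the payload in a Spring message and send it out on the
        // output channel, which is bound to the Kafka topic.
        source.output().send(
            MessageBuilder
                .withPayload(new AddressChangeModel(action, addressId))
                .build());
    }
}
```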

Writing the message consumer in the customer service

Let’s now switch directions and look at how a service can consume a message. We are going to have the customer service consume the message published by the address service.

Unlike the producer, I defined this service as a Sink. The Sink interface exposes a single method called input(), which is used to listen for incoming messages on a channel.

Spring Cloud Stream will execute the processAddressData method every time a message is received on the input channel. Here we have a simple switch statement: according to the action type sent by the producer, it executes the appropriate function.
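A sketch of what such a listener could look like with Spring Cloud Stream; the class name and action strings are assumptions, and the eviction itself is left as a placeholder until the cache is built:

```java
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

// Hypothetical consumer sketch: Spring Cloud Stream invokes
// processAddressData for every message arriving on the input channel.
@EnableBinding(Sink.class)
public class AddressChangeHandler {

    @StreamListener(Sink.INPUT)
    public void processAddressData(AddressChangeModel change) {
        switch (change.getAction()) {
            case "UPDATED":
            case "DELETED":
                // The cached copy is stale; evict it.
                evictFromCache(change.getAddressId());
                break;
            case "CREATED":
            default:
                // A create does not invalidate anything already cached.
                break;
        }
    }

    private void evictFromCache(Long addressId) {
        // Placeholder; the caching section replaces this with a Redis call.
    }
}
```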

Creating the distributed caching mechanism

We now have two services communicating with messages, but we haven’t done anything with those messages yet. Next we build the cache mechanism in the customer service: when the customer service needs an address record, it first checks the cache server. If the address data does not exist in the cache, the customer service calls the address service for the record. We will use a Redis server for the caching mechanism, and we need three dependencies for this. Redis is a fast in-memory database and cache, open source under a BSD license.

<dependency>
  <groupId>org.springframework.data</groupId>
  <artifactId>spring-data-redis</artifactId>
  <version>1.7.4.RELEASE</version>
</dependency>

<dependency>
  <groupId>redis.clients</groupId>
  <artifactId>jedis</artifactId>
  <version>2.9.0</version>
</dependency>

<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-pool2</artifactId>
  <version>2.0</version>
</dependency>

Now let’s make the necessary bean definitions. We use JedisConnectionFactory to provide the actual connection to the Redis server. The redisTemplate method creates a RedisTemplate that will be used to carry out actions against our Redis server.
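A minimal sketch of these bean definitions, assuming a local Redis instance on the default port (the class name RedisConfig and the host/port values are assumptions):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;

// Wires up the Redis connection and the template used to talk to it.
@Configuration
public class RedisConfig {

    @Bean
    public JedisConnectionFactory jedisConnectionFactory() {
        JedisConnectionFactory factory = new JedisConnectionFactory();
        factory.setHostName("localhost"); // assumed local Redis instance
        factory.setPort(6379);            // default Redis port
        return factory;
    }

    @Bean
    public RedisTemplate<String, Object> redisTemplate() {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(jedisConnectionFactory());
        return template;
    }
}
```

With no serializers set explicitly, RedisTemplate falls back to JDK serialization, so the cached entities need to implement Serializable.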

We are using Spring Data to access the Redis database, so we need to define a repository class. The repository class has three methods, which save, delete, and find records respectively.
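A hypothetical version of that repository, storing Address entities in a Redis hash keyed by id (the class name, hash name, and method names are my assumptions):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.HashOperations;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Repository;

// Hypothetical Redis repository sketch: keeps Address records in a single
// Redis hash, keyed by the entity id. Assumes Address is Serializable.
@Repository
public class AddressRedisRepository {

    private static final String HASH_NAME = "Address";

    private final HashOperations<String, Long, Address> hashOperations;

    @Autowired
    public AddressRedisRepository(RedisTemplate<String, Object> redisTemplate) {
        this.hashOperations = redisTemplate.opsForHash();
    }

    public void saveAddress(Address address) {
        hashOperations.put(HASH_NAME, address.getId(), address);
    }

    public void deleteAddress(Long addressId) {
        hashOperations.delete(HASH_NAME, addressId);
    }

    public Address findAddress(Long addressId) {
        // Returns null on a cache miss, in which case the caller falls
        // back to the address service and then populates the cache.
        return hashOperations.get(HASH_NAME, addressId);
    }
}
```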

We have created our cache structure; now let’s invalidate the address data in the cache when an update or delete event occurs. Earlier we set up a structure that listens for and analyzes messages from the channel with the event listener class. Now we will improve this class.

Each message arriving at the processAddressData method will update the cache via the Redis repository according to the action it contains.
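The improved listener could look roughly like this, with the Redis repository injected so that stale entries are evicted; the class and method names are assumptions for illustration:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

// Improved listener sketch: update/delete events evict the cached entry,
// so the next read falls through to the address service for fresh data.
@EnableBinding(Sink.class)
public class AddressChangeEventListener {

    private final AddressRedisRepository redisRepository;

    @Autowired
    public AddressChangeEventListener(AddressRedisRepository redisRepository) {
        this.redisRepository = redisRepository;
    }

    @StreamListener(Sink.INPUT)
    public void processAddressData(AddressChangeModel change) {
        switch (change.getAction()) {
            case "UPDATED":
            case "DELETED":
                // Evict the stale entry rather than trying to patch it;
                // the cache is repopulated lazily on the next read.
                redisRepository.deleteAddress(change.getAddressId());
                break;
            default:
                break;
        }
    }
}
```

Evicting on update (instead of writing the new value into the cache) keeps the consumer simple: it never needs the full payload, only the id.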

Advantages of using Event-Driven Architecture

There are important advantages to using the event-driven approach, as I mentioned earlier in the article.

  • Loose Coupling: The services are not dependent on each other. When address data changes, the address service writes a message to a queue. The listener in the customer service only knows that it received a message; it knows nothing about the message’s source.
  • Durability: The presence of the queue guarantees that a message will be delivered even if the customer service is down.
  • Scalability: Because messages are processed off a queue, consumers can be scaled out independently of the producer to handle the load.
  • Flexibility: We can add new consumers without impacting the producer service.

Thank you for reading my post. You can find the complete code at https://github.com/doruk581/caching-eda-example
