Switching from Monolithic to Event-Driven Architecture with Celery & Redis

Andrés Milla
Flux IT Thoughts
5 min read · Apr 25, 2024

This article explains how Redis was incorporated into the system of a client that specializes in human resources, specifically into a new development flow that is still being worked on.

Context

At the moment, our client is seeking to decouple the user flow from the monolithic app. That flow is currently implemented inside the monolith, where the entire system's logic lives, and because of the client's large number of concurrent users, even simple read-only queries can lead to prolonged downtime and database locks.

To address this issue, we suggested decoupling the user flow into a new service built on an event-driven architecture, specifically using SQS and SNS (AWS services).

Incorporating Redis as a Message Broker

Since an event-driven architecture was chosen, the system’s core revolves around the events emitted by the monolithic app. Therefore, this process must be as optimized as possible to avoid idle time in communication.

This is where Redis comes in as a strong option, given its robust capabilities as a message broker. Redis is an open-source, in-memory data store used by millions of developers as a database, cache, data streaming engine, and message broker.

Previously, our client implemented the SQS listener so that each received event was handled on the same thread. This has several disadvantages:

1. Main thread blockage: when events are handled on the same thread, any time-consuming or blocking operation (such as a network call) delays every other event in the queue. This can slow down the app's responsiveness.

For example, in the attached diagram, the total time for handling all events would be 60 + 60 + 30 = 150 seconds, since events are handled one after the other.

2. Limited scalability: if a large number of events is received, we cannot scale the solution through infrastructure because everything is processed on a single thread. This is a significant issue given the client's high volume of users per minute, which is precisely what prompted the development of a new service.

To address this problem, Celery was incorporated into the project with Redis as the message broker and result back end. This made it possible to dispatch a Celery task for each received event, as can be seen in the diagram below.
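As a rough sketch of what that looks like in code (the module name, queue URL, and Redis addresses are illustrative assumptions, not the client's actual implementation), the SQS listener only enqueues a Celery task per message, with Redis acting as the broker:

# tasks.py: illustrative sketch, not the client's actual code
import json

import boto3
from celery import Celery

# Redis as the message broker (db 0) and result back end (db 1); assumed local instance
app = Celery("tasks", broker="redis://localhost:6379/0", backend="redis://localhost:6379/1")

@app.task
def handle_event(payload):
    # Long-running work (network calls, DB writes) runs in a worker process,
    # not on the listener's main thread.
    ...

def poll_sqs(queue_url):
    sqs = boto3.client("sqs")
    while True:
        response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
        for message in response.get("Messages", []):
            handle_event.delay(json.loads(message["Body"]))  # dispatch asynchronously
            sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])  # simplified acknowledgment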

This addressed the issues described above by moving event handling off the main thread and into tasks associated with each received event. Additionally, Celery facilitates horizontal scalability, enabling tasks to be executed on multiple nodes or processes: multiple workers can run in parallel to handle tasks concurrently, and the number of workers can be adjusted dynamically based on workload demands, which significantly improves the app's scalability.
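In practice, scaling out means starting more worker processes against the same Redis broker; for the sketch above, something like:

celery -A tasks worker --loglevel=info --concurrency=8

The same command can be run on several nodes, and the worker count or concurrency tuned to the current workload.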

Furthermore, it is worth noting that failure recovery is simpler when using Celery, as it includes built-in mechanisms to handle failures and retry tasks in case of errors. We can configure retry policies and handle errors robustly, thus improving the resilience of our system.
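As a hedged example of such a policy (the exception type, task name, and backoff values are placeholders), a Celery task can declare its retry behavior declaratively:

from celery import shared_task
from requests.exceptions import RequestException  # assuming the task calls an external HTTP service

@shared_task(bind=True,
             autoretry_for=(RequestException,),   # retry on transient network failures
             retry_backoff=True,                  # exponential backoff between attempts
             retry_kwargs={"max_retries": 5})     # give up after five attempts
def sync_user(self, user_id):
    # Hypothetical body: push the user's data to an external service
    ...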

Lastly, using Redis as the result back end enables the management and visualization of task execution. This is extremely useful for debugging and observability: by connecting to Redis, we can see the status of each created task and generate associated metrics.
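For instance, with the Redis result back end configured as in the earlier sketch, any dispatched task can be looked up later by its id (the payload is illustrative):

from celery.result import AsyncResult

result = handle_event.delay({"type": "user.updated", "id": 42})

# Later, possibly from another process, query the task's status by id
print(AsyncResult(result.id, app=app).state)  # e.g. PENDING, SUCCESS, FAILURE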

Incorporating Redis as a Caching Mechanism

As mentioned in the "Context" section, the client wanted to reduce database usage, since the database was frequently accessed by a large number of concurrent users.

An effective technique to address this issue is to implement a caching mechanism. The key idea is to temporarily store frequently requested information in a faster location: the cache. This way, the app does not have to query the database every time someone needs that information; the cache acts as a shortcut, allowing the app to quickly retrieve data from memory instead of performing slow database operations.

To achieve this, Redis was configured as a cache instance. Redis is a popular choice for caching due to its speed, versatility, and ability to handle large amounts of data in environments with many concurrent users. Specifically, the https://github.com/jazzband/django-redis library was used to introduce Redis as the cache back end in Django.
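The corresponding Django settings look roughly like this (the Redis URL, database number, and timeout are assumptions for illustration):

# settings.py: assumed values
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://localhost:6379/2",  # separate Redis db from the Celery broker
        "OPTIONS": {"CLIENT_CLASS": "django_redis.client.DefaultClient"},
    }
}
CACHE_TTL = 60 * 15  # default time-to-live of 15 minutes, tuned per endpoint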

This helped reduce database accesses from public endpoints (such as catalogs) and made it possible to store temporary information that takes a significant amount of time to compute.

For example, obtaining a user's resources requires calling a service endpoint, which introduces significant network idle time. With caching, those resources are cached for a specified time on the first request, so subsequent users do not suffer that idle time.
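A minimal sketch of that pattern with Django's low-level cache API (the service call and cache key are hypothetical):

from django.core.cache import cache

def get_user_resources(user_id):
    key = f"user-resources:{user_id}"
    resources = cache.get(key)
    if resources is None:
        # Only the first request pays the network cost; later ones hit Redis
        resources = call_resource_service(user_id)  # hypothetical remote call
        cache.set(key, resources, timeout=60 * 15)
    return resources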

Conclusion

In conclusion, this article covers key strategies for optimizing event processing times in high-concurrency environments. Using Redis as the message broker together with Celery proved effective at managing task execution in a distributed manner, thereby improving the system's responsiveness.

It also explored how integrating Redis as a caching mechanism played a fundamental role in reducing the load on both the database and the network. By temporarily keeping commonly requested data in memory, Redis minimized the need for frequent accesses to the main database, contributing to a significant improvement in overall app performance.

These combined strategies offer a comprehensive solution to the scalability and efficiency challenges of systems with a large number of concurrent users, and they highlight the importance of choosing the right tools, such as Redis, to optimize data and event handling in real time.
