Competing Consumers Pattern Explained

The Competing Consumers pattern enables multiple consumers to compete for messages on the same message channel so that multiple messages can be processed concurrently.

This pattern is useful when you want to process a discrete set of tasks asynchronously by distributing them among parallel consumers. In return, you’ll get a scalable, reliable, and resilient message processing system.

Let’s explore that with an example.

Problem Context

Let’s take an example of a component P requesting component C to perform a task that takes 5 minutes to complete on average.

P to C synchronous invocation

Having synchronous communication between P and C is frowned upon for several reasons. Most importantly, P shouldn’t be blocked until C completes the task. Also, a 5-minute task is far too long to handle within a typical HTTP request window.

As a solution, we can make this communication asynchronous by placing a message queue between P and C. P encapsulates each task as a message and sends it to the message queue. C polls the queue to pick up tasks and processes them asynchronously. Thus, P is not blocked while C is processing a task.
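The idea above can be sketched in a few lines of Python. This is a minimal illustration, not the article's code: `queue.Queue` stands in for a real broker such as RabbitMQ or Amazon SQS, and appending to a list stands in for the 5-minute task.

```python
import queue
import threading

# In-process queue standing in for a real message broker.
task_queue = queue.Queue()
results = []

def producer():
    """P encapsulates each task as a message and enqueues it without blocking."""
    for task_id in range(3):
        task_queue.put({"task_id": task_id})
        # P returns immediately; it never waits for C to finish.

def consumer():
    """C polls the queue and processes tasks at its own pace."""
    while True:
        task = task_queue.get()
        if task is None:                      # sentinel: shut down
            break
        results.append(task["task_id"])       # stand-in for the 5-minute task
        task_queue.task_done()

worker = threading.Thread(target=consumer)
worker.start()
producer()
task_queue.join()                             # all enqueued tasks were processed
task_queue.put(None)                          # tell the consumer to stop
worker.join()
print(results)
```

Because P only enqueues and returns, it stays responsive regardless of how long C takes per task.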

P encapsulates the task as a message and sends it to the queue. C polls the queue and processes the task.

However, having a single instance of C is not scalable. If C goes down, there’s no consumer to replace it and pick up its workload. Also, C needs to keep up with the rate at which P puts messages into the queue. Just imagine: if C needs 5 minutes to complete a task, what happens when 100,000 tasks are waiting in the queue? Draining it would take nearly a year.

How can we scale this up to gain a better throughput, scalability, and availability?


The Solution

The Competing Consumers pattern enables multiple concurrent consumers to process messages received on the same messaging channel.

In our example, we can have multiple instances of C, competing for messages on the same queue. They will concurrently process more messages to drain the queue faster.

When a message is available on the message queue, any of the consumers could potentially receive it. The messaging system’s implementation determines which consumer receives the message, but in effect, the consumers compete with each other to be the receiver.
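The competing behaviour can be sketched with an in-process queue and a pool of identical worker threads (an illustrative stand-in for broker-managed consumers; the names `worker`, `processed`, and `NUM_CONSUMERS` are my own): whichever worker is free takes the next message, so the queue drains roughly `NUM_CONSUMERS` times faster.

```python
import queue
import threading

work_queue = queue.Queue()
processed = {}                  # task_id -> name of the consumer that handled it
lock = threading.Lock()

def worker(name):
    """One consumer instance: repeatedly take the next available message."""
    while True:
        task_id = work_queue.get()
        if task_id is None:     # shutdown sentinel
            break
        with lock:
            processed[task_id] = name

NUM_CONSUMERS = 4
threads = [threading.Thread(target=worker, args=(f"C{i}",))
           for i in range(NUM_CONSUMERS)]
for t in threads:
    t.start()

for task_id in range(20):       # P publishes 20 tasks to the shared queue
    work_queue.put(task_id)
for _ in threads:
    work_queue.put(None)        # one sentinel per consumer
for t in threads:
    t.join()

print(len(processed), "tasks processed")
```

Each message is delivered to exactly one consumer; which one receives it depends on timing, just as the pattern describes.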

The figure illustrates work items distributed among a pool of consumers via a message queue.

The Competing Consumers Pattern

Benefits of the pattern

Distributing asynchronous work items in a consumer pool is beneficial in terms of scalability, reliability, and resiliency.

1. Scalability

The consumer pool can be scaled up or scaled down by looking at the length of the queue. If each consumer runs in a VM, container, or as a serverless function, appropriate auto-scaling measures can be taken to ensure smooth scaling and cost optimisations.
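As a rough illustration of scaling on queue length, here is a hypothetical sizing rule (the function and its parameters are my own, not from the article): pick a pool size that drains the current backlog within a target window, clamped to sensible bounds.

```python
import math

def desired_consumers(queue_length, avg_task_minutes=5,
                      target_drain_minutes=60,
                      min_consumers=1, max_consumers=50):
    """Return how many consumers are needed to drain the backlog in time."""
    needed = math.ceil(queue_length * avg_task_minutes / target_drain_minutes)
    return max(min_consumers, min(needed, max_consumers))

# 120 queued 5-minute tasks, drained within an hour -> 10 consumers
print(desired_consumers(120))
```

A real autoscaler (e.g. one driven by an SQS or RabbitMQ queue-depth metric) would apply a rule of this shape, plus smoothing to avoid thrashing as the queue length fluctuates.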

2. Reliability

If the consumer pool is exhausted (all consumers are occupied or unresponsive), message producers can still put messages in the queue, keeping the system at least partially functional.

The message queue acts as a buffer, absorbing messages until consumers become available to process them. That prevents message loss and supports an at-least-once delivery guarantee.

3. Resiliency

If a consumer fails while processing a message, the message is returned to the queue (typically after a visibility timeout expires or a negative acknowledgement is sent), to be picked up by another consumer.
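The redelivery behaviour can be sketched as follows. This is an illustrative model, not a broker's actual mechanism: a failed message is put back on the queue for another attempt, and `max_retries` is a hypothetical safeguard against a poison message being retried forever (real brokers handle this with acknowledgements, visibility timeouts, and dead-letter queues).

```python
import queue

def process_with_retry(q, handler, max_retries=3):
    """Drain the queue, requeueing failed messages up to max_retries attempts."""
    dead_letters = []
    while not q.empty():
        msg = q.get()
        try:
            handler(msg["body"])
        except Exception:
            msg["attempts"] += 1
            if msg["attempts"] < max_retries:
                q.put(msg)                    # return to the queue for a retry
            else:
                dead_letters.append(msg["body"])  # give up: dead-letter it
    return dead_letters

q = queue.Queue()
q.put({"body": "ok-task", "attempts": 0})
q.put({"body": "bad-task", "attempts": 0})

calls = []
def handler(body):
    calls.append(body)
    if body == "bad-task":
        raise RuntimeError("simulated processing failure")

dead = process_with_retry(q, handler)
print("dead-lettered:", dead)
```

Note that requeueing implies a message may be processed more than once, which is why consumers in this pattern should be idempotent.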

When to use this pattern?

The Competing Consumers pattern is not a silver bullet for every solution that needs multiple consumers processing messages concurrently from the same queue. The reason is the nature of the workload: not all tasks are made equal.

Let’s explore several use cases that would be ideal to use this pattern.

1. The application workload is divided into tasks that can run asynchronously

This pattern works well if the task producer and task consumer communicate asynchronously. That is, the task-producing logic doesn’t have to wait for a task to complete before continuing.

If the task producer expects a response from the task consumer in a synchronous manner, this pattern is not a good option.

2. Tasks are independent and can run in parallel

The tasks should be discrete and self-contained. There shouldn’t be a high degree of dependence between tasks.

3. The volume of work is highly variable, requiring a scalable solution

4. The solution must provide high availability and must be resilient if the processing of a task fails

This makes the pattern ideal for reliable message-processing use cases.





EdU is a place where you can find quality content on event streaming, real-time analytics, and modern data architectures

Dunith Dhanushka

Editor of Event-driven Utopia. Technologist, Writer, Developer Advocate at StarTree. Event-driven Architecture, DataInMotion
