Asynchronous Communication in Microservices

Microservice-oriented architecture provides an ideal platform for continuous delivery and offers increased resilience. Microservices foster faster innovation to adapt to changing market conditions, increase developer productivity, and improve scalability in real time. Each microservice is implemented as an atomic, self-sufficient piece of software, and implementing a microservice architecture often requires making multiple calls to many such single-responsibility, independent pieces.

Though we can use synchronous request/response calls when the requester expects an immediate response, integration patterns based on events and asynchronous messaging provide maximum scalability and resiliency. To build scalable architectures, we need event-driven, asynchronous integration between microservices.

There are many options for asynchronous integration. Some of the most widely used are:

  • Kafka
  • RabbitMQ
  • Google Pub/Sub
  • Amazon Services (SQS, SNS, Amazon MQ)
  • ActiveMQ
  • Azure Services (Service Bus, Event Grid, Event Hubs)

Communication Types

Message Queuing

In this system, messages are persisted in a queue. One or more consumers can consume the messages in the queue, but a particular message can be consumed by at most one consumer. Once a consumer reads a message from the queue, it disappears from that queue. If no consumer is available at the time a message is sent, the message is kept until a consumer is available to process it.

Publish-Subscribe

In the publish-subscribe system, messages are persisted in a topic. Consumers can subscribe to one or more topics and consume all the messages in that topic. In the Publish-Subscribe system, message producers are called publishers and message consumers are called subscribers.
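
To make the distinction concrete, here is a minimal in-process Python sketch of the two delivery semantics. No real broker is involved; the worker and subscriber names are purely illustrative:

```python
from collections import deque

# Message queuing: each message is delivered to at most one consumer,
# and disappears from the queue once it is read.
queue = deque(["m1", "m2", "m3", "m4"])
workers = ["worker-a", "worker-b"]
turn = 0
while queue:
    worker = workers[turn % len(workers)]
    print(f"{worker} consumed {queue.popleft()}")  # message removed on read
    turn += 1

# Publish-subscribe: every subscriber of the topic sees every message.
topic = ["m1", "m2", "m3"]
subscribers = ["analytics", "audit"]
for subscriber in subscribers:
    for message in topic:
        print(f"{subscriber} received {message}")  # message stays on the topic
```

Every broker discussed below implements one or both of these semantics, with persistence and delivery guarantees layered on top.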

Implementations

Kafka:

Kafka is one of the most popular open-source distributed publish-subscribe streaming platforms, able to handle millions of messages per minute. The key capabilities of Kafka are:

  • Publish and subscribe to streams of records
  • Store streams of records in a fault tolerant way
  • Process streams of records as they occur

These features make Kafka a natural choice for use cases such as messaging, website activity tracking, metrics collection, log aggregation, and stream processing.

The key components of the Kafka architecture are: topics, partitions, brokers, producers, consumers, and ZooKeeper.

Core APIs in Kafka include (a short producer sketch follows this list):

  • Producer API allows an application to publish a stream of records to one or more Kafka topics.
  • Consumer API allows an application to subscribe to one or more topics and process the stream of records produced to them.
  • Streams API allows applications to act as a stream processor, consuming an input stream from one or more topics and producing an output stream to one or more output topics, effectively transforming the input streams to output streams. It has a very low barrier to entry, easy operationalization, and a high-level DSL for writing stream processing applications. As such it is the most convenient yet scalable option to process and analyze data that is backed by Kafka.
  • Connect API is a component that you can use to stream data between Kafka and other data systems in a scalable and reliable way. It makes it simple to configure connectors to move data into and out of Kafka. Kafka Connect can ingest entire databases or collect metrics from all your application servers into Kafka topics, making the data available for stream processing. Connectors can also deliver data from Kafka topics into secondary indexes like Elasticsearch or into batch systems such as Hadoop for offline analysis.
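
To give a feel for the Producer API, here is a minimal sketch using the confluent-kafka Python client. The broker address (localhost:9092) and the topic name ("orders") are assumptions for illustration:

```python
# pip install confluent-kafka
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # assumed local broker

def on_delivery(err, msg):
    # Called once the broker acknowledges (or rejects) the record.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}] at offset {msg.offset()}")

# Records with the same key always land in the same partition,
# which is what preserves per-key ordering.
producer.produce("orders", key="customer-42", value="order created", callback=on_delivery)
producer.flush()  # block until all outstanding records are acknowledged
```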

Kafka is written in Scala and Java. It was originally developed at LinkedIn and later donated to the Apache Software Foundation.

Kafka as a Messaging System:

[Figure: Kafka architecture, showing a broker with topics and partitions]

Kafka is a distributed, replicated commit log. Kafka does not have the concept of a queue, which might seem strange at first, given that it is primarily used as a messaging system and queues have been synonymous with messaging for a long time. Let’s break down “distributed, replicated commit log” a bit:

  • Distributed, because Kafka is deployed as a cluster of nodes, for both fault tolerance and scale.
  • Replicated, because messages are usually replicated across multiple nodes (servers).
  • Commit log, because messages are stored in partitioned, append-only logs called topics. This concept of a log is Kafka’s principal killer feature.

Kafka uses a pull model. Consumers request batches of messages from a specific offset. Kafka permits long polling, which prevents tight loops when there is no message past the offset. A pull model is logical for Kafka because of its partitions: Kafka provides message ordering within a partition, with no contending consumers. This allows users to leverage message batching for efficient delivery and higher throughput.

It is a dumb broker / smart consumer model: the broker does not try to track which messages have been read by consumers, and instead keeps all messages for a configurable retention period.
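
Here is the consumer side of that pull model, again a sketch with the confluent-kafka client against an assumed local broker and "orders" topic. Note that the consumer, not the broker, tracks its position in the log via offsets:

```python
# pip install confluent-kafka
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # assumed local broker
    "group.id": "order-processors",          # consumers in a group share partitions
    "auto.offset.reset": "earliest",         # start from the oldest retained message
})
consumer.subscribe(["orders"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)  # pull; blocks up to 1s (long polling)
        if msg is None:
            continue                      # no message past our offset yet
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        print(f"offset {msg.offset()}: {msg.value().decode()}")
finally:
    consumer.close()
```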

RabbitMQ:

The core idea in the messaging model in RabbitMQ is that the producer never sends any messages directly to a queue. Actually, quite often the producer doesn’t even know if a message will be delivered to any queue at all.

Instead, the producer can only send messages to an exchange. An exchange is a very simple thing. On one side it receives messages from producers, and on the other side it pushes them to queues. The exchange must know exactly what to do with a message it receives. Should it be appended to a particular queue? Should it be appended to many queues? Or should it be discarded? The rules for that are defined by the exchange type (direct, topic, headers, and fanout).

[Figure: RabbitMQ architecture]

The super-simplified overview (a publisher-side sketch follows this list):

  • Publishers send messages to exchanges
  • Exchanges route messages to queues and other exchanges
  • RabbitMQ sends acknowledgements to publishers on message receipt
  • Consumers maintain persistent TCP connections with RabbitMQ and declare which queue(s) they consume
  • RabbitMQ pushes messages to consumers
  • Consumers send acknowledgements of success/failure
  • Messages are removed from queues once consumed successfully
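
To make the publisher side concrete, here is a minimal sketch using the pika Python client. The broker address and the exchange, queue, and routing key names are assumptions for illustration:

```python
# pip install pika
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Declare a direct exchange and a durable queue, then bind them together.
channel.exchange_declare(exchange="orders-exchange", exchange_type="direct")
channel.queue_declare(queue="orders", durable=True)
channel.queue_bind(queue="orders", exchange="orders-exchange", routing_key="order.created")

# The publisher only ever talks to the exchange, never to the queue directly.
channel.basic_publish(
    exchange="orders-exchange",
    routing_key="order.created",
    body="order-123",
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()
```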

It is a smart broker / dumb consumer model: the broker delivers messages to consumers consistently, at roughly the rate they can handle, because it monitors consumer state.

RabbitMQ uses a push model. Push-based systems can overwhelm consumers if messages arrive at the queue faster than the consumers can process them. To avoid this, each consumer can configure a prefetch limit (also known as a QoS limit): the number of unacknowledged messages that a consumer can have at any one time. This acts as a safety cut-off switch for when the consumer starts to fall behind, and it makes the push model well suited to low-latency messaging.
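
And the consumer side: a sketch showing the prefetch limit and manual acknowledgements, with the same assumed broker and queue as above:

```python
# pip install pika
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)

# Prefetch (QoS) limit: at most 10 unacknowledged messages in flight,
# so a slow consumer is not overwhelmed by the push model.
channel.basic_qos(prefetch_count=10)

def handle(ch, method, properties, body):
    print(f"processing {body.decode()}")
    ch.basic_ack(delivery_tag=method.delivery_tag)  # removed from queue only on ack

channel.basic_consume(queue="orders", on_message_callback=handle)
channel.start_consuming()  # the broker pushes messages to this consumer
```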

The aim of the push model is to distribute messages individually and quickly, to ensure that work is parallelized evenly and that messages are processed approximately in the order in which they arrived in the queue.

RabbitMQ is written in Erlang. Pivotal develops and maintains RabbitMQ.

Google Cloud Pub/Sub:

Cloud Pub/Sub is a fully-managed real-time messaging service that allows you to send and receive messages between independent applications. You can leverage Cloud Pub/Sub’s flexibility to decouple systems and components hosted on Google Cloud Platform or elsewhere.

Here is an overview of the message flow in the Cloud Pub/Sub system:

  • A publisher application creates a topic in the Cloud Pub/Sub service and sends messages to the topic. A message contains a payload and optional attributes that describe the payload content.
  • The service ensures that published messages are retained on behalf of subscriptions. A published message is retained for a subscription until it is acknowledged by any subscriber consuming messages from that subscription.
  • Cloud Pub/Sub forwards messages from a topic to all of its subscriptions, individually. Each subscription receives messages either by Cloud Pub/Sub pushing them to the subscriber’s chosen endpoint, or by the subscriber pulling them from the service.
  • The subscriber receives pending messages from its subscription and acknowledges each one to the Cloud Pub/Sub service.
  • When a message is acknowledged by the subscriber, it is removed from the subscription’s message queue.

[Figure: Google Cloud Pub/Sub architecture]
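
Here is a minimal sketch of that flow with the google-cloud-pubsub Python client. The project, topic, and subscription names are assumptions, and the topic and subscription are assumed to already exist:

```python
# pip install google-cloud-pubsub
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

project_id, topic_id, subscription_id = "my-project", "orders", "orders-sub"

# Publisher side: send a payload plus optional attributes.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, topic_id)
future = publisher.publish(topic_path, b"order-123", origin="checkout-service")
print(f"published message id {future.result()}")

# Pull subscriber side: receive and acknowledge messages.
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(project_id, subscription_id)

def callback(message):
    print(f"received: {message.data}")
    message.ack()  # acknowledged messages are removed from the subscription

streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
try:
    streaming_pull.result(timeout=30)  # listen for 30 seconds
except TimeoutError:
    streaming_pull.cancel()
```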

Publishers can be any application that can make HTTPS requests to googleapis.com: an App Engine app, a web service hosted on Google Compute Engine or any other third-party network, an installed app for desktop or mobile device, or even a browser.

Pull subscribers can also be any application that can make HTTPS requests to googleapis.com. Push subscribers must be Webhook endpoints that can accept POST requests over HTTPS.

Amazon Services:

Amazon MQ is a managed message broker service for Apache ActiveMQ that makes it easy to set up and operate message brokers in the cloud.

Amazon Simple Queue Service (SQS) is Amazon’s cloud-based message queuing system, made available as part of Amazon Web Services (AWS). Unlike the other brokers mentioned here, there is barely any setup or deployment required for an application to use SQS; all you need is AWS credentials. Since it is SaaS (Software-as-a-Service), it is a much lower-cost option: there is no infrastructure cost, and you only pay for what you use. SQS also has the notion of in-flight messages: a message that has been pulled off the queue is not deleted until the 'delete' command is sent for that message ID. So if your worker dies mid-processing of an event, that event isn’t lost for good.

One drawback of the SQS implementation is the need to poll the queue to find out whether new messages have arrived: you must model your application around a polling cycle.

In other words, it offers serverless queues — you don’t have to pay for the infrastructure, just the messages you send and receive.
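
A minimal polling sketch with boto3 is shown below; the queue URL is illustrative and AWS credentials are assumed to be configured. Note how the message must be deleted explicitly, which is what makes the in-flight behaviour work:

```python
# pip install boto3
import boto3

sqs = boto3.client("sqs")  # assumes AWS credentials are configured
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # illustrative

# Long polling (WaitTimeSeconds) reduces the number of empty responses.
response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)

for message in response.get("Messages", []):
    print(f"processing {message['Body']}")
    # The message stays "in flight" (invisible to other consumers) until we
    # explicitly delete it; if this worker dies first, it becomes visible again.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```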

Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy to set up, operate, and send notifications from the cloud. Amazon SNS follows the “publish-subscribe” (pub-sub) messaging paradigm, with notifications being delivered to clients using a “push” mechanism that eliminates the need to periodically check or “poll” for new information and updates. With simple APIs requiring minimal up-front development effort, no maintenance or management overhead and pay-as-you-go pricing, Amazon SNS gives developers an easy mechanism to incorporate a powerful notification system with their applications.

It is comparable to serverless topics. It will notify your services when a message arrives, but if you’re offline you can miss it. SNS can feed into SQS, so if you have some service that may be up and down, you can guarantee it gets SNS messages by queuing them in SQS for it to consume on its schedule.
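
A minimal boto3 sketch of that SNS-into-SQS fan-out pattern; the topic and queue ARNs are illustrative:

```python
# pip install boto3
import boto3

sns = boto3.client("sns")  # assumes AWS credentials are configured
topic_arn = "arn:aws:sns:us-east-1:123456789012:order-events"  # illustrative
queue_arn = "arn:aws:sqs:us-east-1:123456789012:orders"        # illustrative

# Fan the topic into an SQS queue so an intermittently available service can
# consume the notifications on its own schedule. (The queue also needs a
# policy that allows SNS to send messages to it.)
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# Every subscription (queues, HTTPS endpoints, email, ...) receives this push.
sns.publish(TopicArn=topic_arn, Message="order-123 created")
```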

ActiveMQ:

ActiveMQ is a Java-based open source project developed by the Apache Software Foundation. ActiveMQ makes use of the Java Message Service (JMS) API, which defines a standard for software to use in creating, sending, and receiving messages.

ActiveMQ sends messages between client applications — producers, which create messages and submit them for delivery, and consumers, which receive and process messages. The ActiveMQ broker routes each message through one of two types of destinations:

A queue, where it awaits delivery to a single consumer (in a messaging domain called point-to-point), or a topic, to be delivered to multiple consumers that are subscribed to that topic (in a messaging domain called publish/subscribe, or “pub/sub”).

ActiveMQ gives you the flexibility to send messages through both queues and topics using a single broker. In point-to-point messaging, the broker acts as a load balancer by routing each message from the queue to one of the available consumers in a round-robin pattern. When you use pub/sub messaging, the broker delivers each message to every consumer that is subscribed to the topic.

[Figure: ActiveMQ architecture]
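
As a minimal illustration of the two destination types, here is a sketch using the stomp.py client over ActiveMQ's STOMP connector. Port 61613 and the admin/admin credentials are the ActiveMQ defaults, used here as assumptions:

```python
# pip install stomp.py
import stomp

conn = stomp.Connection([("localhost", 61613)])  # default STOMP port
conn.connect("admin", "admin", wait=True)        # default broker credentials

# Point-to-point: send to a queue; one consumer receives each message.
conn.send(destination="/queue/orders", body="order-123")

# Pub/sub: send to a topic; every subscriber receives each message.
conn.send(destination="/topic/order-events", body="order-123 created")

conn.disconnect()
```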

Azure Services:

Service Bus is a brokered messaging system. It stores messages in a “broker” (for example, a queue) until the consuming party is ready to receive the messages.

Event Grid uses a publish-subscribe model. Publishers emit events, but have no expectations about which events are handled. Subscribers decide which events they want to handle.

Event Hubs is an event ingestor capable of receiving and processing millions of events per second. Producers send events to an event hub via AMQP or HTTPS. Event Hubs also has the concept of partitions to enable specific consumers to receive a subset of the stream. Consumers connect via AMQP.
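
As a small taste of the Service Bus model, here is a sketch using the azure-servicebus Python client (the v7 API); the connection string and queue name are assumptions:

```python
# pip install azure-servicebus
from azure.servicebus import ServiceBusClient, ServiceBusMessage

conn_str = "<your Service Bus connection string>"  # from the Azure portal

with ServiceBusClient.from_connection_string(conn_str) as client:
    # Send a message to a queue (the "broker" holds it until someone is ready).
    with client.get_queue_sender(queue_name="orders") as sender:
        sender.send_messages(ServiceBusMessage("order-123"))

    # Receive and settle messages from the same queue.
    with client.get_queue_receiver(queue_name="orders", max_wait_time=5) as receiver:
        for msg in receiver:
            print(str(msg))
            receiver.complete_message(msg)  # removes the message from the queue
```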

When to use What

Prefer Kafka when your application needs highly scalable messaging: receiving a very high number of events from different sources and delivering them to different clients over a common hub. Message consumers can consume the data as they wish, and re-read and replay messages on demand. For this use case, Apache Kafka’s ability to scale to hundreds of thousands of events per second, delivered in partitioned order to a mix of online and batch clients, is the best fit.

Prefer Kafka if you need your messages to be persisted as well.

Prefer RabbitMQ when you need finer-grained consistency control and guarantees on a per-message basis (dead-letter queues, etc.), when your application needs a mix of point-to-point and publish/subscribe messaging, or when you have complex routing to consumers and are integrating multiple services and apps with non-trivial routing logic.

Hopefully this blog post will help you choose the technology that is right for you. If you have chosen one recently, do let me know why you chose it over the others.