Message-Driven Microservices with Spring Cloud Stream

Felipe Assoline
May 5, 2020
Photo by Maximilian Weisbecker on Unsplash

We have been developing applications with a microservices architecture for some time now, but we often don’t think about the complexity that communication between our services generates. Tons of lines of code just to connect services, endless boilerplate, and maintainability difficulties are well-known day-to-day problems when dealing with such architectures.

At Wavy, we have a complex message-oriented architecture, and we use several message brokers to connect our applications: Apache Artemis, Apache Kafka, RabbitMQ, IBM MQ, among many others. All of these have great libraries for developing our services, but even so, using these libraries can be a pain when you work with as many message brokers as we do.

When using many tools like this, we most likely end up writing too much code to maintain all the pipes, and these tools don’t always provide the same features, such as consumer groups or partitioning.

Here’s an example of such communications using Pipes and Filters.

Pipes and Filters standard integration diagram

How can we abstract all of the boilerplate needed for messaging-oriented services, such as consumers, producers, consumer groups, and serializations?

The Spring Cloud Stream

Spring Cloud Stream is a framework that streamlines the creation of message-oriented microservices. Basically, it abstracts all the concepts and boilerplate necessary to create applications in this type of architecture, simply by adding a new dependency to the project.

With Spring Cloud Stream, you can easily use RabbitMQ or Apache Kafka as the glue between all your applications. If you are curious, take a look at this list of some of the most popular binder implementations that Spring Cloud Stream supports.

To add it to your project, we just have to rely on the simplicity of Spring Boot’s convention-over-configuration pattern, which gives much more agility in creating new systems and, last but not least, improves the onboarding of new developers on the team, making it much easier to assimilate all the concepts involved in application communication without having to immediately understand the complexities of the messaging tools being used.

Core Concepts

The Spring Cloud Stream ecosystem is based on some core concepts:

  • Destination binders: components responsible for providing integration with the external messaging systems;

  • Destination bindings: the bridge between the external messaging systems and the application code (producers and consumers);

  • Message: the canonical data structure used by producers and consumers to communicate with destination binders.

The Spring Integration

Before diving into Spring Cloud Stream (SCS), a small introduction to Spring Integration (SI) is advised, since it’s the base framework for the whole SCS concept. There’s no need to master SI to use SCS, but knowing it can make things much easier. SI was developed with the Enterprise Integration Patterns book in mind, mandatory reading for every architect, engineer, and developer who works with integrations between systems.

Spring Cloud Stream in practice

Imagine a scenario similar to what we have at Wavy, where you need to develop a new subscription club. The user requests a new subscription in the frontend, you have to approve their payment with a third-party payment solution, inform them of the result of the approval, and finally send them an email when the payment is approved.

Here’s a diagram demonstrating this flow.

We are going to have 3 microservices for this system:

  • Subscription API: receives the order from a new customer of the subscription club;

  • Payment Service: approves the payment with the third-party payment solution;

  • Email Service: sends the user an email with the result of the payment.

After a talk with your team, you decided to build these microservices in a message-oriented architecture, where each of them will be packaged as a Spring Boot JAR.

Before starting, we have to define which type of applications we are creating. In Spring Cloud Stream we can create 3 types of applications by default, which are represented as interfaces inside the framework:

Source: the applications that initiate a processing flow. They can be an HTTP endpoint, a gRPC service, a consumer of a queue coming from a legacy system or an integration with your client, or even a consumer of an FTP file; this type of application has only one output.

Processor: the applications responsible for processing the messages placed in the stream by Source applications. These applications always have one input and one output on the stream.

Sink: Sink applications always have only one input. They are responsible for finishing the processing, like persisting something in a database, saving a file to an FTP server, or sending a message notifying the client; the possibilities are too numerous to list here.
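For reference, the three built-in binding interfaces look roughly like this (simplified from the Spring Cloud Stream sources):

```java
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;

// Simplified versions of the interfaces shipped with Spring Cloud Stream.
interface Source {
    String OUTPUT = "output";

    @Output(Source.OUTPUT)
    MessageChannel output();      // the single outbound channel
}

interface Sink {
    String INPUT = "input";

    @Input(Sink.INPUT)
    SubscribableChannel input();  // the single inbound channel
}

// A Processor is simply a Source and a Sink combined: one input, one output.
interface Processor extends Source, Sink {
}
```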

These bindings work for almost all our use cases, but what if we want to implement custom bindings, like having more than one consumer or producer? Well, we can write custom bindings as well :) here’s an example of how we can achieve this:
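Since the original gist isn’t embedded here, below is a minimal sketch of such a custom binding interface; the interface and channel names are hypothetical:

```java
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;

// Hypothetical custom binding with two outputs and one input,
// enabled with @EnableBinding(SubscriptionStreams.class).
public interface SubscriptionStreams {

    String ORDERS_OUTPUT = "orders-output";
    String PAYMENTS_INPUT = "payments-input";
    String NOTIFICATIONS_OUTPUT = "notifications-output";

    @Output(ORDERS_OUTPUT)
    MessageChannel ordersOutput();

    @Input(PAYMENTS_INPUT)
    SubscribableChannel paymentsInput();

    @Output(NOTIFICATIONS_OUTPUT)
    MessageChannel notificationsOutput();
}
```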

To be able to use all these interfaces, Spring Cloud Stream reads the string values of the @Input and @Output annotations to figure out which configurations to look for in the properties file.
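For instance, the built-in Source interface uses the string "output", so the framework resolves binding properties under that name; a sketch with assumed destination names:

```properties
# "output" is the string value of the @Output annotation in the Source interface.
spring.cloud.stream.bindings.output.destination=subscription-orders
# A custom channel named "payments-input" would be configured the same way:
spring.cloud.stream.bindings.payments-input.destination=payments
```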

Let’s get back to our solution.

Creating the Source: The Subscription API

This application will be the gateway to our processing flow: a REST API that receives an order and sends it to the PaymentService to approve the payment.

The magic happens with the @EnableBinding({Source.class}) annotation and the injection of a Source object, which has all the methods that allow us to produce messages for our binder.
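The original gist isn’t embedded here, so this is a minimal sketch of what such a controller could look like; the class, endpoint, and DTO names are assumptions:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical REST entry point that publishes each new order to the stream.
@RestController
@EnableBinding(Source.class)
public class SubscriptionController {

    @Autowired
    private Source source;

    @PostMapping("/subscriptions")
    public void subscribe(@RequestBody SubscriptionRequest request) {
        // Source.output() is the single outbound channel of a Source application.
        // SubscriptionRequest is a plain DTO (not shown).
        source.output().send(MessageBuilder.withPayload(request).build());
    }
}
```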

Creating the Processor: The Payment Service

In our example, the processor is responsible for approving the payment requested by the Source.

Here we can use another resource that Spring Cloud Stream offers to facilitate our development: simply adding a header to the message that indicates its priority, which will be used by the Email Service later.
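A sketch of what this processor could look like, since the gist isn’t embedded here; the DTO names and the payment-provider call are assumptions:

```java
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.messaging.Message;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.messaging.support.MessageBuilder;

// Hypothetical processor: consumes orders, approves the payment,
// and forwards the result with a "priority" header.
@EnableBinding(Processor.class)
public class PaymentService {

    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public Message<PaymentResult> approve(SubscriptionRequest request) {
        // Call to the third-party payment provider (not shown).
        PaymentResult result = callPaymentProvider(request);
        return MessageBuilder.withPayload(result)
                .setHeader("priority", result.isApproved() ? "high" : "low")
                .build();
    }
}
```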

Creating the Sink: The Email Service

Our third application, the solution’s sink, will be the service that sends an email to the user after the payment has been processed by the PaymentService.

Notice that, based on the header added in the previous application, we can separate all our consumers, routing their business logic based on the content of the messages.
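A sketch of this header-based routing, assuming the hypothetical "priority" header and DTO name from before; @StreamListener’s condition attribute takes a SpEL expression over the message:

```java
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

// Hypothetical sink: routes messages to different handlers based on the
// "priority" header set by the PaymentService.
@EnableBinding(Sink.class)
public class EmailService {

    @StreamListener(target = Sink.INPUT, condition = "headers['priority']=='high'")
    public void sendApprovalEmail(PaymentResult result) {
        // Send the "payment approved" email (not shown).
    }

    @StreamListener(target = Sink.INPUT, condition = "headers['priority']=='low'")
    public void sendRejectionEmail(PaymentResult result) {
        // Send the "payment rejected" email (not shown).
    }
}
```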

An entire demonstration code can be found here.

With just a few lines, we created an entire solution without worrying about communication between services and all the boilerplate involved.

Consumer groups

At Wavy we process millions of messages in a single day, so when dealing with microservices, we need to do so in a scalable way.

To allow countless instances of your application to consume the same topic without duplicating messages, we need to use a concept named consumer groups, which is natively supported by the standard Kafka API and very well abstracted by the framework when we are using RabbitMQ.

On the other hand, if you want messages to be consumed by all applications connected to that binder, you need to use a different group for each application.

A good practice here is to always set up a group, because without one, as stated above, you will get duplicate messages when the need to scale arises.

The properties configuration for Spring Cloud Stream is the same regardless of the messaging tool used (Kafka or RabbitMQ).
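A minimal sketch of such a configuration in application.properties (destination and group names are assumptions):

```properties
# All instances sharing the same group receive each message exactly once;
# without a group, every instance gets its own copy of every message.
spring.cloud.stream.bindings.input.destination=payments
spring.cloud.stream.bindings.input.group=email-service
```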

Below are some examples from the RabbitMQ management interface of how the specific queues for each system look with and without group settings.

Queues at RabbitMQ for consumer applications without using Group configurations, that is, anonymous groups.
Queues at RabbitMQ for consumer applications using Group configurations.

Serialization and Avro Schemas

When dealing with message-driven applications, the speed of serialization and the size of the messages become very important.

In our demo, we worked with JSON to exchange information, but JSON, besides being very heavy to process (since it carries all of the object’s metadata and, in many cases, repeats it countless times), also has a high serialization time.

This is where Avro can save us. Spring already provides Avro as a serialization system tightly integrated with the Cloud Stream ecosystem.

As we can see on the project’s website, Apache Avro is defined as a data serialization system. Much like Google Protocol Buffers (though differing from it in several aspects), it basically offers you rich data structures in a binary format that can be serialized quickly and with a very small size.

Let’s create an avsc file (Avro Schema) for our SubscriptionRequest:
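The original schema isn’t embedded here, so the namespace and field names below are assumptions; a SubscriptionRequest schema could look like this:

```json
{
  "namespace": "com.example.subscription",
  "type": "record",
  "name": "SubscriptionRequest",
  "fields": [
    { "name": "userId", "type": "string" },
    { "name": "planId", "type": "string" },
    { "name": "amount", "type": "double" }
  ]
}
```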

You can take a look at Avro’s documentation to better understand this file schema :)

When working with Avro, we have to configure our application as well.
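The original configuration gist isn’t embedded here; a sketch of what it might contain (the endpoint and binding name are assumptions):

```properties
# Serialize outbound messages with Avro instead of JSON.
spring.cloud.stream.bindings.output.contentType=application/*+avro
# Where the schema registry client should look up schemas
# (8990 is the default port of the Spring schema registry server).
spring.cloud.stream.schemaRegistryClient.endpoint=http://localhost:8990
```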

You may have noticed a configuration for the Schema Registry; this service is responsible for storing your schemas in a repository outside your message flow.

With all that, the only concern in Spring Cloud Stream is declaring which version of the schema will be used, without having to transport it with the message.

Spring Cloud Stream natively supports 2 Schema Registries: the Spring standard one, which is very simple to create using a Spring Boot application as a base (by simply adding @EnableSchemaRegistryServer to your application class), and the Confluent Schema Registry, which is also supported by the native Kafka API and lets you use all the features of Kafka, such as KSQL, with much smaller messages than if using JSON.

Below is an example of how to create a Schema Registry Server with Spring Boot:
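A minimal sketch, assuming the spring-cloud-schema-registry-server dependency is on the classpath (the annotation’s package varies between older and newer Spring Cloud releases):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.schema.registry.server.EnableSchemaRegistryServer;

// A standalone Schema Registry server is just a Spring Boot application
// with one extra annotation.
@SpringBootApplication
@EnableSchemaRegistryServer
public class SchemaRegistryApplication {

    public static void main(String[] args) {
        SpringApplication.run(SchemaRegistryApplication.class, args);
    }
}
```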

Final considerations

I hope this quick overview of Spring Cloud Stream has brought you a greater understanding of the framework. With it, we can create an entire message-oriented microservice architecture with minimal code, making the most of the convention over configuration provided by the framework and focusing all your efforts, and your team’s, only on code that brings real value to you and your business :)

Credits for translation: @gabrielcencic

Sinch Blog

Stories about how we create incredible experiences for people through technology

Chatbots, Conversational Commerce, Artificial Intelligence, Products, and Engineering are our passions. See stories by Sinchers about them and our culture. Dream Big, Win Together, Keep It Simple, and Make It Happen: this is how we create experiences.

Felipe Assoline

Written by

Felipe Assoline is a Lead Software Engineer at Wavy Global. He has developed systems for the Health, Education, and Telecom industries, most of them using the JVM and open-source technologies.
