Event-Driven Microservices using Spring Cloud Stream and Web Sockets

PART 1: THE FUNDAMENTAL CONCEPTS

Attyuttam Saha
CodeX
9 min read · Jul 30, 2021


As programming enthusiasts, we all get pretty hyped about the new technologies that seem to arrive endlessly, and one such buzz is Event-Driven Microservices, which makes your system more decoupled and your microservices more independent. This sounds pretty cool, but implementing and understanding it can be a real pain. So, I created a simple full-stack application that serves you real-time news using this concept, and my job was made easier thanks to the beautiful libraries provided by Spring and React JS.

These are the primary things that I will be talking about in this post:

  • Microservices
  • Event Driven Architecture
  • Queues and topics
  • Docker basics
  • Spring Cloud Stream
  • Web Sockets using STOMP (Simple Text Oriented Messaging Protocol)

My primary goal is to build an event-driven architecture for communication between microservices, and I will be using Spring Boot and React JS for this purpose. I assume that you have a pretty good understanding of Spring Boot; a tiny bit of React JS knowledge will suffice.

Microservices

According to Wikipedia:

A microservice is not a layer within a monolithic application (example, the web controller, or the backend-for-frontend). Rather it is a self-contained piece of business functionality with clear interfaces, and may, through its own internal components, implement a layered architecture. From a strategy perspective, microservices architecture essentially follows the Unix philosophy of “Do one thing and do it well”.

The major advantages of microservices are:

  • Microservices are highly maintainable
  • Can easily be tested
  • Microservices are loosely coupled as they are independent services dedicated towards performing a single well-defined task and each microservice can be deployed independently.

Using microservices not only ensures that the delivery of a complex application happens smoothly and quickly over several iterations, but also that the application can remain technology independent: changing from one tech stack to another does not become a nightmare, which helps an application always remain up to date.

Event Driven Architecture

According to Wikipedia:

Event-driven architecture (EDA) is a software architecture paradigm promoting the production, detection, consumption of, and reaction to events.

Speaking in simple terms, when we talk about EDA, we can picture multiple microservices that need to talk to each other, and they do this by using something called events.

Now, what are events?

An event can be defined as an indicator of a change of state, so a microservice acts upon a change that it receives from another microservice. For example, when you click a button, an action is performed; the click can be considered an event.
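To make this a little more concrete, an event in our news application could be modelled as a small, immutable payload describing what changed. The sketch below is purely illustrative; the type and field names are assumptions, not the actual classes of the project (it also assumes Java 16+ for records):

```java
// Hypothetical sketch: an event as an immutable payload describing a change of
// state, here a newly published news headline.
public record NewsEvent(String headline, String source, long publishedAtMillis) { }
```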

According to RedHat, the benefits of event-driven architecture are:

An event-driven architecture can help organizations achieve a flexible system that can adapt to changes and make decisions in real time. Real-time situational awareness means that business decisions, whether manual or automated, can be made using all of the available data that reflects the current state of your systems.

I will provide a diagram to visualize this right after the next topic.

Queues and Topics

Messaging queues and topics are basically components into which we can put messages and from which we can read messages, but they differ in some fundamental characteristics.

Let us visualize the same:

A message put into a queue can only be read by a single consumer

So, in the case of queues, when a producer puts a message into the queue, only one consumer reads that message: even if several consumers listen on the same queue, each message is delivered to exactly one of them.

A message put onto a topic can be read by multiple consumers

On the other hand, a message put onto a topic by the producer can be read by several consumers, as multiple consumers can subscribe to a single topic.
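To make the queue-versus-topic distinction a bit more concrete, here is a minimal sketch using the plain Kafka consumer API (which Spring Cloud Stream will later abstract away for us). The topic name indian-news and the group id are assumptions for illustration: consumers that share a group id compete for messages, which gives queue-like behaviour, while consumers in different groups each receive every message, which gives topic-like behaviour.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class NewsConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // Consumers sharing this group.id compete for messages (queue behaviour);
        // consumers with different group.ids each receive every message (topic behaviour).
        props.put("group.id", "news-readers");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("indian-news"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println("Received: " + record.value());
                }
            }
        }
    }
}
```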

The entire picture of Event Driven Microservices

Now we have an idea of events, how they are meant to be used in the event-driven architecture paradigm, and the fundamental concepts of queues and topics.

If we recall, in event-driven microservices, a microservice produces an event; when the desired microservice(s) receive it, they perform some operation and may or may not produce a further event. This is the basic law of interaction in the event-driven architecture paradigm.

So, we use queues and topics to enable the exchange of events between microservices. Since the microservices are subscribed to queues and topics, whenever an event is put onto the messaging system, the subscribed microservice is notified right away and performs the desired activity without any external invocation.

Now, according to RedHat, EDA may be based on either the pub/sub model or the event-streaming model. As this project follows the pub/sub model, let us visualize the pub/sub model of EDA (the indian-news application that we are going to build):

The Indian News Application based on Event Driven Architecture

So now, as we will be diving further into the workings of the application, it is a good time to talk about the basics of Docker, Spring Cloud Stream and WebSockets.

Docker

Docker can be thought of as a version control system for your app’s operating system. Using Docker, you can run your application on your laptop in an environment that has the same configuration as the server on which the app will run in production.

As I am pretty new to this technology myself, I will talk in terms of a beginner’s understanding. Keeping that in mind, we can simply think of Docker as something we use when we want to run an application or a tool in an isolated environment with all its dependencies installed, without bothering to set up those dependencies in our local environment.

Please note that Apache Kafka can act as a queue as well as a topic; for more detail, please check out this awesome article by Abhishek Gupta.

In my case, as my app needed a Kafka topic, Docker helped me run the broker hosting the Kafka topic in an isolated environment without having to set up any configuration on my system.

So, I believe you can now see why I had to use Docker in my project.

Before moving on to the usage of Docker, it is essential to understand two Docker objects:

  • Container Image
  • Container

Container Image

According to Docker’s documentation:

A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.

A container image is basically a template that provides all the details of the software, configuration, tools, etc., and this image can be executed to produce an isolated filesystem.

Container

According to Docker’s documentation:

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.

A container is simply another process in your machine that has been isolated from all the other processes in your machine.

Available for both Linux and Windows-based applications, containerized software will always run the same, regardless of the infrastructure. Containers isolate software from its environment and ensure that it works uniformly despite differences for instance between development and staging.

Relation between Container and Container Image

The definitions, albeit pretty solid, are difficult to understand on their own. They become a bit simpler when you get to know the relation between the two.

When running a container, it uses an isolated filesystem. This custom filesystem is provided by a container image. Since the image contains the container’s filesystem, it must contain everything needed to run an application — all dependencies, configuration, scripts, binaries, etc. The image also contains other configuration for the container, such as environment variables, a default command to run, and other metadata.

I use the following three points to relate images and containers:

  • An Image is a read-only template with instructions for creating a Docker container.
  • A Container is the runnable instance of an Image.
  • Images become Containers at runtime; in the case of Docker, Images become Containers when they run on Docker Engine.

Please Note: The terms Container Image and Image have been used interchangeably but both represent the same thing.

Docker Compose

Although we can build containers and run them one command at a time, the easiest way is to use Docker Compose: all we need to do is create a YAML file, and then with a single command we can bring up an entire environment and tear it down again.

The advantage is that all your requirements for the environment are documented in a single file, and others can contribute to it as well.

Now we have understood the basics of Docker, but as this post is more focused on explaining the event-driven architecture, I will not delve into the Docker setup, the explanation of the YAML file for Docker, etc. Feel free to comment with your thoughts and doubts if anything remains unclear when I explain the code in Part 2 of this post.

Spring Cloud Stream

According to the Spring docs:

Spring Cloud Stream is a framework for building highly scalable event-driven microservices connected with shared messaging systems.

The framework provides a flexible programming model built on already established and familiar Spring idioms and best practices, including support for persistent pub/sub semantics, consumer groups, and stateful partitions.

For a simple understanding of Spring Cloud Stream (SCSt), Oleg Zhurakousky sums it up pretty well in his blog:

SCSt has always been about pure microservices and binding them to sources and targets of data (i.e., messaging systems) . Simple as that.

If you abstract yourself far enough from knowing the internals of SCSt, you quickly realize that it is really a binding and activation framework. It binds a piece of code (provided by the user) to source/target of data exposed by the binder and activates such code according to binder implementation (for example, message arrival and so on). That is pretty much it.

So, SCSt is basically a framework provided by Spring that abstracts away the boilerplate code needed to set up the messaging service, so that the user can focus only on the business logic and write to or read from the queue/topic without writing much code to set up the messaging system. SCSt is thus a flexible messaging abstraction that takes care of the complex messaging platform integration so you can concentrate on writing simple, clean business logic.

The core building blocks of Spring Cloud Stream are:

  • Destination Binders: Components responsible to provide integration with the external messaging systems.
  • Destination Bindings: Bridge between the external messaging systems and application code (producer/consumer) provided by the end user.
  • Message: The canonical data structure used by producers and consumers to communicate with Destination Binders (and thus other applications via external messaging systems).

We will be using the Apache Kafka binder in our application to interact with the Kafka topic.
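To give a feel for how little code this takes, here is a minimal sketch of Spring Cloud Stream’s functional programming model. The bean names, and therefore the binding names newsSupplier-out-0 and newsConsumer-in-0, are assumptions for illustration rather than the actual code of the indian-news application; the bindings are mapped to a Kafka topic through configuration properties such as spring.cloud.stream.bindings.newsConsumer-in-0.destination.

```java
import java.util.function.Consumer;
import java.util.function.Supplier;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class NewsStreamApplication {

    public static void main(String[] args) {
        SpringApplication.run(NewsStreamApplication.class, args);
    }

    // Producer side: the framework polls this Supplier and publishes each value
    // to the destination bound to "newsSupplier-out-0".
    @Bean
    public Supplier<String> newsSupplier() {
        return () -> "breaking news at " + System.currentTimeMillis();
    }

    // Consumer side: invoked whenever a message arrives on the destination
    // bound to "newsConsumer-in-0".
    @Bean
    public Consumer<String> newsConsumer() {
        return headline -> System.out.println("Received headline: " + headline);
    }
}
```

Note that when an application declares more than one functional bean like this, Spring Cloud Stream also needs a spring.cloud.function.definition property (for example newsSupplier;newsConsumer) to know which functions to bind.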

Web Sockets using STOMP

The WebSocket support provided by Spring gives us a full-duplex, two-way, persistent communication channel between a web browser and a server. The connection established by a WebSocket remains open until the client or the server decides to close it.

WebSocket is a thin, lightweight layer above TCP, which makes it suitable for using subprotocols to embed messages. One of the subprotocols that we will use to establish the two-way connection is STOMP.

According to Spring docs:

STOMP is a simple text-oriented messaging protocol that was originally created for scripting languages such as Ruby, Python, and Perl to connect to enterprise message brokers. It is designed to address a subset of commonly used messaging patterns. STOMP can be used over any reliable 2-way streaming network protocol such as TCP and WebSocket. Although STOMP is a text-oriented protocol, the payload of messages can be either text or binary.

You can learn more about WebSockets and STOMP from the Spring docs.
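As a taste of what this looks like on the Spring side, here is a minimal sketch of a STOMP-over-WebSocket configuration. The endpoint and destination prefixes (/news-socket, /topic, /app) are assumptions for illustration, not necessarily what the indian-news application uses.

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        // HTTP endpoint the browser connects to before upgrading to a WebSocket;
        // SockJS provides a fallback for browsers without WebSocket support.
        registry.addEndpoint("/news-socket").setAllowedOriginPatterns("*").withSockJS();
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        // Destinations starting with /topic are handled by the in-memory simple broker
        // and broadcast to every subscribed client.
        registry.enableSimpleBroker("/topic");
        // Messages sent by clients to /app/... are routed to @MessageMapping methods.
        registry.setApplicationDestinationPrefixes("/app");
    }
}
```

With such a configuration, a service that consumes a news event could push it to every connected browser via SimpMessagingTemplate.convertAndSend("/topic/news", payload), and the React client would subscribe to /topic/news over STOMP.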

End of Part 1

This part focuses on the fundamental concepts that I will be using to build the application, and since there was quite a lot to explain even for the simple basics, I decided to dedicate a single post to them. Please feel free to comment with your thoughts or if you find any discrepancies.

Go to Part 2.
