Three Popular Methods to Communicate Between Microservices

arpit jain
Sixt Research & Development India
9 min read · Jan 12, 2020

Nowadays the web is moving from monoliths towards microservices. If you have not yet adopted this architectural pattern, you are likely falling behind the rest of the world!

To have a microservice architecture that works well for your organization, effective and clear communication among distributed components becomes essential. As monoliths are broken down into smaller domain-specific applications, you can afford looser coupling of components along network boundaries. Once those components are defined, the next step is to define how they communicate across the network.

In this article, I will cover some basics to consider before choosing an appropriate communication style for your applications. So, start by asking yourself the questions below:

  • Will your application interact with different types of clients, like a web browser, a mobile client, etc.?
  • Are your APIs going to be private or public?
  • Does your application need something that HTTP cannot do? Currently, HTTP/1.1 is widely used and HTTP/2 is gaining adoption.
  • Does your application need persistent or long-lived connections with other applications?
  • Will your application interact in a synchronous or an asynchronous way?
  • How much payload data will go over the network?
  • What is the scale you are looking for?

Now, let us have a look at the three popular approaches adopted these days — REST, RPC and Event-Driven — and dive deep into what they are.

REST — REpresentational State Transfer

REST is an architectural style for APIs. REST insists on a uniform interface in which every call fundamentally addresses a resource. The resource becomes the domain that holds the data and does not concern itself with functionality; a REST API’s only focus is the data that belongs to a specific domain.

REST is also widely used by a lot of web services and clients because it makes it easier for them to interact with other web services.

Communication over REST often happens using JSON, which is human-readable. This makes it easier for developers to determine whether the client input is sent correctly to the server, and back. HTTP has become the de facto standard for creating uniform REST APIs, and what results is a communication style that inherits the semantics of HTTP.

Hypermedia — layered as text, images, audio, video, graphics — is identified, retrieved, and manipulated through CRUD operations via POST, GET, PUT, and DELETE (we can think of PATCH as a special case of PUT). For more details, please refer to my previous post — REST.

But one of the main advantages of REST is that it does not need a client to be set up. You just make a call to a server address. This even works if you simply paste a REST server address (of a GET endpoint) into your web browser. Other techniques, like gRPC, often require you to set up a client.
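To make this concrete, here is a minimal sketch of a REST-style endpoint written in Go using only the standard library; the Order resource, the route, and the port are hypothetical:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Order is the resource this endpoint exposes.
type Order struct {
	ID     string  `json:"id"`
	Amount float64 `json:"amount"`
}

func main() {
	// GET /orders/42 returns the order as JSON; the other HTTP verbs would
	// map onto the remaining CRUD operations in a fuller implementation.
	http.HandleFunc("/orders/42", func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodGet {
			w.WriteHeader(http.StatusMethodNotAllowed)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(Order{ID: "42", Amount: 99.5})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Once it is running, opening http://localhost:8080/orders/42 in a browser (or with curl) is all the “client” you need.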

Pros of REST:

  • Easy to understand and implement, as it does not require much prior experience or knowledge from its users.
  • REST is the software industry’s de facto “standard” and widely adopted (even though everybody implements it a bit differently).
  • Requests and responses are usually human-readable.
  • REST is supported by a huge number of libraries (both for client and server).

Cons:

  • There is no standard way of defining and implementing things like specific queries, versioning, search endpoints, etc.
  • Streaming over REST is often difficult.
  • There is no standard schema for endpoints — people try to fix this using, e.g., Swagger (OpenAPI) or RAML.

RPC — Remote Procedure Call

The RPC style of communication allows for more specialized semantics but is less opinionated about agreeing on a standard protocol for information exchange. Instead, clients and servers are stubbed so that remote procedure calls look like local procedure calls, only across a network boundary. Rather than accessing remote services by sending and receiving messages, a client invokes a service by making a local procedure call; the local procedure hides the details of the network communication.

The machine making the procedure call is termed the ‘client’ and the machine executing the called procedure is called the ‘server’. For every procedure being called, there must be a piece of code that knows which machine has to be contacted for that procedure. Such code is called a ‘stub’.

On the client side, every procedure that gets called needs its own stub. The stub on the server side, however, can be more general: a single stub can handle more than one procedure.
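To make the stub idea concrete, here is a minimal, self-contained sketch using Go’s standard net/rpc package; the Arith service, the port, and running client and server in one process are all just for illustration:

```go
package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
)

// Args holds the parameters the client sends to the remote procedure.
type Args struct{ A, B int }

// Arith is the service whose procedures the server exposes.
type Arith struct{}

// Multiply is the remote procedure: it fills in *reply for the caller.
func (t *Arith) Multiply(args *Args, reply *int) error {
	*reply = args.A * args.B
	return nil
}

func main() {
	// Server side: register the procedure and accept connections.
	rpc.Register(new(Arith))
	ln, err := net.Listen("tcp", "127.0.0.1:1234")
	if err != nil {
		log.Fatal(err)
	}
	go rpc.Accept(ln)

	// Client side: the *rpc.Client is the stub. Call looks like a local
	// invocation, but it marshals the arguments and ships them over TCP.
	client, err := rpc.Dial("tcp", "127.0.0.1:1234")
	if err != nil {
		log.Fatal(err)
	}
	var product int
	if err := client.Call("Arith.Multiply", &Args{A: 6, B: 7}, &product); err != nil {
		log.Fatal(err)
	}
	fmt.Println("6 * 7 =", product)
}
```

From the caller’s point of view, client.Call is just another function call; all the network plumbing is hidden behind the stub.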

An RPC endpoint is useful for working with a narrow view of the data. This reduces the bandwidth you use on the network and simplifies the service. You can be as specific or as narrow as you want, as long as it does what you need! One of the positives of RPC is that it helps you create services that do one job and do it well.

gRPC, by Google and Square, represents an incremental step in the progress of scaling RPC for cloud solutions. gRPC can use Protocol Buffers for data serialization, which makes payloads smaller, faster, and simpler. Just like REST, gRPC can be used cross-language: if you have written a web service in Golang, an application written in Java can still use that web service, which makes gRPC web services very scalable.

gRPC runs on top of TCP, which means it outsources the problems of connection management and of reliably transmitting request and reply messages of arbitrary size. Second, gRPC runs over a secured channel, Transport Layer Security (TLS), which means it outsources the responsibility of securing the communication. More precisely, gRPC runs on top of HTTP/2 (which is itself layered on top of TCP and TLS), meaning gRPC outsources two further problems: (1) efficiently encoding/compressing binary data into a message, and (2) multiplexing multiple remote procedure calls onto a single TCP connection. It uses binary data rather than plain text, which makes the communication more compact and efficient.

It is also type-safe. This basically means that you can’t pass an apple when a banana is expected: if the server expects an integer, gRPC won’t allow you to send a string, because these are two different types.

The workflow with gRPC is quite simple. First, you define a .proto file describing the services and the request and response formats; then you copy this file into every project that needs to communicate and generate code from it. The only thing left to do is to convert your domain objects to the generated classes. Protocol Buffers are used to define the endpoint schemas. Protocol Buffers, or protobufs, are a way of defining and serializing structured data into an efficient binary format, also developed by Google. Protocol Buffers were one of the main reasons we chose gRPC, as the two work very well together. We previously had many versioning issues that we wanted to fix: microservices mean we roll out changes and updates constantly, so we need interfaces that can adapt and stay forward- and backwards-compatible, and protobufs are very good at this. Since they are in a binary format, the payloads are also small and quick to send over the wire.
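As a sketch of that workflow, a hypothetical order service could be described in a .proto file like the one below; the service and message names are made up for illustration:

```protobuf
// order.proto — a hypothetical service definition.
syntax = "proto3";

package orders;

option go_package = "example.com/orders/orderspb";

service OrderService {
  // A simple unary call; gRPC also supports client-, server- and
  // bidirectional streaming RPCs.
  rpc GetOrder (GetOrderRequest) returns (OrderReply);
}

message GetOrderRequest {
  string id = 1;
}

message OrderReply {
  string id = 1;
  double amount = 2;
}
```

Compiling it with the Go plugins (e.g. protoc --go_out=. --go-grpc_out=. order.proto) generates typed client and server stubs. Assuming the generated package is imported as orderspb, a client call could then look roughly like this:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	"example.com/orders/orderspb" // hypothetical generated package
)

func main() {
	// Plaintext connection for local testing; real traffic would normally
	// use TLS transport credentials instead.
	conn, err := grpc.Dial("localhost:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := orderspb.NewOrderServiceClient(conn)
	resp, err := client.GetOrder(context.Background(), &orderspb.GetOrderRequest{Id: "42"})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("order %s costs %.2f", resp.GetId(), resp.GetAmount())
}
```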

Pros:

  • Great speed because of the binary format.
  • gRPC is supported by many languages (you can, for example, have Java communicate with Python).
  • It supports streaming, both for method parameters and responses.
  • gRPC provides generators which, based on the .proto definition, generate serializers and deserializers.
  • gRPC has built-in support for API changes through Protobuf schema evolution.

Cons:

  • It is less widely known and has a steeper learning curve; users may need to learn Protobuf, etc.
  • Payloads are not human-readable, and additional tools are required to manually test the API.

Event / Message Driven — Asynchronous

This is entirely asynchronous communication: the client does not wait for a response after sending the request, which removes the coupling between services. In this case, we simply raise an event, and it is up to each consumer whether to take action on it or to stay chill.

This is also a very independent solution for internal communication, but it has a downside: the implementation can be complex and time-consuming.

In event-based communication, a microservice publishes an event when something notable happens, such as updating a business entity. Other microservices subscribe to those events. When a microservice receives an event, it can update its own business entities, which may in turn lead to more events being published. This is the essence of the eventual-consistency concept. This publish/subscribe system is usually implemented with an event bus. The event bus can be designed as an interface with the API needed to subscribe and unsubscribe to events and to publish events.
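For instance, a minimal sketch of such an event bus interface in Go could look like this (the names are illustrative, not taken from any particular library):

```go
package eventbus

// Event is something notable that happened in a service, e.g. "OrderUpdated".
type Event struct {
	Name    string
	Payload []byte
}

// Handler reacts to an event that a subscriber receives.
type Handler func(Event)

// EventBus captures the publish/subscribe API described above. A concrete
// implementation could be in-memory for tests, or backed by a broker such
// as Kafka or RabbitMQ in production.
type EventBus interface {
	// Publish fires the event and returns immediately; the publisher does
	// not wait for, or even know about, its consumers.
	Publish(e Event)
	// Subscribe registers a handler for events with the given name and
	// returns a function that removes the subscription again.
	Subscribe(name string, h Handler) (unsubscribe func())
}
```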

Apache Kafka is a popular choice these days. It is a message broker that embraces asynchronous, message-based communication. Modelling processes with asynchronous communication may be a bit complicated, but it has its own advantages: you do not depend directly on other services, so even when they are offline you can still operate, e.g. keep posting messages to Kafka.

Kafka supports various message formats. It of course supports good old JSON, as well as Protobuf. Further, it integrates with the Confluent Schema Registry, so you can keep your message schemas in an external service.

Kafka can also be used with Event Sourcing, where events are distributed among various topics and dependent services build their current view of the data from them.
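To give an idea of what publishing such an event looks like in practice, here is a small sketch using the third-party segmentio/kafka-go client, one of several Go clients for Kafka; the broker address, topic name, and payload are assumptions for illustration:

```go
package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	// Writer publishes events to the "order-events" topic on a local broker.
	w := &kafka.Writer{
		Addr:     kafka.TCP("localhost:9092"),
		Topic:    "order-events",
		Balancer: &kafka.LeastBytes{},
	}
	defer w.Close()

	// The producer does not wait for any consumer; subscribers pick the
	// event up whenever they are online.
	err := w.WriteMessages(context.Background(), kafka.Message{
		Key:   []byte("order-42"),
		Value: []byte(`{"type":"OrderUpdated","orderId":"42","amount":99.5}`),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```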

Pros:

  • Asynchronous communication has big advantages of its own, such as loose coupling and the ability to keep operating when other services are offline.
  • It supports different message formats like JSON, Protobuf etc.

Cons:

  • Modelling an application using messages is more complicated.
  • You need to set up a Kafka cluster.

Conclusion:

A REST endpoint should be treated like a resource that provides domain data. The reward is that you are now segregating data into separate domains, which makes it useful when you have any number of apps requesting data. This approach attempts to decouple data from application or business logic.

If you ever need to break up a service into two simpler services, but you don’t want to completely change how the different parts interact, do not hesitate to reach for an RPC solution. Or, if performance is paramount and you want the option to use many different languages, go for gRPC. RPC-style endpoints are great when you want only one job done well, which makes them useful for one or two app clients, because it is a niche service. RPC endpoints can implement business logic inside the service, given that it only does one thing; this adds simplicity and clarity to the service.

However, the loosely coupled, highly scalable nature of asynchronous, messaging-based systems fits well with the overall ethos of microservices. More often than not, and despite some significant design and implementation hurdles, an event-based messaging approach is a good choice as the default communication mechanism in a microservices-based system.

So this brings us back to the question, which communication approach is best when designing your microservices? It all depends on the requirements!

REST, RPC, and Event/Message driven are not mutually exclusive; they can all work together in your microservice architecture. Every successful cloud-based tech company employs these communication styles effectively to some degree.

We have covered the different cases and circumstances in which each style comes into play and where each is appropriate.

When choosing an approach or style, it is important to know the differences. There is no right or wrong here. What is more important is to know which approach solves the job at hand.

There are a lot of options! However, you don’t need to make a single choice for communication between all your services. Generally, what to choose when?

  • If you need to connect a UI (browser) to your service — choose REST.
  • If you need to provide a public API for your service/product — choose REST.
  • If you need internal services to communicate with each other — try to model your processes using messages (Event/Message Driven); if that is not possible, choose gRPC.
  • If you are dealing with high volumes of messages via HTTP, consider adopting RPC. If you find latency or network saturation to be any sort of bottleneck then this advice applies even more so.

Again, this is my personal opinion, and the best option may vary depending on the application and its unique requirements. I hope I was able to explain my views here clearly. Leave a comment to let me know your thoughts.

Originally published at https://www.linkedin.com.
