gRPC Intro, concepts and insights (1/3)

Anderson
6 min read · Jan 9, 2023


Hello, I want to share some of my current learning about gRPC. Why did I choose this topic?

It’s because gRPC is a real game changer for microservices communication, which is a common architecture today, and I want to contribute to the community’s move toward the HTTP/2 protocol.

This is the first publication of the series, and it is purely theoretical; the second part will be an implementation in Java/Kotlin using Spring Boot.

What is gRPC?

According to the official site:

gRPC is a modern open source high performance Remote Procedure Call (RPC) framework that can run in any environment. It can efficiently connect services in and across data centers with pluggable support for load balancing, tracing, health checking and authentication. It is also applicable in last mile of distributed computing to connect devices, mobile applications and browsers to backend services.

So, a modern open source, high performance Remote Procedure Call (RPC) framework. What is that?

Remote Procedure Call (RPC) is a software communication protocol that one program can use to request a service from a program located on another computer on a network, without having to understand the network’s details. RPC is used to call procedures on remote systems as if they were on the local system. A procedure call is also sometimes known as a function call or a subroutine call.

The interface definition language (IDL) — the specification language used to describe a software component’s application programming interface (API) — is commonly used in Remote Procedure Call software. In this case, IDL provides a bridge between the machines at either end of the link that may be using different operating systems (OSes) and computer languages.

This is a fairly old protocol, the foundations of which began to appear in the 1970s.

Why has it gained so much popularity in recent years?

There are plenty of reasons why RPC frameworks are popular, such as:

  • Abstraction is easy (it’s just a function call)
  • Code can be generated for pretty much any language
  • Supported in a lot of languages
  • Performance: data is binary and efficiently serialized (small payloads)
  • Plain HTTP calls are often confusing
  • Very convenient for transporting a lot of data
  • Protocol Buffers are language agnostic and allow for easy API evolution using rules
  • Uses the HTTP/2 protocol

And all of the reasons above roll up into the one major reason why gRPC is so popular: microservices are very popular.

Then, what makes gRPC great if it is based on an older concept like RPC, and how is it able to perform so well, as I mentioned before? It’s because it uses the HTTP/2 protocol.

HTTP/2

HTTP/2 is the second version of the HTTP protocol, aiming to make applications faster, simpler, and more robust by addressing many of the drawbacks of the first HTTP version.

The primary goals of HTTP/2 are:

  • Enable request and response multiplexing
  • Header compression
  • Compatibility with the methods, status codes, URIs, and header fields defined by the HTTP/1.1 standard
  • Optimized prioritization of requests, making sure that loading for optimal user experience is as fast as possible
  • Support for server-side push
  • Server-side backwards compatibility, making sure servers can still serve clients only supporting HTTP/1.1 without any changes
  • Transforming to a binary protocol from the text-based HTTP/1.1

Protocol Buffers

As we know, the IDL (interface definition language) is the bridge for communication between client and server.

Protobuf is the IDL for gRPC.

Protocol buffers provide a language-neutral, platform-neutral, extensible mechanism for serializing structured data in a forward-compatible and backward-compatible way. It’s like JSON, except it’s smaller and faster, and it generates native language bindings.

You define a file with the .proto extension which contains the specification of your service, like:

syntax = "proto3";

message Person {
optional string name = 1;
optional int32 id = 2;
optional string email = 3;
}

service PersonService {
rpc save (Person) returns (Person) {};
}
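
As a quick taste of the generated native language bindings, here is a minimal client-side sketch in Java. It is my own illustration, not code from the article: it assumes protoc with the grpc-java plugin generated the Person and PersonServiceGrpc classes from the .proto above (with option java_multiple_files = true), and that a PersonService server is already listening on localhost:50051.

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class PersonClient {
  public static void main(String[] args) {
    // Plaintext channel, fine for local experiments only
    ManagedChannel channel = ManagedChannelBuilder
        .forAddress("localhost", 50051)
        .usePlaintext()
        .build();

    // Blocking stub generated from the PersonService definition
    PersonServiceGrpc.PersonServiceBlockingStub stub =
        PersonServiceGrpc.newBlockingStub(channel);

    // The generated message class gives us a type-safe builder
    Person request = Person.newBuilder()
        .setName("Test")
        .setId(1)
        .setEmail("test@example.com")
        .build();

    // The remote call reads like a plain local method call
    Person saved = stub.save(request);
    System.out.println("Saved person: " + saved.getName());

    channel.shutdown();
  }
}

This is the RPC promise in practice: the network plumbing stays hidden behind what looks like an ordinary method call.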

Protocol Buffers is used to define the:

  • Messages (data, request and response)
  • Service (Service name and RPC endpoints)

Why use Protocol Buffers over JSON for communication?

JSON payload: 50 bytes

{
  "age": 35,
  "firstName": "Test",
  "lastName": "Result"
}

Protocol Buffers: 20 bytes

message Person {
  int32 age = 1;
  string firstName = 2;
  string lastName = 3;
}

Overview:

  • Saves network bandwidth (the binary payload is smaller)
  • Parsing JSON is actually CPU intensive (because the format is human readable)
  • Parsing Protocol Buffers (a binary format) is less CPU intensive because it’s closer to how a machine represents data
  • By using gRPC, the use of Protocol Buffers means faster and more efficient communication, which is friendly to mobile devices with slower CPUs
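
To make the comparison concrete, here is a small Java sketch of my own (not from the protobuf docs). It assumes a Person class generated by protoc from the message above and writes the JSON as a plain string to avoid extra dependencies; the exact byte counts depend on the values, but the binary encoding stays smaller because field names are replaced by tag numbers.

import java.nio.charset.StandardCharsets;

public class PayloadSizeDemo {
  public static void main(String[] args) {
    // Protobuf: binary encoding of the generated Person message
    Person person = Person.newBuilder()
        .setAge(35)
        .setFirstName("Test")
        .setLastName("Result")
        .build();
    byte[] protoBytes = person.toByteArray();

    // JSON: the equivalent human-readable payload as UTF-8 text
    String json = "{\"age\":35,\"firstName\":\"Test\",\"lastName\":\"Result\"}";
    byte[] jsonBytes = json.getBytes(StandardCharsets.UTF_8);

    System.out.println("protobuf bytes: " + protoBytes.length);
    System.out.println("json bytes: " + jsonBytes.length);
  }
}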

Streaming API

There are four types of API that can be implemented in gRPC:

  • Unary: the classic HTTP request/response; the client sends one request and gets back one response
  • Server streaming: the client sends a single request and the server can send back multiple responses
  • Client streaming: the client sends multiple requests and the server sends back a single response
  • Bi-directional streaming: both client and server send sequences of messages independently, without waiting for each other’s responses
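
To see how the four types look in code, here is a hedged Java sketch of a server implementation. The service, its StreamingDemoGrpc base class and the Request/Reply messages are hypothetical (they are not part of this article’s .proto); the method shapes, however, are what grpc-java generates for each RPC type, with every response flowing through a StreamObserver.

import io.grpc.stub.StreamObserver;

public class StreamingDemoService extends StreamingDemoGrpc.StreamingDemoImplBase {

  // Unary: one request in, one onNext + onCompleted out
  @Override
  public void unaryCall(Request req, StreamObserver<Reply> responseObserver) {
    responseObserver.onNext(Reply.getDefaultInstance());
    responseObserver.onCompleted();
  }

  // Server streaming: one request in, many onNext calls before onCompleted
  @Override
  public void serverStream(Request req, StreamObserver<Reply> responseObserver) {
    for (int i = 0; i < 3; i++) {
      responseObserver.onNext(Reply.getDefaultInstance());
    }
    responseObserver.onCompleted();
  }

  // Client streaming: the server returns an observer that consumes the client’s
  // messages and answers exactly once when the client finishes
  @Override
  public StreamObserver<Request> clientStream(StreamObserver<Reply> responseObserver) {
    return new StreamObserver<Request>() {
      @Override public void onNext(Request req) { /* accumulate */ }
      @Override public void onError(Throwable t) { }
      @Override public void onCompleted() {
        responseObserver.onNext(Reply.getDefaultInstance());
        responseObserver.onCompleted();
      }
    };
  }

  // Bi-directional streaming: same shape, but the server may answer each
  // incoming message immediately, without waiting for the stream to end
  @Override
  public StreamObserver<Request> bidiStream(StreamObserver<Reply> responseObserver) {
    return new StreamObserver<Request>() {
      @Override public void onNext(Request req) {
        responseObserver.onNext(Reply.getDefaultInstance());
      }
      @Override public void onError(Throwable t) { }
      @Override public void onCompleted() { responseObserver.onCompleted(); }
    };
  }
}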

Interceptor

gRPC supports the usage of interceptors for its requests/responses. An interceptor intercepts the messages and allows you to modify them.
Does that sound familiar? If you have been playing around with HTTP processing in REST APIs, it’s basically similar to what is usually called middleware.
gRPC libraries usually support this and make it easy to implement.
Interceptors are usually used to:

  • Modify the request/response before it is passed on. This can be used to attach mandatory information before the message is sent to the client/server.
  • Manipulate each call, for example by adding logging to track response times.
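
As a hedged example of the second use case, here is a minimal grpc-java ServerInterceptor (my own sketch, not the article’s code) that logs the full method name and how long each call took. It could be registered with ServerInterceptors.intercept(service, new TimingInterceptor()) when building the server.

import io.grpc.ForwardingServerCall.SimpleForwardingServerCall;
import io.grpc.Metadata;
import io.grpc.ServerCall;
import io.grpc.ServerCallHandler;
import io.grpc.ServerInterceptor;
import io.grpc.Status;

public class TimingInterceptor implements ServerInterceptor {

  @Override
  public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
      ServerCall<ReqT, RespT> call,
      Metadata headers,
      ServerCallHandler<ReqT, RespT> next) {

    long startNanos = System.nanoTime();
    String method = call.getMethodDescriptor().getFullMethodName();

    // Wrap the call so we can observe the moment the response is closed
    ServerCall<ReqT, RespT> timedCall = new SimpleForwardingServerCall<ReqT, RespT>(call) {
      @Override
      public void close(Status status, Metadata trailers) {
        long elapsedMs = (System.nanoTime() - startNanos) / 1_000_000;
        System.out.println(method + " finished with " + status.getCode()
            + " in " + elapsedMs + " ms");
        super.close(status, trailers);
      }
    };

    return next.startCall(timedCall, headers);
  }
}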

Load Balancing

If you aren’t familiar with the term load balancing, in short it is a mechanism that spreads client requests across multiple servers.
Load balancing is usually done at the proxy level (e.g. Nginx). So why am I talking about this here?
The reason is that gRPC itself supports client-side load balancing. The implementation is already in the library (at least in Golang) and it can be used with ease.
The implementation itself is not black magic: the client uses some sort of DNS resolver to get the list of IPs and applies its own load-balancing algorithm.
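
In grpc-java the same idea is available when building the channel. The following is a small sketch under the assumption that a DNS name such as person-service.internal resolves to several backend addresses; the hostname and port are illustrative, not from the article.

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class LoadBalancedChannel {

  public static ManagedChannel create() {
    // "dns:///" makes the client resolve every address behind the hostname,
    // and "round_robin" spreads the calls across the resolved backends
    return ManagedChannelBuilder
        .forTarget("dns:///person-service.internal:50051")
        .defaultLoadBalancingPolicy("round_robin")
        .usePlaintext()
        .build();
  }
}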

Call Cancellation

A gRPC client is able to cancel a gRPC call when it doesn’t need the response anymore; a rollback on the server is not possible, though.
This feature is especially useful for server-side streaming, where multiple server responses might still be on their way. The gRPC library comes equipped with an observer pattern to know whether a request has been cancelled, allowing it to cancel multiple corresponding requests at once.
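
One way this looks in grpc-java, as a hedged sketch: the client runs the call inside a cancellable Context and cancels it when the responses are no longer needed, while the server can check Context.current().isCancelled() before producing more messages. The server-streaming method listPersons and the PersonRequest type used here are hypothetical; they are not in the .proto shown earlier.

import io.grpc.Context;
import io.grpc.stub.StreamObserver;
import java.util.concurrent.CancellationException;

public class CancellationDemo {

  // Start the (hypothetical) server-streaming call inside a cancellable Context
  // and hand back a handle the caller can use to abort it later
  public static Context.CancellableContext start(
      PersonServiceGrpc.PersonServiceStub asyncStub,
      PersonRequest request,
      StreamObserver<Person> responseObserver) {

    Context.CancellableContext cancellableCtx = Context.current().withCancellation();
    cancellableCtx.run(() -> asyncStub.listPersons(request, responseObserver));
    return cancellableCtx;
  }

  // Cancel the call; the server is notified, but work it already did is not rolled back
  public static void stop(Context.CancellableContext handle) {
    handle.cancel(new CancellationException("client no longer needs the result"));
  }
}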

Supported languages

gRPC has official implementations for many languages, including C++, Java, Kotlin, Go, Python, Ruby, C#, Node.js, PHP, Dart and Objective-C.


Next step: please follow the next article, where we’re going to create a project with gRPC and Spring Boot, a very simple and quick implementation with a GitHub repository.

Thanks a lot for taking the time to read my article to the end.
Keep on learning and always keep your thirst for knowledge, since this world comes with a never-ending supply of it.

If you think this article was useful and help you in some way, please give a clap and follow me 😆 🍍
