gRPC Explained, Part 1: Introduction

Ankit Dwivedi
6 min read · Sep 24, 2023


Welcome to the first part of the blog series on gRPC. In this series, we will explore the ins and outs of gRPC, starting with a comprehensive introduction.

gRPC and its mascot

Before we dive into the details of gRPC, it’s important to clarify the relationships between various terms in the realm of remote communication, which can sometimes be confusing.

RPC — Remote Procedure Call

As per Wikipedia: “In distributed computing, a remote procedure call (RPC) is when a computer program causes a procedure (subroutine) to execute in a different address space (commonly on another computer on a shared network), which is written as if it were a normal (local) procedure call, without the programmer explicitly writing the details for the remote interaction.”

In simpler terms, it’s a way for one program to ask another program, possibly running on a different computer, to do something for it. You write what looks like an ordinary function call in your own code, even though the procedure actually executes on another machine; the RPC library/framework abstracts away all of that complexity.

RPC flow

The RPC framework is responsible for shielding the underlying transport (TCP or UDP), the serialization method (XML, JSON, or binary), and the communication details. Service callers can invoke remote service providers just as they would call local interfaces, without caring about the underlying communication details or calling procedure.
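To make the flow above concrete, here is a toy, in-process sketch of the stub/skeleton pattern behind RPC: the client stub serializes the call, a stand-in for the network carries the bytes, and the server skeleton deserializes and dispatches. The class and method names are illustrative, and real frameworks would do this over a TCP/UDP connection rather than a direct method call:

```python
import json

class Server:
    """Holds the actual procedure implementations."""
    def add(self, a, b):
        return a + b

    def handle(self, payload: bytes) -> bytes:
        # Server skeleton: deserialize the request and dispatch to the procedure.
        request = json.loads(payload)
        result = getattr(self, request["method"])(*request["args"])
        return json.dumps({"result": result}).encode()

class ClientStub:
    """Looks like a local object, but forwards every call to the server."""
    def __init__(self, server: Server):
        self._server = server  # stands in for a network connection

    def __getattr__(self, method):
        def remote_call(*args):
            # Client stub: serialize the call into a request message...
            payload = json.dumps({"method": method, "args": args}).encode()
            # ...send it across the "network", and deserialize the reply.
            response = self._server.handle(payload)
            return json.loads(response)["result"]
        return remote_call

client = ClientStub(Server())
print(client.add(2, 3))  # reads like a local call, but the stub did the remote work
```

The caller never touches serialization or transport; that is exactly the complexity an RPC framework hides.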

REST

REST, short for Representational State Transfer, is a well-established architectural style for designing networked applications. RESTful APIs use HTTP requests to perform CRUD (Create, Read, Update, Delete) operations on resources, typically identified by URLs. REST APIs are known for their simplicity and use of standard HTTP methods like GET, POST, PUT, and DELETE.

HTTP

HTTP (Hypertext Transfer Protocol) is the foundation of data communication on the World Wide Web. It defines how messages are formatted and transmitted and how web servers and browsers should respond to various commands. HTTP has evolved over the years, with different versions offering various features and improvements:

  1. HTTP/1.0: The first version of HTTP was simple and lacked many modern features. In HTTP/1.0, each request required a new TCP connection, leading to inefficiency.
  2. HTTP/1.1: Improved upon HTTP/1.0 by introducing keep-alive connections, allowing multiple requests and responses to be sent over a single TCP connection, thus reducing latency. However, it still had limitations, such as the "head-of-line" blocking issue, where one slow request could block subsequent requests in the same connection, leading to the "waterfall" effect.
  3. HTTP/2: HTTP/2 introduced a binary framing mechanism that allowed for multiplexing, prioritization of requests, and header compression. These enhancements significantly improved the efficiency and speed of web communication. It eliminated the "head-of-line" blocking problem at the application layer by allowing multiple concurrent streams within a single connection (though TCP-level head-of-line blocking can still occur).
  4. HTTP/3: The latest version, HTTP/3, further improves performance by using the QUIC transport protocol. It focuses on reducing latency, especially in situations with high packet loss or unreliable networks. HTTP/3 is designed to be more resilient and efficient than its predecessors.

Now that we know these terms, let’s get started!

What is gRPC?

gRPC (the “g” officially stands for something different in every release; originally it was the recursive “gRPC Remote Procedure Calls”) is an inter-process communication technology that allows you to connect, invoke, operate, and debug distributed heterogeneous applications as easily as making a local function call.

When you develop a gRPC application the first thing that you do is define a service interface. The service interface definition contains information on how your service can be consumed by consumers, what methods you allow the consumers to call remotely, what method parameters and message formats to use when invoking those methods, and so on. The language that we specify in the service definition is known as an interface definition language (IDL).

gRPC uses protocol buffers as the IDL to define the service interface. Protocol buffers are a language-agnostic, platform-neutral, extensible mechanism for serializing structured data.
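As an illustration, a minimal service definition for a hypothetical Greeter service might look like the following (the service, method, and message names are all illustrative, not part of gRPC itself):

```protobuf
syntax = "proto3";

package greeter;

// The service interface: which methods consumers may call remotely.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

// The message formats used when invoking those methods.
message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
```

From a definition like this, the protobuf compiler generates client stubs and server skeletons in your language of choice.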

gRPC Architecture

What powers gRPC’s lightning-fast performance?

  1. HTTP/2: gRPC is built on HTTP/2, standardized in 2015 as the successor to HTTP/1.1, whose multiplexing allows multiple requests and responses to share a single connection for enhanced efficiency.
  2. Request/Response Multiplexing: Thanks to HTTP/2’s binary framing, gRPC can handle multiple requests and responses within a single connection, revolutionizing communication efficiency.
  3. Header Compression: HTTP/2’s HPACK compression reduces header payload size. Coupled with gRPC’s efficient binary encoding, this results in blazing-fast performance.

Protocol Buffers aka Protobuf

Protobuf defines data structures and function contracts. Both the client and server need to speak the same Protobuf language, and that’s how they understand each other. Protocol Buffers (ProtoBuf) serve three primary functions within the gRPC framework: defining data structures, specifying service interfaces, and enhancing transmission efficiency through serialization and deserialization.
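One reason the binary wire format is so compact is that protobuf encodes integers as base-128 "varints": 7 payload bits per byte, with the high bit signaling that more bytes follow, so small numbers fit in a single byte. The sketch below hand-rolls that rule for illustration (it is not the protobuf library, and the framing is simplified):

```python
import json

def encode_varint(value: int) -> bytes:
    """Encode a non-negative int as a protobuf-style base-128 varint."""
    out = bytearray()
    while True:
        byte = value & 0x7F          # take the low 7 bits
        value >>= 7
        if value:
            out.append(byte | 0x80)  # set continuation bit: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

# Field number 1 with value 150, roughly as protobuf would frame it:
# one tag byte (field 1, wire type 0) followed by the varint payload.
tag = bytes([0x08])
binary = tag + encode_varint(150)

# The same information as JSON, for a size comparison.
json_equiv = json.dumps({"id": 150}).encode()

print(len(binary), len(json_equiv))  # the binary form is a fraction of the JSON size
```

Multiply that saving across every field of every message and you get a large part of the transmission-efficiency advantage described above.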

What are the advantages of gRPC?

Apart from having Pancakes, a really cute mascot, gRPC owes its adoption to several distinct advantages:

  1. Efficiency in Inter-Process Communication: Unlike JSON or XML, gRPC employs a protocol buffer-based binary protocol for communication, enhancing speed. It builds on HTTP/2, making it one of the most efficient inter-process communication technologies available.
  2. Well-Defined Service Interfaces and Schema: gRPC encourages a contract-first approach, prioritizing service interface definitions before diving into implementation details. This makes application development simpler, more consistent, and more reliable.
  3. Strongly Typed and Polyglot Support: gRPC uses protocol buffers to define services, clearly specifying data types for communication between applications. This fosters stability and reduces runtime and interoperability errors. Additionally, gRPC seamlessly integrates with various programming languages, offering developers the flexibility to choose their preferred language.
  4. Duplex Streaming and Built-In Features: gRPC natively supports both client- and server-side streaming, simplifying the development of streaming services and clients. It also comes with built-in support for essential features such as authentication, encryption, resiliency (including deadlines and timeouts), metadata exchange, compression, load balancing, and service discovery.
  5. Cloud Native Integration and Maturity: As part of the Cloud Native Computing Foundation (CNCF), gRPC seamlessly integrates with modern frameworks and technologies, making it a favoured choice for communication. Projects like Envoy in CNCF support gRPC, and many monitoring tools, including Prometheus, work effectively with gRPC applications. Furthermore, gRPC has been battle-tested at Google and widely adopted by major tech companies.

In the upcoming parts of this blog series, we will delve deeper into gRPC’s features, use cases, and practical implementation. So, stay tuned!

Connect with me on LinkedIn. If you have any questions or topics you’d like me to cover, please feel free to reach out.


Ankit Dwivedi

Engineer at Stripe | ex-Google | Building large-scale systems