Unveiling MACH: Inter-service Communication using gRPC on Google Cloud — Edition 2

Abhi Sharma
Google Cloud - Community
6 min read · Apr 12, 2024

Welcome back to the second edition of the Unveiling MACH (Micro-service Architecture) series of blog posts. In this article, we are going to explore Remote Procedure Calls and gRPC in general, and how they are used for inter-process communication between micro-services. We will also try to understand the various Google Cloud services that utilize gRPC to communicate. If you are new to micro-services, I encourage you to read my introductory blog on creating micro-services using the MACH architecture.

Remote Procedure Call (RPC)

In a client-server architecture or distributed system, efficient communication between services is crucial, especially when services operate on different machines or as distinct processes. While modular programming often involves frequent local procedure calls (method invocations), there arises a need to invoke routines and functions deployed in separate processes or on separate machines. This requirement is met by Remote Procedure Call (RPC), a mechanism that enables communication between distinct components of a system. With RPC, developers can seamlessly initiate procedures and functions on remote servers as if they were local, facilitating a cohesive and interconnected system.

gRPC and Protocol Buffer

gRPC is a cutting-edge remote procedure call (RPC) framework developed by Google, designed to facilitate efficient and reliable communication between distributed services. Utilizing Protocol Buffers as its interface definition language (IDL) and HTTP/2 as its underlying protocol, gRPC offers high-performance, language-agnostic service communication, making it an ideal choice for modern microservices architectures. With its support for multiple programming languages and seamless integration with Google Cloud Platform (GCP) services, gRPC empowers developers to build scalable and resilient distributed systems that meet the demands of today’s cloud-native applications and services.

Protocol Buffers, commonly referred to as Protobuf, represent a powerful and efficient mechanism for serializing structured data. Offering a language-neutral and platform-neutral means of defining data structures and services, Protocol Buffers provide a lightweight and extensible format for data interchange. By defining data schemas using a concise and readable language, Protobuf enables efficient serialization and deserialization across a variety of programming languages, making it an ideal choice for inter-process communication, data storage, and network communication in diverse software ecosystems. Protobuf requires fewer CPU resources since data is converted into a binary format, and encoded messages are lighter in size. As a result, messages are exchanged faster, even on machines with slower CPUs, such as mobile devices.

Source: Mulesoft

Protocol Buffers, or ProtoBufs, are defined using .proto files, which are then compiled into client- and server-side code in your desired programming language. In the next few sections, we are going to see protos in action, and we will create our first gRPC service.

Guide to build your first gRPC service

Define .proto file

The .proto file is like a blueprint of the service: it describes all the services, requests, and responses, along with their primitive data types. To learn more about Protocol Buffers and their syntax, refer to the official documentation here.

The code below is a very basic sample of a .proto file:

syntax = "proto3";

option go_package = "demo-app/service"

service GreetingService{
rpc Greeting(GreetingServiceRequest) returns (GreetingServiceResponse) {}
}

message GreetingServiceRequest {
string name = 1;
}

message GreetingServiceResponse {
string message = 1;
}

The service GreetingService has a single RPC, Greeting, which expects a GreetingServiceRequest as input and returns a GreetingServiceResponse as the response from the server.

The request, GreetingServiceRequest, carries a string field name, while the response, GreetingServiceResponse, carries a string field message.

For successful communication, it is critical that both the client and the server have access to the .proto file, so that each side can encode and decode the binary messages being transmitted.
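To make the binary encoding concrete, here is a minimal Go sketch of serializing and deserializing the GreetingServiceRequest message. It assumes the code generated from the .proto above is imported as pb from the go_package path "demo-app/service" (an assumption for illustration):

package main

import (
	"fmt"
	"log"

	"google.golang.org/protobuf/proto"

	pb "demo-app/service" // assumed import path, matching the go_package option above
)

func main() {
	// Serialize a message into Protobuf's compact binary wire format.
	req := &pb.GreetingServiceRequest{Name: "Abhi"}
	data, err := proto.Marshal(req)
	if err != nil {
		log.Fatalf("marshal failed: %v", err)
	}
	fmt.Printf("encoded size: %d bytes\n", len(data)) // noticeably smaller than the equivalent JSON

	// Decode the bytes back into a typed struct on the receiving side.
	var decoded pb.GreetingServiceRequest
	if err := proto.Unmarshal(data, &decoded); err != nil {
		log.Fatalf("unmarshal failed: %v", err)
	}
	fmt.Println("decoded name:", decoded.GetName())
}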

Once the .proto file is defined, you need protoc, the Protobuf compiler, which generates server- and client-side code from the logical model defined in the proto file. To install protoc on your machine, follow the instructions.
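For Go specifically, protoc also relies on two code-generation plugins. They are commonly installed like this (assuming you are using Go modules and that $GOPATH/bin is on your PATH):

go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest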

Use the ProtoBuf Compiler to Generate Client and Server Side Code

gRPC supports a wide range of programming languages, which gives developers the flexibility to code in their favorite language while still communicating seamlessly between services. Languages supported by gRPC include C#, C++, Dart, Go, Java, Kotlin, Node.js, Objective-C, PHP, Python, and Ruby. The command below compiles the proto file and generates the server-side code for the Go programming language.

protoc <proto file path> --go_out=<go file path> --go-grpc_out=<grpc-file-path>

protoc greetings.proto --go_out=./ --go-grpc_out=./

--go_out = Argument to specify the output directory where the compiler will write the generated Go files containing the Protobuf message definitions.

--go-grpc_out = Argument to specify the output directory for the generated gRPC code, which includes the server and client stubs for the service.

The above command will generate two Go files, i.e. greetings.pb.go and greetings_grpc.pb.go

For the Greeting service in this blog, we are using the Go programming language. For other languages, and for how to use protoc to generate code, follow the language-specific guidelines here.

Update and run the service

Once the code files are generated from the proto, the next step is to update the code in the server file. Open the main.go file and implement the changes as shown in the image below.

We created a listener object and an instance of our service: greService := &GreetingServiceServer{}

The .Serve() function starts our server listening for gRPC calls on port 8080 using the listener object.
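In case the image does not render for you, here is a minimal sketch of what main.go could look like. It assumes the generated code is imported as pb from "demo-app/service" and that the handler simply echoes back a greeting; treat it as a starting point rather than the exact code from the screenshot:

package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"

	pb "demo-app/service" // assumed import path from the go_package option
)

// GreetingServiceServer implements the Greeting RPC defined in greetings.proto.
type GreetingServiceServer struct {
	pb.UnimplementedGreetingServiceServer
}

// Greeting builds a greeting message from the request's name field.
func (s *GreetingServiceServer) Greeting(ctx context.Context, req *pb.GreetingServiceRequest) (*pb.GreetingServiceResponse, error) {
	return &pb.GreetingServiceResponse{Message: "Hello, " + req.GetName() + "!"}, nil
}

func main() {
	// Listen for TCP connections on port 8080.
	lis, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}

	// Create the gRPC server and register our service implementation.
	grpcServer := grpc.NewServer()
	greService := &GreetingServiceServer{}
	pb.RegisterGreetingServiceServer(grpcServer, greService)

	// Serve blocks and handles incoming gRPC calls on the listener.
	if err := grpcServer.Serve(lis); err != nil {
		log.Fatalf("failed to serve: %v", err)
	}
}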

Now we are ready to call our service. We can either write a client or use any RPC tool to invoke it.

In the image below, we used JMeter to call the Greeting gRPC service.

invoking gRPC service using JMeter
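If you prefer code over a GUI tool, a minimal Go client could look like the following sketch, again assuming the generated package is imported as pb and the server is running locally on port 8080 without TLS:

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "demo-app/service" // assumed import path from the go_package option
)

func main() {
	// Dial the server without TLS; fine for local testing only.
	conn, err := grpc.Dial("localhost:8080", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("failed to connect: %v", err)
	}
	defer conn.Close()

	client := pb.NewGreetingServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Call the Greeting RPC exactly as if it were a local function.
	resp, err := client.Greeting(ctx, &pb.GreetingServiceRequest{Name: "Abhi"})
	if err != nil {
		log.Fatalf("Greeting RPC failed: %v", err)
	}
	log.Printf("Server replied: %s", resp.GetMessage())
}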

Deploying the gRPC service on Google Kubernetes Engine

Google Kubernetes Engine (GKE) stands as a premier choice for deploying and managing containerized services at scale. GKE offers a robust and flexible platform for orchestrating micro-services architectures, making it an ideal environment for hosting gRPC services. By seamlessly integrating with Google Cloud Platform (GCP) services and providing advanced features such as auto-scaling, rolling updates, and built-in monitoring, GKE simplifies the deployment and management of complex distributed systems. In this section, we will discuss all the steps needed to deploy a gRPC service on Google Kubernetes Engine.

Steps to deploy gRPC on GKE

  1. Create a standard Kubernetes cluster using the GCP console.
GKE Cluster
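If you prefer the CLI over the console, the cluster can also be created with gcloud, and you will need cluster credentials locally before running kubectl later. The cluster name and zone below are placeholders:

gcloud container clusters create demo-cluster --zone us-central1-a --num-nodes 3
gcloud container clusters get-credentials demo-cluster --zone us-central1-a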

2. To containerize the service, create a Dockerfile inside the root directory and expose port 8080 for the application listener.
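A minimal multi-stage Dockerfile sketch for the Go service could look like this (the base images and Go version are assumptions, not the exact file used in this blog):

# Build stage: compile a static Go binary
FROM golang:1.22 AS build
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /demo-app .

# Run stage: copy only the binary into a small base image
FROM gcr.io/distroless/static-debian12
COPY --from=build /demo-app /demo-app
EXPOSE 8080
ENTRYPOINT ["/demo-app"]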

3. Create a YAML file for the Kubernetes deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app-deployment
spec:
  replicas: 3 # Adjust the number of replicas as needed
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: your-docker-image:tag # Replace with your Docker image
          ports:
            - containerPort: 8080 # Assuming your application listens on port 8080

4. Apply the deployment to GKE using the command below:

kubectl apply -f greetings_deploy.yaml
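To verify that the rollout succeeded, you can check the deployment and its pods; the names below match the metadata and labels used in the YAML above:

kubectl rollout status deployment/demo-app-deployment
kubectl get pods -l app=demo-app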

In this blog, we’ve discussed the basics of gRPC and Protocol Buffers, which form the foundation for building fast and reliable micro-services. In the upcoming blogs, we will delve into Google Kubernetes Engine (GKE) and how multiple micro-services communicate within a cluster, each running in a separate container. Join us on this journey of understanding micro-services using the MACH architecture.

A special thanks to Drishti Gupta for her valuable contribution to this blog.
