The Making of a 100% Google Powered Microservice Architecture — Part 2
This is the second part of the making of a 100% Google Powered Microservice architecture. The complete series is:
- Part 1: The Language — Golang
- Part 2: The Serializer — Protocol Buffers
- Part 3: The Framework — gRPC
- Part 4: The Data — Cloud SQL for PostgreSQL
- Part 5: The Container Orchestrator — Kubernetes & GKE
- Part 7: The API Gateway — Google Cloud Endpoints
- Part 8: The Authentication — Firebase Auth
- Part 9: The Message — Cloud Pub/Sub
- Part 10: The Trace — Stackdriver Trace
- Part 11: The Monitor — Stackdriver Monitoring
- Part 12: The Aggregated Logs — Stackdriver Logging
- Part 13: The Continuous Deployment — Google Cloud Build
- Part 14: The Uptime Checks & Alerting Policies — Stackdriver Monitoring
- Part 15: The Error Reporting — Stackdriver Error Reporting
The Serializer — Protocol Buffers
According to the docs:
Protocol buffers are Google’s language-neutral, platform-neutral, extensible mechanism for serializing structured data — think XML, but smaller, faster, and simpler. You define how you want your data to be structured once, then you can use special generated source code to easily write and read your structured data to and from a variety of data streams and using a variety of languages.
When building microservices, it’s important to think carefully about the data format used for inter-service communication. JSON is obviously the most popular, but it is not the best fit for communication between services. One might argue that JSON is human readable and plays well with the browser; however, human readability makes no difference when services talk to each other. JSON still has its place when communicating with the browser. Essentially, we will use JSON for external communication with browsers and protocol buffers for internal communication between microservices. Protocol buffers also enforce the use of schemas and make it easy to build services that are backward compatible.
Protocol buffers (protobuf) is a highly efficient binary serialization format. Serializing data with protobuf is much faster than with JSON, and the serialized payload is much smaller. You can check out this, this and this link to see performance comparisons between protocol buffers and other serialization formats.
With increased efficiency comes a reduced cost for the resources needed to pass data around. Other savings that come to mind include:
- Schemas: Protocol buffers enforce a schema, which can be used as a contract between services. This saves cost by speeding up development and testing, since a service can simply reference the contract to know what the expected data will look like.
- Backward Compatibility: With protocol buffers, you can add, remove or modify fields in the schema in a backward compatible way. This saves cost by reducing or eliminating the need to release a new version of our software each time we make a change. That said, backward compatibility is not automatic: some changes, such as reusing a tag number or changing a field’s type, will break it.
- Powerful IDL: Protocol Buffers has a powerful interface definition language (IDL) that can be used to define service interfaces. This is especially valuable in a microservice architecture, where you are expected to define every endpoint to be exposed, along with its expected input and return values, before you begin implementation.
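As a concrete illustration of the backward-compatibility point (the message and field names below are hypothetical, not from this series), adding a field under a fresh tag number is a safe change, because old readers simply skip unknown fields:

```protobuf
syntax = "proto3";

package example;

message UserProfile {
  string id    = 1;
  string email = 2;

  // Added later under a new tag number: old binaries ignore it, and
  // new binaries reading old data see the default value ("").
  string display_name = 3;

  // DON'T: reuse tag 1 or 2 for a different field, or change the type
  // of an existing field -- both break backward compatibility.
}
```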
In order to generate code from Protocol Buffers definition files, you will need to install protoc and the protoc plugin for Go:
- Download a pre-built binary from the release page. Move the protoc binary file to a location in your PATH environment variable so that you can invoke protoc compiler from any location.
- Install the protoc plugin for your language. For Go, run the following go get commands:
go get -u github.com/golang/protobuf/proto
go get -u github.com/golang/protobuf/protoc-gen-go
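Once both are installed, you can sanity-check the toolchain and generate Go source from a definition file. The file name below is a placeholder for your own layout, and this assumes protoc and protoc-gen-go are on your PATH:

```shell
# Confirm the compiler is installed and reachable.
protoc --version

# Generate Go source from a .proto file; the generated .go file is
# written to the current directory, and its Go package name comes
# from the package declaration inside the file.
protoc --go_out=. authorization.proto
```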
API First Development using Protocol Buffers
As we discussed in part 1, we will be building a blog API. This API will consist of two services: an authorization service and a blog service. With Protocol Buffers’ powerful interface definition language (IDL), we can adopt API-first development, which allows all teams to develop in parallel without waiting for changes to be released by one team or another.
To follow along, create a new branch from the v0.0.0 tag and add a folder called authorization inside the api-spec folder.
Create a file called authorization.proto in the authorization folder and paste the following content.
The above .proto file starts with the Protocol Buffers language version and a package declaration. We use proto3, the latest version of the Protocol Buffers language. The package is declared with the name “authorization”; once we generate Go source code from the proto file, Go will use this name as its package name.
As you can see, we are defining several message types. These message types will be used as the inputs and outputs of the RPC methods we will define later. Standard data types such as int32, float, double, and string are available as field types for declaring elements in message types. The “ = 1”, “ = 2” markers on each element specify the unique “tag” that field uses in the binary encoding. You will also notice repeated fields; such fields are essentially arrays, so a repeated string field contains an array of strings. A default value is used when an element’s value is not specified: zero for numeric types, the empty string for strings, and false for bools. The Protocol Buffers language guide is available from here.
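The embedded file isn’t reproduced in this excerpt; a minimal sketch of the shape described above, with hypothetical message and field names, might look like:

```protobuf
syntax = "proto3";

package authorization;

message SignUpRequest {
  string email    = 1;  // "= 1" is the field's tag in the binary encoding
  string password = 2;
  repeated string roles = 3;  // a repeated field: an array of strings
}

message SignUpResponse {
  string user_id = 1;  // defaults to "" if never set
  string token   = 2;
}
```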
Within the same folder, add another file called authorization-svc.proto and paste this content.
Here, we are creating the RPC methods that will be exposed by the authorization service. From the above .proto file, you will notice that the service is named AuthorizationSvc, although a service can be named anything you like. This file imports the previous .proto file we defined so that we can use its messages as the inputs and outputs of the RPC methods. We also import google/api/annotations.proto, which enables us to annotate our RPC methods with RESTful endpoints. Note that this service itself won’t expose RESTful endpoints; rather, the annotations will be used by the API gateway to expose RESTful endpoints for this service.
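The embedded file isn’t shown in this excerpt either; a hypothetical sketch of a service definition of this shape (only the name AuthorizationSvc comes from the text, the RPC and message names are illustrative) could be:

```protobuf
syntax = "proto3";

package authorization;

import "authorization.proto";
import "google/api/annotations.proto";

service AuthorizationSvc {
  // The annotation maps this RPC to an HTTP route; the API gateway,
  // not the service itself, serves the RESTful endpoint.
  rpc SignUp (SignUpRequest) returns (SignUpResponse) {
    option (google.api.http) = {
      post: "/v1/auth/signup"
      body: "*"
    };
  }
}
```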
Create another folder called blog inside the api-spec folder and add a file called blog.proto with the following content.
Within the blog folder, add another file called blog-svc.proto with the following content.
You can find the complete source code for part 2 here.
Although data can be transferred in plain text using JSON or XML, protocol buffers and other binary serialization formats (such as Avro and Thrift) are better suited for communication between services.
In the next post, we will discuss a high performance, open source, universal RPC framework called gRPC. Stay tuned, the real game is about to begin.
If you liked this, click the 💚 below so other people will see it here on Medium. Also, if you have any questions or observations, use the comment section to share your thoughts.