A Dance with Protocols: Kotlin, Spring and Protocol Buffers in Action

Benedikt Jerat
Feb 26 · 6 min read

In all of my recent projects as a consultant, a microservice architecture was chosen to reduce the interdependencies between business functions by organizing services around business capabilities. In these projects, JSON over HTTP (in a more or less RESTful way) is usually the first choice for inter-service communication.

Don’t get me wrong! JSON is absolutely fine for many use cases, being a human readable data interchange format with a wide range of adoption. But especially in high-performance environments or for pure machine-to-machine communication, there are more concise and efficient alternatives. One of them is Protocol Buffers.

Protocol Buffers is a language-neutral and platform-neutral data interchange format for encoding structured data, developed by Google for internal use and made publicly available in 2008. As of this writing, Protocol Buffers is at major version 3.

Now let’s see what Protocol Buffers can do for us compared to the common solution of using JSON.

Prototypical Web Application

In the context of this blog post, I present an example application that exposes a REST endpoint which produces and consumes messages in the binary protobuf format.

Kotlin is used as the programming language of choice. Although Protocol Buffers does not provide native support for Kotlin (yet!), the interoperability of Kotlin and Java comes in handy here.

The microservice itself uses the well-known Spring Boot framework for defining and exposing the REST endpoint.

The following versions are used:

  • Kotlin: 1.3.20
  • Spring Boot: 2.1.2.RELEASE
  • Protocol Buffers: 3.6.1

The application is available on GitHub: https://github.com/Digital-Frontiers/kotlin-spring-protobuf

By the way, if you are uncertain why anyone would ever want to program in Kotlin, please take a look at my colleague's blog post on 7 things any Java developer should know when starting with Kotlin.

Protocol Buffers

The example is taken loosely from a recent project where deployment events were generated whenever a deployment was (successfully or unsuccessfully) completed. The application itself exposes a simple REST endpoint to retrieve, query and save deployment events:

  • Find specific deployment event: /api/deployments/42 (GET)
  • Find deployment events by target: /api/deployments?target=ACCEPTANCE (GET)
  • Save new deployment event: /api/deployments (POST)

(Deployment) targets specify the staging environment in which a specific deployment is performed (from the development to the production stage).

The following .proto file defines the binary message format for deployment events in the Protocol Buffers IDL (Interface Description Language):
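The original file is not reproduced in this excerpt; a sketch of such a definition, with illustrative field names and types that may differ from the actual file in the repository, could look like this:

```protobuf
syntax = "proto3";

// Staging environment a deployment is rolled out to.
enum Target {
  DEVELOPMENT = 0; // index 0 is the default value, omitted on the wire
  INTEGRATION = 1;
  ACCEPTANCE = 2;
  PRODUCTION = 3;
}

// A single completed deployment, successful or not.
message DeploymentEvent {
  int64 id = 1;
  string technology = 2;
  string version = 3;
  Target target = 4;
  bool successful = 5;
}

// List wrapper, since proto3 has no top-level repeated type.
message DeploymentEvents {
  repeated DeploymentEvent events = 1;
}
```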

This defines a message type DeploymentEvent with fields denoting the technology type to deploy, the artifact version, the target environment and so on. The DeploymentEvents type is the list analogue in Protocol Buffers.

Each field of a message is assigned a unique field number that identifies it in the binary representation. Field numbers in message types start at 1. Enum values start at 0, and the value at index 0 is treated as the default, so Protocol Buffers omits it from the binary representation for efficiency.

Similar to other serialization frameworks (Apache Thrift, Avro, …), the message format first has to be compiled into a target language. In our case this will be Java instead of Kotlin, due to the lack of Kotlin support in Protocol Buffers. For our compact example, the generated class file is already over 2000 lines long, which can be a problem for resource-constrained systems. For mobile apps you might want to depend on the Protobuf Lite Runtime instead. Take a look at the protobuf-gradle-plugin GitHub project for an example Gradle file that sets everything up for the Protobuf Lite Runtime.

But don’t let the size of the generated classes put you off, because it’s actually very easy to work with them. For each message type, a Builder class is generated with which instances can be created:
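Assuming a generated DeploymentEvent message with fields as described above (the exact field names are illustrative), creating an instance follows the usual protobuf builder pattern:

```kotlin
// Generated builder API: a setter per field, then build() for an immutable message.
val event = DeploymentEvent.newBuilder()
    .setTechnology("Spring Boot")
    .setVersion("2.1.2")
    .setTarget(Target.ACCEPTANCE)
    .setSuccessful(true)
    .build()
```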

One of the sad consequences of the missing native Kotlin support: no data classes are generated, which are so comfortable to work with in Kotlin thanks to named parameters, the copy function and the auto-generated equals, hashCode and toString methods.


To get everything started, an initial build is required to generate the protobuf classes. For this the build of the module depends on the generation of the protobuf sources by the Gradle Protobuf plugin:
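The relevant part of the build script might look like this (Groovy DSL; the plugin version is an assumption, the protobuf version matches the one listed above):

```groovy
plugins {
    id "com.google.protobuf" version "0.8.8"
}

protobuf {
    // Pull a prebuilt protoc binary from Maven Central instead of
    // requiring a local installation.
    protoc {
        artifact = "com.google.protobuf:protoc:3.6.1"
    }
}

dependencies {
    // Runtime library the generated Java classes depend on.
    implementation "com.google.protobuf:protobuf-java:3.6.1"
}
```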

By default, the Gradle protobuf plugin expects .proto files to be located at src/main/proto. The generated sources are then placed at build/generated/source/proto/main/java.

To be able to use these generated sources directly, the main.java.srcDirs directive adds that folder to the sources of the project.
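In the build script this amounts to a one-liner (Groovy DSL, path as stated above):

```groovy
sourceSets {
    // Make the generated protobuf sources visible to the Kotlin/Java compilation.
    main.java.srcDirs += "build/generated/source/proto/main/java"
}
```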

Overall, the project build is very straightforward. Executing ./gradlew build in the project root folder generates the protobuf sources first and compiles the kotlin classes afterwards.

The complete gradle file looks like this:
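A condensed sketch of it is shown below (Groovy DSL; the dependency-management and protobuf plugin versions are assumptions, the file in the repository is the authoritative version):

```groovy
plugins {
    id "org.jetbrains.kotlin.jvm" version "1.3.20"
    id "org.jetbrains.kotlin.plugin.spring" version "1.3.20"
    id "org.springframework.boot" version "2.1.2.RELEASE"
    id "io.spring.dependency-management" version "1.0.6.RELEASE"
    id "com.google.protobuf" version "0.8.8"
}

repositories {
    mavenCentral()
}

dependencies {
    implementation "org.springframework.boot:spring-boot-starter-web"
    implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk8"
    implementation "com.google.protobuf:protobuf-java:3.6.1"
    testImplementation "org.springframework.boot:spring-boot-starter-test"
}

protobuf {
    protoc {
        artifact = "com.google.protobuf:protoc:3.6.1"
    }
}

sourceSets {
    // Expose the generated protobuf sources to the compiler.
    main.java.srcDirs += "build/generated/source/proto/main/java"
}
```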

Serving Protobuf Messages in REST

The Spring Framework makes implementing the REST controller very simple and convenient. The only difference from a JSON-based REST controller is setting the content type to application/x-protobuf.

The serialization and deserialization of binary protobuf messages is handled transparently by Spring, as long as the content type for the methods is set correctly. Otherwise Spring assumes application/octet-stream, which is just raw binary data that is not interpreted any further. Not very helpful in our case.
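A controller along these lines might look as follows; the class and repository names are illustrative, and the protobuf types come from the generated sources:

```kotlin
import org.springframework.web.bind.annotation.*

@RestController
@RequestMapping("/api/deployments")
class DeploymentController(private val repository: DeploymentRepository) {

    // GET /api/deployments/42
    @GetMapping("/{id}", produces = ["application/x-protobuf"])
    fun findById(@PathVariable id: Long): DeploymentEvent =
        repository.findById(id)

    // GET /api/deployments?target=ACCEPTANCE
    @GetMapping(produces = ["application/x-protobuf"])
    fun findByTarget(@RequestParam target: Target): DeploymentEvents =
        repository.findByTarget(target)

    // POST /api/deployments with a binary DeploymentEvent body
    @PostMapping(consumes = ["application/x-protobuf"], produces = ["application/x-protobuf"])
    fun save(@RequestBody event: DeploymentEvent): DeploymentEvent =
        repository.save(event)
}
```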

For all of this to work, all we have to do is make Spring aware of the ProtobufHttpMessageConverter, which has been provided by Spring since version 4.1:
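This boils down to a single bean definition, for example:

```kotlin
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.http.converter.protobuf.ProtobufHttpMessageConverter

@Configuration
class ProtobufConfig {

    // Registers (de)serialization support for application/x-protobuf.
    @Bean
    fun protobufHttpMessageConverter() = ProtobufHttpMessageConverter()
}
```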

Testing & Consuming Protobuf

Thanks to the great support for Protobuf in the Spring Framework, the controller tests can be written quite easily. Here, MockMvc is used to test the web layer, Mockito for mocking away dependencies, and AssertJ for its nice fluent assertion API. JUnit 5 is chosen as the underlying test framework.
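Such a test might look roughly like this; the controller and repository names are illustrative and the protobuf types come from the generated sources:

```kotlin
import org.assertj.core.api.Assertions.assertThat
import org.junit.jupiter.api.Test
import org.mockito.BDDMockito.given
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest
import org.springframework.boot.test.mock.mockito.MockBean
import org.springframework.http.MediaType
import org.springframework.test.web.servlet.MockMvc
import org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get
import org.springframework.test.web.servlet.result.MockMvcResultMatchers.status

@WebMvcTest(DeploymentController::class)
class DeploymentControllerTest {

    @Autowired
    lateinit var mockMvc: MockMvc

    @MockBean
    lateinit var repository: DeploymentRepository

    @Test
    fun `serves a deployment event as binary protobuf`() {
        val event = DeploymentEvent.newBuilder()
            .setId(42)
            .setTarget(Target.ACCEPTANCE)
            .build()
        given(repository.findById(42)).willReturn(event)

        val response = mockMvc
            .perform(get("/api/deployments/42")
                .accept(MediaType.parseMediaType("application/x-protobuf")))
            .andExpect(status().isOk)
            .andReturn().response

        // Parse the raw response bytes back into a message and compare.
        val parsed = DeploymentEvent.parseFrom(response.contentAsByteArray)
        assertThat(parsed).isEqualTo(event)
    }
}
```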

This test verifies that serializing and deserializing protobuf messages in the controller is indeed working as expected and that the content types are set up correctly.

If you are more into Consumer-Driven Contract Testing, you will have to update your Spring Cloud Contract dependency to at least version 2.1.0.RELEASE, which added the required support for binary payloads. For a comprehensive example using Spring Cloud Contract, take a look at the following GitHub project provided by Spring: https://github.com/spring-cloud-samples/spring-cloud-contract-samples/tree/master/consumer_proto

Now, let’s start the application and consume some protobuf data. Being a Spring Boot application, it can be started by simply using the following command:
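For example via the Spring Boot Gradle plugin:

```shell
./gradlew bootRun
```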

For a real microservice architecture, we would have one or more services that act as a client of the provided REST endpoint. For some fast testing, cURL is absolutely sufficient.

Persisting a new deployment event is achieved as follows:
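The command below sketches the pipeline described in the text; it assumes the message type is called DeploymentEvent and the schema file is deployment.proto, both of which may differ in the repository (a package declaration in the .proto file would also have to be prefixed to the message name):

```shell
# Encode the text-format event, POST it, and decode the binary response again.
protoc --encode=DeploymentEvent deployment.proto < proto.txt \
  | curl -s -X POST "http://localhost:8080/api/deployments" \
         -H "Content-Type: application/x-protobuf" \
         --data-binary @- \
  | protoc --decode=DeploymentEvent deployment.proto
```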

The proto.txt contains a simple deployment event in the human-readable Protobuf Text Format:
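For the message fields described above, its content might look like this (field names and values are illustrative):

```
technology: "Spring Boot"
version: "2.1.2"
target: ACCEPTANCE
successful: true
```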

This sample first needs to be encoded into the binary format by calling the protobuf compiler protoc. The result is piped to the curl command, and the response of the controller method is piped back to the protobuf compiler, which decodes the binary data into the text format again. This invocation should lead to an output like:
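Assuming the service echoes the persisted event back with its assigned id, the decoded output would be something along these lines (values are illustrative):

```
id: 4
technology: "Spring Boot"
version: "2.1.2"
target: ACCEPTANCE
successful: true
```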

Of course, the cURL command is somewhat more complex than for a service that speaks JSON, but the call to protoc is always the same. I'd say that's a justifiable cost.

On the subject of trying out the service: An in-memory database is filled with a few initial events, so you can play around with the application directly.

For more examples, just take a look at the project README.md file.


Using a binary format for service-to-service communication is quite straightforward with the chosen technology stack of Protocol Buffers, Kotlin and the Spring Framework. In terms of network transfer size, the protobuf messages in our example were about a quarter of the size of even compressed JSON. With its wide range of supported languages and platforms, Protocol Buffers is in my opinion a real alternative to the otherwise omnipresent JSON (I won't even mention SOAP here).

Maybe this blog post can convince some of you to choose a binary data exchange format for inter-service communication. Since using one is not complicated at all, it would be my first starting point for reducing network overhead and speeding up communication.

Thanks for reading! Feel free to comment or message me when you have questions or suggestions.

Digital Frontiers — Das Blog

This is the blog of Digital Frontiers GmbH & Co. KG (http://www.digitalfrontiers.de). Here we publish on topics that interest and move us.

Thanks to Joachim Baumann and Frank Scheffler

Written by Benedikt Jerat
