Evaluating Performance of REST vs. gRPC
I’ve been waiting for a break to investigate the performance benefits of gRPC over REST for quite a while now. I currently have a few micro-services talking to each other in JSON over REST, and I feel they’re reaching the throughput limits of my VPS hardware, so I’d soon need to upgrade to a higher tier to keep performance acceptable.
This seemed like the ideal time for some experimentation. I needed to evaluate just how much of a performance benefit I could gain with gRPC (if any) and, more importantly, how much effort I’d have to put in to fully migrate my existing talkative micro-services to gRPC.
What is gRPC?
For those who’ve not heard of it before, gRPC is a language-agnostic framework introduced by Google for remote procedure calls, designed with serious thought given to performance and scale. It’s been around for quite a while now, but most people (maybe just me) have put it on the back-burner due to the initial time that needs to be spent getting the IDL right, and the extra stub code that needs to be maintained. REST, on the other hand, is a breeze to implement using ASP.NET Core Web APIs.
Similar to how JSON is widespread in REST communication, gRPC relies heavily on Protocol Buffers, a language-neutral serialization format for structured data. This is what sets gRPC apart from the rest (pun intended). Protocol Buffers can be used from all major languages thanks to the protoc compiler, which generates the necessary native class definitions from the .proto file definition. In addition, gRPC uses HTTP/2 for communication, which brings added goodies like HTTP header compression and multiplexed requests over a single TCP connection.
There were a few constraints I wanted in place when benchmarking the REST vs. gRPC calls.
- Test pure communication throughput, without involving any business logic in the process, which I felt would take the focus away from the main point.
- Avoid any database connectivity and instead mock the data with static fields, eliminating caching issues and random execution-time skews.
- Disable logging so it has no impact on the test runs; otherwise it could lead to highly misleading readings.
The benchmark solution is split into a few projects.
- ModelLibrary contains the REST- & gRPC-related models. To make the test cases more general, I’ve chosen a data set that includes string, int, double & DateTime values: the NASA meteorite landings data set of 1,000 data points, found at https://data.nasa.gov/resource/y77d-th95.json.
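A REST-side model for one data point might look like the sketch below. The property names follow the fields in the NASA data set; the class and namespace names are my illustration, not necessarily the repo’s exact ones.

```csharp
using System;

namespace ModelLibrary.Rest
{
    // One meteorite landing record, covering string, int, double & DateTime fields.
    public class MeteoriteLanding
    {
        public string Name { get; set; }
        public int Id { get; set; }
        public string NameType { get; set; }
        public string RecClass { get; set; }
        public double Mass { get; set; }
        public string Fall { get; set; }
        public DateTime Year { get; set; }
        public double RecLat { get; set; }
        public double RecLong { get; set; }
    }
}
```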
The following message and service definitions were created for the gRPC service using Protocol Buffers.
I’m also testing gRPC’s streaming performance here with the “GetLargePayload” method.
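The definitions could look roughly like the sketch below. All message, field, and rpc names here are my illustration (not necessarily the repo’s exact ones), and I’ve split the large-payload read into a unary rpc and a streaming rpc, since the benchmark measures both separately:

```proto
syntax = "proto3";

option csharp_namespace = "ModelLibrary.Grpc";

// One meteorite landing record: string, int, double & date-time-like fields.
message MeteoriteLanding {
  string name = 1;
  int32 id = 2;
  string name_type = 3;
  string rec_class = 4;
  double mass = 5;
  string fall = 6;
  string year = 7;      // timestamp kept as a string for simplicity
  double rec_lat = 8;
  double rec_long = 9;
}

// The full data set, used for the large-payload scenarios.
message MeteoriteLandingList {
  repeated MeteoriteLanding landings = 1;
}

message Empty {}

service MeteoriteLandingsService {
  // Receive a small payload: a single record.
  rpc GetSmallPayload (Empty) returns (MeteoriteLanding);
  // Receive a large payload: the whole data set in one unary response.
  rpc GetLargePayload (Empty) returns (MeteoriteLandingList);
  // Receive a large payload as a server stream, one record at a time.
  rpc GetLargePayloadStream (Empty) returns (stream MeteoriteLanding);
  // Send a large payload: the whole data set up to the server.
  rpc SendLargePayload (MeteoriteLandingList) returns (Empty);
}
```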
- RestAPI contains the Web API, which exposes three methods targeting three specific scenarios: receiving a small payload, receiving a large payload & sending a large payload.
As mentioned under the constraints above, extra console logging was removed from the Web API so it wouldn’t affect the benchmark measurements.
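A minimal sketch of such a controller is below; the route, method names, and the `MeteoriteLandingStore` helper are my assumptions for illustration. Per the constraints, the data lives in a static field rather than a database.

```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using ModelLibrary.Rest;

namespace RestAPI.Controllers
{
    [ApiController]
    [Route("api/[controller]")]
    public class MeteoriteLandingsController : ControllerBase
    {
        // Mocked in-memory data set, loaded once per process.
        // MeteoriteLandingStore is a hypothetical helper for this sketch.
        private static readonly List<MeteoriteLanding> Landings =
            MeteoriteLandingStore.LoadAll();

        // Receive a small payload: a single record.
        [HttpGet("small")]
        public ActionResult<MeteoriteLanding> GetSmallPayload() => Landings[0];

        // Receive a large payload: the whole data set.
        [HttpGet("large")]
        public ActionResult<List<MeteoriteLanding>> GetLargePayload() => Landings;

        // Send a large payload: the whole data set posted back up.
        [HttpPost("large")]
        public IActionResult SendLargePayload([FromBody] List<MeteoriteLanding> landings)
            => Ok(landings.Count);
    }
}
```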
- GrpcAPI contains the service implementation and some bootstrap code to start up the gRPC server. I was curious how gRPC streaming compares to REST, so I’ve also implemented an extra method, “GetLargePayload”, which iterates and streams one data point at a time.
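A sketch of what that service implementation could look like, assuming the illustrative proto names from my earlier sketch (the generated base class and the `MeteoriteLandingStore` helper are assumptions, not the repo’s exact code):

```csharp
using System.Threading.Tasks;
using Grpc.Core;
using ModelLibrary.Grpc;

namespace GrpcAPI
{
    public class MeteoriteLandingsServiceImpl
        : MeteoriteLandingsService.MeteoriteLandingsServiceBase
    {
        // Mocked in-memory data set, mirroring the REST side.
        // MeteoriteLandingStore is a hypothetical helper for this sketch.
        private static readonly MeteoriteLandingList Landings =
            MeteoriteLandingStore.LoadAll();

        public override Task<MeteoriteLanding> GetSmallPayload(
            Empty request, ServerCallContext context)
            => Task.FromResult(Landings.Landings[0]);

        public override Task<MeteoriteLandingList> GetLargePayload(
            Empty request, ServerCallContext context)
            => Task.FromResult(Landings);

        // Streams the data set one record at a time over a single HTTP/2 call.
        public override async Task GetLargePayloadStream(Empty request,
            IServerStreamWriter<MeteoriteLanding> responseStream,
            ServerCallContext context)
        {
            foreach (var landing in Landings.Landings)
                await responseStream.WriteAsync(landing);
        }

        public override Task<Empty> SendLargePayload(
            MeteoriteLandingList request, ServerCallContext context)
            => Task.FromResult(new Empty());
    }
}
```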
- RESTvsGRPC contains the benchmark harness, which calls all these methods in two batches of 100 & 200 iterations each, to reduce the measurement inaccuracy of very small execution times. The benchmark execution times below are therefore 100 & 200 times the real per-call values.
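The core idea of such a harness can be sketched as a simple timed loop; the endpoint URL and route below are my assumptions for this sketch, not the repo’s exact code:

```csharp
using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

namespace RESTvsGRPC
{
    public static class Benchmark
    {
        // Times `iterations` sequential calls of `action` and prints the total,
        // so that tiny per-call execution times add up to something measurable.
        public static async Task RunAsync(string label, int iterations, Func<Task> action)
        {
            var sw = Stopwatch.StartNew();
            for (var i = 0; i < iterations; i++)
                await action();
            sw.Stop();
            Console.WriteLine($"{label} x{iterations}: {sw.ElapsedMilliseconds} ms");
        }

        public static async Task Main()
        {
            // Base address is an assumption for this sketch.
            using (var http = new HttpClient { BaseAddress = new Uri("http://localhost:5000") })
            {
                foreach (var iterations in new[] { 100, 200 })
                {
                    await RunAsync("REST GetSmallPayload", iterations,
                        () => http.GetStringAsync("api/MeteoriteLandings/small"));
                    // The remaining REST calls and the gRPC client calls are
                    // benchmarked the same way.
                }
            }
        }
    }
}
```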
As expected, gRPC came out on top, except when streaming data, which was slightly slower than calling REST. gRPC also performed even better when sending data than when receiving it. I assume this is due to HTTP/2 header compression, but I’ve yet to verify that claim by analyzing the HTTP POST data.
Just for the fun of it, I ran the same benchmark on my Windows box and got similar results, though the whole run took a minute longer on Windows.
Try it out
If you want to try out the benchmark for yourself or tweak it, feel free to clone my repository below.
GitHub: EmperorRXF/RESTvsGRPC
Then start the two API services as below.
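For example, from the solution root (the project folder names are my assumption, based on the project list above):

```shell
# In one terminal: the REST Web API
dotnet run --project RestAPI

# In a second terminal: the gRPC server
dotnet run --project GrpcAPI
```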
And run the benchmark against them.
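Again assuming the project names above, in a third terminal:

```shell
# Run the benchmark harness against the two running services
# (Release build, so the measurements aren't skewed by debug overhead)
dotnet run --project RESTvsGRPC --configuration Release
```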
My personal takeaway
gRPC is roughly 7 times faster than REST when receiving data & roughly 10 times faster when sending data, for this specific payload. This is mainly due to the tight packing of Protocol Buffers and gRPC’s use of HTTP/2.
However, I had to spend roughly 45 minutes implementing this simple gRPC service, where I only spent around 10 minutes building the Web API. This is mainly because REST became mainstream a long time back, and most major frameworks (e.g. ASP.NET Core MVC) have built-in support for quickly spinning up such services through conventions & patterns. Though not to the same extent, gRPC too has made some headway since I last used it: this time I didn’t have to run the “protoc” compiler manually, as it’s now built into the Visual Studio build chain. All I had to do was use the “Protobuf” ItemGroup in my csproj file. We can also expect tighter integration of gRPC in ASP.NET Core 3.0.
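With the Grpc.Tools NuGet package referenced, that ItemGroup looks something like the fragment below (the .proto file path is my assumption):

```xml
<ItemGroup>
  <!-- Grpc.Tools invokes protoc at build time and regenerates the C# stubs. -->
  <Protobuf Include="Protos\meteorite_landings.proto" GrpcServices="Both" />
</ItemGroup>
```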
But these are good enough numbers for me to switch my micro-services over to gRPC. Let me know how your findings turn out!