The need for speed — Experimenting with message serialization

Hugo Vieira da Silva
10 min read · Aug 2, 2018


As a young developer dealing with the average-size payload and the average “few” requests, I struggled to accelerate the communication between my client app and server. I swapped jQuery’s ajax built-ins for my own HTTP wrapper. I looked for ways to compress the payload, and when GZIP wasn’t enough, I implemented a somewhat hack-ish compressor (spoiler alert: it wasn’t enough either). I even renamed JSON attributes to things that made no sense, just to try and squeeze a few microseconds here and there. But, as most developers do at one point or another in their lives, I had to admit it was mostly (not all, mind you, but mostly…) my ugly, sluggish, sh*tty code’s fault.

Many years (and battles) later at Unbabel, we find ourselves asking the same question. What can we do to speed up requests?

We definitely know better than my younger self did back then. We have a clean codebase and a well-thought-out architecture, both of which clearly help speed up requests. Still, we could push it further by optimising communication, and one way to do that was by cutting down payload size.

I mentioned GZIP earlier, a method of compressing HTTP requests, which would be one way of getting there, but this time we focused on payload serialization (FYI, compression and serialization are not the same thing). In short, serialization is a method of transforming structured data or an object into a binary format that can be transmitted, and deserialization reverses the process, mapping the binary back into its original structure.
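To make that concrete with plain Python (a sketch using JSON as the stand-in wire format; the payload here is made up):

```python
import json

payload = {"uid": 42, "text": "hello"}

wire_bytes = json.dumps(payload).encode("utf-8")  # serialize: structure -> bytes
restored = json.loads(wire_bytes)                 # deserialize: bytes -> structure
assert restored == payload
```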

Instead of jumping to conclusions and implementing any of this off the bat, we decided to find out which serializer fit our use case best by testing them in a similar environment.

Tech 101 — a prelude

Protobuf is an open source serialization method created by Google, and, as described on its website, it is language and platform neutral. It relies on a contract between both ends, giving you a way to validate the message. This implies that you have to declare the data structure before passing in any data, since only the values are serialized.
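To make the “contract” idea concrete, here’s a hedged sketch: a hypothetical small_message.proto and the Python calls you’d make against the module protoc generates from it (the message name and fields are made up for illustration):

```python
# The contract is declared first, in a .proto file, e.g.:
#
#   syntax = "proto3";
#   message SmallMessage {
#     int32 uid = 1;
#     string text = 2;
#   }
#
# protoc compiles it into small_message_pb2.py; only the values go over the wire.
from small_message_pb2 import SmallMessage

msg = SmallMessage(uid=42, text="hello")
wire_bytes = msg.SerializeToString()  # serialize: structure -> binary

restored = SmallMessage()
restored.ParseFromString(wire_bytes)  # deserialize, checked against the schema
```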

MessagePack is known for its simplicity, fast setup and long list of supported languages. Unlike Protobuf, you don’t have to specify the data structure beforehand. That means that there is no schema validation and any message can be serialized.
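In Python, that boils down to two calls from the msgpack package; a minimal sketch:

```python
import msgpack

payload = {"uid": 42, "text": "hello"}

wire_bytes = msgpack.packb(payload)                # any dict works, no contract needed
restored = msgpack.unpackb(wire_bytes, raw=False)  # raw=False decodes keys/strings to str
assert restored == payload
```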

Flask is a Python web framework, or as they call it, a “microframework”. We chose it because it’s easy to set up, fast to adapt and close to our use case, since many of our micro-services use it.

Locust is an open source load testing tool. All of the user behaviour is configured in Python code, and you control the tests against the target web application from Locust’s web interface.
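For example, a user that posts the same payload once per second could look like this (a sketch using Locust’s current HttpUser API, which differs from the 2018-era one; the endpoint and file names are illustrative):

```python
from locust import HttpUser, constant, task

# Serialized once, up front, so every request sends exactly the same bytes.
with open("small_message.msgpack", "rb") as f:
    PAYLOAD = f.read()

class SerializerUser(HttpUser):
    wait_time = constant(1)  # each simulated user waits 1 second between requests

    @task
    def post_payload(self):
        self.client.post(
            "/msgpack",
            data=PAYLOAD,
            headers={"Content-Type": "application/msgpack"},
        )
```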

Note: we usually use JMeter, but I wasn’t able to keep the binary payload unchanged between requests. (If anyone knows how to fix it, please drop me a line!)

If you’d like to know more about the setup, check the project’s README at https://github.com/hugofvs/message-serializers.

And now for the fun part

After reading up on the subject, we decided to go with two serializers: Protobuf and MessagePack. Both are supported by our stack, flexible enough for our use case and ready for production. We launched a small Flask service with just the bare minimum and bombarded it with hundreds of thousands of requests, using different endpoints according to the test case (in some experiments, no serializer is used). After processing the payload, the server returns a simple status code: the happy 200.
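The service itself needs very little. A minimal sketch of what those endpoints might look like (route names and the commented Protobuf endpoint are illustrative, not the exact code we ran):

```python
import json

import msgpack
from flask import Flask, request

app = Flask(__name__)

@app.route("/json", methods=["POST"])
def json_endpoint():
    json.loads(request.data)  # deserialization step, skipped in the "request time only" runs
    return "", 200

@app.route("/msgpack", methods=["POST"])
def msgpack_endpoint():
    msgpack.unpackb(request.data, raw=False)
    return "", 200

# A /protobuf endpoint would call SmallMessage().ParseFromString(request.data),
# where SmallMessage is the (hypothetical) protoc-generated class.
```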

Once the test finishes, we get a report from where we analyse the following values:

  • # requests: number of requests sent in total
  • # reqs/sec: number of requests sent per second
  • Median (ms): median response time
  • Average (ms): average response time

In total, we ran 18 tests, using all serializers on all three payload variants, with and without processing the request message (i.e. deserializing it). The results don’t take the serialization step into account, since I handled that beforehand. In any case, it’s still a cost you should account for.

Each test run lasted 10 minutes, during which we simulated 300 users, each sending a request every second. Between tests, the service was restarted and warmed up.
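(For reference, with current Locust versions an equivalent headless run could be launched with something like `locust -f locustfile.py --headless -u 300 --run-time 10m`; our exact invocation and Locust version may have differed.)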

Experiment 1 — Small message

Message sample

https://github.com/hugofvs/message-serializers/blob/master/json_samples/small_message.json

Message sizes

As expected, messages serialized with Protobuf and MessagePack occupy less space than JSON. It’s also pretty cool to see the 68% size decrease you get with Protobuf by stripping the data structure out of the message, and how fast MessagePack is during deserialization.
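Measuring the sizes yourself is roughly a one-liner per format (a sketch assuming you’ve cloned the repo; the Protobuf line relies on the hypothetical generated class from earlier):

```python
import json

import msgpack

with open("json_samples/small_message.json") as f:
    payload = json.load(f)

print("json:   ", len(json.dumps(payload).encode("utf-8")))  # JSON size in bytes
print("msgpack:", len(msgpack.packb(payload)))                # MessagePack size
# Protobuf: len(SmallMessage(**payload).SerializeToString()), if the schema is flat.
```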

Test Runs

1.1 — Only request time (no deserializer)

1.1.a — JSON

1.1.b — Protobuf

1.1.c — MessagePack

1.2 — Request time + Processing data (with deserializer)

1.2.a — JSON

1.2.b — Protobuf

1.2.c — MessagePack

Summary

Although there is a difference in message size, it is not enough to have an impact on the request times. That’s because the payload by itself makes up just a small percentage of the total request size once you account for the request headers.

Once we add the deserializer to the equation, response times increase slightly for all three serializers, causing a small decrease in the number of requests sent.

Experiment 1 doesn’t offer enough data to conclude whether or not these serializers are the ones we’re looking for. The only thing we know is that Protobuf and MessagePack don’t make much of a difference on small requests.

Experiment 2 — Medium message

Message sample

https://github.com/hugofvs/message-serializers/blob/master/json_samples/medium_message.json

Message sizes

Test Runs

2.1 — Only request time (no deserializer)

2.1.a — JSON

2.1.b — Protobuf

2.1.c — MessagePack

2.2 — Request time + Processing data (with deserializer)

2.2.a — JSON

2.2.b — Protobuf

2.2.c — MessagePack

Summary

Much like in Experiment 1, there’s a clear difference in message size between the serializers, but this time we can also see the effect it has on the request times.

During the tests with only request time (2.1), the results are not as linear as one would think. The first thing you notice is that JSON was the slowest of the three, which is understandable since it also has the heaviest payload. But that rule does not apply to the others: although MessagePack’s (2.1.c) payload is not as small as Protobuf’s (2.1.b), it still got better results on the other metrics. This is probably because the difference in payload sizes is not big enough to matter, and fluctuations in connection speed still make a lot of difference.

On the other hand, in the test run with deserialization (2.2) it is pretty clear which one won: Protobuf was the most performant, 2x faster than JSON and 1.6x faster than MessagePack.

We finally started getting some interesting results, but it was still not enough to decide whether to replace JSON, or what to replace it with. In order to gather more data, we decided to turn it all the way up to eleven and add some stress.

Experiment 3 — Large message

Message sample

https://github.com/hugofvs/message-serializers/blob/master/json_samples/large_message.json

Message sizes

Test Runs

3.1 — Only request time (no deserializer)

3.1.a — JSON

3.1.b — Protobuf

3.1.c — MessagePack

3.2 — Request time + Processing data (with deserializer)

3.2.a — JSON

3.2.b — Protobuf

3.2.c — MessagePack

Summary

For this experiment, we analysed an extra metric: the number of failed requests. The reason is that this payload is too heavy for the service at these volumes, resulting in a high percentage of requests left hanging or blocked.

Without processing the message, the response times didn’t change much from Experiment 2. But once you look at the number of failed requests, or at the requests-per-second graphs, you can easily see the toll the payload takes on the service. Although Protobuf had the best results, the difference compared to the others was not that significant.

When we added deserialization (3.2), the story changed entirely. Surprisingly, JSON is no longer a viable option: it took on average 2 seconds to process the payload and return the response. And it’s in this scenario that we can finally see MessagePack shine and rise over Protobuf, being 1.8x faster whilst dealing with a bigger message size (and if you don’t want to do the math, it was 8x faster than JSON).

TL;DR

I’m glad we took the time to test the technology. Before these experiments, I had a naive and oversimplified view of the problem and the variables involved. I thought MessagePack would easily outperform the others, and I was definitely not expecting such different results between experiments.

I can now say with some certainty that while JSON is enough to deal with small payloads, past a certain size, depending on the use case and environment, Python’s JSON serializer becomes a bottleneck.
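If you want a feel for where that bottleneck kicks in on your own machine, a quick micro-benchmark goes a long way (a sketch, not our load-test harness; the payload shape is made up and the absolute numbers will vary):

```python
import json
import timeit

import msgpack

# A crude stand-in for a large payload.
payload = {"segments": [{"id": i, "text": "x" * 100} for i in range(10_000)]}
json_bytes = json.dumps(payload).encode("utf-8")
mp_bytes = msgpack.packb(payload)

# Compare pure deserialization cost over 100 rounds.
print("json:   ", timeit.timeit(lambda: json.loads(json_bytes), number=100))
print("msgpack:", timeit.timeit(lambda: msgpack.unpackb(mp_bytes, raw=False), number=100))
```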

Protobuf produces the smallest request body and in most scenarios surpasses JSON, without even counting the schema validation it brings to the table (maybe it would outrun MessagePack too). This is great, but, in my opinion, it has two big no-nos. First, it requires you to specify the payload schema and then generate code in your project’s language, which can be an expensive investment. Second, it was consistently the slowest serializer/deserializer we tested.
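To put that first point in concrete terms: for every schema you would run something like `protoc --python_out=. small_message.proto` (the file name here is illustrative), then regenerate and redistribute that code whenever the contract changes.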

In short, MessagePack seems to offer the best trade-offs. It has incredibly good performance and versatility, and, in contrast with Protobuf, it is super simple to set up and start working with, regardless of the code base. Also, in the results where MessagePack did not come out on top, it was still pretty good, showing a good balance between cost and benefit. That is, if you’re not in desperate need of schema validation.

We saw how using different serializers reduces the size of the data, and that by itself can come in handy in other situations, such as storage. For example, Pinterest used (and maybe still does!) a combination of Memcache and MessagePack to cache their feeds, and Redis added support for MessagePack within its server-side Lua scripting.
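As a taste of that last point, here’s a hedged sketch pairing the Python msgpack and redis clients with a server-side unpack via the cmsgpack library bundled with Redis’s Lua scripting (the key name and feed layout are made up):

```python
import msgpack
import redis

r = redis.Redis()

# Cache a "feed" as a single MessagePack blob.
feed = [{"id": 1, "title": "hello"}, {"id": 2, "title": "world"}]
r.set("feed:42", msgpack.packb(feed))

# Read a field back server-side, unpacking inside a Lua script.
lua = """
local blob = redis.call('GET', KEYS[1])
local feed = cmsgpack.unpack(blob)
return feed[1]['title']
"""
print(r.eval(lua, 1, "feed:42"))  # b'hello'
```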

In any case, if there’s something to take away from this (I was told there should be), it’s that it’s not enough to just hear about some cool new technology, nod along and go about your day with your assumptions unchallenged. You need to find out more, test it out and get a grasp of it before committing, and, if you’re lucky, learn a thing or two in the process.

References

https://en.wikipedia.org/wiki/HTTP_compression

https://en.wikipedia.org/wiki/Serialization

https://en.wikipedia.org/wiki/Protocol_Buffers

https://en.wikipedia.org/wiki/MessagePack

http://flask.pocoo.org/

https://developers.google.com/protocol-buffers/

https://msgpack.org/index.html

https://news.ycombinator.com/item?id=4090831

https://www.quora.com/Who-uses-MessagePack-in-productio

https://auth0.com/blog/beating-json-performance-with-protobuf/
