Put Your HTTP Requests on a Diet

A how-to guide to compressing web client HTTP requests

Christopher Scott
Axiom Zen Team
3 min read · Nov 8, 2017



If you find yourself in a situation where you are sending large HTTP request payloads from the web browser to your API, you’ve probably made an architectural or design mistake. That said, our team was recently in a situation where regularly sending 200–300KB requests gave us the fewest tradeoffs.

As a consequence, we saw long request transfer times. Browsers automatically decompress HTTP response bodies through content negotiation (the client advertises support with the Accept-Encoding header, and the server labels the compressed response with Content-Encoding), but they do not compress request bodies the same way. Surely there was a similar mechanism that we could enable for this case?

We were wrong

Our search for anything relating to client-side request body compression was mostly fruitless. As much as we like to avoid shaving yaks at Axiom Zen, this beast needed a haircut.

Yak shaving is what you are doing when you’re doing some stupid, fiddly little task that bears no obvious relationship to what you’re supposed to be working on, but yet a chain of twelve causal relations links what you’re doing to the original meta-task.

Before we arrived at client-side GZIP compression as our solution, we explored a number of alternatives:

Msgpack looked promising. However, it’s not efficient for large requests, which is exactly our problem. Furthermore, there seemed to be quite a bit of negative sentiment towards Msgpack from developers on various sites.

gRPC came to mind as we’ve used it as a microservices communication protocol with success; however, it’s not supported on the browser-side yet, so we had to move on.

Finally, JSONC showed potential, but it required non-trivial changes to our data structure by definition, and the project has numerous open issues regarding round-trip encoding on GitHub. We decided to pass.

GZIP as our G6

Given that the GZIP algorithm has been around for a while, we were hopeful that someone had implemented it in Javascript. We dared to dream that it would also run in the browser. Luckily, our search landed us on the pako npm page, and given its tests and benchmarks, it looked like just the package we were looking for.

As for integrating it with your client-side code base: if you use axios, a popular Javascript HTTP request library, your best bet is probably registering it as a global request transformer:

Notice that we only GZIP requests over a certain size: gzipping small amounts of data will actually increase the overall size, with 1024 bytes being a semi-arbitrary magic number. We recommend fiddling with it to find what works best for you.

For the receiving side, Axiom Zen typically writes backend code in Go (golang). This turned out to be an excellent showcase of the power and elegance of interfaces in Go, as seen in this toy example:

Notice how the json.NewDecoder() function doesn’t care what io.Reader it is sent.

Another thing to note: if you need to do cross-origin resource sharing (CORS), you’ll need to add the Content-Encoding header to your list of Access-Control-Allow-Headers when handling pre-flight OPTIONS requests.

The results

We saw a 10x size reduction (JSON data being quite repetitive by nature) with a tiny CPU penalty client-side. Though our approach may require tuning from your end, we were satisfied with the results, which were especially noticeable when talking to distant servers.

Let us know what you think about our approach in the comments below!

Written by Chris Scott and the Axiom Zen engineering team.
Edited by Yasmine Nadery and Bryce Bladon.

See more stories like this one on Axiom Zen’s blog.
