Halving Response Times: Lessons Learned

Renato Nascimento · The Miners · Mar 17, 2017

Response times under 100 ms can lighten anyone’s heart.

It’s even better when you reach that milestone as a side effect of some rounds of refactoring — even when performance isn’t an explicit priority, good engineering certainly shows measurable results.

I’ll walk you through what I learned reaching a nice 85 ms average for a Rails API that used to average up to 270 ms 🚗💨💨

Note: I’ll assume you are treating your DB well, okay? N+1 queries or missing indexes can be a major bottleneck. I’m also assuming your instances are well-dimensioned for the throughput your application has to handle.

Slowest transactions should be your priority

Some widely used metrics will guide you to take care of your slowest transactions first. Apdex, for example.

T is your ideal time in seconds; any request finishing below T gives full points to your application. T = 0.2 s is commonly adopted.

Between T and 4T is the tolerating range: each request there counts only half toward the score.

Above 4T, we enter the frustrated range. Your application receives a zero score for those requests.
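Putting those ranges together, the standard Apdex score is:

Apdex(T) = (satisfied requests + tolerating requests / 2) / total requests

A score of 1.0 means every request finished under T.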

I personally consider this metric way better than just looking at the average.

Slower transactions will block your web workers, potentially queueing up requests and having a multiplied impact on overall response times, especially during throughput peaks.

In addition, if you use Heroku, be extra cautious on this topic, as this article states:

Even worse, however, is that due to the random-routing algorithm Heroku uses for load balancing, a single slow dyno brings the entire app to its knees. It’s well-known that intermixing fast and slow response times in a single Heroku app wreaks havoc on overall app performance.

Caching is hard

Just look around your code and you’ll see many opportunities to apply fragment caching.
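For example, in a JSON API you can wrap the serialization of a single record in a cache fetch (a minimal sketch; the Product model and its fields are made up for illustration):

```ruby
# Hypothetical helper: caches one fragment of the response per
# record; the record's cache_key keeps the entry fresh
# (more on keys below).
def product_payload(product)
  Rails.cache.fetch([product, "payload"]) do
    product.as_json(only: [:id, :name, :price])
  end
end
```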

Wait, there are other types of caching? Yes! But fragment caching is the most versatile. Plus, it extends naturally to Russian-doll caching. Take it from the experts:

As a tip to newcomers to caching, my advice is to ignore action caching and page caching. The situations where these two techniques can be used is so narrow that these features were removed from Rails as of 4.0. I recommend instead getting comfortable with fragment caching.

Well, make sure you know how to properly implement caching; otherwise you may blow things up!

Here’s why:

When a key does not have enough information, you may find yourself wondering how you’re going to invalidate stale cache keys, and that’s where things start to become dangerous.

There are only two hard things in Computer Science: cache invalidation and naming things.

— Phil Karlton

Cache invalidation is complex and error-prone. Let’s see how to avoid needing it at all.

A good key for any given data structure will contain at least these values:

  1. A kind;
  2. An identifier;
  3. A timestamp;
  4. A version.

The first two are simple to understand — among all your keys, you have to know what you’re looking for.

The third is a timestamp. It’ll prevent your app from serving stale data.

For Rails devs: you can use the #cache_key method to see the first three in practice.
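For a hypothetical Product record:

```ruby
product = Product.find(1)

product.cache_key
# => "products/1-20170317120000000000"
# kind ("products"), identifier (1), and a timestamp
# derived from updated_at (exact format varies by Rails version)
```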

What if I change my data? Simple: a brand new key will be used.

Your cache system will eventually evict the stale keys, as they aren’t being used anymore. It’s that simple!

And last, but not least, we also need a version.

Well, developers sometimes change the shape of a given data structure. After a deploy, the new code may not play well with the old version of that structure, which the cache will keep returning.

This can cause a bug in production, and, take it from me, it eventually WILL cause one if you miss this simple step:
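A minimal sketch (the key layout is illustrative, and serialize is a stand-in for whatever builds the cached structure):

```ruby
# Before the structural change:
Rails.cache.fetch(["v1", product.cache_key]) { serialize(product) }

# After changing the shape of the cached data, bump the version
# so old entries are simply never read again:
Rails.cache.fetch(["v2", product.cache_key]) { serialize(product) }
```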

The v1 and v2 in the snippet above identify the version of the data structure. Once your code changes, old versions will stop being retrieved.

However, a structural change may be subtle, yet problematic if it goes unnoticed by the developer. Use automated tests to your advantage :)

More info in this article by DHH.

Algorithmic optimization

I consider myself an old-school programmer. I like to spend time reading about optimization problems (like this and this) that are nearly irrelevant in today’s industry.

But the fact is that algorithms and data structures that seem, at first glance, better suited for a problem often show the same efficiency as, or are even outperformed by, their simpler counterparts in real-world situations. If you don’t believe me (as you shouldn’t), watch this keynote by Bjarne Stroustrup on a well-known case:

So, I would only mind the complexity of a block of code if it seems extremely bad, or if it deals with a very large n. Otherwise, the rule of thumb is to trust and use your native standard library as much as possible, especially if you are dealing with Ruby.

Ruby (and Rails) does not offer every algorithm and data structure you would find in a textbook. But trying to outperform the ones it does offer has not been an easy task for me.
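You can check this yourself with the standard Benchmark module. In my experience, the C-implemented built-ins are hard to beat with hand-rolled Ruby (a sketch; numbers will vary by machine):

```ruby
require 'benchmark'

array = Array.new(1_000_000) { rand(1_000) }

Benchmark.bm(12) do |bm|
  # Built-in, implemented in C:
  bm.report("Array#max") { array.max }

  # Hand-rolled equivalent in pure Ruby:
  bm.report("hand-rolled") do
    best = array.first
    array.each { |n| best = n if n > best }
    best
  end
end
```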

A much better way to tell whether it is worthwhile to optimize a piece of code, though, is by profiling the application.

A flame graph.

Let’s say you just came up with an improvement for an algorithm: what would be its impact on the whole stack, given realistic requests being made?

Or, the other way around: which code fragments have the biggest impact on your stack, and should therefore be the first candidates for optimization?

We need answers to both questions. If you’re a Ruby dev, playing around with rack-mini-profiler is definitely a good starting point. Install the gem and run your app (even locally) in production mode. Study your results and have fun exploring the flame graphs (like the one above 😛).
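A minimal setup sketch, assuming a standard Rails Gemfile:

```ruby
# Gemfile
gem 'rack-mini-profiler'
gem 'stackprof'   # sampling profiler that powers the flame graphs
gem 'flamegraph'
```

After bundling, append ?pp=flamegraph to any URL in your app to render a flame graph for that request.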

Functional style pays off

This topic is the most experimental one (for me) in this article.

An advantage of using Ruby is that your OO code can live alongside well-written functional-style code. Developers should have no problem composing functions, as operations like reduce and map are widely available in the standard library.

When the business logic is beyond trivial, which was my case, you might want to start playing with side-effect-free functions, composing them as needed and using patterns like stateless service objects.

Why? Things get simpler when you don’t use free variables. Otherwise, you need to ensure that all your variables look good at all times, for every method. Going functional means you only need to worry about your input parameters and what’s being returned (and nothing else), since pure functions are free of side effects.
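A minimal sketch of what that looks like (the Pricing module and its data are made up for illustration):

```ruby
# A hypothetical stateless service object: no instance state,
# no free variables; the output depends only on the inputs.
module Pricing
  def self.total(items, discount: 0.0)
    subtotal = items.reduce(0.0) do |sum, item|
      sum + item[:price] * item[:quantity]
    end
    subtotal * (1.0 - discount)
  end
end

Pricing.total([{ price: 10.0, quantity: 2 }], discount: 0.1)
# => 18.0
```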

But is functional code more performant? Well, not necessarily. It will, however, help you pinpoint the conditions under which your performance falls short, while improving predictability and testability by eliminating those free variables.

If you want more detailed arguments on this topic, check this article by Dave Copeland.

That’s it!

These topics served me as guidelines for optimizing a Rails API. I hope they’ll be useful for you too!

If you liked this article, also check out the following one 😜

Renato Nascimento
The Miners

Reliability Engineer @ In Loco — reading and writing about computer science :)