N+1: making a virtue of necessity

A couple of months ago, one of the Ruby podcasts mentioned this “funny” article: N+1 is a rails feature. After that, somebody cited it in the comments as a real argument that N+1 could be a feature in some cases. You cannot be serious!

N+1 is always a bug! Always!

In my previous article about collection rendering optimization, “Rails nitro fast collection rendering”, I introduced a simple mathematical abstraction for a page that renders a collection:

Let’s analyze the total page render time by splitting it into three pieces: collection-instantiation-time (= ci-time), collection-render-time (= cr-time) and all-other-time (= ao-time).

and based on that assumption I derived a simple formula to calculate your possible speedup:

possible speedup: ( cr-time + ci-time + ao-time ) / ( a*ci-time + b*cr-time + ao-time ), where a < 1 and b < 1 are performance optimization coefficients.
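To make the formula concrete, here is a tiny Ruby sketch that evaluates it. All numbers are made up purely for illustration (think of them as milliseconds); the function name is mine, not from the original article.

```ruby
# Evaluate the speedup formula above:
# (cr + ci + ao) / (a*ci + b*cr + ao), with a < 1 and b < 1.
def possible_speedup(ci:, cr:, ao:, a:, b:)
  (cr + ci + ao) / (a * ci + b * cr + ao)
end

# Say instantiation takes 200ms, rendering 300ms, everything else 100ms,
# and an optimization cuts instantiation to 10% and rendering to 20%:
speedup = possible_speedup(ci: 200.0, cr: 300.0, ao: 100.0, a: 0.1, b: 0.2)
puts speedup.round(2) # => 3.33
```

Note that ao-time is untouched in the denominator, which is why it caps the achievable speedup no matter how small a and b get.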

In these terms, they are trying to drive the a coefficient toward zero. Assume we have N records, M of them are touched (stale), and N-M are cached. What we are really comparing is M+1 queries against N-M useless instantiations.

What is really worse: M+1 queries or N-M instantiations? And how do we properly deal with both?

First, if I really had to choose, I’d go for the N-M instantiations. Choosing queries over instantiations, and thereby baking rock-solid M+1 queries into the page, is far from reasonable. If it’s some stale feed nobody is interested in, fine, this can bring some speedup (as you may remember, instantiation time matters), but I will never sleep tight knowing that my page fires M+1 queries.
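To see where the N-M instantiations come from, here is a plain-Ruby simulation of the conventional per-record fragment-cache pattern (the names and the hash-as-cache are illustrative, not the actual Rails API): because the cache key depends on the record’s `updated_at`, every record must be instantiated just to compute its key, even when most fragments are cache hits.

```ruby
# Conventional fragment caching, simulated: all N records are
# instantiated up front; only the M misses are actually rendered.
Post = Struct.new(:id, :updated_at)

rows  = (1..5).map { |i| [i, 100 + i] }  # 5 rows in the "database"
cache = (1..4).to_h { |i| ["post/#{i}/#{100 + i}", "<li>#{i}</li>"] }

instantiated = 0
posts = rows.map { |id, ts| instantiated += 1; Post.new(id, ts) } # all N

rendered = 0
html = posts.map do |post|
  cache.fetch("post/#{post.id}/#{post.updated_at}") do
    rendered += 1
    "<li>#{post.id}</li>"                # only the single miss renders
  end
end

puts "instantiated #{instantiated}, rendered #{rendered}"
# => instantiated 5, rendered 1
```

Here M = 1 and N-M = 4: four records were instantiated for no purpose other than key computation.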

If you want to do it right, do not make a virtue of necessity: face the real problem.

The real problem is to separate cached records from non-cached ones and apply includes, joins and all the other magic only to those that really need it. Then we can avoid both the M+1 queries and the N-M instantiations.

This can surely be done if you keep your cache and your data in the same storage.

You can read the code inside my nitro_pg_cache gem (the db_cache_collection_r definition). The algorithm works in an unusual way:

‘reverse’ (db_cache_collection_r) is not like the usual cache algorithms; it uses ‘reversed’ logic: we build a special SQL query only for the non-cached elements, render them, and then aggregate the results over the originally given collection. This special SQL query carries all the includes, joins and select from the original query, so we escape N+1 problems the same way the usual cache does. This approach is faster even on a fully non-cached collection.
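The ‘reversed’ logic can be sketched in plain Ruby (a simplified simulation with hashes standing in for the cache storage and the database; the names are mine, not the gem’s internals): look up the cache for every id first, build one query only for the missing ids, and merge the fragments back in the original order.

```ruby
# 'Reversed' cache flow, simulated:
# 1. check the cache for all ids,
# 2. issue ONE query just for the misses (in Rails, this is where the
#    original scope's includes / joins / select would be preserved,
#    so the non-cached branch avoids N+1 too),
# 3. aggregate fragments back in the original collection order.
Post = Struct.new(:id)

ids   = [1, 2, 3, 4, 5]                        # the requested collection
cache = { 1 => "<li>1</li>", 2 => "<li>2</li>", 3 => "<li>3</li>" }

missing_ids = ids.reject { |id| cache.key?(id) }

fresh = missing_ids.map { |id| Post.new(id) }  # only M records instantiated
fresh.each { |post| cache[post.id] = "<li>#{post.id}</li>" }

html = ids.map { |id| cache[id] }              # aggregate in original order

puts "instantiated #{fresh.size} of #{ids.size}"
# => instantiated 2 of 5
```

Only the two misses were instantiated and rendered, and a single query would cover both of them, so neither the M+1 queries nor the N-M instantiations survive.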

The M+1 queries are gone, and the N-M useless instantiations are gone too. Profit!

If you have questions about the idea or the implementation, feel free to ask. I also recommend reading about collection rendering optimization; it may be useful!

