Large tables behave differently

David Santoro
Carwow Product, Design & Engineering
Apr 14, 2016 · 1 min read

While on bug duty, I noticed that a query was timing out.

The ruby code was simple:
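Something along these lines, as a sketch — the `Message` model and its polymorphic `author` association are assumed here:

```ruby
# Most recent message sent by a given user.
# messages has author_type / author_id columns (polymorphic association).
Message
  .where(author_type: "User", author_id: user.id)
  .order(created_at: :desc)
  .limit(1)
  .first
```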

The query it produces is:
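Roughly this (again a sketch, with assumed table and column names):

```sql
SELECT "messages".*
FROM "messages"
WHERE "messages"."author_type" = 'User'
  AND "messages"."author_id" = 123
ORDER BY "messages"."created_at" DESC
LIMIT 1;
```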

Somehow that query takes a very long time. It's very strange: we have indices on author_type, author_id and created_at, and there aren't many messages per user. So why is it slow!?
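`EXPLAIN ANALYZE` is the quickest way to see which plan Postgres actually picks for a query like this (same sketched query as above):

```sql
EXPLAIN ANALYZE
SELECT "messages".*
FROM "messages"
WHERE "messages"."author_type" = 'User'
  AND "messages"."author_id" = 123
ORDER BY "messages"."created_at" DESC
LIMIT 1;
```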

I tried to run this query instead:
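The same query, only with a bigger limit (sketch):

```sql
SELECT "messages".*
FROM "messages"
WHERE "messages"."author_type" = 'User'
  AND "messages"."author_id" = 123
ORDER BY "messages"."created_at" DESC
LIMIT 10;
```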

I only changed the limit from 1 to 10, and it's way faster!

With LIMIT 1, Postgres decides to walk the created_at index backwards until it finds the first message that matches the conditions. The planner bets that a matching row will turn up almost immediately, which isn't true for a user with only a handful of messages in a huge table.

Once I noticed the oddity, I managed to find a Stack Overflow question about the same problem. Read that for a more in-depth explanation: http://stackoverflow.com/questions/21385555/postgresql-query-very-slow-with-limit-1
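One fix discussed there is a composite index that serves both the filter and the ordering, so the planner can walk a single index and stop at the first match — a sketch, with assumed names:

```sql
-- One index covering the WHERE clause and the ORDER BY together,
-- so a LIMIT 1 scan can stop at the first matching entry.
CREATE INDEX index_messages_on_author_and_created_at
  ON messages (author_type, author_id, created_at DESC);
```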

Due to the increased number of users, the data in our DB has been growing rapidly. It's a nice problem to have, but several queries that worked fine with a smaller number of rows are now struggling and require our attention.

I think it’s time for me to read a couple of books on Postgres performance.

Interested in making an impact? Join the carwow team!
Feeling social? Connect with us on Twitter and LinkedIn :-)
