Sam Malayek
1 min read · Dec 8, 2021


Great post on Cursor Pagination -- the best out there!

I just need to be a bit annoying (sorry!) and make a note regarding your 2-column cursor pagination time complexity. I ran your test on a table with 7 million (7,037,382) rows and got the following `EXPLAIN ANALYZE` result: https://explain.depesz.com/s/2xYK0
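For anyone following along, the query was roughly of this shape (table and column names below are placeholders, not the post's actual schema or my exact test). Without a composite index covering both cursor columns, Postgres falls back to the parallel sequential scan + sort + gather merge + limit plan linked above:

```sql
-- Illustrative 2-column (keyset) cursor pagination query.
-- "events", "created_at", and "id" are placeholder names.
-- With no index on (created_at, id), the planner has to scan
-- the whole table, sort it, and only then apply the LIMIT.
EXPLAIN ANALYZE
SELECT *
FROM events
WHERE (created_at, id) > ('2021-12-01 00:00:00', 1000000)
ORDER BY created_at, id
LIMIT 100;
```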

It seems like the time complexity should be closer to `N` (for the parallel sequential scan) + some function of `L` for the sort, gather merge, and limit.

Maybe in this specific case, `N` should be `N/2` because of the multiple workers. In either case, our tests are certainly not identical. It looks like you have fewer characters in your columns, and if your data was sorted on insert, then you've got neighboring rows sitting in the same blocks on disk. Maybe this explains why you were able to achieve an awesome 4-second load time! I get 17 seconds with or without concatenation (sketched below). Anyway, analyzing DB queries and their time complexities is tricky business, given the various optimizations and decisions the DB engine can make, not to mention the differences between DB engines themselves.
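To be explicit about what I mean by "with or without concatenation": the variant I tried looked roughly like the sketch below (placeholder names again), comparing a single concatenated cursor string instead of a row-value tuple. It made no measurable difference for me, since without an expression index it still forces a full scan and sort.

```sql
-- Concatenation variant (placeholder names): build one cursor string
-- from both columns and filter/order on that expression instead of the
-- (created_at, id) tuple. Note that in practice the numeric part would
-- need zero-padding for the string ordering to match the tuple ordering.
EXPLAIN ANALYZE
SELECT *
FROM events
WHERE created_at::text || '_' || id::text > '2021-12-01 00:00:00_1000000'
ORDER BY created_at::text || '_' || id::text
LIMIT 100;
```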

Note: Tested on `bitnami/postgresql` Docker image running Postgres 11.14.

BTW: I extended this very good blog post and elaborated on previous page tokens: https://welcometosoftware.com/cursor-pagination-previous-page (the post also refines your mostly-correct 1-column cursor pagination time complexity).
