How PostgreSQL query analysis helped gain a 290x performance boost for a particular request

Julija Alieckaja
7 min read · May 7, 2018


I was once asked to investigate performance issues with a search query in a project I was working on.
The data model our team was dealing with included, among other things, three types of entities: episodes, seasons, and series. A series can have multiple seasons, and each season can have multiple episodes.
The search was intended to find all episodes whose name matches the search query, or episodes belonging to a series with a matching name (case-insensitive). The query itself is pretty simple, yet on average it took about 2 seconds, with spikes up to 60 seconds resulting in page timeouts.

SELECT "episodes".* FROM "episodes" 
INNER JOIN "seasons"
ON "seasons"."id" = "episodes"."season_id"
INNER JOIN "series"
ON "series"."id" = "seasons"."series_id"
WHERE (series.name ILIKE '%friends%'
OR episodes.name ILIKE '%friends%')

The simplified table schema is as follows:

Table "public.series"
   Column    |            Type
-----------+-----------------------------
id | integer
name | citext
created_at | timestamp without time zone
updated_at | timestamp without time zone
Indexes:
"series_pkey" PRIMARY KEY, btree (id)
"index_series_on_name_gin_trgm" gin (name gin_trgm_ops)
Table "public.seasons"
   Column    |            Type
-----------+-----------------------------
id | integer
created_at | timestamp without time zone
updated_at | timestamp without time zone
number | integer
series_id | integer
Indexes:
"seasons_pkey" PRIMARY KEY, btree (id)
"index_seasons_on_series_id" btree (series_id)
Table "public.episodes"
   Column    |            Type
-----------+-----------------------------
id | integer
name | citext
created_at | timestamp without time zone
updated_at | timestamp without time zone
season_id | integer
Indexes:
"episodes_pkey" PRIMARY KEY, btree (id)
"index_episodes_on_name_gin_trgm" gin (name gin_trgm_ops)
"index_episodes_on_season_id" btree (season_id)

An attentive reader will immediately notice the combination of citext fields with trigram (trgm) indexes in the schema.

The citext module provides a case-insensitive character string type, citext. Essentially, it internally calls lower when comparing values. Otherwise, it behaves almost exactly like text.

The traditional alternative, calling lower on both sides of the comparison, works reasonably well but has a number of drawbacks:

- It makes your SQL statements verbose, and you always have to remember to use lower on both the column and the query value.
- It won’t use an index, unless you create a functional index using lower.
- If you declare a column as UNIQUE or PRIMARY KEY, the implicitly generated index is case-sensitive. So it’s useless for case-insensitive searches, and it won’t enforce uniqueness case-insensitively.
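For illustration, the lower()-based approach the documentation refers to would look roughly like this (the index name here is hypothetical):

```sql
-- Case-insensitive matching without citext: a functional index on lower(name).
CREATE INDEX index_series_on_lower_name ON series (lower(name));

-- lower() must be applied on both sides, otherwise the index is not used:
SELECT * FROM series WHERE lower(name) = lower('Friends');
```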

Indeed, the trigram indexes on the citext fields were not used.

=# explain analyze 
SELECT "episodes".* FROM "episodes"
INNER JOIN "seasons"
ON "seasons"."id" = "episodes"."season_id"
INNER JOIN "series"
ON "series"."id" = "seasons"."series_id"
WHERE (series.name ILIKE '%friends%'
OR episodes.name ILIKE '%friends%');
                          QUERY PLAN
---------------------------------------------------------------
 Hash Join  (cost=1681.89..178323.06 rows=9372 width=604) (actual time=21.474..2033.074 rows=422 loops=1)
   Hash Cond: (episodes.season_id = seasons.id)
   Join Filter: ((series.name ~~* '%friends%'::citext) OR (episodes.name ~~* '%friends%'::citext))
   Rows Removed by Join Filter: 920117
   ->  Seq Scan on episodes  (cost=0.00..154827.81 rows=2058881 width=604) (actual time=0.039..555.465 rows=2058882 loops=1)
   ->  Hash  (cost=1339.35..1339.35 rows=27403 width=22) (actual time=20.981..20.981 rows=27432 loops=1)
         Buckets: 32768  Batches: 1  Memory Usage: 1692kB
         ->  Hash Join  (cost=484.28..1339.35 rows=27403 width=22) (actual time=5.404..16.126 rows=27432 loops=1)
               Hash Cond: (series.id = seasons.series_id)
               ->  Seq Scan on seasons  (cost=0.00..477.94 rows=27494 width=8) (actual time=0.006..3.005 rows=27523 loops=1)
               ->  Hash  (cost=289.68..289.68 rows=15568 width=22) (actual time=5.319..5.319 rows=15568 loops=1)
                     Buckets: 16384  Batches: 1  Memory Usage: 956kB
                     ->  Seq Scan on series  (cost=0.00..289.68 rows=15568 width=22) (actual time=0.007..2.606 rows=15568 loops=1)
 Planning time: 4.426 ms
 Execution time: 2033.295 ms
(15 rows)

The first solution that comes to mind is to switch to the string data type. Yet since there was a business requirement to order records by name case-insensitively, I would have needed to add several new indexes on the LOWER expression and apply LOWER everywhere search, name ordering, or direct name comparison occurs. So I decided to consider a few other possible solutions as well.
Another option is to keep the citext data type and replace the trgm index with a tsvector one, as that index type does support citext. I also decided to add a regular btree index for ordering, as there wasn't one. The tsvector approach seemed like a good option, so I decided to give it a try.
Series table:

Table "public.series"
   Column    |            Type
-----------+-----------------------------
id | integer
name | citext
created_at | timestamp without time zone
updated_at | timestamp without time zone
Indexes:
"series_pkey" PRIMARY KEY, btree (id)
"index_series_on_name" btree (name)
"index_series_on_name_gin_tsvector" gin (to_tsvector('english'::regconfig, name::text))
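For reference, the two new indexes above can be reconstructed as the following statements (names taken from the schema):

```sql
-- btree index for case-insensitive ordering (citext compares case-insensitively):
CREATE INDEX index_series_on_name ON series (name);

-- GIN index over the tsvector representation of the name:
CREATE INDEX index_series_on_name_gin_tsvector
    ON series USING gin (to_tsvector('english', name::text));
```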

The new query using tsvector:

SELECT "episodes".* FROM "episodes" 
INNER JOIN "seasons"
ON "seasons"."id" = "episodes"."season_id"
INNER JOIN "series"
ON "series"."id" = "seasons"."series_id"
WHERE
(to_tsvector('english', series.name) @@ plainto_tsquery('friends')
OR to_tsvector('english', episodes.name) @@ plainto_tsquery('friends'))

An instant drawback: the dictionary language must be specified. Another is that tsvector is a good choice for finding whole words, but it can't efficiently search for substrings, so it won't fit cases where the search should find, for example, "Dr. Strangelove" by the word "love".
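A quick check in a psql session illustrates the word-boundary limitation; I'd expect the first query to return false, since to_tsvector splits text into whole-word lexemes:

```sql
-- Full-text search matches lexemes, not substrings:
SELECT to_tsvector('english', 'Dr. Strangelove') @@ plainto_tsquery('love');
-- expected: false ("love" is not a separate lexeme in "Dr. Strangelove")

-- A trigram-backed ILIKE does match substrings:
SELECT 'Dr. Strangelove' ILIKE '%love%';
-- expected: true
```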
Getting back to my examples: with the tsvector index, the query's performance degraded from 2 seconds to 9.8 seconds! "What the heck," I thought, and ran the SQL analyzer.

=# explain analyze
SELECT "episodes".* FROM "episodes"
INNER JOIN "seasons"
ON "seasons"."id" = "episodes"."season_id"
INNER JOIN "series"
ON "series"."id" = "seasons"."series_id"
WHERE
(to_tsvector('english', series.name) @@ plainto_tsquery('friends')
OR to_tsvector('english', episodes.name) @@ plainto_tsquery('friends');
                          QUERY PLAN
--------------------------------------------------------------------
 Hash Join  (cost=1681.89..1117828.09 rows=9372 width=613) (actual time=27.600..9885.350 rows=409 loops=1)
   Hash Cond: (episodes.season_id = seasons.id)
   Join Filter: ((to_tsvector('english'::regconfig, (series.name)::text) @@ plainto_tsquery('friends'::text)) OR (to_tsvector('english'::regconfig, (episodes.name)::text) @@ plainto_tsquery('friends'::text)))
   Rows Removed by Join Filter: 920130
   ->  Seq Scan on episodes  (cost=0.00..154827.82 rows=2058882 width=613) (actual time=0.041..601.468 rows=2058882 loops=1)
   ->  Hash  (cost=1339.35..1339.35 rows=27403 width=36) (actual time=23.603..23.603 rows=27432 loops=1)
         Buckets: 32768  Batches: 1  Memory Usage: 1692kB
         ->  Hash Join  (cost=484.28..1339.35 rows=27403 width=36) (actual time=5.835..18.186 rows=27432 loops=1)
               Hash Cond: (series.id = seasons.series_id)
               ->  Seq Scan on seasons  (cost=0.00..477.94 rows=27494 width=8) (actual time=0.035..2.805 rows=27523 loops=1)
               ->  Hash  (cost=289.68..289.68 rows=15568 width=36) (actual time=5.784..5.784 rows=15568 loops=1)
                     Buckets: 16384  Batches: 1  Memory Usage: 956kB
                     ->  Seq Scan on series  (cost=0.00..289.68 rows=15568 width=36) (actual time=0.005..2.504 rows=15568 loops=1)
 Planning time: 2.662 ms
 Execution time: 9885.518 ms
(15 rows)

Seq scans only, and the tsvector index is completely ignored.
Out of curiosity, I replaced the OR condition with AND.

=# explain analyze
SELECT "episodes".* FROM "episodes"
INNER JOIN "seasons"
ON "seasons"."id" = "episodes"."season_id"
INNER JOIN "series"
ON "series"."id" = "seasons"."series_id"
WHERE
(to_tsvector('english', series.name) @@ plainto_tsquery('banana')
AND to_tsvector('english', episodes.name) @@ plainto_tsquery('banana');
                          QUERY PLAN
--------------------------------------------------------------------
 Nested Loop  (cost=13.57..13073.82 rows=24 width=613) (actual time=1.248..8.646 rows=82 loops=1)
   ->  Nested Loop  (cost=13.14..753.63 rows=137 width=4) (actual time=0.894..2.926 rows=14 loops=1)
         ->  Bitmap Heap Scan on series  (cost=12.85..173.36 rows=78 width=4) (actual time=0.048..0.073 rows=13 loops=1)
               Recheck Cond: (to_tsvector('english'::regconfig, (name)::text) @@ plainto_tsquery('banana'::text))
               Heap Blocks: exact=13
               ->  Bitmap Index Scan on index_series_on_name_gin_tsvector  (cost=0.00..12.83 rows=78 width=0) (actual time=0.042..0.042 rows=13 loops=1)
                     Index Cond: (to_tsvector('english'::regconfig, (name)::text) @@ plainto_tsquery('banana'::text))
         ->  Index Scan using index_seasons_on_series_id on seasons  (cost=0.29..7.42 rows=2 width=8) (actual time=0.217..0.217 rows=1 loops=13)
               Index Cond: (series_id = series.id)
   ->  Index Scan using index_episodes_on_season_id on episodes  (cost=0.43..89.92 rows=1 width=613) (actual time=0.268..0.399 rows=6 loops=14)
         Index Cond: (season_id = seasons.id)
         Filter: (to_tsvector('english'::regconfig, (name)::text) @@ plainto_tsquery('banana'::text))
         Rows Removed by Filter: 8
 Planning time: 0.907 ms
 Execution time: 8.739 ms
(15 rows)

Whoa, only 8 ms under almost the same circumstances. But why doesn't it use the indexes with the OR condition?

The explanation of this behavior is as follows:

When AND is used, the filtering expressions of the WHERE clause can be applied to the corresponding tables before the join, so an index scan can be performed on each table using its own filtering expression.
When OR is used, the filtering expressions cannot be applied before the join is performed, because whether a row from a particular table appears in the output does not depend solely on the WHERE expressions related to that table. So all rows from both tables have to be fetched, and the PostgreSQL planner uses seq scans for such cases.

SQL datastores do not allow creating indexes that span several tables, which would make it possible to serve an OR condition across tables efficiently with index scans, so we have to find a workaround.

A possible solution is to use SQL UNION.

SELECT "episodes".* FROM "episodes" 
INNER JOIN "seasons"
ON "seasons"."id" = "episodes"."season_id"
INNER JOIN "series"
ON "series"."id" = "seasons"."series_id"
WHERE (to_tsvector('english', episodes.name) @@ plainto_tsquery('friends'))
UNION SELECT "episodes".* FROM "seasons"
INNER JOIN "series" ON "series"."id" = "seasons"."series_id"
INNER JOIN "episodes" ON "seasons"."id" = "episodes"."season_id"
WHERE (to_tsvector('english', series.name) @@ plainto_tsquery('friends'))

This notation works: all required indexes are used, and the query takes ~450 ms. Unfortunately, UNION has poor support in the ORM used in the application, and since the queries were built dynamically using the ORM's embedded tools, this solution would have required a lot of changes to the code of the search query objects.

In my case, the name fields do not change often, so it made sense to create a denormalized field of string type with a trgm index in the episodes table, storing the episode and series names combined, and to sync it via callbacks on episode/series name updates.

Table "public.episodes"
   Column    |            Type
-------------+-----------------------------
id | integer
name | citext
created_at | timestamp without time zone
updated_at | timestamp without time zone
season_id | integer
common_name | character varying
Indexes:
"episodes_pkey" PRIMARY KEY, btree (id)
"index_episodes_on_name" btree (name)
"index_episodes_on_common_name_gin_trgm" gin (common_name gin_trgm_ops)
"index_episodes_on_season_id" btree (season_id)
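The article keeps common_name in sync with application-level callbacks; as an alternative, the same sync can be sketched at the database level with a trigger (function and trigger names here are hypothetical, and a series rename would additionally need its own trigger to refresh dependent episodes):

```sql
-- Recompute common_name from the episode name and its series name
-- whenever an episode is inserted or its name/season changes.
CREATE OR REPLACE FUNCTION episodes_sync_common_name() RETURNS trigger AS $$
BEGIN
  SELECT NEW.name || ' ' || series.name
    INTO NEW.common_name
    FROM seasons
    JOIN series ON series.id = seasons.series_id
   WHERE seasons.id = NEW.season_id;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_episodes_common_name
BEFORE INSERT OR UPDATE OF name, season_id ON episodes
FOR EACH ROW EXECUTE PROCEDURE episodes_sync_common_name();
```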

The query analysis:

explain analyze 
SELECT "episodes"."name"
FROM "episodes" WHERE
episodes.common_name ILIKE '%friends%';
                          QUERY PLAN
--------------------------------------------------------------------
 Bitmap Heap Scan on episodes  (cost=109.58..904.27 rows=204 width=32) (actual time=3.169..7.504 rows=905 loops=1)
   Recheck Cond: ((common_name)::text ~~* '%friends%'::text)
   Heap Blocks: exact=792
   ->  Bitmap Index Scan on index_episodes_on_common_name_gin_trgm  (cost=0.00..109.53 rows=204 width=0) (actual time=3.071..3.071 rows=905 loops=1)
         Index Cond: ((common_name)::text ~~* '%friends%'::text)
 Planning time: 7.433 ms
Execution time: 7.667 ms
(7 rows)

As a result, we gained a huge performance improvement (on average ~290 times faster).

This case shows how minor mistakes in query and data model design can lead to performance bottlenecks, and how extremely useful the EXPLAIN ANALYZE command can be.
