Scenario
Many reference tables contain upwards of 100k rows and are mutable. We want to join the contents of those reference tables with a source table and dump the result into a new table. Moreover, more than ten reference tables have to be joined.
MySQL configuration:
- Engine: InnoDB
- Version: 8.0
Query
CREATE TABLE result_table SELECT * FROM source_table
LEFT OUTER JOIN ref_table_01 USING (`id`)
LEFT OUTER JOIN ref_table_02 USING (`id`)
LEFT OUTER JOIN ref_table_03 USING (`id`)
But Error 1206 occurs:
[Error 1206] The total number of locks exceeds the lock table size
What is Error 1206?
Error number: 1206; Symbol: ER_LOCK_TABLE_FULL; SQLSTATE: HY000
Message: The total number of locks exceeds the lock table size
InnoDB reports this error when the total number of locks exceeds the amount of memory devoted to managing locks. To avoid this error, increase the value of innodb_buffer_pool_size. Within an individual application, a workaround may be to break a large operation into smaller pieces. For example, if the error occurs for a large INSERT, perform several smaller INSERT operations.
The root cause of Error 1206 is exactly what its name says: the operation acquires more locks than the lock table can hold.
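Before tuning anything, it helps to see how much memory the lock system currently has to work with. A quick check (a sketch; the exact status output varies by server version):

-- Current buffer pool size in bytes; InnoDB allocates lock memory from this pool.
SELECT @@innodb_buffer_pool_size;

-- The TRANSACTIONS section reports, per transaction, lines such as
-- "123 lock struct(s), heap size 31160, 4599 row lock(s)".
SHOW ENGINE INNODB STATUS\G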
Possible Solutions
There are two straightforward approaches: increase the global lock table size, or reduce the size of each operation.
Increase the lock table size
Increase innodb_buffer_pool_size to 8 GB.
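One way to apply this at runtime (a sketch, assuming MySQL 5.7.5 or later, where the buffer pool can be resized online, and a host that actually has 8 GB of RAM to spare):

-- Resize the buffer pool online; the value is in bytes (8 GB here) and is rounded
-- to a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances.
SET GLOBAL innodb_buffer_pool_size = 8589934592;

-- To keep the change across restarts, also set it in my.cnf:
-- [mysqld]
-- innodb_buffer_pool_size = 8G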
Reduce operation size
Divide the large operation into many small queries using WHERE ... BETWEEN ranges, and run them as a batch.
Query:
-- First batch creates the table.
CREATE TABLE result_table SELECT * FROM source_table
LEFT OUTER JOIN ref_table_01 USING (`id`)
LEFT OUTER JOIN ref_table_02 USING (`id`)
LEFT OUTER JOIN ref_table_03 USING (`id`)
WHERE `id` BETWEEN 1 AND 1000;

-- Subsequent batches append to it.
INSERT INTO result_table SELECT * FROM source_table
LEFT OUTER JOIN ref_table_01 USING (`id`)
LEFT OUTER JOIN ref_table_02 USING (`id`)
LEFT OUTER JOIN ref_table_03 USING (`id`)
WHERE `id` BETWEEN 1001 AND 2000;

-- ... and so on for `id` BETWEEN 2001 AND 3000, etc.
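To pick the BETWEEN boundaries, one simple option (a sketch; it assumes `id` is a reasonably dense numeric key) is to look at the key range first and split it into fixed-size chunks:

-- Find the range of ids to split into batches of ~1000 each.
SELECT MIN(`id`) AS min_id, MAX(`id`) AS max_id FROM source_table;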
If the queries are executed sequentially, the lock table rarely overflows. But we want to run them in parallel to speed things up, so the problem remains.
Nonlocking Read
Since the workaround is not sufficient, we dig into the error's root cause.
Why are there so many locks?
A plain SELECT statement sets no locks. But we use a CREATE TABLE ... SELECT statement, which acquires stronger locks. This locking behavior is described in Consistent Nonlocking Reads.
The type of read varies for selects in clauses like INSERT INTO ... SELECT, UPDATE ... (SELECT), and CREATE TABLE ... SELECT that do not specify FOR UPDATE or FOR SHARE:
- By default, InnoDB uses stronger locks for those statements and the SELECT part acts like READ COMMITTED, where each consistent read, even within the same transaction, sets and reads its own fresh snapshot.
- To perform a nonlocking read in such cases, set the isolation level of the transaction to READ UNCOMMITTED or READ COMMITTED to avoid setting locks on rows read from the selected table.
Fortunately, the documentation quoted above also provides a solution. Since our data is immutable while the dump runs, we can safely set the transaction isolation level of these queries to READ UNCOMMITTED.
The final query is:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
CREATE TABLE result_table SELECT * FROM source_table
LEFT OUTER JOIN ref_table_01 USING (`id`)
LEFT OUTER JOIN ref_table_02 USING (`id`)
LEFT OUTER JOIN ref_table_03 USING (`id`);
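To confirm the setting took effect, and to restore the default afterwards (assuming MySQL 8.0, where the session level is exposed as @@transaction_isolation):

-- Should return READ-UNCOMMITTED for the current session.
SELECT @@transaction_isolation;

-- Restore the InnoDB default once the dump is finished.
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;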
Appendix
Transaction Isolation Level
To understand consistent reads, we can study Transaction Isolation Levels. The default level in InnoDB is REPEATABLE READ:
The default isolation level for InnoDB is REPEATABLE READ.
Different levels allow different anomalies: Dirty Read, Non-repeatable Read, and Phantom Read. Please read this post.
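As a quick illustration of the weakest level, here is a dirty-read sketch with a hypothetical accounts table: under READ UNCOMMITTED, session B sees data that session A never commits.

-- Session A
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- not committed yet

-- Session B
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 1;  -- dirty read: sees the uncommitted -100

-- Session A
ROLLBACK;  -- session B has read a value that was never committed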
Phantom reads are still possible under REPEATABLE READ. A good explanation is given in this post written in Traditional Chinese.
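A minimal sketch of how a phantom can appear under REPEATABLE READ, using a hypothetical table t(k, note): the UPDATE is a current read, so it touches the row committed by session B, and that row then becomes visible inside session A's transaction.

-- Session A (REPEATABLE READ)
START TRANSACTION;
SELECT * FROM t WHERE k BETWEEN 1 AND 10;             -- 0 rows in the snapshot

-- Session B
INSERT INTO t (k, note) VALUES (5, 'new');
COMMIT;

-- Session A
SELECT * FROM t WHERE k BETWEEN 1 AND 10;             -- still 0 rows (consistent snapshot)
UPDATE t SET note = 'seen' WHERE k BETWEEN 1 AND 10;  -- current read: matches 1 row
SELECT * FROM t WHERE k BETWEEN 1 AND 10;             -- the phantom row now appears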