Speed up SQL Bulk Insert with transactions

Jonny Fox · Published in Factory Mind · Aug 2, 2017 · 2 min read

Sometimes you have to insert sample values into a SQL table to test its performance with a massive number of rows, in order to identify corrective actions: whether to add clustered indexes, views, statistics, and so on.

Are you looking for a script that quickly inserts a large number of rows, using transactions to split the data into chunks? Here it is 😎
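Since the script itself isn’t reproduced in this text, here is a minimal T-SQL sketch of the pattern it implements (the steps below walk through it). The table dbo.SampleTable, its columns, and the values of @items and @chunk_size are placeholder assumptions, not the author’s exact code:

```sql
SET NOCOUNT ON;

-- 1. Number of rows to insert, chunk size and insert counter
DECLARE @items      INT = 1000000;
DECLARE @chunk_size INT = 10000;
DECLARE @counter    INT = 0;

-- 2. Open the first transaction and start the loop
BEGIN TRANSACTION;

WHILE @counter < @items
BEGIN
    -- 3. Every @chunk_size inserts, commit the current chunk, print the
    --    progress and open a new transaction for the next chunk
    IF @counter > 0 AND @counter % @chunk_size = 0
    BEGIN
        COMMIT TRANSACTION;
        PRINT 'Committed chunk ' + CAST(@counter / @chunk_size AS VARCHAR(10))
            + ' - rows inserted so far: ' + CAST(@counter AS VARCHAR(10));
        BEGIN TRANSACTION;
    END

    -- 4. The actual INSERT (table, columns and values are placeholders)
    INSERT INTO dbo.SampleTable (Id, ForeignKeyId, CreatedAt)
    VALUES (@counter,
            ABS(CHECKSUM(NEWID())) % 1000,  -- pseudo-random value, e.g. a fake foreign key
            GETDATE());

    -- 5. Increment the counter
    SET @counter += 1;
END

-- 6. Close the last transaction (covers the final, possibly partial, chunk)
COMMIT TRANSACTION;
```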

The code is self-explanatory:

  1. Declare the variables that store the number of desired rows (@items), the size of each chunk (@chunk_size) and the counter of executed inserts (@counter)
  2. Open the first transaction and initialize the while loop
  3. If @counter has reached a multiple of @chunk_size, or the loop has arrived at its last iteration (e.g. when the number of items is not divisible by the chunk size), close the current transaction, printing its number and the rows inserted so far (and, if more rows remain, open a new transaction)
  4. Execute the actual INSERT statement
  5. Increment the @counter variable
  6. Close the last transaction

With this script, inserting 1 million rows into a table with about 20 columns, I cut the run time roughly five-fold (from 4–5 minutes down to 58 seconds 😃).

As you can see, I’ve also added two “utility functions” that generate random numbers (for example, to simulate foreign keys or other numeric values); note that the same principle can be used to insert random strings.
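The two helpers aren’t shown in the text either; as a rough sketch of the kind of expressions they might wrap (the value range and string length here are arbitrary assumptions), the following T-SQL snippets produce a pseudo-random integer and a pseudo-random string:

```sql
-- Pseudo-random integer between 0 and 999, e.g. to simulate a foreign key
SELECT ABS(CHECKSUM(NEWID())) % 1000 AS RandomNumber;

-- Same principle applied to strings: a pseudo-random 10-character value
SELECT LEFT(REPLACE(CAST(NEWID() AS VARCHAR(36)), '-', ''), 10) AS RandomString;
```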

Have fun and do not forget to recommend the article if you liked it 💚!
