Migrating Bitcoin into PostgreSQL Database

Karan Ahuja
Blockwala
Jul 7, 2018

Prologue: This article focuses on how to efficiently insert/migrate big data into a PostgreSQL database. It does not cover the basics of setting up a Bitcoin node or a PSQL database. The reader is expected to be familiar with decentralized and centralized databases.

To directly view the explorer click here.

Recently I had to migrate Bitcoin's block data into PostgreSQL. It sounds simple enough at first glance, but it quickly became a nightmare. Let's understand why:

Perspective

Bitcoin's genesis block was mined on 3 Jan 2009. Here is how it looks:

GetHash()      = 0x000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f
01000000 - version
0000000000000000000000000000000000000000000000000000000000000000 - prev block
3BA3EDFD7A7B12B27AC72C3E67768F617FC81BC3888A51323A9FB8AA4B1E5E4A - merkle root
29AB5F49 - timestamp
FFFF001D - bits
1DAC2B7C - nonce
01 - number of transactions
01000000 - version
01 - input
0000000000000000000000000000000000000000000000000000000000000000FFFFFFFF - prev output
4D - script length
04FFFF001D0104455468652054696D65732030332F4A616E2F32303039204368616E63656C6C6F72206F6E206272696E6B206F66207365636F6E64206261696C6F757420666F722062616E6B73 - scriptsig
FFFFFFFF - sequence
01 - outputs
00F2052A01000000 - 50 BTC
43 - pk_script length
4104678AFDB0FE5548271967F1A67130B7105CD6A828E03909A67962E0EA1F61DEB649F6BC3F4CEF38C4F35504E51EC112DE5C384DF7BA0B8D578A4C702B6BF11D5FAC - pk_script
00000000 - lock time

It's an innocent-looking block with one transaction, one output and one input (the coinbase). Hail Satoshi!
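For illustration, here is a minimal Python sketch (standard library only) of how one might unpack the 80-byte header shown above. The on-disk fields are little-endian, and the two hashes are reversed to get the usual display order.

import struct

def parse_header(raw: bytes) -> dict:
    # 80-byte header: version (4) | prev block (32) | merkle root (32)
    #                 | timestamp (4) | bits (4) | nonce (4)
    version, = struct.unpack_from("<I", raw, 0)
    prev_block = raw[4:36][::-1].hex()      # reversed for display
    merkle_root = raw[36:68][::-1].hex()    # reversed for display
    timestamp, bits, nonce = struct.unpack_from("<III", raw, 68)
    return {"version": version, "prev_block": prev_block,
            "merkle_root": merkle_root, "timestamp": timestamp,
            "bits": bits, "nonce": nonce}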

The current block (as of writing this article) stands at a height of 530802. So imagine ~530,000 of these blocks. It sounds quite simple to traverse them and migrate into a SQL or NoSQL database. Unfortunately (fortunately, mostly), Bitcoin is not that simple.

Bitcoin is a p2p network, and as more users show interest and come onto the network, creating more wallets, buying more tokens and generating more inputs and outputs, the size of the network and its blocks increases. Example: block 530802 has ~2000 transactions, and those transactions have n inputs and outputs.

Currently, the block data directory of the Bitcoin node stands at a staggering 188 GB.

Now let's not forget that the blk files only hold outputs and references to previous outputs as inputs; they don't actually hold any input values. If we parse and migrate this data into PostgreSQL, it amounts to ~200 GB, and it is ever growing.

The Server

We chose an m2-large EC2 instance running Ubuntu.

Attached an EBS volume (SSD, 500 GB).

Lastly, set up a Bitcoin node.

For any DevOps engineer this takes hardly an hour, excluding node sync time.

The Script

We wrote a script (Python/JS/anything) that parses the Bitcoin data directory (the blockchain). Its goals (sketched in code after this list) are:

  • Traverse blocks
  • Insert block in psql
  • Traverse transactions
  • Insert transaction in psql
  • Traverse outputs
  • Insert outputs in psql
  • When the above is done, create the inputs
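A rough outline of that flow in Python; read_blocks, insert_block, insert_transactions, insert_outputs and build_inputs are placeholders for the real parsing and PSQL insert code, not existing functions.

def migrate(data_dir, conn):
    # Placeholder outline of the migration flow described above.
    for block in read_blocks(data_dir):           # traverse the blk*.dat files
        insert_block(conn, block)                 # write the block row
        insert_transactions(conn, block.txs)      # write its transactions
        for tx in block.txs:
            insert_outputs(conn, tx.outputs)      # write every output
    build_inputs(conn)                            # finally, resolve inputs from
                                                  # the outputs already stored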

For efficiency we made it multi-threaded.

The First Attempt: configuring the script to read blockchain data efficiently

End result of running the above script:

  • OOM errors, even though we had 8 GB of RAM on a dual-core machine with 1 GB of swap
  • Hardly any data written to the database.

Reasons:

  • Firstly, Bitcoin's block data is not indexed: you don't know which block sits at which height. That information is maintained in a LevelDB that only one thread can access at a time. We made the mistake of accessing this directory in parallel, and the other threads went into a wait state.
  • Secondly, the script never closed its database connections. Every spawned thread opens a new connection, which requires more memory, and a connection left open at the end of the thread's execution sits in an "idle" state until the garbage collector finds no remaining strong references and reclaims the memory.
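A minimal sketch of the fix for the second point, assuming psycopg2 as the driver and a hypothetical blocks(hash, height) table: open a connection, do the work, commit, and always close it.

import psycopg2

def insert_block(dsn, block_hash, height):
    conn = psycopg2.connect(dsn)
    try:
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO blocks (hash, height) VALUES (%s, %s)",
                (block_hash, height))
        conn.commit()
    finally:
        # Never leave the connection hanging in "idle" after the thread ends.
        conn.close()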

There is only so much we can do in the script itself to speed things up. Here is what we did:

  • Made the script single-threaded.
  • Pointed the script at the unordered blocks (we don't get block heights this way), though the read is still sequential. In other words, we can sync blocks, transactions and outputs in parallel, but each of them individually still syncs sequentially.
  • Did bulk writes instead of single-row writes (see the sketch after this list). A bulk batch can run to hundreds of thousands of rows depending on your RAM and read source, and PSQL has no practical limit on the size of a bulk write.
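A minimal sketch of such a bulk write, assuming psycopg2 and a hypothetical outputs table; execute_values expands many rows into one multi-row INSERT instead of one round trip per row.

from psycopg2.extras import execute_values

def bulk_insert_outputs(conn, rows, page_size=10000):
    # rows: list of (tx_hash, out_index, value, script) tuples.
    with conn.cursor() as cur:
        execute_values(
            cur,
            "INSERT INTO outputs (tx_hash, out_index, value, script) VALUES %s",
            rows,
            page_size=page_size)
    conn.commit()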

Re-running the script, I found the insert operations faster, but still not fast enough to write gigabytes of data in a small amount of time. Let's check our PostgreSQL setup.

The Final Attempt: configuring Postgres 9.5 to write big data

This is the final showdown and the reason I wanted to write this article. I dug into PostgreSQL's configuration. Here is what I did to make the data migration lightning fast:

Changes to the Table Schema:

The table schemas have had constraints on their columns since creation. Every table had a primary key; if none was provided, an auto-incrementing integer id was assigned as the primary key.

Every table had a foreign key, as you can imagine, and indexes were created on most columns (to serve complex queries and business needs). Primary keys are indexed by default.
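For illustration, a simplified, hypothetical version of such a table (it assumes a transactions table keyed on its hash): a primary key, a foreign key, and an extra index for lookups.

OUTPUTS_DDL = """
CREATE TABLE outputs (
    id        BIGSERIAL PRIMARY KEY,                         -- auto-increment PK
    tx_hash   BYTEA NOT NULL REFERENCES transactions (hash), -- FK to transactions
    out_index INTEGER NOT NULL,
    value     BIGINT NOT NULL,                               -- amount in satoshi
    script    BYTEA                                          -- locking script
);
"""

def create_outputs_table(conn):
    with conn.cursor() as cur:
        cur.execute(OUTPUTS_DDL)
        # Extra index for the lookups the explorer needs.
        cur.execute("CREATE INDEX outputs_tx_hash_idx ON outputs (tx_hash);")
    conn.commit()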

Although these constraints and indexes are a godsend for reads, during writes, and especially bulk writes, they can become hell.

Indexes in PSQL are B-trees by default. When we bulk write thousands of rows, every index on the table has to be updated too; internally, the B-tree rebalances itself after insertions.

Inserting such a huge amount of data into an index leads to heavy memory consumption and long wait times. Imagine rebalancing a multi-million-node B-tree after every batch of thousands of new entries!

So if you are going to write a huge amount of data into PSQL, simply drop all indexes and constraints first. When your data is in place, index the columns according to your queries.

The cost of creating a new index once, at the end, is much lower than the cost of updating existing indexes during every batch write.
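A sketch of that drop-then-rebuild pattern against the hypothetical outputs table above (the constraint name assumes PSQL's default *_fkey naming):

def drop_write_helpers(conn):
    # Before the bulk load: drop the secondary index and the foreign key
    # so the writes don't have to maintain them.
    with conn.cursor() as cur:
        cur.execute("DROP INDEX IF EXISTS outputs_tx_hash_idx;")
        cur.execute("ALTER TABLE outputs "
                    "DROP CONSTRAINT IF EXISTS outputs_tx_hash_fkey;")
    conn.commit()

def rebuild_after_load(conn):
    # After the data is in: build the index once, against the finished table.
    with conn.cursor() as cur:
        cur.execute("CREATE INDEX outputs_tx_hash_idx ON outputs (tx_hash);")
    conn.commit()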

Changes to the postgresql.conf file:

  • commit_delay = 20000. This delays each WAL flush by up to 20,000 microseconds (20 ms) when enough transactions are open, so that several commits can share a single flush. Commits (WAL flushes) are expensive, and batching them helps when your script is writing to the database continuously.
  • commit_siblings = 25. The minimum number of concurrent open transactions required before the commit_delay delay is applied. A larger value makes it more probable that at least one other transaction will become ready to commit during the delay interval. The default is five transactions.
  • maintenance_work_mem = 512MB. The maximum amount of memory used by maintenance operations such as VACUUM, CREATE INDEX and ALTER TABLE ADD FOREIGN KEY. The default is far smaller. Since a session runs only one of these operations at a time, and an installation normally doesn't have many of them running concurrently, it is safe to set this value significantly larger than work_mem. Larger settings improve performance for vacuuming, index builds and restoring database dumps.
  • work_mem = 512MB. The amount of memory used by internal sort operations and hash tables before spilling to temporary disk files. The default is far smaller. A complex query can run several sort or hash operations in parallel, and several sessions can run such operations concurrently, so the total memory used can be many times work_mem; keep that in mind when choosing the value. Sorts are used for ORDER BY, DISTINCT and merge joins; hash tables are used in hash joins, hash-based aggregation and hash-based processing of IN subqueries.
  • On an SSD, set random_page_cost = seq_page_cost = 1.0. This affects reads only: random reads on an SSD are orders of magnitude faster than on an HDD or RAID of spinning disks, and cost roughly the same as sequential reads. Lowering random_page_cost makes the planner prefer indexed reads over sequential scans.
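For reference, these are the corresponding lines in postgresql.conf (the values chosen above; reload or restart PostgreSQL for them to take effect):

commit_delay = 20000
commit_siblings = 25
maintenance_work_mem = 512MB
work_mem = 512MB
seq_page_cost = 1.0
random_page_cost = 1.0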

Changing commit_delay and commit_siblings made a huge impact on the write speed.

Conclusion

After the above changes, the writes were fast and the data was ready to serve to users. Our Bitcoin block explorer is live here.

Ethereum (1TB) is next.

If you found this article helpful please like and share.
