Towards Massive On-Chain Scaling: Presenting Our Block Propagation Results With Xthin
Part 1 of 5: Methodology
By Andrew Clifford, Peter R. Rizun, Andrea Suisani (@sickpig), Andrew Stone and Peter Tschipper. With special thanks to Jihan Wu from AntPool for the block source and to @cypherdoc and our other generous donors for the funds to pay for our nodes in Mainland China.
The fracturing of Bitcoin development into several competing implementations bore fruit on 16 March 2016 with the release of Bitcoin Unlimited 0.12. Contained within this release was a new technology called Xtreme Thinblocks, or Xthin for short. Xthin fixed a longstanding inefficiency inherited from Bitcoin Core, whereby each node often received the same transaction twice: once as a loose transaction relayed across the network, and again inside the block that confirmed it. Nodes supporting Xthin can propagate blocks using fewer bytes and in less time than nodes that rely on standard block propagation.
The motivation behind Xthin is clear: as argued by Cornell researchers, block propagation between nodes is the bottleneck for on-chain scaling. Of particular concern is the propagation of blocks over the Great Firewall of China (GFC), which Jonathan Toomim reported is an order of magnitude slower than between nodes connected across the normal P2P network. Xthin is designed to address these issues.
For the past two months, we have been collecting empirical data regarding block propagation with and without Xthin — both across the normal P2P network and over the GFC. We have six Bitcoin Unlimited (BU) nodes running, including one located in Shenzhen and another in Shanghai, and we have collected data on the transmission and reception for over nine thousand blocks.
This post is part 1 of a 5 part series. It will describe our experiment’s methodology. Part 2 — coming later this week — will show how Xthin blocks are significantly faster than standard blocks, while Part 3 will illustrate how Xthin blocks are less affected by the GFC. Part 4 will summarize the bandwidth savings that result from using Xthin, and Part 5 will conclude the series.
The two variables of interest that we measured were the number of bytes and the length of time required to communicate a block.
In the case of an Xthin block, the number of bytes was measured by summing the Bloom filter size and the thin block size. In the case of a standard block, the uncompressed block size was measured.
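The byte accounting above can be sketched as follows. This is a minimal illustration, not code from the actual logging script; the function names and argument names are hypothetical.

```python
def xthin_bytes(bloom_filter_size: int, thin_block_size: int) -> int:
    """Bytes on the wire for an Xthin block: the receiver's Bloom filter
    plus the thin block itself (hypothetical helper for illustration)."""
    return bloom_filter_size + thin_block_size

def standard_bytes(uncompressed_block_size: int) -> int:
    """Bytes on the wire for a standard block: the full serialized block."""
    return uncompressed_block_size
```

The key point is that the Xthin measurement charges the Bloom filter against the technique, so the reported savings are net of Xthin's own overhead.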
The length of time required to communicate a block was measured by setting a timer immediately after the receiving node received notification that a new block was available (i.e., after it received the “inv” message) and stopping the timer when that block had been fully received and reconstructed. This applied for both standard and Xthin blocks. Starting the timer immediately after the “inv” message ensured that the time it took to construct the Bloom filter was included in the measurement.
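The timing window can be sketched like this. It is a simplified model, not the actual node instrumentation; `receive_and_reconstruct` is a hypothetical callable standing in for everything between the "inv" message and full block reconstruction.

```python
import time

def time_block_reception(receive_and_reconstruct) -> float:
    """Measure block propagation time as described in the methodology:
    the clock starts immediately after the "inv" notification, so the
    cost of building the Bloom filter (for Xthin) is included, and it
    stops only once the block is fully received and reconstructed."""
    start = time.monotonic()   # timer starts right after the "inv" message
    receive_and_reconstruct()  # build filter, request, receive, reconstruct
    return time.monotonic() - start
```

Using a monotonic clock here avoids spurious measurements if the NTP daemon adjusts the wall clock mid-transfer.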
The purpose of the experiment was to determine how these two variables (number of bytes and length of time) were affected by two factors: the propagation technique (i.e., standard or Xthin), and node connectivity during block transmission (i.e., whether or not the block passed through the GFC). To do this, we performed a 2x2 full factorial experiment, collecting numerous data points to fill each of the four bins shown below.
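The 2x2 design amounts to mapping each observed block into one of four bins. A sketch of that mapping (the bin numbering follows the text, which treats Bins 1 and 2 as the non-GFC bins and Bins 1 and 3 as the standard-block bins; the function itself is hypothetical):

```python
def bin_for(used_xthin: bool, crossed_gfc: bool) -> int:
    """Assign an observation to a bin of the 2x2 factorial design:
    Bin 1: standard, no GFC    Bin 2: Xthin, no GFC
    Bin 3: standard, GFC       Bin 4: Xthin, GFC"""
    if not crossed_gfc:
        return 2 if used_xthin else 1
    return 4 if used_xthin else 3
```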
Six BU nodes with clocks synchronized via the Network Time Protocol (NTP) were employed. These nodes were fully interconnected with one another and configured to freely accept incoming connections from other nodes (including Core nodes). Two of the nodes were behind the GFC, and both were connected to AntPool (a Core node) to ensure a reliable block source from within Mainland China.
A script running on each node logged the number of bytes and the length of time it took to receive each block, along with enough extra information to place each block into one of the four bins described above. To reduce the effect of block size, only blocks with an uncompressed size between 900 kB and 1 MB were considered.
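The size filter described above can be expressed as a one-line predicate. This is an illustrative sketch, not the actual script; whether the 900 kB and 1 MB boundaries were inclusive is an assumption.

```python
def in_size_window(uncompressed_size_bytes: int) -> bool:
    """Keep only blocks whose uncompressed size falls between 900 kB and
    1 MB, to reduce the confounding effect of block size on the measured
    bytes and times (boundary inclusivity assumed)."""
    return 900_000 <= uncompressed_size_bytes <= 1_000_000
```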
The nodes’ Xthin functionality was disabled at certain times during the experiment, to allow Bins 1 and 3 to fill more quickly (without this, nodes typically received only 1 standard block for every 40 Xthin blocks). Ideally, it would be possible to enable and disable the “Great Firewall of China” in a similar fashion (as well as place it between any pair of nodes at the push of a button). Nonetheless, because two of the test nodes were in Mainland China, thousands of blocks crossing over the GFC were captured.
Part 2 of 5: Xthin blocks are faster than standard blocks
In our next post, we compare the propagation times for Xthin blocks to standard blocks, over the normal P2P network (Bins 1 and 2).
Download Bitcoin Unlimited
You too can help improve network block propagation by downloading and running Bitcoin Unlimited today [link].
This document and its images are placed in the public domain.