Boolberry Monthly Progress Report: August

Dev update

Boolberry
Sep 21, 2018

This month was all about one thing: improving the initial user experience by optimizing synchronization speed.

Background

There are trade-offs everywhere in crypto, and at every level! New “features” almost always carry some form of trade-off or computational cost. For example, desktop wallets are more secure than exchanges or web/mobile wallet applications, but they require a full download and synchronization of the blockchain. Another example is Ethereum: it offers the benefit of Turing-complete virtual machines and a unique platform for building distributed applications, but as we’ve seen, there are also many vulnerabilities, not to mention the impact of slower throughput and fluctuating gas prices.

Similarly, older projects such as Bitcoin have trade-offs. Their benefits include a flatter emission curve, which adds value by increasing scarcity, and generally more distributed ownership. Both are critical to a valuable cryptocurrency. But along with a flatter emission curve and more distributed ownership come longer blockchains and sometimes painfully long synchronization times, which is discouraging for new users. We were determined to find a solution…

What Makes Syncing Slow?

Boolberry’s genesis block was created on May 10th, 2014. For the first 4 years, the wallet stored all of the blockchain data in RAM. This made syncing very fast, but as the blockchain grew (we’re now at ~1.1 million blocks and ~2 million transactions), memory consumption increased and a change to an on-disk embedded database became a necessity. The move to LMDB (Lightning Memory-Mapped Database) was completed in July of this year, a huge accomplishment: users no longer need 8GB of RAM to use the Boolberry wallet.

Initially, these benefits came at the cost of slower synchronization speeds, but we were finally able to optimize this process and provide our users with the best of both worlds: reduced system resource usage with faster sync times than ever before!

In prior releases, to avoid losing data in the event of a crash (software or hardware), a commit was made to the database each time a transaction or block was added. Writing to the hard drive is much slower than writing to random access memory, so while this was not an issue for new blocks arriving at an already synchronized peer, it made synchronizing hundreds of thousands of blocks incredibly slow.

What Can Be Done?

One common practice to improve database performance is to delay committing the data to disk until the syncing process is complete. This runs the risk of a fault or computer crash corrupting the database, but a command line option (--db-sync-mode=fast) was added for users who have stable systems and view this trade-off favorably.

A better solution is to commit the data to the database in groups of blocks instead of individually. To stay consistent with the number of blocks normally delivered in each transfer iteration, we implemented a system that locks the “currency core” until 200 blocks have been added, then commits the data to the embedded database in a batch, as sketched below.
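Here is a minimal sketch of that batching pattern using the LMDB C API. Only the batch size of 200 comes from the description above; the class, map size, and error handling are illustrative assumptions, not the actual Boolberry code:

```cpp
// Hypothetical sketch of batched LMDB commits (not the actual Boolberry code).
// One write transaction is held open across many block inserts and committed
// every BATCH_SIZE blocks, amortizing the cost of the disk flush.
#include <lmdb.h>
#include <cstdint>
#include <vector>
#include <stdexcept>

constexpr size_t BATCH_SIZE = 200; // matches the per-iteration block transfer size

class batched_block_store
{
  MDB_env* m_env = nullptr;
  MDB_dbi  m_dbi = 0;
  MDB_txn* m_txn = nullptr;
  size_t   m_pending = 0;

public:
  explicit batched_block_store(const char* path)
  {
    mdb_env_create(&m_env);
    mdb_env_set_mapsize(m_env, 1ULL << 34); // 16 GiB map, adjust as needed
    if (mdb_env_open(m_env, path, 0, 0664))
      throw std::runtime_error("failed to open LMDB environment");
  }

  void put_block(uint64_t height, const std::vector<uint8_t>& blob)
  {
    if (!m_txn) // lazily open a write transaction for the current batch
    {
      mdb_txn_begin(m_env, nullptr, 0, &m_txn);
      mdb_dbi_open(m_txn, nullptr, 0, &m_dbi);
    }
    MDB_val key{sizeof(height), &height};
    MDB_val val{blob.size(), const_cast<uint8_t*>(blob.data())};
    if (mdb_put(m_txn, m_dbi, &key, &val, 0))
      throw std::runtime_error("mdb_put failed");

    if (++m_pending >= BATCH_SIZE) // one disk flush per 200 blocks, not per block
      flush();
  }

  void flush() // also called once at the end of syncing
  {
    if (!m_txn) return;
    mdb_txn_commit(m_txn);
    m_txn = nullptr;
    m_pending = 0;
  }

  ~batched_block_store()
  {
    flush();
    mdb_env_close(m_env);
  }
};
```

The key design point is that the expensive operation, the commit that forces data to disk, now happens once per 200 blocks rather than once per block.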

A new problem then surfaced:

During initial synchronization, peers were requesting blocks from other still-synchronizing peers that happened to be further ahead (and therefore appeared to be good sources of blocks). We illustrate this problem here:

Blue lines show transfers to nodes that are still synchronizing (high data exchange). Green lines show transfers between sync’d nodes (low data exchange). The red circles show connections between peers that are in a sub-optimal state for sharing blocks.

Peers 1, 2 and 3 are completely synchronized and are ready to share blocks. Peers 4 and 5 have been syncing for some time and have already downloaded most of the blockchain. Peers 6 and 7 connected recently and have just started to download blocks.

Peers 4 and 5 are downloading blocks from 1, 2 and 3 as they should. However, because peer 5 is ahead of peer 7, peer 7 is trying to request blocks from 5 even though 5 is not in an optimal state to share blocks. The same situation exists between peers 6 and 4; 6 is trying to download blocks from peer 4, which is not yet synchronized with other nodes.

The “currency core” software is locked during synchronization, so the requests from 6 and 7 to 4 and 5 will time out, impairing overall performance. To avoid this, a core that is still syncing must not receive incoming block requests at all, which at first seemed to require a protocol change.

Protocol Improvement

Instead of a protocol change requiring a hard fork, we came up with a clever solution: have nodes mask their progress while syncing, creating the appearance that they only have the genesis block. This prevents other nodes from requesting data too early while maintaining backwards compatibility with older nodes.
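A minimal sketch of how this masking could look inside a protocol handler; the type and method names here are illustrative assumptions, not Boolberry’s actual handler:

```cpp
// Hypothetical sketch of the "mask progress while syncing" idea.
#include <atomic>
#include <cstdint>

struct core_sync_data
{
  uint64_t current_height; // height advertised to peers in handshakes/sync notifications
  // ... top block id, etc.
};

class protocol_handler
{
  std::atomic<bool> m_synchronized{false};
  uint64_t m_real_height = 1;

public:
  core_sync_data get_payload_sync_data() const
  {
    core_sync_data sd{};
    // While this node is still catching up, pretend we only have the genesis
    // block. Older peers then see no reason to request blocks from us, so the
    // wire protocol itself never has to change (no hard fork).
    sd.current_height = m_synchronized.load() ? m_real_height : 1;
    return sd;
  }

  void on_block_added(uint64_t new_height, uint64_t target_height)
  {
    m_real_height = new_height;
    if (new_height >= target_height)
      m_synchronized.store(true); // start advertising our true height
  }
};
```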

Additionally, we implemented some “fuzzy logic” inside the currency protocol handler. Reaching consensus with a sufficient degree of truth, rather than strict boolean logic, provides the required flexibility between peer connections, consistent with the new core lock/unlock behavior.

The new process is illustrated here:

Optimized Peer Synchronization: In this diagram the red circles show the masking of progress, which prevents synchronizing nodes from requesting blocks from each other. Blue lines represent higher data exchange (sync’d to synchronizing), and green lines represent lower data exchange (only new txs, blocks, etc.).

After profiling all of these improvements and comparing against the original database implementation, we achieved a 3X speed improvement in handling batches of blocks:

sync_optimization:

NOTIFY_RESPONSE_GET_OBJECTS: 200 blocks were prevalidated in 2 ms (0.01 ms per block av) and handled in 10644 ms (53.22 ms per block av) syncing conns av: 5.00

master:

NOTIFY_RESPONSE_GET_OBJECTS: 200 blocks were prevalidated in 3 ms (0.01 ms per block av) and handled in 28843 ms (144.21 ms per block av) syncing conns av: 1.31

These synchronization improvements are on the sync_optimization branch and can be tested on both the testnet and production software.

Checkpoints Re-Imagined

While the improvements outlined above yielded a 3X reduction in synchronization time, the process was still not to our satisfaction. To further improve speed, we looked to checkpoints.

The basic concept of checkpoints is to speed up synchronization by disabling certain verifications below the checkpoints, namely signature checking. Reducing the number of signatures being checked saves time, and checkpoints also prevent forks or attacks on early blocks in the chain during synchronization. Checkpoints can be trusted because each one can be manually verified at any time against the full blockchain.
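A minimal sketch of the checkpoint concept (illustrative, not Boolberry’s actual implementation): the chain is pinned to hard-coded block hashes, and expensive signature checks are skipped for heights below the highest checkpoint.

```cpp
// Hypothetical checkpoints sketch: heights below the top checkpoint are
// pinned by hard-coded hashes, so signature verification can be skipped there.
#include <cstdint>
#include <map>
#include <string>

class checkpoints
{
  std::map<uint64_t, std::string> m_points; // height -> expected block hash (hex)

public:
  void add(uint64_t height, std::string hash) { m_points[height] = std::move(hash); }

  // Below the top checkpoint the chain is already pinned, so per-transaction
  // signature checks can be safely skipped during synchronization.
  bool in_checkpoint_zone(uint64_t height) const
  {
    return !m_points.empty() && height <= m_points.rbegin()->first;
  }

  // A block at a checkpointed height must match the hard-coded hash exactly.
  bool check_block(uint64_t height, const std::string& hash) const
  {
    auto it = m_points.find(height);
    return it == m_points.end() || it->second == hash;
  }
};
```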

We decided to extend this concept by uploading the full database to a CDN (content delivery network), along with a hash that can be verified for data integrity. Peers can then download the entire database up to that checkpoint (a certain point in time) and build their local copy from it, rather than downloading the entire blockchain block by block. Finally, peers validate the hash of the downloaded database file by checking it against the hash hard-coded into the source code.
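A sketch of that integrity check, assuming SHA-256 via OpenSSL. The hash function, chunked file reading, and the placeholder digest constant are all assumptions for illustration; the project may use a different hash:

```cpp
// Sketch: verify a pre-downloaded database file against a digest shipped in
// the source code, hashing the file in chunks so large files stay cheap.
#include <openssl/evp.h>
#include <cstdio>
#include <string>
#include <vector>

// Placeholder: in practice this constant is hard-coded in the released source.
static const std::string EXPECTED_DB_HASH_HEX =
  "0000000000000000000000000000000000000000000000000000000000000000";

std::string sha256_file_hex(const char* path)
{
  std::FILE* f = std::fopen(path, "rb");
  if (!f) return {};
  EVP_MD_CTX* ctx = EVP_MD_CTX_new();
  EVP_DigestInit_ex(ctx, EVP_sha256(), nullptr);

  std::vector<unsigned char> buf(1 << 20); // hash the file in 1 MiB chunks
  size_t n = 0;
  while ((n = std::fread(buf.data(), 1, buf.size(), f)) > 0)
    EVP_DigestUpdate(ctx, buf.data(), n);
  std::fclose(f);

  unsigned char digest[EVP_MAX_MD_SIZE];
  unsigned int len = 0;
  EVP_DigestFinal_ex(ctx, digest, &len);
  EVP_MD_CTX_free(ctx);

  static const char* hex = "0123456789abcdef";
  std::string out;
  for (unsigned int i = 0; i < len; ++i)
  {
    out += hex[digest[i] >> 4];
    out += hex[digest[i] & 0xf];
  }
  return out;
}

bool verify_predownloaded_db(const char* path)
{
  return sha256_file_hex(path) == EXPECTED_DB_HASH_HEX;
}
```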

To eliminate the vulnerabilities inherent in a pre-compiled database, we have added command line tools that provide a transparent and unambiguous validation process. These tools can run after the initial synchronization, and in the background, so as to provide the fastest user experience without any compromise.

Most notably, --validate_predownload will have the daemon validate every block from the downloaded database just as if it had been downloaded from peers using the traditional syncing process. Two instances of the core are created, one with an empty blockchain and one with the downloaded blockchain. Each block is then pulled from one to the other and validated by the protocol.
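A sketch of this two-core replay; the core interface below is an illustrative assumption, not Boolberry’s actual API:

```cpp
// Hypothetical sketch of the --validate_predownload flow: blocks are pulled
// from a core opened on the downloaded database and replayed, one by one,
// through a second core that starts empty, re-running full consensus checks.
#include <cstdint>
#include <vector>
#include <stdexcept>

struct block_blob { std::vector<uint8_t> data; };

struct core_interface
{
  virtual uint64_t   height() const = 0;
  virtual block_blob get_block(uint64_t h) const = 0;              // read side
  virtual bool       handle_incoming_block(const block_blob&) = 0; // full validation
  virtual ~core_interface() = default;
};

void validate_predownload(core_interface& downloaded, core_interface& empty)
{
  const uint64_t top = downloaded.height();
  for (uint64_t h = empty.height(); h < top; ++h)
  {
    // Each block is validated exactly as if it had arrived from a peer
    // during normal synchronization.
    if (!empty.handle_incoming_block(downloaded.get_block(h)))
      throw std::runtime_error("pre-downloaded database failed validation");
  }
}
```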

The pre-downloaded database changes are live on the “predownload” branch and can be tested on both testnet and production software.

https://github.com/cryptozoidberg/boolberry/tree/predownload

Results!

According to our tests, full synchronization from the genesis block takes about one hour using the new pre-compiled database approach, as opposed to ~24 hours using the original manual method! A 3X speedup alone would have been great, but we are very proud to have achieved a 24X improvement. We fully expect this will forever change expectations for acceptable CryptoNote wallet synchronization times.

The pre-downloaded database is currently available only for the daemon, but it will be implemented in the GUI wallet later this month.

Roadmap update (Phase III) - Project Z

Visit our website at Boolberry.com and watch Phase III on our roadmap for updates regarding our next generation of improvements, which we’ve codenamed “Project Z.” More details will be provided within the next several weeks.

TL;DR

  • Manual daemon synchronization has been sped up 3X with changes that are live on the sync_optimization branch.
  • Extremely fast initial synchronization (24X faster) is now possible by downloading a pre-compiled snapshot of the database; these changes are live on the “predownload” branch.
  • Phase III — Project Z, details coming soon.

Happy Syncing Boolians!

Boolberry

Boolberry is a CryptoNote-based cryptocurrency whose main goal is anonymity of the sender and receiver using unlinkable transactions. https://boolberry.com