RISE 1.2.0 & Protocol Buffers

Improving peer-to-peer performance and efficiency

Matteo Canever
rise-vision
6 min read · Sep 21, 2018


One of the core values at RISE is using well-known web technologies, making development of our DPoS Blockchain accessible to as many developers as possible, including those with a background in web development (like me): the core is written in TypeScript, and both the APIs and node-to-node communication are based on the ubiquitous HTTP protocol, with data serialized as JSON.

Fast and efficient Peer-to-peer communication is vital to ensure that all nodes are constantly in sync. All transactions, blocks and information about peers are transmitted between nodes via a dedicated set of API endpoints, referred to as Transport API.

Our latest efforts in developing rise-node have focused on improving performance, both in terms of potential Transactions Per Second and of efficiency in resource usage (CPU, network layer). While another team member took care of the database and many other things, I worked on improving the performance and efficiency of the Transport API.

Which elements of the Transport API needed improvement?

  • The JSON format is not efficient for transporting the kind of data used by RISE (blocks, transactions, peers…); it carries a lot of overhead from the brackets, commas and quotes used to serialize the information, and binary data (like cryptographic signatures and public keys) is encoded as hexadecimal strings, effectively doubling the number of bytes transferred (see the sketch after this list).
  • As a result of the previous point, the node synchronisation protocol had a hard-coded limit of 34 blocks per request: for acceptable stability over HTTP, the payload size needs to be limited to 2MB, and 34 is the approximate number of JSON-encoded blocks that fit in 2MB in the worst-case scenario of blocks containing 25 vote transactions each. (The calculation was also wrong: in RISE the largest transactions are multi-signature ones, not votes.) This fixed limit is extremely inefficient, because blocks may contain fewer transactions (down to zero), and various transaction types exist, each with a different size. This number needed, instead, to be calculated dynamically, based on the actual blocks and transactions to transport in each request.
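
As a minimal illustration of the first point (a standalone sketch, not rise-node code), consider how a single 64-byte signature inflates once it has to travel inside a JSON document:

```typescript
import { randomBytes } from 'crypto';

// A 64-byte cryptographic signature, as raw binary.
const signature = randomBytes(64);

// JSON cannot carry raw bytes, so the field becomes a 128-character hex
// string, plus quotes, braces and the key name on top.
const asJson = JSON.stringify({ signature: signature.toString('hex') });

console.log(signature.length); // 64 bytes as binary
console.log(asJson.length);    // 144 bytes: more than double
```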

How did we fix these issues?

It became evident that using a different serialization format was necessary to reduce the network overhead. Additionally, a dynamic algorithm to calculate the number of blocks to fit into each Transport request needed to be put in place.

Say hello to Protocol Buffers!

Protocol Buffers are a language-neutral, platform-neutral, extensible mechanism for serializing structured data.

A Google-backed project, ProtoBuf is a natural choice because of the robustness and flexibility it offers, while maintaining extremely fast encoding and decoding with almost zero network overhead. Stable implementations are available for most programming languages, including NodeJS/TypeScript.

The Performance section of the excellent ProtoBuf.js library shows that protobuf.js is 1.7x faster at encoding and 4.6x faster at decoding than JSON (the currently used transport format), for a combined factor of 2.2x.
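
To give an idea of what this looks like in practice, here is a minimal sketch using protobuf.js (the message definition below is illustrative, not the actual RISE schema):

```typescript
import * as protobuf from 'protobufjs';

// An illustrative block-like message; field names and sizes are assumptions.
const root = protobuf.parse(`
  syntax = "proto3";
  message Block {
    uint32 version = 1;
    uint32 height = 2;
    bytes generatorPublicKey = 3; // raw binary, no hex inflation
    bytes blockSignature = 4;
  }
`).root;

const Block = root.lookupType('Block');
const payload = {
  version: 1,
  height: 1234567,
  generatorPublicKey: Buffer.alloc(32, 1),
  blockSignature: Buffer.alloc(64, 2),
};

const buffer = Block.encode(Block.create(payload)).finish(); // compact binary
const decoded = Block.decode(buffer);                        // back to an object
console.log(buffer.length); // a fraction of the equivalent JSON size
```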

The major advantage in leveraging this new format, however, comes from the binary encoding of the data, which guarantees huge savings in data usage compared to JSON, usually more than 80%. Let's have a look at a couple of charts based on the most common RISE use case, serializing a block containing some transactions:

JSON vs ProtoBuf data usage (bytes per block)
Data usage reduction: ProtoBuf vs JSON (%)

The challenges

Implementing such a refactoring may seem like a trivial task, but let's not forget we are talking about a Blockchain node!

  • All nodes in the network must be able to communicate with each other without compatibility issues
  • Node owners do not all upgrade their nodes at the same time
  • Outdated node versions should be accepted in the network for a relatively long time, to allow upgrades to be performed

For the reasons above, extreme care needs to be taken, both in development and release strategy. For this specific refactoring, the following approach was chosen:

  • Support for the JSON Transport API will be kept for a not-yet-defined number of releases
  • Nodes will determine which serialization format to use for the Transport API based on each peer's version: JSON for versions below 1.2.0, ProtoBuf for the rest (see the sketch below)
  • At some point, the number of nodes with version ≥ 1.2.0 will be enough to guarantee network stability, and JSON support will be removed completely in a subsequent release.
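
A minimal sketch of that version rule (a hypothetical helper, not the actual rise-node code), using the semver package:

```typescript
import * as semver from 'semver';

// Peers at 1.2.0 or later speak ProtoBuf; older peers get JSON.
function useProtoBuf(peerVersion: string): boolean {
  return semver.gte(peerVersion, '1.2.0');
}

console.log(useProtoBuf('1.1.4')); // false -> fall back to JSON
console.log(useProtoBuf('1.2.0')); // true  -> use ProtoBuf
```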

Implementing the Protocol Buffers endpoints was quite straightforward, but maintaining compatibility with JSON was complex, and required abstracting every single request made by the Transport API into dedicated classes.
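
The shape of that abstraction might look something like this (a hedged sketch with hypothetical names, not the actual rise-node classes):

```typescript
type WireFormat = 'json' | 'protobuf';

// Each Transport request gets a dedicated class that knows how to
// serialize its own payload in both formats, so the rest of the node
// never deals with the wire format directly.
abstract class TransportRequest<T> {
  protected abstract toProtoBuf(payload: T): Buffer;

  public serialize(payload: T, format: WireFormat): Buffer {
    return format === 'protobuf'
      ? this.toProtoBuf(payload)
      : Buffer.from(JSON.stringify(payload), 'utf8');
  }
}
```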

The other main challenge was the dynamic allocation of the elements returned by the getBlocks endpoint: the main complexity was predicting how many blocks and transactions to request from the database, without knowing anything about those elements in advance, while avoiding slow queries at the same time. In this case, a little bit of maths came to the rescue:

Equations to calculate the number of transactions to query from the database
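
The equations themselves are in the image above; the underlying idea can be sketched roughly as follows (the constants here are assumptions for illustration, not the real RISE values):

```typescript
const PAYLOAD_LIMIT = 2 * 1024 * 1024; // assumed payload budget, in bytes
const BLOCK_HEADER_SIZE = 248;         // assumed serialized block header size
const MAX_TX_SIZE = 250;               // assumed worst-case transaction size

// Estimate how many blocks fit in the budget, given the average number of
// transactions per block (and, optionally, their average size).
function blocksToFetch(avgTxPerBlock: number, avgTxSize = MAX_TX_SIZE): number {
  const avgBlockSize = BLOCK_HEADER_SIZE + avgTxPerBlock * avgTxSize;
  // Always return at least one block, even if it exceeds the budget.
  return Math.max(1, Math.floor(PAYLOAD_LIMIT / avgBlockSize));
}

console.log(blocksToFetch(0));  // empty blocks: thousands fit per response
console.log(blocksToFetch(25)); // full blocks: far fewer fit
```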

Results are very satisfying, with requests transferring between 700 and 8000 blocks in a single 2MB response, queried in a few milliseconds.

Thanks to the efficient encoding method, we decided to reduce the payload size to 100 KB for the ProtoBuf endpoints.

Development and testing

This refactoring started with the creation of a dedicated issue on GitHub and was completed after 75 commits and 131 changed files on the feature/protobuf branch.

Apart from the new unit and integration tests that I added for the elements introduced by this refactoring, we spent a significant amount of time testing the node in a real-life environment. Andrea helped me build tools to run what we call a Devnet: a Docker-based script that allows running multiple nodes on the same computer and debugging their behavior in an environment very similar to the Testnet and Mainnet.

Using the Devnet allowed me to find several bugs and issues that were not caught by other tests, including a specific concurrency issue that was not related to the ProtoBuf refactoring.

Five Devnet nodes processing 30 vote transactions. Block time changed to 5 seconds for testing purposes.

Conclusions

This is my first large refactoring on the RISE Node, and I am very satisfied with the results. I am thrilled to release this version on testnet, and excited about the opportunities brought by a high-performance Blockchain Core:

  • More transactions can be stored in a single block → More TPS
  • Block time can be reduced, because sync is faster → More TPS, faster confirmations
  • Large transactions are transferred efficiently → New types of transaction may be introduced
  • Lots of flexibility for developers who will build their apps on top of the RISE Core

If you liked the content of this post, please consider starring the RISE repositories on GitHub to follow further developments.

Please join our Slack for further development discussion. Follow us on Twitter and join our Telegram.
