Stress Testing The RaiBlocks Network: Part II

Brian Pugh
Jan 26, 2018 · 2 min read


In my previous article, Stress Testing the RaiBlocks Network, I reported that I was able to broadcast precomputed transactions at a sustained 33 transactions per second (TPS), peaking at 120 TPS, on a $5/mo Digital Ocean node. This time around I improved the following:

  1. Tweaked my stress-test code to create more interesting transaction patterns by precomputing both send and receive transactions
  2. Broadcasted from a more powerful computer
  3. Used better TPS measurement tools
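The broadcasting side of a test like this can be sketched with the node's `process` RPC action, which accepts an already-signed block, so no proof-of-work or signing happens at broadcast time. This is not the author's exact script; it is a minimal sketch, and the local RPC endpoint (`127.0.0.1:7076`, the node's default) is an assumption about how the node is configured.

```python
import json
import urllib.request

# Assumed local node RPC endpoint (the node's default RPC port).
RPC_URL = "http://127.0.0.1:7076"

def make_process_payload(block):
    """Wrap a precomputed, signed block in a 'process' RPC request.

    The block itself is sent as a JSON string inside the request body,
    which is how the node's RPC expects it.
    """
    return {"action": "process", "block": json.dumps(block)}

def broadcast(blocks):
    """POST each precomputed block to the local node, one per request."""
    for block in blocks:
        payload = json.dumps(make_process_payload(block)).encode()
        req = urllib.request.Request(
            RPC_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            resp.read()  # on success the response contains the block hash
```

Because all proof-of-work is done ahead of time, the broadcast loop is limited only by RPC round-trip time, which is what makes sustained rates like those below possible.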

This article will be brief, primarily just reporting results. For a more in-depth discussion, please read part 1. In these tests, I report the TPS presented on RaiBlocks.Club as well as the peak TPS observed on a $40/mo (8GB, 4-Core) Digital Ocean Droplet; this Droplet was not the one broadcasting the transactions. These tests were performed on the RaiBlocks Main-Net in a real-world environment.
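Peak TPS on the observing Droplet can be measured by polling the node's `block_count` RPC action once per second and taking the largest one-second delta. The sketch below is an assumption about how such a monitor could look, not the author's actual measurement tool; the RPC endpoint is again the node's default.

```python
import json
import time
import urllib.request

RPC_URL = "http://127.0.0.1:7076"  # assumed local node RPC endpoint

def block_count(rpc_url=RPC_URL):
    """Ask the node for its total block count via the 'block_count' action."""
    payload = json.dumps({"action": "block_count"}).encode()
    req = urllib.request.Request(
        rpc_url,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return int(json.loads(resp.read())["count"])

def peak_tps(samples):
    """Peak TPS from a list of (timestamp, block_count) samples.

    Each consecutive pair of samples yields a rate; the maximum is the
    '1S average' peak reported in the results below.
    """
    best = 0.0
    for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
        if t1 > t0:
            best = max(best, (c1 - c0) / (t1 - t0))
    return best

def monitor(duration_s=60):
    """Sample the block count once per second, then report the peak rate."""
    samples = []
    for _ in range(duration_s):
        samples.append((time.time(), block_count()))
        time.sleep(1)
    return peak_tps(samples)
```

Note that a 1-second sampling window will report higher peaks than a 5-second window, which is why the remote peak and the RaiBlocks.Club peak differ in the results.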

20 Account, 500 Transaction Test

[Figure: real-time capture of the 500-transaction test]

In this initial test, a total of 500 transactions were sent between 20 accounts. This was done on a $40/mo (8GB, 4-Core) Digital Ocean Droplet.

The test results are as follows:

Number of Transactions: 500
Broadcast Length: 5.76 Seconds
Average Broadcast TPS: 86.8
Remote Peak TPS (1S Average): 263 TPS
RaiBlocks.Club Peak TPS (5S Average): 187.2 TPS

100 Account, 5000 Transaction Test

[Figure: 4x speed-up capture of the 5000-transaction test (effectively ~30% speed-up because of lag)]

In this primary test, a total of 5000 transactions were sent between 100 accounts. This was done on a $160/mo (32GB, 8-Core) Digital Ocean Droplet.

The test results are as follows:

Number of Transactions: 5000
Broadcast Length: 47.28 Seconds
Average Broadcast TPS: 105.75 TPS
Remote Peak TPS (1S Average): 306 TPS
RaiBlocks.Club Peak TPS (5S Average): 172.8 TPS
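The average figures above follow directly from the counts and durations: average broadcast TPS is simply transactions divided by wall-clock seconds, as a quick sanity check shows.

```python
def average_tps(num_transactions, broadcast_seconds):
    """Average broadcast TPS: transactions divided by wall-clock seconds."""
    return num_transactions / broadcast_seconds

# Figures reported above:
round(average_tps(500, 5.76), 1)    # first test:  86.8 TPS
round(average_tps(5000, 47.28), 2)  # second test: 105.75 TPS
```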


In these experiments, the network was able to sustain 105.75 TPS, and some nodes experienced a peak of 306 TPS. During the testing period, transactions processed normally on the network; the network was not saturated. Now that I have removed the processing bottleneck for broadcasting, the next step would be to submit transactions simultaneously from multiple nodes. A single node can broadcast faster than what I present in this article, but that would require significantly more complex stress-test code.

Enjoy content like this? Support me at:
