Fractal Performance Test
The Fractal Platform is a commercial-scale DApps platform. Fractal is powered by iChing consensus, a secure and eco-friendly proof-of-stake protocol. Our iChing consensus does not require huge amounts of non-recyclable computing power. At the same time, our novel scalability solutions can handle 1,000x more transactions than legacy blockchains.
In this article, we present preliminary benchmark results from a deployment of 500 full nodes, focusing on throughput and latency. For simplicity, we disabled transaction validation in the benchmark. We varied the parameters across four scenarios and report three key metrics: transactions per second (TPS), blocks per second (BPS), and memory consumption. In summary, our protocol sustains 5,600 TPS and produces 0.94 blocks per second on average. As CPU power is currently the bottleneck on our local servers, the protocol can deliver higher TPS with more powerful CPUs.
We deploy 500 nodes on 5 workstations, each with 2 Intel(R) Xeon(R) Silver 4110 CPUs @ 2.10GHz (32 cores), 128 GB RAM, 400 GB of SSD storage, and a Gigabit NIC. A dedicated node generates and gossips transactions to all other nodes at a fixed rate of 5,000 TPS.
Difficulty is a value that indicates how hard it is to find a hash lower than the predefined target. The difficulty parameter therefore controls how often blocks are generated.
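To make the difficulty-to-target relationship concrete, here is a minimal, hypothetical sketch (not the Fractal implementation): the hash is interpreted as an integer and compared against a target derived from the difficulty, so a higher difficulty means a smaller target and fewer valid blocks per unit time.

```python
import hashlib

# Illustrative constants and names; the actual Fractal hash rule may differ.
MAX_HASH = 2**256 - 1  # largest possible SHA-256 value

def target_from_difficulty(difficulty: int) -> int:
    """Higher difficulty -> smaller target -> blocks are found less often."""
    return MAX_HASH // difficulty

def is_valid_hit(payload: bytes, difficulty: int) -> bool:
    """A candidate block is valid when its hash falls below the target."""
    digest = hashlib.sha256(payload).digest()
    return int.from_bytes(digest, "big") < target_from_difficulty(difficulty)
```

Doubling the difficulty halves the target, which matches the roughly halved BPS observed when moving from difficulty 5000 to 10000 in the scenarios below.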
Greedy is the key parameter of our defense against the nothing-at-stake attack, termed the greedy strategy. Since a dishonest miner may attempt to "mine" on different branches, thereby amplifying his "mining capability", our greedy strategy allows honest miners to mine on different branches as long as the created block is no further than a distance d (the greedy parameter) behind the tip blocks of the longest chain.
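The eligibility check implied by the greedy rule can be sketched as follows. This is a simplified illustration under the assumption that block heights are known; the function name and inputs are hypothetical, not part of the Fractal API.

```python
def may_mine_on(parent_height: int, longest_tip_height: int, d: int) -> bool:
    """Greedy rule sketch: a new block extends `parent` at height
    parent_height + 1, and is allowed only if that height is no further
    than d behind the tip of the longest chain."""
    new_block_height = parent_height + 1
    return longest_tip_height - new_block_height <= d
```

For example, with d = 4 and a longest-chain tip at height 100, a miner may extend a branch whose tip is at height 96 (the new block lands at 97, within distance 4), but not one at height 94.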
Because of the greedy strategy, players tend to create blocks in parallel, so there are more forks on the blockchain. It is important to note that orphan blocks are not wasted: they are still used to pack transactions, and the order among parallel blocks is later decided through the information contained in the blocks on the main chain. Thus, iChing consensus enjoys nearly 100% block utilization.
More details about the greedy strategy can be found in our consensus paper: https://www.fractalblock.com/home_docs/consensus-protocol/iching/
The figures below present CPU usage, memory usage, transactions per second, and blocks per second, where the X-axis is the time since the test started and the Y-axis is the metric value.
Scenario 1 (Greedy: 4, Difficulty: 5000)
Scenario 2 (Greedy: 4, Difficulty: 10000)
In this scenario, we double the difficulty compared to scenario 1. The BPS roughly halves, to 0.46. Since more transactions stay in the pool, memory consumption is higher, while the TPS is almost the same.
Scenario 3 (Greedy: 2, Difficulty: 5000)
In this scenario, the lower greedy parameter means the blockchain tends to be a "smaller tree": players have less chance to create parallel blocks, and the BPS drops to roughly a third of scenario 1.
Scenario 4 (Greedy: 2, Difficulty: 10000)
In this scenario, the BPS is low, so many transactions are cached in memory, and CPU usage is also low. With a 5,000 TPS input, the system cannot process all of the transactions.
The results show that we can sustain a high throughput of 5,000 TPS. As noted at the beginning, the CPU usage report for scenario 1 clearly shows that the CPU is the bottleneck, so throughput would be much higher with more computing resources. The results also tell us that when greedy is low and difficulty is high, BPS is low and not all transactions can be packed.
We also define the block width as the number of blocks at the same height. As expected, the experiment shows that the average block width depends on the greedy parameter d: if greedy is low, the average width is low (a smaller tree), and vice versa.
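The block-width metric defined above can be computed with a short sketch; the representation of blocks as a list of heights is an assumption made for illustration.

```python
from collections import Counter

def average_block_width(block_heights: list[int]) -> float:
    """Average number of parallel blocks per height: total blocks observed
    divided by the number of distinct heights they occupy."""
    counts = Counter(block_heights)  # height -> number of blocks at that height
    return sum(counts.values()) / len(counts)

# Six blocks over three heights: one at height 0, two at 1, three at 2.
print(average_block_width([0, 1, 1, 2, 2, 2]))  # -> 2.0
```

A higher greedy parameter d lets miners extend more branches, producing more blocks per height and hence a larger average width.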
In conclusion, Fractal consensus can sustain high throughput because it generates blocks in parallel and achieves nearly 100% block utilization. Results from a test in a real-world environment (with real network conditions) will be available soon.