Announcement: aelf testnet TPS data release: 14,968 transactions per second
Earlier today, aelf announced its testnet results. This post follows up with more in-depth data, including the cluster configuration and test environments.
NOTE: This test data is initial data and does not represent final data.
The data will be updated as the entire network iterates and other functional modules are optimized. We will continue to provide our community with the latest data reports, and we welcome any developers to join us during the testing process. aelf will continue to improve its technology and build a blockchain ecosystem that can support large-scale commercial use.
Parallel Processing Model Benchmark Test
§ Two key concepts of the aelf system architecture: Main Chain + Side Chains.
Every aelf node runs as a cluster and uses parallel processing to increase TPS. Parallel processing is a key function under development, and the current test focuses on benchmarking it.
§ aelf adopts Akka as the framework.
§ Description of the testing (Complete procedure of a transaction)
• Token contract function: read the balance of account A, read the balance of account B; deduct the amount from account A, add it to account B (two reads and two writes in total).
• The benchmark program first deploys the Token contract and initializes the balances of the test accounts, then simulates a mass of transactions and groups them. Finally, it executes the grouped transactions on the Workers (aka "Transaction Executors") deployed across multiple servers.
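The transfer described above can be sketched as follows. This is an illustrative Python sketch, not aelf's actual contract code (aelf contracts are not written this way); the class and method names are hypothetical, chosen only to show the two-reads-plus-two-writes pattern that makes conflict detection between transactions possible.

```python
# Hypothetical sketch of the benchmarked Token contract logic
# (illustration only -- not the real aelf implementation).

class TokenContract:
    def __init__(self, initial_balances):
        # State store: account id -> balance
        self.balances = dict(initial_balances)

    def transfer(self, sender, receiver, amount):
        # Two reads: balance of account A and balance of account B
        sender_balance = self.balances[sender]
        receiver_balance = self.balances[receiver]
        if sender_balance < amount:
            raise ValueError("insufficient balance")
        # Two writes: deduct from A, add to B
        self.balances[sender] = sender_balance - amount
        self.balances[receiver] = receiver_balance + amount

# Initialize test accounts, then simulate one transaction
token = TokenContract({"A": 100, "B": 0})
token.transfer("A", "B", 30)
print(token.balances)  # {'A': 70, 'B': 30}
```

Because each transaction touches exactly two accounts, two transactions conflict only when they share an account, which is what the grouping step exploits.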
§ Server: a virtual machine created on AWS
§ Actor: the smallest parallel computing unit
§ Worker: the process hosting the Actors
§ Single-server mode: single Worker + single database instance
§ Cluster mode: multiple Workers + single database instance
§ Cluster mode: multiple Workers + database cluster
§ Server: AWS c5.2xlarge (8 vCPU + 16 GB)
§ Internet bandwidth: 10 Gbps (default)
§ Redis: Version 4.0.10
§ Twemproxy: Version 0.4.1
• Test 1
Each Worker server runs 2 Worker instances, each Worker instance hosts 16 Actors, and Redis runs as a single instance.
• Server and Role
Benchmark: 1 server
Worker: 4 servers
Redis: 1 server
• Test 2
Each Worker server runs 4 Worker instances, each Worker instance hosts 16 Actors, and Redis is clustered via Twemproxy (2 servers with 8 instances).
• Server and Role
Benchmark: 1 server
Worker: 4 servers
Redis: 2 servers
• Verifies the parallel execution capability in a single-machine setup.
• Verifies the scalability of a clustered environment under network impact.
• Since this is a phased test, only parallel processing and scalability were verified. Higher-specification servers were not tested, and the database is expected to perform better on a higher server configuration.
Later optimization ideas
• When dealing with a large number of transactions (for example, in a side-chain environment, splitting 80,000 transactions into 2,000 groups takes about 700 ms), the grouping strategy needs further optimization to reduce the time spent on grouping.
• Dynamic router allocation (the test used the system's built-in round-robin-group routing strategy, which is not the optimal strategy for a production environment).
• Health monitoring of the individual modules used in parallel execution (such as the Mailbox), to better understand the entire execution process.
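One plausible way to group transactions so that non-conflicting groups can run in parallel is to connect transactions that touch a common account, then treat each connected component as one group. The sketch below is a hypothetical illustration of that idea using union-find; it is not aelf's actual grouping algorithm, and all names in it are invented for this example.

```python
# Hypothetical conflict-based grouping sketch (not aelf's implementation):
# transactions sharing an account must run sequentially, so they land in
# the same group; disjoint groups can execute in parallel on different Actors.

def group_transactions(txs):
    """txs: list of (sender, receiver) pairs. Returns a list of groups."""
    parent = {}

    def find(account):
        parent.setdefault(account, account)
        while parent[account] != account:
            parent[account] = parent[parent[account]]  # path halving
            account = parent[account]
        return account

    def union(a, b):
        parent[find(a)] = find(b)

    # Connect the two accounts touched by each transaction
    for sender, receiver in txs:
        union(sender, receiver)

    # Bucket transactions by the root of their account component
    groups = {}
    for tx in txs:
        groups.setdefault(find(tx[0]), []).append(tx)
    return list(groups.values())

txs = [("A", "B"), ("C", "D"), ("B", "E"), ("F", "G")]
groups = group_transactions(txs)
print(len(groups))  # 3 independent groups: {A,B,E}, {C,D}, {F,G}
```

The 700 ms figure quoted above is dominated by exactly this kind of component-building pass over all pending transactions, which is why the grouping strategy is singled out for optimization.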
How to participate in the test
If you have any valuable suggestions during the testing process, please contact our technical team. Developers who want to participate in the testing can reach the aelf technical team via the following email address:
Technical team contact: email@example.com
Slack testnet feedback channel: #testnet-feedback
Manual: Refer to Benchmark manual
Results Statistics: Refer to statistics file
§ Tested parallel processing functionality on a single server.
§ Tested system extensibility in cluster mode (affected by network latency).
§ As this test is only a trial run, it verified only parallel processing and system extensibility. We have not yet tested higher-specification servers, which may give the database better performance.
§ During mass processing of transactions (in the test environment, grouping 80,000 transactions into 2,000 groups took about 700 ms), the grouping algorithm can be improved.
§ The Router (aka "Dispatcher") can be improved; the test used a simple round-robin strategy, and a more sophisticated strategy can be developed.
§ Develop tools to monitor the health (and capacity needs) of the modules used in parallel processing.
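The round-robin dispatch used in the test can be sketched as follows. This is a minimal illustration under assumed names (not aelf's actual Router API): groups are handed to Workers in rotation regardless of group size, which is exactly why a load-aware strategy is suggested for production.

```python
# Minimal sketch of round-robin dispatch (illustrative names only):
# each transaction group goes to the next Worker in rotation.

from itertools import cycle

def dispatch_round_robin(groups, worker_ids):
    """Assign each transaction group to the next Worker in rotation."""
    assignment = {w: [] for w in worker_ids}
    rotation = cycle(worker_ids)
    for group in groups:
        assignment[next(rotation)].append(group)
    return assignment

# Example: 5 groups distributed over 2 Workers
result = dispatch_round_robin(["g1", "g2", "g3", "g4", "g5"], ["w1", "w2"])
print(result)  # {'w1': ['g1', 'g3', 'g5'], 'w2': ['g2', 'g4']}
```

Note that if g1 were far larger than the others, w1 would become a straggler; a production dispatcher would weight assignments by group size or current Worker load instead.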