Stress test on VeChain’s testnet
Facts and results regarding our latest performance test
Thank you for showing interest in Arkane ❤️. We would like to get to know you, so don’t be shy and join us on Telegram ✨
On October 19th, we ran a performance test on our backend environment, which is connected to the VeChain testnet. The goal of our stress test was to see how our own system would behave under heavy load.
Things we were testing:
- Can our system handle the planned load?
- Is our VeChain node cluster able to handle the load?
- If we bring down one of the nodes, how does the platform behave?
- Can we recover from a broken node?
- Which service within our platform will be the first to get stressed?
- When the system comes under pressure, does it trigger a cascading failure, or can it keep operating at a reduced performance level? 🐌
Our findings 📝 so far:
- A VeChain node has an undocumented limit on the number of transactions an address can have pending on the testnet. At first it was unclear what the limit was, or whether it also applies on mainnet 👻. In collaboration with one of our community members (thanks, MiRei!), we found that each node has a tx-pool with a limit of 16 transactions per origin address: if transactions are not mined quickly enough, the pool holds at most 16 still-to-be-processed transactions for that address. This is also the case on mainnet. The limit can be increased, and at this point we do not expect it to be a limiting factor on mainnet.
- Our Authentication service ran out of memory, even at low load. The fix was to allocate more memory to the auth service.
- Our Sign service was the second to come under stress, this time only under high load; again, increasing its memory allocation solved the issue.
- Other services showed no signs of degraded performance.
- All VeChain transactions either succeeded or reverted as expected, resulting in no loss of funds. 💯
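The 16-tx-per-origin tx-pool limit above suggests a simple client-side mitigation: track how many transactions per address are still in flight and queue the rest locally until earlier ones are mined. This is a minimal sketch (not Arkane’s actual implementation); the `TxThrottle` class, its method names, and the broadcast flow are all hypothetical illustration.

```python
# Hypothetical client-side throttle keeping in-flight transactions per origin
# address under the observed tx-pool limit of a VeChain node (16 per origin,
# as found during the stress test; not officially documented).
from collections import defaultdict, deque

TXPOOL_LIMIT_PER_ORIGIN = 16  # observed on testnet; may differ per node config

class TxThrottle:
    def __init__(self, limit=TXPOOL_LIMIT_PER_ORIGIN):
        self.limit = limit
        self.pending = defaultdict(int)    # origin address -> in-flight count
        self.backlog = defaultdict(deque)  # origin address -> locally queued txs

    def submit(self, origin, tx):
        """Broadcast tx immediately if under the limit, otherwise queue it locally."""
        if self.pending[origin] < self.limit:
            self.pending[origin] += 1
            return "sent"    # caller broadcasts tx to the node here
        self.backlog[origin].append(tx)
        return "queued"

    def on_mined(self, origin):
        """Call when one of origin's transactions is mined; drains the backlog."""
        self.pending[origin] -= 1
        if self.backlog[origin]:
            tx = self.backlog[origin].popleft()
            self.pending[origin] += 1
            return tx  # caller should now broadcast this queued tx
        return None
```

With a limit of 16, submitting 20 transactions from one address would broadcast the first 16 and hold the remaining 4 locally, releasing one each time a pending transaction is mined.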
Our first performance test helped us refine our infrastructure requirements and make better decisions about how to optimize under heavy load.
We will run more tests shortly, targeting more complex services within our system, specifically in relation to concurrency.