WAX Technical How To #22

Ross Dold
EOSphere Blog
6 min read · Feb 14, 2024


Antelope Leap v5.0.0 was released on the 4th of January 2024 and was shortly afterwards incorporated into the WAX Protocol Network software.

Leap 5 Deployed Article

WAX v5.0.0 was designed to be more performant, efficient and reliable than prior versions, which is excellent news for Guilds, as even a marginal improvement can translate to massive gains across a fleet of hundreds of managed nodes.

With this in mind, the EOSphere Guild team have documented our real-world comparison of the improvements in CPU, Memory and Disk IO between WAX Software v4.0.4 and v5.0.0 in the article below.

WAX Software v5.0.0 CPU, Memory and Disk IO Performance

The following article was built from statistics gathered on one of the EOSphere Guild WAX Mainnet Public Peer Nodes. This node was chosen as it is in production and highly utilised, with between 195–200 organic incoming public peers. The hardware configuration is as below:

  • Ubuntu 22.04
  • Virtualised in KVM 7.2.0
  • 4 CPU Cores
  • 64GB RAM
  • 128GB SWAP
  • Drive 1 : OS & State : 384GB Enterprise NVMe
  • Drive 2 : Blocks : 6TB Enterprise NVMe (ZFS)

CPU

Below is the chart of daily CPU usage, showing utilisation on v4.0.4 and then the upgrade to v5.0.0 on 13/2/2024 (12h00).

KVM CPU Utilisation of EOSphere Public Peer Node

CPU utilisation actually increased from an average of 40% to a normalised 55%. This was a different result to what we had seen on other networks upgraded to Leap v5.0.0, where there was a marked decrease in CPU utilisation.

On further analysis we noticed that our new v5.0.0 config on this node had a few changes with regard to the available CPU threads.

These were changed from the defaults:

chain-threads = 4 
net-threads = 4
producer-threads = 4

These thread changes may have made a difference, although nothing conclusive; however, we still believe the v5.0.0 software is more efficient because of what we noticed next.

KVM Outbound Network Utilisation of EOSphere Public Peer Node (Megabytes per second)

Outbound traffic increased almost 2.8x (from 48Mbit/s to 133Mbit/s), which means the client peers connected to this node were receiving a service boost, improving network synchronisation. This is great news for scaling. It could also mean that the traditionally configured max-clients peer limit of 200 could be extended to 250 or even 300 for a public node.
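Raising the peer limit is a single config.ini change. The sketch below uses 250 purely as an illustration; size the value to your own node's bandwidth and CPU headroom.

> nano config.ini
max-clients = 250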

Disk IO and Memory

If you have read any of our previous WAX Technical How To articles, you will be aware that EOSphere have been advocates for running WAX nodes using the tmpfs strategy.

The tmpfs strategy involves running the nodeos chainbase database state folder in a tmpfs mount, allowing us to oversubscribe RAM with SWAP and achieve greater efficiency in memory utilisation and disk IO.

tmpfs is a Linux file system that keeps all of its files in virtual memory. The contents of this folder are temporary, meaning that if the folder is unmounted or the server is rebooted, all contents will be lost.

The challenge with tmpfs being temporary is that all data is lost on reboot, and nodeos will then require a restart from a snapshot.
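For context, a minimal sketch of that strategy looks like the below; the 100G mount size and the /mnt/wax-state path are illustrative only and will differ per deployment.

> sudo mount -t tmpfs -o size=100G tmpfs /mnt/wax-state
> nano config.ini
state-dir = /mnt/wax-state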

WAX v5.0.0 brings a new database map mode called mapped_private as an alternative to the default mapped mode. Instead of the constant writing to disk seen with mapped mode, mapped_private makes better use of memory and reduces disk IO. It does this by mapping the chainbase database into memory using a private mapping, which means that any chainbase data accessed during execution remains in memory and is not eligible to be written back to the shared_memory.bin disk file.

If that sounds familiar, it is. mapped_private is an excellent replacement for the tmpfs strategy. There is no need to mount a tmpfs partition, and as the in-memory chainbase data is written to disk on exit, there is no need to restart from a snapshot after a reboot.

mapped_private configuration

Configuration of mapped_private simply involves adding the below to config.ini:

> nano config.ini
database-map-mode = mapped_private

In order to start nodeos, mapped_private requires sufficient memory to cover the private mapping of the configured chain-state-db-size-mb. Physical RAM can be supplemented with SWAP, allowing oversubscription.

At the time of writing, 64GB of physical RAM and 128GB of SWAP is sufficient to run a WAX Mainnet node.
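As a rough illustration only, SWAP can be provisioned with a swap file as below; the 128GB size and the chain-state-db-size-mb value of 131072 (128GB) are placeholders and should be matched to your own deployment.

> sudo fallocate -l 128G /swapfile
> sudo chmod 600 /swapfile
> sudo mkswap /swapfile
> sudo swapon /swapfile
> nano config.ini
chain-state-db-size-mb = 131072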

mapped_private operation and results

On the first nodeos start in mapped_private mode (assuming you are starting from a snapshot), the entire chainbase is loaded into memory (RAM and SWAP), which may take some time.
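For reference, a first start from a snapshot looks something like the below; the snapshot file name and the data and config directory paths are placeholders for your own environment.

> nodeos --snapshot ./snapshots/snapshot.bin --data-dir ./data --config-dir ./config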

On nodeos exit the in-memory chainbase is written to disk, which may take some time depending on how large it is.

Subsequent nodeos starts are faster, as no snapshot is required and only the data needed for execution is loaded into memory, resulting in far lower utilisation.

CPU and Memory Utilisation of mapped_private mode Second Start

Subsequent nodeos exits are also faster, depending on how long the node has been running, as mapped_private tracks dirty pages and only writes those dirty pages out on exit.

There is also a slight improvement in memory utilisation compared to mapped mode.

CPU and Memory Utilisation of mapped mode

Other than RAM oversubscription and lower utilisation, the real value in using mapped_private, and the reason EOSphere started using this mode in the first place, is far lower disk IO.

Performance requirements make it a necessity for operators to place the state folder containing the chainbase database on a high-speed SSD. SSDs have an endurance rating assigned by the manufacturer stating the maximum amount of data that may be written to the drive before failure. This is usually expressed in Terabytes Written (TBW): a consumer disk is typically rated between 150–2000TBW, while an enterprise drive is usually rated in the Petabyte range. Essentially, too many disk writes may wear out an SSD and cause failure.

Below is the Drive 1 disk IO (Writes) of our example peer node using mapped mode, while the network was seeing between 15–40 Transactions Per Second (TPS).

Drive 1 Disk IO (Writes) using mapped mode

And this was the Drive 1 disk IO (Writes) of our example peer node using mapped_private mode, with the network seeing the same 15–40 TPS.

Drive 1 Disk IO (Writes) using mapped_private mode

This demonstrates a massive reduction in the amount of writes using mapped_private.

Approximately 7 Megabytes (MB) per second down to 30 Kilobytes (KB) per second. That’s about 220TBW / Year reduced to 0.95TBW / Year.
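For clarity, those annual figures follow directly from the per-second write rates:

7 MB/s × 86,400 seconds/day × 365 days ≈ 220 TB written per year
30 KB/s × 86,400 seconds/day × 365 days ≈ 0.95 TB written per year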

This translates to SSDs lasting longer, virtual environments scaling better and cloud environments not being constrained by IO limitations.

In summary, WAX Software v5.0.0 makes more effective use of CPU, delivers better network throughput, has a more efficient memory footprint and, when using mapped_private, far lower and easily manageable disk IO.

Be sure to ask any questions in the EOSphere Telegram

EOSphere Guild is a Block Producer on the WAX Protocol Network as well as many other Antelope based Blockchains.

If you find our work helpful, please vote for us on the WAX Mainnet: eosphereiobp

If you prefer to proxy your vote, our proxy account is: blklotusprxy

Connect with EOSphere via these channels:

TELEGRAM | MEDIUM | YOUTUBE | FACEBOOK | TWITTER | INSTAGRAM
