Fetch.AI Ledger Benchmarking I — Overview and Architecture

Fetch.ai · Feb 21, 2019

This is the first in a series of articles in which we discuss our benchmarking of the ledger we are building. As our ultimate goal is to build a scalable ledger, the absolute number of transactions per second is a poor metric for understanding how well the system performs: a perfectly scalable system would not be bounded in throughput. Rather, the interesting question is: how does the system perform as a function of the computational resources available per node?

The answer to this question depends strongly on a number of different components in the system, each of whose performance will vary with the conditions under which it operates. For instance, transaction synchronization for a single shard can never exceed the limits of the network connection it runs on. If you ran a shard over a 56K modem with an average transaction size of 2kb, bandwidth alone would cap that shard at 20–30 transactions per second at the very best. On a modern 100MBit connection, by contrast, transaction synchronization could peak at around 5,000 transactions per second.
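As a rough back-of-the-envelope check, the bandwidth-bounded throughput of a single shard can be estimated by dividing the available bandwidth by the average transaction size. The snippet below is a minimal sketch of that calculation; the figures are illustrative estimates, not benchmark results:

```python
def max_tx_per_second(bandwidth_bits_per_s: float, tx_size_bytes: int) -> float:
    """Upper bound on synchronization throughput imposed by the network link alone."""
    return bandwidth_bits_per_s / (tx_size_bytes * 8)

# A 100 MBit/s connection with the assumed 2048-byte transactions:
print(max_tx_per_second(100e6, 2048))  # ~6100 tx/s in theory; protocol overhead
                                       # brings the practical peak closer to ~5k tx/s
```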

To answer the above question, we will benchmark a number of individual components to first quantify their performance. Each of these benchmarks is based on test conditions and/or assumptions about the system that are expected to reflect either a production environment or a worst-case scenario. In this series of blog posts, we will address:

  1. Low-level data structures
  2. Single shard synchronization performance
  3. The worst case performance for transaction scheduling
  4. Time-to-finality for consensus
  5. Multi-shard benchmarking
  6. Full system test

For reference, throughout these benchmarks we work under the assumption that the average transaction size is around 2048 bytes unless otherwise stated. This is well above the size of a normal Bitcoin transaction and has been chosen to ensure that the numbers remain valid even with the more advanced synergetic and smart contracts that Fetch.AI is enabling with its ledger.

The high-level architecture of the Fetch.AI ledger transaction management system involves a main chain that controls multiple shards, which we refer to as lanes. These lanes can be considered individual sub-chains that are coordinated through the main chain, with state maintenance performed by a number of transaction executors working across the lanes:

This simplified block diagram provides an overview of Fetch.AI’s scaling solution: transactions arrive at the main node, are sharded according to their digest and dispatched to the appropriate lane, usually on a different machine. This allows a high degree of parallelism in transaction execution and dynamic balancing in response to system demand. For further information, refer to the Fetch.AI yellow paper on the scalable ledger.
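To illustrate the idea of digest-based sharding, the sketch below assigns a transaction to a lane by taking a hash of its payload modulo the number of lanes. The lane count of 16 and the use of SHA-256 here are assumptions for the example, not details of the ledger’s actual implementation:

```python
import hashlib

NUM_LANES = 16  # assumed lane count for this illustration

def lane_for_transaction(tx_payload: bytes) -> int:
    """Map a transaction to a lane using its digest."""
    digest = hashlib.sha256(tx_payload).digest()
    # With a power-of-two lane count, the low-order bits of the digest
    # effectively select the lane.
    return int.from_bytes(digest[:8], "big") % NUM_LANES

print(lane_for_transaction(b"example transaction payload"))
```

Because the digest is effectively uniform, transactions spread evenly across lanes, which is what allows execution to be parallelised across machines.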

One of the key elements (and hence possible bottlenecks) of the system is the transaction store. Each lane in the Fetch.AI ledger adopts an architecture very close to that found in other ledgers, where transactions are first held in a transient in-memory store and later written to disk. We summarise this simplified architecture in the image below:
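The sketch below illustrates the general two-tier pattern described above, with a transient in-memory store sitting in front of a persistent object store. It is a simplified illustration under assumed names and interfaces, not the ledger’s actual code:

```python
class TwoTierTransactionStore:
    """Transient in-memory store backed by a persistent object store."""

    def __init__(self, object_store):
        self.mempool = {}            # digest -> transaction, held in memory
        self.object_store = object_store

    def add(self, digest, tx):
        self.mempool[digest] = tx    # cheap in-memory insert on arrival

    def get(self, digest):
        # Look in the mempool first, fall back to the disk-backed object store.
        return self.mempool.get(digest) or self.object_store.get(digest)

    def commit(self, digest):
        # Move a confirmed transaction into long-term (disk-backed) storage.
        tx = self.mempool.pop(digest)
        self.object_store.put(digest, tx)
```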

The object store was benchmarked and we summarise the results here:

For this test, increasingly large numbers of transactions were submitted to the system, with their size varied between an artificially small value and the expected size.

Because certain data structures in the object store are designed to amortise the cost of disk writes, the average rate of transactions per second increases significantly as larger numbers of transactions are submitted. As can be seen from the table, for the expected transaction size of ~2048 bytes, the rate rapidly approaches just above 30k Tx/sec. Given that the system is designed to shard the object store across the lanes (where each lane will have its own disk), this result is comfortably within bounds.
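A common way to amortise disk writes, and a plausible reading of the effect seen above, is to buffer many small writes and flush them to disk as one large sequential write. The following is a minimal sketch of that idea under assumed names; the ledger’s actual data structures may differ:

```python
class BatchedWriter:
    """Buffer small records in memory and flush them to disk in large batches."""

    def __init__(self, path, batch_size=10_000):
        self.path = path
        self.batch_size = batch_size
        self.buffer = []

    def write(self, record: bytes):
        self.buffer.append(record)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        # One large sequential write is far cheaper than many small random writes,
        # so the per-transaction cost falls as more transactions are submitted.
        with open(self.path, "ab") as f:
            f.write(b"".join(self.buffer))
        self.buffer.clear()
```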

For the test of the transient store, transactions were written to and read back from the store. In addition, 10% of these transactions were scheduled to be written to the object store, representing the subset of the mempool that is committed long term. This reflects the expected operation, where transactions arrive, are stored in the mempool for some time, are accessed for verification and mining, and are ultimately either committed to long-term storage or discarded.
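A minimal version of that test workload could look like the sketch below, which writes transactions into a transient store, reads them all back, and commits roughly 10% of them to long-term storage. The names, sizes and structure are illustrative, not the actual benchmark harness:

```python
import os
import time

def run_transient_store_benchmark(store, object_store, n_tx=100_000, tx_size=2048):
    start = time.time()
    digests = []
    for i in range(n_tx):
        digest, tx = i.to_bytes(8, "big"), os.urandom(tx_size)
        store[digest] = tx                      # write into the mempool
        digests.append(digest)
    for digest in digests:
        _ = store[digest]                       # read back for verification/mining
    for digest in digests[::10]:                # ~10% committed to long-term storage
        object_store[digest] = store.pop(digest)
    return n_tx / (time.time() - start)

print(f"{run_transient_store_benchmark({}, {}):.0f} tx/s (illustrative)")
```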

In conclusion, the transient store is unlikely to pose a problem, as it is decoupled from disk writes and can make use of standard in-memory data structures to provide very high rates.

Having established that each lane’s low-level transaction handling mechanisms are capable of handling 30k+ transactions per second, we will discuss the full performance of a single-lane system in the next blog post.
