Is traditional sharding necessary for a highly scalable L1 blockchain?

Andrew Scott Riley
Cyber Capital
Nov 16, 2022 · 6 min read

Traditional Sharding

Traditional sharding refers to the sharding design initially described in the Ethereum 2.0 roadmap, before the pivot to a rollup-centric approach with Danksharding. A current implementation of traditional sharding can be seen in MultiversX (formerly known as Elrond), which has a metachain (similar to the beacon chain) along with three other shards. Traditional sharding usually comes with the understanding that the number of shards can grow as demand for blockspace increases, allowing the system to accommodate a large volume of transactions. Each shard is essentially its own blockchain that produces blocks in parallel with the other shards, and validators post proofs of the shard blocks to the main chain (the beacon chain in Ethereum, the metachain in MultiversX). As the number of shards increases, so does the number of blocks produced by the entire system. We will use the term meta chain to describe the primary chain where shard proofs are posted.
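
As a minimal sketch of this layout (the `Shard` and `MetaChain` classes below are illustrative only, not MultiversX's or Ethereum's actual data structures):

```python
import hashlib

class Shard:
    """One shard: an independent chain producing blocks in parallel
    with every other shard."""
    def __init__(self, shard_id: int):
        self.shard_id = shard_id
        self.blocks = []

    def produce_block(self, txs: list) -> str:
        header = hashlib.sha256(
            repr((self.shard_id, len(self.blocks), txs)).encode()
        ).hexdigest()
        self.blocks.append((header, txs))
        return header  # only this header/proof travels upward

class MetaChain:
    """The coordinating chain: stores one proof per shard per round,
    never the shards' transactions themselves."""
    def __init__(self):
        self.rounds = []

    def seal_round(self, shard_headers: dict):
        self.rounds.append(shard_headers)

# Each round, all shards build blocks in parallel; the meta chain's
# load grows with the number of shards, not with total transactions.
shards = [Shard(i) for i in range(4)]
meta = MetaChain()
meta.seal_round({s.shard_id: s.produce_block([f"tx-{s.shard_id}"]) for s in shards})
```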

What does sharding accomplish?

The benefits of sharding can be distilled into two aspects:

  1. Reducing validator hardware requirements — transactions are processed in parallel across many validators.
    • It’s difficult for a single validator’s computer to process tens of thousands of TPS, but sharding lets many validators create blocks in parallel, reducing the hardware each validator needs. As validator hardware requirements get lower, the decentralization of the blockchain increases.
  2. Partitioning state data into smaller pieces — account balances become easier to download and audit.
    • Instead of downloading the entire blockchain, a user can download just the meta chain and their own shard in order to properly audit their account balance (see the sketch just after this list).

Both of these aspects work together to add more transactional capacity to a blockchain.
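
To make the partitioning concrete, state is typically assigned to shards deterministically from the address itself, so a user always knows which single shard holds their account. A minimal sketch (the modulo rule below is a common simplification for illustration, not MultiversX's or Ethereum's actual assignment scheme):

```python
NUM_SHARDS = 4

def shard_of(address: str) -> int:
    """Home shard of an address: its integer value mod the shard count."""
    return int(address, 16) % NUM_SHARDS

# To audit this balance, the user downloads the meta chain plus only
# the one shard this function points at, not the whole system.
print(shard_of("0xab12cd34ef"))
```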

The problems with sharding:

  1. Redundant transactions — When a cross-shard transaction is made, it is written in three places: on the sender’s shard, on the receiver’s shard, and on the metachain. In a world where a sharded blockchain has 1000 shards, it becomes clear that nearly all transactions will be cross-shard transactions (quantified in the sketch after this list). Validators are also reassigned to different shards frequently (in MultiversX, roughly every three days). If each shard grows sufficiently large, three days may not be enough time to download a new shard, which would push validators to download all the shards anyway.
  2. Fractured liquidity — A liquidity pool can only live on one shard, so a user on a different shard must make a cross-shard transaction to deposit into or withdraw from that pool. And since each shard has a hard TPS cap, transactions to and from the liquidity pool are also bottlenecked by that single shard’s capacity.
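
How bad the redundancy gets is easy to estimate: if senders and receivers are spread uniformly across N shards, a transaction stays intra-shard with probability 1/N, so the cross-shard fraction is 1 − 1/N. A back-of-the-envelope sketch (uniform-traffic assumption, ignoring any locality between users):

```python
def expected_writes_per_tx(num_shards: int) -> float:
    """Expected ledger writes per transaction: 1 write if intra-shard,
    3 writes (sender shard, receiver shard, meta chain) if cross-shard,
    with uniformly random sender and receiver shards."""
    p_cross = 1 - 1 / num_shards
    return 1 * (1 - p_cross) + 3 * p_cross

for n in (2, 10, 1000):
    print(f"{n:>5} shards -> {expected_writes_per_tx(n):.3f} writes/tx")
# At 1000 shards, ~99.9% of transactions are cross-shard, so the system
# performs nearly three writes for every single transfer.
```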

Let’s now compare and contrast the benefits and problems of sharding against a monolithic (i.e., non-sharded) blockchain that has the additional tools needed to achieve the same benefits as sharding.

Validator Hardware Requirements

  1. Traditional Sharding:
    • Different blocks can be added in parallel (each to its respective shard chain), so long as the shard chains are attached to some parent chain. This splits the computation among many validators, allowing each to have lower hardware requirements.
  2. Monolithic:
    • A monolithic chain with traditional block production only adds one block at a time. The implication is that if block times are to be short, validators must have robust hardware to process all the transactions within a short period.
    • However, with proposer-builder separation, the builder does the processing; the validator just checks the proofs and then proposes the block, so validator hardware requirements can remain low even on a high-throughput monolithic chain. (source)
  3. Conclusion:
    • Proposer-Builder Separation removes the need for high validator hardware requirements, which in my opinion is necessary for any high-throughput L1, sharded or monolithic. It also brings censorship resistance and protection from MEV-driven centralization, both of which are necessary. One drawback: if transaction volume grew so high that only a few builders could keep up, centralization could emerge among builders. But at least that centralization would sit outside the validator set and its consensus, where it would otherwise threaten decentralization. Since execution is not parallelized, the computational power of the builders becomes the bottleneck. A devil’s-advocate argument would be: what if the centralized builders amass so much power that they then buy up tokens and become validators to further centralize? The answer is that this is no different from any other industry; Google makes billions per year and could buy up validators in the same manner. The important distinction here is that the roles of proposer and builder are divided. The problem arises when there is no separation between proposer and builder, so the few proposers who can build profitably end up dominating. With PBS, all validators profit equally from the work of a few centralized builders, maintaining decentralization within the consensus framework (sketched below).
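
A rough sketch of the PBS division of labor described above (the bid structure and the cheap `verify_commitment` check are simplified placeholders, not Ethereum's actual mev-boost protocol):

```python
from dataclasses import dataclass

@dataclass
class Bid:
    builder: str           # who built the block
    payment: int           # what the builder pays the proposer
    block_commitment: str  # commitment to the heavy block body

def verify_commitment(bid: Bid) -> bool:
    # Placeholder: in a real design the proposer checks a cheap
    # validity/availability proof instead of re-executing the block.
    return len(bid.block_commitment) > 0

def propose(bids: list[Bid]) -> Bid:
    """The proposer's job stays cheap: verify each commitment and take
    the highest bid. All heavy execution happened on the builder side."""
    valid = [b for b in bids if verify_commitment(b)]
    return max(valid, key=lambda b: b.payment)

winner = propose([
    Bid("builder-A", payment=5, block_commitment="0xaaa"),
    Bid("builder-B", payment=9, block_commitment="0xbbb"),
])
print(winner.builder)  # builder-B wins; every proposer profits from
                       # builders' work without matching their hardware
```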

Auditing Account Balances

  1. Traditional Sharding:
    • Each address’s state lives on a particular shard.
    • In order to check the balance of a particular address, one can download the shard and verify that its current state proof is posted on the meta chain.
  2. Monolithic:
    • Having the ability to audit a single shard may be unnecessary if validators (or full nodes) can serve a zk-proof of a particular data point to a user without the user having to download the entire shard or blockchain (see the proof-verification sketch after this list).
  3. Conclusion:
    • Data Availability Sampling is superior to downloading an entire shard to check the validity of something, so sharding is not necessary for auditing a chain. With Data Availability Sampling, auditability can remain very high even on a large monolithic chain (see the sampling estimate below).
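
To make “serve a proof of a single data point” concrete, here is the classical light-client pattern, with a Merkle proof standing in for the zk-proof mentioned above: the user keeps only a trusted state root and checks one balance against it, downloading neither the shard nor the chain.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def verify_merkle_proof(leaf: bytes, proof: list[tuple[bytes, str]], root: bytes) -> bool:
    """Walk from the leaf up to the root using sibling hashes; the
    'L'/'R' tag says which side each sibling sits on."""
    node = h(leaf)
    for sibling, side in proof:
        node = h(sibling + node) if side == "L" else h(node + sibling)
    return node == root

# A tiny two-account "state" so the example is self-contained.
alice, bob = b"alice:100", b"bob:42"
state_root = h(h(alice) + h(bob))

# A full node serves Bob just his leaf plus one sibling hash; Bob
# verifies his balance against the root he already trusts.
assert verify_merkle_proof(bob, [(h(alice), "L")], state_root)
```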
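
And the sampling estimate itself: with erasure coding, a block stays recoverable unless more than half its chunks are withheld, so every random sample a light client draws succeeds with probability at most 1/2 against a withholding attacker. A handful of samples therefore gives near-certainty (a textbook DAS estimate, not any specific client's parameters):

```python
def das_confidence(samples: int) -> float:
    """Probability of catching an attacker who withholds the minimum
    ~50% of erasure-coded chunks needed to make a block unrecoverable:
    each independent random sample hits a withheld chunk with
    probability >= 1/2, so all samples passing has probability <= 2^-k."""
    return 1 - 0.5 ** samples

for k in (10, 20, 30):
    print(f"{k} samples -> {das_confidence(k):.10f} confidence")
# 30 tiny samples give ~0.999999999 confidence that the data is
# actually available, with no shard download at all.
```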

Conclusion

It’s not sharding itself that’s important; what matters is:

  1. Maintaining low hardware requirements for validators — Achieved by Proposer-Builder Separation
  2. Allowing auditability of the chain — Achieved by Data Availability Sampling
  3. If we combine these two tools without a rollup-centric approach, we get all the benefits of both traditional sharding and an L2-centric approach, plus more, such as:
    • Theoretically infinite scalability (same as sharding, better than L2)
    • Superior user experience (slightly better than sharding, better than L2)
    • Non-fractured liquidity (better than sharding, better than L2)
    • Fewer redundant transactions when transacting from one shard (or L2 chain) to another (better than sharding, better than L2)
  4. If we want even further improvement, we can add Danksharding architecture in order to have pruning. My idea of this would be fundamentally different from Ethereum’s version of Danksharding, where each shard is a blob of arbitrary data: each shard could, and should, be L1 transactional data. This is more expensive than blob data, since L1 transactional data has to run through the EVM, but it would save users from poor UX and fractured liquidity.
  5. I think Ethereum has a LOT right: Proposer-Builder Separation, Data Availability Sampling, Danksharding. But of course the fundamental flaw is significantly favoring L2 data over L1 data. With proper pruning under Danksharding, high L1 capacity wouldn’t be a problem.

About the author

Andrew Scott Riley is a research consultant at Cyber Capital. With his computer science background, he analyzes cryptocurrency projects and whitepapers and compiles extensive research reports.

About Cyber Capital

Cyber Capital, Europe’s oldest cryptocurrency investment fund, is a fund manager that specializes in providing exposure to the crypto-asset markets as an alternative asset class. Cyber Capital is registered with the Dutch Authority for the Financial Markets under the AIFMD-light regime and with the Dutch Central Bank.

Follow us on Medium and Twitter.
