Intro to sharding and cross-shard trust
The challenges of scaling blockchains are well documented. The most successful blockchains in operation today form a linear chain, where each block or update references the previous. Every node on the network stores a complete copy of the ledger history. The singular chain model works extremely well at keeping the entire world in consensus. Unfortunately, it is rather limiting in terms of overall network throughput, since every node needs to receive and validate every transaction that happens globally.
Many approaches have been proposed to help blockchains scale. The most popular approaches today are layer 2 solutions such as Lightning or Plasma, bigger blocks, and abandoning the idea of a linear blockchain completely in favor of a more scalable architecture. A couple of possibilities exist to replace a linear blockchain: DAGs and sharding.
Note on DAGs
Directed acyclic graphs (DAGs) have been proposed by projects such as IOTA, Byteball and Nano. These projects argue that it's not important for everyone to have a global state; rather, nodes should only need the local state relevant to them, plus enough connections to other nodes to verify that their local state does not conflict with others' views.
In the absence of a global state, attacks such as the eclipse attack become possible if an attacker can monopolize the incoming connections of a victim node. Until May 2019, IOTA addressed this with a centralized coordinator that everyone connected to, which arguably defeated the whole point of using a DAG. They've since announced a switch from the coordinator to a voting module dubbed Coordicide. Consensus participants will be expected to proactively vote on conflicting transactions, which in my view sounds quite a lot like traditional blockchain consensus.
Sharding systems are similar to DAGs, but they acknowledge the importance of a global view of state and impose formal structure on the ledger to ensure that the whole system stays in alignment. Sharding divides the validation work of a blockchain into groups that are each responsible for a subset of the work.
Many designs exist for sharding, yet very few are in production today. The primary design consideration is whether or not to have a beacon chain. A beacon chain acts a lot like a traditional blockchain, but rather than validating transactions and ledger state itself, it relies on each shard to come to consensus on its own state. This state is then condensed into a merkle root that is signed by a quorum of shard validators. The beacon chain then weaves together the shard roots into an overall chain root.
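The weaving step above can be sketched as a small Merkle fold. This is an illustrative sketch, not any production beacon chain's actual encoding: the hash function, leaf format, and odd-node handling here are all assumptions.

```python
import hashlib

def h(data: bytes) -> bytes:
    # SHA-256, standing in for whichever hash a real chain uses.
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a list of leaves into a single Merkle root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Each shard's validator quorum signs off on its own state root...
shard_roots = [h(f"shard-{i}-state".encode()) for i in range(4)]

# ...and the beacon chain weaves the shard roots into one chain root.
chain_root = merkle_root(shard_roots)
```

A light client that trusts the beacon chain's `chain_root` can then verify any single shard's state with a short Merkle path instead of downloading every shard.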
It is possible to have sharding without a beacon chain, but even more effort must be taken to divide resources fairly and prevent large reorganizations and shard takeover attacks.
In a sharded system, the goal is to divide the work among groups of validators and thereby increase throughput. One of the first critical problems is understanding how that work is divided. Protections must be in place to ensure that dishonest validators cannot overwhelm a particular shard.
Above is a visualization that hints at how even just 1% of the validators could wreak havoc if they were to concentrate on a single shard. Ethereum plans to use a shuffling procedure that assigns validators to shards in an unpredictable manner to prevent a bad actor from taking over a shard.
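A minimal sketch of such a shuffling procedure follows. The seed here is a placeholder: a real system would derive it from a source no single party controls (Ethereum's design uses RANDAO-style randomness), and the concrete assignment rule below is my own illustration, not the actual spec.

```python
import hashlib
import random

def assign_validators(validators: list[str], num_shards: int,
                      epoch_seed: bytes) -> dict[int, list[str]]:
    """Shuffle validators with an epoch seed, then deal them
    round-robin into shard committees."""
    # Seeding a PRNG from a shared seed makes the shuffle
    # deterministic for all honest nodes, yet unpredictable
    # before the seed is revealed.
    rng = random.Random(hashlib.sha256(epoch_seed).digest())
    shuffled = validators[:]
    rng.shuffle(shuffled)
    return {s: shuffled[s::num_shards] for s in range(num_shards)}

validators = [f"v{i}" for i in range(100)]
committees = assign_validators(validators, num_shards=4,
                               epoch_seed=b"epoch-42-seed")
```

Because an attacker cannot predict which shard their validators will land on, concentrating stake on one shard requires corrupting a large fraction of the whole validator set rather than a small fraction of one shard.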
What happens on the boundaries of these shards? How do they interact with each other? One obvious answer is that if an application isn't on my shard, I can make a new account on a different shard, or I may even be able to use a cross-shard service that takes my wallet address on one shard and lets me interact with applications on any other shard.
Imagine you want to receive a payment from a network participant that is not in the same shard as you. How can you receive money from a shard that you are not participating in?
A visualization of a sample approach proposed by Ethereum researchers
Here, we invoke the idea of receipts. A recipient shows proof that they are to receive coins from a foreign shard by providing a merkle path to a transaction in the source shard. The destination shard consumes the receipt and credits the recipient's account. This must be done atomically: either the sender's and recipient's accounts are modified together, or neither is. If there is a gap, or one end fails, the sender could trick the recipient into believing they've received funds that they'll never end up getting.
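The destination shard's side of this can be sketched as follows. The receipt encoding, field names, and replay-protection scheme here are hypothetical illustrations, not the actual Ethereum design; the key ideas are that the merkle path must check out against the source shard's root, and that a receipt can only be consumed once.

```python
import hashlib
import json

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def encode(receipt: dict) -> bytes:
    # Canonical encoding so both shards hash the receipt identically.
    return json.dumps(receipt, sort_keys=True).encode()

def verify_merkle_path(leaf: bytes, path: list, root: bytes) -> bool:
    """Recompute the root from a leaf and its sibling path.
    Each path element is (sibling_hash, 'L' or 'R')."""
    node = h(leaf)
    for sibling, side in path:
        node = h(sibling + node) if side == "L" else h(node + sibling)
    return node == root

consumed: set = set()  # receipt IDs already credited on this shard

def credit_from_receipt(balances: dict, receipt: dict,
                        path: list, source_root: bytes) -> None:
    """Credit the recipient only if the receipt provably exists under
    the source shard's root, and mark it consumed to prevent replay."""
    if receipt["id"] in consumed:
        raise ValueError("receipt already consumed")
    if not verify_merkle_path(encode(receipt), path, source_root):
        raise ValueError("merkle path does not match source shard root")
    consumed.add(receipt["id"])
    balances[receipt["to"]] = balances.get(receipt["to"], 0) + receipt["amount"]

# Two receipts committed in the source shard's state root.
r0 = {"id": "rcpt-0", "to": "bob", "amount": 5}
r1 = {"id": "rcpt-1", "to": "carol", "amount": 7}
leaves = [h(encode(r0)), h(encode(r1))]
root = h(leaves[0] + leaves[1])

# Bob presents r0 plus its merkle path on the destination shard.
balances: dict = {}
credit_from_receipt(balances, r0, [(leaves[1], "R")], root)
```

Presenting the same receipt a second time raises an error, which is one half of the atomicity story; the other half is the source shard debiting the sender only when it emits the receipt.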
Transactions In Transit
Truly atomic transactions across shards are a difficult problem, since they require that validators in different shards communicate with each other synchronously. If demand for cross-shard transactions is high enough, performance can degrade as more shard workers must collaborate to handle them.
A sharded system must develop mechanisms for trusting that the network will not reverse these transactions from the foreign shard. How can participants protect themselves from large reorganizations?
The best answer we have to date is to ensure that the number of validators within a shard stays above some minimum threshold, so that the odds of dishonest validators overwhelming a single shard are low. Regular, but not overly frequent, validator rotation limits the window during which a given shard's validator set could be bribed. If rotations are too frequent, the cost of running a node increases and decentralization is harmed, since nodes need more storage and bandwidth to keep up with the shard changes.
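Why a minimum committee size helps can be made concrete with a hypergeometric tail: the chance that a randomly sampled committee contains enough dishonest validators to reach a quorum. The parameter values below (10,000 validators, 30% dishonest, a 2/3 quorum) are illustrative assumptions, not figures from any particular protocol.

```python
from math import comb

def takeover_probability(validators: int, dishonest: int,
                         committee: int, threshold: int) -> float:
    """Probability that a committee of `committee` validators, drawn
    uniformly at random from the full set, contains at least
    `threshold` dishonest members (hypergeometric tail)."""
    total = comb(validators, committee)
    return sum(
        comb(dishonest, k) * comb(validators - dishonest, committee - k)
        for k in range(threshold, committee + 1)
    ) / total

# Even with 30% of all validators dishonest, a 128-member committee
# needing an 86-vote (2/3) quorum is essentially never captured...
p_large = takeover_probability(10_000, 3_000, 128, 86)

# ...while a tiny 4-member committee is captured uncomfortably often.
p_small = takeover_probability(10_000, 3_000, 4, 3)
```

This is why random sampling and committee size work together: the takeover probability falls off exponentially as the committee grows, so a protocol can pick a size that makes shard capture negligible without requiring every validator to check every shard.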
One feature that simplifies these problems greatly is finality. Once a block is marked final by the economic majority in the system, we can be sure that foreign shards won't change underneath us. Finality seals the ledger up to the finalized block so that it cannot be mutated, and prior cross-shard transactions can be considered just as secure as if there were a single chain.
Proof-of-Stake can provide this kind of explicit finality, while Proof-of-Work can only ever offer probabilistic finality. This is the reason that Ethereum developers are pairing PoS and sharding together in their Eth2.0 roadmap.