Intro to Beam Sync

A step toward 100,000x speedup over Fast Sync on Ethereum Mainnet

Jason Carver
10 min read · Sep 24, 2019

Why work on sync?

I wince when I am reminded of how many people are still using Infura (via Metamask, Gnosis Safe, etc) to interact with on-chain applications. Infura is a great service, but something is wrong when most people aren’t running their own node. Even very capable and motivated developers are punting to Infura. We are failing to achieve an important part of Ethereum’s self-validating vision.

Our team wants to do our part to reverse this trend. We are on a mission to dramatically increase the number of nodes on the network, especially those run by hobbyists, researchers, and developers. When we chat with people about the reasons that they aren’t running their own node, we typically get some variant of this answer: “I installed it, and tried to sync for a while, but it didn’t seem like it was working. I shelved it, because I’ve got other 💩 to do.”

So if we’re going to get more people running nodes, we need to make sync a lot faster, and give better feedback during sync. Many different teams are attacking this problem in different ways. Dedicated hardware is one important avenue. This post is about how we can dramatically speed up sync using a new strategy called Beam Sync.

Current Sync Options

To understand how Beam Sync works, it helps to first understand the existing options.

Full Sync

The Full Sync strategy is to execute every block since genesis. There is a starting genesis state (account balances, contract bytecodes, storage slots, etc). Each block then reads the previous state and writes a new state, which is verified against the state root in the block header. Full Sync is painfully slow on mainnet, and the time to finish it will grow without bound as the network gets older. So “Fast” Sync was born.
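
For intuition, the whole strategy fits in one short loop. Here is a minimal sketch in Python; the `chain_db`, `network`, and `execute_block` helpers are hypothetical stand-ins, not Trinity’s actual API:

```python
# A minimal, hypothetical sketch of the Full Sync loop.

def full_sync(chain_db, network):
    # Start from the genesis state: balances, bytecodes, storage slots, etc.
    state = chain_db.load_genesis_state()
    block_number = 1
    while True:
        block = network.get_block(block_number)
        # Run every transaction in the EVM against the previous state.
        state = execute_block(state, block)
        # The resulting state must hash to the root committed in the header.
        assert state.root_hash == block.header.state_root
        chain_db.persist(block, state)
        block_number += 1
```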

“Fast” Sync

Fast Sync downloads all the past blocks and headers, and picks a recent block as a “launch block.” It skips chain execution up to the launch block, presuming that the chain has followed all the EVM rules correctly until then. This is somewhat reasonable, given that miners have incentives to produce correct blocks and reject bad ones.

Before Fast Sync can execute the launch block, it needs the state: bytecode, accounts, and contract storage. Transactions might read any of these values during execution. So Fast Sync requests the snapshot of the state just before the launch block, from its peers. The snapshot is referred to by its state root hash, a Merkle Tree root hash of all of the state. The node uses that state root hash to verify that the state data downloaded from the peers matches the state that the miner declared in the block.

After Fast Sync has finished downloading all the state, the node has everything needed to execute any valid transaction. So the node switches over to full sync, and executes the blocks from that point forward, as if it had done a full sync up to the launch block.

A simplified version, sketched in the same hypothetical Python as above, looks like this:
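
```python
# A minimal, hypothetical sketch of Fast Sync.

def fast_sync(chain_db, network):
    # 1) Download past headers and blocks, and pick a recent "launch block".
    headers = network.download_all_headers()
    launch_block = pick_recent_block(headers)

    # 2) Get All State: download the snapshot of state from just before the
    #    launch block, verified against the state root hash in its header.
    state = network.download_state(launch_block.header.state_root)
    assert state.root_hash == launch_block.header.state_root

    # 3) Switch to full sync: execute blocks from the launch block forward.
    full_sync_from(chain_db, network, launch_block, state)
```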

Others

There are more sync approaches like Warp Sync, and some more experimental strategies. At a very high level, they are all some kind of variant of Fast Sync. Knowing the details won’t help with understanding Beam Sync, so I’ll save them for another post.

How Fast is “Fast” Sync?

Fast Sync has some significant challenges on mainnet today. There is quite a bit of data to download, more than 100 GB. So it tends to get stuck in that “2) Get All State” step above for a while.

To make things even more difficult, peers don’t serve state for every block. Peers will only serve you state from a relatively recent window: roughly 100 blocks, or about 30 minutes (the default in geth appears to be 120 blocks).

If you can’t download all the state in 30 minutes (spoiler alert: you can’t), then you need to “pivot.” Pivoting means switching to a new launch block, and starting to sync again. Pivoting doesn’t mean starting from scratch, but it does increase the time spent downloading and verifying state.

Geth has made amazing progress with speeding up sync, both Fast and Full. It keeps getting better with every release. But even if you have the ideal hardware available, syncing will take 4 hours minimum. That’s a rough experience for a first run-through.

Our team works on a Python client called Trinity. Python will not beat Go in a pure speed race. If performance-focused geth code can’t sync as fast as we want, what chance would Trinity have? It’s totally reasonable to project that a Trinity implementation of Fast Sync would take weeks to sync, at least. But a client isn’t a client if it can’t sync to mainnet. Syncing in weeks doesn’t count. Out of that necessity came a new strategy for syncing, which we are now calling: Beam Sync.

Beam Sync

Overview

Beam Sync is a direct evolution of Fast Sync. The primary difference is that Beam Sync starts by executing the launch block, and only requests state data that is missing from the local database. The input state and output state are saved locally. Then Beam Sync advances to the next block and repeats the process, requesting any missing data on-demand.

Over time, less and less data will be missing. Note that if some state is never accessed, the client will never request it, so a background process fills in these gaps. With that backfill process, Beam Sync eventually populates the local database with all state data, and the node can switch to Full Sync.

We call the set of data needed to execute each block the block witness. Due to the magic of Merkle Trees, we can prove that the witness data is present in the whole state, without actually downloading the whole state.
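
The heart of Beam Sync is a tiny read-through cache over the state trie. A minimal sketch, where the peer and database objects are hypothetical but `eth_hash` is the real keccak-256 helper used across the py-evm ecosystem:

```python
from eth_hash.auto import keccak  # keccak-256, as used by py-evm


def get_trie_node(node_hash, local_db, peer):
    """Return one trie node, fetching it from a peer only if it's missing."""
    try:
        # As sync progresses, more and more lookups hit the local database.
        return local_db[node_hash]
    except KeyError:
        node = peer.get_node_data(node_hash)
        # The Merkle magic: the node must hash to exactly the hash we
        # requested, so a peer cannot slip us bad witness data.
        assert keccak(node) == node_hash
        local_db[node_hash] = node  # save the witness data locally
        return node
```

Note that decoding a node is what reveals the hashes of its children, so the witness is discovered one network round trip at a time; that detail matters below.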

Block Witness Size

For simplicity, we refer to the block witness size as the count of the data elements required to execute the block. A data element might be a single node from the main account trie, a node from a contract storage trie, or the complete bytecode of a given contract.

Analyzing the witness size is a crucial component of understanding Beam Sync performance. Fast Sync must download the full state before executing its first block. Beam Sync only needs to download a single block’s witness. If a block witness were one third of all the state, then Beam Sync would run at most three times faster than Fast Sync.

So the obvious next step is to see how big mainnet witnesses are, in practice. This isn’t the final word, but early experimentation suggests that ~3,000 trie nodes is a reasonable estimate for the high end of witness size (90th percentile). In contrast, the total state on mainnet is more than 300 million trie nodes.

Beam Sync Speedup

So let’s define a new metric: “launch-to-execute” time. This is how much time it takes between firing up a node with an empty database and finishing full import of a recent block.

If Beam Sync only needs to download 3k nodes, and Fast Sync needs to download 300M, then we can define an upper limit on how much faster Beam Sync can run: a 100,000x launch-to-execute speedup on mainnet!
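
The back-of-the-envelope arithmetic behind that headline number, using the estimates above:

```python
total_state_nodes = 300_000_000  # rough total mainnet state, in trie nodes
witness_nodes = 3_000            # ~90th percentile block witness size

print(total_state_nodes / witness_nodes)  # 100000.0 -> the 100,000x ceiling
```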

Beam Sync doesn’t actually run 100,000x faster, for many reasons. For example:

  1. State download doesn’t make up the whole launch process. For example, we need to download headers to verify that we are on the longest chain.
  2. We determine the block witness on-demand, which means we don’t know what state to ask for next until we receive the previous one. So we must ask peers for trie nodes one at a time. In contrast, Fast Sync can ask peers for up to 384 nodes at a time. This makes Beam Sync very sensitive to peer latency.
  3. Bootstrapping to find high-quality, low-latency peers takes some time. We are at the mercy of the RNG gods to find good peers.

Unlike Fast Sync, Beam Sync continues to download the state on blocks subsequent to the launch block, which slows down import time. If you have built some intuition at this point, you might notice that it would be especially problematic if the average witness collection takes longer than the average block generation time.

Beam Sync Lag

The first block witness will almost certainly take longer than a single block generation. Similarly, we can expect the witness to arrive slowly for several blocks after the first. We refer to this situation as simply “lag,” roughly the time gap between the last imported block and the block at the tip of the chain.

The lag of witness collection may start to compound, and you might find that your beam-syncing node is lagging by, say, 5 minutes. At that point, the newest block your local node has imported was generated 5 minutes ago. That means RPC requests to your node for a current account balance will return the balance as of 5 minutes ago.
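
Since every block header carries a unix timestamp, a node can estimate its own lag with a one-liner. A hypothetical sketch, not Trinity’s actual metric:

```python
import time


def lag_seconds(local_head_header):
    """How stale is the newest block we have fully imported?"""
    # Wall-clock time minus the local head's timestamp approximates
    # our distance behind the tip of the chain.
    return time.time() - local_head_header.timestamp
```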

In anecdotal testing, it’s very common to see lag vary widely, from 1 to 20 minutes. Luckily, we have some tricks up our sleeves to recover from laggy situations. In fact, we can generally recover better the further lagged we are, which can cause a lot of variance in the lag: falling behind and catching up repeatedly.

One reason we can catch up faster while lagging is that we can look ahead to generate witnesses for multiple blocks at the same time. These future blocks are only available if you are lagging. Of course, if it perpetually takes more than the block time to collect the block data required, then by definition you fall increasingly behind the head of the actual chain. We would prefer that this never happens, but we need to plan for it.

Beam Sync Pivot

Just as with Fast Sync, if you fall too far behind the head, you might start asking for data that peers will no longer be willing to serve you. Pivoting is the primary mechanism for recovering from that.

Pivoting in Beam Sync is almost the same as in Fast Sync. Your node chooses a series of blocks to skip over and picks a new launch header near the chain tip. At this point, Beam Sync starts over again. The node isn’t starting completely from scratch; it still has all the data from the previous sync.

Whether you are beam-syncing or fast-syncing, you pay a cost if your node is forced to pivot. Pivoting means more data downloaded, and that there is some series of blocks whose execution you didn’t locally verify. The good news is that as long as you don’t lag more than ~30 minutes, then Beam Sync shouldn’t ever need to pivot. In contrast, it’s practically impossible to Fast Sync mainnet without pivoting several times.
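
A pivot trigger might look something like the sketch below; the 120-block window comes from the geth default mentioned earlier, while the safety margin is an arbitrary illustrative choice:

```python
SERVE_WINDOW = 120   # roughly how far back peers serve state (geth default)
SAFETY_MARGIN = 20   # hypothetical buffer: pivot before peers cut us off


def should_pivot(tip_number, local_head_number):
    """Pivot once our lag approaches the window that peers will serve."""
    lag_in_blocks = tip_number - local_head_number
    return lag_in_blocks > SERVE_WINDOW - SAFETY_MARGIN
```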

Okay, you say, enough theory. Where is this thing? Can I play with it?

Beam Sync on Trinity

Prototype Released

A new alpha version of Trinity was released last week. The release includes a prototype of Beam Sync that works on high-end hardware (anecdotally).

We have been testing sync against mainnet, and typically can execute the first block within an hour. It’s not unusual at all to execute within 5 minutes! This ignores the time to download headers, and the occasional lack of good peers. Edit: the increase of the gas limit from 8M to 10M seems to have increased average lag. Istanbul is likely to decrease lag, because the gas costs of loading state data are going up.

The Trinity client is in alpha. The latest version still has plenty of bugs: Sync tends to crash out after a day or two. Even if it doesn’t crash, it often falls behind enough that a pivot is required. Installing Trinity requires some extra work, and the shell output is a mess. So Trinity is only ready for developers and researchers who are curious and ready to get their hands dirty.

The pending issues are all typical “bugs on the backlog” from everyday development. At this point, there is no concern about whether Beam Sync is a pipe dream. This confidence in its feasibility is new: as recently as a month ago, we thought there might be unknown deal-breakers in the approach!

Remaining Work

There is still much to do, beyond the basic debugging & implementation work.

Trinity doesn’t yet implement backfill of state or download old events, transactions, and receipts from before the launch header. The only way to pivot right now is to restart Trinity. The minimum machine specs for Beam Sync are unknown (we welcome help in collecting this data!).

All of this is under active development, and is only one project of many happening on Trinity. Thanks to the Ethereum Foundation for funding this work!

What’s New Here?

Beam Sync, like so many ideas, builds heavily on previous work. We didn’t invent the idea of skipping execution of old blocks by downloading recent state; that was Fast Sync. We didn’t invent the idea of executing a block against a witness instead of the full state; credit for that goes to Stateless Clients.

What’s new is the combination of the two: we use a guided Fast Sync to simulate a Stateless Client at first, and then fade into a Full Sync. We drop one benefit of Stateless Clients, low disk usage, but we get to keep the benefit of quick execution of a recent block. By saving the input and output state locally, we mitigate a critical concern about Stateless Clients: a risk of being DoS’d by giant witnesses. The longer that Beam Sync runs, the less effective those DoS attacks are.

Beam Sync provides better feedback and quicker results when running an Ethereum node locally. We believe this is an important step to bringing back the excitement and pleasure of running your own node!

I’d love to hear what you want to read about next! Two leading options from private polling are: “How Beam Sync makes the whole network more healthy and resilient to an onslaught of leeching peers” and “How to supercharge Beam Sync: network protocol upgrades for even bigger wins”
