Eth 2.0 Dev Update #46 — “Slasher running & Integrated fuzz testing”

Terence Tsao
Mar 20 · 8 min read

Our biweekly updates written by the entire Prysmatic Labs team on the Ethereum Serenity roadmap.

🆕 New Testnet Updates

🔹Slasher running in production catching slashable offenses

Our slasher implementation is now working in production, running in our cloud deployment in order to catch all slashable attestation events happening in the Prysm testnet. That means if your validator is wired up to cast a conflicting double vote or a surround vote, we'll catch you, and you'll get forcefully ejected from the validator set and gradually slashed! This was a huge milestone for the team and something we will continuously improve throughout the next testnet restart. We were able to simulate and catch another slashable event, this time a surround vote. At the moment, however, our slasher only detects slashable events as they happen and does not perform historical chain detection. That is, if a slashable event happened at slot 1, before the slasher came online, our current slasher won't catch it. Ivan from our team is now working on plugging in historical chain detection and getting it functional.

Monitoring slasher through Grafana
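To make the two slashable offenses concrete, here is a minimal sketch of the detection conditions from the eth2 spec. The `attestationData` type and function names are simplified stand-ins, not Prysm's actual API; a real slasher compares full attestation data and signatures, not just epochs.

```go
package main

import "fmt"

// attestationData holds just the checkpoint epochs relevant to slashing;
// a hypothetical simplification of the full spec type.
type attestationData struct {
	Source uint64 // source checkpoint epoch
	Target uint64 // target checkpoint epoch
}

// isDoubleVote: two distinct attestations voting for the same target epoch.
func isDoubleVote(a, b attestationData) bool {
	return a != b && a.Target == b.Target
}

// isSurroundVote: attestation a surrounds b when a's source precedes b's
// source and a's target follows b's target; this is slashable.
func isSurroundVote(a, b attestationData) bool {
	return a.Source < b.Source && b.Target < a.Target
}

func main() {
	older := attestationData{Source: 2, Target: 5}
	newer := attestationData{Source: 3, Target: 4}
	fmt.Println(isSurroundVote(older, newer)) // prints true: (2,5) surrounds (3,4)
}
```

Catching these as they happen only requires comparing each incoming attestation against a validator's stored history, which is why live detection shipped before full historical chain scanning.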

📝Merged Code, Pull Requests, and Issues

🔹Able to build all Prysm libraries + dependencies from source, including low-level cryptography

One of the key principles at Prysmatic Labs is to create builds that are as hermetic as possible. What this means is that building Prysm should be exactly reproducible across different machines. Prysmatic Labs achieves this goal by removing dependencies on system-level libraries and building everything from original source code. This way, we can ensure that the same version of a library or dependency is used across builds. For example, building Prysm from source downloads a specific version of the Go compiler, whether or not the host machine already has one installed! One of our most recent challenges was to build our BLS cryptography library, which is primarily C++, within our Go project. While we have been using precompiled library archives for BLS, some differences across Linux OS versions have caused issues that could only be resolved by compiling from source. If you're on Linux, try building Prysm with --config=llvm to build BLS entirely from source files.

🔹Fuzz testing Prysm’s core functions is now working

Continuous fuzz tests running in our CI pipeline

Our friends at Sigma Prime have been working hard on differential fuzz testing of core functionality across all of the Ethereum 2 implementations. Inspired by their achievements, we've added continuous fuzz testing as part of our continuous integration pipeline. At present, we have 7 fuzz targets which are tested with coverage-based random inputs! In other words, the fuzz testing platform uses a set of valid data as a seed corpus, mutates the inputs slightly, executes the target with the new input, and checks whether a new code path was discovered. This type of testing can discover unknown edge cases which crash the application. Just this week, fuzz testing found two fatal bugs within Prysm's core methods. One of these bugs involved populating a certain cache with inputs that would lead to an overflow. This issue is truly something we could not have foreseen and would have been extremely difficult to reproduce without the input data. As we advance towards mainnet, we will be embracing fuzz testing for as much of the application as possible, especially mission-critical methods which are exposed to unknown inputs.
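The corpus-driven loop above is easiest to see through the shape of a fuzz target. The sketch below uses the go-fuzz style entry point (a `Fuzz(data []byte) int` function); `decodeCommitteeIndex` is a hypothetical parser standing in for the real targets, which exercise things like SSZ decoding and state transition code.

```go
package main

import "fmt"

// decodeCommitteeIndex is a hypothetical stand-in for a Prysm core method:
// it reads a little-endian uint64 and rejects inputs that are too short.
func decodeCommitteeIndex(data []byte) (uint64, error) {
	if len(data) < 8 {
		return 0, fmt.Errorf("input too short: %d bytes", len(data))
	}
	var v uint64
	for i := 0; i < 8; i++ {
		v |= uint64(data[i]) << (8 * uint(i))
	}
	return v, nil
}

// Fuzz is a go-fuzz style entry point: the fuzzer mutates corpus inputs and
// calls this repeatedly. Returning 1 tells the fuzzer the input was
// interesting and worth keeping in the corpus; returning 0 discards it.
// Any panic inside the target is reported as a crash with the triggering input.
func Fuzz(data []byte) int {
	if _, err := decodeCommitteeIndex(data); err != nil {
		return 0
	}
	return 1
}

func main() {
	fmt.Println(Fuzz([]byte{1, 0, 0, 0, 0, 0, 0, 0})) // prints 1
}
```

The key property is that a crash comes with the exact mutated input that triggered it, which is what made the cache-overflow bug reproducible.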

🔹Revamping of beacon state management

Currently in our design, a node prunes all states prior to the finalized checkpoint. While this design was easy to implement, it's neither fully desirable nor sustainable: the node no longer stores historical states and has no capability to retrieve any state prior to the finalized checkpoint. That limitation comes with the following downsides.

First, it limits debuggability: a user who wants to retrieve historical data to debug cannot do so. Second, it increases the disparity between regular nodes and archival nodes; we have seen complaints when archival-node functionality was expected of a regular node. Third, it prevents a node from performing state sync, where having access to some amount of historical state will be important. Finally, it increases the manual testing burden, since both regular and archival nodes must be covered.

The current proposal is to implement a wrapper service on top of the state DB that efficiently splits stored states into two sections, hot and cold. In both sections, full BeaconState objects are only stored periodically, and intermediate states are reconstructed by quickly replaying blocks on top of the nearest stored state.

When replaying blocks, we omit redundant signature checks and SSZ hashing calculations. For the hot section, the full states that blocks are replayed from are referred to as epoch boundary states; the frequency at which the hot section stores a BeaconState is fixed at one per epoch. For the cold section, the full states that blocks are replayed from are referred to as archived points; the frequency at which the cold section stores a BeaconState is configurable via CLI flags.
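The replay mechanism can be sketched as follows. The types and helpers here are hypothetical simplifications (the real BeaconState has dozens of fields and the real transition function is far richer); the point is just the shape of the regeneration loop.

```go
package main

import "fmt"

// Toy stand-ins for Prysm's BeaconState and beacon block.
type BeaconState struct{ Slot uint64 }
type Block struct{ Slot uint64 }

// applyBlock advances the state by one block. In the real replay path, the
// transition skips redundant signature checks and SSZ hashing, since the
// blocks were already fully verified when first processed.
func applyBlock(st BeaconState, b Block) BeaconState {
	return BeaconState{Slot: b.Slot}
}

// regenerateState rebuilds the state at targetSlot by replaying blocks on top
// of the nearest saved full state at or before it: an epoch boundary state in
// the hot section, or an archived point in the cold section.
func regenerateState(base BeaconState, blocks []Block, targetSlot uint64) BeaconState {
	st := base
	for _, b := range blocks {
		if b.Slot > targetSlot {
			break
		}
		if b.Slot > st.Slot {
			st = applyBlock(st, b)
		}
	}
	return st
}

func main() {
	base := BeaconState{Slot: 32} // saved full state at an epoch boundary
	blocks := []Block{{33}, {34}, {35}, {36}}
	fmt.Println(regenerateState(base, blocks, 35).Slot) // prints 35
}
```

The archived-point frequency directly controls how long this loop runs in the worst case, which is the space/time trade-off described below.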

This proposal gives end users the flexibility to explore space/time trade-offs. Frequent archived points occupy more disk space but accelerate the loading and computing of historical states, whereas infrequent archived points occupy less space but slow down the loading and computation of historical states.

Follow the remaining issues for bug follow-ups and further optimizations.

We want to give a shout-out to Handelaar and Butta from our Discord. They have been on the front line with me testing out this feature in the wild. While it has been buggy, we are seeing good results! Thanks guys! ❤️

🔹Separating initial sync block fetching from block processing

This week we merged #5096 into master, which marks major headway in the initial sync refactoring. This update introduced a concurrent model of syncing, in which blocks are processed as they are being fetched. A number of issues were addressed as well (getting stuck on a single block, or looping over a range of blocks without any progress).

The updated sync process is not yet complete, as it still has a number of quirks and inefficiencies; that's why we have put it behind a flag, i.e. by default we still sync using the previous implementation. During the next week, more polishing will be done on the algorithm to improve correctness, performance, and memory footprint.
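The concurrent model can be sketched as a fetch/process pipeline over a channel. This is a deliberately minimal illustration, not Prysm's actual sync code: the real fetcher requests block batches from multiple peers over p2p and handles retries and reordering.

```go
package main

import "fmt"

type Block struct{ Slot uint64 }

// fetchBlocks simulates a fetcher goroutine streaming blocks out on a channel
// as they arrive, instead of buffering the whole range before returning.
func fetchBlocks(from, to uint64) <-chan Block {
	out := make(chan Block)
	go func() {
		defer close(out)
		for s := from; s <= to; s++ {
			out <- Block{Slot: s}
		}
	}()
	return out
}

func main() {
	processed := 0
	// Processing runs concurrently with fetching: each block is handled as
	// soon as it is received, overlapping network and CPU work.
	for b := range fetchBlocks(1, 64) {
		_ = b // the real node applies the state transition here
		processed++
	}
	fmt.Println(processed) // prints 64
}
```

Decoupling the two stages is also what makes the stall modes (stuck on one block, looping on a range) easier to detect and recover from, since fetching progress and processing progress can be tracked independently.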

🔹Dynamic attestation subnets complete

A huge blocker for large-scale multiclient testnets is reducing the network spam of validator attestations. If there are 100k validators, your node will be receiving a huge number of attestations per second via gossipsub p2p. Instead, we should only subscribe to the unaggregated attestations our validators care about in order to put them into blocks. We can do this by splitting attestations up by the committees our validators are part of. That is, if our validators are part of committees 3 and 4, we should only care about listening for unaggregated attestations for those committees. To do this, we can subscribe to dynamic subnets for attestations. We have completed this feature and it is now behind a feature flag, --enable-dynamic-committee-subnets. It will take precedence in our next testnet restart.

Imagine the biggest flame (1GB) above all goes away
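The mapping from committees to subnets can be sketched as below. The spec fixes ATTESTATION_SUBNET_COUNT at 64; the function name and signature here are hypothetical, but the modulo mapping is the one the spec defines (`compute_subnet_for_attestation` reduces committee assignments into this fixed subnet range).

```go
package main

import "fmt"

const attestationSubnetCount = 64 // ATTESTATION_SUBNET_COUNT in the spec

// subnetsToJoin maps the committees our validators are assigned to onto the
// gossipsub attestation subnets we must subscribe to, deduplicated, so the
// node ignores unaggregated attestations from all other subnets.
func subnetsToJoin(committees []uint64) []uint64 {
	seen := make(map[uint64]bool)
	var subnets []uint64
	for _, c := range committees {
		s := c % attestationSubnetCount
		if !seen[s] {
			seen[s] = true
			subnets = append(subnets, s)
		}
	}
	return subnets
}

func main() {
	// Committees 3 and 4 from the example; committee 67 wraps onto subnet 3.
	fmt.Println(subnetsToJoin([]uint64{3, 4, 67})) // prints [3 4]
}
```

Since committee assignments shuffle every epoch, subscriptions have to be updated dynamically, which is what the feature flag enables.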

P2P discovery using attestation subnets

Now that we dynamically subscribe only to the gossip topics for unaggregated attestations our nodes care about, we have to ensure we have peers on those subnets. Nishant from our team took the initiative of modifying our discovery protocol, discovery v5, to search for peers that advertise the relevant attestation subnets within their ENR record, a unique identifier for p2p peers that contains important metadata for eth2. We have wrapped up this feature and tested it locally on a development chain with beacon nodes that could successfully discover each other based on the topics they cared about. You can see more in PR #4989.
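Conceptually, discovery filters candidate peers by a subnet bitvector in their ENR (the networking spec calls this field `attnets`, a 64-bit bitvector). The sketch below shows that filtering step with hypothetical types; the real implementation works on actual discv5 records and node IDs.

```go
package main

import "fmt"

// peerRecord is a hypothetical stand-in for a discovered peer's ENR data:
// the attnets field is a 64-bit bitvector marking subscribed subnets.
type peerRecord struct {
	ID      string
	Attnets [8]byte
}

// hasSubnet checks whether the peer advertises the given attestation subnet.
func hasSubnet(p peerRecord, subnet uint64) bool {
	return p.Attnets[subnet/8]&(1<<(subnet%8)) != 0
}

// filterPeers keeps only the discovered peers on the subnet we need.
func filterPeers(peers []peerRecord, subnet uint64) []string {
	var ids []string
	for _, p := range peers {
		if hasSubnet(p, subnet) {
			ids = append(ids, p.ID)
		}
	}
	return ids
}

func main() {
	peers := []peerRecord{
		{ID: "peerA", Attnets: [8]byte{0b00001000}}, // advertises subnet 3
		{ID: "peerB", Attnets: [8]byte{0b00000001}}, // advertises subnet 0
	}
	fmt.Println(filterPeers(peers, 3)) // prints [peerA]
}
```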

🔜 Upcoming work

🔹Optimizing attestation aggregation in prysm

The purpose is to aggregate validator votes (attestations) efficiently, to maximize validators' profitability. The difficulty arises from the fact that optimal aggregation is a known NP-hard problem, so we need to come up with a heuristic that is both efficient and performant.

Currently, we use the most naive strategy and are looking to improve it significantly in preparation for mainnet. There are a number of proposed strategies, and over the course of the next few weeks we need to decide which strategy to use and come up with a benchmarked implementation.
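To illustrate what a naive strategy looks like: two attestations for the same data can be aggregated only if their participation bitlists don't overlap, so a simple greedy pass merges each attestation into the first compatible aggregate. This is a toy sketch (an 8-validator committee packed into one byte), not Prysm's actual code or its eventual heuristic.

```go
package main

import "fmt"

// attestation carries a participation bitfield for a toy 8-validator
// committee; real bitlists are arbitrary length.
type attestation struct{ bits uint8 }

// overlaps reports whether two attestations share any participating validator,
// in which case their BLS signatures cannot be aggregated together.
func overlaps(a, b attestation) bool { return a.bits&b.bits != 0 }

// naiveAggregate greedily merges each attestation into the first aggregate it
// doesn't overlap with. Better heuristics approximate the NP-hard problem of
// covering all participants with as few aggregates as possible.
func naiveAggregate(atts []attestation) []attestation {
	var aggs []attestation
	for _, a := range atts {
		merged := false
		for i := range aggs {
			if !overlaps(aggs[i], a) {
				aggs[i].bits |= a.bits
				merged = true
				break
			}
		}
		if !merged {
			aggs = append(aggs, a)
		}
	}
	return aggs
}

func main() {
	atts := []attestation{{0b0001}, {0b0010}, {0b0011}, {0b0100}}
	fmt.Println(len(naiveAggregate(atts))) // prints 2
}
```

Greedy ordering matters: a different insertion order can yield more or fewer aggregates, which is exactly why the candidate strategies need to be benchmarked against each other.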

🔹Testnet restart with spec version v0.11

We have been heads-down for the last few weeks revamping Prysm for the latest spec releases: first v0.10.1, then v0.11.0. Given that both releases have breaking changes that make them incompatible with the current testnet, we will perform a testnet restart once everything is done and well tested. We are super excited about the new testnet restart, as the current testnet has been running for over three months.

v0.11.0 is also the current target for the multiclient testnet. This release was significant as it represents the post-audit phase 0 spec. The two releases contain limited consensus changes; they mostly contain networking changes with new features, DoS protections, and simplifications.

Follow the remaining v0.11.0 todos in #5119 and the top-priority testnet issues in #4781.


🔹Eth2 Research Explainer Paper Published, With Formal Proofs

After a significant amount of work on behalf of the eth2 research team, a formal paper on eth2 known as “Combining GHOST and Casper” has been published on arXiv. We are thrilled to see such a great write-up of all the work that went into creating the specification for eth2, and we believe this will open up contributions to eth2 from the wider research sphere.

Interested in Contributing?

We are always looking for devs interested in helping us out. If you know Go or Solidity and want to contribute to the forefront of research on Ethereum, please drop us a line and we’d be more than happy to help onboard you :).

Check out our contributing guidelines and our open projects on GitHub. Each task and issue is grouped into the Phase 0 milestone along with the specific project it belongs to.

As always, follow us on Twitter or join our Discord server and let us know what you want to help with.

Official, Prysmatic Labs Ether Donation Address


Official, Prysmatic Labs ENS Name


Prysmatic Labs

Implementing Ethereum 2.0 - Full Proof of Stake + Sharding
