A technical overview of Ethereum’s emergent Optimistic Rollup ecosystem
The following report was commissioned by MolochDAO. Input/review was provided by John Adler, as well as a number of others; however, all opinions expressed here, subtextual or otherwise, represent my own. Additionally, the assessment of projects should be taken as a snapshot of their status as of the time of this report (February 2020), and not as strong commitments. Things can change, and that’s okay.
A specter haunts the Ethereum scaling community — the specter of Optimistic Rollup.
In the latter half of 2019, Optimistic Rollups (hereby ORU or Optiroll) emerged as Ethereum’s hot new layer 2 scaling protocol. This piece aims to give a snapshot of the nascent ORU development ecosystem as of the time of publication (February 2020). We’ll first situate the theoretical properties of ORUs in the context of the layer 2 design space, then compare different projects’ approaches and technical design decisions, along with the various trade-offs therein. The nine projects include:
- Fuel Labs
- Nutberry
- Whitehat, Cellani, Lim (hereby “WCL”)
- Offchain Labs
- Interstate Network
- Optimism
- Celer
- IDEX
- ANON (a project choosing to remain pseudo-anonymous)
You can review their differentiators in this Google sheet. Finally, we’ll delve into some of the more qualitative questions around how these projects envision their own role in the space as it unfolds.
Background & Theory
Open, permissionless blockchains, for all their celebrated virtues, come with a major catch: all full nodes in the network must witness and validate every transaction the system processes; the sheer inefficiency of this (relative to centralized digital payment systems, say) is the heart of cryptocurrency’s much-discussed scaling challenge.
Layer 2 protocols represent one category of approaches to alleviating this burden. They do this (in some way, shape, or form) by shifting the burden of global validation of all transactions by all nodes to local validation of some subset of transactions by only the interested parties (those looking to secure their own funds, say). Crucially, they do this while preserving the trustless security model of the base layer without relying on resources beyond software the user herself runs.
Data Availability: Working Around it
In the early stages of Ethereum layer 2 research and development, researchers tended to operate under the implicit presumption that “alleviating the validation burden” of the base layer was equivalent to keeping (some) transaction data off of the blockchain entirely. (See Josh Stark’s piece “Making Sense of Ethereum’s Layer 2” from early 2018 for a nice overview of the thinking at the time).
Cryptocurrency protocols have a built-in economic property of incentivizing block producers to instantly share blocks far and wide (barring selfish-mining edge cases); blockchain base layers are sometimes referred to as “data availability engines.” Ensuring that data be (very) public guarantees it can be validated, and makes it nearly impossible for an invalid transaction to sneak through.
Thus, in a layer 2 scenario, if data is to be kept off-chain entirely, it follows that we no longer have this data-availability assurance; data could potentially be withheld from those who care about its validity. We therefore must figure out how to somehow ensure that no invalid state updates occur, even under these dire conditions. This data withholding problem is always the hardest, worst-case scenario for a layer 2 system; if you’re trying to ascertain the truth, you can’t be in a worse position than having no information at all.
If our definitions are loose enough, all solutions to this data withholding problem can fall into two distinct categories: channels and Plasma. If two parties are in a channel, both parties need to give unanimous consent for an off-chain update to be considered valid. Thus, if Alice chooses to withhold data from Bob, Alice is stuck at the latest state; data withholding does her no good. Plasma constructions, by contrast, do not have this requirement for unanimous consent. It directly follows that in Plasma, an invalid update can occur without a user having any direct evidence. The Plasma guarantee is that even in this case, users still have the requisite off-chain data to prove and secure ownership of their assets via an interactive dispute game.
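To make the channel rule concrete, here is a minimal Python sketch (toy data structures, no real cryptography; a “signature” is just a party’s name) of why data withholding doesn’t help an attacker in a channel: without unanimous consent, the channel simply remains at the last co-signed state.

```python
# Minimal model of a two-party channel's unanimous-consent rule.
# Real channels use digital signatures; here a "signature" is just a
# party's name recorded against a state, which suffices to show the logic.

class Channel:
    def __init__(self):
        self.latest = {"nonce": 0,
                       "balances": {"alice": 5, "bob": 5},
                       "signed_by": {"alice", "bob"}}

    def propose_update(self, new_state, signatures):
        # An update is only valid if BOTH parties signed it.
        if signatures != {"alice", "bob"}:
            return False
        if new_state["nonce"] <= self.latest["nonce"]:
            return False
        self.latest = {**new_state, "signed_by": signatures}
        return True

ch = Channel()
# Alice tries to push an update Bob never signed (she withheld it from him):
ok = ch.propose_update({"nonce": 1, "balances": {"alice": 9, "bob": 1}}, {"alice"})
assert not ok and ch.latest["nonce"] == 0  # Bob stays at the last co-signed state, safely
# A co-signed update advances the channel:
ok = ch.propose_update({"nonce": 1, "balances": {"alice": 6, "bob": 4}}, {"alice", "bob"})
assert ok and ch.latest["nonce"] == 1
```

Withholding the co-signed update from Bob gains Alice nothing: the on-chain contract will only ever settle at a state both parties signed.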
In terms of actual results, channels have been the first layer 2 constructs to deliver. They are well understood, with (arguably) no unsolved fundamental research challenges, and indeed, we currently see channels live on both Bitcoin and Ethereum. Channels have some properties, like instant transactions, that are quite useful, especially for certain application-specific needs. As a more general purpose scaling solution, however, they prove somewhat limited; by their nature, channels are somewhat siloed from each other, and trustlessly connecting them gets capital inefficient and/or gets restricted by paths of available liquidity. (I’ve written about these limitations and some strategies for surmounting them in the context of Bitcoin’s Lightning Network here.)
Plasma offers the promise of a broader-purpose, sidechain-esque scaling solution, where participants can more freely interact with each other. Its developmental trajectory, however, has been rockier. A detailed rundown of the technical challenges with Plasma is outside of the scope of this piece. But in (very) brief, the property of lacking guaranteed data availability makes it difficult to preserve various properties at once, namely: supporting arbitrarily denominated payments, minimizing validation/storage requirements for users, avoiding mass exit scenarios, and supporting smart-contract logic. Mitigations of these problems exist, but solutions to one obstacle often come at the cost of exacerbating another. In short, it’s hard to get quite right, and even with workable enough constructions, their complexity makes implementation difficult and slow. I’ve written about the logical progression of Plasma Cash and its variants, and the challenges arising from them here and here.
The troubles with Plasma led some to recently go so far as to declare its demise. Even if one believes this claim to be premature (as I do), there’s little doubt that Plasma’s complexities proved more difficult than anticipated, and that the space has been slower to deliver results than initially hoped. With Plasma stalling, the Ethereum community was thirsty for something with analogous non-custodial sidechain properties, but more ready for primetime in terms of research and implementation status. Which, at last, brings us to Optimistic Rollup.
Data Availability: Giving In
One can spot traces of the construction we now refer to as Optimistic Rollup in all sorts of prior proposals: Shadowchains, Coinwitness, bulk validation with ZK-SNARKs (now called ZK-Rollup), this student presentation on Arbitrum from early 2015, etc.
The fundamentals of the protocol as we now understand it were articulated by John Adler and Mikerah Quintyne-Collins (aka “Bad Crypto Bitch”) as “Merged Consensus.” The team previously named Plasma Group (now Optimism, covered below) described similar principles in a blog post, framing it in their analysis of optimistic layer 2 game semantics, and gave it the term Optimistic Rollup, which, for whatever reason, is the name that finally stuck. And here we are!
Optimistic Rollup takes the framing described in the previous section and turns it sideways; instead of trying to preserve non-custodial-ness in the context of data withholding, ORU takes a more direct approach of simply requiring that transaction data be published on chain — more specifically, enough data such that anyone running an Ethereum node can reconstruct the ORU’s state. The scaling benefit here is that layer one simply has to witness this data and Merklize it down to a block root, but need not execute its transactions; the computation (ideally) is executed only on layer 2. Accordingly, the transaction data is published on the Ethereum chain as calldata, and is not stored in the state; being that state growth and computation costs are (arguably) the core bottlenecks to Ethereum scaling, this is no small upside.
As with Plasma, the fact that the base layer isn’t validating transactions directly means that it’s possible for an invalid transaction to slip through. In the case of ORU, any party interested will witness this invalid update, and then demonstrate this to the base layer via a fraud proof, which reverts the fraudulent block and any subsequent ones. Once enough time passes without a fraud proof being submitted, the rollup block is finalized, and withdrawals initiated from those blocks can be completed. To disincentivize griefing the network by deliberately publishing invalid blocks, block proposers post a bond, which gets slashed if and when fraud is proven. The precise way that this fraud proofing is handled is the heart of any particular ORU construction.
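The lifecycle just described (propose with a bond, wait out a challenge window, finalize or revert) can be sketched in a few lines of Python. This is a toy model, not any project’s contract: block “validity” is reduced to a boolean flag, fraud proving to a lookup, and the bond and challenge-period values are purely illustrative.

```python
# Toy ORU lifecycle: propose (with bond) -> challenge window -> finalize or revert.
# All names and constants are hypothetical.

BOND = 10               # illustrative bond, slashed on proven fraud
CHALLENGE_PERIOD = 100  # illustrative window, in L1 blocks

class RollupContract:
    def __init__(self):
        self.blocks = []  # list of {root, proposer, at, valid, bond}

    def propose(self, root, proposer, l1_height, valid=True):
        self.blocks.append({"root": root, "proposer": proposer,
                            "at": l1_height, "valid": valid, "bond": BOND})

    def prove_fraud(self, index):
        # Reverts the fraudulent block AND everything built on top of it,
        # slashing the offending proposer's bond.
        if self.blocks[index]["valid"]:
            return 0  # no fraud; the proof fails and nothing happens
        slashed = self.blocks[index]["bond"]
        self.blocks = self.blocks[:index]
        return slashed

    def finalized_root(self, l1_height):
        # Latest block whose challenge window has fully elapsed.
        for b in reversed(self.blocks):
            if l1_height - b["at"] >= CHALLENGE_PERIOD:
                return b["root"]
        return None

c = RollupContract()
c.propose("root_1", "op", l1_height=0)
c.propose("root_bad", "op", l1_height=10, valid=False)
c.propose("root_3", "op", l1_height=20)    # built on fraud, reverted along with it
assert c.prove_fraud(1) == BOND            # bond slashed
assert [b["root"] for b in c.blocks] == ["root_1"]
assert c.finalized_root(l1_height=150) == "root_1"
```

Note how reverting block 1 also discards block 2: honest chain state never builds on proven fraud.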
Compared to Plasma, ORU has one fundamental and unavoidable downside, which is its relatively lower scalability potential. Given that on-chain data is directly proportional to data in ORU blocks, ORU constructions are bottlenecked by the amount of data that the base layer allows. Otherwise, ORU offers all sorts of benefits, including:
- Easier / broader smart contract support
- Easier arbitrary payment denomination support
- Permissionless block production
- Simpler exit game mechanics
- Relative simplicity of implementation
Although as we’ll see, even the factors above trade off against each other across different ORU constructions. (For more on the motivations behind ORU, see John Adler’s The Why’s of Optimistic Rollup.)
For this report, nine projects utilizing ORU design patterns were interviewed, with one project choosing to remain pseudo-anonymous as of publication (hereby referred to as “ANON”).
Only projects that fall within Optimistic Rollup confines were considered, which is to say, they need be both “Optimistic” — involve some assumption of the form “assume valid unless/until a fraud proof is submitted” (i.e., not ZK-rollup) — and involve a “Rollup” — i.e., enough data is published on-chain for any observer to reconstruct the state and detect invalidity (i.e., not Plasma). Most projects discussed fall strictly within these parameters, with the only exception being IDEX 2.0, the details of which are discussed below.
Various other projects that have worked on Plasma or Plasma-adjacent constructions are in the early stages of looking at building ORU, including Matic, LeapDAO, and Cryptoeconomics Lab.
Smart Contract Support and Fraud Proof Interactivity
A core differentiator between ORU projects is the extent of their support for smart contract scripting and, in turn, the nature of the verification and fraud proofs that they require. Five of the nine projects investigated are implementing full Ethereum Virtual Machine functionality into their rollup, where the rollup sidechain has Solidity smart contract support (roughly) on par with that of Ethereum’s base layer, while the remaining projects deliberately support more limited, constrained functionality.
Broadly speaking, the benefit of supporting the full EVM, beyond the smart contract features themselves, is technical consistency between the rollup and the parent chain, promising easier integration with infrastructural tooling and an easier transition for developers accustomed to using Solidity-compiled smart contracts on layer 1. Conversely, projects with more constrained functionality offer the benefit of cheaper (in gas) fraud proofs, easier verification, opportunities to directly optimize implementations to the specific use-case, and general simplicity.
Full EVM: Layer 2 Virtual Machines
For layer 2 smart contract computation to remain trustless, there must be a fallback case where such computation — in some form — gets executed on layer 1. It follows that for an ORU to support the full EVM, the second layer needs its own virtual machine that can be executed within the base-layer EVM itself. Creating a performant implementation of this is not trivial; simply put, the EVM was not designed to run itself. You can get a sense of some of the challenges in this EIP discussing the possibility of modifying the EVM to incorporate this functionality directly, and this overview of this issue in the context of Plasma by Kelvin Fichter.
All five full-EVM projects thus created their own modified version of the EVM for layer 2 execution. To ensure reliable and predictable fraud proofing, the VM’s execution must be deterministic; i.e., the exact conditions in which fraud was initially detected must be reproducible at the time of the proof. Thus, non-deterministic operations, like checks for block height, difficulty, and timestamps, for example, must be modified or removed entirely. Likewise, opcodes for contract creation or destruction need to be removed, as this logic is handled specially. As a result, porting a contract written for layer 1 onto an ORU potentially requires some minor modifications to its Solidity code before deployment to an ORU chain.
All full EVM ORUs share some basic commonality in how they facilitate fraud proofs: the state of the ORU chain is regularly serialized and committed, as is a hash of the computational operations involved in executing a state transition. (In all implementations, the burden of generating and verifying these state-root commitments falls onto the operator, not the users). Fraud proofs amount to somehow using this data to show that the committed steps do not, in fact, properly transform the initial state to the final state.
The main differentiator among full EVM implementations is their level of interactivity for handling these fraud proofs. An ORU, by its very nature, must include enough calldata such that fraud is instantly detectable and eventually provable to the parent chain’s consensus; however, the process of carrying out this fraud proof varies by construction.
In a single-round (sometimes, somewhat confusingly, referred to as “non-interactive”) fraud proof scheme, fraud can always be proven in a single transaction, or multiple transactions from a single party. The benefits here are immediate “dispute resolution”, no griefing vector and therefore no bond requirement for fraud provers, and simplicity. In a multi-round fraud proof, fraud is immediately evident, but several interactive steps between the fraud prover and block producer are required. The benefit here is lower (in some cases, much lower) gas cost and perhaps lower on-chain data as well.
Single-round Fraud Proofs
Nutberry, Optimism, and Celer’s ORUs all support single-round fraud proofs. This necessitates that each transaction commit to a serialized post-state root. In Optimism’s model, from which Celer also draws direct influence, if fraud is detected, the fraud prover posts the slots of the initial and final state that the transaction uses, and has the parent chain execute the transaction in full. (This flow closely resembles proposed stateless client models for validation of Ethereum blocks.) Ideally, the transaction should only require minimal state slots to prove fraud, but in principle, a transaction could require reading a large amount of state data. On the off-chance that a fraud proof requires significant data and/or computation such that inclusion in a parent chain block becomes difficult, Optimism supports splitting the proof into multiple transactions (note that both transactions are still submitted by the fraud prover, and thus this additional step doesn’t qualify as an “interaction” in the sense that we’ve used the term).
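As a rough illustration of the single-round flow, the following Python sketch models state as a flat dictionary and a “Merkle root” as a hash over the whole thing; a real implementation would verify individual state slots against Merkle proofs and re-run full EVM execution rather than a toy transfer. All names here are illustrative.

```python
import hashlib
import json

def state_root(state):
    # Stand-in for a Merkle root over the full state.
    return hashlib.sha256(json.dumps(sorted(state.items())).encode()).hexdigest()

def apply_tx(state, tx):
    # A toy transfer; real systems re-run full EVM execution here.
    s = dict(state)
    s[tx["frm"]] -= tx["amount"]
    s[tx["to"]] = s.get(tx["to"], 0) + tx["amount"]
    return s

def verify_fraud_proof(pre_root, claimed_post_root, touched_state, tx):
    # 1. Check the supplied state really matches the committed pre-state root
    #    (in practice: individual slots plus Merkle proofs).
    if state_root(touched_state) != pre_root:
        return False  # bad proof data
    # 2. Re-execute on-chain and compare against the operator's commitment.
    actual_post_root = state_root(apply_tx(touched_state, tx))
    return actual_post_root != claimed_post_root  # True => fraud proven

pre = {"alice": 10, "bob": 0}
tx = {"frm": "alice", "to": "bob", "amount": 3}
honest_post = state_root(apply_tx(pre, tx))
# Operator commits a wrong post-root (credits bob 5 instead of 3):
dishonest_post = state_root({"alice": 7, "bob": 5})
assert verify_fraud_proof(state_root(pre), dishonest_post, pre, tx) is True
assert verify_fraud_proof(state_root(pre), honest_post, pre, tx) is False
```

The whole dispute resolves in one transaction: the prover supplies data, the chain re-executes, and the commitment either matches or it doesn’t.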
Nutberry’s approach is similar, but uses what they refer to as a “gated computing” model for its contract execution; smart contracts are patched to include checkpoints. Transactions, in this model, commit to several, more granular intermediate state roots, potentially requiring more data, but making it possible to execute fraud proofs in smaller chunks.
Multiple Round Fraud Proofs
Multiple-round fraud proofs require several steps of interactive communication between the attester and the fraud prover in order to resolve. The key point here is that, as per ORU’s definition, enough data is published such that any honest participant or observer can determine which party is telling the truth — and thus predict the outcome of the interactive game — from the start.
In the case of Interstate One, transactions, which again include state-root commitments, are published with only a Merkle-root commitment to the corresponding steps of execution, but not the steps themselves. This commitment is in some sense a second-order “optimistic” assumption; if and only if a verifier submits a challenge will the operator post the stack of EVM messages in calldata, which the verifier can then use to succinctly prove fraud. In the worst case, this procedure takes a total of 3 rounds and requires calldata linear in the number of steps in the transaction in question (compared to single-round fraud proofs, where linear data is required in all cases).
Offchain Labs’ Arbitrum Rollup is the farthest along the spectrum of increasing interactivity to minimize on-chain footprint. With Arbitrum, only blocks — not transactions — require state-root commitments; as with Interstate, these also include commitments to hashes of the computation involved. If two parties make conflicting assertions, they enter a dispute, in which they interactively figure out the single computational step that was invalidly executed (if fraud was committed, there must, in principle, be at least one invalid step).
They do this by effectively binary searching through the stack until the fraud is isolated: i.e., the fraud prover asks for the state hash at the point halfway through the stack, then bisects the stack and repeats with the remaining half she knows is invalid. This procedure repeats recursively until only a single, invalid operation remains, which is then executed on-chain. Thus, at worst, this procedure takes log(n) steps (where n is the number of operations) and requires minimal layer 1 computation.
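The bisection game can be simulated in a few lines of Python. This is a simplification of Arbitrum’s actual dispute protocol: the block producer’s claims are modeled as an oracle the honest verifier queries round by round, and machine states are just integers. All names are illustrative.

```python
import hashlib

def h(x):
    return hashlib.sha256(str(x).encode()).hexdigest()

def bisect_dispute(honest_trace, asserted_hash):
    """Find the first invalid step via bisection.

    honest_trace: the verifier's locally computed machine states (length n+1).
    asserted_hash: the producer's claimed state hash at any index, modeled as
    an oracle the producer must answer during the dispute.
    """
    lo, hi = 0, len(honest_trace) - 1
    queries = 0
    # Invariant: claims agree with the honest trace at lo, disagree at hi.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        queries += 1
        if asserted_hash(mid) == h(honest_trace[mid]):
            lo = mid   # agreement: fraud lies in the upper half
        else:
            hi = mid   # disagreement: fraud lies in the lower half
    # Step lo -> hi is the single disputed operation; L1 executes just it.
    return lo, queries

# A 16-step trace where the producer lied from state index 11 onward:
honest = list(range(17))
def producer(i):
    return h(honest[i]) if i < 11 else h(honest[i] + 100)

step, queries = bisect_dispute(honest, producer)
assert step == 10       # step 10 -> 11 is the first invalid transition
assert queries == 4     # log2(16) interactive rounds
```

Because the honest party always knows which half is invalid, she can never lose the search, and layer 1 only ever executes one operation.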
One surprising property of this approach is that while a dispute procedure is underway, the rest of the system need not be put on hold; users and block producers can continue transacting as usual. A “dispute” can be thought of as a bifurcation in the tree of possibilities; honest users can validate, determine which side is the honest branch, and continue building on top of it, knowing which way the dispute will ultimately resolve. Thus, the length of the dispute need not impose latency on the rest of the system. See How Arbitrum Rollup Works for more info.
Application Specific Rollups
The ORU projects that support more constrained functionality seek to optimize around more specific use-cases: token payments, decentralized exchange, private payments, and mass migration. The four protocols profiled are all quite distinct, and will be explored separately.
Fuel (“Bitcoin but on the blockchain”)
Fuel is implementing a payments-focused UTXO-based ORU sidechain, with a data model similar to that of Bitcoin. This design trades off smart contract features in favor of general simplicity and cheaper validation and fraud proofs. In fact, many of the intuitions about trade-offs between Fuel and full EVM ORUs parallel the tradeoffs between Bitcoin and Ethereum itself.
As with Bitcoin, the state of the Fuel chain is implicitly defined as the set of all unspent transaction outputs; no state-root serialization is necessary. The model for supporting succinct fraud proofs mirrors the one initially proposed for Bitcoin by Greg Maxwell in 2014 (and independently rediscovered by John Adler in 2019); transactions are very similar to Bitcoin transactions, but include an additional data field which specifies the precise location of each of their inputs. With this in place, all cases of fraud — double spends, non-existent inputs, etc. — can be proven in a single round with just one or two inclusion proofs. In addition to low-cost fraud proofs, the UTXO model also promises more performant validation — better state access patterns and room for parallelization (compared to validating EVM execution, which must be done synchronously).
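A double-spend proof of this style can be sketched as follows. This is a toy model, not Fuel’s actual format: inclusion proofs are reduced to simple (block, index) lookups rather than Merkle proofs against committed block roots, and transactions are plain dictionaries.

```python
# Toy sketch of a UTXO-style single-round fraud proof for a double spend:
# two inclusion proofs showing two confirmed transactions consuming the
# same outpoint.

def tx(inputs, outputs):
    return {"inputs": inputs, "outputs": outputs}

def prove_double_spend(chain, proof_a, proof_b):
    (blk_a, idx_a), (blk_b, idx_b) = proof_a, proof_b
    if (blk_a, idx_a) == (blk_b, idx_b):
        return False             # the same transaction twice proves nothing
    tx_a = chain[blk_a][idx_a]   # inclusion proof #1
    tx_b = chain[blk_b][idx_b]   # inclusion proof #2
    shared = set(tx_a["inputs"]) & set(tx_b["inputs"])
    return len(shared) > 0       # True => both spend the same outpoint

chain = [
    [tx(inputs=["coinbase:0"], outputs=["utxo_1"])],   # block 0
    [tx(inputs=["utxo_1"], outputs=["utxo_2"])],       # block 1
    [tx(inputs=["utxo_1"], outputs=["utxo_3"])],       # block 2: double spend!
]
assert prove_double_spend(chain, (1, 0), (2, 0)) is True
assert prove_double_spend(chain, (0, 0), (1, 0)) is False
```

The fraud proof is just two pointers into committed data: no state serialization, no interactivity.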
Fuel will support ERC20 and ERC721 transfers using a model that mirrors the (infamous) colored coins proposal for Bitcoin. It also supports some special transaction types, including HTLCs for atomic swap transfers. The plan is to eventually support a more robust stateless predicate scripting language, with features similar to Bitcoin Script. (See here for more).
ANON’s ZK-Optimistic-Rollup (“ZCash but on the blockchain”)
Another unique ORU project is ANON’s ZK-Optimistic-Rollup, which supports ERC-20 and ERC-721 payments with the same privacy guarantees as ZCash shielded addresses. The design has much in common with ZCash itself (see here for a ZCash primer); claims to funds take the form of UTXO-esque “commitments”; spending involves generating new commitments and creating a “nullifier,” a record that the commitment was spent to prevent future double spends. Transactions include a ZK-SNARK, which proves that all of their validity conditions are met without actually revealing any details to observers.
In order to maintain succinct fraud proofs, ZK-ORU includes a feature not included in ZCash: nullifiers are stored in a Sparse Merkle Tree, updated with each new ORU block. This allows easy proof of both membership (commitment was spent) and non-membership (commitment is still unspent). As with all other ORUs, the computation is deferred optimistically, including the validation of the SNARKs themselves. All cases of fraud — including an invalid SNARK — can be proven in a single step.
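To illustrate how a single structure yields both proof types, here is a toy sparse Merkle tree of depth 8 (real constructions are far deeper, keyed by the nullifier hash, and use optimized empty-subtree handling): the same sibling-path format proves a nullifier spent (leaf "1") or still unspent (empty leaf).

```python
import hashlib

DEPTH = 8  # toy tree with 2**8 leaves; real nullifier trees are much deeper

def h(a, b):
    return hashlib.sha256((a + b).encode()).hexdigest()

# EMPTY[l] is the hash of an empty subtree of height l.
EMPTY = ["0"]
for _ in range(DEPTH):
    EMPTY.append(h(EMPTY[-1], EMPTY[-1]))

class SparseMerkleTree:
    def __init__(self):
        self.leaves = {}  # leaf index -> "1", marking a spent nullifier

    def insert(self, index):
        self.leaves[index] = "1"

    def _node(self, index, height):
        # Hash of the subtree of the given height whose leftmost slot is index.
        if height == 0:
            return self.leaves.get(index, EMPTY[0])
        return h(self._node(index * 2, height - 1),
                 self._node(index * 2 + 1, height - 1))

    def root(self):
        return self._node(0, DEPTH)

    def prove(self, index):
        # Sibling path from leaf to root; works for spent AND unspent leaves.
        return [self._node((index >> level) ^ 1, level) for level in range(DEPTH)]

def verify(root, index, leaf, path):
    cur = leaf
    for level, sibling in enumerate(path):
        right_child = (index >> level) & 1
        cur = h(sibling, cur) if right_child else h(cur, sibling)
    return cur == root

tree = SparseMerkleTree()
tree.insert(5)                                     # nullifier 5 is spent
root = tree.root()
assert verify(root, 5, "1", tree.prove(5))         # membership: 5 was spent
assert verify(root, 9, EMPTY[0], tree.prove(9))    # non-membership: 9 unspent
assert not verify(root, 9, "1", tree.prove(9))     # a faked spend fails
```

Non-membership falls out for free because every empty slot has a well-defined hash, which is what makes the structure suitable for single-step fraud proofs about nullifiers.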
Note that the burden of SNARK generation falls on the user; ANON estimates that SNARK generation should take 10–30 seconds on a consumer-grade laptop.
(Also note that despite the similar names and component parts, ZK-Optimistic-Rollup is actually considerably different from ZK-Rollup, which doesn’t necessarily offer privacy benefits, doesn’t use fraud proofs, and uses SNARKs generated by the operator to prove validity at the block level. Welcome to crypto.)
WCL’s ORU Hub
This ORU implementation aims to be something of a standard for batching transactions between rollup chains, serving as a means for users to voluntarily migrate their funds to an upgraded contract.
The implementation itself is a simple, payments-only account-based chain which, like Fuel, offers cheap fraud proofs and data validation. The main impetus for this construction is to establish a standard architecture for enabling transfers between different chains directly, i.e., without having to withdraw from one chain and redeposit onto another. This is achievable via supporting batched deposits and establishing unidirectional inter-chain cross-links; so long as the validators on the destination chain also check for fraud on the departure chain, payments can be considered final without any additional latency. This parallels the logic in research around ETH 2.0 cross-shard communication.
While the primary planned use-case is upgradability, a potential additional use-case for this mechanism is migration between different independent live rollup chains, although more research here is needed. Further details on the precise protocol for inter-rollup-chain migrations, as well as the specifications chains would need to conform to in order to support this, have not been made public yet, but will be linked here when available.
IDEX 2.0
IDEX is unique in that they arrived at ORU as the best way to build a more scalable version of a project already in production. The IDEX contract, currently live on mainnet, uses more state than any other Ethereum application.
IDEX 2.0’s rollup chain supports order-book-based decentralized exchange functionality, and is built around this specific use-case. The chain is responsible for executing orders and maintaining users’ balances, with other functionality, like more advanced order types, automated trading engines, etc., occurring in permissioned settings.
Validators in IDEX’s ORU are required to stake the IDEX token. Validators submit receipts which serve as attestations to published blocks, and are rewarded and/or punished (if they opt in to full, “riskful” validation) according to their signing off on valid blocks and issuing valid fraud proofs. Rewards are paid out using both a portion of the exchange’s transaction fees and the native token (analysis of the cryptoeconomic model is outside the scope of this piece; see the IDEX 2.0 whitepaper for more).
In their protocol, the block producer posts only the block’s Merkle root as its initial commitment, and only proceeds to post the block content in calldata if challenged. Because there isn’t an on-chain guarantee of rollup-block data availability, this doesn’t strictly fit the definition of Optimistic Rollup laid out above; indeed, they refer to it as an “Optimized Optimistic Rollup.” Once the calldata is published, fraud can be proven in either one or two additional steps. The trust/security implications of this approach will be discussed below.
Block Production
The majority of projects have settled on an open, permissionless block production model; i.e., any party has the right to post a bond and extend the ORU chain by proposing a new block.
The one exception is IDEX 2.0, in which IDEX themselves have the sole, permissioned right to produce blocks. Under these conditions, maintaining trustlessness requires giving the users the ability to withdraw their funds without the permissioned operator’s consent (i.e., to withdraw by publishing a transaction directly on the parent chain). Without this option, users would be helpless against a malicious operator who chooses to censor all of their withdrawal attempts, effectively locking their funds on the rollup chain indefinitely. Likewise, in a permissioned system, the operator can always simply cease producing blocks altogether, grinding the chain to a halt; users need recourse to withdraw in this scenario.
IDEX 2.0 will, indeed, give users the option for mainchain-initiated exits. The added complexity here is that these exits must be validated and potentially reverted if fraudulent, and thus must also come with a posted bond to prevent griefing. But given that they are supported, IDEX’s permissioned model does not, in and of itself, render the system custodial. (Other permissionless chains also plan on supporting mainchain-initiated exits as an option in special cases.)
Most other implementations include some notion of separating block production into two steps: separate commitments to transaction inclusion and transaction ordering. These models address considerations around censorship resistance and mitigating and/or containing front-running opportunities.
In Arbitrum’s model, any party can add transactions to a queue (which they call the “inbox”); block producers are forced to take transactions from the end of the queue and include them in the next block. Invalid transactions are simply included, but their state transition isn’t executed. Both execution of an invalid transaction and failure to execute a valid transaction are punishable offenses via fraud proof, precluding easy censorship (see here for more).
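The forced-inclusion rule can be modeled in a few lines. This is a simplification of Arbitrum’s actual inbox, with transactions as plain strings and the fraud-proof check reduced to a list comparison:

```python
from collections import deque

# Toy model of an inbox-style queue: anyone enqueues transactions; a block
# producer must drain them in order. Skipping or reordering is provable fraud.

class Inbox:
    def __init__(self):
        self.queue = deque()

    def enqueue(self, tx):
        self.queue.append(tx)

    def check_block(self, block_txs):
        # Fraud-proof check: the block must contain exactly the next
        # len(block_txs) queued transactions, in queue order. Invalid
        # transactions are still *included*; they just don't execute.
        if len(block_txs) > len(self.queue):
            return False
        expected = [self.queue[i] for i in range(len(block_txs))]
        if block_txs != expected:
            return False  # censorship / reordering: a punishable offense
        for _ in block_txs:
            self.queue.popleft()
        return True

inbox = Inbox()
for t in ["tx_a", "tx_b", "tx_c"]:
    inbox.enqueue(t)
assert inbox.check_block(["tx_a", "tx_b"]) is True   # honest producer
assert inbox.check_block(["tx_d"]) is False          # skipped tx_c: fraud
assert list(inbox.queue) == ["tx_c"]
```

Because block validity is defined relative to the public queue, censorship is not merely discouraged but directly provable.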
Others, like Optimism and WCL, plan a Proof of Burn auction model for transaction ordering; after a group of transactions is committed for inclusion, rights to order transactions go to the party willing to burn the most Ether. The intent here is for the cost and open competition to minimize profit margins for parasitic front-running opportunities, and ideally mitigate front-running risks in general. See “Miner Extractable Value Auctions” for variations on how schemes like these could work.
Fuel has a block production life-cycle in which a window of time exists where a party (i.e., the company/team) gets permissioned access to ordering rights. In this model, the team is in a privileged position to extract value and monetize their application, without having to restrict block production to a permissioned set.
Finally, Celer’s Rollup is unique in that block production itself will include its own separate consensus mechanism. Prior to developing their rollup implementation, Celer already planned a Proof of Stake Ethereum sidechain (the “State Guardian Network”) to serve as a watchtower service for their state-channel network; participation in block production is permissionless, and requires staking Celer’s token. The sidechain will use BFT consensus — likely Tendermint or something similar. Celer plans on leveraging this sidechain infrastructure for proposal and publication of ORU blocks. (See here for more.)
Additional Trust Assumptions
Most projects adhere strictly to the trust requirements for layer 2 protocols described in the opening section.
IDEX 2.0’s reliance on data availability challenges makes their protocol an exception to this, and brings their protocol outside of the strict definitional scope of ORU. When the block producer proposes a block, they initially only post its Merkle root, with the implicit assumption that they’ll share the associated block data with validators off chain. If, say, the block data is never shared with validators, validators can issue a data availability challenge, forcing the block proposer to publish the data as calldata (as is done initially in typical ORU constructions). If it turns out the block is valid, the validator is punished for an invalid dispute challenge, and gets some portion of their stake slashed.
The trouble here is that the disagreement outlined above is in essence a dispute over which of two parties is actually withholding data; objectively resolving such a dispute is provably impossible, as per the axiom of speaker/listener fault equivalence. Thus, what disincentivizes IDEX from forcibly griefing and ultimately punishing validators are things like concerns over reputation and the incentive to continue participating and earning fees in the exchange ecosystem. Whether or not this proves to be enough to prevent malicious behavior in practice, these assumptions are distinct from the security guarantees of ORU or Ethereum itself, and thus, this use of data availability challenges and punishments qualifies as an additional trust assumption.
The only other project which (arguably) requires an additional trust assumption is the ZK-ORU, which will require a trusted setup for the creation of its SNARK circuit. Projects like ZCash and Aztec have dealt with this by having an elaborate ceremony, cryptographically secure as long as at least one of the n parties involved behaves honestly; ANON plans on doing something similar with ZK-ORU.
Finally, some projects — namely, Fuel, Optimism, and Nutberry — have discussed giving users the option for trust-based faster payments; a user, should they opt to, can consider a payment finalized before it gets included in a block. If this promise is broken, the fraud will not be provable to the parent chain, but could be made evident to any third party. Crucially, this option is entirely opt-in, with the distinction made apparent in the user interface; users opting for pure trustlessness can still choose to wait for a block confirmation. Likewise, Celer plans to offer users a range of security/trust parameters, in which users can require that all transactions be immediately published in rollup blocks, or can accept smaller, less mission-critical updates on their sidechain for a parametrized trust-window before they are finalized on chain.
Trustless instant confirmation can be implemented by having the block producer post collateral to be slashed and paid out to the user if and when the payment promise is broken; several teams are considering releasing their implementations with this functionality in place.
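A minimal sketch of that collateralized-promise mechanism follows, with illustrative names and amounts, no real signatures, and block inclusion reduced to a set membership check:

```python
# Sketch of trustless instant confirmation: the block producer collateralizes
# a payment promise; if the promised payment never appears in a rollup block
# by the deadline, the user claims the collateral.

class InstantConfirmations:
    def __init__(self):
        self.collateral = {}   # producer -> locked amount
        self.promises = {}     # promise_id -> (producer, user, amount, deadline)

    def deposit(self, producer, amount):
        self.collateral[producer] = self.collateral.get(producer, 0) + amount

    def promise(self, pid, producer, user, amount, deadline):
        # Producer commits to include the payment by `deadline`.
        assert self.collateral.get(producer, 0) >= amount, "undercollateralized"
        self.promises[pid] = (producer, user, amount, deadline)

    def claim(self, pid, included_payment_ids, now):
        # After the deadline, an unfulfilled promise pays out from collateral.
        producer, user, amount, deadline = self.promises.pop(pid)
        if pid in included_payment_ids or now <= deadline:
            return (user, 0)   # promise kept (or not yet due): nothing to claim
        self.collateral[producer] -= amount
        return (user, amount)  # promise broken: collateral pays the user

ic = InstantConfirmations()
ic.deposit("producer", 100)
ic.promise("pay#1", "producer", "alice", 40, deadline=50)
# Producer reneges: pay#1 never makes it into a rollup block.
assert ic.claim("pay#1", included_payment_ids=set(), now=60) == ("alice", 40)
assert ic.collateral["producer"] == 60
```

The user accepting the instant payment bears no trust risk so long as the promise is fully collateralized: either the payment lands in a block, or the collateral does.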
Business Models
All projects plan on releasing their work as free open source software.
ANON is planning on releasing their implementations strictly as a public good, with no plans to directly capture value as of now. PinkieBell (the sole pseudonymous developer behind Nutberry) also has no firm monetization plans, though is currently considering a developer fee, which would collect a small portion of the fees earned by block producers for developers working on the project.
Five of the projects (Fuel Labs, Optimism, Interstate, WCL, and IDEX) are planning on monetizing their work by hosting an instance of their implementation themselves, profiting by collecting fees for transaction inclusion and by providing users liquidity for faster withdrawals.
Although block production for Fuel, Optimism, and Interstate is permissionless, all operate under the assumption that they (i.e., the company) will be the primary block producers. They plan on monetizing by collecting fees, with themselves in the advantageous position as priority aggregator (as discussed earlier).
At least one project expressed uncertainty over the legal status of the company as a payment processor, and is awaiting further clarity on the issue.
Offchain Labs is not planning on hosting an instance of Arbitrum Rollup, and plans to monetize by offering projects utilizing their work an enterprise plan, providing support as well as some stronger economic and security guarantees.
IDEX’s economic scheme is the most elaborate; monetization comes from a cryptoeconomic setup involving both collecting exchange fees and value accruing to a native token. Both the block producers (IDEX) and validators are compensated, making IDEX also the only project which attempts to directly incentivize validation. IDEX and Celer are the only projects planning on incorporating an app token. Several other teams anathematized the very idea of a token native to a layer 2 system; others are more open to there potentially being sound economic token models, but feel that more research needs to be done, and have no concrete plans to pursue one for their project.
User Experience / Validation
Most projects plan to support transacting on their rollup chain via a MetaMask plugin, the infrastructural tooling for which is largely already in place, in addition to releasing their own front-end interfaces.
All projects are operating under the assumption that not all users will maintain uptime and verify all blocks; i.e., they expect two classes of users: average end users, and power users (i.e., validators). The nature of all ORUs is that as long as at least one honest party is validating and issuing fraud proofs at any given time, all users are safe. Financial incentive for validation comes in the form of direct reward (in the case of IDEX only), the opportunity to produce blocks and collect fees, and the chance of issuing a fraud proof and collecting a reward. The catch-22 of fraud proofs, of course, is that if the system’s design is sound, one would hope that fraud rarely — if ever — actually occurs.
The other direct incentive for validation is that it gives the validator trustless, fast assurance that the transactions she cares about are finalized; i.e., she doesn’t have to wait out the whole span of the exit window. The expectation is that the sorts of parties who would benefit from this would be exchanges, wallet providers, providers of liquidity for faster exits, users accepting large, important transfers, power users and hobbyists, and the creators of the project itself. One team even suggested that miners may deem it their obligation to validate rollup chains, raising some interesting questions about the interplay between the incentives of layer 2s and the layer 1s they use for data.
Open Topics of Research / Discussion
Signature aggregation

This is easily the hottest topic at the moment: with BLS signatures, the block producer could take each signature in a rollup block and replace them all with a single aggregated one, bringing the on-chain data requirements closer to those of ZK-rollups. The trouble is that, in the case where it needs to be verified, verifying a BLS signature is far more gas-intensive than verifying typical ECDSA signatures. Barry Whitehat says he’s settled on a workable construction that should be ready for his ORU Hub, but consensus across projects hasn’t been reached on the optimal signature aggregation scheme.
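To see why aggregation is attractive, here is a back-of-the-envelope comparison of on-chain signature data. The sizes used (about 65 bytes per ECDSA signature, about 96 bytes for a single aggregate BLS signature) are typical figures; actual encodings vary by project.

```python
# Illustrative data-savings arithmetic, not any project's actual encoding.
ECDSA_SIG_BYTES = 65    # r (32) + s (32) + v (1), a common encoding
BLS_AGG_SIG_BYTES = 96  # one aggregate signature for the whole block


def signature_bytes(n_txs: int, aggregated: bool) -> int:
    """On-chain bytes devoted to signatures for a block of n_txs transactions."""
    return BLS_AGG_SIG_BYTES if aggregated else n_txs * ECDSA_SIG_BYTES


n = 1000
plain = signature_bytes(n, aggregated=False)  # 65,000 bytes
agg = signature_bytes(n, aggregated=True)     # 96 bytes, regardless of n
savings = 1 - agg / plain                     # ~99.85% of signature data
```

The aggregate signature's size is constant in the number of transactions, which is why the savings grow with block size; the cost is paid instead in verification gas on the rare occasions a fraud proof requires it.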
A “post-data” transaction field

One can imagine a future, if and when the use of rollup techniques becomes the norm on Ethereum, in which a large portion of the chain data sits in “rolled up” form; i.e., is utilized for availability, but not directly in execution. This change could have significant economic and technical implications for Ethereum client implementations. One proposal, to prepare for such a future, is to introduce a special transaction data field, tentatively called “post-data”, that exists solely for the purpose of availability and is restricted from ever touching the state or the EVM. With this in place, clients could optimize the processing and storage of this data, and ideally, its gas price would ultimately be reduced. An EIP outlining this proposal is in its early stages.
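For context on what such a repricing would mean: post-Istanbul (EIP-2028), calldata costs 16 gas per nonzero byte and 4 gas per zero byte. The "post-data" rate below is purely a hypothetical placeholder for illustration; no such price has been proposed or adopted.

```python
# Post-Istanbul calldata pricing per EIP-2028; the "post-data" rate is a
# hypothetical assumption, used only to show the shape of the savings.
GAS_NONZERO = 16  # gas per nonzero calldata byte
GAS_ZERO = 4      # gas per zero calldata byte


def calldata_gas(data: bytes) -> int:
    """Gas spent on calldata under current (EIP-2028) pricing."""
    return sum(GAS_NONZERO if b else GAS_ZERO for b in data)


# A toy rollup-block payload: 500 zero bytes plus 510 nonzero bytes.
block_data = bytes(500) + bytes(range(1, 256)) * 2

current_cost = calldata_gas(block_data)        # 500*4 + 510*16 = 10,160 gas
HYPOTHETICAL_POSTDATA_RATE = 1                 # assumed flat gas/byte
postdata_cost = len(block_data) * HYPOTHETICAL_POSTDATA_RATE
```

Since rollup data is only ever needed for availability, never executed, a flat and lower per-byte rate is the intuition behind the proposal; the actual number would be set by the EIP process.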
Bond size and challenge period parameters
The ideal parameters for the bond required for block production and for the window of time before blocks are considered finalized (and thus withdrawals onto the mainchain are possible) are still an open discussion, with most teams not yet settled on final numbers. A larger challenge period ensures users will have more time to detect fraud, and puts a heavier burden on any miner attempting to censor fraud proofs. Some projects expect several days to one week to be safe, reasoning that they should err on the side of safety, and that users will usually be able to withdraw faster anyway, by either atomically swapping onto the main chain or tokenizing and selling their exits as bonds (as has been discussed for Plasma constructions). In Ed Felten’s analysis, he argues that a shorter dispute window of 3 hours is sufficient.
Unlike other layer 2 constructions, where bonds are necessary collateral to compensate users in the case of an invalid settlement, in ORU, bonds exist solely to provide a disincentive against invalid block production beyond just the wasted gas (and perhaps also to reward successful fraud provers). Suggested bond requirements range from 32 ETH, on par with the staking requirements for ETH 2.0, to just 1 ETH, with some suggesting it should correspond to the amount of economic value on the rollup chain (Offchain Labs plans “1 Eth, or 2% of total value in chain, whichever is more.”)
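Offchain Labs' suggested rule is simple to state precisely; a one-line sketch (the function name and example values are mine, the formula is theirs):

```python
def required_bond(total_value_eth: float,
                  floor_eth: float = 1.0,
                  fraction: float = 0.02) -> float:
    """Offchain Labs' rule: '1 Eth, or 2% of total value in chain,
    whichever is more.'"""
    return max(floor_eth, fraction * total_value_eth)


required_bond(10.0)    # 1.0 ETH: the floor applies on a small chain
required_bond(5000.0)  # 100.0 ETH: 2% of value dominates on a large one
```

Tying the bond to total value on the chain means the disincentive scales with the amount an invalid block could plausibly threaten, rather than staying fixed as the chain grows.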
Privacy

One downside to rollups, relative to Plasma and channel constructions, say, is that one doesn’t get the privacy benefits of keeping data entirely off-chain; the only project covered here that offers privacy benefits is ZK-ORU. Other projects are discussing building other privacy-focused rollups, either by implementing other known privacy-preserving technology on top of their constructions (i.e., Tornado Cash–esque mixers or AZTEC-esque privacy layers) or by creating hybrid rollup/federated models, where users rely on a consortium for data availability and thus can publish less on chain.
Block Production Model
One of the primary sources of disagreement across teams — beyond the full EVM / app specific divide — seems to be over what models for permissionless block production should be deemed acceptable, and what results various models will yield, namely:
- Whether single-priority aggregator models will simply incentivize more destructive front-running, rather than mitigating it.
- Whether pure Proof of Stake — style block production could be griefed and effectively censored by a wealthy party.
- How market value can be adequately discovered and expressed in a Proof of Burn auction.
Some research and modelling could potentially shed some light on these questions, but likely we won’t know anything conclusive until we see them tested in the wild.
Optimistic Game Semantics
The Optimism team’s research into the theoretical properties of layer 2 constructions culminated in a notion of Optimistic Game Semantics, an attempt to express the different layer 2 constructions under a unified semantic framework of fraud/dispute conditions. The vision is to create shared contract logic and infrastructure across different types of layer 2 applications (including not just rollups, but channels, Plasma, etc.). Optimism and Cryptoeconomics Lab (currently focusing on Plasma constructions, but likely to develop rollup implementations in the future) are both attempting to develop tooling within this grand unified framework.
The teams utilizing the Optimistic Rollup pattern arrived at it from various angles: exploring the theoretical design space of different layer 2 constructions, seeking a way to efficiently run an execution engine within the EVM, trying to create a more scalable version of an existing Ethereum app, etc. All converged on ORUs as the best way forward: enough scalability benefit with minimal practical complication.
The remaining open questions are the ones only answerable once these systems hit mainnet and we see how the ecosystem chooses to use these tools; i.e., whether projects prefer to plug their infrastructure into a more generalized rollup solution or craft something more specifically geared to their use cases, plus some more concrete benchmarks for gas costs and validation burden to put hard numbers on the trade-offs. But with several projects on testnet already, and no sneaky research or usability challenges in sight, what’s clear is that we will, truly, see these in action soon. Scalable autonomous smart contracts; it’s happening.
*Correction: an earlier version of this piece implied that Optimism’s planned block production model gave them permissioned priority as sequencer; their MEV auction protocol will not, in fact, privilege any parties. Thanks Ben Jones.
Thanks to all teams interviewed for their helpful discussions.