Rollup and Decoupled SVM: An In-Depth Analysis

Soon SVM
19 min read · Oct 16, 2024


Introduction

As blockchain technology becomes widely adopted, on-chain scalability issues have become increasingly prominent, especially in public chain ecosystems like Ethereum. How to achieve high throughput and low transaction costs while ensuring decentralization and security has become a major challenge. Rollup, as a Layer 2 (L2) solution, has received widespread attention. Among them, Optimistic Rollup is a widely used representative scheme, with Optimism being its flagship project.

This article will first take Optimism as an example, starting from the working mechanism of Optimistic Rollup to deeply analyze its security assurance mechanisms. The aim is to help readers thoroughly understand the underlying principles of block derivation and decoupling, and then explore the specific design and architectural advantages of SOON’s Decoupled SVM. This will enable readers to gain a deep understanding of the concept and principles of Decoupled SVM.

To provide a solid foundation for later discussions, we’ll start by analyzing the basics of Rollup to understand how the fundamental security of Optimistic Rollup is guaranteed. Rollup is a mainstream scaling solution that reduces the cost of user interaction with the blockchain. However, few have meticulously explored the technical principles and security issues behind Rollup, and comprehensive analyses from an overall perspective are rare. Today, we will delve deeply into its principles and the deeper issues involved.

The Architecture of Optimistic Rollup — Optimism as an Example

Key Components of Optimistic Rollup Architecture

L2 faces a very important issue: how to derive blocks from L1?

Block derivation matters because L2 is essentially an extension of L1. L2 cannot keep a ledger independently of L1, and the legitimacy of L2’s ledger must be recorded on L1 in a form that L1’s decentralized consensus can confirm.

Before diving into the specifics of derivation, let’s first look into the relationship between L1 and L2. How exactly do L2 and L1 communicate, and what is the relationship between the data?

The key data that L2 submits to L1 for storage includes:

  • DA Commitment: A commitment proving that L2’s transaction data has been published, ensuring that subsequent verifiers can access and verify this data.
  • State root: The root reflecting the on-chain state after L2 execution.
  • Blob (usually placed on a separate DA layer): The blob data submitted by L2, serving as proof of data availability.

This stored data guarantees the finality of L2 transactions and provides the foundational data for future verification and challenges.
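
For intuition, the per-batch submission described above can be modeled as a small data structure. All names and types here are a hypothetical sketch, not Optimism’s or SOON’s actual on-chain layout:

```rust
/// Hypothetical sketch of what a rollup posts to L1 for each batch.
/// Field names and types are illustrative only.
#[derive(Debug, Clone, PartialEq)]
pub struct L1Submission {
    /// Commitment proving the transaction data was published
    /// (e.g. a hash of the blob).
    pub da_commitment: [u8; 32],
    /// Root of the L2 state after executing this batch.
    pub state_root: [u8; 32],
    /// Pointer into the DA layer where the full blob lives.
    pub blob_ref: BlobRef,
}

#[derive(Debug, Clone, PartialEq)]
pub struct BlobRef {
    /// DA-layer block containing the blob.
    pub da_block: u64,
    /// Index of the blob within that block.
    pub index: u32,
}

impl L1Submission {
    /// A verifier trusts the batch data only if the commitment matches
    /// the blob it actually fetched from the DA layer.
    pub fn commitment_matches(&self, fetched_blob_hash: [u8; 32]) -> bool {
        self.da_commitment == fetched_blob_hash
    }
}
```

The commitment check is what ties the cheap on-chain record to the bulk data stored off-chain or in blobspace.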

L2 systems need to listen to key transaction data from L1 to ensure synchronization and consistency with L1. The primary data that L2 listens for includes:

  • Deposit and withdrawal transactions: L2 must constantly monitor deposit and withdrawal transactions on L1, as the confirmation of these transactions on L1 directly affects user account statuses on L2. L2 must synchronize these transactions in real time to ensure security and consistency in the flow of funds between L1 and L2.
  • Transaction batch data submitted by L2: This is the batch data that L2 submits to L1, containing transaction details. It allows L1 or independent verifiers to replay these transactions and perform state verification.


Why Is It Designed This Way?

We can categorize the data that L2 submits to L1 into two major types for understanding:

The first type is transaction batch information. This batch information is particularly important because it supports independent verifiers in replaying L2’s transactions. Verifiers can use this batch information to recompute an independent state root and check whether it matches the state root that L2 submitted to L1. This is key to ensuring that L2 cannot “misbehave” at will. If the data submitted by L2 is illegal or fraudulent, independent verifiers, after replaying and deriving a different state root, can submit a fault proof and initiate a challenge, requiring the block to be rolled back.
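
The replay-and-compare check described above can be sketched in a few lines. This toy model uses a balance map and a plain hash in place of a Merkle trie and a real state root; it only shows the shape of the verifier’s job, under those simplifying assumptions:

```rust
use std::collections::BTreeMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy account state: address -> balance. Real rollups use a Merkle trie.
type State = BTreeMap<u64, u64>;

/// A transfer, the only transaction kind in this sketch.
#[derive(Clone)]
struct Tx { from: u64, to: u64, amount: u64 }

/// Replay a batch of transactions over a starting state.
fn replay(mut state: State, batch: &[Tx]) -> State {
    for tx in batch {
        let from_bal = state.get(&tx.from).copied().unwrap_or(0);
        if from_bal >= tx.amount {
            *state.entry(tx.from).or_insert(0) -= tx.amount;
            *state.entry(tx.to).or_insert(0) += tx.amount;
        }
        // Invalid transfers are skipped, mirroring deterministic execution rules.
    }
    state
}

/// Stand-in for a state root: a hash over the full (ordered) state.
fn state_root(state: &State) -> u64 {
    let mut h = DefaultHasher::new();
    for (k, v) in state { (k, v).hash(&mut h); }
    h.finish()
}

/// The verifier's check: recompute the root and compare with L2's claim.
fn verify_batch(pre: &State, batch: &[Tx], claimed_root: u64) -> bool {
    state_root(&replay(pre.clone(), batch)) == claimed_root
}
```

When `verify_batch` returns false, that mismatch is exactly the condition under which a verifier would construct a fault proof and initiate a challenge.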

The second type is the state root. The state root reflects the on-chain state after L2 transactions are executed, including account balances, contract statuses, etc. This data is submitted to L1 for data verification. If independent verifiers compute a different result, they can initiate a challenge. Therefore, L2 must ensure that the data submitted to L1 is entirely correct; otherwise, it may face challenges and penalties from verifiers.

The core data that L2 listens for from L1 are deposit and withdrawal transactions. Deposits and withdrawals on L1 must be reflected in the account statuses on L2. This ensures the security and consistency of cross-chain fund operations. L2 also needs to listen for transaction batch data.

One might wonder: since L2 submits this data itself, why does it still need to listen for it?

Even though L2 packages and submits its transactions to L1 first, it must confirm that the data was actually recorded on L1 and is available. Reading the batch details back from L1 is essentially a safety check: if the submitted data never lands on L1, L2 cannot guarantee the system’s security.

At this point, another question might arise: why doesn’t L2 need to wait for L1 to confirm the submitted data before continuing block production? This is the core of the “optimistic assumption” in Optimistic Rollup: optimistic execution. L2 doesn’t need to wait for L1’s confirmation at every step but instead continues to produce blocks and process transactions independently, assuming that all transactions are legal and valid. This greatly improves system efficiency by avoiding the long delays associated with the verification process. Each L2 block assumes that the transactions are valid, and periodically submits the transaction data in batches to L1 until a verifier finds a problem, submits a fault proof, and initiates a challenge.
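
The optimistic production loop described here can be sketched as follows. The struct and its fields are illustrative (no real networking or execution), showing only the key property: block production never blocks on L1, and batches are flushed periodically:

```rust
/// Minimal sketch of an optimistic sequencer. Names are illustrative.
struct Sequencer {
    pending: Vec<String>,            // mempool: transactions awaiting inclusion
    unbatched: Vec<String>,          // txs in blocks not yet batched to L1
    blocks_produced: usize,
    batches_sent: Vec<Vec<String>>,  // batches handed off for L1 submission
    batch_every: usize,              // flush a batch after this many blocks
}

impl Sequencer {
    fn new(batch_every: usize) -> Self {
        Sequencer {
            pending: Vec::new(),
            unbatched: Vec::new(),
            blocks_produced: 0,
            batches_sent: Vec::new(),
            batch_every,
        }
    }

    fn submit_tx(&mut self, tx: &str) {
        self.pending.push(tx.to_string());
    }

    /// Optimistic: seal a block immediately, assuming all txs are valid.
    /// No L1 round trip happens here.
    fn produce_block(&mut self) {
        self.unbatched.append(&mut self.pending);
        self.blocks_produced += 1;
        // Periodically hand accumulated transactions off as one batch;
        // block production continues regardless of L1 confirmation.
        if self.blocks_produced % self.batch_every == 0 {
            self.batches_sent.push(std::mem::take(&mut self.unbatched));
        }
    }
}
```
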

We can now see in what sense L2 is derived from L1.

Engineering Implementation — Optimism as an Example

Currently, Optimism’s main components include: op-node, op-geth, op-batcher, and op-proposer.

  • op-node: This is the coordinator node within the system. It communicates with L1 (the Ethereum mainnet), receiving block headers and transaction information from L1, while managing the state transition logic within the L2 network. The op-node acts as a bridge between L1 and L2, helping to transfer block data from L1 to the L2 network. Additionally, it is the core node that coordinates different components.
  • op-geth: This is the node that implements the Ethereum Virtual Machine (EVM) on Optimism. It is responsible for executing smart contracts and transactions on L2. Essentially, all smart contracts and execution environments running on L2 are handled by op-geth, which serves as the core execution engine.
  • op-batcher: This component is responsible for submitting transactions packaged on L2 to L1. It collects users’ transactions, packages them into a batch, and submits the batch to Ethereum L1 via L1-RPC. This batch of data is not executed immediately but is stored on L1, with L2’s state updates relying on these batches.
  • op-proposer: This component is responsible for submitting L2’s state roots to L1. Whenever a state change occurs in the L2 network (such as executing transactions), the op-proposer periodically submits these state changes to L1.

The op-node and op-geth together form what is commonly referred to as the Sequencer, while op-batcher and op-proposer are mainly responsible for ensuring that the transaction data we mentioned earlier is verifiable, focusing primarily on security by design.

At this point, another question may arise: why are both the op-batcher and op-proposer necessary, even though they both submit data to L1? As we mentioned earlier, the op-batcher submits only the “raw materials” (transaction batches), while the final “product” (state root) is generated by the execution engine (op-geth) on L2 and is eventually submitted to L1 by the op-proposer.

Thus, while the data submitted by the op-batcher is important, it doesn’t control the state communication from L2 to L1 directly. Instead, it simply provides transaction data that can be replayed by independent verifiers. The Sequencer (op-node + op-geth), under the assumption of optimistic execution, produces blocks independently and periodically provides transaction data and results to the op-batcher and op-proposer.

In summary:

  • op-node: Coordinates data synchronization between L1 and L2.
  • op-geth: Executes transactions and contracts on L2.
  • op-batcher: Submits L2 transaction batches to ensure data verifiability.
  • op-proposer: Periodically submits L2’s state root to L1 to ensure state updates and data consistency.

From the previous discussions, we can now understand how L2’s security is guaranteed. Although we haven’t specifically discussed fault proofs, the core conditions for them have already been implicitly reflected in the L2 architecture. The security of the L2 system relies on independent verifiers replaying the transaction data submitted by L2 to verify its correctness and, subsequently, raise challenges if necessary.

In the Optimistic Rollup framework, the transaction data and state roots submitted from L2 to L1 are assumed to be correct, and the system doesn’t immediately perform a full verification. This optimistic assumption allows L2 to continue processing transactions efficiently and generating blocks without waiting for every transaction to be individually verified. By increasing throughput and reducing transaction costs, the system becomes highly efficient.

During the challenge process, if it is confirmed that there are issues with the state root, L1 triggers a rollback mechanism, retracting the erroneous block while penalizing the L2 entity. The verifier who submits the fault proof will be rewarded according to the system’s design, incentivizing more verifiers to actively participate in the security monitoring of the system.

This marks a significant difference between Rollups and sovereign blockchains. Optimistic Rollups focus on data verifiability, assuming that the data is correct first, and then waiting for independent verifiers to raise challenges. In contrast, sovereign blockchains focus on building decentralized nodes and consensus mechanisms.

Decoupling the Execution Layer from the Consensus Layer

Next, we will dive deeper into the concept of decoupling. Our previous analysis of Optimism shows that its design separates L2’s block production from L1’s consensus mechanism while retaining a verification mechanism to ensure security. This, in essence, is decoupling. The core idea is to separate the execution layer from the consensus layer so that they operate independently. In traditional blockchain architectures, execution and consensus are tightly coupled: transactions must be confirmed by node consensus before they can be processed.

The Key Role of the Derivation Pipeline

Although decoupling the execution and consensus layers lets L2 produce blocks independently, the system’s security still relies on the pipeline that derives data from L1.

The derivation pipeline resolves the potential security and consistency issues that may arise during L2’s independent block production. Specifically, the derivation pipeline provides a trusted foundation for L2’s block generation. Even if L2 does not wait for L1’s real-time confirmation, it can still ensure the legality and security of transactions through derived data. When independent verifiers replay transactions and discover discrepancies between L2’s data and what was submitted to L1, the derivation pipeline provides sufficient data support, allowing fault proofs to be raised and triggering a rollback mechanism. Without this derivation process, L2 cannot guarantee the verifiability of its data, and fault proofs would become meaningless. Therefore, the derivation pipeline is not only a technical support for L2’s scaling capabilities but also a core component ensuring the security of the system.

Decoupled SVM — The Core Execution Layer of SOON

At this point, we should have a relatively deep understanding of decoupling. The example we used above was Optimism in the Ethereum ecosystem. In fact, within Ethereum, most Rollup solutions have already achieved a decoupling of the execution layer from the consensus layer. However, in SVM, everything is just beginning. SOON adopts a solution similar to Optimism, which is why we used Optimism as an example earlier. However, SOON decouples SVM (Solana Virtual Machine) from the Solana consensus layer. There are even more complex and in-depth issues to explore here. While the core idea is the same, the specific engineering implementation and challenges differ, so we need to continue exploring how SOON decouples the SVM core execution layer from Solana.

In Solana, the SVM is tightly coupled with Solana’s consensus layer. The SVM handles the execution of smart contracts and transactions, while the consensus layer reaches consensus through Tower BFT and PoH (Proof of History) mechanisms. SOON’s challenge lies in decoupling SVM from this tightly integrated architecture, allowing it to operate independently and interface with other consensus mechanisms or Layer 1s. This involves dismantling the existing Solana validator node architecture, removing consensus-related components while retaining the core functionality of the execution layer, and redesigning a mechanism to handle block derivation, data submission, and verification.

To further analyze this problem, we must first deeply understand the structure of Solana validators.

Structure of Solana Validator

Validators play a crucial role in Solana’s architecture, serving as the core components for achieving consensus, verifying transactions, and maintaining the network state. Validators ensure transaction validity and consensus by running multiple modules. Solana’s validator structure is relatively complex, integrating the execution environment (SVM) and consensus mechanisms (Tower BFT and PoH). We will analyze the key modules of the validator one by one.

  • JSON RPC Service: The client (such as a wallet or dApp) sends requests to the Solana validator through this service. It serves as the system’s external interface, handling requests like submitting transactions, querying account statuses, and retrieving block information.
  • TPU (Transaction Processing Unit): This is the core module responsible for receiving and executing transactions. It integrates the SVM, sorts transactions, and processes their execution logic. In the TPU, transactions are first sorted and packaged, then executed in the SVM environment. The resulting state changes are submitted to the consensus module in the form of state roots, ultimately being reflected in the network-wide state.
  • Bank Forks: This module handles chain state forks when the network experiences state divergences (e.g., when multiple validators produce blocks simultaneously).
  • Gossip Service: This protocol manages communication between nodes in the Solana network, propagating the latest state, transactions, and block information across the validator network. Validator nodes receive blocks and transactions from other nodes through this service, ensuring state consistency across the network.
  • Other Consensus-related Processes or Modules: The Replay Stage re-verifies transactions packaged in blocks, ensuring data consistency by replaying them. The validator continuously receives transaction information from other nodes, executes and verifies it, and finally reaches consensus under Tower BFT and PoH to produce blocks.

In Solana’s architecture, the execution layer (SVM) is tightly coupled with the consensus layer. However, for Optimistic Rollups, many of these modules are unnecessary. In an Optimistic Rollup, after receiving a transaction, a Rollup node can immediately execute, package, and produce a block without the need for consensus. The only requirement is to periodically submit verifiable transaction batch data and state root information to the DA layer or L1 while ensuring access to deposit and withdrawal transactions, transaction batch data, and state root information from L1 or the DA layer. Combined with the fault proof mechanism, this is enough to ensure security.

Thus, to decouple Solana’s core execution environment (SVM) from Solana, the first step is to remove the consensus-related modules from the Solana validator and reassemble the rest. The next step is to build the mechanism for L2 to derive information from L1, which is the derivation process (detailed earlier). Lastly, the fault proof mechanism needs to be constructed. With these elements in place, SOON can achieve an extremely flexible and secure Optimistic Rollup. This is the work that SOON has undertaken.
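
As a toy illustration of this module split, one can tag each validator module with whether a rollup node still needs it after decoupling. The classification below is an illustrative reading of the text, not SOON’s exact cut:

```rust
/// Solana validator modules discussed above. Illustrative only.
#[derive(Debug, PartialEq)]
enum Module { JsonRpc, Tpu, BankForks, Gossip, TowerBft, Poh, ReplayStage }

/// Does an optimistic-rollup node still need this module after decoupling?
fn needed_for_rollup(m: &Module) -> bool {
    match m {
        // Execution-side modules survive the decoupling.
        Module::JsonRpc | Module::Tpu | Module::BankForks => true,
        // Consensus and peer-propagation modules are removed; their role is
        // replaced by derivation from L1 plus the fault-proof mechanism.
        Module::Gossip | Module::TowerBft | Module::Poh | Module::ReplayStage => false,
    }
}
```
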

Now, the SOON Stack becomes clear. If the destination for submitting verifiable data is replaced by a dedicated DA layer, any DA layer can be used for independent verification and validation by SOON. If the place for verifying blocks, storing state root information, and processing deposits and withdrawals is replaced with a different L1, any L1 can be used as the settlement layer. Based on the SOON Stack, any entity can build a Rollup that chooses any L1 and any DA layer.

Next, we will detail how the Decoupled SVM is implemented from an engineering perspective.

Execution Layer Reconstruction

In SOON’s architecture, the consensus-related modules of the traditional Solana validator have been significantly simplified or completely removed, focusing on transaction execution, state updates, and data submission. The consensus layer no longer directly handles transaction verification and sorting, freeing up a significant amount of computational resources and making the system more efficient.

SOON retains the core modules for transaction processing, packaging, and transaction transmission. Below are some key modules that have been reconstructed and optimized:

TPU (Transaction Processing Unit)

TPU is the core component of SOON’s execution layer, responsible for receiving, sorting, and executing transactions and for generating state updates. In the restructured validator, the TPU continues to serve as the primary execution engine. In SOON, the TPU processes incoming transactions, invoking Anza SVM APIs to execute smart contracts and state changes. The TPU handles the following functions:

  • Receiving and Sorting Transactions: The TPU receives and sorts transaction requests from the network, packaging them into batches and ensuring transactions are executed in a predetermined order.
  • Calling SVM APIs: Each transaction is submitted to the SVM for execution.
  • Generating State Roots: After executing transactions, the TPU generates state roots that serve as system state snapshots. These state roots are submitted to the consensus layer or external storage layer via the derivation mechanism.

The detailed transaction processing flow in the TPU is as follows:

  • SigVerifyStage: This stage verifies the signature of each transaction, ensuring that the transaction’s signature is valid before entering the execution flow.
  • BankingStage: After signature verification, transactions move to the BankingStage, which manages account and state updates based on the content of the transactions. Although this stage handles transaction logic, it doesn’t actually execute the transactions but computes the relevant state changes.
  • SVM Executor: This stage is responsible for executing the actual smart contracts or instructions within the transactions. The SVM performs contract calls and execution based on the previous account state and transaction content.
  • Entry Components: This final stage handles transaction entries. It packages the executed transactions that resulted in state changes and writes them into the blockchain, recording both the transactions and state changes on the blockchain.
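
The four stages above compose naturally as a function pipeline. The stage names follow the text; the stage bodies are toy stand-ins (string tagging instead of real signature checks and SVM execution):

```rust
#[derive(Clone, Debug, PartialEq)]
struct RawTx { sig_ok: bool, data: String }

/// SigVerifyStage: drop transactions whose signatures are invalid.
fn sig_verify(batch: Vec<RawTx>) -> Vec<RawTx> {
    batch.into_iter().filter(|t| t.sig_ok).collect()
}

/// BankingStage: compute the state changes each transaction implies
/// (without actually executing contract code).
fn banking(batch: &[RawTx]) -> Vec<String> {
    batch.iter().map(|t| format!("delta({})", t.data)).collect()
}

/// SVM executor: run the contract logic (here, just tag the result).
fn execute(deltas: Vec<String>) -> Vec<String> {
    deltas.into_iter().map(|d| format!("executed:{d}")).collect()
}

/// Entry stage: package executed results into a block entry.
fn entries(results: Vec<String>) -> String {
    results.join("|")
}
```
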

Derivation Layer and Derivation Pipeline

SOON has restructured the block derivation layer in engineering and designed a complete interface system to support key processes such as derivation, packaging, and fault proofs. This section will explain in detail how SOON’s derivation layer obtains and processes data from Layer 1 (L1) and integrates it into Layer 2 (L2) block production processes, ensuring system consistency and security.

Derivation Layer’s Architecture and Interface Design

During the block derivation process in SOON, L2 block production relies on key information derived from L1. This information includes L1 block headers, deposit transactions, data availability batch information, and more. These data are parsed and packaged into L2 blocks to complete final block production. The specific process is as follows:

Derivation

The first step in derivation is to obtain the latest block header from L1 and extract the key information needed for derivation. This information includes:

  • Block header: Metadata of the block, used to confirm the block’s timestamp and basic information.
  • Deposit transactions: Cross-chain deposit transactions that occur on L1 need to be reflected in the accounts on L2.
  • Data availability batch information: Critical information about data availability to ensure that data on L2 can be correctly accessed and verified by validators.

By implementing the PayloadAttribute trait, this key information is parsed and stored in a struct, preparing it for subsequent block packaging.
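
The text names a `PayloadAttribute` trait but does not show its signature, so the shape below is a guess for illustration: a trait that extracts the derivation-relevant fields (block header metadata, deposits, DA batch references) out of a raw L1 block:

```rust
/// Derivation-relevant information parsed out of an L1 block.
/// Field names are hypothetical.
#[derive(Debug, Clone, PartialEq)]
struct L1Info {
    block_number: u64,
    timestamp: u64,
    deposits: Vec<(String, u64)>, // (L2 recipient, amount)
    da_batch_refs: Vec<u64>,      // pointers to DA batch data
}

/// A raw L1 block as the derivation step might see it.
struct L1Block {
    number: u64,
    timestamp: u64,
    deposit_logs: Vec<(String, u64)>,
    batch_refs: Vec<u64>,
}

/// Guessed shape of the PayloadAttribute trait named in the text.
trait PayloadAttribute {
    /// Parse the derivation-relevant fields out of an L1 block.
    fn parse(&self) -> L1Info;
}

impl PayloadAttribute for L1Block {
    fn parse(&self) -> L1Info {
        L1Info {
            block_number: self.number,
            timestamp: self.timestamp,
            deposits: self.deposit_logs.clone(),
            da_batch_refs: self.batch_refs.clone(),
        }
    }
}
```
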

Packing

After parsing and obtaining the derived information from L1, L2 needs to package this information with regular transactions submitted by L2 clients (i.e., transactions initiated by users on L2). This packaging process is implemented through the BlockPayload trait, which integrates all transactions and block header information into a unified data structure. During the packaging process, the system processes transactions from multiple sources, including:

  • L1-derived data: Such as block headers, deposit transactions, etc.
  • L2 local transactions: Regular transactions submitted by users on L2, which enter the packaging process through L2’s TransactionStream.

Finally, these transactions are combined into a standardized block payload, which is ready for execution and processing.
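
The packing step can be sketched as follows. The text names a `BlockPayload` trait; the struct and ordering rule below are guessed for illustration, with the one essential property that L1-derived deposits come before user transactions so every replayer orders the block identically:

```rust
/// Guessed shape of a packed block payload. Field names are hypothetical.
#[derive(Debug, PartialEq)]
struct BlockPayloadSketch {
    l1_timestamp: u64,
    deposit_txs: Vec<String>, // forced inclusions derived from L1
    user_txs: Vec<String>,    // regular L2 transactions from the stream
}

/// Merge L1-derived inputs with local L2 transactions into one payload.
fn pack(l1_timestamp: u64, deposits: Vec<String>, stream: Vec<String>) -> BlockPayloadSketch {
    // Deposits derived from L1 are placed before user transactions so that
    // every independent replayer reconstructs the same block.
    BlockPayloadSketch { l1_timestamp, deposit_txs: deposits, user_txs: stream }
}

impl BlockPayloadSketch {
    /// Flatten into the execution order the engine will see.
    fn ordered_txs(&self) -> Vec<String> {
        self.deposit_txs.iter().chain(self.user_txs.iter()).cloned().collect()
    }
}
```
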

Transport and Production

The packaged BlockPayload is sent to SOON’s core module, the Engine, which implements the EngineAPI interface. The Engine is responsible for executing the packaged transactions on the SVM (Solana Virtual Machine) and generating the final block. During this process, the system handles transaction execution, reorganization (reorg), and final block confirmation.

Transaction Execution

In the Engine, BlockPayload is passed to the SVM for execution.

Through the new_block function in the EngineAPI interface, the system can generate a new block based on the received BlockPayload and add it to the L2 blockchain. During this process, SVM ensures the correctness of transaction execution and verifies state updates.

Block Reorganization

In certain cases, the system may need to reorganize the L2 blockchain, which usually occurs when L1 experiences a reorg, requiring L2 to synchronize with L1’s state. The reorg function in the EngineAPI interface triggers the reorganization operation, resetting L2’s chain to match L1’s state. This mechanism ensures that L2’s chain state remains consistent with L1, preventing inconsistencies caused by L1 reorgs.

Block Finalization

The final step in block production is block confirmation and finalization. Using the finalize function in the EngineAPI interface, the system can mark the generated block as finalized and record it on the L2 blockchain. This process ensures that the block will not be further modified, guaranteeing the finality of transactions.
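
The three EngineAPI operations described above (`new_block`, `reorg`, `finalize`) can be sketched as a trait. The real signatures are not given in the text, so these are guessed for illustration, with a toy in-memory chain as the implementor:

```rust
/// Guessed shape of the EngineAPI named in the text. Signatures are
/// illustrative; payloads are plain transaction lists here.
trait EngineApiSketch {
    fn new_block(&mut self, payload: Vec<String>) -> u64; // returns new height
    fn reorg(&mut self, to_height: u64);                  // reset to match L1
    fn finalize(&mut self, height: u64);                  // mark immutable
}

struct ToyEngine {
    chain: Vec<Vec<String>>, // block i holds its transactions
    finalized: u64,          // highest finalized height
}

impl EngineApiSketch for ToyEngine {
    fn new_block(&mut self, payload: Vec<String>) -> u64 {
        self.chain.push(payload);
        self.chain.len() as u64
    }

    fn reorg(&mut self, to_height: u64) {
        // Drop blocks above the target; finalized blocks must never be dropped.
        assert!(to_height >= self.finalized);
        self.chain.truncate(to_height as usize);
    }

    fn finalize(&mut self, height: u64) {
        assert!(height as usize <= self.chain.len());
        self.finalized = self.finalized.max(height);
    }
}
```

The invariant enforced in `reorg` captures the point of finalization: an L1-triggered reorg may rewind unconfirmed L2 blocks, but never finalized ones.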

Fault Proofs and Challenge Mechanism

SOON’s derivation layer not only supports block production and reorganization but also ensures system security through a fault-proof mechanism. Independent validators can replay L2’s transactions based on the state root and transaction batches submitted through the derivation layer to verify the correctness of the data. If discrepancies are found between L2’s submitted data and the actual execution results, validators can initiate a challenge by submitting a fault proof.

Fault Proof Process:

  1. Validators replay transactions: Using transaction data derived from L1, validators can replay L2 transactions, compute the state root, and compare it with the submitted results.
  2. Submit fault proof: If a discrepancy is found, validators can submit a fault proof through L1, requesting a system rollback.
  3. Block rollback and reorganization: Once the fault proof is verified, the system will trigger a block rollback mechanism, cancel the erroneous block, and use the derivation mechanism to ensure data consistency.
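
The three steps above can be condensed into one decision function. The "state root" here is a toy checksum and the dispute game is collapsed into a single comparison; real systems use Merkle roots and an interactive on-chain challenge:

```rust
/// Result of the challenge flow for one batch.
#[derive(Debug, PartialEq)]
enum Outcome { Accepted, RolledBack }

/// Toy version of the fault-proof flow, with a sum as the "state root".
fn challenge(claimed_root: u64, batch: &[u64]) -> Outcome {
    // Step 1: the validator replays the batch derived from L1.
    let recomputed: u64 = batch.iter().sum();
    if recomputed != claimed_root {
        // Step 2: the mismatch is submitted to L1 as a fault proof.
        // Step 3: the proof checks out, so the block is rolled back.
        Outcome::RolledBack
    } else {
        Outcome::Accepted
    }
}
```
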

SOON has designed a complete interface to implement block derivation, transaction packaging, execution, and fault-proof mechanisms. Combined with the previous restructuring of the execution layer, Decoupled SVM has been fully realized, decoupling the core execution layer from the consensus layer.

Conclusion and Outlook

After deeply analyzing the technical architecture of Decoupled SVM, we can clearly see that the benefits of decoupling significantly enhance system security, performance, and flexibility across multiple levels. By decoupling SVM from the consensus layer, SOON not only unleashes the execution power of SVM, achieving high throughput (TPS), but also lays a solid foundation for future scalability and cross-chain operations.

One of the core advantages of Decoupled SVM is improved security. By separating the execution and consensus layers, validators can independently replay L2 transactions and verify the results, ensuring data verifiability. The derivation mechanism also ensures that users do not need to worry about asset deposits and withdrawals since L1 deposit transactions are included in the process.

From a performance perspective, Decoupled SVM significantly enhances the system’s parallel processing capability. Since SVM no longer relies on real-time confirmation from the consensus layer, it can independently process and execute transactions rapidly, generating blocks. This separation greatly reduces the burden on the consensus layer and unlocks the performance potential of SVM, allowing it to focus on executing parallel transactions and achieving ultra-high throughput. This efficient processing mechanism is particularly crucial for L2 systems, where Decoupled SVM can significantly improve system responsiveness and overall processing power in high-concurrency blockchain applications.

Architecturally, Decoupled SVM demonstrates remarkable flexibility. After decoupling, one can choose any DA layer (Data Availability) and L1 settlement layer or even merge the DA layer with L1 to implement SOON Stack. Through this design, SVM can seamlessly integrate with various Layer 1 networks (e.g., TON, BTC, or other public chains) and data availability solutions. This not only enhances system scalability but also provides great flexibility for deploying different Layer 2 scaling solutions in the future.

By providing a detailed analysis in this article, we aim to showcase the technical advantages of Decoupled SVM, particularly its outstanding performance in terms of security, performance, and flexibility.

Looking ahead, the SOON Stack based on Decoupled SVM will play a crucial role in cross-chain, Layer 2 scaling, and data availability solutions. The decoupling architecture is not only applicable to existing blockchain ecosystems but also provides an ideal infrastructure for emerging cross-chain protocols and multi-chain applications. SVM can serve as the core virtual machine for various Layer 2 and cross-chain applications, flexibly adapting to different consensus mechanisms and Layer 1 networks to meet the needs of different industries. Combined with the upcoming interoperability products, the potential of the SVM ecosystem will be further expanded.

About SOON

SOON Stack is the most efficient Solana Virtual Machine (SVM) rollup stack, delivering high performance on any L1 ecosystem. The execution layer uses a Decoupled Solana Virtual Machine (SVM), as opposed to the forked SVM framework used by most SVM projects. The derivation pipeline and dispute game are implemented according to the OP Stack specs. SOON’s mission is to create the highest-throughput rollup stack using the SVM, accelerating SVM adoption, bringing costs down by 10x, and unlocking use cases across ecosystems.

SOON will also launch the SOON Mainnet on top of Ethereum. The SOON Mainnet, through the SOON Stack, makes use of a Decoupled SVM, which differentiates it from all other SVM projects that use forked SVMs. A Decoupled SVM enables fraud proofs, bringing higher security and reducing DA blobspace waste. As the incentive and execution layer, the SOON Mainnet will play a key role in onboarding developers into the SVM ecosystem and keeping them there.

SOON is aligned with Anza, the spin-off development studio from Solana Labs; the SVM specs in Anza’s Agave repo serve as the implementation reference for SOON. The team has extensive experience in crypto and a strong record of execution. Co-founder and CEO Joanna Zeng has been in crypto since 2017 and led BD & Partnerships at Coinbase, Optimism, and Aleo. Technical co-founder Andrew Zhou is well respected for developing smart contracts and L1s, with five years of Rust and six years of Golang experience.

SOON is backed by some of the most well-respected angel investors in the space, including Anatoly “Toly” Yakovenko, Co-Founder of Solana Labs; Lily Liu, President of the Solana Foundation and Founder of Anagram Ventures; Jonathan King, Principal at Coinbase Ventures; Mustafa Al-Bassam, Co-Founder of Celestia Labs; Amrit Kumar, Co-Founder of AltLayer; Prabal Banerjee, Co-Founder of Avail; and other prominent builders.
