Cosmos & Polkadot vs. Layer 2 Stacks: Series 1 — An Examination of Underlying Technology

Gryphsis Academy
30 min read · Sep 15, 2023

1. Introduction

Recently, leading projects like Optimism, zkSync, Polygon, Arbitrum, and StarkNet, with ETH Layer 2 at the forefront, have each introduced their own Layer 2 stack solutions. These endeavors aim to create an open-source, modular codebase, enabling developers to customize their own Layer 2.

It’s widely recognized that Ethereum today is notoriously slow, with high gas fees. Although Layer 2 solutions like Optimism and zkSync Era have alleviated these issues, Dapps, whether deployed directly on the EVM or on a Layer 2, still grapple with a fundamental “compatibility” challenge. This concerns not only whether a Dapp’s underlying code is compatible with the EVM, but also the Dapp’s sovereignty.

The first challenge is at the code level. Because the EVM accommodates a wide variety of applications deployed on it, it’s optimized for the average use case, ensuring it caters to all user types. However, this isn’t always optimal for Dapps. For instance, GameFi applications might prioritize speed and performance, while SocialFi users may value privacy and security. Because of the EVM’s generic nature, Dapps often need to make sacrifices, presenting a compatibility dilemma at the code level.

The second challenge is about sovereignty. Since all Dapps share the same infrastructure, two layers of governance emerge: application governance and foundational governance. Application governance is undeniably subordinate to foundational governance, and some specific needs of Dapps can only be met through upgrades to the underlying EVM. For example, Uniswap V4’s new features require the EVM to support transient storage, relying on the integration of EIP-1153 in the Cancun upgrade.

To address Ethereum Layer 1 (L1)’s limited processing capability and sovereignty issues, Cosmos (launched in 2019) and Polkadot (launched in 2020) came into existence. Both platforms aim to help developers build their own blockchains, allowing both the blockchain & Dapps to maintain sovereign governance, achieve high-performance cross-chain interoperability, and realize a comprehensive interchain network.

Four years later, Layer 2 (L2) solutions have also successively launched their superchain network proposals, ranging from the OP Stack to the ZK Stack, Arbitrum Orbit, Polygon 2.0, and finally, StarkNet also introduced its Stack concept.

What kind of interactions and sparks will we witness between the pioneers of complete chain networks like Cosmos & Polkadot and the flourishing L2 solutions? In the following sections, we’ll delve into each proposal, comparing their strategies and driving values, analyzing their strengths and weaknesses, and exploring future prospects.

Due to the length, we will present this in a 3-part series:

  • Series 1 will outline and compare the technical solutions from various projects.
  • Series 2 will analyze the tokenomics and ecosystems of each solution. It will summarize key factors to consider when selecting Layer 1 and Layer 2 stacks.
  • Series 3 will conclude with how Layer 2s can develop their “superchains”.

By dividing the content into 3 easy-to-digest articles, readers can better grasp the key information step-by-step.

2. Cosmos

2.1 Architectural Framework

As mentioned earlier, when an ecosystem comprises a large number of application-specific chains and each chain communicates and transfers tokens via the IBC (Inter-Blockchain Communication) protocol, the overall network can become as intricate and challenging to navigate as a spider’s web.

To address this complexity, Cosmos introduced a tiered architecture that encompasses two types of blockchains: Hubs (Central Pivot Chains) and Zones (Regional Chains).

Source: Cosmos Whitepaper

Zones are conventional application chains, while Hubs are blockchains specifically designed to connect these Zones, mainly facilitating communication between them. When a Zone establishes an IBC connection with a Hub, it can automatically access (i.e., send to and receive from) every other Zone connected to that Hub, significantly reducing communication complexity.

It’s also important to differentiate between Cosmos and the Cosmos Hub. The Cosmos Hub is just one chain within the Cosmos ecosystem, primarily serving as the issuer of $ATOM and a communication center. While one might perceive the Hub as the central entity of the ecosystem, in reality, any chain can function as a Hub. Making the Hub the central authority contradicts Cosmos’s original intent.

At its core, Cosmos is dedicated to ensuring that every chain operates autonomously with absolute sovereignty. If the Hub becomes a power center, this sovereignty becomes compromised. This distinction is crucial when understanding the role of Hubs.
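The complexity saving from Hubs is easy to quantify: with n Zones connected pairwise, the network needs one IBC connection per pair, while a hub-and-spoke topology needs only one connection per Zone. A minimal sketch (illustrative only, not Cosmos code):

```python
def pairwise_connections(n_zones: int) -> int:
    """Direct zone-to-zone links: one per unordered pair of zones."""
    return n_zones * (n_zones - 1) // 2

def hub_connections(n_zones: int) -> int:
    """Hub-and-spoke links: one IBC connection per zone, to the Hub."""
    return n_zones

# With 100 zones, 4950 direct links collapse to 100 links via a Hub.
direct = pairwise_connections(100)
via_hub = hub_connections(100)
```

The quadratic-versus-linear gap is why a dense interchain network benefits from routing through Hubs.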

2.2 Key Technologies

2.2.1 IBC

IBC (Inter-Blockchain Communication) allows for the transfer of tokens and data between heterogeneous chains. In the Cosmos ecosystem, while the foundational SDK framework is consistent and mandates the use of the Tendermint consensus engine, heterogeneity still exists, as chains may possess different features, use cases, and implementation details.

So, how is communication between heterogeneous chains achieved?

It simply requires consensus at the finality level. Instant finality means that once more than two-thirds of the validators agree on a block, the block won’t fork, ensuring transactions are final as soon as they are added to a block. Regardless of differences in application use cases and consensus, as long as chains maintain this level of consensus finality, interoperation between chains adheres to a unified rule.

Here’s a basic procedure for cross-chain communication, assuming a transfer of 10 $ATOMs from Chain A to Chain B:

  1. Tracing: Each chain runs a light node of every other chain, allowing each chain to verify the others.
  2. Bonding: Firstly, the 10 $ATOMs on Chain A are locked, rendering them unusable. Proof of this lock is then sent.
  3. Relay: A relay between Chains A and B forwards the lock proof.
  4. Validation: Chain B validates the block of Chain A. If validated, 10 $ATOMs are minted on Chain B. However, these $ATOMs on Chain B aren’t “real” $ATOMs but serve as vouchers. While the locked $ATOMs on Chain A are unusable, those on Chain B can be used normally. Once the vouchers on Chain B are sent back and burned, the corresponding $ATOMs on Chain A are unlocked.
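The lock-relay-mint flow above can be sketched as follows. This is a toy model for illustration only; the class and method names are invented, and real IBC adds light-client verification, packet commitments, and timeouts:

```python
class Chain:
    def __init__(self, name):
        self.name = name
        self.balances = {}   # account -> spendable tokens
        self.locked = 0      # escrowed tokens backing vouchers elsewhere
        self.vouchers = {}   # account -> voucher tokens minted via IBC

    def lock(self, account, amount):
        """Bonding: escrow tokens on the source chain and emit a lock proof."""
        assert self.balances.get(account, 0) >= amount, "insufficient funds"
        self.balances[account] -= amount
        self.locked += amount
        return {"src": self.name, "account": account, "amount": amount}

    def mint_voucher(self, proof):
        """Validation: after verifying the lock proof, mint voucher tokens."""
        acct, amt = proof["account"], proof["amount"]
        self.vouchers[acct] = self.vouchers.get(acct, 0) + amt

def relay(proof, dst):
    """Relay: forward the lock proof from the source to the destination."""
    dst.mint_voucher(proof)

chain_a, chain_b = Chain("A"), Chain("B")
chain_a.balances["alice"] = 25
proof = chain_a.lock("alice", 10)   # 10 $ATOM escrowed on Chain A
relay(proof, chain_b)               # voucher $ATOM minted on Chain B
```

The key invariant is that voucher supply on Chain B never exceeds the escrow on Chain A, which is what makes the vouchers redeemable.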

The greatest challenge in cross-chain communication isn’t representing data from one chain on another, but handling scenarios like chain forks and reorganizations.

Because every chain in Cosmos operates as an independent, sovereign entity with its own dedicated validators, malicious partitions can occur. For instance, when Chain B receives a message from Chain A, it must verify Chain A’s validator set in advance to decide whether to trust that chain.

Source: Cosmos Whitepaper

For example, imagine the small red dots in the diagram represent ETM tokens, and users from Zones ABC all want to utilize EVMOS to run Dapps within their respective zones. Due to cross-chain communication and asset transfers, they have all received ETM.

If at this point, the Ethermint zone were to launch a double-spend attack, Zones ABC would undoubtedly be affected. However, the impact would be limited to them. The rest of the network that isn’t associated with ETM wouldn’t be affected at all. This ensures that even if malicious activity takes place within a specific zone, it cannot disrupt the entirety of the Cosmos network.

2.2.2 Tendermint BFT

Cosmos employs Tendermint BFT as its foundational consensus algorithm and engine. This algorithm packages the blockchain’s base infrastructure and consensus layer into a generalized engine solution.

Through the ABCI (Application Blockchain Interface), the application layer is decoupled from the core consensus and networking layers while remaining compatible with them. Consequently, developers have the freedom to write their applications in any programming language they prefer.

Source:https://v1.cosmos.network/

2.2.3 Cosmos SDK

The Cosmos SDK is a modular framework introduced by Cosmos that simplifies building application chains on top of the consensus layer. Developers can effortlessly create specific applications or chains without needing to rewrite the code for each module from scratch, considerably reducing the development burden. Moreover, it now allows developers to port applications initially deployed on the EVM over to Cosmos.

Source:https://v1.cosmos.network/intro

Apart from this, blockchains built using Tendermint and the Cosmos SDK are also pioneering new ecosystems and technologies that are steering the industry’s growth. Examples include the privacy chain Nym and Celestia, which offers data availability. It’s precisely the flexibility and user-friendliness provided by Cosmos that allows developers to focus on innovation in their projects without having to deal with redundant tasks.

2.3 Interchain Security & Interchain Accounts

1) Interchain Security

Cosmos differs from the Ethereum ecosystem in that it doesn’t have L1 and L2 distinctions. In the Cosmos environment, each application chain is on an equal footing; there are no hierarchical or layered relationships between chains.

However, due to this structure, interchain security isn’t as robust as in Ethereum. In Ethereum, the finality of all transactions is confirmed by Ethereum itself, inheriting the underlying security. But for standalone blockchains that maintain their own security, how should they safeguard themselves?

Cosmos introduced Interchain Security. Fundamentally, it works by sharing a large number of existing nodes to achieve shared security. For instance, a standalone chain can share a set of validators with the Cosmos Hub to produce new blocks for that chain. Since the nodes serve both the Cosmos Hub and the standalone chain simultaneously, they can receive fees and rewards from both chains.

Source: Tokenomics DAO

In the given scenario, transactions originally generated within Chain X would be produced and verified by nodes of Chain X. However, if Chain X shares nodes with Cosmos Hub ($ATOM), the transactions generated on Chain X would be verified and calculated by the Hub’s nodes, producing new blocks for Chain X.

Logically speaking, choosing a chain with a large number of nodes, such as the mature Hub chain, would be the top choice for shared security. To attack such chains, adversaries would need to stake a substantial amount of $ATOM tokens, thus escalating the difficulty of the attack.

Moreover, the Interchain Security mechanism significantly reduces the barriers to entry for new chains. Typically, a new chain without significant resources might spend considerable time attracting validators and nurturing its ecosystem. But in Cosmos, since they can share validators with the Hub, it greatly eases the pressure on new chains, accelerating their developmental process.
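One way to see why borrowing the Hub’s validator set raises the attack cost is to look at the stake thresholds a Tendermint-style chain implies: halting finality requires controlling more than one-third of bonded stake, and finalizing invalid blocks more than two-thirds. A toy sketch (names and numbers are illustrative):

```python
class SharedSecurity:
    """Consumer chains reuse the provider chain's bonded validator set."""

    def __init__(self, validators):
        self.validators = dict(validators)   # validator name -> bonded stake

    def total_stake(self):
        return sum(self.validators.values())

    def halt_threshold(self):
        """Stake needed to stop finality: more than 1/3 of bonded stake."""
        return self.total_stake() / 3

    def takeover_threshold(self):
        """Stake needed to finalize invalid blocks: more than 2/3."""
        return 2 * self.total_stake() / 3

# A new chain borrowing a hub with 1,000 staked tokens inherits the
# hub's attack cost, rather than starting from its own tiny validator set.
hub = SharedSecurity({"val1": 400, "val2": 350, "val3": 250})
```

A standalone chain securing only a small bonded stake would have proportionally tiny thresholds, which is exactly the gap Interchain Security closes.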

2) Interchain Account

Within the Cosmos ecosystem, since each application chain governs itself, applications cannot access each other. As a result, Cosmos offers an interchain account, allowing users to directly access all Cosmos chains supporting IBC from the Cosmos Hub. This way, users can interact with an application on Chain B while being on Chain A, enabling full-chain interaction.

3. Polkadot

Like Cosmos, Polkadot is dedicated to building an infrastructure that allows developers to freely deploy new chains and achieve interoperability between chains.

3.1. Architectural Framework

3.1.1 Relay Chain

The Relay Chain, also known as the main chain, can be likened to the sun in a solar system. It acts as the core part of the entire network, with all the parachains revolving around it. As illustrated in the diagram, the Relay Chain connects various chains with different functions, such as transaction chains, file storage chains, IoT chains, and so forth.

Source: Polkadot

In the Polkadot network, the Relay Chain is responsible for the network’s overall security and consensus, ensuring that all connected chains operate in sync. The different parachains, each with their specialized functions, communicate with the main Relay Chain to maintain interoperability and ensure transactions are correctly processed and secured.

This is Polkadot’s layered expansion solution. One relay chain links to another relay chain, in principle enabling unbounded scalability. (Note: At the end of June 2023, Polkadot’s founder Gavin Wood introduced Polkadot 2.0, which might change the way we understand Polkadot.)

3.1.2 Parachains

The relay chain has a limited number of parachain slots. Parachains connect to the relay chain through these slots, as shown in the diagram:

Source:https://www.okx.com/cn/learn/slot-auction-cn

However, to acquire a slot, the contending parachains must stake their $DOT. Once they secure a slot, parachains can interact with the Polkadot mainnet through this slot and share its security. It’s worth noting that the number of slots is limited, growing progressively. Initially, it is expected to support 100 slots. Furthermore, the slots will periodically be reshuffled and reallocated based on governance mechanisms to maintain the vitality of the parachain ecosystem.

Parachains that obtain a slot can enjoy the shared security and cross-chain liquidity of the Polkadot ecosystem. At the same time, parachains are expected to contribute back to the Polkadot mainnet, for example by handling most of the network’s transaction processing.

3.1.3 Parathreads

Parathreads are another processing mechanism, similar to parachains. The difference is that while each parachain has a dedicated slot and can operate continuously, parathreads share a pool of slots, rotating their usage.

When a parathread gains access to a slot, it can temporarily function like a parachain, processing transactions, and producing blocks. However, once its allotted time ends, it must relinquish the slot for other parathreads to use.

Thus, parathreads don’t require long-term staking. They only need to pay a fee each time they gain access to a slot, making it a pay-as-you-go model. Of course, if a parathread garners enough support and votes, it can upgrade to become a full-fledged parachain with a dedicated slot.

Compared to parachains, parathreads are more cost-effective, reducing the entry barrier to Polkadot. However, they don’t offer guarantees about when they can access a slot, making them less stable. They are more suited for temporary use or testing new chains. Chains that wish to operate stably would still need to upgrade to become parachains.
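The pay-as-you-go rotation described above can be sketched as a simple round-robin scheduler. This is purely illustrative; real parathread scheduling is fee-market-based and considerably more involved:

```python
from collections import deque

class SharedSlot:
    """One slot rotated round-robin among parathreads, with a fee per turn."""

    def __init__(self, fee):
        self.fee = fee
        self.queue = deque()

    def register(self, parathread):
        self.queue.append(parathread)

    def next_turn(self):
        """Give the slot to the next parathread, charge the fee, requeue it."""
        thread = self.queue.popleft()
        self.queue.append(thread)
        return thread, self.fee

slot = SharedSlot(fee=1)
slot.register("thread-a")
slot.register("thread-b")
order = [slot.next_turn()[0] for _ in range(4)]   # a, b, a, b
```

The contrast with a parachain is that a parachain would hold the slot in every round, while each parathread here authors only when its turn comes up.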

3.1.4 Bridge

Communication between parachains can be achieved through XCMP (explained later). They share security and have the same consensus mechanism. But what about heterogeneous chains?

While the Substrate framework ensures that chains joining the Polkadot ecosystem are homogenous, as the ecosystem matures, large, well-established public chains may also wish to join. It’s nearly impossible to expect them to re-deploy using only Substrate. So, how can message transfers between heterogeneous chains be realized?

To give an everyday example, if an iPhone wants to send files to an Android phone over a cable, an adapter is needed because their ports differ. This is the practical function of the bridge.

It acts as an intermediary parachain between the relay chain and the heterogeneous (external) chains. By deploying smart contracts on both the parachain and the heterogeneous chain, the relay chain can interact with the external chain, realizing cross-chain functionality.

3.2. Key Technologies

3.2.1 BABE & Grandpa

BABE (Blind Assignment for Blockchain Extension) is Polkadot’s block production mechanism. Simply put, it randomly selects validators to produce new blocks. Each validator is assigned to different time slots. Within this time slot, only the validator assigned to that slot can produce a block.

Clarification:

  • A time slot is a method used in blockchain production mechanisms to segment time sequences. The blockchain is divided into fixed-interval time slots. Each slot represents a fixed block production period.
  • Within each time slot interval, only nodes allocated to that time slot can produce blocks.

In other words, it’s an exclusive time period. In time period 1, the validator assigned to period 1 is responsible for block production. Each validator gets its own period and cannot produce blocks outside it.

The advantage of this is that random assignment maximizes fairness since everyone has a chance to be assigned. Also, since time slots are predetermined, everyone can prepare in advance, preventing unexpected block productions.
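The slot-assignment idea can be sketched as follows. Real BABE assigns slots via a verifiable random function (VRF), so a slot can in fact have zero or several eligible authors; this toy version simply makes a deterministic random assignment of one validator per slot:

```python
import random

def assign_slots(validators, n_slots, seed):
    """Assign exactly one validator to each time slot (toy version)."""
    rng = random.Random(seed)
    return {slot: rng.choice(validators) for slot in range(n_slots)}

def may_author(schedule, slot, validator):
    """Only the validator assigned to a slot may produce its block."""
    return schedule.get(slot) == validator

schedule = assign_slots(["v1", "v2", "v3"], n_slots=6, seed=7)
```

Because the schedule is fixed before the slots begin, every validator knows in advance exactly when it is expected to author.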

By utilizing this random assignment method, the orderly and fair operation of the Polkadot ecosystem is ensured. So, how is consensus ensured across blocks? Next, we introduce another Polkadot mechanism: Grandpa.

Grandpa (formally GRANDPA, GHOST-based Recursive ANcestor Deriving Prefix Agreement) is a mechanism for finalizing blocks. It resolves potential forks that might occur during BABE’s block production. For instance, if two BABE validators produce competing blocks, a fork occurs. This is where Grandpa comes into play: it asks all validators which chain they think is better.

Validators review both chains and vote for the one they deem superior. The chain with the most votes is confirmed by Grandpa as the final chain, while the rejected chain is discarded.

Thus, Grandpa acts like a “grandfather” to all validators, serving as the ultimate decision-maker, eliminating the risk of forks that BABE might introduce. It ensures that blockchain can finalize a chain that everyone approves of.

In conclusion, while BABE is responsible for randomly producing blocks, Grandpa is in charge of selecting the final chain. Together, they ensure the safe operation of the Polkadot ecosystem.
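The vote described above can be modeled as a simple tally. Note that real GRANDPA validators vote on block ancestry rather than whole chains, and finality requires a two-thirds supermajority; the sketch below keeps only that intuition:

```python
from collections import Counter

def finalize(votes, n_validators):
    """Return the chain finalized by a > 2/3 supermajority, else None."""
    if not votes:
        return None
    chain, count = Counter(votes.values()).most_common(1)[0]
    return chain if count * 3 > 2 * n_validators else None

votes = {"v1": "chain-A", "v2": "chain-A", "v3": "chain-A", "v4": "chain-B"}
winner = finalize(votes, n_validators=4)   # chain-A has 3 of 4 votes > 2/3
```

If no fork gathers a supermajority, nothing is finalized yet, mirroring how GRANDPA can lag behind BABE's block production until votes accumulate.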

3.2.2 Substrate

Substrate is a blockchain development framework written in Rust. Through FRAME, it offers extensible underlying components, allowing Substrate to support a variety of different use cases. Any blockchain built using Substrate is not only natively compatible with Polkadot but can also share security with other parachains and operate concurrently. Furthermore, it supports developers in creating custom consensus mechanisms, governance models, and more, continuously iterating based on the developer’s needs.

Moreover, Substrate offers significant convenience for self-upgrades. Its runtime is an independent module that can be separated from the other components, so when updating features, a chain can directly replace this runtime module. As long as a parachain maintains network and consensus synchronization with the relay chain, it can update its runtime logic directly without causing a hard fork.

3.2.3 XCM

If one were to explain XCM in a sentence, it would be: a cross-chain communication format that allows different blockchains to interact.

For instance, Polkadot has many parachains. If parachain A wants to communicate with parachain B, it needs to package its information using the XCM format. XCM is like a language protocol; everyone uses this protocol to communicate, enabling barrier-free communication.

The XCM format (Cross-Consensus Message Format) is the standard message format used for cross-chain communication within the Polkadot ecosystem. From it, three different message delivery methods have been derived:

  • XCMP (Cross-Chain Message Passing): Under development. Messages can be transmitted directly or forwarded through the relay chain. Direct transmission is faster, while relay chain forwarding is more scalable but introduces latency.
  • HRMP/XCMP-lite (Horizontal Relay Routed Message Passing): In use. It is a simplified alternative to XCMP. All messages are stored on the relay chain and currently handle the primary cross-chain message delivery work.
  • VMP (Vertical Message Passing): Under development. It is a protocol for vertically passing messages between the relay chain and parachains. Messages are stored on the relay chain and are transmitted after being parsed by the relay chain.

For example, because the XCM format contains various information like the amount of assets to be transferred and the receiving account when sending a message, the HRMP channel or relay chain transmits this XCM formatted message. When another parachain receives the message, it checks if the format is correct, parses the message content, and then executes as per the instructions in the message, such as transferring assets to a specified account. In this way, cross-chain interaction is achieved, and the two chains successfully communicate.
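The pack-check-execute flow in the paragraph above can be sketched like this. The field names are invented for illustration; real XCM is a typed instruction set, not JSON:

```python
import json

REQUIRED_FIELDS = {"asset", "amount", "beneficiary"}

def pack(asset, amount, beneficiary):
    """Sender: serialize the transfer into the agreed message format."""
    return json.dumps({"asset": asset, "amount": amount,
                       "beneficiary": beneficiary})

def execute(raw, balances):
    """Receiver: check the format, parse it, then apply the instruction."""
    msg = json.loads(raw)
    if set(msg) != REQUIRED_FIELDS:
        raise ValueError("malformed message")
    acct = msg["beneficiary"]
    balances[acct] = balances.get(acct, 0) + msg["amount"]
    return balances

result = execute(pack("DOT", 5, "bob"), {})
```

The point of a shared format is exactly this: the receiving chain can validate and act on a message without knowing anything else about the sender's internals.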

Such a communication bridge as XCM is essential for a multi-chain ecosystem like Polkadot.

After understanding Cosmos and Polkadot, one should have a grasp of their respective visions and frameworks. So, what exactly are the Stack solutions introduced by each ETH L2? Let’s dive into the details.

4. OP Stack

4.1 Architectural Framework

Based on the official documentation, the OP Stack consists of a series of components maintained by the OP Collective. It began as the software powering the Optimism mainnet and is evolving into the foundation of the Optimism Superchain and its associated governance.

L2 solutions developed using the OP Stack can benefit from shared security, a communication layer, and a unified development stack. Furthermore, developers have the autonomy to tailor their chains to serve any specific blockchain application.

From the information provided, we can deduce:

  • OP Bridge: All superchains within the OP Stack communicate through the OP Bridge, a specialized superchain bridge.
  • Ethereum Consensus: Ethereum acts as the underlying layer, providing security consensus. It serves as the base upon which the super L2 chains are built.

Source: Optimism Documentation

1) Data Availability Layer: Chains using the OP Stack can tap into this data availability module to obtain their input data. Since all chains source their data from this layer, its security is paramount.

The inability to retrieve specific data from this layer may hinder chain synchronization. The diagram indicates that OP Stack employs Ethereum and EIP-4844. In essence, it accesses data directly from the Ethereum blockchain.

2) Ordering Layer: The Sequencer determines how user transactions are collected and published to the data availability layer. Within OP Stack, a single dedicated sequencer manages this today; however, the design ensures the sequencer cannot withhold users’ transactions indefinitely.

In future iterations, OP Stack plans to modularize sequencers, allowing chains an easy modification of the sequencing mechanism. The diagram depicts both a single sequencer and multiple sequencers.

While a single sequencer model allows the chain to designate anyone as the Sequencer at any given time (posing a higher risk), the multiple sequencer model chooses from a predefined set of potential participants, offering distinct choices for chains built on OP Stack.

3) Derivation Layer: This layer dictates how raw data from the data availability layer is processed into inputs for the execution layer, which it feeds in via the Engine API. The image suggests this layer is composed of rollup derivation and indexers.

4) Execution Layer: This layer lays out the state structure within the OP Stack system. Upon receiving inputs from the derivative layer, the engine API triggers a state transition.

The diagram shows that, under OP Stack, the execution layer is the EVM. With minor tweaks, it can also accommodate other VM types. For instance, Pontem Network plans to leverage OP Stack to develop an L2 with Move VM.

5) Settlement Layer: As the name implies, this layer addresses asset withdrawals within the blockchain. However, such withdrawals require validation of the target chain’s state to a third-party chain, which then processes the assets based on that state.

The essence lies in enabling the third-party chain to grasp the state of the target chain. Once a transaction gets published and finalized on the data availability layer, it also gets finalized on the OP Stack chain. Without breaching the foundational data availability layer, the transaction can’t be altered or deleted.

The settlement layer might not have acknowledged the transaction since it needs to verify its outcome, yet the transaction remains immutable. This mirrors the heterogeneous chain mechanism where diverse settlement mechanisms exist, prompting the settlement layer in OP Stack to be read-only, granting heterogeneous chains the authority to decide based on the state of OP Stack.

This layer hosts the fault proofs from OP Rollup. Challengers can dispute state roots they question; if a state root is not proven erroneous within a set time, it is automatically deemed correct.

6) Governance Layer: The image reveals that OP Stack employs a multi-signature system along with OP tokens for governance. Multi-signatures typically oversee upgrades of the stack system components, with actions executed once all participants have signed. OP token holders can also voice their opinions by voting within the community DAO.

In summation, OP Stack is reminiscent of a blend between Cosmos and Polkadot. It affords the chain customization akin to Cosmos while embracing shared security and consensus much like Polkadot.

4.2 Key Technologies

4.2.1 OP Rollup

OP Rollup ensures security through on-chain data availability and fault-proof challenges, and facilitates batched transaction execution. Here’s a breakdown of its implementation steps:

1) User Initiates a Transaction on L2: A user starts a transaction on the Layer 2 network.

2) Sequencer Batches and Processes: The Sequencer gathers transactions in batches for processing. Once processed, the transaction data, along with the new state root, is synced to its smart contract on L1 for security verification. Notably, as the Sequencer processes the transactions, it also generates its state root, which is subsequently synced to L1.

3) Verification and Response from L1: After validation on L1, the data and state root are relayed back to L2. Consequently, the user’s transaction status undergoes secure verification and processing.

4) Optimistic State Root Acknowledgement by OP-rollup: At this stage, OP Rollup considers the Sequencer-generated state root as optimistic and correct. An open time window is then provided where validators can challenge and verify the compatibility of the Sequencer’s state root with the transaction’s state root.

5) Final Determination: If no validators engage during the time window, the transaction is automatically regarded as valid. However, if the Sequencer is found to have acted maliciously, it will face corresponding penalties.

This structure allows for efficient and secure transaction execution, making the most of both Layer 1 and Layer 2 capabilities.
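The optimistic timeline in steps 4 and 5 can be sketched as a small state machine: a posted state root is final only once the challenge window elapses without a successful challenge. The seven-day window and all names below are illustrative:

```python
class OptimisticBatch:
    """A state root posted to L1, final after an unchallenged window."""

    def __init__(self, state_root, posted_at, window=7):
        self.state_root = state_root
        self.posted_at = posted_at    # day the root was posted to L1
        self.window = window          # challenge period, in days
        self.challenged = False

    def challenge(self):
        """A validator disputes the root within the window."""
        self.challenged = True

    def is_final(self, now):
        """Valid once the window elapses with no successful challenge."""
        return not self.challenged and now >= self.posted_at + self.window

batch = OptimisticBatch("0xabc", posted_at=0)
```

This is also why withdrawals from an optimistic rollup are slow: the batch is "optimistically" assumed valid, but finality has to wait out the window.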

4.2.2 Cross-Chain Bridging

a) L2 Message Passing

Given that OP Rollup relies on fault proofs, transactions require a challenge period to finalize, which is time-consuming and can degrade user experience. Zero-knowledge proofs (ZKPs), meanwhile, are more costly, harder to implement correctly, and will take more time to deploy at scale. OP Stack therefore introduces modular proofs to solve communication issues between OP superchains on L2.

This allows developers building on the OP Stack to freely choose any bridging type. Current options from OP include:

  1. High security, high latency, fault proof (the standard high-security bridge)
  2. Low security, low latency, fault proof (a short challenge period for reduced latency)
  3. Low security, low latency, validity proof (using a trusted chain verifier instead of a ZKP)
  4. High security, low latency, validity proof (ready when ZKPs are)

Developers can adapt their choice of bridge according to the needs of their specific chain. For instance, a high-value asset might necessitate a high-security bridge. The diverse bridging technologies allow efficient movement of assets and data between different chains.
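The four options can be read as points on a (proof type, security, latency) grid, and a chain's choice as a lookup on that grid. A sketch under that reading (the labels mirror the list above, not any actual OP Stack API):

```python
BRIDGES = [
    {"proof": "fault",    "security": "high", "latency": "high"},
    {"proof": "fault",    "security": "low",  "latency": "low"},
    {"proof": "validity", "security": "low",  "latency": "low"},
    {"proof": "validity", "security": "high", "latency": "low"},
]

def pick_bridge(high_security, low_latency):
    """Return the first bridge matching the chain's requirements."""
    want_sec = "high" if high_security else "low"
    want_lat = "low" if low_latency else "high"
    for bridge in BRIDGES:
        if bridge["security"] == want_sec and bridge["latency"] == want_lat:
            return bridge
    return None

# A high-value asset chain tolerating delays picks a fault-proof bridge;
# one needing both security and speed ends up on the ZKP-backed option.
conservative = pick_bridge(high_security=True, low_latency=False)
ambitious = pick_bridge(high_security=True, low_latency=True)
```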

b) Cross-Chain Transactions

Traditional cross-chain transactions complete asynchronously, meaning one leg of a transaction may execute while the other fails. To address this, OP Stack introduces the concept of a shared Sequencer.

For example, if a user wants to perform cross-chain arbitrage, having shared Sequencers on Chain A and Chain B can achieve transaction sequence consensus. Payments for transactions will only be made after they are confirmed on both chains, with both Sequencers sharing the associated risk.
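The shared-Sequencer guarantee described above is essentially atomicity across two ledgers: both legs of the trade are included, or neither is. A toy sketch of that property (names invented for illustration):

```python
class SharedSequencer:
    """Includes both legs of a cross-chain trade, or neither."""

    def __init__(self):
        self.ledgers = {"A": [], "B": []}

    def submit_pair(self, tx_a, tx_b):
        """Run both legs; roll back both ledgers if either leg fails."""
        snapshot = {k: list(v) for k, v in self.ledgers.items()}
        try:
            self.ledgers["A"].append(tx_a())
            self.ledgers["B"].append(tx_b())
            return "settled"
        except Exception:
            self.ledgers = snapshot   # revert both chains together
            return "reverted"

seq = SharedSequencer()
status = seq.submit_pair(lambda: "buy on A", lambda: "sell on B")
```

With two independent sequencers there is no single party able to take this snapshot-and-revert step, which is the gap the shared Sequencer fills.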

c) Superchain Transactions

Because Ethereum L1’s data availability capacity is limited, publishing every superchain transaction to L1 does not scale. OP Stack therefore leverages a Plasma protocol to augment the amount of data available to OP chains, effectively supplementing L1 data availability (DA).

This moves transaction data availability down to the Plasma chain, recording only data commitments on L1 and significantly enhancing scalability.

5. ZK Stack

5.1 Structural Framework

ZK Stack aims to provide an open-source, composable, modular codebase built upon the same underlying technology (ZK Rollup) as zkSync Era. This design allows developers to craft their custom, ZK-powered L2 and L3 superchains.

Given that ZK Stack is free and open-source, developers have the liberty to tailor-make blockchains according to their specific requirements. Whether opting for a second-layer network that operates parallel to the zkSync Era or a third-layer one built on top of it, the possibilities for customization are expansive.

As stated by Matter Labs, from selecting data availability modes to employing one’s own token for decentralized sequencing, creators have complete autonomy to shape and personalize various aspects of their chain. Notably, these ZK Rollup superchains operate independently, relying solely on Ethereum L1 for security and validation.

Source: zkSync Documentation

5.2 Key Technologies

1) ZK Rollup

ZK Rollup is central to the underpinnings of ZK Stack. Here’s the primary user flow for ZK Rollup:

Source: zkSync Documentation

Users submit their transactions, which the Sequencer collects into ordered batches. The Sequencer generates a validity proof (a STARK or SNARK) and computes the new state. The new state, together with the proof, is then submitted to a smart contract deployed on L1 for verification.

If the verification succeeds, the state recorded on L1 is also updated. The advantage of ZK Rollup is its ability to carry out mathematical verification using zero-knowledge proofs, offering higher technical and security standards.
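The submit-prove-verify loop above can be sketched as follows, with a hash commitment standing in for a real SNARK/STARK verifier (which checks a succinct proof rather than recomputing anything). Entirely illustrative:

```python
import hashlib

def commit(batch, new_state_root):
    """Stand-in for a validity proof over a batch and resulting state."""
    payload = repr((list(batch), new_state_root)).encode()
    return hashlib.sha256(payload).hexdigest()

class L1Contract:
    """L1 verifier: accepts a new state root only with a valid proof."""

    def __init__(self, state_root):
        self.state_root = state_root

    def verify_and_update(self, batch, new_state_root, proof):
        if proof != commit(batch, new_state_root):
            return False              # proof does not match the claim
        self.state_root = new_state_root
        return True

contract = L1Contract("0x00")
batch = ["tx1", "tx2"]
proof = commit(batch, "0x01")         # produced by the Sequencer/prover
accepted = contract.verify_and_update(batch, "0x01", proof)
```

Unlike the optimistic flow, there is no challenge window here: a state update is either proven correct at submission time or rejected outright.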

2) Interchain Bridge

As indicated by the structure, the ZK Stack can endlessly expand, continually creating L3, L4, and so on. So, how do we ensure interoperability between these chains?

ZK Stack introduces the Interchain Bridge, deploying a shared bridge’s smart contract on L1. This contract verifies transactions on the superchain through Merkle proofs, which, in essence, operate similarly to ZK Rollup. The distinction is that the flow has shifted from L2-L1 to L3-L2.

ZK Stack supports smart contracts on each superchain. They asynchronously communicate across chains. Users can swiftly transfer their assets across these chains within minutes without trusting any intermediary, all at no extra cost.

For instance, for superchain B to process a message from superchain A, A must first finalize its state up the hierarchy to the nearest chain shared by both A and B. In practice, communication delays for the bridge are a matter of seconds, with superchains able to complete blocks every second at low cost.

Source: zkSync Documentation

Moreover, thanks to compression at the L3 level, proofs can be packaged together. L2 then aggregates these packages further, yielding a significantly higher compression ratio and reducing costs through recursive compression. This enables trustless, rapid (within minutes), and cost-effective (per transaction) cross-chain interoperability.
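The recursive packaging can be pictured as folding many proofs into one commitment at each level, so L1 ultimately verifies a single object no matter how many L3 transactions it covers. In the sketch below, hashing stands in for real recursive proof aggregation:

```python
import hashlib

def aggregate(proofs):
    """Fold many proofs into one commitment (stand-in for recursion)."""
    h = hashlib.sha256()
    for p in sorted(proofs):
        h.update(p.encode())
    return h.hexdigest()

l3_proofs = ["proof-1", "proof-2", "proof-3", "proof-4"]
l2_proof = aggregate(l3_proofs)       # one object per L3 batch
l1_proof = aggregate([l2_proof])      # one object submitted to L1
```

In a real recursive SNARK, each aggregated proof also attests that the proofs beneath it verified; the hash here captures only the size reduction, not that guarantee.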

6. Polygon 2.0

Polygon represents a unique L2 solution, which technically functions as an L1 sidechain to Ethereum. The Polygon team recently announced the Polygon 2.0 initiative, which lets developers craft their own L2 chains using ZK technology. They intend to unify these chains with a novel cross-chain coordination protocol, making the entire network feel like a single, cohesive chain.

Polygon 2.0 is committed to supporting an infinite number of chains, where cross-chain interactions can occur securely and instantly, without the need for additional security or trust assumptions. This aims to realize infinite scalability and unified liquidity.

6.1 Structural Framework

Source: Polygon Blog

1) Staking Layer

The staking layer operates on a Proof-of-Stake (PoS) protocol, leveraging staked $MATIC for decentralized governance, streamlined validator management, and improved validator efficiency.

As the overview shows, the staking layer of Polygon 2.0 comprises two main components: the Validator Manager and the Chain Manager.

  1. The Validator Manager acts as a common validator pool, overseeing all the Polygon 2.0 chains. It handles the registration of validators, staking requests, unstaking requests, and so forth. Think of it as the administrative department for validators.
  2. The Chain Manager, on the other hand, manages the validator set for each individual Polygon 2.0 chain. Unlike the Validator Manager which serves as a communal service, each Polygon chain has its own Chain Manager contract. It mainly focuses on a specific chain’s validator count (which pertains to the level of decentralization), additional validator requirements, and other conditions.
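A toy sketch of the two managers follows; the real components are L1 smart contracts, and every class and method name here is invented for illustration.

```python
class ValidatorManager:
    """Shared pool for all Polygon 2.0 chains: registration, staking, unstaking."""
    def __init__(self):
        self.stakes = {}

    def stake(self, validator, amount):
        self.stakes[validator] = self.stakes.get(validator, 0) + amount

    def unstake(self, validator, amount):
        if self.stakes.get(validator, 0) < amount:
            raise ValueError("insufficient stake")
        self.stakes[validator] -= amount

class ChainManager:
    """Per-chain contract: picks a validator set from the shared pool
    under chain-specific rules (minimum stake, validator count, etc.)."""
    def __init__(self, pool, min_stake, max_validators):
        self.pool = pool
        self.min_stake = min_stake
        self.max_validators = max_validators

    def validator_set(self):
        eligible = [v for v, s in self.pool.stakes.items() if s >= self.min_stake]
        # Keep the highest-staked validators up to this chain's cap.
        return sorted(eligible, key=lambda v: -self.pool.stakes[v])[: self.max_validators]

pool = ValidatorManager()
for v, s in [("alice", 100), ("bob", 50), ("carol", 10)]:
    pool.stake(v, s)
chain = ChainManager(pool, min_stake=20, max_validators=2)
print(chain.validator_set())  # ['alice', 'bob']
```

The division of labor mirrors the text: the pool is shared across all chains, while each chain applies its own admission rules on top of it.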

With the staking layer, the foundational rules and infrastructure for each chain are already set in place, allowing developers to concentrate solely on the development of their individual chains.

2) Interoperability Layer

The cross-chain protocol is vital for seamless intercommunication across the entire network; passing messages between chains securely and reliably is something every cross-chain solution must continuously refine.

Currently, Polygon employs two contracts for support: the Aggregator and the Message Queue.

  1. Message Queue: Tailored specifically for the current Polygon zkEVM protocol, every Polygon chain maintains a local message queue in a fixed format. These messages are included in the ZK proofs generated by the chain. Once the ZK proof is verified on Ethereum, any message from this queue can be safely used by its recipient chain and address.
  2. Aggregator: The purpose of the Aggregator is to offer more efficient services between the Polygon chain and Ethereum. For example, it can consolidate multiple ZK proofs into a single one to be verified by Ethereum, reducing storage costs and enhancing performance. Once the ZK proof is accepted by the Aggregator, the receiving chain can optimistically accept messages, given their trust in the ZK proofs, ensuring seamless message transfer.
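The interplay of a per-chain message queue and the Aggregator might be sketched as follows; all class names, fields, and hashing details are invented for illustration and do not reflect the actual contract layouts.

```python
import hashlib

class PolygonChain:
    """Hypothetical sketch: a chain's local message queue, whose commitment
    rides inside the ZK proof the chain produces."""
    def __init__(self, name):
        self.name = name
        self.queue = []

    def send(self, dest_chain, dest_addr, payload):
        self.queue.append((dest_chain, dest_addr, payload))

    def prove(self):
        # Commit to the full, ordered queue; in reality this commitment is
        # embedded in the chain's ZK proof.
        digest = hashlib.sha256(repr(self.queue).encode()).hexdigest()
        return {"chain": self.name, "queue_root": digest}

class Aggregator:
    """Consolidates many chain proofs into one object verified on Ethereum,
    so one L1 verification covers every participating chain."""
    def aggregate(self, proofs):
        combined = hashlib.sha256(
            "".join(p["queue_root"] for p in proofs).encode()
        ).hexdigest()
        return {"chains": [p["chain"] for p in proofs], "proof": combined}

a, b = PolygonChain("zkevm-a"), PolygonChain("zkevm-b")
a.send("zkevm-b", "0xabc", "transfer 10")
agg = Aggregator().aggregate([a.prove(), b.prove()])
print(agg["chains"])  # ['zkevm-a', 'zkevm-b']
```

Once the aggregated proof is verified on Ethereum, a recipient chain can safely consume any message committed in a sender's queue, which is the "optimistic acceptance" the text describes.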

3) Execution Layer

The execution layer allows any Polygon chain to generate batches of ordered transactions, also known as blocks. Most blockchain networks (like Ethereum and Bitcoin) employ a similar format. The execution layer consists of multiple components such as:

  1. Consensus: Enables validators to reach mutual agreement.
  2. Mempool: Gathers user-submitted transactions and synchronizes them among validators. Users can also check the status of their transactions in the mempool.
  3. P2P: Enables validators and full nodes to discover each other and exchange messages.

Given that this layer is commoditized but complex to implement, existing high-performance solutions (like Erigon) should be leveraged where possible.

4) Proof Layer

The proof layer generates proofs for each Polygon chain and is a high-performance, flexible ZK proof protocol. It usually has the following components:

  1. Common Prover: A high-performance ZK prover that offers a clean interface, aiming to support any type of transaction or state machine format.
  2. State Machine Constructor: Establishes the framework for defining state machines, laying the foundation for the initial Polygon zkEVM. This framework abstracts the intricacies of the proofing mechanism into user-friendly, modular interfaces, allowing developers to customize parameters and build their own large-scale state machines.
  3. State Machine: Simulates the execution environment and transaction format that the prover is certifying. State machines can be implemented using the above constructor or can be fully customized, like by using Rust.

6.2 Key Technologies

Source: Polygon Blog

6.2.1 zkEVM Validium

In the Polygon 2.0 update, the team retained the original Polygon PoS and simultaneously upgraded it to zkEVM Validium.

Source: Polygon Blog

Validium can be understood as a more cost-effective and scalable variant of Rollup. Before the upgrade, however, Polygon zkEVM operated on the (ZK) Rollup principle, and that approach achieved notable success: within just four months of launch, its Total Value Locked (TVL) soared to 33 million US dollars.

Source: DefiLlama

In the long run, the cost of generating proofs for the PoS-based zkEVM might challenge future scalability. While the Polygon team has worked hard to reduce batch costs (reportedly down to just $0.0259 for validating 10 million transactions), one can't help but wonder: if Validium is even more cost-effective, why not use it?

Polygon has released official documentation indicating that in upcoming versions, Validium will assume the roles previously held by PoS, although PoS will still be retained. The primary function of PoS validators will be to ensure data availability and to sequence transactions.

Once upgraded, zkEVM Validium is poised to offer immense scalability and extremely low fees. It’s inherently well-suited for applications that handle a large volume of transactions with minimal fees, such as GameFi, SocialFi, and DeFi platforms. From a developer’s perspective, no special actions are required; following along with the mainnet update will seamlessly integrate the Validium upgrades.

6.2.2 zkEVM Rollup

Currently, Polygon’s PoS (soon to be rebranded as Polygon Validium) and Polygon’s zkEVM Rollup are the two primary networks within the Polygon ecosystem. Post-upgrade, this distinction remains with both networks leveraging the cutting-edge zkEVM technology — one for aggregation and the other for validation, providing added benefits.

Polygon’s zkEVM Rollup offers the pinnacle of security. However, this comes at the cost of slightly higher fees and limited throughput. Still, it’s particularly suited for high-value transactions where security is paramount, like in high-stakes DeFi Dapps.

7. Arbitrum Orbit

Arbitrum, as the current leading Layer 2 public blockchain, has amassed a Total Value Locked (TVL) of over $5.1 billion since its launch in August 2021, securing nearly 54% of the market share among top Layer 2 solutions.

In March of this year, Arbitrum introduced the Orbit version, following a series of ecosystem products:

1. Arbitrum One: The first and core mainnet Rollup of the Arbitrum ecosystem.

2. Arbitrum Nova: The second mainnet Rollup, designed for projects that prioritize cost-efficiency and high transaction volumes.

3. Arbitrum Nitro: A technical software stack supporting Arbitrum L2, making Rollup faster, cheaper, and compatible with the Ethereum Virtual Machine (EVM).

4. Arbitrum Orbit: A development framework for creating and deploying Layer 3 solutions on top of the Arbitrum mainnet.

In this report, we will focus on introducing Arbitrum Orbit.

7.1 Structural Framework

Initially, developers who wanted to use Arbitrum Orbit to create an L2 network had to start by submitting a proposal for approval through a vote of the Arbitrum DAO.

If the proposal was approved, a new L2 chain could be established. Building L3, L4, L5, and so on on top of L2, however, requires no permission: Orbit provides a permissionless framework for deploying customized chains on Arbitrum L2.

Source: Arbitrum Whitepaper

It’s evident that Arbitrum Orbit also aims to empower developers by enabling them to customize their own Orbit L3 chains based on Layer 2 solutions like Arbitrum One, Arbitrum Nova, or Arbitrum Goerli. Developers have the flexibility to tailor aspects such as privacy protocols, licensing, token economic models, community governance, and more for their specific chain, granting them a high degree of autonomy.

Notably, Orbit allows L3 chains to use the native tokens of the underlying L2 chain for fee settlement, facilitating the effective growth of their networks.

7.2 Key Technologies

7.2.1 Rollup & AnyTrust

These two protocols are designed to support Arbitrum One and Arbitrum Nova, as previously mentioned. Arbitrum One is a core mainnet Rollup, while Arbitrum Nova is the second mainnet Rollup that incorporates the AnyTrust protocol, which can expedite settlement and reduce costs through the introduction of a “Trust Assumption.”

Arbitrum Rollup is an Optimistic Rollup (OP Rollup), so we won’t delve into it further. Instead, let’s take a detailed look at the AnyTrust protocol.

The AnyTrust protocol primarily manages data availability and is overseen by a group of third-party entities known as the Data Availability Committee (DAC). It significantly reduces transaction costs by introducing a “Trust Assumption.” AnyTrust operates as a sidechain on top of Arbitrum One, providing lower costs and faster transaction speeds.

So, what exactly is the “Trust Assumption,” and why does its presence reduce both transaction costs and the amount of trust required?

According to Arbitrum’s official documentation, the AnyTrust chain is operated by a committee of nodes, and its security rests on a minimal assumption about how many committee members are honest. For example, with 20 committee members and an assumption that at least 2 are honest, the trust threshold is far lower than in BFT, which requires 2/3 of the members to be honest. Because the committee commits to keeping transaction data available, nodes do not need to record all L2 transaction data on L1; they only record the hash of each transaction batch, which greatly reduces the cost of Rollup. This is why the AnyTrust chain can lower transaction costs.

Regarding trust: assume at least 2 of the 20 members are honest. Then as long as 19 committee members sign a commitment to a transaction’s correctness, it can be executed securely, because even if the one non-signing member is honest, at least one of the 19 signatories must also be honest.
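The signature threshold implied by this reasoning generalizes neatly: with N committee members and an assumption that at least h are honest, any N - h + 1 signatures must include at least one honest signer. A small helper makes the arithmetic checkable:

```python
def required_signatures(n_members: int, min_honest: int) -> int:
    """Smallest signature count that guarantees at least one honest signer,
    under the AnyTrust-style assumption that at least `min_honest` of the
    committee members are honest."""
    # If k members sign, then n_members - k did not sign. In the worst case,
    # every honest member is among the non-signers, which is only possible
    # when n_members - k >= min_honest. Forcing at least one honest signer
    # therefore requires k >= n_members - min_honest + 1.
    return n_members - min_honest + 1

# The 20-member, 2-honest example from the text: 19 signatures suffice.
assert required_signatures(20, 2) == 19
# A BFT-style assumption (e.g. 14 of 21 honest) needs far fewer signatures
# per certificate, but demands much stronger trust in the committee.
assert required_signatures(21, 14) == 8
```

The trade-off is visible in the two assertions: the weaker the honesty assumption, the more signatures each certificate must carry.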

What if members refuse to sign, or a significant number stop cooperating and disrupt normal operation? The AnyTrust chain can still operate: it falls back to the original Rollup protocol, with data published on Ethereum L1. When the committee resumes normal operation, the chain switches back to the cheaper, faster mode.

Arbitrum introduced this protocol to cater to applications like Gamefi that require high processing speeds and low costs.

7.2.2 Nitro

Nitro is the latest version of Arbitrum’s technology. Its main component is the Prover, which performs Arbitrum’s traditional interactive fraud proofs over WASM code. All of its components are fully developed, and Arbitrum completed the upgrade at the end of August 2022, seamlessly migrating the existing Arbitrum One to Nitro.

Nitro has several notable features:

1. Two-Stage Transaction Processing: User transactions are first consolidated into a single ordered sequence. Nitro then commits to this sequence and processes the transactions in order, achieving deterministic state transitions.

2. Geth Integration: Nitro uses the most widely supported Ethereum client, Geth (go-ethereum), to support Ethereum’s data structures, formats, and virtual machine, ensuring better compatibility with Ethereum.

3. Separate Execution and Proof: Nitro compiles the same source code twice. One compilation is for native code execution on Nitro nodes, while the other is compiled into WASM for proof purposes.

4. OP Rollup with Interactive Fraud Proofs: Nitro utilizes OP Rollup, including Arbitrum’s innovative interactive fraud proofs, to settle transactions on the Ethereum Layer 1.

These features make Nitro an ideal choice for Arbitrum’s L3 and L4 use cases. Arbitrum can attract developers seeking customization opportunities to create their own customized chains with these capabilities.

8. StarkNet Stack

StarkWare’s co-founder, Eli Ben-Sasson, announced at the EthCC conference in Paris that they’re on the brink of launching the Starknet Stack, which will allow any application to deploy its own Starknet application chain without permission.

Key technologies within Starknet, such as STARK proofs, the Cairo programming language, and native account abstractions, are fueling its rapid advancement. As developers utilize Stack to tailor their Starknet application chains, the resulting scalability and configurability can vastly increase network throughput, alleviating mainnet congestion.

Although the Starknet Stack is still in its conceptual phase, with no official technical documentation released yet, both the Madara sequencer and LambdaClass are actively developing Starknet-compatible components to ensure smoother integration. Efforts are also underway to prepare for the imminent StarkNet Stack, including full nodes, execution engines, validators, and more.

It is worth noting that StarkNet recently submitted a proposal for a “simple decentralized protocol,” hoping to change the status quo in which L2s run a single, centralized sequencer.

Ethereum is decentralized, but L2s are not, and MEV revenue creates degenerate incentives for sequencers. StarkNet listed a number of options in its proposal, such as:

1. L1 Staking and Leader Election: Community members can permissionlessly join the staker set on Ethereum. Then, based on the distribution of staked assets and random numbers from the L1 chain, a group of stakers is randomly selected as leaders responsible for producing blocks during an epoch. This lowers the barrier to entry for stakers and also helps curb gray MEV revenue.

2. L2 Consensus Mechanism: A Tendermint-based Byzantine fault-tolerant consensus in which the leaders participate as nodes. Voters then confirm consensus, and the Proposer calls the Prover to generate a ZK proof.
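Option (1), leader selection weighted by stake and seeded by L1 randomness, might look roughly like the sketch below; the function, its parameters, and the use of Python's RNG in place of on-chain randomness are all invented for illustration.

```python
import random

def select_leaders(stakes: dict, num_leaders: int, seed: int):
    """Hypothetical sketch: stake-weighted random leader selection for one
    epoch. `seed` stands in for randomness published on the L1 chain."""
    rng = random.Random(seed)
    pool = dict(stakes)
    leaders = []
    for _ in range(min(num_leaders, len(pool))):
        total = sum(pool.values())
        pick = rng.uniform(0, total)   # landing point on the stake line
        acc = 0.0
        for staker, stake in pool.items():
            acc += stake
            if pick <= acc:
                leaders.append(staker)
                del pool[staker]       # at most one leader slot per staker here
                break
    return leaders

stakes = {"alice": 300, "bob": 100, "carol": 50, "dave": 50}
print(select_leaders(stakes, num_leaders=2, seed=42))
```

Because selection is deterministic given the on-chain seed, every node can independently compute the same leader set for the epoch, which is what makes this scheme permissionless to verify.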

The proposal also covers ZK proofs, L1 state updates, and more. Combined with StarkNet's earlier major move of letting the community run the Prover code without permission, the proposal seeks to address the insufficient decentralization of L2s and attempts to balance the blockchain trilemma, which is genuinely eye-catching.

Source: Starkware

9. Conclusion

Through the technical examination of CP (Cosmos & Polkadot) and the major Layer 2 Stacks, it becomes evident that while current Layer 2 Stack solutions effectively address Ethereum's scalability issues, they also bring a series of challenges, especially around compatibility. The technology in the L2 Stack solutions is not as mature as CP's; even CP's technological concepts from three to four years ago remain valuable for today's L2s to learn from. At the technical level, therefore, CP still far outpaces Layer 2.

However, having advanced technology alone is not sufficient. In the upcoming Series 2 article, we will delve into token value and ecosystem development, discussing the respective strengths and weaknesses of CP and L2 Stacks, thus offering a more comprehensive perspective for our readers.

References

https://medium.com/@eternal1997L

https://medium.com/polkadot-network/a-brief-summary-of-everything-substrate-and-polkadot-f1f21071499d

https://tokeneconomy.co/the-state-of-crypto-interoperability-explained-in-pictures-654cfe4cc167

https://research.web3.foundation/Polkadot/overview

https://foresightnews.pro/article/detail/16271

https://v1.cosmos.network/

https://polkadot.network/

https://messari.io/report/ibc-outside-of-cosmos-the-transport-layer?referrer=all-research

https://stack.optimism.io/docs/understand/explainer/#glossary

https://www.techflowpost.com/article/detail_12231.html

https://gov.optimism.io/t/retroactive-delegate-rewards-season-3/5871

https://wiki.polygon.technology/docs/supernets/get-started/what-are-supernets/

https://polygon.technology/blog/introducing-polygon-2-0-the-value-layer-of-the-internet

https://era.zksync.io/docs/reference/concepts/hyperscaling.html#what-are-hyperchains

https://medium.com/offchainlabs

Declaration

The present report is an original work of @sldhdhs3, a @GryphsisAcademy trainee, under the mentorship of @Zou_Block and @artoriatech. The author(s) alone bear responsibility for all content, which does not necessarily reflect the views of Gryphsis Academy or of the organization commissioning the report. Editorial content and decisions remain uninfluenced by readers. Be aware that the author(s) may own the cryptocurrencies mentioned in this report.

This document is exclusively informational and should not be used as a basis for investment decisions. It is highly recommended that you undertake your own research and consult a neutral financial, tax, or legal advisor before making investment decisions. Keep in mind, the past performance of any asset does not guarantee future returns.
