Future of DeFi on Scalable, Interoperable Blockchains

New Order · Published in NewOrderDAO · Oct 25, 2022

In order for DeFi to reach its full potential, it must be able to function at similar speeds to traditional finance. As we know, current applications and blockchains do not achieve this. Although there are countless solutions, most come at the expense of decentralization.

However, there are many solutions that can provide improved scalability which we’ll cover in this article. As the need for scalable blockchains increases, so too does the need for mechanisms that will allow these blockchains to interoperate. These interoperability solutions are commonly referred to as cross-chain messaging protocols (sometimes “bridges”), and have a variety of implementations with different security measures and requirements.

Execution layer that can handle HFT

To enable on-chain high-frequency trading, there must be an execution layer that exhibits high throughput and low latency. For many professional traders and institutions, their strategies demand over 1,000 transactions per second (TPS).

“The fundamental problem with every L1 or L2 we could develop on is that none can handle even close to the throughput needed to run a first-class orderbook and matching engine.” – Founder of dYdX

As an example, dYdX sees problems with the current scaling solutions of Ethereum (at least pre-sharding and EIP-4844). To meet the required throughput, dYdX decided to move to an application-specific blockchain built with the Cosmos SDK. Since application-specific chains have their own blockspace, they don’t have to compete with other apps for it, and transaction fees on trades are paid to validators (and, as a result, to stakers of the chain’s native token). Blockchain applications are increasingly experiencing high demand, which makes transactions expensive for general users and calldata expensive for rollups. For this reason, applications are likely to become more modular, especially those that need low fees at high volume or want their own blockspace. Similarly, modular execution layers will increase in popularity. Whether those execution layers are Ethereum rollups, app-specific chains, or rollups on a data-availability layer, the trend toward modular tech stacks is close to a certainty.

There are also various solutions in the works to make creating application-specific chains much easier. These include Saga, which uses interchain security as a hub, and Dymension, which is working on enshrined rollups with inherited security on Cosmos (similar to Ethereum’s rollups). A common thesis is that these enable both vertical and horizontal scaling while not relying on a congested layer 1, which instead functions purely as a bridging and verification layer. Solana is also worth mentioning here, as it enables parallel transaction execution with low fees, although at the cost of decentralization. Solana is in a unique position to pull HFT away from other blockchains, as it has an optimized execution layer with a highly configurable VM.

Considering an on-chain order book using the Tendermint consensus algorithm, block times of roughly 1.5–2.0 seconds still might not match or beat an off-chain order book (e.g. dYdX). However, in a world where we modularize the layers, we can specialize and optimize each layer separately, which could usher in a new era of speed.

If we are to get optimized on-chain order books able to handle HFT, it will most likely happen on application-specific chains that aren’t competing for blockspace with many other applications. The best examples of applications taking this route are dYdX and Sei Network, both moving to the Cosmos ecosystem.

One issue with enabling HFT on blockchains is one we see in the real world as well: miner extractable value (MEV), specifically malicious MEV such as frontrunning. The transparency of blockchains, combined with transaction latency, makes market manipulation easier. For example, adversaries can frontrun trades or even mount denial-of-service attacks by filling up blocks meant for traders. To combat this on-chain, you could make use of batch auctions (though these need additional mechanisms to prevent DoS attacks) or possibly time-lock mechanisms with ZK proofs for instant finality.
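To make the batch-auction idea concrete, here is a minimal sketch of uniform-price clearing. This is our own illustrative model, not any specific protocol’s design: all orders collected during one batch (e.g. one block) clear together at a single price, so inserting a transaction ahead of someone else’s within the batch confers no price advantage.

```python
from dataclasses import dataclass

@dataclass
class Order:
    side: str     # "buy" or "sell"
    price: float  # limit price
    qty: float

def clear_batch(orders):
    """Uniform-price batch auction: all crossing volume trades at a single
    clearing price, so transaction ordering *within* the batch confers no
    frontrunning advantage."""
    buys = sorted((o for o in orders if o.side == "buy"), key=lambda o: -o.price)
    sells = sorted((o for o in orders if o.side == "sell"), key=lambda o: o.price)
    volume, i, j = 0.0, 0, 0
    b_rem = buys[0].qty if buys else 0.0
    s_rem = sells[0].qty if sells else 0.0
    last_buy = last_sell = None
    while i < len(buys) and j < len(sells) and buys[i].price >= sells[j].price:
        q = min(b_rem, s_rem)              # match as much as both sides allow
        volume += q
        last_buy, last_sell = buys[i].price, sells[j].price
        b_rem -= q
        s_rem -= q
        if b_rem == 0.0:
            i += 1
            b_rem = buys[i].qty if i < len(buys) else 0.0
        if s_rem == 0.0:
            j += 1
            s_rem = sells[j].qty if j < len(sells) else 0.0
    # clearing price: midpoint of the marginal matched buy/sell prices
    price = (last_buy + last_sell) / 2 if volume else None
    return volume, price
```

A real design would still need anti-DoS measures (as noted above), since an attacker can try to stuff the batch itself.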

One way to enable HFT on-chain, arguably the most unpopular one, is either through enterprise validators or consensus models with a low amount of active validators in geographically advantageous positions close to traders. However, this isn’t very different from the current solutions that we see with stock exchanges.

Existing consensus algorithms usually do not account for the differing bandwidth and network delay of each node in the network. Thus, when data is synchronized between nodes, most nodes wait a long time for a few nodes with high network delay. This is a consequence of decentralization, which is of course the pride and joy of blockchains, and of security requirements. In high-frequency trading scenarios, this is a negative. However, as we continue to specialize the various layers of blockchains, this is something that might become workable in the future while retaining decentralization. The delay problem is also often mitigated by strict hardware requirements for validators, as well as a set upper limit on the number of validators allowed in the network.

Often in BFT consensus models, TPS scales negatively with the number of nodes participating in consensus, as shown in this paper.

Something else to take into account is blockchain latency and its effect on throughput. The way most blockchain systems handle requests has a huge impact on latency: it scales with the number of outstanding requests, so queueing transactions for inclusion can cause significant latency issues. This is why systems are often provisioned around a set, deterministic TPS target to make optimal use of their capacity.

As such, you are often presented with an L-shaped graph of latency vs. throughput, as shown in the recent Paradigm post. This effectively puts an upper bound on throughput as a function of latency.

In decentralized and permissionless chains, though, you often don’t have the luxury of predictability: load will be high or low depending on market conditions and activity, which makes it quite hard to predict what the operating range should be.

To get around this, there are two particular areas to optimize:

  • Overprovisioning: operate the system below the saturation point of the latency/throughput curve so that periods of heavy load are absorbed rather than leading to growing queues.
  • Increasing the batch size of transactions: larger batches trade latency for throughput; blocks are delayed while batches fill, but queues drain faster during heavy load.
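The batching trade-off above can be illustrated with a toy discrete-time queueing model (the rates and batch sizes are made up for illustration, not measurements of any real chain):

```python
def simulate(arrival_rate, batch_size, batch_interval, duration):
    """Toy discrete-time queue: `arrival_rate` transactions arrive per tick
    and are drained in batches of at most `batch_size` every
    `batch_interval` ticks. Returns (throughput per tick, average queue
    length)."""
    queue, processed, samples = 0.0, 0.0, []
    for t in range(duration):
        queue += arrival_rate            # new transactions join the queue
        if t % batch_interval == 0:      # a block/batch is produced
            drained = min(queue, batch_size)
            queue -= drained
            processed += drained
        samples.append(queue)
    return processed / duration, sum(samples) / len(samples)
```

With 10 tx/tick arriving, a batch of 50 every 5 ticks keeps up (throughput roughly equals the arrival rate), while a batch of 30 saturates: throughput caps out near 6 tx/tick and the queue grows without bound, which is exactly the regime overprovisioning is meant to avoid.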

In the past few years, a lot of protocols have focused on consensus as the bottleneck of blockchain systems, but it has become evident that there are bottlenecks in all layers of blockchains. As such, a modular approach that allows specialization in various parts of the stack (the road Ethereum and Celestia are taking) is more likely than not the way to go, with execution layers like rollups optimizing the pure execution part of the stack.

Bridging and Interoperability

We’ve seen the importance and popularity of bridges, in particular token bridges, as more and more blockchains start to build out their own ecosystems. As a result of this, there’s demand for liquidity to be moved from one chain to another.

In most bridge applications and cross-chain messaging protocols, there is a range of functions that contribute to their efficacy. These are primarily:

  • Monitoring: an actor that watches the state of the source chain, such as a light client, validator, or oracle.
  • Relaying: an actor (a relayer) that handles message passing, carrying information from the source chain to the destination chain.
  • Consensus: agreement between the actors monitoring the chains; this could take the form of a trusted third-party validator set, as with Axelar.
  • Signatures: bridging actors cryptographically sign messages; for example, validators producing signatures as with IBC.

You can organize bridges into four major types:

Asset-specific: bridges providing wrapped assets that are collateralized in a custodial (through a third party) or non-custodial (smart contract) way. An example of this type is wBTC, where a centralized custodian such as BitGo holds the BTC and mints an equivalent ERC-20 token (wBTC).

Chain-specific: usually two-way bridges between two chains, such as the Harmony Bridge, Avalanche Bridge, and Rainbow Bridge, all connecting their respective chains to Ethereum via smart contracts. These are often protected by a set of validators, or even just a multisig. We’ve seen how small multisig setups have proven insecure, namely with the Ronin Bridge and more recently the Harmony Bridge.

Application-specific: an application that provides access to several chains but is used only for that particular application. For example, Thorchain, which operates its own chain with a separate validator set. The trust assumption here often lies with a single validator set that controls and handles all messages and transactions across the network, connecting to smart contracts on the various chains.

Generalized cross-chain messaging: protocols that allow generalized messaging across chains with set instructions; we’ll cover these in depth in the next section.
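The asset-specific (lock-and-mint) pattern above can be sketched in a few lines. This is an illustrative model of the general flow, not the actual wBTC/BitGo process:

```python
class LockAndMintBridge:
    """Lock-and-mint sketch: native coins are locked in custody on the
    source chain and a 1:1 wrapped token is minted on the destination;
    burning the wrapped token releases the original collateral."""

    def __init__(self):
        self.locked = {}   # user -> native amount held in custody
        self.wrapped = {}  # user -> wrapped balance on the destination chain

    def deposit(self, user, amount):
        # custodian observes the lock on the source chain and mints 1:1
        self.locked[user] = self.locked.get(user, 0) + amount
        self.wrapped[user] = self.wrapped.get(user, 0) + amount
        return self.wrapped[user]

    def withdraw(self, user, amount):
        # burn the wrapped token, then release the locked collateral
        if self.wrapped.get(user, 0) < amount:
            raise ValueError("insufficient wrapped balance")
        self.wrapped[user] -= amount
        self.locked[user] -= amount
        return amount
```

The security of the wrapped asset reduces entirely to whoever controls `locked`: a custodian in the custodial case, a smart contract in the non-custodial case.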

Asset bridges help with interoperability by allowing chains to interact with each other and with the applications on those chains (although applications make better use of liquidity/data layers and cross-chain messaging). All these bridge types come with varying degrees of security and trust assumptions: primarily validator sets, optimistic bridges with watchers, locally verified liquidity layers, and light clients with relayers, which can also utilize ZK proofs for added security.

However, real interoperability comes with composability by chains using a similar cross-chain messaging protocol, or by applications building on top of data/liquidity layers enabling smooth and secure bridging.

This resource covers the non-natively verified bridge TVLs well: https://defillama.com/protocols/Bridge

Cross-chain messaging

Cross-chain composability has become increasingly important for applications, as it opens up more possibilities between applications across chain ecosystems. It is enabled by cross-chain messaging between chains, often built on top of data layers, that applications can take advantage of.

One way to obtain compatibility between applications (in this case, application-specific chains) is by using interchain accounts between two chains connected by IBC with ICS-27. Most are probably familiar with IBC while not being familiar with ICS. ICS stands for interchain standard: module specifications for IBC transactions between chains. For one chain to communicate with another, both must implement the same module standard; for example, ICS-27, which enables the interchain accounts module. Interchain accounts and IBC allow native smart contracts on different application-specific blockchains to communicate with each other, meaning that in theory all Cosmos chains become interoperable in a chain-agnostic way. Interchain accounts also let blockchains/users control accounts or smart contracts on other chains, so one chain can execute transactions on another (staking, voting, etc.) through one interface.
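The controller/host relationship behind interchain accounts can be sketched as follows. This is a simplified model, not the Cosmos SDK API: real ICS-27 packets travel over an authenticated IBC channel, which the sketch replaces with a direct call, and the message types shown are placeholders.

```python
class HostChain:
    """Hypothetical host chain holding one interchain account per
    controller chain and executing the messages it receives."""

    def __init__(self):
        self.accounts = {}  # controller chain id -> interchain account state

    def register(self, controller_id):
        self.accounts[controller_id] = {"staked": 0, "votes": []}

    def execute(self, controller_id, msg):
        acct = self.accounts[controller_id]
        if msg["type"] == "delegate":   # stake on behalf of the controller
            acct["staked"] += msg["amount"]
        elif msg["type"] == "vote":     # vote in the host chain's governance
            acct["votes"].append(msg["proposal"])
        return acct

class ControllerChain:
    """The chain (or its users) driving a remote account through one interface."""

    def __init__(self, chain_id, host):
        self.chain_id, self.host = chain_id, host
        host.register(chain_id)         # account registration (ICS-27 style)

    def send_tx(self, msg):
        # stands in for sending an ICA packet over IBC and awaiting the ack
        return self.host.execute(self.chain_id, msg)
```

The point is the shape of the flow: the controller never holds keys to the host chain; it only sends messages that the host executes against the account it custodies.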

Another cross-chain protocol is XCMP, used for relaying messages between parachains in the Polkadot ecosystem through the Relay Chain, which is Polkadot itself. XCMP is Polkadot’s subprotocol that enables parachains to communicate with one another.

Polkadot’s XCMP is only responsible for distributing messages from sending to receiving parachains; it is not responsible for distributing messages within a parachain. That is the responsibility of each parachain, specifically its networking layer, and may differ per parachain. With XCMP, the trust assumption lies with the valset of the Relay Chain (Polkadot), while with IBC the trust assumption lies with the valsets of the two bridging chains.

Data Layers/State Sharing

These are interoperability protocols designed specifically for transferring arbitrary data across multiple blockchains. Generally, these protocols become the base layer for dApps and make it possible for them to achieve cross-chain composability. Examples: Celer’s Inter-chain Message Framework, Nomad, the data layer of Movr (Socket), Abacus, and LayerZero.

These are generalized cross-chain communication protocols (often application-specific, though Connext, for example, makes use of Nomad for some routes) that allow users to build applications and transfer assets across different chains. They allow cross-chain applications to be built and deployed to virtually any blockchain. This means these applications can become natively interoperable and allow users to seamlessly bridge assets between different chains, as with Aave V3.

These often make use of routers that serve to connect smart contracts on various chains with an off-chain component. This means that they are often not built with formal security assumptions, but rather rely on watchers and fraud proofs to prevent malicious actions.
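The watcher-and-fraud-proof model mentioned above can be sketched generically. This is an illustrative model of optimistic verification, not Nomad’s or any specific protocol’s exact mechanism:

```python
class OptimisticBridge:
    """Optimistic message bridge sketch: relayed messages become executable
    only after a dispute window, during which any watcher who finds that a
    message was never committed on the origin chain can flag it as fraud."""

    def __init__(self, dispute_window, origin_commitments):
        self.window = dispute_window
        self.origin = origin_commitments  # messages actually committed at origin
        self.pending = {}                 # msg -> [submitted_at, disputed]

    def relay(self, msg, now):
        self.pending[msg] = [now, False]

    def watch(self, msg):
        # a single honest watcher suffices to stop a forged message
        if msg in self.pending and msg not in self.origin:
            self.pending[msg][1] = True

    def execute(self, msg, now):
        submitted_at, disputed = self.pending[msg]
        if disputed:
            raise ValueError("fraudulent message")
        if now - submitted_at < self.window:
            raise ValueError("dispute window still open")
        return msg
```

The trade-off is visible in the code: security rests on at least one honest, live watcher rather than on formal verification, and every message pays the dispute window in latency.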

Data movement and state sharing (often also referred to as composability, in the sense of one smart contract reading another, here across chains) is the next step in enabling true composability across chains. It allows apps to extend beyond the boundaries of a single chain and build protocols in a chain-agnostic way. With this, a smart contract on chain A can call, or read the state of, any smart contract on chain B. For example, a protocol on Polygon can read Aave’s APY on Arbitrum, or a protocol on Fantom can deposit funds into Aave on Optimism.

Though it is vital to understand that there is a disparity in trust: the trust model varies from one blockchain to the next. Transferring data to a blockchain with a greater or lesser number of validators could allow a third party to act maliciously. This is why the creation of an interchain standard, such as ICS within IBC, is important.

Interchain Standard

The base thesis is that eventually, all chains will settle on a standard for inter-blockchain communication (e.g. IBC/ICS). We’ve also seen increasing interest from NEAR in enabling IBC for their ecosystem, and between the two ecosystems as well. This clearly shows there is interest in an interchain standard for asset and message transfer between chains, and clear value in establishing one.

IBC was designed to allow communication between different chains with similar finality assumptions. For seamless IBC implementation, the ICS (interchain standard) specification was defined. As long as each blockchain implements its modules in accordance with the ICS specification, every chain can communicate with one another.

IBC requires a light client of each chain and a relayer relaying information between the two. The trust assumption is the valsets of the two chains, as they sign off on transactions and messages.

The behavior of light clients varies from chain to chain, but what they all have in common is that Merkle roots are written into block headers, and light clients can request Merkle proofs from nearby full nodes whenever they want to verify state. IBC relies on finality before minting/releasing tokens on the destination chain.
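The Merkle-proof check a light client performs can be sketched as follows. This uses a generic SHA-256 binary tree for illustration; real chains differ in hash function, leaf encoding, and tree layout:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Binary Merkle tree root; an odd node is paired with itself."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes (with a flag for 'sibling is on the left') from leaf to root."""
    level, proof = [h(leaf) for leaf in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    """The light-client check: recompute the root from a leaf and its proof."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root
```

A light client only needs the header’s root and a logarithmic number of sibling hashes from a full node to verify that a transaction or state entry is included, without downloading the full block.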

The idea of an interchain standard can also be seen in the way cross-chain messaging data layers are being developed by projects such as Nomad, Celer, Socket, and others. They’re setting up a clear distinction between chains for how to interoperate for applications built on top of their solutions.

Standardization is crucial to enabling cross-chain messaging and is something the IBC and XCMP teams in particular, alongside the various cross-chain messaging/asset protocols, are working on. Standards create network effects and allow users and applications to interoperate seamlessly.

Composability

Interoperability is often used to explain the seamless transfer of assets between applications and chains, while composability is used to describe the idea of a shared infrastructure between those specific applications, such as deploying an application on several chains with minimum work. This is the type of work being done by previously described data and liquidity protocols and cross-chain messaging protocols.

Composability is really only sustainable if transaction costs are low between chains and applications. Otherwise, it loses the principal value that composability grants you. Composability leads to more choices for the end-user and the ability to seamlessly do actions across various ecosystems and applications while removing prior obstacles such as building ecosystems from scratch.

Applications become composable when they’re able to interact with other applications: automated liquidity positions, lending, and so on.

Expanding composability requires an analysis of two types of composability:

Synchronous composability: “Composability between smart contracts where interactions occur within a finite amount of time.” — Alex Beckett

Asynchronous composability: “Composability between smart contracts where interactions occur within an unknown and unbound amount of time.”

While it may seem that synchronous composability is the optimal choice for blockchain interoperability, asynchronous composability is arguably the better solution. Asynchronous composability allows applications to connect to each other via data bridges without having to bridge any tokens. Data is what is bridged, not tokens, eliminating the main risk associated with bridging. If a problem occurs with a bridge that is not asynchronously composable, the user’s tokens can be lost. With asynchronously composable bridges, the tokens are never bridged, so if a problem occurs the only thing lost is time, since the user’s funds are only transferred when the user wants them to be.

For those interested in reading about rollup composability, we recommend that you read Alex Beckett’s article here.

Scaling and composability solutions we’re most excited about

We’re very excited about scaling solutions that take a composable, modular, and interoperable approach. This includes, but isn’t limited to, rollups such as Arbitrum, Scroll, Starkware, and Fuel, as well as projects working to bridge the gap between various layer 1s, rollups, and others. These could be projects working to implement IBC more efficiently, or projects like Socket, Connext, and Celer that are trying to create composability between chains that differ in architecture.

L2 rollups (vertical scaling) on Celestia and Ethereum enable blockchains to scale without modification on the base layer, but the performance improvement results in different security guarantees for off-chain and on-chain transactions. Horizontal scalability in homogeneous networks such as Polkadot and Cosmos enables scalability by giving applications their own specific chains while retaining composability through native cross-chain messaging. There also exist horizontal scaling solutions such as Avalanche subnets and Polygon’s Supernets (Edge). However, those at this point don’t have native cross-chain messaging, which severely limits their composability.
