Celestia vs. Eric Wall

Separating the myths from the reality of blockchain architecture.

cardfarm
Momentum 6
14 min read · Mar 11, 2022


In Celestia’s last Twitter Space, Eric Wall is tasked with ruthlessly and thoroughly critiquing Celestia’s entire tech stack and value proposition. It’s Eric doing what he does best, and we love it when teams working on truly revolutionary technology are ready and willing to face his toughest questions. Here’s my summary of a fantastic analysis and interrogation of everything Celestia stands for, as well as comments on what the future might look like as these cutting-edge tools become usable and recognized for their exciting fundamental improvements.

Eric’s strong opinions and tough questions

“No L1 that just focuses on scalability or just throughput at the base layer is actually meaningfully addressing a long-term sustainable solution to the fee problem” — @ercwl

He talks briefly about Bitcoin’s correlation with the equities market and how he’s been reviewing Putin’s speeches. He’s slowly becoming an expert in international conflict. He will probably pivot to this as a primary focus…jokes!

Eric has been sharing notes with John Adler of Celestia Labs since before Eric was known as the ‘Alt-slayer’. Eric recognizes that John took the path of identifying and designing logical solutions to scalability problems. Eric admits he’s been a “complainer” presence in the space. (I love this guy and read almost everything he posts, so I strongly agree. We’re glad Eric is here asking the hard questions.)

He’s here to find the worst thing he can about Celestia and pick it apart to find flaws. Because he respects the Celestia team, he has to try especially hard to find faults in order to maintain his credibility.

Eric praises “parallel running, app-specific chains” and admits that Celestia is setting up the core of the foundation that can provide a good solution to current scalability issues and fee problems by starting with a scalable data availability layer.

The idea that “if we increase the throughput of the system, the fees will go down” hasn’t worked out. Blockspace demand has repeatedly grown to outstrip supply. If you increase the block size, you might increase throughput in the short term, but it opens the door to centralization issues. You don’t see the same gas bidding wars on Solana, but that’s part of their unusability issues: usage spikes, fee spikes, and outages.

Is Celestia just building an optimal solution that still fails? He’s asked to explain current blockchain fee and congestion issues: the misconceptions and the underlying truths.

First, we must clarify how Celestia ties into this problem and into the broader vision. Cosmos felt thin until Celestia came along. The creator of a project called Super Tanker said he came up with this thesis five years ago: Celestia was the original idea pitched to him, but he said it wouldn’t work. Then Mustafa came along and invented data availability sampling (DAS), and now we have what Cosmos was going for five years ago. Data availability sampling is the breakthrough.

Eric wants to help people understand that each Cosmos Zone is its own chain with its own security. Therefore, you can “rob the bridge” if you can compromise any one of these chains and perform a big reorganization of it. You need shared security to avoid this. Chains that use Celestia as their data availability (DA) layer can’t be reorganized this way. With a shared security layer for app-specific blockchains, Celestia creates a security lifeline for all of these app chains.

The Cosmos project has been interesting because the chains are asynchronous, so high fees on one chain don’t spill over to other chains. This allows speculative degen chains to exist alongside high-throughput gaming chains with tons of micro-transactions. People are happy to pay higher fees when they’re speculating on the rapidly changing value of meme coins, but fees need to stay very, very cheap for the gaming chains.

Q from Eric — Can Celestia solve the “fee problem?”

Eric:

Let’s say there’s an app running on one of the execution environments that uses Celestia for data availability and security/consensus, and it creates massive demand for blockspace. This will affect the entire Celestia ecosystem. If it’s wildly successful, and a few other micro-payment apps also have huge demand for blockspace, will Celestia eventually run up against fee constraints?

Mustafa, CEO of Celestia Labs:

No blockchain can guarantee cheap fees.

What’s the block size limit, and how do we decide on an acceptable limit? What’s the max possible size? How big can they get before you face issues of becoming highly centralized?

Any chain is susceptible to getting congested by outsized demand. Scalability is defined as throughput divided by the end-user’s cost to validate the chain’s correctness by running a full node to verify.
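
As a rough formula (my restatement of the definition above, not a quote from the Space):

\[
\text{scalability} = \frac{\text{throughput}}{\text{end-user cost to verify the chain}}
\]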

Solana isn’t concerned with user verification, but it’s essential. Celestia’s Data Availability Sampling (DAS) primitive ensures the amount of data any individual end-user needs to sample will always be a manageable size.

Celestia is a chain where you can dump data. It’s a foundational layer that does a few simple but critically important things very well. Providing data availability is a stateless operation, so block production can be much faster and each block can hold more data.

How does sampling work?

It uses erasure codes. A block producer splits the block into hundreds of chunks, then applies an erasure code so that the block can be reconstructed from only a fraction of those chunks. An end-user then downloads a small random sample of chunks; if every sample comes back available, they can be 99.9% sure the entire block is available and that nothing in it is being hidden.
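
To make the k-of-n idea concrete, here’s a minimal sketch in Python of a toy Reed-Solomon-style erasure code over a small prime field. This illustrates the general principle only, not Celestia’s actual scheme (which is a 2D extension of this idea):

```python
# Toy k-of-n erasure code: treat the k data chunks as evaluations of a
# degree-(k-1) polynomial and extend them to n evaluation points. ANY k
# of the n coded chunks then suffice to reconstruct the original data.
# Demo only -- Celestia's real scheme is a 2D extension of this idea.
import random

P = 7919  # small prime modulus for the demo; real systems use large fields

def lagrange_eval(points, x):
    """Evaluate, at x, the unique polynomial through `points` (mod P)."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def encode(data, n):
    """Extend k data chunks to n coded chunks (points on the polynomial)."""
    base = list(enumerate(data))  # data chunk i lives at x = i
    return [(x, lagrange_eval(base, x)) for x in range(n)]

def decode(any_k_chunks):
    """Recover the original k data chunks from ANY k coded chunks."""
    k = len(any_k_chunks)
    return [lagrange_eval(any_k_chunks, x) for x in range(k)]

data = [42, 7, 99, 1000]             # k = 4 original chunks
coded = encode(data, 8)              # n = 8 coded chunks (2x extension)
survivors = random.sample(coded, 4)  # only half the chunks survive...
assert decode(survivors) == data     # ...and that's still enough
```

The key property for DAS: because any k of the n chunks reconstruct the block, an attacker can’t hide anything by withholding just a few chunks; they’d have to withhold a large fraction, which random sampling catches quickly.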

How can I verify other nodes have successfully sampled?

There’s no concrete way to verify how many honest nodes there are in the network at any given moment, so Celestia, like others, is using off-chain governance to manage things like block size limits and other parameter adjustments.

As long as enough users are using the network, it’s solid. If the sampling succeeds, you get your answer that there are enough nodes providing data for you to sample.

How do you scale while maintaining the same data availability assurances?

The more users you have, the larger the sampled dataset can become. The system is designed so that users can contribute to the storage of the network without needing to store the entire blockchain.
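
A back-of-the-envelope sketch of why this works, with made-up parameters (not Celestia’s actual ones): the union of everyone’s random samples covers more of each block as the node count grows.

```python
# Expected fraction of a block's chunks held SOMEWHERE in the network
# when N light nodes each sample s random chunks out of n total.
# Illustrative numbers only -- not Celestia's real parameters.
def expected_coverage(n_chunks, n_nodes, samples_per_node):
    # Chance that a given chunk was sampled by at least one node.
    return 1 - (1 - 1 / n_chunks) ** (n_nodes * samples_per_node)

for n_nodes in (100, 1_000, 10_000):
    print(n_nodes, f"{expected_coverage(10_000, n_nodes, 30):.4f}")
# 100 nodes cover ~26% of a 10,000-chunk block; 10,000 nodes cover
# essentially all of it -- bigger blocks stay safe as the user base grows.
```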

Is ‘availability’ different from ‘retrievability’?

Eric:

Does having proof that the data was there mean we will always be able to retrieve it, e.g., to manually use emergency exit mechanisms to bridge back from a broken chain?

Mustafa:

Celestia doesn’t guarantee retrievability; DAS just ensures that a specific block was published at a particular time by a specific block producer and that the data in the block is still there and available. This allows an “honest majority assumption” that the data is retrievable.

He expects to see incentives created for guaranteeing retrievability as demand for this grows. You can think of Infura as a centralized data retrievability layer. End-user applications built on Celestia can make their own assumptions about retrievability.

What DAS allows is for an individual to personally verify that a block was published to the network, that it was published by the block producer that claims to have published it, and that all the data in the block is available, all without needing a fancy computer that meets the minimum node requirements of a validator.

DAS allows a user to download some very small pieces of a block and prove with a very high probability assurance that 100% of the block is available. It works kind of like BitTorrent.
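
A rough sketch of the probability math, assuming (as in Celestia-style 2D erasure coding) that an attacker must withhold at least about a quarter of the chunks before anything can actually be hidden:

```python
# If the erasure code forces an attacker to withhold >= ~25% of chunks,
# every random sample has a >= 25% chance of landing on a withheld
# chunk. Confidence that the block is fully available after k samples
# that all come back successfully:
def das_confidence(k_samples, withheld_fraction=0.25):
    return 1 - (1 - withheld_fraction) ** k_samples

for k in (5, 10, 15, 20, 30):
    print(k, f"{das_confidence(k):.4%}")
# 15 samples -> ~98.7%; 30 samples -> ~99.98% assurance that the whole
# block is available, without ever downloading the block itself.
```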

Are old blocks ever re-sampled?

Nodes that come online will at least sample from the most recent checkpoint, but you could also start from genesis.

Where does value accrual happen in the system?

The base is consensus and DA, so blocks are ordered according to Tendermint consensus. On top of this base sits an execution layer like Evmos, which also acts as a settlement layer that must be able to adjudicate disputes about the current state of the execution environment. On top of that, you have an application layer. The app can have a token, the settlement layer can have a token, the rollup can have its own token, and Celestia plans to have its own token. Where do we see the value accrual happening?

Mustafa:

Yeah, there will be a token for every layer, but NBD, because the end-user only needs to be exposed to the token being used on the very top app layer and the mechanics function in the background like a supply chain.

Q from John from Delphi — He sees an issue: light clients interested in a particular rollup on Celestia have to assume there’s a full node for that specific rollup that can provide the validity and fraud proofs they need to remain secure. Will light clients be able to enforce a reasonable block size limit on rollups, to combat the chance of a rollup becoming just a centralized execution layer on top of Celestia?

John Adler:

This ties into the previous point that there’s no concrete way to know how many nodes are on the network. Ultimately the end-users running the nodes control the parameters and constraints of the chain through off-chain governance. The community of node runners is responsible for enforcing these types of parameters.

Mustafa:

Yes, the rollups themselves can have a block size limit; this is fairly trivial to do.

Q from DegenBlock — How would you compare Polkadot’s architecture to Celestia’s? Could we consider Polkadot’s Relay Chain as a data availability layer?

Ismail:

Every L1 is kind of a data availability layer. What Polkadot does differently is that validators also execute part of the state of its parachains, which brings them close to what we think of as rollups.

Q from Brandon Curtis — How can Cosmos Zones integrate with Celestia?

John:

Two main approaches (plus a possible third):

  1. Celestiums — Ethereum rollups that use the ‘Quantum Gravity Bridge’: the rollup operates on top of Ethereum, which performs adjudication of validity and fraud proofs, and users can deposit and withdraw coins using a bridge contract.

The important difference is that data is posted to Celestia instead of Ethereum (he mentions there are some caveats around long-term fee markets).

  2. Cevmos — an EVM settlement-only layer running on Celestia. You can build execution layers on top of it: use Cevmos as the adjudication layer (verifying execution) and Celestia for DA.
  3. A possible third option is a ‘sovereign rollup’ that runs directly on top of Celestia. Celestia-first sovereign rollups may become a thing.

Could a current Cosmos Zone become a rollup?

Ismail (CTO of Celestia Labs):

Yes, becoming a sovereign rollup would work well. You could use the Cosmos SDK, but instead of using BFT consensus, you could just dump the blocks on Celestia.

Q from @apriori — Can Cevmos host both EVM-based rollups and CosmWasm-based rollups? Would this create interoperability or composability issues?

Mustafa:

If you want a layer specifically dedicated to settlement with a rollup on top, you’d probably want a CosmWasm-based settlement layer. It is possible, though, if both can compile to an intermediary language. Optimism has a fraud-proving project called Cannon that can fraud-prove arbitrary MIPS code, which means you can fraud-prove any program that compiles to MIPS. So in theory, you can compile Wasm code to MIPS and have interoperability with the EVM.

Q from no excuses — What’s your response to Vitalik Buterin’s opinions about the future being “Multi-chain, but not cross-chain” due to bridge security issues?

Consensus and data availability can’t be delegated across chains without sacrificing consistency and security guarantees.

Mustafa:

He agrees with Vitalik for the most part; Mustafa has expressed these same ideas in the past. However, the argument is that every chain must use the same settlement layer to have shared security because you can’t do bridges safely. That may still be possible with the right tools, though; committee-based bridges can be an effective solution. He adds that Vitalik may be overrating the utility of the settlement layer.

How does Celestia compare to Cosmos’ shared security model?

Cosmos’ Interchain Staking model is similar to Polkadot’s. As a ‘Cosmos Zone’, you can pay the main Cosmos validators to validate your chain. This is fundamentally not scalable because all validators would have to execute all transactions on your chain.

Cosmos’ Interchain Staking model is better designed for Cosmos having chains working “underneath” it, like subsidiary chains, all sharing the same governance responsibilities. This isn’t the best long-term solution to shared security.

Q from @Alphakey on splitting out the data availability layer:

John:

Modular blockchain architecture allows a much higher throughput of data availability by separating data availability from other blockchain operations. Simply by separating these things, we can build applications that are much more data-heavy, like rollups.

Everyone was asked to reflect on the last year and project into the next one.

Mustafa:

L1 space:

In 2017–18, most projects that had big investment rounds were based around being ‘Ethereum, but more scalable’; there was a big synchronous execution environment/world computer narrative at the time.

Now we’ve seen the results of this narrative, and we’ve learned you can’t do it all on a single (monolithic) chain. Those projects are all struggling, all having issues after claiming to be built better than Ethereum. We realized that a multi-chain paradigm is the obvious direction.

Like Eric said, Cosmos’ vision has been trying to get all the execution happening on rollup chains. All solutions have trade-offs. The biggest one with a multi-chain paradigm is the issue of composability.

Longer term, he also sees execution happening on rollup chains, with DA and consensus on a separate layer. He sees more myths than truths coming from the current L1 narrative. The great thing about the ‘world computer’ model is its composability; the smart contract model makes composability more complicated. We’re just now starting to see the potential of IBC.

John:

“One of the key distinguishing factors that make the modular blockchain architecture so compelling is that for the first time, we can have this decentralized network that can grow its capacity as the number of users grows.” — @jadler0

He’s seeing more acceptance of the modular blockchain vision: DA and consensus chains that are separate from execution are the best you can get. That doesn’t necessarily mean “the most scalable in all respects, but it’s the best cross-section of properties.” As Celestia becomes a reliable DA platform, he expects many execution layers to adopt it and become more scalable.

“We need something to execute transactions.” — His vision of a ‘multi-chain world’ will have many inter-communicating execution layers. These execution layers can now share security using Celestia’s consensus/DA layer.

We can start to get the best of both worlds. We have the scalability benefits of sharding without all the complexities and problems that sharding strategies face. We can now experiment with new execution layers and various new systems without every version needing to bootstrap its own validator set and security.

The biggest tradeoff with this strategy is composability.

This is the first time we can scale a network’s capacity as the number of users increases. The amount of data that can be made available is also scaling as the network grows.

One subtle point: the specific capacity that’s growing is the capacity to make data available.

Celestia supports bytes/second, not transactions/second. Celestia does not scale execution! But you can do things like running multiple chains in parallel: sharding without sharding. Having a scalable data layer is very important as the base, but it’s only half the problem. These execution layers can now share security.
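
A toy calculation of what “bytes/second, not transactions/second” means in practice; every number here is hypothetical, chosen only to show the arithmetic:

```python
# Celestia sells DA bandwidth (bytes/s); execution layers turn those
# bytes into transactions. Aggregate throughput across all parallel
# rollups is the shared byte budget divided by bytes per transaction.
DA_BANDWIDTH_BPS = 1_000_000   # bytes/s of data availability (made up)
BYTES_PER_TX = 250             # avg bytes a rollup posts per tx (made up)

aggregate_tps = DA_BANDWIDTH_BPS / BYTES_PER_TX
print(aggregate_tps)           # 4000.0 tx/s across ALL rollups combined

# Ten rollups in parallel simply split the byte budget; adding a chain
# adds no validator overhead -- "sharding without sharding".
print(aggregate_tps / 10)      # 400.0 tx/s each, if split evenly
```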

“I think there’s some growing excitement in the community to build out Celestia’s first sovereign rollups that do not rely on another blockchain to verify proofs.” — @jadler0

Nick White:

“Sovereign execution layers as rollups is, at least so far, the most compelling solution to scaling, and a lot of other problems that we face” — @nickwh8te

Agrees with Eric that all the attempts at scaling solutions since 2017 have helped alleviate blockspace congestion. Still, they aren’t viable long term, eventually running into the same issues as Ethereum.

As modular frameworks mature, we’ll start to see combinations of scalable, specialized layers form a viable long-term model; much is still being built. A scalable data layer combined with sovereign execution environments running as rollups is, so far, the most compelling solution we’ve seen for both scaling and interoperability.

Ismail:

Pretty much aligned with Mustafa and John: running as a rollup on top of Celestia's data availability layer is going to be huge. He agrees that app-specific chains not needing to bootstrap their own consensus network will be a game-changer allowing proliferation and experimentation.

Q from Eric — On Ethereum, there are a couple of heavily used applications that affect the fee market for all other users (OpenSea and Uniswap). You could potentially have something similar on Celestia. Is Celestia essentially a “monolithic data availability layer”? Suppose Celestia’s fees rise enough that it’s no longer feasible for dApps that perform a lot of microtransactions to use Celestia. Does it make sense to split Celestia itself into separate “Celestia Subnets”?

John:

Yes, as demand grows, the limited capacity of blockspace could become more expensive, but there are several things we can do to manage this in ways that traditional monolithic chains cannot. DAS’ scalable capacity is one example. The fact that every block is stateless, not relying on any previous block existing, is another. A third way Celestia can manage increased demand for blockspace is the property of being highly parallelizable by splitting each block into sections to perform operations on different columns and rows of that block simultaneously. They’re calling this “internal sharding.”
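
A minimal sketch of the row/column parallelism idea, with an XOR “parity” standing in for the real per-row erasure coding (illustrative only, nothing like Celestia’s actual implementation):

```python
# Because the block is laid out as a 2D square of chunks, per-row work
# (and per-column work, on the transpose) can run in parallel --
# "internal sharding". The XOR below is a stand-in for real per-row
# erasure coding.
from concurrent.futures import ProcessPoolExecutor
from functools import reduce

def row_parity(row):
    # Stand-in for per-row work (e.g., computing that row's parity data).
    return reduce(lambda a, b: a ^ b, row)

if __name__ == "__main__":
    block = [[(i * 31 + j) % 256 for j in range(64)] for i in range(64)]
    with ProcessPoolExecutor() as pool:
        row_parities = list(pool.map(row_parity, block))  # rows in parallel
        cols = [list(c) for c in zip(*block)]             # transpose
        col_parities = list(pool.map(row_parity, cols))   # columns in parallel
    print(len(row_parities), len(col_parities))           # 64 64
```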

Can we still run into fee constraints after all this?

Mustafa:

“The fee market is a free market.” The target hardware requirement for block producers is the thing to be thinking about. Celestia doesn’t want requirements as extreme as Solana’s, for example.

Users without advanced hardware need something like Celestia, along with ZK proofs and fraud proofs, so end-users can still verify the state of a chain using reasonable general-purpose hardware (like a typical laptop) even though the requirements for actual block production are pretty high.

“If we want Google-scale performance, we would need Google-scale block producers.” We must ask ourselves if we’re comfortable with that.

There are some ways to have decent censorship resistance with a somewhat centralized block-producing validator set, but the question remains: to what extreme do we want to take this? If we want to stay away from the extremes, it will require multiple chains and multiple data availability layers with varying security guarantees and security fragmentation.

Evmos running on Celestia would limit its use cases to arbitrating rollup disputes only. How do you enforce this in practice without making it permissioned?

Mustafa:

Basically, by making Evmos economically impractical for anything besides settlement. If you have an app that requires a lot of state-reads and state-writes, you’ll be economically incentivized to use a rollup.

Congrats if you made it this far. I don’t think this conversation needs much of a wrap-up, these guys really drilled down and gave some great insight into some of the hardest questions in the industry. We’re bullish on the Celestia team and appreciate all the intellectual stimulation.

Have an early-stage blockchain startup or even an idea for one without a team? Momentum 6 is an early-stage fund focused primarily on decentralized finance, Web 3.0, gaming and metaverse, and NFTs.

Incubation (M6 Labs)

• We leverage our deep knowledge of blockchain and past experiences building successful startups to build market-leading companies. Submit your idea or startup for incubation here.

Investment (M6 Ventures)

• We have successfully invested in 130+ blockchain startups. Our diverse portfolio shows that we have positioned ourselves as one of the leading investment firms.

Research (M6 Labs)

• We use our unique approach and analysis to produce quality research surrounding industry trends and opportunities: https://t.me/M6bullets.

Portfolio (135+ Projects)

Managing Partner: Garlam Won

Head of Labs: Kadeem Clarke
