Is the EVM Alone Enough?

Siddharth Rao
IOSG Ventures
Jun 21, 2023

Special thanks to John Burnham from Lurk Labs for his review and feedback on the article

The Proficiency of the EVM

Every operation on the Ethereum mainnet costs gas, and the amount of computation required to run even basic applications fully on-chain would bankrupt either the app or its users.

This helped give rise to L2s: optimistic rollups introduced sequencers that bundle transactions and then commit them to the mainnet. This lets apps inherit the security of Ethereum while giving users a better experience, saving gas and confirming transactions faster. While this made operations cheaper, rollups still rely on a native EVM as the execution layer. EVM zkRollups like Scroll and Polygon zkEVM use (or will use) EVM-based zk circuits, generating zk proofs on their provers for every transaction or batch of transactions. While this helps developers build “fully on-chain” apps, is it efficient enough, in resources and cost, to run high-performance applications?

What are these high-performance applications?

The first things that come to mind are games, on-chain order books, Web3 social, machine learning, genome modeling, etc. All of these applications are compute-heavy and would be very expensive to run even on L2s. The other issue with the EVM is that computation is not as fast or performant as in other systems available today, such as the Sealevel Virtual Machine.

While an L3 EVM could make computation economically more efficient, the structure of the EVM itself may not be the best fit for heavy computation because it cannot execute transactions in parallel. And with every new layer built on top, maintaining the ethos of decentralization requires new infrastructure (a new network of nodes), which means either the existing set of node providers must expand, or a completely new set of providers (individuals/businesses) must contribute resources, or both.

So every time a more advanced solution is built, the existing infrastructure has to be upgraded or a new layer has to be built on top. To address this, we need a post-quantum secure, decentralized, trustless, high-performance computation infrastructure that can perform the complex calculations decentralized applications need.

A case can be made for alt-L1s like Solana, Sui, and Aptos, which enable parallel execution, but given market sentiment, the liquidity crunch, and the shortage of developers, there may not be any EVM killers: the trust moat Ethereum has built is too large to cross. ETH/EVM killers probably don’t exist as of today.

The question here is: why should all computation be on-chain? Can there be an execution system that is trustless and decentralized, yet is not a blockchain? This is where DCompute systems could show promise.

DCompute infrastructure, to be decentralized, post-quantum secure, and trustless, need not (or rather should not) be a blockchain/DLT, but it is very important that its computations can be verified and lead to correct state transitions and finality. The EVM chain keeps behaving exactly as it does today, maintaining network security and immutability, while decentralized, trustless, secure computation moves off-chain.

One problem we are largely setting aside here is data availability. This piece does not focus on it because solutions like Celestia and EigenDA are already building in that direction.

Outsourcing computation off-chain can take two forms (inspired by the research of Jacob Eberhardt and Jonathan Heiss of TU Berlin in their paper “Off-Chaining Models”):

1. Only Compute Outsourced

Source: Robin Chen, “Off-Chaining Models”

2. Compute + Data Availability Outsourced

Source: Robin Chen, “Off-Chaining Models”

When we look at Type 1, zk-rollups are already doing this, but they are either constrained to the EVM or have to educate developers in a completely new language/instruction set.
The ideal solution should be performant, efficient (in cost and resources), decentralized, private, and verifiable. ZK proofs can be constructed over computations that run on an AWS server, but that is not decentralized. Solutions like Nillion and Nexus are trying to solve general computation in a decentralized manner, but these are still not verifiable without ZK proofs.

Type 2 outsources both computation and data availability, keeping the data availability layer separate, but the computations still require verification on-chain.

Let us look at the different decentralized computation models available today that are partially trusted and possibly fully trustless.

Alternative Computation Systems

Ecosystem Map for Outsourced Computation of Ethereum

Secure Enclave Computations/ Trusted Execution Environments (TEE)

A TEE is like a special box inside a computer or a smartphone. It has its own lock and key that only certain programs, called trusted applications, can access. When these trusted applications run inside the TEE, they are protected from other programs or even the operating system itself.

It’s like a secret hideout that only a few special friends can enter. The most common examples of TEEs are the secure enclaves present on devices we use, such as Apple’s T1 chip and Intel’s SGX, which run critical operations within the device such as Face ID.

Since TEEs are isolated systems, their attestations rest on a trust assumption rather than being impossible to corrupt: think of a safety gate that you trust because Intel or Apple built it, while there are enough skilled safe robbers in the world (both hackers and other computers) who can break through that door. TEEs are also not “post-quantum secure”, which means that a quantum computer with sufficient resources could break their security.
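
To make that trust assumption concrete, here is a heavily simplified sketch of remote attestation in Python, using the `cryptography` package: the vendor signs a measurement (hash) of the code loaded into the enclave, and a remote verifier accepts results only if that signature checks out against the vendor’s public key. This shows the shape of the idea, not SGX’s or Apple’s actual quote format; the key handling and measurement names are illustrative.

```python
# A heavily simplified sketch of remote attestation using the `cryptography`
# package. The key handling, quote format, and measurement are illustrative,
# not Intel SGX's or Apple's actual scheme; the point is the trust assumption.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The vendor (Intel/Apple in the analogy) holds the signing key; verifiers
# ship with its public key. Trusting that key is the trust assumption.
vendor_key = Ed25519PrivateKey.generate()
vendor_pub = vendor_key.public_key()

EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted_application_v1").hexdigest()

def attest(enclave_code: bytes) -> tuple[str, bytes]:
    """Inside the TEE: measure (hash) the loaded code and sign the measurement."""
    measurement = hashlib.sha256(enclave_code).hexdigest()
    return measurement, vendor_key.sign(measurement.encode())

def verify_attestation(measurement: str, signature: bytes) -> bool:
    """Remote verifier: accept only if the vendor signed the expected code hash."""
    if measurement != EXPECTED_MEASUREMENT:
        return False
    try:
        vendor_pub.verify(signature, measurement.encode())
        return True
    except InvalidSignature:
        return False

measurement, signature = attest(b"trusted_application_v1")
assert verify_attestation(measurement, signature)
```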

With computers rapidly becoming more powerful, long-term computational systems and cryptographic schemes must be built with post-quantum security in mind.

Secure Multi-Party Computation (SMPC)

In an SMPC network:

Step 1: The inputs to the computation are transformed into shares, which are distributed across the SMPC nodes.

Step 2: The actual computation takes place, typically involving an exchange of messages between the SMPC nodes. At the end of this step, every node holds a share of each of the computation output values.

Step 3: The result shares are sent to one or several Result Nodes, which run LSS to reconstruct the outputs.

Think of a car manufacturing line where building the car’s components (engine, doors, mirrors) is outsourced to OEMs (the worker nodes), and the assembly line, where all the components are put together into a car, is the result node.

Secret sharing is important for privacy-preserving decentralized computation models. It prevents any single party from obtaining the full “secret” (in this case, the input) and acting maliciously to produce an incorrect output. SMPC is probably one of the easiest and most secure systems to decentralize. Although decentralized implementations don’t exist yet, it is logically very feasible.
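
As a concrete illustration, here is a minimal sketch of additive secret sharing over a prime field, with three hypothetical worker nodes jointly computing a sum without any of them ever seeing the inputs. The function names and parameters are illustrative, not any particular SMPC library’s API.

```python
# A minimal sketch of additive secret sharing over a prime field, assuming
# three hypothetical worker nodes and one result node. Function names are
# illustrative, not any particular SMPC library's API.
import secrets

P = 2**61 - 1          # prime modulus for share arithmetic
N_NODES = 3            # number of SMPC worker nodes

def share(secret: int, n: int = N_NODES) -> list[int]:
    """Step 1: split a secret into n additive shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Step 3: the result node adds the result shares back together."""
    return sum(shares) % P

# Two private inputs, e.g. from two different data owners.
x, y = 123, 456
x_shares, y_shares = share(x), share(y)

# Step 2: each node adds its own shares locally; no node ever sees x or y.
output_shares = [(xs + ys) % P for xs, ys in zip(x_shares, y_shares)]

assert reconstruct(output_shares) == x + y   # the result node recovers 579
```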

There are MPC providers like Sharemind that offer MPC infrastructure for computation, but the provider is still centralized. How can one ensure privacy? How can one ensure that the network (or Sharemind) is not acting maliciously? This is where zk proofs and zk-verifiable computation come in.

Nil Message Compute (NMC)

NMC is a new distributed computing methodology developed by the team at Nillion. It is an upgraded version of MPC in which nodes don’t have to exchange their outputs with each other. For this, it uses a cryptographic primitive called One-Time Masking (OTM), which uses a series of random numbers called blinding factors to mask a secret, similar to a one-time pad. OTM is designed to deliver correctness with efficiency, meaning NMC nodes are not required to exchange any messages to perform a computation. This means NMC does not suffer from SMPC’s scalability problem.
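
The blinding-factor idea can be illustrated with a toy example: hide a secret behind fresh randomness so the masked value alone reveals nothing, much like a one-time pad. This is only the general intuition, not Nillion’s actual OTM construction.

```python
# A toy illustration of masking a secret with a one-time blinding factor,
# in the spirit of a one-time pad. This is not Nillion's actual OTM
# construction, only the general idea of hiding a value behind fresh randomness.
import secrets

P = 2**61 - 1

def mask(secret: int) -> tuple[int, int]:
    """Blind a secret with a fresh random factor; return (masked value, blinding factor)."""
    blinding = secrets.randbelow(P)
    return (secret + blinding) % P, blinding

def unmask(masked: int, blinding: int) -> int:
    """Only whoever holds the blinding factor can recover the secret."""
    return (masked - blinding) % P

masked_value, blinding_factor = mask(42)
assert unmask(masked_value, blinding_factor) == 42
```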

ZK Verifiable Computation

ZK verifiable computation means producing a zero-knowledge proof over a set of inputs and a function, proving that the computation, whichever system performed it, was performed correctly. ZK verified computation is still nascent, but it is a critical part of scaling Ethereum, and it can also support the transitionary phase we believe will happen in Ethereum adoption.

ZK proofs come in various shapes and forms; a comparison of these systems is shown in the table below, taken from the paper “Off-Chaining Models”.

Comparison of different Computation Systems (Source)

Let’s dive into what is required to verify a computation with a ZK proof:

  1. A choice of proof primitive that is cheap to generate, light on memory, and easy to verify
  2. A zk circuit designed to generate proofs of that primitive over computations
  3. A computational system/network that runs a given function over the provided inputs and produces an output
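
How these three pieces fit together can be sketched with a mocked proving backend: the “proof” below is just a hash commitment standing in for a real SNARK/STARK, so it only shows the interface shape (prove returns an output plus a proof; verify checks the proof, which in a real system happens without rerunning the function).

```python
# A mocked prove/verify interface. MockProver stands in for a real proof
# system (Groth16, PLONK, STARKs, ...); its "proof" is just a hash, so this
# illustrates the interface shape only, not actual zero-knowledge.
import hashlib
import json

class MockProver:
    def prove(self, fn, inputs):
        output = fn(*inputs)                      # 3. the compute network runs the function
        transcript = json.dumps([fn.__name__, list(inputs), output])
        proof = hashlib.sha256(transcript.encode()).hexdigest()   # stand-in for a real proof
        return output, proof

    def verify(self, fn, inputs, output, proof):
        # A real verifier checks the proof against a small verification key;
        # this mock can only recompute the transcript hash.
        transcript = json.dumps([fn.__name__, list(inputs), output])
        return proof == hashlib.sha256(transcript.encode()).hexdigest()

def fibonacci(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

prover = MockProver()
out, proof = prover.prove(fibonacci, (20,))
assert out == 6765 and prover.verify(fibonacci, (20,), out, proof)
```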

Developer TAM — Proof Efficiency Dilemma

Onboarding developers into Solidity is hard enough as it is; asking them to also learn Circom and the like to build circuits, or a specific programming language like Cairo to build zk-apps, seems like a far-fetched dream, especially now that Web3 is no longer the talk of the (tech) town.

Source: Artemis Dashboard
Developer distribution for Web2 programming languages, Source: Statista

Looking at the above statistics, it seems more sustainable to bring Web3 to a developer’s environment rather than bringing a developer to a Web3 environment.

If ZK is the future of Web3, and Web3 apps need to be built with existing developer skillsets, ZK circuits need to be designed so that they support proof generation over computations written in languages like JavaScript or Rust.

Such solutions do exist, and two players come to mind: RiscZero and Lurk Labs.

Both teams share a very similar vision where they allow developers to build zk-apps without having to go through a steep learning curve.

Lurk Labs is at an earlier stage, but the team has been working on the project for quite some time. They’re focused on generating Nova proofs through a general-purpose circuit. Nova was introduced by Abhiram Kothapalli of Carnegie Mellon University, Srinath Setty of Microsoft Research, and Ioanna Tzialla of New York University. Nova proofs have special advantages over other SNARK systems for incrementally verifiable computation (IVC), a concept in computer science and cryptography that enables verification of a long-running computation without recomputing the entire thing from scratch. When computations are long and complex, proofs need to be optimized for IVC.
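
A toy example of the IVC pattern, assuming nothing about Nova’s actual construction: each step’s “proof” commits to the previous proof and the new state, so the commitment stays the same size no matter how many steps accumulate. A real folding scheme does this with cryptographic soundness; the hash chain below only shows the shape.

```python
# A toy illustration of the IVC pattern: each step carries a small commitment
# to the previous commitment and the new state, so you never replay the whole
# history. This is a plain hash chain, not a real folding scheme like Nova.
import hashlib

MOD = 2**61 - 1

def step(state: int) -> int:
    """The repeated computation being proven, e.g. one iteration of a loop."""
    return (state * state + 1) % MOD

def fold(prev_proof: str, old_state: int, new_state: int) -> str:
    """Stand-in for folding: absorb one more step into a constant-size commitment."""
    return hashlib.sha256(f"{prev_proof}|{old_state}|{new_state}".encode()).hexdigest()

state, proof = 3, "genesis"
for _ in range(1000):
    new_state = step(state)
    proof = fold(proof, state, new_state)   # commitment size stays constant as steps accumulate
    state = new_state

print(state, proof[:16])   # final state plus a fixed-size commitment to all 1000 steps
```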

Comparison of different SNARKs

Nova proofs are not as “out-of-the-box” as other proof systems, since Nova is just a folding scheme: another proving system, such as Spartan, is needed to generate the final proof. This is why Lurk Labs built Lurk Lang, a LISP implementation. Since LISP is a minimal, low-level language, it makes proof generation over the universal circuit easy, and transpilation between it and JavaScript is straightforward. This will help Lurk Labs tap into the 17.4 million strong JavaScript developer TAM. The same transpilation can be done for other common languages such as Python.

All in all, Nova proofs seem like a great proof primitive. Although they have the drawback that proof size grows linearly with the size of the computation, on the flip side, Nova proofs have further scope for compression.

RiscZero, by contrast, is built on STARKs. STARK proof sizes grow only polylogarithmically with the size of the computation, so they are better suited to verifying very large computations. To further improve the developer experience, RiscZero also released the Bonsai Network, a decentralized computation network verified by proofs generated by RiscZero. Here is a small infographic showing how RiscZero’s Bonsai network works.

Source: Bonsai Network

The beauty of the Bonsai network’s design is that a computation can be initiated, verified, and its output consumed, all on-chain. All of this sounds like a utopia, but STARK proofs come with their own issue: verification costs are high.

Nova proofs seem great for repetitive computations (the folding scheme provides a lot of efficiency) and for small computations, which possibly makes Lurk a great solution for verifying ML inference.

Who wins?

The table below gives a comparison of the different systems:

Is there a clear winner here? We don’t know. But each of these has its own merits. While NMC looks like a clear upgrade from SMPC, the network isn’t live and hasn’t been battle-tested.

The benefit of ZK verifiable computation is that it is secure and privacy-preserving, but it has no built-in notion of secret sharing. If a system relies purely on zk-verified computation, the computer (or single node) must be very powerful to perform heavy computations. To enable load sharing and balancing while preserving privacy, there must be secret sharing. This is where a system like SMPC or NMC can be combined with a ZK-proof generator like Lurk, RiscZero, or Nil Foundation to create robust decentralized, verifiable, outsourced compute infrastructure.
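
A minimal sketch of that combination, under the same illustrative assumptions as the earlier snippets: additive shares spread the load across workers, and each worker attaches a commitment (standing in for a real ZK proof from a system like Lurk or RiscZero) that its local step was computed correctly.

```python
# A structural sketch: additive shares spread the load across worker nodes,
# and each worker attaches a commitment to its local step. The hash commitment
# stands in for a real ZK proof, and unlike a real ZK proof the check below
# sees the input share; only the shape of the composition is shown.
import hashlib
import secrets

P = 2**61 - 1
PUBLIC_FACTOR = 7          # the outsourced function: multiply the secret by a public constant

def share(secret: int, n: int = 3) -> list[int]:
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    return shares + [(secret - sum(shares)) % P]

def worker(node_share: int) -> tuple[int, str]:
    """Each node computes on its share and commits to the local step."""
    local_out = (node_share * PUBLIC_FACTOR) % P
    return local_out, hashlib.sha256(f"{node_share}->{local_out}".encode()).hexdigest()

def check(node_share: int, local_out: int, proof: str) -> bool:
    """The result node accepts an output share only if its proof checks out."""
    return proof == hashlib.sha256(f"{node_share}->{local_out}".encode()).hexdigest()

x_shares = share(1000)
results = [worker(s) for s in x_shares]
assert all(check(s, out, prf) for s, (out, prf) in zip(x_shares, results))
assert sum(out for out, _ in results) % P == 1000 * PUBLIC_FACTOR   # reconstructs 7000
```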

This becomes especially important because today’s MPC/SMPC networks are centralized. The largest MPC provider today is Sharemind, and a ZK verifiability layer on top of it could prove useful. The economics of decentralizing MPC networks haven’t been figured out yet. The NMC model is theoretically an upgrade to MPC, but we are yet to see the network and its economics succeed.

Among ZK proofs, too, there may not be a winner-takes-all situation. Each proof system is optimized for certain types of computation, and there is no one-size-fits-all model. Computations come in many kinds, and it is up to developers to make trade-offs between proof systems. I think there is a place for both STARK-based and SNARK-based systems, and their future optimizations, in the promised future of ZK.
