Snark: Architecture Q&A #3E9A

Isn’t the fact that this is a stateless version of the EVM a rather severe limitation?

Exactly right, and a great observation.

First, a primer on stateless and stateful processing. When we consider the EVM, we first think about tokenized assets, specifically ERC20 tokens. An ERC20 token needs its own Distributed Ledger Technology (DLT) and thus has to have state. Modelling an ERC20 token requires stateful processing.
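To make the distinction concrete, here is a minimal Python sketch (purely illustrative, not Snark's or any real token's implementation) of why an ERC20-style transfer is inherently stateful: the transfer is only meaningful if every validator agrees on the same balance table.

```python
# Illustrative only: an ERC20-style transfer depends on a shared balance
# table. Every validator must hold the same `balances` state for the
# overdraft check to mean anything -- hence stateful, shared-state processing.
balances = {"alice": 100, "bob": 0}

def transfer(sender: str, receiver: str, amount: int) -> None:
    if balances.get(sender, 0) < amount:
        raise ValueError("insufficient balance")  # valid only under consensus
    balances[sender] -= amount
    balances[receiver] = balances.get(receiver, 0) + amount

transfer("alice", "bob", 40)
assert balances == {"alice": 60, "bob": 40}
```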

Stateless processing, or rather non-shared-state processing, is another class of processing. A few use cases include cloud computing, lambdas, AI model training, 3D rendering, and anything else where compute power is currently sold and managed by third parties. These compute-as-a-service projects do not require shared state; they simply require secured execution.

Technically, we should refer to it as an off-chain VM. This could be encapsulated by a zero-knowledge-proof-secured JVM, EVM, Docker instantiation, Kubernetes cluster, or a completely bespoke solution (currently our prototype, but we want to facilitate any kind of execution environment).

From an architectural point of view, the EVM exists to be a generalized execution environment, while the DLT exists to be a state consensus mechanism. The abstraction of an off-chain, zero-knowledge-proof-secured EVM provides execution capabilities for execution use cases; however, it keeps the constraints of shared DLT consensus, and thus the same flaws as shared DLT consensus (addressed via our Eventually Consistent Consensus and Consensus First Design protocols).

So, to circle back to the question: it is perhaps most accurate to say that it adds no inherent benefit to something like an ERC20 token (which is a shared DLT concern, not an execution concern), and instead provides benefit where asynchronous or stateless processing use cases become available.

In Consensus, how exactly are the rounds established?

We love this question, because it creates so much internal conflict with regard to design.

We currently have three prototypes:

No rounds: with this design there is no concept of a round. Transactions are simply finalized and removed from processing as soon as they reach the hard limit of 2/3+1. Since state is updated with every transaction applied, the zero-knowledge proof and merkle tree are updated with every transaction. This does not allow for determinism.
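As a rough illustration of the no-rounds design (a hedged sketch with invented stakes, not the prototype's actual code), finalization is just a stake-weight check against the 2/3+1 limit, and the state commitment is recomputed on every applied transaction:

```python
import hashlib
from fractions import Fraction

# Assumed validator stakes, purely for illustration.
stake = {"a": 40, "b": 35, "c": 25}
total = sum(stake.values())

def finalized(witnesses: set) -> bool:
    # A transaction is finalized once witnessing stake crosses 2/3 of total.
    return Fraction(sum(stake[w] for w in witnesses), total) > Fraction(2, 3)

def apply_tx(state_root: bytes, tx: bytes) -> bytes:
    # Stand-in for the per-transaction merkle-tree / proof update:
    # with no rounds, the commitment changes on every single transaction.
    return hashlib.sha256(state_root + tx).digest()

root = b"\x00" * 32
if finalized({"a", "b"}):  # 75% of stake has witnessed tx1
    root = apply_tx(root, b"tx1")
```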

Witness rounds: a round begins with an unfinalized transaction tx1. The round initiates finalization as soon as tx1 has been finalized, and continues until all transactions received between the time tx1 arrived and the time tx1 was finalized are themselves finalized. A new round initiates on receiving the next transaction after tx1's finalization. This establishes rounds based on max witnesses and network propagation. It allows for full determinism, but leads to disagreements on when tx1 was finalized unless perfect node awareness is held by all participants.

Hash rounds: difficulty-based time hashes are created (as with blocks) to produce a result at a fixed interval. This requires proof of work, and we want to avoid that if at all possible.

Why do we need rounds at all? First, we need to discuss determinism. The key aspect of a deterministic system is that you can start from event 1 and continue until event n, and you will always arrive at the same outcome. Blockchains are examples of deterministic systems.

Why do we need determinism? Determinism is what provides trust. Anyone can independently validate events 1 through n and arrive at the same conclusion. So, if I have event 1, I need to be able to ensure that it was finalized. If, however, stake is mutable and constantly changing over time, I need to fix stake to specific points in time.

Solution 1: Rounds. Stake is locked in when a round starts and remains locked until a new round starts. We record all of the round info: the round number, all participating nodes, the stake of each node, and all transactions finalized in the round. Replaying this information lets us arrive at a deterministic outcome.
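A minimal sketch of what such a round record could look like (field names are our own, not a specification): replaying the records in order reproduces the same final state on any machine.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class RoundRecord:
    number: int
    stakes: dict          # node -> stake, locked for the whole round
    finalized_txs: tuple  # transactions finalized within the round

def replay(records: list) -> bytes:
    # Same records, same order, same locked stakes -> same final state.
    # This is the determinism property the round record exists to provide.
    state = b"\x00" * 32
    for record in sorted(records, key=lambda r: r.number):
        for tx in record.finalized_txs:
            state = hashlib.sha256(state + tx).digest()
    return state

log = [RoundRecord(1, {"a": 40, "b": 35, "c": 25}, (b"tx1", b"tx2"))]
assert replay(log) == replay(list(log))  # every replica arrives at the same root
```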

Solution 2: Self-awareness. When a transaction is first witnessed, it records the total node info it is aware of; as validators finalize, it adds each value to its own internal state. This solution seems elegant, but has the flaw of requiring agreement on "first witnessed". A block is a hard rule for a round; we need another way to provide this.

Solution 3: Trust. When you receive your balance at the bank, you trust that they did not make any mistakes. Can't believe we used the trust swearword, can you? When we say "trust" we mean a zero-knowledge proof. What if you didn't require determinism, but could still prove the current state is accurate? What if we provided you with a zero-knowledge proof of the current state that you could independently validate?

As soon as we leverage solution 3, we no longer need a concept of rounds, since there is no longer a need for determinism. This is difficult to explain in our messaging: it's easy to understand a deterministic system, but difficult to understand a zero-knowledge-proof system. The end result is the same; in a deterministic system you prove that the current state is the true answer, and in a zero-knowledge-proof system you prove the same.

The current design, which incorporates zero-knowledge proofs for the system state, excludes rounds entirely.

But what if we still wanted to allow for full determinism? This is where the internal conflict keeps occurring: we want to allow for both a zero-knowledge-proof system and a fully deterministic solution, but then we require rounds. If we require rounds, we require round consensus; following this process in reverse step by step, we eventually arrive back at blocks and proof of work as the best solution for a deterministic system.

This leads us to zero-knowledge-proof-finalized transactions. What do we want to accomplish with rounds? We want to fix the inputs and outputs. We want to be able to say that round 1 had five participants, each with 20% of the total weight, and that they witnessed the following transactions. And we want that information to be secured. We are trying to address determinism here, not blocks or scaling.

So what is it we are trying to prove? We are trying to prove that tx1 was finalized. What if, instead, we had an append-only finalized-transaction log? It turns out there is a great tool for this: conflict-free replicated data types (CRDTs), which allow for eventually consistent replication of immutable, concatenated data. While this does not allow for the determinism of stake, it does allow for the determinism of state. We are currently using this implementation to allow for determinism of the zero-knowledge proofs.
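To make the CRDT idea concrete, here is a hedged sketch of the append-only finalized-transaction log as a grow-only set, one of the simplest CRDTs (our production structure differs; this only demonstrates the convergence property):

```python
class GSet:
    """Grow-only set CRDT: entries are only ever added, never removed."""

    def __init__(self) -> None:
        self.entries = set()

    def add(self, tx: bytes) -> None:
        self.entries.add(tx)  # append-only: a finalized tx is never retracted

    def merge(self, other: "GSet") -> None:
        # Union is commutative, associative, and idempotent -- the three
        # properties that make replication conflict-free and eventually
        # consistent regardless of message order or duplication.
        self.entries |= other.entries

a, b = GSet(), GSet()
a.add(b"tx1"); b.add(b"tx2"); b.add(b"tx1")  # duplicates are harmless
a.merge(b); b.merge(a)
assert a.entries == b.entries == {b"tx1", b"tx2"}
```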

Does the zero knowledge proof built from all finalized transactions play the role of a block?

Yes, with a disclaimer: it technically fulfills the role of the entire blockchain. The single proof, along with its corresponding merkle tree, gives you the current state of the entire system. This again goes back to the conversation on determinism in the question above; please read through it for a full explanation.
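For intuition, a toy merkle-root computation (illustrative, not our actual tree) shows how a single commitment can stand in for the whole history: anyone holding the root can check a transaction's inclusion with a short proof instead of replaying the chain.

```python
import hashlib

def merkle_root(leaves: list) -> bytes:
    # Hash the leaves, then pair-and-hash upward until one root remains.
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the odd node out
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# One 32-byte root commits to every finalized transaction below it.
root = merkle_root([b"tx1", b"tx2", b"tx3"])
```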

Is 51% or 67% the threshold for finalization?

Another fun topic of internal debate. There are two issues here. Issue one is finality: at what point would the reversal of a transaction put the entire state of the system at risk? As soon as a transaction has more than 50% agreement (51%), the transaction is finalized. In a perfect system where all nodes had awareness of all participating nodes and their exact stake, this would be the perfect solution: all of them would arrive at the 51% answer at the same time, and each one would finalize the transaction and stop publishing it.

We unfortunately know that real-world implementations are never that smooth. So the next issue is when to stop broadcasting. With Consensus First Design, each node broadcasts information on its transactions, so a node that reaches 51% can't yet stop broadcasting, since it needs other nodes to be aware that 51% has been reached. If this node contributes 40% of stake and stops broadcasting (because it finalized at 51%), the rest might still be at 49%, and with that node silent they are stuck at 9%. So instead we need another value at which broadcasting can stop: one that tells us most of the system has received the 51% confirmation. That number is 2/3+1, or 67%. This is not about finalization, but about broadcast termination. But doesn't the same issue occur then? Haven't we simply moved the goal from 51% to 67%, with the same problem remaining?

Here we need to start talking about transaction ordering. When a transaction reaches the bottom of the stack and has 51% finalization, we can also remove it. But couldn't we do that sooner? Both mechanisms are implemented to ensure faster propagation and cleanup: the primary rule is built for 90% of cases, the secondary rule for the next 5%, and the last 5% are assumed to be out of state.

So finalization is 51%.
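A short sketch makes the two thresholds concrete (a hypothetical helper, with stake in integer units): more than 1/2 of stake finalizes a transaction, while more than 2/3 additionally ends broadcasting.

```python
from fractions import Fraction

def status(confirmed_stake: int, total_stake: int) -> str:
    # Exact rational arithmetic avoids floating-point edge cases at 2/3.
    share = Fraction(confirmed_stake, total_stake)
    if share > Fraction(2, 3):
        return "finalized; stop broadcasting"   # most nodes have seen 51%
    if share > Fraction(1, 2):
        return "finalized; keep broadcasting"   # others may still be below 51%
    return "pending"

assert status(51, 100) == "finalized; keep broadcasting"
assert status(67, 100) == "finalized; stop broadcasting"
```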

Do tokens as first-class citizens diminish the native currency?

Yes. So why did we include it? This journey actually started with one of our developers, who was primarily focused on dApps. They would build a dApp that was to be fueled by their token; however, on systems such as Ethereum, even though their token could live on Ethereum, all interactions had to be paid for in Ethereum's fuel (Ether). Their solution? Fork Ethereum and launch a mainnet fueled by the dApp token. We started seeing this more and more in the space.

So how can we have dApp developers build systems on our blockchain while keeping all of the benefits they want to achieve? This is where we started focusing on economic abstraction, which eventually led to multi-state concurrency and multi-token concurrency. (We have yet to release an article on the latter; it is a more straightforward implementation of multi-state that simply allows a single wallet to hold multiple assets: if an account state is seen as public key => value, this can be seen as public key => []values.) This allows dApp developers to keep their dApps on Snark while still retaining all of their own benefits.
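A minimal sketch of that public key => []values shape (token symbols and structure are invented for illustration):

```python
from collections import defaultdict

# One account maps to many (asset -> amount) entries instead of one balance.
accounts = defaultdict(dict)

def credit(pubkey: str, asset: str, amount: int) -> None:
    accounts[pubkey][asset] = accounts[pubkey].get(asset, 0) + amount

credit("0xabc", "SNARK", 10)   # assumed native-token symbol
credit("0xabc", "DAPP", 250)   # a dApp's own token in the same wallet
assert accounts["0xabc"] == {"SNARK": 10, "DAPP": 250}
```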

But again, to the question: this does diminish the native currency. The native currency becomes important only for stake and nothing else, since a popular dApp will be entirely fueled by its own token.

What is the purpose of a token? The purpose of a token is to reward validators for securing the network. Does it have to be the native token of the system? No, it simply needs to be profitable to the validators.

Here the discussion starts to become more fundamental and less technical: the purpose is to design a system for the future, something that can mutate as the environment on it mutates. As the dApps of the future are designed and become popular, they will provide validator incentives and a reward structure. The Snark token itself will become more of an anchor that simply allows validators to participate, and less of a velocity item. As validators increase, token scarcity will increase, and eventually the token value could become entirely fixed. It will, however, empower the entire network for future generations.

Final thoughts

We are truly honored by the interest in the project so far. We still have many components in flux and currently under research. We have a few fundamentals ironed out and tested that we are happy with, but the goal behind Snark was always simple: it was meant to be a research house for the issues we see in the market and how to address them. Our original objective was to compartmentalize each issue and offer the solutions as patch-on scaling additions for current blockchains; we then realized that they could also be packaged together.

The goal is still a simple one: use technology to secure the decentralized systems of the future, and provide a platform to facilitate this.