A Storage-based Computation Paradigm Enabled by Arweave

outprog
Nov 23, 2020

Digital consensus is essentially storage consensus.

The storage-based computation paradigm is a trusted computation paradigm for blockchain to achieve ultimate scalability.

Since early 2020, DeFi on Ethereum has seen explosive growth. As the yield farming craze led by COMP and YFI took off, Ethereum became severely congested, pushing gas prices as high as 500 Gwei and DeFi transaction fees to hundreds of dollars. A scalable smart contract solution is therefore urgently needed to relieve the congestion on Ethereum.

In early October, Vitalik posted “A Rollup-centric Ethereum Roadmap”, which brought Layer-2 solutions into the spotlight. Among them, ZK Rollups (ZKR) and Optimistic Rollups (OPR) stand out. The key difference between ZKR and OPR is the type of proof they adopt: the former relies on validity proofs, while the latter relies on fraud proofs.

In a ZKR, a validity proof is used to prove the integrity of transactions. Specifically, a batch of transactions is submitted to an Ethereum smart contract together with a proof of their computational integrity. If the proof is valid, the batch is accepted; if not, it is rejected. Fraud proofs, as adopted by OPR, work the other way around: operators publish a new state along with a deposit, and anyone can submit a non-interactive fraud proof within a predetermined challenge period. If no one proves the state incorrect, it is finalized; if someone does, the operator is slashed.
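As a rough illustration of the two acceptance flows (not any specific rollup's protocol; the verification functions and the challenge window are simplified placeholders), a minimal sketch in Python:

```python
# Simplified sketch of the two acceptance flows described above.
# verify_validity_proof / verify_fraud_proof are placeholders, not a real rollup API.

def zkr_accept(batch, validity_proof, verify_validity_proof):
    """ZK Rollup: accept the batch only if its validity proof checks out."""
    return verify_validity_proof(batch, validity_proof)

def opr_accept(new_state, deposit, fraud_proofs, verify_fraud_proof):
    """Optimistic Rollup: finalize unless someone proves fraud during the challenge window."""
    for proof in fraud_proofs:              # proofs submitted during the challenge period
        if verify_fraud_proof(new_state, proof):
            return ("rejected", deposit)    # state rejected, operator's deposit slashed
    return ("finalized", 0)                 # no valid fraud proof: the state is finalized
```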

The difference between ZKR and OPR lies in how they finalize a new state. I think ZKR is similar to PoW in that it uses a cryptographic algorithm to verify a computed result, whereas OPR is more like PoS because it requires operators to lock up a deposit (akin to ETH2 staking) and relies on a series of governance-oriented measures. Both have their own flaws. Constrained by current cryptographic theory and tooling, ZKR can hardly handle complex programs with its very limited opcode set. As for OPR, its governance mechanism requires operators to lock up funds, which reduces liquidity.

To solve Ethereum's scalability issue, ZKR and OPR both turn to data compression and off-chain computation. As shown below, data is heavily compressed to reduce the load on Layer 1, and computation is moved to Layer 2. Layer 1 thus stores less data and only needs to verify states, relieving it of the heavy load.

Although the load on Layer 1 can be reduced, it is always capped by the block size limit and the gas limit. As Layer-1 DeFi and composable applications consume an increasing share of gas, Layer 2 will have to compete with Layer 1 for limited on-chain resources; and if there is more than one Layer-2 protocol, they will also compete with each other. What happens when a less competitive Layer-2 system cannot offer an attractive enough gas price to submit its data and proofs? Either it grinds to a halt, or its Layer-2 transactions waiting for finality pile up faster than they can be processed, breaking the UX. It would also be detrimental to the composability of Layer 2.

Ethereum is known as a “World Computer”: it is designed to handle both computation and storage on the blockchain, which means every node in the network has to take part in the computation. Such an on-chain computation model therefore carries a relatively high cost. Even if Layer-2 solutions compress both data and computation, the computation process (i.e. validation) still has to be done on Layer 1.

This post introduces a new computation paradigm, different from Ethereum's on-chain computation model. In the new paradigm, the blockchain only needs to guarantee the availability and finality of the data stored on it, while the whole computation process is moved off-chain. Assume the program is deterministic: if all the inputs of a function remain unchanged, the output is always the same. Take z = x + y, for example. If the input values x = 1 and y = 2 are recorded on-chain, the output z will always be 3 no matter where the function is computed. As long as all the parameters of a program are recorded deterministically, the output is the same whether the program runs on-chain or not. In other words, the program can actually be run by anyone, anywhere, off-chain. Because the computation is completely decoupled from the chain and relies entirely on deterministically stored parameters, we call it a storage-based computation paradigm.
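A minimal sketch of the idea, with the chain modeled as a plain append-only log (the record names are illustrative, not a real API):

```python
# The "chain" is modeled as an append-only list of records: anything written to it
# is immutable, so every reader sees exactly the same parameters.
chain = []

def record(tx):
    chain.append(tx)
    return len(chain) - 1          # position acts like a block height

# Record the inputs x = 1 and y = 2 on-chain.
record({"name": "x", "value": 1})
record({"name": "y", "value": 2})

def compute_z(chain):
    """Anyone, anywhere, can replay this off-chain and must get the same z."""
    params = {tx["name"]: tx["value"] for tx in chain}
    return params["x"] + params["y"]

assert compute_z(chain) == 3       # deterministic: same inputs, same output
```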

Turing Machine as Its Origin

As we all know, whether a computer follows the von Neumann architecture or the Harvard architecture, it is in essence a general-purpose Turing machine.

A Turing machine is a hypothetical machine. It consists of a tape of infinite length and a head with a state register. The head moves back and forth along the tape, reading symbols and writing new ones onto it. Such a machine can carry out arbitrarily complex computations, a property known as Turing completeness.

With the help of blockchain technology, the tape of a Turing machine can be replaced with a blockchain. Then we can get a storage-based computation paradigm, as shown below:

As shown above, we can write a program and upload its code to a blockchain (say, at block height 102). Anyone can download the code from the blockchain and run the program off-chain. The inputs and outputs of the program come from the blockchain, which acts as the tape. Thanks to the traceability and immutability of the blockchain, the inputs and outputs of the off-chain program are the same as those of its on-chain counterpart. That is to say, with deterministic parameters from the blockchain as inputs, the states generated by the program are also deterministic.

In a storage-based computation paradigm, the source code, inputs and outputs of a program are all stored on a blockchain. When you run the trusted code from the chain off-chain, the outputs are guaranteed to be consistent as long as the inputs are trusted parameters from the chain. In other words, anyone can run the program and obtain the same output, so the computation itself is trusted.
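To make the "anyone can re-run it" point concrete, here is a hedged sketch in which a program's inputs come from the chain and any two independent parties reach the same final state. The state-transition rule (a toy token ledger) is only an example, not any real application's contract:

```python
# Hedged sketch: a toy "program" whose inputs are read from the chain in order.
on_chain_inputs = [
    {"op": "mint",     "to": "alice", "amount": 100},
    {"op": "transfer", "from": "alice", "to": "bob", "amount": 30},
]

def apply(state, tx):
    """Deterministic state transition: same tx sequence -> same final state."""
    balances = dict(state)
    if tx["op"] == "mint":
        balances[tx["to"]] = balances.get(tx["to"], 0) + tx["amount"]
    elif tx["op"] == "transfer":
        assert balances.get(tx["from"], 0) >= tx["amount"], "insufficient balance"
        balances[tx["from"]] -= tx["amount"]
        balances[tx["to"]] = balances.get(tx["to"], 0) + tx["amount"]
    return balances

def replay(inputs):
    state = {}
    for tx in inputs:
        state = apply(state, tx)
    return state

# Two independent off-chain operators replay the same on-chain inputs...
assert replay(on_chain_inputs) == replay(on_chain_inputs) == {"alice": 70, "bob": 30}
# ...so any divergence immediately exposes a dishonest or buggy operator.
```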

The paradigm works in theory, but using Ethereum as the storage infrastructure is too costly for applications, and the block size limit would also constrain their scalability. With Arweave and its permanent storage, however, the storage-based computation paradigm can be put into practice.

Introduction to Arweave

Arweave is a blockchain-based file storage protocol. It charges users a one-time, upfront fee for permanent data storage. By using a set of simple rules, it incentivizes users to store data forever.

Permanent storage is the core feature of Arweave. To achieve it, Arweave solves two key problems. The first is how to price permanent storage. Statistics show that storage costs have been falling at a staggering rate: on average, the cost per gigabyte drops by about 30.57% per year. If that decline continues, the cumulative cost of storing data forever converges to a finite constant, which Arweave treats as the permanent storage cost and uses as the fee benchmark. At the time of writing, it costs 2.45 AR (about $9.8) to store 1 GB of data; note that this figure changes as the price of AR fluctuates.
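As a back-of-the-envelope sketch of why a one-time fee can cover perpetual storage, assume the 30.57% annual decline continues indefinitely; the total cost is then a convergent geometric series (the figures are illustrative, not Arweave's actual pricing model):

```python
# If storing 1 GB for one year costs c0 today and that cost falls by r = 30.57%
# every year, then storing it forever costs a convergent geometric series:
#   total = c0 * (1 + (1 - r) + (1 - r)**2 + ...) = c0 / r
c0 = 1.0          # illustrative first-year cost per GB (arbitrary unit)
r = 0.3057        # annual decline in storage cost

perpetual_cost = c0 / r
print(perpetual_cost)   # ~3.27: storing forever costs only ~3.3x the first year
```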

The second is how to incentivize miners to store data forever. Arweave introduces a new mining mechanism: to mine a valid new block, miners must reference a randomly chosen recall block, i.e. prove that they can access the data in that block. This incentivizes miners to store as many blocks as possible, especially rare ones, because when a rarely stored block is chosen as the recall block, the miners who hold it have an edge over everyone else in producing the next block. That is how Arweave achieves permanent storage.
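A hedged sketch of the incentive (Arweave's real consensus is more involved; the recall-block selection, hashing and difficulty check below are all simplified stand-ins):

```python
import hashlib

def pick_recall_height(prev_block_hash: bytes, chain_height: int) -> int:
    """Deterministically pick a past block whose data the miner must hold."""
    return int.from_bytes(hashlib.sha256(prev_block_hash).digest(), "big") % chain_height

def try_mine(prev_block_hash: bytes, chain_height: int, local_storage: dict,
             nonce: bytes, target: int):
    """A miner can only produce a candidate block if it actually stores the recall data."""
    recall_height = pick_recall_height(prev_block_hash, chain_height)
    recall_data = local_storage.get(recall_height)
    if recall_data is None:
        return None                      # dropped the recall block: cannot mine this round
    digest = hashlib.sha256(prev_block_hash + recall_data + nonce).digest()
    return digest if int.from_bytes(digest, "big") < target else None
```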

Arweave enables data to be stored permanently at a stable and cheap price. Thanks to blockchain technology, the data stored on Arweave is verifiable and traceable. Arweave is therefore well suited to act as the tape of a Turing machine in this trusted computation paradigm.

Conclusion

The key to a storage-based computation paradigm is permanent storage at a fixed, low cost. Only permanent storage can guarantee data availability: with that data as input, an off-chain computation will always produce the same state as an on-chain one. A fixed, low cost lets applications keep their consensus cost within a stable range, avoiding the fierce competition for resources that Ethereum suffers under congestion. A stable cost therefore makes applications more usable.

By combining the storage-based computation paradigm with Arweave, we get a close-to-ideal deterministic Turing machine, a new trusted computation model ready for real-world use.

The storage-based computation paradigm has the following advantages:

  1. It supports computations of arbitrary complexity; computing power depends only on the performance of the off-chain machine;
  2. It lowers consensus cost by separating storage from computation; the computation cost is borne by the application operators (i.e. the off-chain computers);
  3. It offers composability and sharding. Application operators download only the data they need; when multiple applications compose with each other, an operator only needs the data of those applications, not the entire chain (unlike Ethereum, where a DApp has to run on top of a full Geth client holding all the data);
  4. It has excellent scalability. The lower consensus cost increases throughput, and the paradigm enables sharding on upload as well as on download, so performance is limited only by network bandwidth;
  5. It has no requirements on programming languages. You only need to upload the source code of the target program and the serialization of all its inputs to the blockchain in advance (see the sketch after this list).
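As a sketch of point 5: deploying a program in this paradigm just means writing its source code and serialized inputs to the chain. The `upload` helper below is hypothetical (a stand-in for a real Arweave client), and the file name and tags are purely illustrative:

```python
import json

def upload(data: bytes, tags: dict) -> str:
    """Hypothetical storage client: writes data to the chain and returns a tx id."""
    raise NotImplementedError("replace with a real Arweave client")

# 1. Store the program itself (any language: to the chain it is just bytes).
with open("pricing_model.py", "rb") as f:
    code_tx = upload(f.read(), {"App": "demo", "Type": "source-code"})

# 2. Store the serialized inputs the program will read.
inputs = json.dumps({"x": 1, "y": 2}).encode()
input_tx = upload(inputs, {"App": "demo", "Type": "input", "Code-Tx": code_tx})

# Anyone can later fetch code_tx and input_tx, run the program off-chain,
# and reproduce exactly the same output.
```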

Our research on rollups and ETH2.0 suggests that all of these efforts push in the same direction: moving computation off-chain, with fully off-chain computation as the apparent end goal. From this we can conclude that as long as a program is free from ambiguity and its inputs and outputs are stored deterministically, its computed results must be deterministic.

The storage-based computation paradigm is quite different from any previous computation model, and it may take a long time for the public to accept and recognize it. However, I believe it is an outstanding trusted computation paradigm, and the one closest to the essence of a Turing machine.

