A Journey of Cognitive Breakthroughs

A few months ago, I decided to attempt an experiment. The resulting technology unlocks a new world of powerful applications.

Tal Muskal
The DAPP Network Blog
5 min read · Aug 6, 2019

--

It all started with an idea.

A few months ago, I decided to attempt an experiment: What if I could create an airdrop that wouldn’t need to store genesis data in RAM?

So in I went, into brainstorm mode, and out came the following plan:

  1. Create a dataset containing all the genesis accounts and their respective balances which is accessible to everyone off-chain (through torrent/IPFS/HTTP).
  2. Calculate and store a Merkle root for that dataset in RAM.
  3. When a user performs a “claim” operation, the dApp client-side script can pass a Merkle proof along with the user’s specific data entry (or bucket of entries) from the original dataset.
  4. The contract verifies the proof and issues the actual tokens for the user while setting a “claimed” flag for the user in a table (a minimal sketch follows this list).
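
To make steps 3 and 4 concrete, here is a minimal sketch of what such a claim action might look like in an EOSIO contract. The action signature, leaf encoding, and table names are illustrative assumptions, not the actual implementation:

```cpp
#include <eosio/eosio.hpp>
#include <eosio/crypto.hpp>
#include <eosio/asset.hpp>
#include <eosio/singleton.hpp>
#include <cstring>
#include <string>
#include <vector>

using namespace eosio;

class [[eosio::contract("airclaim")]] airclaim : public contract {
public:
   using contract::contract;

   // `proof` holds the sibling hashes from the user's leaf up to the root;
   // `sides[i]` is nonzero when the i-th sibling sits on the left.
   [[eosio::action]]
   void claim(name user, asset balance,
              std::vector<checksum256> proof, std::vector<uint8_t> sides) {
      require_auth(user);

      claims_t claims(get_self(), get_self().value);
      check(claims.find(user.value) == claims.end(), "already claimed");

      // Recompute the leaf hash from the user's entry in the off-chain dataset.
      std::string entry = user.to_string() + "|" + balance.to_string();
      checksum256 node = sha256(entry.data(), entry.size());

      // Fold the proof upward and compare against the root stored in step 2.
      for (size_t i = 0; i < proof.size(); ++i)
         node = sides[i] ? hash_pair(proof[i], node) : hash_pair(node, proof[i]);
      check(node == root_t(get_self(), get_self().value).get().root,
            "invalid Merkle proof");

      // Flag the claim so it cannot be replayed, then issue the tokens
      // (the inline action to the token contract is omitted here).
      claims.emplace(user, [&](auto& row) { row.account = user; });
   }

private:
   struct [[eosio::table]] claim_row {
      name account;
      uint64_t primary_key() const { return account.value; }
   };
   typedef multi_index<"claims"_n, claim_row> claims_t;

   struct [[eosio::table]] root_row { checksum256 root; };
   typedef singleton<"merkleroot"_n, root_row> root_t;

   // Concatenate two 32-byte nodes and hash them.
   static checksum256 hash_pair(const checksum256& l, const checksum256& r) {
      auto a = l.extract_as_byte_array();
      auto b = r.extract_as_byte_array();
      char buf[64];
      std::memcpy(buf,      a.data(), 32);
      std::memcpy(buf + 32, b.data(), 32);
      return sha256(buf, sizeof(buf));
   }
};
```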

I implemented it and it worked perfectly. This got me excited. A huge, read-only database that a contract can access without needing RAM would have major ramifications for EOS.

Now, I pondered, how can I enhance this mechanism to allow modifications?

Eureka!

What if the contract could recalculate the relevant Merkle nodes as well as signal the changes externally? Both the changes to the Merkle tree and the data entry that changed could be calculated from within the contract. The data and cryptographic proofs would always be part of chain history, meaning a client could replay that history in order to sync up with the updated data that must be sent to the contract with each action.
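
The trick that makes writes possible: the same sibling path that proves the old leaf is also enough to recompute the root once that leaf changes. A minimal sketch, assuming the same pair-hashing scheme as above:

```cpp
#include <eosio/crypto.hpp>
#include <cstring>
#include <vector>

using eosio::checksum256;

// Concatenate two 32-byte nodes and hash them, as in the claim sketch above.
static checksum256 hash_pair(const checksum256& l, const checksum256& r) {
   auto a = l.extract_as_byte_array();
   auto b = r.extract_as_byte_array();
   char buf[64];
   std::memcpy(buf,      a.data(), 32);
   std::memcpy(buf + 32, b.data(), 32);
   return eosio::sha256(buf, sizeof(buf));
}

// After verifying the old leaf against `siblings`, fold the *new* leaf up the
// same path to obtain the new root. `sides[i]` is nonzero when the i-th
// sibling sits on the left.
checksum256 root_after_update(checksum256 new_leaf,
                              const std::vector<checksum256>& siblings,
                              const std::vector<uint8_t>& sides) {
   checksum256 node = new_leaf;
   for (size_t i = 0; i < siblings.size(); ++i)
      node = sides[i] ? hash_pair(siblings[i], node)
                      : hash_pair(node, siblings[i]);
   return node; // store as the new on-chain root; emit the changed entry too
}
```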

So I implemented it. The result, a huge read & write database accessible by the contract, was pleasing, but still incomplete. The mechanism was not transparent to either the contract or the client: both still had to handle the raw data and proofs explicitly.

Could we tweak the system to enhance its transparency?

Eureka! Again!

  1. Instead of the client sending the proof of the data, we can add a service layer of EOS-compatible API nodes that do this work for the user.
  2. Instead of sending the data and proof on every transaction, we can treat the RAM multi-index table as a cache layer, warm up this cache before usage, and evict it once it is no longer in use.
  3. A generic communication layer that allows the contract to communicate with the service node through failed asserts and console prints (those failed transactions don’t actually propagate to the BP nodes). This layer can be used to signal requests for other external services as well (a sketch of the pattern follows this list). But more on that later…
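
To illustrate point 3, here is a minimal sketch of the failed-assert pattern; the message format is an invented placeholder, not the protocol's actual encoding:

```cpp
#include <eosio/eosio.hpp>
#include <string>

// If a required row is missing from the RAM cache, the action aborts with a
// parseable message. The DSP's API node catches the failure locally (it never
// reaches the block producers), warms the cache, and resends the transaction,
// which then succeeds.
template <typename Table>
const auto& get_or_request(Table& cache, uint64_t key, const std::string& uri) {
   auto it = cache.find(key);
   if (it == cache.end())
      eosio::check(false, "required_service=ipfs_warmup uri=" + uri);
   return *it;
}
```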

Now, instead of just signaling the data to the external world, a contract can know the URI that will contain the data. With the file hash in hand, a contract can cryptographically prove that nobody tampered with the contents of a data block being loaded into the cache layer. It also removes the need to replay the entire history on every single API node: reconstructing and repinning the IPFS entry from chain history is only necessary if no copies of the given entry remain. All I needed to do was convert the Merkle trees and indexing layer so that they were implemented on top of the now-accessible IPFS “block storage” layer.
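
A minimal sketch of that integrity check, with illustrative names (in a real implementation the expected hash would come from the contract's own on-chain state, not from the caller):

```cpp
#include <eosio/eosio.hpp>
#include <eosio/crypto.hpp>
#include <vector>

using namespace eosio;

// Data only enters the RAM cache if it hashes to the content address the
// contract already expects (IPFS addresses are content hashes), so a DSP
// cannot slip tampered data into the cache.
class [[eosio::contract("vcache")]] vcache : public contract {
public:
   using contract::contract;

   [[eosio::action]]
   void warmup(std::vector<char> data, checksum256 expected) {
      // `expected` stands in for a hash the contract already holds on-chain,
      // e.g. a node of its Merkle tree; it would NOT be trusted from the
      // caller in a real implementation.
      checksum256 actual = sha256(data.data(), data.size());
      check(actual == expected, "fetched block does not match its known hash");
      // ... unpack `data` and emplace its rows into the RAM cache table ...
   }
};
```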

A new special kind of Merkle tree and one patent later, we began to witness the birth of what would come to be known as vRAM.

There was still one key puzzle piece missing. We needed a way to incentivize this novel form of IPFS pinning and the external service-supporting API nodes.

This is where my co-founder, Beni, came in.

A Free Market for Scaling Services

Beni quickly realized that, as with any service, if we wanted to optimize for both quality and low cost, we had to leverage the power of competition inherent in free markets. By allowing any individual or entity who wishes to run a node on the service layer to do so, we would maximize utility for the end users storing data through the system. These service providers need a native incentive mechanism to justify spinning up a node to store and fetch data, and that need led to the birth of the DAPP utility token.

Service provider rewards would be baked into the DAPP protocol. Users stake DAPP towards the packages offered by DAPP Service Providers (DSPs), and in exchange, DSPs earn inflation. The supply of DAPP inflates at an annualized rate of between 1% and 5%; the exact rate may be decided by the community, and new tokens are minted on a block-by-block basis. Those tokens are distributed among the DSPs in proportion to the amount staked towards their service packages.
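
As a back-of-the-envelope illustration of that proportional split (every number below is made up; this is not the real supply, rate, or stake):

```cpp
#include <cstdio>

int main() {
   double supply       = 1'000'000'000.0; // hypothetical total DAPP supply
   double annual_rate  = 0.02;            // somewhere in the 1%-5% band
   double total_staked =   600'000'000.0; // all DAPP staked network-wide
   double dsp_stake    =    30'000'000.0; // staked toward this DSP's packages

   // Tokens minted over a year, then split pro rata by stake.
   double minted     = supply * annual_rate;              // 20M DAPP
   double dsp_reward = minted * (dsp_stake / total_staked); // 5% share = 1M
   std::printf("DSP annual reward: %.0f DAPP\n", dsp_reward);
}
```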

And just like that, vRAM came to life.

A huge read & write database, accessible by the contract, with seamless access in terms of both the API and contract compatibility, all while requiring no change to the protocol and no trust in this layer. And the finishing touch: a free market of service providers who are natively incentivized to run nodes.

This was only the first piece of a new fully-featured service layer.

DSPs are now offering a host of powerful services, including oracles (LiquidOracles), free accounts (LiquidAccounts), and CRON services (LiquidScheduler) on a single, versatile network. With the recent introduction of LiquidLink, these services are starting to expand beyond a single chain, bringing mass-scale dApps within our grasp.

Tal Muskal is the CTO of LiquidApps.
