vRAM Guide for Experts
A Technical Deep-Dive Into The DAPP Network’s Decentralized Storage Solution
EOS — Home of dApps
The EOS blockchain was launched with the aim of housing scalable applications that are fully decentralized and serve a mainstream audience. In just a few months since launching its mainnet, EOS has surpassed its competitors in terms of blockchain activity. Delegated proof of stake and 500ms block times have given EOS the sheer processing capability to distinguish itself as the best-positioned protocol to support the next wave of paradigm-shifting dApps.
RAM Isn’t Being Used Properly
Despite its stellar performance, the cost and limitations of network resources on EOS are constraining dApp developers from building and scaling their applications. In particular, the requirement to store both dApp smart contracts and their state information, such as the balance for each user, permanently in RAM presents a barrier to dApp scalability. "EOS RAM" is a misleading term: rather than acting as random-access memory whose role is to hold data relevant to live operations, it functions more like a hard disk drive.
The EOS mainnet launched with a limit of 64GB of RAM, with Block Producers voting to increase the total RAM supply by an additional 64GB a year. However, dApps being built today that need to store user profiles, account balances and updated state information can often have RAM requirements of several gigabytes, a constraint that renders them essentially incompatible with the current RAM model.
Introducing The vRAM System
The vRAM System is an end-to-end decentralized, RAM-compatible storage alternative for developers building EOS dApps. It aims to enable the storage and retrieval of potentially unlimited amounts of data affordably and efficiently.
The vRAM system enables EOS RAM to be what it was intended to become — a lightweight cache layer for storing in-use dApp data only. It removes permanent data storage functionality from EOS RAM, allowing it to serve as a whiteboard for in-use data.
vRAM is the first product to utilize the DAPP Network infrastructure to power a range of dApps previously unimaginable due to the systemic limitations of the existing technology stack. The DAPP Network is comprised of a provisioning layer and a DAPP services layer which together form the base for building services that provide dApp developers with extra storage capacity, secure communication, and other critical utilities. Sitting on these DAPP foundations are the IPFS Service Layer and the unique vRAM library, which allow developers to work with multi-index tables, a familiar data structure optimized for efficient data retrieval.
At the heart of the DAPP Network are the DAPP Service Providers (DSPs) which provide critical services to developers building EOS dApps. DSPs create service packages which they then offer to developers in a free market. In order to access a specific service package, a dApp smart contract needs to stake a sufficient amount of DAPP Tokens as per the Service Agreement set out by the DSP. The ‘dappservices’ smart contract is responsible for managing the staking mechanism, package provisioning and quota management. Each service request is recorded on-chain and decreases the remaining quota of actions available to the dApp smart contract.
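The staking and quota flow described above can be sketched as a toy model. The class and field names here are illustrative only, not the actual `dappservices` contract interface:

```python
# Toy model of DSP package provisioning and quota accounting.
# ServicePackage / DappServices are illustrative names, not the
# real 'dappservices' contract interface.

class ServicePackage:
    def __init__(self, name, min_stake, quota_per_period):
        self.name = name
        self.min_stake = min_stake                # DAPP tokens required by the Service Agreement
        self.quota_per_period = quota_per_period  # service actions provisioned per period

class DappServices:
    def __init__(self):
        self.stakes = {}   # (contract, package name) -> staked DAPP
        self.quotas = {}   # (contract, package name) -> remaining actions

    def stake(self, contract, package, amount):
        key = (contract, package.name)
        self.stakes[key] = self.stakes.get(key, 0) + amount
        if self.stakes[key] >= package.min_stake:
            # Stake meets the Service Agreement: provision the quota.
            self.quotas[key] = package.quota_per_period

    def use_service(self, contract, package):
        key = (contract, package.name)
        if self.quotas.get(key, 0) <= 0:
            raise RuntimeError("quota exhausted or package not provisioned")
        self.quotas[key] -= 1   # each on-chain service request decreases the quota

pkg = ServicePackage("vram-basic", min_stake=100, quota_per_period=3)
svc = DappServices()
svc.stake("superdapp", pkg, 100)
svc.use_service("superdapp", pkg)
```

The key property mirrored here is that provisioning is gated on the stake and every recorded service request draws down the remaining quota.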
DSPs can provide:
- On-chain services, as described in the following sections.
- Off-chain services, accessible from a client-side application, such as the history-as-a-service use case described in this article.
In the case of off-chain services, DSPs use the provisioning layer to report usage and manage quota.
DAPP Service Layer
For DSP services that require interaction with a dApp smart contract, referred to as a User Contract in our whitepaper, the DAPP Service layer contains the protocol with which a User Contract can request a specific service from the DSP. A service request is a “function-like” call to the DSP in that it has specific parameters and can trigger a response back to the contract with specific results.
There are two kinds of service requests: synchronous and asynchronous.
Synchronous Requests (blocking): Synchronous requests block the User Contract from executing a transaction and propagating it on a peer-to-peer network until the request returns a response to the contract.
In the vRAM System, the DSP node receives a transaction and runs it locally at first. This causes an exception to be thrown since the necessary data is unavailable in the RAM table referenced by the transaction.
The exception is a way to signal the DSP that the transaction is not ready yet, thus requesting its service (e.g. returning a local IPFS cluster file, processing a web oracle request, or generating a random number). Once the DSP has loaded the necessary data into RAM, the transaction can be relayed to the block producing nodes.
If the user sends a transaction that requires a service directly to a non-DSP API node, it will simply fail, since only a DSP node is capable of parsing the exception as a service request.
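The synchronous flow above can be sketched as a small simulation: the DSP runs the transaction locally, treats the missing-data exception as a service request, loads the data, and only then relays the transaction. All names here are illustrative, not actual DSP node code:

```python
# Sketch of the synchronous (blocking) request flow.
# RAM_CACHE / IPFS_STORE / dsp_handle are illustrative stand-ins.

class MissingDataError(Exception):
    """Thrown when the referenced RAM table entry is unavailable."""
    def __init__(self, uri):
        self.uri = uri

RAM_CACHE = {}                                             # in-use data only
IPFS_STORE = {"uri-alice": {"score": 120, "checkpoint": 3}}  # cold storage

def run_transaction(uri):
    if uri not in RAM_CACHE:
        raise MissingDataError(uri)   # the exception signals the DSP
    return RAM_CACHE[uri]

def dsp_handle(uri):
    try:
        return run_transaction(uri)   # first local run
    except MissingDataError as e:
        # Parse the exception as a service request and load the data.
        RAM_CACHE[e.uri] = IPFS_STORE[e.uri]
        return run_transaction(uri)   # now the transaction can be relayed

result = dsp_handle("uri-alice")
```

A non-DSP node in this model would simply surface `MissingDataError` to the user, which is why such transactions must flow through a DSP.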
Asynchronous Requests: The contract dispatches an event representing the service request and continues executing a transaction without failing.
The way asynchronous requests operate in the DAPP Service Layer of the vRAM System is as follows: DSPs detect that the contract has requested service by listening to the stream of events on-chain and executing those requests (e.g. IPFS Service Clean & Commit, log events, schedule a transaction request).
Since asynchronous requests are not dependent on receiving a request response, they do not need to be sent directly to the DSP API node. Rather, this request can be sent from any EOSIO node and will be parsed by the DSP node which is listening to events on the blockchain.
IPFS Service Layer
A local IPFS cluster is a distributed data storage solution that uses content-based addressing to access files. Unlike the traditional client-server model, where each file lives on a server with a specific IP address and is retrieved by the client requesting that address, local IPFS cluster files are hashed to give a Uniform Resource Identifier (URI) which serves as a pointer to the given file. The DAPP Network supports three different service requests that harness the power of the local IPFS clusters to store files on a decentralized peer-to-peer network and retrieve them securely and efficiently.
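Content addressing can be illustrated in a few lines. Real IPFS derives multihash-encoded CIDs; here a plain SHA-256 hex digest stands in for the URI, which is enough to show why integrity checks come for free:

```python
import hashlib

# Content addressing in miniature: a file's address is derived from
# its content. A plain SHA-256 hex digest stands in for the real
# IPFS multihash CID.

def make_uri(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

store = {}  # illustrative stand-in for the local IPFS cluster

def put(data: bytes) -> str:
    uri = make_uri(data)
    store[uri] = data
    return uri

def get(uri: str) -> bytes:
    data = store[uri]
    # Since the URI is the hash, any tampering is immediately detectable.
    assert make_uri(data) == uri
    return data

uri = put(b"hello vRAM")
```

Because the address is the hash, any node serving a file under a given URI can be checked by the requester without trusting that node.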
In its IPFS Service Layer, the vRAM System uses three kinds of service requests: Warmup Requests, Cleanup Requests, and Commit Requests.
Warmup Request: A User Contract sends a service request to retrieve a file from a local IPFS cluster using its URI. Parsing the service request, the DSP returns a payload containing the file to the contract. Since the URI is also the file’s hash, the contract can easily verify the integrity of the file by hashing the data and comparing it to its identifier. Warmup requests are synchronous and block the execution of the contract until a response has been returned to the contract.
Cleanup Request: A cleanup request sends a request to the DSP to evict a file from the cache. This is an asynchronous request.
Commit Request: The commit request instructs a DSP to write new data to their local IPFS cluster node. A developer can utilize the setData function from within their smart contract to first hash the new data in order to obtain a URI, before dispatching a commit request which is caught by the DSP node. Similarly, the getData function can be used to fetch the data for the smart contract, or to request a Warmup if the data is missing.
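The setData/getData behavior described above can be modeled as follows. The real functions live in the vRAM C++ library inside the contract; this Python sketch only mirrors the described semantics, with illustrative names throughout:

```python
import hashlib

# Illustrative model of setData/getData semantics (the real API is
# the vRAM C++ library): setData hashes the new data to obtain its
# URI and dispatches a commit request; getData returns cached data
# or signals that a Warmup Request is needed.

ram_cache = {}        # contract-side cache table in RAM
pending_commits = []  # commit requests for the DSP to catch

def set_data(data: bytes) -> str:
    uri = hashlib.sha256(data).hexdigest()  # URI == hash of the data
    ram_cache[uri] = data
    pending_commits.append((uri, data))     # caught later by the DSP
    return uri

def get_data(uri: str) -> bytes:
    if uri not in ram_cache:
        # In the real system this triggers a Warmup Request to the DSP.
        raise LookupError(f"warmup needed for {uri}")
    return ram_cache[uri]

uri = set_data(b"score=120")
```

Note that the contract knows the URI the moment it hashes the data, before the DSP has actually pinned anything.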
The vRAM layer provides a familiar interface for developers from within the smart contract — multi-index tables. vRAM makes it significantly more efficient to retrieve information from the local IPFS cluster and manipulate it in a way that is familiar to dApp developers. DSPs are not exposed to the vRAM layer — it exists solely within the User Contracts (using the vRAM library) — allowing for system upgrades and optimizations without altering DSP software. vRAM uses a Merkle tree to represent the entire database. Each node in the Merkle tree is represented as a file on local IPFS cluster, requested on demand only when a proof is needed. In order to locate a specific file, one needs to traverse the tree until they reach the leaf node representing the data required. Only the current root node representing the entire database needs to persist in RAM.
The Merkle tree based structure plays a dual role in the vRAM layer, serving as both an index enabling faster database querying as well as a proof of integrity against which the validity of the data can be verified.
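The dual role of the Merkle tree can be demonstrated with a minimal implementation: the contract keeps only the root in RAM, and a single entry plus its proof path suffices to verify integrity. This is a simplified sketch, not the vRAM library's actual tree layout:

```python
import hashlib

# Minimal Merkle tree over a list of entries. Only the root needs to
# persist; any single leaf can be verified against it with a short
# proof path. Simplified sketch, not the vRAM library's tree layout.

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build_levels(leaves):
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:
            cur = cur + [cur[-1]]  # duplicate last node on odd levels
        levels.append([h(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def prove(levels, index):
    """Collect the sibling hashes from leaf to root for one entry."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append((level[index ^ 1], index % 2))  # (sibling, node_is_right)
        index //= 2
    return proof

def verify(root, leaf, proof):
    node = h(leaf)
    for sibling, node_is_right in proof:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root

levels = build_levels([b"a", b"b", b"c", b"d"])
root = levels[-1][0]    # this is all that must stay in RAM
proof = prove(levels, 2)
```

The proof for one leaf is logarithmic in the dataset size, which is why a contract can verify a single entry from a terabyte-scale database against a 32-byte root.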
vRAM under the hood
To illustrate how vRAM can empower developers to create a new generation of dApps, we will walk you through a Super Mario-style runner game. Let’s call it Super DAPP (🍄). The Super DAPP smart contract has two actions: “Start game” which loads a player’s progress and current score before a new game session begins, and “Modify Score” which updates a player’s score once the game session has concluded.
Our example’s sequence of transactions proceeds in four phases:
- Warm-Up Request
- Load and Transact
- Commit
- Clear Cache
Warm-Up Request
(1) A dApp smart contract that uses vRAM, referred to in our whitepaper as a ‘User Contract’, receives a transaction from a client through a DSP node’s EOSIO API.
(2) Since the required data to carry out the transaction is not found on RAM, but rather exists on vRAM, the dApp smart contract proceeds to run the transaction which throws an exception. This failure is a means of signaling to the DSP that its services are needed.
(3) Parsing the service request, the DSP identifies the necessary data that is missing from RAM but exists in vRAM.
🍄 In our illustration, Alice, ready to start a new round of ‘Super DAPP’, sends a ‘Start Game’ transaction to the Super DAPP contract. However, when the transaction is run locally by the DSP node, her last checkpoint and current score are missing from RAM. Since these data points are required in order to load a new game session, the transaction throws an assertion failure. The DSP that ran the transaction picks up on the signal and parses the service request.
(4) The DSP verifies that the dApp smart contract has a sufficient amount of DAPP staked for the particular service package, as well as a sufficient amount of available quota.
(5) The DSP node relays the local IPFS cluster files representing the missing data points. Since only the Merkle root, representing the current state of the entire dataset, lives permanently in RAM, the integrity of the data is verified by retracing the Merkle tree until the leaf node representing the data is reached.
(6) Using the cryptographic proof, the dApp smart contract verifies that the requested data has not been tampered with. This ends the “Warm-Up Request” phase.
🍄 Going back to our illustration, once the DSP has parsed the service request, it proceeds to fetch Alice’s data from local IPFS cluster. It relays her last checkpoint and current score to the Super DAPP contract along with a cryptographic proof which allows the contract to verify the integrity of the data.
Load and Transact
(1) The dApp smart contract loads the necessary data into a temporary cache table residing in the RAM.
(2) The transaction can now process successfully before being propagated to Block Producing nodes on the blockchain, as all the necessary data is currently found on RAM.
(3) If the transaction failed for any other reason, a cleanup process is performed to clear the unused cache.
🍄 In our illustration, having verified the data, the DSP loads Alice’s scores and checkpoint into a temporary cache table on RAM by sending a transaction to a BP’s P2P endpoint. Now that all the necessary data is available in the RAM, the transaction can be sent to the Block Producer node.
Commit
(1) Whenever a smart contract modifies data in vRAM-based multi-index tables, it dispatches a commit event with the modified data and the Merkle tree nodes affected by the change. The data points and the Merkle tree nodes are represented as local IPFS cluster files.
Still following? Great, because the really cool part is coming up! 🍩
(2) Since the local IPFS cluster URI and the hash of the data are the same (the Merkle tree dual role, remember?), the contract knows the expected URI before the data is actually committed to the local IPFS cluster by the DSP. By the same logic, two different DSPs caching the same data independently or replaying the history will pin the data to the local IPFS cluster under the same local IPFS cluster URI.
(3) The contract saves the new data and the new Merkle tree nodes in the RAM cache table.
(4) The contract saves the new Merkle root permanently in RAM.
(5) Like any event on the blockchain, the commit event with the data becomes part of the chain history. This ensures that the data can be recovered by any DSP by replaying the history.
(6) The DSP catches the event by listening to the stream of events coming from the chain using a demux service. When the event is detected, the DSP caches and indexes the files in a local IPFS cluster for fast retrieval.
(7) The DSP sends a commit response to the contract.
At the end of this process, the Merkle root is modified on EOS RAM and the new data point is cached on the DSP’s distributed file storage system.
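The determinism noted in step (2) above can be shown directly: because the URI is the hash of the committed data, two DSPs that pin the same commit independently, or reconstruct it by replaying chain history, derive identical URIs without coordinating. The event payload below is illustrative:

```python
import hashlib

# Two DSPs pinning the same commit event independently derive the
# same content address. The JSON payload is illustrative only.

def pin(data: bytes) -> str:
    # A DSP pins the file under the hash of its content.
    return hashlib.sha256(data).hexdigest()

commit_event = b'{"player":"alice","score":120,"checkpoint":4}'

uri_dsp_a = pin(commit_event)  # DSP A, caching the live event stream
uri_dsp_b = pin(commit_event)  # DSP B, replaying chain history later
```

This is what makes history replay a viable recovery path: any DSP that replays the commit events ends up with the same files under the same URIs.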
🍄 Continuing with our analogy, Alice finishes a level and saves her progress. She has now advanced in both points scored and in terms of her checkpoint progress. The Super DAPP contract sends an event with the new data which modifies the data in EOS RAM. At the same time, the DSPs listen for the event and modify the data on a local IPFS cluster to reflect Alice’s latest score and checkpoint as per the data on RAM.
Clear Cache
(1) After the transaction has ended, the DSP dispatches a cleanup event to the dApp smart contract to evict the data from RAM.
(2) Upon receiving the cleanup action, the dApp smart contract deletes the cached data from RAM.
(3) The dApp smart contract leaves the cryptographic commitment (the Merkle root) in RAM. This is needed to verify the integrity of the next Warm-Up Request.
🍄 After the game ends, the Super DAPP contract deletes the data off RAM, leaving behind the cryptographic proof required to validate the next Warm-Up Request.
End-to-End Decentralized Storage
Whenever data is modified, the Merkle root of the dataset which is stored in RAM is updated. When that data is required from within a contract, a Merkle proof is relayed along with the required data points to the dApp smart contract by the DSP node.
A single Merkle root representing the entire database allows a dApp smart contract to verify the validity of any specific portion of the dataset relevant to current operations in a process known as the ‘Warm-Up Request’. This way, the smart contract can ‘warm up’ a single entry from a large database with terabytes of data without needing to download the entire dataset or introduce any additional trust requirements. In addition, whenever a smart contract commits or modifies data using vRAM, the data becomes part of chain history and can be recreated by replaying said history should the data be unavailable due to unforeseen circumstances.
DAPP to the Future
vRAM is just the first implementation of the DAPP Network harnessing the power of DSPs, developers and the DAPP Token to unlock dApp scalability.
As adoption of the DAPP Network continues to grow, we expect the DAPP community of developers to expand the functionality of DAPP services by designing novel use-cases for the vRAM System.
LiquidApps invites dApp developers to join the dedicated Devs Telegram Channel, provide feedback and take an active role in the ongoing discussions. For an additional deep-dive into the technical aspects of the vRAM System, please check out our GitHub Repository.