Leveraging the graph protocol
- How our frontend interacts with the blockchain
- Need for a data source which can provide us with analytical data
- How the graph protocol solves the problem
Interacting with the blockchain
Like most other DApps, Timeswap interacts with the blockchain through a node provider. All user-driven transactions are sent via the wallet the user has connected. This means that if a user is using MetaMask as their wallet, the transaction is signed with the private key kept locally, and the signed message is sent to the node. Well, that solves the writing problem (i.e. sending a transaction).
What about reading the current state of the blockchain? What about extracting data from emitted events and logs?
The most obvious answer would be to use the same node provider, such as Infura or Alchemy, to fetch details from the chain. With a node provider and a web3 library like ethers, one can easily read contract state and filter and fetch event details. And that is precisely what we use for getting transaction-specific details.
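As a sketch, reading contract state and fetching past events with ethers might look like the following (the provider URL, contract address, and ABI fragment here are placeholders for illustration, not Timeswap's actual contracts):

```typescript
import { ethers } from "ethers";

// Placeholder node provider endpoint
const provider = new ethers.providers.JsonRpcProvider(
  "https://mainnet.infura.io/v3/<PROJECT_ID>"
);

// Minimal, hypothetical ABI fragment for the state and events we care about
const pairAbi = [
  "function totalLiquidity() view returns (uint256)",
  "event Mint(address indexed sender, uint256 assetIn, uint256 liquidityOut)",
];

// Placeholder pair contract address
const pair = new ethers.Contract(
  "0x0000000000000000000000000000000000000000",
  pairAbi,
  provider
);

async function readPair(): Promise<void> {
  // Read the current contract state
  const liquidity = await pair.totalLiquidity();

  // Filter and fetch past Mint events from genesis to the latest block
  const mintEvents = await pair.queryFilter(pair.filters.Mint(), 0, "latest");

  console.log(liquidity.toString(), mintEvents.length);
}
```

This works well for one-off, transaction-specific lookups, but as described below it does not scale to aggregate queries.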
Well, problem solved right?
Even though we can get transaction-specific information easily via a node provider, several calls are usually required to fetch user balances, pool information, etc. This makes fetching user- and pool-related information on the frontend time-consuming, complex, and expensive. This led us to write a small service to cache pool-related information and reduce the calls made to the node provider, which allowed our frontend to make fewer calls and load data faster. However, one problem remained: how do we fetch aggregate information for analytical tasks?
Analytical data means getting all sorts of details regarding a pool, user, transaction, and more. It helps us gain insights into how a pool is performing, the number of users interacting with the pools, and the total value locked, among other metrics. Getting this data would mean making a call to the node provider for every transaction, filtering the events, and mapping them together. This is extremely inefficient and laborious, as there are several thousand transactions/events per pool.
Here, Dune Analytics proved to be an acceptable solution. It provides a service to build analytical dashboards by indexing blockchain event logs and function calls into Postgres; all we had to write was SQL queries. Though it was easy to build complex analytical dashboards via SQL queries, two problems remained:
- Accessing the data in an API-like format, so that it may be consumed on the frontend
- It worked only on mainnet, which meant we could not use Dune Analytics on the various testnets
This leads us to our problem statement for an ideal data source that can fetch aggregate data of Timeswap pools:
- A way to obtain data from the blockchain regarding Timeswap contracts, which satisfies the following criteria:
a) It should be accessible in an API-like format
b) It should be accessible across testnets
c) It should be fairly straightforward to write logic that maps the raw information together, such that meaningful aggregated data may be obtained
d) It should be able to discover our pools without hardcoding different pair contract addresses
Enter the graph protocol
What is the graph protocol? "The Graph is an indexing protocol for querying networks like Ethereum and IPFS. Anyone can build and publish open APIs, called subgraphs, making data easily accessible."
The blockchain is essentially a decentralized state machine. Though it is good at storing state in a decentralized fashion, a layer to query and index relevant information is absent. The graph protocol solves exactly this problem, and does so in a decentralized manner. We can use it to curate aggregate information regarding Timeswap pools.
- The graph protocol uses GraphQL to query information, hence we can query information from any client that supports GraphQL.
- It supports a wide variety of testnets, including Rinkeby and the Polygon Mumbai testnet
Thus we wrote our own subgraph for Timeswap, which indexes many entities including transactions, pools, pairs, users, etc. Though this solves our problem of running complex aggregate queries on different Timeswap pools, one problem remains: how would we automatically detect new pools being created and index their data?
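Once such a subgraph is deployed, aggregate data can be fetched with a single GraphQL query. A sketch of what such a query might look like (the entity and field names here are illustrative, not the exact Timeswap subgraph schema):

```graphql
# Fetch the first 5 pairs along with their pools, ordered by maturity
{
  pairs(first: 5) {
    id
    asset
    collateral
    pools(orderBy: maturity) {
      maturity
      totalTransactions
    }
  }
}
```

A single request like this replaces the thousands of per-transaction node-provider calls described earlier.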
A common pattern when writing contracts is the factory pattern: a factory contract is deployed solely to create instances of one or more particular contracts. Our architecture, too, has a factory contract responsible for creating new pair contracts. Each pair contract hosts pools of different maturities with the same asset-collateral pair.
Thankfully, the graph protocol has a solution just for this: Data Source Templates.
We first define a data source for the factory contract, which in our case is TimeswapFactory.sol. Here we hardcode the address of the factory contract in the address attribute.
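A minimal sketch of such a data source entry in subgraph.yaml (the event signature, handler name, file paths, and address below are placeholders, not the actual Timeswap manifest):

```yaml
dataSources:
  - kind: ethereum/contract
    name: TimeswapFactory
    network: mainnet
    source:
      # The one address we hardcode: the factory contract
      address: "0x0000000000000000000000000000000000000000"
      abi: TimeswapFactory
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.6
      language: wasm/assemblyscript
      entities:
        - Pair
      abis:
        - name: TimeswapFactory
          file: ./abis/TimeswapFactory.json
      eventHandlers:
        # Assumed event signature for pair creation
        - event: CreatePair(indexed address,indexed address,address)
          handler: handleCreatePair
      file: ./src/factory.ts
```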
Next, we add the data source templates to the manifest. Usually, one template is defined per contract the factory creates; that is, if the factory creates 4 kinds of contracts, there are usually 4 templates defined in the manifest. The only difference between a data source and a template is that a data source has a predefined address, while a template is instantiated each time a particular contract is deployed by the factory.
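A template entry looks almost identical to a data source, minus the address. A sketch, assuming a single TimeswapPair template (entity names, event signature, and file paths are illustrative):

```yaml
templates:
  - kind: ethereum/contract
    name: TimeswapPair
    network: mainnet
    source:
      abi: TimeswapPair   # note: no address attribute here
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.6
      language: wasm/assemblyscript
      entities:
        - Pool
        - Transaction
      abis:
        - name: TimeswapPair
          file: ./abis/TimeswapPair.json
      eventHandlers:
        # Assumed event signature, for illustration only
        - event: Mint(indexed address,uint256,uint256)
          handler: handleMint
      file: ./src/pair.ts
```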
Though the template has been added, TimeswapPair still has to be instantiated each time the factory contract deploys a pair contract. This is done by the event handler. In our case, that would translate to:
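A sketch of such a handler in AssemblyScript (the event name and parameter names are assumed for illustration; the `generated` imports come from the graph-cli codegen step):

```typescript
// src/factory.ts -- AssemblyScript mapping for the factory data source
import { CreatePair } from "../generated/TimeswapFactory/TimeswapFactory";
import { TimeswapPair } from "../generated/templates";

export function handleCreatePair(event: CreatePair): void {
  // Instantiate a new data source from the TimeswapPair template,
  // so the newly deployed pair contract starts being indexed
  TimeswapPair.create(event.params.pair);
}
```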
This ensures a fresh data source is created for the pair each time one is deployed. It is also to be noted that a newly added data source can only index events and other information from the block at which it was created onwards. With this, pool information for all Timeswap pools can be fetched just by hardcoding the factory address; the rest is handled automatically.
With this, we can now access Timeswap pool-related data, both individual and aggregate, with ease using the graph protocol.