In the first post in our series, “Airnode: The first-party oracle node”, Burak gave a high-level overview of Airnode. As its release approaches, I want to give a more technically focused overview of how the software works.
We’ve been hard at work over the past year planning and building Airnode, the software that will act as the gateway for API providers to directly connect their data to the blockchain. Airnode will also serve as the backbone for the API3 solutions, allowing the API3 DAO to construct transparent, decentralised APIs by aggregating data points from first-party oracles operated by API providers.
Everything is a function
Interestingly, Airnode itself is not what might typically be considered a “node”, i.e., a long-running, server-type application. Rather, it is a collection of simple, single-purpose functions that link together to provide Airnode’s complete functionality. A main “coordinator” function serves as the starting point and initiates other “worker” functions. The coordinator function typically runs every minute, but any of the functions can be invoked whenever (and however) they are needed.
In the serverless context, these functions provide strong isolation against a wide variety of potential vulnerabilities. For example, if a response from an API is very large (maliciously or not), it might typically exhaust the available memory and cause the entire system to stall or crash. With a serverless deployment of Airnode, a large API response is contained within a single “call API” function that only extracts and returns a single, simple value. A crash is contained to that particular worker function, leaving the coordinator function untouched.
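The isolation idea can be sketched as follows. This is a minimal illustration, not Airnode’s actual internals: the function names, the result type, and the dot-path extraction scheme are all assumptions made for the example.

```typescript
// Hypothetical sketch of a serverless "call API" worker. The worker
// extracts a single value from a (possibly huge) API response and
// catches its own failures, so an error never reaches the coordinator.

type WorkerResult =
  | { ok: true; value: string }
  | { ok: false; error: string };

// Resolve a dot-separated path (e.g. "data.eth.usd") inside a response.
function extractValue(response: unknown, path: string): string {
  let current: any = response;
  for (const key of path.split(".")) {
    if (current == null || typeof current !== "object") {
      throw new Error(`Unable to resolve '${path}' in response`);
    }
    current = current[key];
  }
  return String(current);
}

// Only the single extracted value is returned to the coordinator; the
// large raw response never leaves this worker's memory space.
function callApiWorker(response: unknown, path: string): WorkerResult {
  try {
    return { ok: true, value: extractValue(response, path) };
  } catch (e) {
    return { ok: false, error: (e as Error).message };
  }
}
```

A malformed or oversized response thus costs at most one worker invocation, while the coordinator keeps processing the other requests.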
The request-response cycle
Initially, Airnode will have the functionality to listen for and respond to requests made on-chain. Typically, this will work through user-created contracts that implement Airnode client functionality. Creating these Airnode client contracts is outside the scope of this post, but you can refer to the API3 documentation if you’re interested in learning more.
In the above diagram, green blocks represent the single coordinator function and blue blocks represent one or more worker functions.
- The coordinator function starts and creates worker processes for each configured blockchain provider.
- Each worker process fetches and filters the requests. These requests are either API calls or withdrawals from requester wallets, but there may be more types in the future. The results are returned to the coordinator.
- The coordinator groups duplicate API calls together. For example, if you have configured several providers for Ethereum mainnet, you would not want to execute the same API call once per provider. Each unique client request results in a single API call.
- The coordinator initiates another worker process for each aggregated API call. These workers execute the API call and return the single, desired piece of data to the coordinator.
- The results are then merged back to the individual requests for each configured provider.
- The coordinator initiates workers for each configured provider to fulfill the requests. For a given request, the workers across providers submit the fulfillment transaction using the same nonce. Since a given wallet can use a nonce only once, the duplicate transactions that get processed later are simply rejected, at no cost to the requesters.
- The coordinator concludes and logs some final run statistics.
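The deduplication step above can be sketched like this. The types and the key scheme are assumptions made for illustration, not Airnode’s actual implementation:

```typescript
// Illustrative sketch of how a coordinator might deduplicate API calls
// collected from multiple providers.

interface ApiCallRequest {
  requestId: string;
  endpoint: string;
  parameters: Record<string, string>;
}

// Build a stable key so that identical calls seen via different
// providers collapse into a single aggregated API call.
function aggregationKey(req: ApiCallRequest): string {
  const params = Object.keys(req.parameters)
    .sort()
    .map((k) => `${k}=${req.parameters[k]}`)
    .join("&");
  return `${req.endpoint}?${params}`;
}

// Group duplicates: one API call per unique key, remembering every
// requestId so results can later be merged back to each request.
function aggregate(requests: ApiCallRequest[]): Map<string, string[]> {
  const groups = new Map<string, string[]>();
  for (const req of requests) {
    const key = aggregationKey(req);
    const ids = groups.get(key) ?? [];
    ids.push(req.requestId);
    groups.set(key, ids);
  }
  return groups;
}
```

After the aggregated calls are executed, the coordinator walks each group and attaches the single result to every request ID in it, which is the “merged back” step described above.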
It is important to note that the requesters themselves pay all related fees for making a request. This is made possible through the use of a Hierarchical Deterministic (HD) wallet, which allows the API provider to easily create more than four billion “sub-wallets”, all derived from the same master key of the API provider. Requesters then reserve and fund these wallets before making requests. When a request is made, the fulfillment fees are deducted from the requester’s wallet.
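The sub-wallet scheme can be sketched as a mapping from a requester index to a derivation path. The path format below is purely illustrative; real key derivation follows BIP-32 and requires a wallet library (such as ethers) plus the provider’s extended key, neither of which is shown here:

```typescript
// Hypothetical sketch: assign each requester a unique HD wallet
// derivation path under the API provider's master key. The "m/0/i"
// format is an assumption for illustration only.

// Non-hardened BIP-32 indices range over 0 .. 2^31 - 1 per level, so a
// single level already yields over two billion distinct sub-wallets;
// additional levels multiply that further.
function requesterPath(requesterIndex: number): string {
  if (!Number.isInteger(requesterIndex) || requesterIndex < 0 || requesterIndex >= 2 ** 31) {
    throw new Error("requester index out of range");
  }
  return `m/0/${requesterIndex}`;
}
```

Because every sub-wallet is derived deterministically from the same master key, the API provider never needs to store per-requester private keys; it re-derives the key from the path whenever a request must be fulfilled.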
API providers can configure their Airnode through the use of two configuration files: config.json and security.json. These files instruct Airnode on where and how it should be deployed (as well as re-deployed), which chains and providers to listen on, the endpoints the API provider wants to serve, as well as references to secret values such as the API keys.
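To make this concrete, a config.json might look roughly like the sketch below. The field names and values here are illustrative assumptions; the authoritative schema is in the API3 documentation, and secrets such as API keys would live in security.json rather than in this file:

```json
{
  "nodeSettings": {
    "cloudProvider": "aws",
    "region": "us-east-1"
  },
  "chains": [
    {
      "type": "evm",
      "id": 1,
      "providers": ["infura-mainnet", "self-hosted-geth"]
    }
  ],
  "triggers": {
    "requests": [
      {
        "endpointName": "ethUsdPrice",
        "oisTitle": "myApi"
      }
    ]
  }
}
```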
As a unique feature, a single Airnode can be configured to serve over multiple chains and providers simultaneously. This means that a single Airnode can listen for and respond to requests on Ethereum main network, Ethereum test networks (such as Ropsten and Rinkeby), Matic, xDai and more. Support for non-EVM blockchains is also planned.
“Providers” in the above context refer to blockchain nodes or blockchain infrastructure service providers. As a person or company interacting with a blockchain, running your own blockchain node provides you with certain permissionlessness guarantees (there is more to be said about the security assumptions one makes about one’s providers, but that deserves a separate post). However, blockchain infrastructure service providers often have much better availability than blockchain nodes operated by non-specialists. Therefore, it is critical that the oracle node can support multiple providers, from any source.
Airnode has first-class support for multiple providers of the same blockchain network as an additional level of redundancy. In a recent outage, Infura users experienced several hours of downtime. Users who depend only on an in-house blockchain node suffer such outages even more frequently, as non-specialists are typically no better at running blockchain nodes than professional infrastructure service providers. An Airnode operator using multiple different provider sources would have been unaffected by this downtime.
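One simple form of this redundancy can be sketched as a fallback chain: try each configured provider until one responds. This is an illustration of the general idea, not Airnode’s actual code (as described above, Airnode in fact runs a worker per provider in parallel):

```typescript
// Illustrative sketch: query providers in order, falling through to the
// next one on failure. A single provider outage does not take the
// operator offline as long as any provider in the list is healthy.

type Provider<T> = () => Promise<T>;

async function withFallback<T>(providers: Provider<T>[]): Promise<T> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await provider();
    } catch (e) {
      lastError = e; // this provider is down; try the next one
    }
  }
  throw new Error(`All providers failed: ${String(lastError)}`);
}
```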
Airnode has no hard dependencies by default, meaning that it has no database, no cache, no web server, etc. Dependencies introduce complexity and points of failure, while introducing potential attack vectors and additional costs.
Maintaining one or more databases in parallel to the blockchain is not a simple exercise for an oracle node. Events such as a chain reorganisation can cause a host of discrepancies between the database and the blockchain, some of which can stop the node from processing. Instead, Airnode works with a single source of truth: the blockchain.
When Airnode is hosted as a serverless function on platforms such as Amazon Web Services (AWS) or Google Cloud Platform (GCP), the API provider will often pay no hosting fees while traffic is low. This is possible because many of these cloud providers offer a “free tier” that often covers much more than what Airnode requires. This significantly lowers the barrier to entry for node operation.
Airnode provides a lightweight, transparent and simple way to serve API data to the blockchain, while also serving as the backbone for the API3 network. The request–response protocol will be the first protocol launched on Airnode that will allow API providers to start serving their data. Much of this protocol has already been built, and it is rapidly approaching a limited testing and alpha launch phase.
Once the request–response protocol is completely stable, the next protocol to be implemented will be a pub–sub protocol. This will allow requesters to subscribe to certain conditions (e.g., did the price deviate by 1%?) and then have an on-chain event triggered once that happens. This will also serve as the backbone for some of the dAPIs, but that’s for a future post.
If you’re looking to get involved in any way, or are an API provider looking to start serving your data on the blockchain, feel free to reach out through any of the API3 social channels or send an email to email@example.com