Will the real decentralized oracles please stand up?

Kipkap
Published in KUUCRYPTO · Apr 30, 2020

As the crypto economy and DeFi have evolved, smart contracts and oracles have proven important in tandem, and many oracle platforms are now attempting to bridge real-world information onto the blockchain. This article defines smart contracts and oracles, explains their relationship to one another, surveys the oracle projects competing in the space, and identifies the best practices of an ideal blockchain oracle protocol. The focus will be on what each project is currently doing live, noting pertinent roadmap information, with centralization, consensus, and technology as key themes throughout.

What is a smart contract?

A smart contract is an automated program that, once deployed to the blockchain, cannot be tampered with or shut down, even by its creator. It receives an input, executes some logic, and updates the blockchain state accordingly. Because it is run entirely by code, a smart contract enables the exchange of value without a middleman or intermediary party. It works much like a vending machine, with the added feature of being tamper-proof: money goes into a vending machine to select a snack, while a smart contract requires a token on a blockchain ledger. Once the parameters are set, like the choice of a candy bar, the smart contract's output is triggered. This comparison is an oversimplification of how a smart contract works, but thinking about it this way gives a clearer picture of its different use cases.

A straightforward use case of a smart contract is betting against someone on whether the price of Bitcoin will go beyond $100,000 before the end of the year. These parameters are deployed onto the blockchain, and once the year ends, money is deposited into or withdrawn from both parties' accounts based on whether Bitcoin's price exceeded $100,000.
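The bet described above can be sketched as a toy model. This is purely illustrative Python, not real on-chain code; the class and field names are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class BetContract:
    """Toy model of the Bitcoin-price bet described above (illustrative only)."""
    threshold: float   # e.g. 100_000 USD
    stake_a: float     # party A bets the price exceeds the threshold
    stake_b: float     # party B bets it does not

    def settle(self, final_price: float) -> str:
        """Once the deadline passes, pay the whole pot to the winning party."""
        pot = self.stake_a + self.stake_b
        winner = "A" if final_price > self.threshold else "B"
        return f"party {winner} receives {pot}"

bet = BetContract(threshold=100_000, stake_a=1.0, stake_b=1.0)
print(bet.settle(final_price=104_500))  # party A receives 2.0
```

The open question, of course, is where `final_price` comes from; that is exactly the gap an oracle fills.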

The above example demonstrates an excellent use case for a smart contract: it is fully decentralized, automated, and tamper-proof, so no one can cheat, and there are no middlemen or third parties involved in executing it. But wait a second: if a blockchain system is supposed to be completely isolated from the outside world, how did the smart contract know the price of Bitcoin? That is where an oracle comes in. If you think of a smart contract as a regular person, the blockchain ledger is the universe where that person resides, and an oracle is an entity that can break through the inter-dimensional barrier to learn what's going on in other universes (like the traditional internet). That is the simplest way to think about what an oracle is with regard to the blockchain world.

What is an Oracle?

The mechanics of smart contracts have been detailed; it's time to look at the role of an oracle and how it works. While smart contracts are programs that run entirely autonomously, the first thing they require is an input, and it is this input that triggers a smart contract's execution in the first place. In the betting example above, this input is the price of Bitcoin. But how exactly is the price of Bitcoin retrieved, and who is in charge of sending this price data to the smart contract? That is the job of an oracle: it extracts data from websites that track Bitcoin prices and sends it to the smart contract. No matter how secure the smart contract is, the oracle remains its weakest link. A smart contract cannot be completely decentralized until it can receive data through some decentralized mechanism that is resistant to manipulation.

Smart contracts on the blockchain can't deterministically verify the external inputs fed to them by an oracle. For that reason, the oracle plays an essential role in determining how secure a smart contract truly is. Not all smart contracts require input from the external world, but without real-world data their use is minimal, and more sophisticated use cases in sectors like decentralized finance, insurance, and lending cannot occur. In that sense, a smart contract is only as trustless as its weakest link, which in many real-world scenarios is an oracle of some kind. Each application may require different kinds of data, with the most straightforward being numeric, such as a Bitcoin price feed. So while many platforms are competing to become the de facto oracle solution, this will probably never happen, as each application has different use cases with varying degrees of cost-effectiveness.

Potential Problems with Oracles

Oracles connect the blockchain world to the real world; without them, blockchain platforms are a walled garden with no means of interoperability with outside data, and hence not very useful. Without oracles, smart contracts are limited to transferring tokens from one account to another. But using an oracle to connect the two worlds introduces a severe risk to any smart contract, no matter how secure it may be. This dilemma is the famous oracle problem: how can we guarantee that the data fed by oracles is legitimate? Centralized oracles require their smart contracts to trust third parties to retrieve data. Some oracles rely on reputation and penalty mechanisms to prevent manipulation, while others pull from multiple data sources, aggregate them, and feed the median of the data to smart contracts.
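The median-of-many-sources defense mentioned above is easy to see in a small sketch (the function name and sample values are illustrative):

```python
from statistics import median

def aggregate_price(reports: list[float]) -> float:
    """Take the median of independent oracle reports so that a minority of
    manipulated sources cannot move the final value."""
    if not reports:
        raise ValueError("no oracle reports received")
    return median(reports)

# Four honest sources plus one manipulated outlier: the outlier has no effect.
print(aggregate_price([9_120.0, 9_118.5, 9_121.2, 9_119.0, 50_000.0]))  # 9120.0
```

As long as a majority of sources are honest, the median stays within the honest range; this is why it recurs in nearly every oracle design discussed below.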

The oracle problem is rooted in the simple question of whether information is true or false: if the information provided is correct, how can that be proven or disproven? The validity of data is not an easy problem to solve, and research is still ongoing. Still, numerous oracle projects have decided to tackle it in order to provide the most reliable data to smart contracts. Once an oracle puts data onto the blockchain, who can verify that data to be the absolute truth, and is there any incentive to verify the correctness of the task? This incentivization quandary is what's referred to as the verifier's dilemma. The oracle problem and the verifier's dilemma go hand in hand, and there are no fail-proof systems. However, best practices include the use of multiple data sources, multiple oracles, staking and penalty mechanisms, and trusted execution environments. All of them have advantages and disadvantages, because while it's quite easy to create an oracle system, it is rather challenging to create a completely trustless one.

Some of the major attacks that may happen on Oracle systems are the following:

  1. 51% attack: When a single entity owns a significant number of oracle nodes, it can control what data is fed and what is claimed as the absolute truth. The more distributed the nodes are, the better. Still, it's rather difficult to determine whether a single entity controls the majority of nodes without strict KYC guidance, and as soon as you introduce that, the system is no longer decentralized.
  2. Mirroring attack: An oracle node can retrieve the data once and share it with other nodes under its control. This sort of attack has the potential to spread false information, thereby degrading security. Mirroring can also occur when nodes simply copy other nodes' answers, also known as freeloading.
  3. Data Manipulation: Even if an oracle node gets the exact data from external sources, there is a chance that those external sources have manipulated data. Hence, the oracles end up retrieving a bad value without knowing. This problem is especially prevalent when the system uses specific exchanges rather than allowing the nodes themselves to retrieve the data from wherever they want.
  4. Liveness Issues: There may be a scenario where none of the nodes push the data on-chain. While this is unlikely to happen all the time, malicious actors may do so to halt the oracle feed on-chain intentionally.

What does it mean for an Oracle system to truly be decentralized?

An end-to-end decentralized oracle system is one where every component of the system is in some way, shape, or form decentralized. Most of the current oracle solution providers do not pass this test, no matter how much they claim to be decentralized. We can apply the following checks to the oracle systems currently on the market to figure out whether they are genuinely decentralized or merely claim to be:

  1. There should be no pre-mined or pre-minted tokens given to the system's developers. Third parties have a hard time trusting a project with this form of centralization, based on fears that the tokens could be dumped on the market at any time. Whether pre-mining is ever acceptable is debatable; the main issue is whether the project implements a proper distribution mechanism after pre-mining/minting coins.
  2. Anyone should be able to participate in the system. Introducing any sort of permissioning, or preference for some nodes over others, forfeits the permissionless nature of the system. Such a dynamic gives data providers an incentive to stay on the "good" side of the original authors to solidify their status as trusted sources, and it would become difficult for new nodes to join the network for fear that smart contract users will only use the preferred nodes list from the original authors.
  3. The oracle system has to have a proper mechanism to initiate and resolve disputes based on data discrepancy. Resolving these disputes must be a decentralized process without compromising security.
  4. All upgrades and changes to the system must be done in a decentralized manner, and no one party should be the authoritative source when it comes to upgradability.
  5. There is no one-size-fits-all consensus algorithm for a completely decentralized system. PoW (Proof of Work) has its drawbacks given the possibility of a 51% attack. PoS (Proof of Stake) gives those who own the largest amount of tokens outsized control over the consensus mechanism, leading to a 51% attack by other means. Perhaps a combination of different consensus algorithms, depending on the use case, may be the ideal way to combat this issue.

Oracles can be said to be decentralized if all of the above conditions are met. Keep in mind that there may be other features to determine the trustlessness of a system, but these give the best approximation to working towards a perfect oracle system.

Projects working on Oracle infrastructure

There are many projects currently building their own oracle platforms, each tackling the infamous oracle problem in a different way. All of them, however, are working towards a fully decentralized oracle platform, as that is what end-users will trust most. Some are partially decentralized, some start centralized and decentralize piece by piece, others focus on the consensus protocol, and still others put more effort into incentive and penalty mechanisms. Whatever the case, the goal is to reduce the reliance on intermediaries, working towards a trustless, end-to-end decentralized infrastructure.

This article provides a detailed overview of the top five crypto oracle projects currently on the market, protocols that have made a name for themselves over the last few months. Other oracle projects are not listed here because it's impossible to cover each one in detail. Each project is analyzed with respect to how decentralized it genuinely is and whether everything described in its whitepaper is live yet. It is one thing to describe how the platform works in a whitepaper, but analyzing the whitepaper alone is not the best way to judge a project: the current state of a project very rarely reflects what's outlined in its whitepaper.

In addition, the following questions will be addressed explicitly for each project:

  1. How does the project handle potential oracle problems such as the mirroring attack? This attack occurs when an oracle node gets the data and shares it with other nodes in its control, leading to a bad value being added on-chain.
  2. How does the project handle an issue when none of the nodes push the data on-chain? While an unlikely scenario, it’s still a possibility that opposing actors may want to halt and degrade the integrity of the system.
  3. How does the project implement a dispute mechanism if a bad value does get added on-chain?
  4. How are the data providers selected?
  5. How long has the project been live on the mainnet? If not, what is the current state?
  6. What is the current token distribution model for the project?
  7. Is everything outlined in the whitepaper already live on the mainnet?

MakerDAO’s Oracle

MakerDAO is a decentralized organization whose protocol, referred to as the Maker Protocol, employs a two-token system: MKR, the governance token, and DAI, the stablecoin. The platform unlocks the power of decentralized finance for anyone in the world, using Ethereum or any Ethereum-based asset as collateral to generate Dai. We will examine the oracle stack it uses to feed prices for the digital assets used as collateral.

Maker has an oracle smart contract module for each collateral type, into which oracle nodes feed price data. The oracle stack deals with Feeds, or data submitters, and also includes what are known as Global Settlers (or Emergency Oracles). All data submitters (oracle nodes) are external actors with special permissions in the Maker system and first need to be whitelisted: MKR voters choose the set of trusted Oracle Feeds. The price inputs are fed from the Oracle Feeds into the Oracle Security Module (OSM), which acts as a defense layer between the oracle nodes and the Maker Protocol. The OSM delays price feeds by one hour so that, in case of emergency, the Emergency Oracles (also selected by MKR voters) can freeze an oracle known to be compromised. The Medianizer smart contract takes the median of the different price feeds and, if there are no issues, uses that as the official price feed.
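The interplay of the Medianizer and the OSM delay can be sketched as a toy model. This is an illustration of the one-window delay idea only, with invented names; it is not Maker's actual contract logic:

```python
from statistics import median

class OracleSecurityModule:
    """Toy sketch of the OSM delay: the price visible to the protocol is the
    one posted in the *previous* window, so a compromised feed can be frozen
    before its bad price ever takes effect."""
    def __init__(self):
        self.current = None   # price the protocol reads now
        self.queued = None    # price waiting for the next window
        self.frozen = False

    def poke(self, medianized_price: float) -> None:
        """Called once per window with the Medianizer output."""
        if self.frozen:
            return
        self.current, self.queued = self.queued, medianized_price

    def read(self):
        return self.current

osm = OracleSecurityModule()
osm.poke(median([199.0, 200.0, 201.0]))  # window 1: 200.0 queued, not yet live
print(osm.read())                         # None, the delay is in effect
osm.poke(median([201.0, 202.0, 203.0]))  # window 2: previous value goes live
print(osm.read())                         # 200.0
```

The point of the design is the gap between `poke` and `read`: governance has a full window to freeze a compromised feed before its value is consumed.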

Addressing the most common questions

While MakerDAO's oracle architecture is immune to mirroring attacks because each node computes its own data, it is not Sybil-resistant. The organizations running Feeds are well known to the public, but the individuals running them are pseudonymous to protect themselves from the risk of extortion and blackmail, so the platform has no effective way to handle Sybil attacks in those rare scenarios.

Furthermore, because only a limited number of Feeds are trusted, it wouldn't be out of the realm of possibility for a large party to collaborate with the majority of the Feeds to launch an oracle attack. This issue is mitigated by the Oracle Security Module (OSM), which acts as a delay on oracle prices. If some Feeds are seen pushing no values, an emergency governance vote can be initiated to remove the misbehaving Feeds. However, there are cases where even the OSM can fail to work correctly: for instance, authorization attacks and misconfigurations could revoke access to core contracts, causing mayhem as prices fail to update. There is no automated system in place to handle these situations, so the only recourse is due diligence and reliance on the entire community to keep the system secure, which may be burdensome on users.

Last but not least, in terms of the token distribution model, the total supply of MKR was designed to be no more than 1,000,000; however, as of April 2020, it is 1,005,576. This discrepancy is due to the system's dual-token mechanism, which also includes DAI, a stablecoin pegged to $1: if the system's debt exceeds its surplus, the MKR supply increases through a Debt Auction to recapitalize the system. The circulating supply of MKR is also 1,005,576, which means all tokens are currently circulating on the market.

Summary

In version 2 of its oracle stack, MakerDAO does the majority of its computation off-chain, while on-chain the Medianizer is involved along with the Oracle Security Module and the Maker Protocol itself. Though the oracle stack is a small part of the Maker system, it plays an integral role. There are shortcomings, however, in how Maker has implemented it. For one, there are a minimal number of oracle Feeds, and adding more requires MKR voters to approve the proposal, so only a trusted list of oracles gets to feed data. Voters are also likely to choose only the most popular oracle feeds on the market, making it next to impossible for newcomers to join the stack. MakerDAO's oracle stack is specifically designed for the Maker system, limiting its capabilities beyond Ethereum. Furthermore, whoever holds the majority of tokens could hijack the entire network if proper security measures are not in place. This token-takeover issue is not unique to Maker; it affects many other blockchain projects that use their tokens only for staking or governance.

Witnet

Witnet is a decentralized oracle network (or DON for short) that connects smart contracts to the real world. What sets Witnet apart from other oracle infrastructure is that it allows anyone to act as a decentralized oracle node, also known as a witness, retrieving information from any web address and delivering it to smart contracts. Reputation, rather than computing power, decides each witness's weight in the network.

The data delivered through the Witnet protocol comes from randomly selected anonymous nodes; the only public information about them is their reputation within the system. Consequently, the more honest nodes are, the more likely they are to be chosen to act as oracle nodes. The Witnet protocol isn't constructed to detect fake data; instead, it tries to guarantee an exact match between what's published online and what the nodes deliver. Witnet has its own blockchain with its native token, WIT. The oracle mechanism relies heavily on its reputation system and employs a unique Proof of Eligibility, where each node computes its eligibility for tasks such as mining and witnessing data requests, which the network then verifies. The Witnet blockchain can use Bridge Nodes to connect to external public blockchains such as Ethereum.
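The reputation-weighted selection idea can be sketched roughly as follows. Witnet's real Proof of Eligibility is cryptographic; this toy function (with invented names) only models the weighting intuition, where honest history raises, but never guarantees, selection:

```python
import random

def select_committee(reputation: dict[str, int], size: int, seed: int) -> list[str]:
    """Pick a data-request committee at random, weighted by reputation points
    (illustrative sketch only, not Witnet's actual eligibility protocol)."""
    rng = random.Random(seed)
    nodes = list(reputation)
    weights = [reputation[n] for n in nodes]
    chosen: list[str] = []
    while len(chosen) < size and nodes:
        pick = rng.choices(nodes, weights=weights, k=1)[0]
        i = nodes.index(pick)          # remove the picked node so it can't
        nodes.pop(i); weights.pop(i)   # be selected twice
        chosen.append(pick)
    return chosen

reps = {"node_a": 50, "node_b": 30, "node_c": 15, "node_d": 5}
print(select_committee(reps, size=2, seed=7))
```

Because the draw is weighted rather than deterministic, even a low-reputation node is occasionally selected, which is what keeps the committee unpredictable to an attacker.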

Addressing the most common questions

In Witnet, the committee for resolving a data request is selected randomly based on the previous behavior of the nodes, tracked through reputation points, so an attacker could only succeed in launching a mirroring attack by controlling a majority of the committee. While this is unlikely, the potential is there regardless. On another note, if none of the nodes push data to the system, the data request gets resolved and tagged as "insufficient consensus"; it does not remain on Witnet's chain forever. That said, if a malicious value is inserted onto the chain, there is currently no dispute mechanism in place. This lack of a dispute mechanism could be dangerous where smart contracts deal with high-value data requests: if enough well-reputed nodes collaborate to send a bad value, whether intentionally or accidentally, the integrity of the system may be in jeopardy.

When it comes to the token distribution model, as of April 2020 the total supply of WIT is set to 2,500,000,000, and because the platform is not on mainnet yet, there is no circulating supply. Every 90 seconds, 500 WIT are supposed to be generated, and the rewards follow a halving schedule similar to BTC's, cutting the emission rate in half every few years. The only tokens not distributed through mining are those sold through DPAs (Debt Payable Assets) or SAFTs (Simple Agreements for Future Tokens), along with those for the core builders of the foundation: witness nodes will mine 70% of the supply through block rewards, while 30% was minted in the genesis block, of which 20% went to Witnet and 10% was sold to investors.

Summary

While Witnet seems like a promising project for the future, it is just that: reserved for the future. It is not live on mainnet yet, and not everything outlined in the whitepaper has been implemented, or the whitepaper may contain outdated information. The platform relies critically on its reputation system, where reputation is subject to demurrage, putting an expiry date on every reputation point, which is good for new nodes wanting to join the network. The network also relies on 50% of witnesses being honest at all times, and if a bad value gets fed to the requester, there is no way to contest it.

Band Protocol

Band Protocol is built on Ethereum and is a protocol where independent parties work cooperatively to provide trusted data on-chain, which is later consumed by other applications. Data providers stake the BAND token on the network, and only the top stakers are selected to perform work. For data submitted on-chain to be valid, it must be signed by more than 2/3 of the active data providers. Band Protocol thus implements a DPoS-style consensus in which data is deemed accurate via this 2/3 validation.

Data points, such as the Ethereum price, require a constant feed. For such cases, the Band Foundation directly invokes data requests to the coordinator node so the price feed is always available for other applications to ingest on-chain. The project seems to be in an early stage, as it currently relies heavily on this Coordinator Node, which is run only by the Band Foundation. The coordinator node dispatches each data request to all active Provider Nodes (or oracle nodes) in the network, then aggregates all the results and passes them back to the active provider nodes to collect the 2/3 validation signatures; the provider nodes verify the aggregation, sign the data, and return it to the coordinator node. The Band Protocol also relies heavily on the actual uptime of the provider nodes and on holding them accountable. If the coordinator node run by the Band Foundation goes down, the whole system halts, making it a central point of failure, and redundancy issues are apparent given that data is passed back and forth between the coordinator and provider nodes more times than necessary.
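The "more than 2/3 of active providers must sign" rule described above is simple to express precisely. This sketch checks only the counting logic; actual signature verification is omitted, and the names are illustrative:

```python
from fractions import Fraction

def quorum_reached(signatures: set[str], active_providers: set[str]) -> bool:
    """An aggregated value counts as valid only once strictly more than
    two-thirds of the active data providers have signed it."""
    signed = signatures & active_providers   # ignore signers not in the active set
    return Fraction(len(signed), len(active_providers)) > Fraction(2, 3)

active = {"p1", "p2", "p3", "p4", "p5", "p6"}
print(quorum_reached({"p1", "p2", "p3", "p4"}, active))        # False: 4/6 is exactly 2/3
print(quorum_reached({"p1", "p2", "p3", "p4", "p5"}, active))  # True: 5/6 > 2/3
```

Using exact fractions rather than floating point avoids edge cases when the count lands exactly on the threshold, which matters because the rule is "more than" 2/3, not "at least".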

Addressing the most common questions

The Band Foundation team hopes to build a network of data providers with a stake in, and incentive for, providing data. Because these providers are the top stakers on the system, they must stake a large amount of BAND tokens as collateral to ensure they provide honest and correct data, maintain uptime, avoid double-signing, and so on. This staking system still doesn't address the infamous mirroring attack, where multiple data providers controlled by the same entity can retrieve the data once and share it among themselves, potentially leading to a bad value being put on the blockchain. Furthermore, when it comes to liveness issues, where none of the nodes push any data to the chain, the Band Protocol currently has no way to handle the scenario. And if a provider supplies wrong data, the burden falls on token holders to unstake that provider: there is no proper dispute mechanism in place.

The Band team seems to emphasize simplicity of use. The protocol has "packed transactions," where data can be sent in the same transaction as a user transaction, allowing dApps to access near real-time price information. But faster block times for something like a high-value data feed may be a double-edged sword: without a proper dispute mechanism in place, one bad value could destroy the integrity of the entire system.

When it comes to the token distribution model, as of April 2020 the total supply of BAND is set to 100,000,000, while the circulating supply is 19,894,033, meaning more than 80% of the tokens are not yet available for use. Of the initial supply, seed sale investors hold 10%, private sale investors 5%, public sale investors 12.37%, the ecosystem 25.63%, the team 20%, advisors 5%, and the Band Foundation 22%. While the best way to distribute tokens is debatable, there is always a fear of initial investors dumping BAND tokens on the market as they're released, which may create massive doubt among the users and developers trying to utilize the protocol.

Summary

The team behind Band Protocol rolled out its initial solution on Ethereum but quickly changed platforms. They are now focused on building on Cosmos, aiming for a lower block time than Ethereum's and for features that cannot easily be added to Ethereum, such as a custom programming language for writing oracle scripts. The team also seems to be moving away from the coordinator node towards validator nodes, as in Cosmos. At this juncture, however, the protocol may not be ready for applications in production environments because of its over-reliance on the Band Foundation itself. Furthermore, the migrated version of the Band Protocol built on Cosmos is not live on mainnet yet. In the initial launch there is no on-chain slashing mechanism for providing bad data, so it is up to users themselves to unstake data providers that put a bad value on-chain.

Chainlink

Rather than providing a single oracle solution like the others, Chainlink uniquely offers a framework on which other oracle networks can be built and used to feed data inputs to smart contracts. Each of these oracle networks is a collection of independent node operators that give users access to any API to request any type of data from external services. Furthermore, according to the whitepaper, Chainlink can be used to request data from one or more of its oracle nodes, thereby reducing single points of failure.

Users have their own contract that calls another contract (the Chainlink contract) with the relevant query to be executed, setting parameters such as the number of oracles and other service-level agreements. Chainlink then matches the service-level agreement, including the type of data requested and the number of oracles needed. Chainlink nodes monitor the blockchain for this event and begin requesting data, such as price feeds; the retrieval of external data occurs via Adapters. The nodes retrieve the data from various sources and submit it to the contract on-chain, where the results are aggregated and finally returned to the user contract.

Addressing the most common questions

Chainlink aims to solve problems such as mirroring attacks and liveness issues through penalty and deposit contracts (staking) along with reputation systems. However, these are not implemented on mainnet yet and are still in development. Because of this, there is currently no mechanism to dispute bad values put on-chain: theoretically, a Chainlink node could put a bad value on-chain and get away with it.

The Chainlink team has currently implemented a reviewed node operator process: each node that joins the Chainlink network first has to be validated and reviewed, making the network effectively built on a proof-of-trust mechanism. This structure does prevent Sybil attacks, and if someone were to intentionally put a bad value on-chain, the Chainlink team could remove them from the trusted nodes. There is always a trade-off in how decentralized you want your system to be; in Chainlink's current state, the nodes are selected in a completely centralized manner, which makes it difficult for developers who want to use the system to request data. It makes sense that the team took this route, as the reputation mechanism and proof-of-stake system are still in development and have not gone live on mainnet yet.

One noteworthy tidbit about Chainlink is that data providers in the network aggregate data from multiple sources and remove outliers themselves; when trusted nodes send data on-chain, the aggregation is done via a smart contract to ensure high-quality data. Requesting data through Chainlink takes a variable amount of time, based on each node operator's response-time configuration. If a node operator has configured their node to send data to the blockchain in the next block, it takes about 15 seconds (the block time of Ethereum, on which Chainlink is built) to get a response; if the node is configured to reply in 2 minutes, it takes 2 minutes and 15 seconds for the data to be added on-chain. Because of this, users currently have no control over when their data request will have its answer submitted on-chain. However, there are various aggregator contracts deployed by the Chainlink team that pull specific data, such as the price of Ether and Bitcoin, from multiple Chainlink nodes, aggregate them, and push the value on-chain every 7200 seconds or so. These aggregator services can then be used by anyone to pull the price of Ether or Bitcoin at any time. This means that if users want more speed, they can choose a node directly to request data, whereas if they want more security, they can use the price feeds put on-chain by the aggregator contracts, such as those currently deployed by the Chainlink team.
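The outlier-removal-then-aggregate pattern described above can be sketched as a two-step function. The 5% threshold and the mean-of-survivors step here are assumptions chosen for illustration, not Chainlink's actual contract parameters:

```python
from statistics import mean, median

def aggregate(responses: list[float]) -> float:
    """Drop responses far from the median, then average what remains
    (illustrative sketch of outlier removal before aggregation)."""
    m = median(responses)
    kept = [r for r in responses if abs(r - m) <= 0.05 * m]  # within 5% of the median
    return mean(kept)

# Four close responses and one wildly wrong node: the bad response is discarded
# before averaging, so it cannot skew the final on-chain value.
print(aggregate([184.2, 184.5, 183.9, 184.1, 9_999.0]))
```

Filtering against the median first matters: a mean over the raw responses would let a single extreme value dominate, whereas the median anchor keeps the filter itself manipulation-resistant.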

When it comes to the token distribution model, as of April 2020 the total supply of LINK is set to 1,000,000,000, while the circulating supply is 350,000,000, meaning around 65% of the tokens are not yet available for use. In their ICO (Initial Coin Offering), the Chainlink team offered 35% in the public sale, allocated 35% for node operator incentives, and kept 30% for the company. LINK is predominantly held by externally owned accounts (non-contract accounts).

Summary

One thing to note from all this is that Chainlink nodes operate entirely independently from one another, with no peer-to-peer networking among them; hence the system is theoretically blockchain-agnostic. Chainlink currently has no mechanism to handle a bad value submitted on-chain by data providers, whether intentionally or accidentally, and one of the most significant risks is that only trusted nodes can participate in the network. Chainlink's staking and reputation systems still seem to be under development. As it works today, every data request has a predetermined Chainlink node that can fulfill it, so the network is not yet fully decentralized as outlined in the whitepaper.

Tellor

Tellor is a decentralized oracle that aims to provide the most secure way to get high-value off-chain data into smart contracts. Tellor is unlike any other oracle service provider out there. While Tellor may be slower than other platforms, this is by design: when security and decentralization come first, speed is a secondary factor.

Currently, this is how the oracle system works on Tellor:

  1. Users submit a query to the oracle and can attach “tips” so that miners choose this query over others. Additional users can add tips to the same query.
  2. Every 10 minutes, the oracle selects the best-funded query and issues a new challenge for the miners to solve.
  3. Miners solve the PoW challenge at the defined difficulty and submit the solution, along with the requested data, to the oracle contract.
  4. The oracle contract takes the values from the first five miners to submit successfully, takes the median, and rewards all five miners equally.
  5. The median value is saved to the blockchain, at which point any user can read it.
  6. Anyone holding TRB can initiate a dispute over a value submitted by any of the miners. From that point on, the miner in question is locked out and unable to participate until the vote is complete.
  7. If the vote favors the disputer, the disputer receives the miner’s stake; otherwise, the miner receives the disputer’s fee.
  8. This keeps everyone honest, thereby preserving the integrity of the oracle mechanism.
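The query selection and settlement steps above can be sketched in Python. This is a simplified illustration, not Tellor's actual implementation; the data structures, field names, and block reward value are assumptions for the example.

```python
from statistics import median

def select_best_funded(queries):
    # Step 2: the oracle picks the query with the largest total tip.
    return max(queries, key=lambda q: q["tips"])

def settle_round(submissions, block_reward):
    """Steps 4-5: take the first five successful submissions,
    record the median on-chain, and reward all five miners equally.
    (Simplified sketch; field names are assumptions.)"""
    first_five = submissions[:5]
    value = median(s["value"] for s in first_five)
    payout = block_reward / 5
    rewards = {s["miner"]: payout for s in first_five}
    return value, rewards

queries = [{"id": "BTC/USD", "tips": 12}, {"id": "ETH/USD", "tips": 30}]
best = select_best_funded(queries)  # ETH/USD wins with the larger tip

subs = [{"miner": f"m{i}", "value": v}
        for i, v in enumerate([202.0, 201.8, 202.4, 9999.0, 202.1])]
value, rewards = settle_round(subs, block_reward=25)
print(best["id"], value, rewards)
```

Note how the median automatically discards the one absurd submission (9999.0) without any explicit outlier logic, which is part of why a single bad miner cannot corrupt the recorded value.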

Addressing the most common questions

Tellor doesn’t rely on any one consensus mechanism but utilizes a hybrid of Proof of Work and Proof of Stake for two very different purposes. Users submit their requests, and miners compete against each other to add the answer to an on-chain data bank that is accessible by all Ethereum smart contracts. Miners have to stake 1,000 TRB (as of this writing) to participate in consensus; after that, they are allowed to mine blocks in which they also submit answers to various requests. If a miner puts a wrong value on the blockchain and TRB holders vote it a bad value, their 1,000 TRB stake can be slashed. Anyone in the community can challenge a value added to the chain; if the challenger loses, the miner receives the fee. Miners also earn TRB from inflationary rewards for each block they mine. The mining system is unique in that instead of one miner reaping all the benefits, five miners share the reward for each block, thereby preventing race conditions and other potential problems.
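The staking gate and slashing rule in this paragraph can be sketched as follows. This is a hypothetical illustration only; none of these function or variable names come from Tellor's contracts.

```python
def submit_value(miner, value, stakes, min_stake=1000):
    """PoS gate: only miners with at least min_stake TRB staked
    may submit values. (Illustrative sketch, not Tellor's code.)"""
    if stakes.get(miner, 0) < min_stake:
        raise PermissionError("miner must stake at least 1000 TRB")
    return {"miner": miner, "value": value}

def resolve_dispute(miner, disputer, votes_favor_disputer, stakes, fee):
    # If TRB holders side with the disputer, the miner's entire stake
    # is transferred to the disputer; otherwise the miner keeps the fee.
    if votes_favor_disputer:
        slashed = stakes.pop(miner)
        return {"to": disputer, "amount": slashed}
    return {"to": miner, "amount": fee}

stakes = {"alice": 1000}
submit_value("alice", 202.0, stakes)
print(resolve_dispute("alice", "bob", True, stakes, fee=20))
```

The economics are symmetric: a dishonest miner risks the full 1,000 TRB stake, while a frivolous disputer risks the dispute fee.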

Furthermore, selecting five winners rather than one makes it much harder for the same miner to successfully submit three of the five values, making a 51% attack expensive. Having miners solve a cryptographic problem prevents Sybil attacks, where the same party operates under multiple identities at once and thereby undermines the security of the entire system. And layering PoS on top of PoW means the largest holders would not only have to gain 51% of the hashing power (an enormous task by itself, as the Bitcoin blockchain has proven) but also risk their stake if their values are deemed malicious. This hybrid system is flexible enough that more miners can join the network, since the staking amount is relatively low (~1,000 TRB), and robust enough that contributors are encouraged and incentivized to report misbehavior. It’s the best of both worlds.

Since Tellor uses PoW, anyone can be a miner, but because it also uses PoS, a fixed staking amount is required before you are eligible to mine. Anyone on Tellor can start a dispute. The dispute fee is a variable determined by the target number of miners, the count of currently staked miners, and the total stake amount in Tributes; it can range from 1.5% of the staking amount up to the full staking amount. Last but not least, anyone can vote. By combining these carefully selected variables in each process, the system becomes more and more decentralized as it invites and encourages many participants, whether they are mining, voting, or disputing a bad value.
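The bounds on the dispute fee can be illustrated with a toy formula. Tellor's exact formula is not reproduced here; this sketch only assumes, for illustration, a fee that interpolates between the 1.5% floor and the full staking amount as the number of staked miners falls below some target.

```python
def dispute_fee(stake_amount, target_miners, staked_miners):
    """Toy sketch of the dispute-fee bounds: the fee sits at 1.5%
    of the stake when the miner target is fully met and rises toward
    the full stake as fewer miners remain staked. The interpolation
    is an assumption, not Tellor's actual formula."""
    floor = 0.015 * stake_amount
    # Fraction of the miner target that is currently unmet.
    shortfall = max(0.0, 1.0 - staked_miners / target_miners)
    fee = floor + (stake_amount - floor) * shortfall
    return min(fee, stake_amount)

print(dispute_fee(1000, target_miners=200, staked_miners=200))  # floor: 15.0
print(dispute_fee(1000, target_miners=200, staked_miners=0))    # cap: 1000
```

A fee that rises when few miners are staked makes it costlier to harass the remaining miners with disputes exactly when the network can least afford to lose them.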

When it comes to the token distribution model, as of April 2020, the total supply of TRB is 1,230,172, while the circulating supply is 1,140,120. The discrepancy between the two exists because 10% of each block reward is given to the Tellor team as a developer share. One thing to note is that, unlike the other four projects, Tellor did not pre-mine its tokens or conduct an ICO of any kind, so token holders do not have to worry about the founders dumping coins on the market. Most projects fail to check this box. While it’s understandable that a team may need funding to get started, forgoing it says a lot about a project’s intent to grow organically.

Summary

Tellor seems to check off most of the points we defined earlier for a project to be truly decentralized. It has systems in place to handle mirroring attacks, Sybil attacks, and liveness issues. It also has a working dispute mechanism: if someone pushes a bad value on-chain, anyone in the community can dispute it, after which the miner is locked out of the network until the dispute is resolved. Tellor also implements a unique combination of PoW and PoS, so it isn’t over-reliant on any one system, and there are proper checks and balances for everyone participating in the network to hold each other accountable. One limiting feature is that Tellor currently allows only one data request per block. The team seems to be addressing this with their V2 upgrade, which allows multiple data points per block. The current block time of 10 minutes may also be limiting; the V2 upgrade is supposed to bring it down to 2 minutes. There is currently a minor risk of over-centralization from people joining mining pools or banding together as a single PoW node on the network. To combat this, the V2 upgrade is poised to allow each miner to win a block only once every 30 minutes; miners who want to submit more blocks would have to double their stake, for example staking 2,000 TRB instead of 1,000, and so on.

Comparison Chart

Conclusion

This article serves as a brief introduction to the way smart contracts run, the importance of oracles, and why it is essential to have a well-thought-out, secure, and decentralized system for bringing data into the blockchain world. There is no room for error, because an oracle failure undermines the security of the entire blockchain system that depends on it. A robust oracle infrastructure is not always about speed but about how reliable it is at the end of the day. There are many other oracle projects we didn’t discuss in this article, so an ideal approach for an application handling high-value, very sensitive data would be to use multiple oracle platforms, while a single oracle platform may suffice when the data is less critical. There are always trade-offs, and no one-size-fits-all oracle system exists.
