The Simplest Trustworthy Oracle

Oracles have a reputation for being a very hard problem in the blockchain space. An oracle, in short, is a mechanism for providing information about the real world to smart contracts, which live only in computers. It might be surprising that a blockchain, sometimes called a “trustless protocol”, has no way of seeing the world beyond its own internal state and the external accounts that interact with it.

I just saw a very smart panel at the CryptoEconomic Security Conference, featuring Vitalik Buterin, Joseph Poon, and Karl Floersch, call oracles a hard problem. I think oracles are perceived as a hard problem because people are pursuing an impossible quality from their oracles, which I’ll call “objective truth”:

Objective Truth: Any number of third-parties can base the outcomes of their agreements with any amounts of value at stake on the output of this oracle without fear of manipulation.

As soon as you try to introduce a notion of truth objectivity, you have to devise a fair and perfect arbitration system for the case where external accounts disagree about the result. Now you need to figure out where to add possible penalties, maybe a voting system, probably a token sale, and I would argue these are all monkey patches: moving the goalposts, running away from the real problem:

I don’t get to tell you when you can feel betrayed. Betrayal is a personal experience, which is rooted in understandings that could be completely unique to your experience, or private between you and a private group, and the potentially very informal agreements and understandings that you believe are in effect.

Sure, you can host a token sale and distribute the tokens to experts as voting shares. Those experts can be widely known as the most trustworthy people in the world on a given topic; they could represent trillion-dollar companies responsible for that particular piece of information; there may be no rational reason to suspect them of any collusion or manipulation of this on-chain report. But if you want the highest security as an individual, the guarantee that matters most is not whether everyone else in the world agrees the outcome is true. You could be the victim of the world’s largest trolling campaign. Any system that tries to ensure objectivity will be gameable, because achieving universal public consensus on an external, non-machine-verifiable fact requires challenges from the public, which fundamentally empowers parties with nothing at stake in your particular affairs.

As a voluntary participant in any distributed system, I think that instead of objective truth, we should pursue one guarantee:

Perfect Insurance: The insurance is redeemable at any time the recipient feels it should be.

This is easily possible given a robustly developed social collateral network. If the agreement has been violated in terms that the mutual social network of the recipient and the oracle can readily agree upon, then only the dishonest party is punished. If the recipient is clearly redeeming dishonestly according to their social network, then their access to liquidity is restricted in the future.

Sound too good to be true? Then you must not be familiar with the social collateral pattern, whose applications are the focus of this blog. A social collateral network relies on just a couple of simple assumptions, which are largely based on human competence in using the system. I think that is a good and honest primitive to build on, because it is actually an implicit assumption for any protocol. The usage assumptions are:

  • Users become habituated to extending stakes/bonds/“permissions to spend”/trustlines (verbiage refinement is important) to people they trust, in an amount never exceeding the value the other person places on the relationship.
  • Users lock/withdraw the amount of funds they would like insured along the maximal flow of trustlines from the oracle to themselves.
  • User client software is capable of presenting evidence of wrongdoing to users in the case of arbitration.
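To make the second assumption concrete, the insurable amount between an oracle and a recipient is a maximum flow over the trustline graph. Here is a minimal sketch of that computation, using the standard Edmonds-Karp algorithm; the data layout and function names are my own illustrative assumptions, not any particular protocol’s API:

```python
from collections import defaultdict, deque

def max_insurable(trustlines, source, sink):
    """Max flow from `source` (oracle) to `sink` (recipient).

    trustlines: dict mapping (extender, recipient) -> the amount
    the extender is willing to stake on that relationship.
    Returns the maximum insurable amount along the network.
    """
    # Residual capacities; reverse edges start at 0 via defaultdict.
    cap = defaultdict(int)
    graph = defaultdict(set)
    for (u, v), amount in trustlines.items():
        cap[(u, v)] += amount
        graph[u].add(v)
        graph[v].add(u)  # residual (reverse) direction

    flow = 0
    while True:
        # BFS for an augmenting path with remaining capacity.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in graph[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path remains
        # Collect the path, find its bottleneck, and push flow.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[edge] for edge in path)
        for u, v in path:
            cap[(u, v)] -= bottleneck
            cap[(v, u)] += bottleneck
        flow += bottleneck
```

For example, if the oracle extends 10 to Alice and 3 to Bob, and Alice extends 5 to Bob, then Bob can insure at most 8 against the oracle: 5 routed through Alice plus 3 directly.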

There is also an optional assumption that could be culturally imposed: the context provided around this withdrawal/lock-up.

For example: A very considerate protocol might request pre-approval from the involved members of the maximal flow, that they are willing to arbitrate the result of this oracle in case of a dispute.

A less considerate protocol might simply provide the justification at the time of lock-up. This imposes the burden of arbitration only in the case of a dispute; if there is no dispute, the intermediary parties may not even need notification of the withdrawal, since their total balance can remain constant, only flowing between parties they trust and parties that trust them.
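To see why an intermediary’s balance can remain constant, consider earmarking the insured amount hop by hop along a chosen flow path: each intermediary grants an earmark to the next hop and receives an equal earmark from the previous hop, so their net position is unchanged. A hedged sketch, assuming a single path has already been selected and using hypothetical names:

```python
def lock_insurance(trustlines, path, amount):
    """Reserve `amount` along one flow path from oracle to recipient.

    trustlines: dict mapping (extender, recipient) -> available capacity.
    path: list of accounts, oracle first, recipient last.
    Each hop (u, v) on the path has `amount` earmarked against it.
    An intermediary owes `amount` downstream but is owed `amount`
    upstream, so its net balance is unchanged.
    """
    locked = {}
    for u, v in zip(path, path[1:]):
        if trustlines.get((u, v), 0) < amount:
            raise ValueError(f"insufficient trustline {u}->{v}")
        trustlines[(u, v)] -= amount  # capacity reserved, not spent
        locked[(u, v)] = amount
    return locked
```

If no dispute ever arises, the earmarks are simply released; only in arbitration does value actually move, and then only between adjacent parties who trust each other.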

At the very least, even in a less considerate protocol, members of the network need initial context that the extended “permissions to spend” are open-ended, and can be used for resolving disputes in the case that you ever betray that individual. This permission to spend can be thought of as a universal, context-free, peer-to-peer personal stake/bond.

Lastly, either protocol needs an arbitration interface, which allows the recipient/claimant to “make their case” that they deserve to keep the bond. The case is bubbled in the reverse direction of the maximal flow, all the way back to the oracle account itself, which, if the network ultimately finds it guilty of manipulation, is slashed and ostracized from the network.
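The bubbling described above can be sketched as a walk back along the flow path, with each intermediary reviewing the evidence in turn. This is only an illustrative model under my own assumptions: `judge` is a hypothetical callback standing in for a human intermediary reviewing evidence in their client software, not any real arbitration API:

```python
def arbitrate(flow_path, evidence, judge):
    """Bubble a claim from the claimant back toward the oracle.

    flow_path: accounts from the oracle (first) to the claimant (last).
    judge(account, evidence): hypothetical callback; returns True if
    that intermediary upholds the claim against the next hop upstream.
    Returns the account the network ultimately holds responsible.
    """
    claimant, oracle = flow_path[-1], flow_path[0]
    # Walk the intermediaries from the claimant back toward the oracle.
    for account in reversed(flow_path[1:-1]):
        if not judge(account, evidence):
            # An intermediary rejects the claim: the claimant is judged
            # dishonest, and their future access to liquidity shrinks.
            return claimant
    # Every intermediary upheld the claim: the oracle is slashed.
    return oracle
```

Either outcome punishes exactly one party, and only through relationships that party voluntarily staked, which is what distinguishes this from objectivity-seeking arbitration.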

I hope I’ve introduced anyone interested in blockchain oracles to the notion of social collateral. If you’d like to learn more about how it can be used, please check out my other articles here on Capabul.