Token Model Simulation #1: Fools' Agreement Part 1

Taeheon Lee
Published in DECON · Nov 6, 2018

Token Model Simulation Series

#0 Why Simulation?
#1 Fools' Agreement Part 1: Introducing "Fools' Agreement" and the Simulation Environment
#1 Fools' Agreement Part 2: Simulation Result Analysis

In our previous post, Token Model Simulation #0: Why Simulation?, we looked into why simulation is needed in the token model design process. As Decon researches simulation systems for token design, we would like to dig into the Reinforcement Agent Based Simulation introduced in that post.

The first topic of Decon's simulation series is the "fools' agreement" problem. Before going into the simulation analysis, we must understand what the fools' agreement is and why such a problem occurs on the blockchain. Then, we will move on to the simulation environment.

“Fools’ Agreement” Problem and Simulation Objective

Significance of Oracle

Services based on smart contracts need an entity that observes outside data and inputs it into the blockchain. Such an entity is generally called an oracle. The data that oracles feed the blockchain serves as a bridge to the outside world, which is why it is the linchpin of blockchain service operation.

However, unlike transactions that happen on the blockchain, outside data that the oracle brings in cannot be verified through the blockchain. Establishing an organization to substantiate outside data undermines the meaning of decentralization. For this reason, the authenticity of external data, as distinct from the blockchain's own integrity, can be a weakness of a blockchain service and may pose a serious threat to the service's normal operation.

Therefore, ensuring that an oracle inputs valid data onto the blockchain service through mechanism design is critical. This is referred to as the oracle problem.

Oracle Mechanism's Premise: "Majority Consensus = Truth"

Most services try to resolve the oracle problem through votes by token holders. Voting usually takes place with tokens: voters deposit their tokens on the data they believe is correct, and the smart contract decides that the data that received more tokens is valid.

Such mechanism functions to assess the validity of outside data based on the premise “majority consensus = truth”.
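To make the premise concrete, here is a minimal Python sketch (our own illustration, not any specific project's contract code) of how a token-weighted vote resolves: whichever option has attracted the most deposited tokens is treated as the truth.

```python
# Minimal illustration of token-weighted oracle voting;
# not the contract logic of any specific project.

def resolve_vote(deposits):
    """`deposits` is a list of (voter, option, amount) tuples, where
    `option` is the data value the voter backed with `amount` tokens."""
    totals = {}
    for voter, option, amount in deposits:
        totals[option] = totals.get(option, 0) + amount
    winning_option = max(totals, key=totals.get)  # most tokens = "truth"
    return winning_option, totals

# Example: three voters back option "A", one large holder backs "B".
winner, totals = resolve_vote([
    ("alice", "A", 10),
    ("bob", "A", 15),
    ("carol", "A", 5),
    ("dave", "B", 40),
])
print(winner)  # "B": fewer voters, but more tokens
```

Note that the contract never inspects the data itself; it only counts tokens, which is exactly where the premise "majority consensus = truth" comes in.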

When the Premise “Majority Consensus = Truth” is Wrong

The voting mechanism based on the premise "majority consensus = truth" has a flaw: in some cases, a majority vote cannot guarantee that the selected data is correct.

First, a majority vote has little significance when the voting rate is low, because with only a handful of voters the result can be manipulated at low cost. For instance, in a vote with 1,000 voters, you would need hundreds of people to manipulate the result. Yet if the vote involves only ten people, you could easily dictate the result by colluding with three or four of them.

Even when the voting rate is high, it is difficult to guarantee a correct outcome if voters cast their votes without proper consideration. If voters do not do their homework on the candidates, whether for class president or a presidential election, it is hard to expect the most suitable candidate for the class or the country to be elected. The same applies to voting on truth: if many token holders do not examine the integrity of data before allowing it onto the blockchain, it is difficult to verify that data.

In practice, many blockchain projects have introduced mechanisms to prevent the two circumstances mentioned above. In this simulation, we analyze some mechanisms that aim to prevent the second situation and examine how they perform.

Fools’ Agreement

(Image source: Red M, The Redolution is now!)

We will refer to the situation where holders vote randomly on outside data and produce an erroneous result as the fools' agreement. Why does this fools' agreement problem happen in actual blockchain services?

Ideally, for voting on oracle data to work well, all voters should do their research before casting their vote. However, in whatever form it takes, that research entails a cost. Despite this research cost, voting mechanisms assume that voters will study before voting because of economic drivers such as the penalties and rewards attached to the vote.

However, it is a different story when token holders can reap the benefits without bearing any research cost, or gain nothing from researching before casting their vote. In this case, many token holders become reluctant to do their research and cast their votes randomly. The resulting votes literally become fools' agreements, continuously degrading the integrity of the outcomes. Eventually, more and more wrong data is imported, inflicting substantial damage on the service.

Simulation Objective: Analyzing Mechanisms that Address the "Fools' Agreement"

Mechanisms that aim to prevent the fools' agreement problem focus on increasing the ratio of voters who conduct research before voting, so that vote outcomes reflect the validity of outside data as closely as possible.

To assess the validity of each mechanism, we conducted simulation-based research. The analysis focused on how the research rate of voters changed and whether the oracle produced good results. Furthermore, we used the simulation to observe the reasons for failure when a mechanism did not function properly and the reasons for success when it did.

Simulation Environment Setting

Before going into the simulation result analysis, we will look into the simulation environment. Please reach out to us if you have any feedback on our simulation setting, or feel free to verify the validity of our analysis by experimenting with the setting yourself.

Possible Vote Results within Simulation Environment

We simplified the vote results to come out true or false. Here is how we defined true and false.

· True : when true data was selected via vote

· False : when wrong data was selected via vote

Variables Related to Environment Setting

The variables related to the environment setting were objectivity, research cost, lockup period, and number of agents.

Objectivity can be understood as how easy the data is to assess correctly. If the objectivity of the data an oracle is dealing with is low, service users cannot reliably evaluate the integrity of the data even with research. For instance, if the objectivity is 0.9, even voters who have conducted research have a 10% chance of voting false.

Research cost refers to the cost incurred by agents when they conduct research before voting. Research cost does not affect an agent's token holdings, but it does influence the agent's learning.

Agents who win the vote get their deposited tokens back along with reward tokens after the lockup period. In many actual services, tokens are paid out only a certain period of time after the oracle vote takes place.
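As a rough sketch of how such a lockup could be modeled (the class, field names, and reward rule below are our own assumptions, not taken from any specific service), winning payouts can be queued and only credited back to an agent's balance once the lockup period has elapsed:

```python
# Illustrative lockup accounting; the class and reward rule are our assumptions.
LOCKUP_PERIOD = 3  # timesteps until a winning payout is released

class LockupQueue:
    def __init__(self):
        self.pending = []  # (release_timestep, agent_id, amount)

    def add_payout(self, current_t, agent_id, deposit, reward):
        # Winners get their deposit back plus a reward, but only after the lockup.
        self.pending.append((current_t + LOCKUP_PERIOD, agent_id, deposit + reward))

    def release(self, current_t, balances):
        still_locked = []
        for release_t, agent_id, amount in self.pending:
            if release_t <= current_t:
                balances[agent_id] += amount   # lockup elapsed: credit the payout
            else:
                still_locked.append((release_t, agent_id, amount))
        self.pending = still_locked
```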

Number of agents simply means how many agents took part in the vote in the simulation environment.

The simulation was conducted by setting the value of the variables as follows:

· Objectivity : 0.8

· Research cost : 50

· Lockup Period : 3

· Number of agents : 30
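For reference, these settings can be gathered into a small configuration object. The field names are our own and simply mirror the values listed above; the episode, timestep, and repetition counts come from the time axis section later in this post.

```python
from dataclasses import dataclass

# Field names are our own; the values mirror those listed in this post.
@dataclass
class SimulationConfig:
    objectivity: float = 0.8         # P(correct vote) for an agent that researched
    research_cost: float = 50        # cost of researching (does not reduce token holdings)
    lockup_period: int = 3           # timesteps before winning deposits/rewards are released
    num_agents: int = 30             # voters taking part in each oracle vote
    num_episodes: int = 300          # episodes per simulation run
    timesteps_per_episode: int = 50  # oracle votes per episode
    num_runs: int = 50               # repetitions used to accumulate statistics

config = SimulationConfig()
```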

Agents

In reinforcement learning, agents are the entities that act within an environment and learn from rewards and penalties. The agents in our simulation are reinforcement learning agents who vote on oracle data and learn from the rewards and penalties their actions produce.

Agents have two behavior options.

· Random vote: Agents have a 50:50 chance of choosing true or false.

· Research: Agents vote after conducting research, choosing true with probability equal to the objectivity and false with probability 1 - objectivity (see the sketch below).
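A minimal sketch of these two behaviors, assuming (as defined above) that objectivity is the probability that research leads to a correct vote:

```python
import random

def vote(action, objectivity):
    """Return True if the agent votes for the true data, False otherwise.
    `action` is either "random" or "research". Illustrative sketch only."""
    if action == "random":
        return random.random() < 0.5           # 50:50 coin flip
    if action == "research":
        return random.random() < objectivity   # correct with probability `objectivity`
    raise ValueError(f"unknown action: {action}")
```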

We set the agents' learning objective to maximizing their token holdings and used a Multi-armed Bandit as their learning algorithm. Using the Multi-armed Bandit algorithm, each agent updates its profit estimate for each action option and chooses accordingly in the next vote.
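The post does not pin down the exact bandit variant, so the following epsilon-greedy sketch is only one plausible reading: each agent keeps a running estimate of the expected profit of "random vote" and "research", usually picks the arm that looks better, and updates that estimate with the realized profit after each vote.

```python
import random

class BanditAgent:
    """Epsilon-greedy multi-armed bandit over the two action options.
    Illustrative only; the original simulation may use a different variant."""

    ACTIONS = ("random", "research")

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.value = {a: 0.0 for a in self.ACTIONS}   # estimated profit per action
        self.count = {a: 0 for a in self.ACTIONS}

    def choose_action(self):
        if random.random() < self.epsilon:
            return random.choice(self.ACTIONS)        # explore occasionally
        return max(self.ACTIONS, key=lambda a: self.value[a])  # otherwise exploit

    def update(self, action, profit):
        # Incremental mean: pull the estimate toward the realized profit.
        self.count[action] += 1
        self.value[action] += (profit - self.value[action]) / self.count[action]
```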

Token and Relevant Environment Setting (Initial Distribution, Deposited Tokens)

To mirror an actual token distribution as closely as possible, we used the AUGUR token distribution as the reference for the initial token distribution. We excluded the top 50 accounts among the top 1,000 AUGUR holders. From the holding ratios of the remaining 950 holders, we randomly chose accounts equal in number to the number of agents and distributed tokens accordingly.

For the tokens each agent deposits per vote, we set agents to deposit an amount determined by a value drawn from a Gaussian distribution with a mean of 0.5 for each vote.
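A rough sketch of this setup is below. `holder_balances` stands in for the balances of the remaining 950 AUGUR holders (which we do not reproduce here), and both the interpretation of the Gaussian value as a fraction of the agent's holdings and the standard deviation are our assumptions, since the post does not specify them.

```python
import random

def sample_initial_holdings(holder_balances, num_agents):
    """Pick `num_agents` accounts at random from the reference balances
    (e.g., AUGUR top-1,000 holders minus the top 50) and use their
    balances as the agents' initial token holdings."""
    return random.sample(holder_balances, num_agents)

def draw_deposit(balance, mean=0.5, std=0.15):
    """Draw the per-vote deposit from a Gaussian with mean 0.5.
    Interpreting the draw as a fraction of the agent's balance and the
    standard deviation of 0.15 are assumptions on our part."""
    fraction = min(max(random.gauss(mean, std), 0.0), 1.0)  # clamp to [0, 1]
    return balance * fraction
```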

Simulation Time Axis (Episode, Timestep) and Data Collection

One simulation run consists of 300 episodes, and each episode consists of 50 timesteps. To analyze the data from the simulation and draw meaning from it, it is important to understand the concepts of episode and timestep correctly.

A timestep is the minimum unit of the simulation; it may be easiest to think of each timestep as one oracle vote. At each timestep, agents vote on outside data and learn from the reward or penalty they receive based on the vote result. In other words, at each timestep, the token holdings and behavior patterns of the agents change.

An episode is a bundle of consecutive timesteps over which agents learn. At the start of every episode, the agents' token holdings are reset while the agents continue to build on what they have learned. The information on agents' learning tendencies and changes in the oracle market is analyzed per episode.

To produce the results we report, we accumulated statistical data by repeating the full simulation (300 episodes of 50 timesteps) 50 times.
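Putting the time axis together, the overall structure is a triple loop over runs, episodes, and timesteps. The callbacks and the `reset_holdings` method below are placeholders standing in for the voting and learning mechanics described above.

```python
NUM_RUNS = 50
NUM_EPISODES = 300
TIMESTEPS_PER_EPISODE = 50

def run_simulations(make_agents, run_vote, record_stats):
    """Skeleton of the simulation time axis. `make_agents`, `run_vote`,
    and `record_stats` are placeholders for the mechanics described above."""
    for run in range(NUM_RUNS):
        agents = make_agents()                      # fresh learners for each repetition
        for episode in range(NUM_EPISODES):
            for agent in agents:
                agent.reset_holdings()              # holdings reset every episode;
                                                    # learned estimates carry over
            for t in range(TIMESTEPS_PER_EPISODE):
                run_vote(agents, t)                 # one oracle vote per timestep
            record_stats(run, episode, agents)      # per-episode statistics
```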

Conclusion

This post introduced the fools' agreement problem that we aim to analyze through simulation and briefly described the simulation environment setting. In our next post, we will cover the analysis of the simulation results.
