TruSet Token Beta Competition 2 Recap

The TruSet Token Data Beta has been live for four weeks, and we have completed Competition 2 of the Beta, which ran from December 20 to January 3. The TruSet Beta continues to see strong traction, with higher levels of participation and user contributions than in Comp 1. We now have community-contributed and community-validated reference data on 85 tokens, and we continue on our trajectory to create the most accurate and trusted set of token reference data available in the industry.

Competition 3 is now live! The prize pool for Competition 3 is 14 ETH. Competition 3 runs from Thursday, January 3 until Thursday, January 17. See details below.

Join our Slack and Telegram to talk to the TruSet team and hear from the Beta community!

Competition 2 Highlights

  • 83 users onboarded to the TruSet dApp since launch
  • 29 users earned a share of the 10 ETH prize pool
  • Validated records were created for 45 additional tokens, bringing the platform total to 85 tokens
  • 80 tokens now have 2 or more validated sections
  • Beta users have cast a total of 1,734 votes
  • Beta users proposed token record data 142 times during Comp 2

Thank you to all our Beta users for helping test our product and generate high-quality token reference data records.

Tokens

TruSet continued to add tokens to the platform for our Beta community to publish and validate critical reference data sections. For Competition 2, we added the next 40 tokens by market cap. By the close of Competition 2, data had been community-validated on all 85 tokens currently on the platform.

How Does TruSet Prevent Bad Data?

As part of our Beta test, TruSet wants to see if users find ways to game the system, primarily by employing strategies to earn rewards without doing the work.

We have seen a few examples of gaming. In some cases the current TruSet design and community engagement have prevented those strategies from being successful. In others, we are adjusting rules and incentives to make those strategies unproductive.

Example 1: Lazy Publishing

We have seen a few instances of users publishing blank or mostly incomplete proposals, possibly in the hope of earning the publisher reward for a validated record without doing the work to collect the information. Good news! You, our TruSet community, have not rewarded lazy publishers. All of these proposals were either actively rejected by the community or expired when more complete records were accepted instead. No TRU rewards were paid to the publishers of those proposals.

  • Empty Proposals: Of the 9 empty proposals submitted, 0 were accepted, 6 expired, and 3 had not closed by the end of Competition 2.
  • Minimalist Proposals: Of the 25 proposals with the least data in them, 2 were accepted, 14 were rejected, and 9 expired. The two that were accepted both appear to be legitimately short, i.e. the WEEV Description and KuCoin’s listings; there just isn’t much more to say in either case.

Example 2: Blind and Educated Guessing

It is possible for voters to guess rather than actually validate proposals, and to attempt to earn rewards that way. In Competitions 1 and 2, stakes and rewards for validating were set to discourage “blind” guessing: if accepts and rejects occurred equally often, guessing would yield zero net return.

However, “educated” guessing was still possible. We have seen that some users only vote “Yes” on proposals. This may reflect those voters’ true beliefs about the accuracy of the proposals, but it seems likely that at least some of them are applying a “vote Yes on every proposal without actually doing the validation work” strategy, hoping to be rewarded more often than not when votes close. Under the Competition 1 & 2 staking rules, this proved to be a viable strategy: because proposals were more likely to be accepted than rejected, lazily voting “Yes” on everything carried a positive expected value.
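
A quick back-of-the-envelope sketch makes this concrete. The stake, reward, and acceptance-rate figures below are illustrative assumptions, not the actual Beta parameters:

    # Expected value of one validation vote: the validator forfeits `stake`
    # on an incorrect vote and earns `reward` on a correct one.
    def vote_ev(p_correct: float, stake: float, reward: float) -> float:
        return p_correct * reward - (1 - p_correct) * stake

    # Blind guessing against a 50/50 accept/reject split breaks even by design:
    print(vote_ev(p_correct=0.5, stake=1.0, reward=1.0))  # 0.0

    # But if, say, 80% of proposals are accepted, "Yes" is correct 80% of the
    # time, and the same symmetric stake turns a reliable per-vote profit:
    print(vote_ev(p_correct=0.8, stake=1.0, reward=1.0))  # ~0.6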

To disincentivize this strategy, we are changing the staking rules in Competition 3 (see below).

Example 3: Using Comments To Coordinate

This is an example of good community behavior influencing voting outcomes. We have seen several examples of comments left by validators pointing out the inaccuracies they found and bringing them to the attention of other voters. This has helped ensure that inaccurate proposals are correctly rejected. The effect is particularly powerful when a validator who rejects a proposal leaves a comment with links to evidence supporting their claims.

This effect also depends on timing: a comment is much more effective when it is left early in the validation process, as there are more subsequent validators for it to influence.

Competition 3: The Great Work Continues!

Competition 3 is now live, with a prize pool of 14 ETH! It runs from Thursday, January 3 until Thursday, January 17.

For Competition 3, you will still earn TRU tokens for successfully publishing and validating. As in Comp 2, we are adding new tokens to the list every week. We are looking for you to both publish the original section proposals and validate your peers’ proposals during this competition. At the end of Competition 3, we will calculate the net tokens you earned from the close of Competition 2 until the close of Competition 3, and your share of the Competition 3 prize pool will be based on your net tokens earned during this period. As in Competition 1, votes will close after a minimum of 36 hours from publication and a minimum of 5 votes.
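
In rough pseudocode, the mechanics described above look like this. This is a simplified sketch: the pro-rata split of the pool is our reading of “based on your net tokens earned”, and the function names are illustrative:

    PRIZE_POOL_ETH = 14.0

    # A vote closes only once both minimums are met: 36 hours since
    # publication and at least 5 votes cast.
    def vote_can_close(hours_since_publication: float, votes_cast: int) -> bool:
        return hours_since_publication >= 36 and votes_cast >= 5

    # Assumed pro-rata payout: your net TRU earned between the close of
    # Comp 2 and the close of Comp 3, as a share of everyone's net TRU.
    def prize_share_eth(net_tru_earned: float, total_net_tru: float) -> float:
        return PRIZE_POOL_ETH * net_tru_earned / total_net_tru

    print(vote_can_close(hours_since_publication=40.0, votes_cast=5))  # True
    print(prize_share_eth(net_tru_earned=50.0, total_net_tru=1000.0))  # 0.7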

We have adjusted the staking and reward rules for validating in Competition 3 to eliminate the positive expected value of guessing “Yes” on every vote.
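
As one illustration of how such a rebalancing can work (not the actual Competition 3 parameters): the stake can be raised relative to the reward until it matches the odds of acceptance, at which point an always-“Yes” strategy no longer has positive expected value:

    # Illustrative only: the stake at which always voting "Yes" breaks even,
    # solving p_accept * reward - (1 - p_accept) * stake = 0 for stake.
    def break_even_stake(p_accept: float, reward: float) -> float:
        return reward * p_accept / (1 - p_accept)

    # If ~80% of proposals are accepted, a 1 TRU reward needs a 4 TRU stake
    # before guessing "Yes" on everything stops being profitable:
    print(break_even_stake(p_accept=0.8, reward=1.0))  # ~4.0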

Not signed up yet for our Token Beta?

Click here to register