HOPR DAO v0.1 Incentive Payout and Analysis

Qianchen YU
HOPR
Jun 11, 2021 · 7 min read

The first governance experiment for HOPR DAO v0.1 ended a few weeks ago. This is the first in a series of posts analyzing the results and explaining some of the thinking behind the experiment, the lessons learned, and what changes and improvements we’ll be making for future experiments. This first post will focus on the incentivization scheme, where 50,000 HOPR tokens were awarded to DAO participants based on their contributions.

Creating an incentivization scheme for something as subjective as governance discussion is a daunting challenge. There are countless factors to balance, and only limited metrics you can use as a proxy for nebulous concepts such as quality of contribution. Someone who made 100 posts could be an active and thoughtful contributor to the debate, or they could be a raging troll.

Similarly, someone might not make many visible contributions, but could be reading everything and bringing it all together into a single great proposal. How do you ensure they get the reward they deserve?

HOPR faces the added challenge that we’re a privacy project, which doesn’t always sit well with data scraping and analysis. Our position is that governance should be transparent and public, and users who participate do have to give up a certain amount of anonymity (and should be fairly compensated for doing so), but our goal is for the discussion and referendum phases to be pseudonymous, and the vote stage to be entirely anonymous. This cuts off some avenues for tracking and rewarding contributions.

It’s also important to us that, while the governance discussions are moderated, the moderators should play a neutral role. Therefore, incentives should be awarded based on a predetermined system, with minimal moderator input, to prevent bias.

Finally, there’s the problem of gaming the system. If you announce the rules for the incentives, people are encouraged to maximize their rewards rather than participating in good faith. We opted to pre-write the rules but keep them a secret during the experiment, but this presented its own problems: we could only guess how the discussion would unfold, and wouldn’t be able to update the rules on the fly.

The Incentive System

In the end we settled on a few general concepts:

First, rewards would be limited to the discussion and referendum phases, not the vote. With tokenized voting, a huge amount of power is concentrated in the voting phase. The HOPR DAO design is based around mitigating that imbalance by giving individual users a lot of power in the early stages, regardless of token holding. Crudely speaking, you may not be able to outvote the whales, but you do have a lot of say in what the whales get to vote on.

Second, we wanted to compensate people for giving up their privacy. Even though the forum stages were pseudonymous, it’s still less private than the voting stage. That sacrifice should be recognized and rewarded, ideally without further privacy costs.

Third, the goal was to identify and reward actions which furthered and enriched the discussion without a huge overhead in terms of system design or manual moderation (which would be extremely time consuming and hugely susceptible to bias).

In general, creating a valid proposal was considered the most valuable action. Creators and signatories of proposals were rewarded, with the goal of giving a higher payout to proposals which made it further through the discussion and referendum process and the highest payout to the winning proposal.

We also wanted to reward high quality posts and consistent contribution to the discussion. We used likes as a proxy for this, which felt less than ideal, but there weren’t many other options available.

The full details of the incentivization scheme can be found here.

The payout was split between three categories, each with three subcategories, to try and reward a balance of behaviours:

  • Proposals: 35% across three subcategories to reward people for creating and signing proposals. 5% was reserved for whichever subcategory had the lowest average payout, to mitigate imbalance.
  • Discussion: 30% across three subcategories, to reward people for creating high quality posts.
  • Miscellaneous: 30% across three subcategories, to reward people for consistent participation throughout the entire process. Again, 5% was reserved for whichever subcategory had the lowest average payout, to mitigate imbalance.

Finally, 5% of the pool was reserved to be allocated at the discretion of the moderators, to reward any users or group of users which the moderators felt lost out due to the automated distribution.
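The split described above, including the rule that a reserved 5% goes to whichever subcategory ends up with the lowest average payout, can be sketched in a few lines. The category shares and the lowest-average rule come from the post; the function names and data shapes are illustrative assumptions, not the actual plugin code:

```python
# Illustrative sketch of the payout split (names and shapes are assumed,
# not taken from the actual HOPR forum plugins).

POOL = 50_000  # total HOPR tokens in the reward pool

# Base shares per category; the remaining 5% is the moderators' discretionary allocation.
CATEGORY_SHARES = {"proposals": 0.35, "discussion": 0.30, "miscellaneous": 0.30}

def allocate_bonus(subcategory_payouts: dict[str, list[float]]) -> str:
    """Return the subcategory with the lowest average payout per user,
    which receives the reserved 5% (used in Proposals and Miscellaneous)."""
    return min(subcategory_payouts,
               key=lambda s: sum(subcategory_payouts[s]) / len(subcategory_payouts[s]))

# Example: three hypothetical subcategories with per-user payouts
payouts = {"sub_a": [100.0, 200.0], "sub_b": [50.0, 60.0], "sub_c": [300.0]}
print(allocate_bonus(payouts))  # prints "sub_b" (lowest average: 55.0)
```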

The custom data plugins we used to allocate the rewards

Around a dozen custom data plugins in the forum backend tracked users’ contributions and the number of likes (or signatures in the case of proposals) which each post received. We were then able to automatically calculate the payout for each user across all nine subcategories. This process didn’t use any data that wasn’t publicly available to all forum users.
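The post doesn’t publish the plugin code, but the core calculation it describes — splitting a subcategory’s allocation among users in proportion to their likes or signatures — might look something like the following sketch. All names and the proportional-split rule as implemented here are assumptions for illustration:

```python
def distribute(subcategory_pool: float, likes_per_user: dict[str, int]) -> dict[str, float]:
    """Split a subcategory's token allocation proportionally to each user's
    like count (or signature count, for proposal subcategories). Illustrative only."""
    total = sum(likes_per_user.values())
    if total == 0:
        return {user: 0.0 for user in likes_per_user}
    return {user: subcategory_pool * n / total for user, n in likes_per_user.items()}

# A hypothetical 5,000-token subcategory split across three users
print(distribute(5_000, {"user1": 6, "user2": 3, "user3": 1}))
# → {'user1': 3000.0, 'user2': 1500.0, 'user3': 500.0}
```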

Results

So how did this play out?

First, the rewards themselves. The incentive pool was set to grow based on the number of users who participated, with unannounced potential to grow to 100,000 HOPR tokens. In the end we had 257 participants, which was around what we’d hoped for. This meant the reward pool was 50,000 HOPR tokens.

Of those 257 participants, 220 will receive some kind of reward, distributed as shown in the chart below.

Reward distribution, with users anonymized by numbers

The average payout was just over 110 HOPR tokens, and the median payout was just under 50 tokens. The highest payout was 5287 HOPR tokens, to the user who created the winning proposal. 13 of the top 15 recipients created proposals.

Analysis and Lessons Learned

In general this went very well. The payout system seems to have rewarded the intended people to roughly the intended degree. There were some complications like multi-author proposals and combined proposals, but they were easy enough to fold into the automated plugins.

Analysis of full payout results

Because this was the first experiment, one major goal was to build in mitigation mechanisms in case any of our assumptions were wildly incorrect. Both the Proposal and Miscellaneous categories assign 5% to whichever subcategory results in the lowest average payout per user. The hope was that this would help to correct any imbalances that arose, and it seems to have worked quite well.

The one exception was the third proposal subcategory: 10% for proposals which were valid but failed to get enough signatures to reach the referendum stage. We felt it was important that all valid proposals be rewarded, to not disincentivize bold or unusual ideas, but in the end only a handful of proposals fell under this payout subcategory, and their creators were probably disproportionately rewarded, even with the 5% leeway built in. It’s only a few hundred tokens overall, but that subcategory should probably have received 5% of the allocation, not 10%.

If there was any major misstep, it was tying so much of the payouts to the forum Like function. Because the details of the incentive system were kept secret it was impossible to do more than nudge people towards using this, and a lot of good posts went unliked, even if there was clear evidence that people supported them (e.g., follow-up posts expressing appreciation).

We think this is probably just a consequence of the tone and format of governance discussions. Posts were generally in depth and on topic, and people tended to signal agreement or disagreement by continuing the conversation rather than clicking the like button.

In the end this didn’t cause too many problems, but it did result in only 4 users qualifying for the reward subcategory “create ten posts with at least three likes”, far fewer than the intended number of recipients.

To mitigate this, the 5% discretionary moderator allocation will be divided between the 27 users who made 10 posts which received at least one like. This is far closer to the intended distribution. But even this misjudgement only resulted in a few hundred tokens deviation from the average, which still feels like a success for a first attempt.
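Assuming the discretionary 5% comes out of the same 50,000-token pool (the post doesn’t state this explicitly), the per-user share works out to roughly 93 tokens:

```python
pool = 50_000
discretionary = 0.05 * pool    # 2,500 HOPR tokens reserved for moderator discretion
per_user = discretionary / 27  # split evenly among the 27 qualifying users
print(round(per_user, 1))      # prints 92.6
```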

What Next?

In general, this seems to have been a success. 257 participants is far, far higher governance participation than other projects are reporting, especially when you consider that HOPR is a relatively new project.

The process can certainly be improved, and now that we’re happy with the general approach, there’s scope to build more custom plugins for the forum to improve the user experience and make the next experiment more streamlined.

A lot of open questions remain. Is it even a good idea to directly incentivize governance? Participation is real work, and work should be rewarded. On the other hand, introducing monetary incentives into governance can distort the outcome.

We’re probably going to keep the incentives for a while. The incentivization scheme seems to have done a good job of rewarding the intended participants, but how much of that was down to the rules being secret? If we ran the experiment again with the same rules, would we see a lot of useless proposals and likes as people tried to game the rewards? (We do have mitigations in place for some of this, so people wouldn’t actually receive rewards, but it would still create a lot of noise.)

The next experiment should give us better data on that front. We hope to run that in July.

Finally, 51 participants who earned rewards still haven’t filled in their address on the forum. We’re extending the deadline to Tuesday June 15th at 2pm CET, but after that claims will close and the rewards will be distributed.

Thanks to everyone who participated! The next analysis post will focus on the third stage of the HOPR DAO process, the vote.

Qianchen “Q” Yu

HOPR Decentralized Technology Architect

Website: https://www.hoprnet.org
Twitter: https://twitter.com/hoprnet
Telegram: https://t.me/hoprnet
Forum: https://forum.hoprnet.org
