Map Improvement Proposal 12 (MIP-12)

Hivemapper Network
Hivemapper Foundation
March 4, 2024

4/4/2024 Update — Technical implementation is complete. The changes from MIP-12 took effect for the reward week of March 25, 2024, with rewards paid on April 4, 2024. With this change, the AI Trainer bounty has ended. Instead, there is a rewards pool available every week based on Global Map Progress. It is shared among AI Trainers based on how many tasks they completed, the weight of those tasks and the reputation of the AI Trainer. Note: There is a minimum of 1 HONEY for rewards to be issued in a given week.

3/13/2024 Update — We’ve decided to finalize the proposal for establishing sustainable rewards for AI Trainers. This proposal would increase the share of minted rewards allocated to Map Editing & QA from 3% to 10%. It would implement a new formula for distributing rewards among AI Trainers, similar to the current bounty. This will require technical implementation. It will take effect once complete, as soon as the end of March.

Background

Just like contributors with dashcams, AI Trainers are essential to our community’s mission of building a decentralized map of the world.

Building a useful map will require all kinds of contributors playing all kinds of roles. Collecting imagery with dashcams is just the first step. Human contributors must train AI to extract Map Features from imagery. Human contributors must also audit the resulting Map Features to ensure that network products satisfy end customers. Over time, additional modes of contribution will be required, such as map editing.

Since the launch of AI Trainers last spring, tens of thousands of people around the world have submitted a staggering 180 million reviews to help develop new capabilities within the network. This is incredible progress. We believe Hivemapper is now the world’s largest token-incentivized program for reinforcement learning from human feedback, and one of the largest token-incentivized AI projects overall.

More than 180 million total AI Trainer reviews have been submitted, and activity is rapidly accelerating.

The rewards pool will continually evolve as the work of building a decentralized map becomes more complex.

HONEY is the essential currency of the Hivemapper Network. Rewards mechanisms create the incentive structure to reward all of the people who help build a useful product and align them to further the needs of the map.

The needs of the map will evolve over time. It is critical that rewards mechanisms also evolve to reflect those needs, rather than being frozen in time to serve the interest of any stakeholder or faction of stakeholders. Putting structures in place to resist factionalism and self-interest will be one of the most daunting challenges to this new approach to building collective infrastructure for the benefit of the world.

The current rewards pool is as follows:

Just like contributors with dashcams, AI Trainers must be rewarded in a sustainable and satisfying way that fairly recognizes the value of their work. The current Hivemapper Foundation bounty has been satisfying enough to generate rapid growth, but it is not sustainable for the long term.

When we launched AI Trainers last spring, we proposed a change to the rewards pool formulas (MIP-3) to address this need. In response to community feedback and the immaturity of the AI Trainer program, we issued a temporary bounty from the Hivemapper Foundation’s treasury. The bounty has remained active for almost a year, and is now roughly 10% the size of the weekly rewards pool, ranging between 400,000 and 800,000 HONEY per week. For this reason, we believe allocating 10% of the rewards pool to AI Trainers will be sufficient.

In the near term, weekly minted rewards will be the best way to reward AI Trainers. In the longer term, we expect that AI Trainers should receive a share of consumption rewards for products that they help to build and audit. However, that will be a topic for another MIP at a later point in time as Map Feature consumption becomes more consistent.

If this proposal is finalized, the rewards categories will be as follows:

The main risk we see with this proposal is the variability in HONEY rewards for AI Trainers. If more tasks are completed in a week, HONEY rewards per task will decrease. If fewer tasks are completed in a week, HONEY rewards per task will increase. This will tend toward equilibrium, but it could hinder the ability of the network to quickly scale up review volume to satisfy the needs of map users.
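The inverse relationship described above can be sketched in a few lines. The pool size and task counts below are hypothetical, chosen only to show how per-task rewards move against weekly task volume under a fixed pool:

```python
# Hypothetical illustration: under a fixed weekly pool, the HONEY earned
# per task falls as total weekly task volume rises, and vice versa.

pool = 100_000  # weekly HONEY allocated to AI Trainers (hypothetical)

for tasks_completed in (50_000, 100_000, 200_000):
    per_task = pool / tasks_completed
    print(f"{tasks_completed:>7} tasks -> {per_task:.2f} HONEY per task")
```

Doubling task volume halves the per-task payout, which is the self-balancing pressure the proposal relies on, and also the scaling risk it acknowledges.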

We recognize some contributors with dashcams will be disappointed to see their share of the global rewards pool decrease. We respect and understand this feedback, but we urge these contributors to remember that everyone in the community shares the goal of building a useful map — a map that requires the work of AI Trainers to be successful.

This collective mission will require many years of work by hundreds of thousands of people. Restricting the growth and development of the network to steer more tokens to a few thousand early adopters will not create sustainable long-term value for anyone.

Implementation

The size of the global rewards pool would still be determined based on Global Map Progress, and its underlying Coverage, Activity and Resilience metrics. A fixed 10% of the rewards pool would be allocated to AI Trainers.

Under the current Hivemapper Foundation bounty, rewards for AI Trainers are predictable. AI Trainers receive a fixed amount awarded per task, subject to modifications based on the reputation of the contributor.

Under the new methodology, the rewards pool would be fixed. This rewards pool would be divided among contributors based on a “contributor score,” which would be calculated using the following three factors:

  • How many AI Trainer tasks they completed
  • Which types of AI Trainer tasks they completed
  • How well they completed tasks, as measured by their reputation

Every type of AI Trainer task would be assigned a point value based on its time requirements and complexity. These point values would be updated over time and published in official Hivemapper Network documentation.

The simplest tasks would be worth 1 point. These tasks would take as little as one click and a couple seconds of effort — for example, confirming the numerical speed limit on a road sign. In theory, there is no limit to the maximum point value that could be assigned to a task. For now, however, no task would have a point value greater than 30, a value assigned to tasks such as confirming the position of an object, which takes about 30 seconds on average to complete with precision.

As a simplified illustration, let’s assume the following:

  • The global reward pool was 1,000 HONEY for a given week
  • 10% of the reward pool, or 100 HONEY, would go to AI Trainers
  • Two types of tasks: Task A is worth 1 point and Task B is worth 5 points
  • Two contributors: Contributor A, with a reputation multiplier of 0.9, and Contributor B, with a reputation multiplier of 0.2

Contributor A, having completed 10 Task A reviews and 50 Task B reviews, would have 0.9 * [ (10 * 1) + (50 * 5) ] = 234 points

Contributor B, having completed 100 Task A reviews and 10 Task B reviews, would have 0.2 * [ (100 * 1) + (10 * 5) ] = 30 points

Total = 264 points

Contributor A would receive (234 / 264) * 100 = 88.636 HONEY

Contributor B would receive (30 / 264) * 100 = 11.364 HONEY
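The worked example above can be expressed as a short sketch. The function name, point values, reputations and task counts are the hypothetical figures from the illustration, not part of the protocol:

```python
# Sketch of the proposed distribution, using the illustrative numbers above.
# All inputs (point values, reputations, task counts) are hypothetical.

def contributor_score(reputation, task_counts, point_values):
    """Reputation-weighted sum of points across completed task types."""
    return reputation * sum(
        count * point_values[task] for task, count in task_counts.items()
    )

point_values = {"A": 1, "B": 5}

scores = {
    "Contributor A": contributor_score(0.9, {"A": 10, "B": 50}, point_values),
    "Contributor B": contributor_score(0.2, {"A": 100, "B": 10}, point_values),
}

pool = 100  # 10% of the 1,000 HONEY weekly pool in the illustration
total = sum(scores.values())
payouts = {name: (score / total) * pool for name, score in scores.items()}
# Contributor A: ~88.636 HONEY; Contributor B: ~11.364 HONEY
```

Note that each contributor's payout depends on everyone else's scores through the shared total, which is what makes the pool fixed rather than the per-task reward.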

Feedback on the proposal

The comment period was 8 days, running from Monday, March 4, 2024 until Tuesday, March 12, 2024. Conversation was active, and focused in large part on distributional equity — how to allocate scarce resources between dashcam contributors, AI Trainers and other participants in the Hivemapper ecosystem.

Several commenters suggested the Map Coverage share should be reduced less, and the AI Trainer share should be increased by less.

We understand and respect that no one wants to see their own rewards impacted. However, we believe the proposed split would avoid imposing an unreasonable burden on any segment of contributors. With a bounty pool averaging around 700,000 HONEY in recent weeks, allocating 10% of the rewards pool to AI Trainers would not favor either map contributors or AI Trainers, because both groups are essential to building a useful map.

We also wish to note that few if any AI Trainers commented to ask for a larger share of the pot. This underscores the power imbalance between map contributors, the majority of whom are English speakers in wealthier countries, and AI Trainers, many of whom speak English as a second language and may not feel comfortable engaging in this conversation. We want to ensure we are making decisions based on fairness, rather than listening to the loudest voices.

Several commenters suggested that some of the reward pool allocated to AI Trainers should come from Operational Rewards.

This is an understandable argument, but not one that makes sense for this specific change. The increase in AI Trainer activity does not reduce the substantial, fiat-denominated costs that the manager of the network incurs for uploading, storing and processing millions of kilometers of map data every week. If anything, it increases that cost due to increased processing needs, making a reallocation from Operational Rewards to AI Trainers inappropriate. In the future, if we make protocol-level changes that allow contributors to do work that reduces the substantial cost of uploading, storing and processing map data, we will consider shifting some of the Operational Rewards allocation toward contributors.

Some commenters suggested that we should be more aggressive in limiting who can be an AI Trainer and how many tasks they can complete.

To be blunt, we are not going to do this now, or ever. This network is designed to be permissionless. Everyone should be allowed to participate as much or as little as they want, as long as they do good enough work for the network to maintain their AI Trainer reputation. The outgoing bounty guarantees a certain reward, and this guaranteed reward runs the risk of being more or less than necessary to incentivize the work needed by the map. By adding AI Trainer rewards to the rewards pool, we will create a supply-and-demand balancing mechanism that allows individuals to decide whether it is worth their time to participate, resulting in a market-clearing supply of AI Trainer work.
