The NEAR Tasks Testing Journey Continues: Alpha Test Complete

Jeff Bissinger
Published in NEAR Tasks · 5 min read · Aug 31, 2023

The second round of testing NEAR Tasks has officially wrapped up, and as we did with our Zero Test, we're here to share some of our results and what we've learned.

The goal of this test was to confirm that NEAR Tasks could provide valuable data to task supply partners, and that taskers could complete tasks at a high level of quality while finding the experience rewarding.

Top 5 Takeaways

  1. Taskers demonstrated a clear preference for the AI-related task sets, so these will become our priority: the NEAR Tasks platform will support only AI-related tasks moving forward.
  2. Taskers exceeded our quality expectations. Average quality across all task sets exceeded 90%, outperforming a parallel campaign we ran on Amazon Mechanical Turk (MTurk) by nearly 3x.
  3. The introduction of badges and a leaderboard highly motivated users. When rewards were updated to pay out 0 $NEAR as a signal for taskers to withdraw their reward balances, 30% of taskers continued working the remaining tasks to earn badges and climb the leaderboard rankings. Several hundred tasks were completed at consistent levels of quality.
  4. Almost 90% of taskers felt they earned a meaningful amount of $NEAR for the work they performed.
  5. There is demand for NEAR Tasks. Our Task Supply Partners generated near-immediate (and overwhelming) interest from their audiences to participate in our Alpha test. We cut the test short because we had gathered the necessary data much faster than expected.

What did Testers do?

The testing environment was highly structured. The NEAR Tasks feed consisted of seven distinct Task Sets, and taskers were instructed to complete at least ten tasks from across these sets, achieve a baseline level of quality, and withdraw the $NEAR they earned.

AI Tasks

Image Labeling

Image Labeling task set from the NEAR Tasks Alpha Test. Reward values are for demo purposes only and are not reflective of actual payouts.

  • Task Type: AI Focus — Data Annotation
  • Taskers were presented with an image to label in detail. They also marked specific points within the image to add a finer layer of detail to the higher-level descriptions (one possible submission format is sketched below).
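
For illustration, here is a minimal sketch of what a submitted annotation for this task type could look like as a data structure. The field names (taskId, imageUrl, points, and so on) are assumptions made for this example, not the actual NEAR Tasks schema.

```typescript
// Hypothetical shape of an image-labeling submission. Field names are
// illustrative assumptions, not the actual NEAR Tasks schema.
interface PointAnnotation {
  x: number; // pixel coordinates of the marked point
  y: number;
  label: string; // fine-grained label for that point
}

interface ImageLabelingSubmission {
  taskId: string;
  imageUrl: string;
  description: string; // higher-level description of the whole image
  points: PointAnnotation[]; // added layer of point-level detail
}

const submission: ImageLabelingSubmission = {
  taskId: "task-001",
  imageUrl: "https://example.com/street-scene.jpg",
  description: "A busy street with pedestrians and two parked cars.",
  points: [
    { x: 120, y: 340, label: "pedestrian crossing the street" },
    { x: 480, y: 310, label: "parked red sedan" },
  ],
};
```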

Identifying Customer Sentiment

Customer Sentiment Analysis task set from the NEAR Tasks Alpha Test. Reward values are for demo purposes only and are not reflective of actual payouts.

  • Task Type: AI Focus — Categorization and Human Feedback
  • Taskers were presented with a quote from a customer service interaction and were instructed to determine whether the quote was Positive or Negative and to explain why (one possible submission format is sketched below).
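
As with the image-labeling example above, here is a minimal sketch of what a submission for this task type could look like. The type and field names are illustrative assumptions, not the actual NEAR Tasks schema.

```typescript
// Hypothetical shape of a sentiment-analysis submission. Names are
// illustrative assumptions, not the actual NEAR Tasks schema.
type Sentiment = "Positive" | "Negative";

interface SentimentSubmission {
  taskId: string;
  quote: string; // customer-service quote shown to the tasker
  sentiment: Sentiment; // the tasker's categorization
  rationale: string; // free-text explanation of the choice
}

const submission: SentimentSubmission = {
  taskId: "task-042",
  quote: "The agent resolved my issue in under five minutes!",
  sentiment: "Positive",
  rationale: "The customer praises how quickly the issue was resolved.",
};
```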

Non-AI Tasks

Non-AI tasks featured collaborations with:

  • QSTN Us: Taskers promoted brand awareness by reading an article and tweeting their perspectives.
  • NEAR Digital Collective (NDC): Taskers engaged with the NDC’s i-am-human verification process, a stepping stone in NDC’s governance pursuits for the NEAR Protocol.
  • NEAR Horizon: Taskers registered profiles on the NEAR Horizon Hub on near.org, identifying themselves within categories: Founder, Contributor, or Backer. These profiles are intended to support the startup environment building within the NEAR ecosystem.

Partner task sets from the NEAR Tasks Alpha Test. Reward values are for demo purposes only and are not reflective of actual payouts.

Other non-AI tasks included promoting NEAR Tasks on Twitter and offering feedback on the product itself.

Key Results

  • An impressive total of over 3,000 tasks were successfully completed. Nearly half of the testers completed at least ten tasks, and the top five participants each completed more than 500.
  • Across all task types, testers maintained a quality score of over 90% (one common way such a score can be computed is sketched after this list).
  • 25% of our testers earned all the available badges and were rewarded with a bonus in $NEAR. We're proud to report a total payout of around 1,000 $NEAR for this test.
  • Our partners were satisfied with the task results and showed interest in future collaborations.
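
The post does not specify how quality was scored. A common approach for annotation work is agreement with "gold" tasks whose correct answers are known in advance; the sketch below assumes that method purely for illustration.

```typescript
// Scoring quality as agreement with "gold" tasks whose correct answers
// are known in advance. This method is an assumption for illustration;
// the post does not say how NEAR Tasks computed its quality scores.
interface GoldCheck {
  submitted: string; // the tasker's answer
  expected: string; // the known-correct answer
}

// Percentage of gold checks the tasker answered correctly.
function qualityScore(checks: GoldCheck[]): number {
  if (checks.length === 0) return 0;
  const correct = checks.filter((c) => c.submitted === c.expected).length;
  return (correct / checks.length) * 100;
}

const checks: GoldCheck[] = [
  { submitted: "Positive", expected: "Positive" },
  { submitted: "Negative", expected: "Negative" },
  { submitted: "Positive", expected: "Negative" },
];
console.log(qualityScore(checks)); // prints 66.66... for this toy sample
```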

The Path Ahead

Join our 30,000-strong waitlist to become a tasker, or engage with us as an AI Task Provider. As we prepare for our Beta test later this year, those at the top of the list will get priority access: connecting directly with our team, influencing the future of NEAR Tasks, and earning $NEAR in the process.

If you are an AI task provider and want to learn more about NEAR Tasks, please fill out this form and we will follow up as soon as we can!

Thank you to the 122 Taskers who helped us run our Alpha Test for NEAR Tasks! This group completed over 3,000 tasks and provided feedback that will influence our path forward. We are grateful for your time and are excited to welcome you back in our next wave of testing!

Also, a huge thank you to our NEAR Ecosystem Partners: QSTN Us, NEAR Digital Collective, and NEAR Horizon. Having tasks tailored to their needs brought variety to the experience and enabled us to learn about partner experiences and expectations. They also helped us validate several internal hypotheses around these types of tasks.

Currently, the team is heads-down processing the results of our Alpha to deliver the next iteration of NEAR Tasks. We're excited to welcome even more of you into our Beta experience later this year!

-The Satori Team (soon to be the NEAR Tasks Team)
