How the Gems Protocol Reduces Consensus by Redundancy

Rory O'Reilly
Nov 27, 2017 · 7 min read

In existing micro task marketplaces, verifying the accuracy of results from workers is difficult. The sheer volume of tasks that workers are assigned naturally introduces inaccuracy and can even induce cheating. As a result, requesters can’t accept workers’ answers at face value.

We’ve dubbed the current hack that band-aids this dilemma “consensus by redundancy.” Requesters pay multiple workers (sometimes 5–15x) to perform the same task and accept the majority response as the correct one. Even Amazon MTurk itself recommends that requesters assign the same task to multiple workers (as well as run local accuracy-estimation experiments) to validate accuracy. Amazon Mechanical Turk is not incentivized to reduce consensus by redundancy, as it takes a hefty fee out of total wages paid: the more money paid to workers, the more money Amazon makes. Essentially, MTurk shoves quality-control responsibility onto requesters, who have to pay more to retrieve accurate results, all while Amazon benefits.

Consensus by redundancy not only hurts requesters, but also workers. Because requesters’ budgets have to be spread across redundant copies of the same task, less unique work is done for the same total wage output, and individual workers lose out on a potential increase in payment.


It’s time to trash this economically inefficient hack and focus on a real solution for the micro task ecosystem. Reducing consensus by redundancy requires a verification system to ensure that each miner’s work is accurate.

And that’s why Gems brings you the Gems Protocol.

The Gems Protocol allows Gems to decrease costs while maintaining network accuracy by disincentivizing malicious actors and rewarding fair players.

When we first announced Gems, we outlined our vision for the Gems Protocol and noted that, like most blockchain projects, we would work with the community to build new extensions on top of the Gems Protocol; it’s only natural that projects evolve and grow over time. Since that announcement, we’re happy to say that prominent members of the community have contributed new and efficient additions to the Gems Protocol.

There are three parties in the Gems Protocol:

  • Miners: micro task workers
  • Verifiers: workers with a high Gems Trust Score who look through miner-completed tasks to verify validity. A verifier replaces many individual miners redoing a single task to verify accuracy
  • Requesters: those who want micro task work done

Gems Staking Mechanism:

Via the Gems Staking Mechanism, all parties — miners, requesters, and verifiers — stake tokens on the validity of their work and against the validity of others’ work. This disincentivizes cheating and incentivizes accuracy and honesty.

The staking method for each party is as follows:

  • Miners: Miners stake a token, or a fraction of a token, on a given task. This amount is defined by the variable Ms. If the designated number of verifiers agree the task was completed correctly, the miner is returned Ms + Mr, where Mr is the miner’s reward. If the verifiers agree the task was not completed correctly, the miner can either forfeit the stake or redo the task. If the miner successfully redoes the task, he or she is returned Ms + Mr - Vr, where Vr is the reward paid to the additional verifiers; this deduction is the penalty for the miner’s first mistake. (A minimal payout sketch follows this list.)
  • Verifiers: Verifiers stake a smaller amount of tokens, Vs, on their verification of a task. If other verifiers, or the requester, overturn the verification, the verifier is penalized Vs. If their work is not overturned, the verifier is returned Vs + Vr. Vr ≤ Mr for any given task.
  • Requesters: Requesters stake their total Mr + Vr so that they cannot act in bad faith and report successfully completed tasks as incorrect. If a requester denies that a task was completed correctly, that same task is given again to other miners/verifiers and the requester’s funds remain locked up, meaning there is no clear incentive for a requester to be a malicious actor.
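As a rough illustration of how these rules might settle a miner’s stake, here is a minimal sketch. The function name and structure are ours, not part of the protocol spec; it simply encodes the three outcomes described above.

```python
def settle_miner(ms: float, mr: float, vr: float,
                 verified_correct: bool, redo_successful: bool = False) -> float:
    """Illustrative payout for a miner under the staking rules above.

    ms: miner's stake, mr: miner's reward, vr: reward paid to the extra verifiers.
    verified_correct: the designated verifiers accepted the original work.
    redo_successful: if the work was rejected, the miner redid it and the redo passed.
    """
    if verified_correct:
        return ms + mr          # stake returned plus reward
    if redo_successful:
        return ms + mr - vr     # redo accepted; extra verifier cost deducted as the penalty
    return 0.0                  # work rejected and not redone: stake forfeited
```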

Gems Trust Mechanism:

The Gems Trust Score indicates the reliability of a particular individual. Completing tasks accurately and consistently increases a miner’s score. Those with high scores are eligible to earn extra money by working as verifiers — authenticating the work of others in the network and thus increasing the system’s overall accuracy. On the other hand, those with low scores will be removed from the network.

Each participant has a Gems Trust Score that is linked to his or her Ethereum address. To calculate this score, we use a confidence interval that incorporates both the proportion of successful task completions and the number of tasks completed.

For more information about the Trust Score, please see section 6 of the Gems White Paper.
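The exact formula is in the white paper. As a minimal sketch of one common confidence-interval approach, the Wilson score lower bound combines the observed success rate with the number of tasks completed; this is an illustration of the idea, not necessarily the formula Gems uses.

```python
import math

def trust_score(successes: int, total: int, z: float = 1.96) -> float:
    """Wilson score lower bound: a conservative estimate of a worker's true
    success rate given `successes` out of `total` completed tasks.
    z = 1.96 corresponds to a 95% confidence level."""
    if total == 0:
        return 0.0
    p = successes / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    margin = z * math.sqrt(p * (1 - p) / total + z * z / (4 * total * total))
    return (centre - margin) / denom

# A worker with 48/50 correct tasks outranks one with 5/5,
# because the larger sample supports a tighter lower bound.
print(round(trust_score(48, 50), 3), round(trust_score(5, 5), 3))
```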


A twist on the above is that verification doesn’t happen every single time, but probabilistically, in a way that mathematically disincentivizes malicious actors through steep penalties when their work is incorrect. We call this method the Gems Random Auditor Method (GRAM).

Gems Random Auditor Method (GRAM):

In a task marketplace with a group of requesters and workers, for each task completed, there is an x% chance that the task will be chosen to be verified.

If a worker’s solution is randomly selected for verification, that solution is sent to a group of verifiers, who then vote and come to a consensus on the solution’s accuracy.

If the solution is deemed incorrect, then that worker is penalized 100/x times his or her Mr.

This potential penalization disincentivizes bad actors and reinforces the accuracy of the Gems network. The math is simple to check: with a 5% audit chance, a cheater who is caught is penalized 20x the task’s reward (100/5 = 20), so being a malicious actor is not in your best interest.
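The expected-value argument behind this is easy to verify in a few lines (a sketch with illustrative names; Mr is the task reward as defined earlier):

```python
def expected_penalty(audit_pct: float, mr: float) -> float:
    """Expected loss for submitting bad work under GRAM.

    audit_pct: percent chance the task is randomly audited.
    mr: the reward for the task.
    The penalty when caught is (100 / audit_pct) * mr, so the expected
    penalty equals mr no matter how small the audit probability is.
    """
    audit_prob = audit_pct / 100.0
    penalty_if_caught = (100.0 / audit_pct) * mr
    return audit_prob * penalty_if_caught

print(expected_penalty(5, 1.0))   # 1.0 -> on average, cheating costs a full task reward
```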


This leaves us wondering…is there a way to have a verification system without in-protocol penalties and security deposits? Yes!

Gems Implied Reputation Method (GIRM):

In this method, we introduce a verification system sans in-protocol penalties and security deposits. The system is rooted in two equations, one regarding worker fees and one regarding the probability of task verification, that serve to align incentives in the marketplace.

For every task, each worker is charged a fee, which is relative to his or her past N completed tasks: f(N) = (5/6) / sqrt(1 + 0.05 * N). This fee decreases as more tasks are completed and thus is a proxy for trust.

Additionally, for every task, there is a probability that it will be sent for verification, also relative to the worker’s past N completed tasks: p(N) = f(N) / (5 * (1 - f(N))).

Here’s a breakdown of how this works out:

On the first task attempted (N = 0, since no tasks have been completed yet), the fee is 83% and the work is audited 100% of the time. For the second task there is an 81% fee and an 87% audit probability. For the tenth task, a 69% fee with roughly a 45% probability of audit; and for the 1,000th task, an 11.7% fee with a 2.6% chance of audit.
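Those numbers fall straight out of the two formulas; here is a small sketch that reproduces them (N counts past completed tasks, as above):

```python
import math

def fee(n: int) -> float:
    """Worker fee as a fraction of the task payment after n completed tasks."""
    return (5 / 6) / math.sqrt(1 + 0.05 * n)

def audit_probability(n: int) -> float:
    """Probability that the worker's next task is sent for verification."""
    f = fee(n)
    return f / (5 * (1 - f))

for n in (0, 1, 9, 999):
    print(n, round(fee(n), 3), round(audit_probability(n), 3))
# 0   0.833 1.0    -> first task: 83% fee, always audited
# 1   0.813 0.871
# 9   0.692 0.449  -> tenth task: ~69% fee, ~45% audit chance
# 999 0.117 0.026  -> 1,000th task
```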

The worker’s work will always be verified the first time, but the probability of verification decreases as more tasks are completed successfully. The system is designed so that the fee exactly pays for the expected cost of verification. For instance, if a given worker has completed 50 tasks (N = 50), f(50) = 0.445 and p(50) = 0.16. There is a 16% chance that the task will be verified, and each verification pays a panel of five verifiers, so on average 1.8 people need to be paid per task. The extra 0.8 is paid out of the worker’s fee, as 0.8 / 1.8 = 0.445.
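The budget balance at N = 50 is quick to check: the verifiers’ share of the average payout per task equals the worker’s fee. (The factor of five verifiers per audit is our reading of the 1.8 figure, not something stated explicitly.)

```python
import math

def fee(n: int) -> float:
    return (5 / 6) / math.sqrt(1 + 0.05 * n)

def audit_probability(n: int) -> float:
    return fee(n) / (5 * (1 - fee(n)))

n = 50
p = audit_probability(n)         # ~0.16
people_paid = 1 + 5 * p          # the worker, plus five verifiers 16% of the time: ~1.80
verifier_share = (people_paid - 1) / people_paid
print(round(fee(n), 3), round(verifier_share, 3))   # 0.445 0.445 -> the fee covers the verifiers' share
```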

If the worker performs badly, he or she not only loses the reward, but also has his or her N value reset to floor(N / 2). This disincentivizes workers from performing poorly, as their fees and odds of verification increase as a result. To check that this is a proper disincentive, we can tally the total penalty the miner suffers: the lost reward Mr plus the sum of f(k) for k from floor(N / 2) to N - 1, which represents the fees that must be paid again to climb back from floor(N / 2) completed tasks to N. This net penalty, multiplied by the probability of getting caught, always exceeds one task’s reward.
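That last claim can be checked numerically. Below is a quick sketch with the task reward Mr normalized to 1; for every N ≥ 1, the probability of getting caught times the total penalty (lost reward plus the fees paid again on the way back from floor(N/2) to N) comes out above 1.

```python
import math

def fee(n: int) -> float:
    return (5 / 6) / math.sqrt(1 + 0.05 * n)

def audit_probability(n: int) -> float:
    return fee(n) / (5 * (1 - fee(n)))

def expected_cheating_loss(n: int, mr: float = 1.0) -> float:
    """Expected cost of submitting bad work after n completed tasks:
    chance of being caught times (lost reward + fees to be paid again
    while climbing back from floor(n / 2) to n)."""
    refees = sum(fee(k) for k in range(n // 2, n))
    return audit_probability(n) * (mr + refees)

print(all(expected_cheating_loss(n) > 1 for n in range(1, 2001)))  # True
# (At n = 0 the work is always audited, so the expected loss is exactly the reward.)
```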


If anyone wants food for thought, there’s a further twist on the above: Perhaps verification doesn’t occur probabilistically, but only when the requester asks.

We can also introduce a mechanism in which the requester can commission a verifier after task completion. The requester pays $x to commission a verifier. If the verifier confirms that the task was completed well, the requester loses the $x. However, if the verifier confirms that the task was performed poorly, then the requester receives $2x back, and the worker’s N is reduced by 2x * sqrt(N). In both cases, $x ends up going to the verifier.


Gems would like to release its alpha, make an impact, and iterate along the way. The alpha will include a combination of GRAM and GIRM, and we will improve it over time. Ultimately, as described in the white paper, requesters will be able to decide how they want their work verified. The team is actively working on building the alpha, which we will announce soon; you won’t want to miss it!

The above methods are all possibilities for Gems, and we’d like to thank the community for their continual help in development and refinement. That being said, not every protocol addition described here will be implemented exactly as written, as there are ongoing development discussions.

Thanks for reading this far! Maybe you have a few ideas after reading? Perhaps you’d like to help contribute to Gems? Feel free to reach out.

