Proof of Human Engagement on Decentralized Networks
We present an approach to proving human engagement on decentralized networks such as blockchains. Our approach differentiates real human users from computer bots with high probability. In our framework, computer users occasionally need to solve CAPTCHA-like tasks to prove that they are real humans. Our protocol is attack-resistant and prevents computer bots from fetching answers posted by other users during the answer-collection period. The consensual answer for a task is defined as the stake-weighted majority vote. Our experimental simulation on the Steem blockchain shows that the proof-of-human-engagement consensus can be reached with probability beyond 99% even when 15% of the stake-weighted users are computer bots.
Computer bots are ubiquitous on the Internet and can launch malicious attacks or manipulate data on online systems. It is desirable for these systems to distinguish real users from computer bots. For instance, when we estimate the number of users who have watched a video, we want to count real human users rather than computer bots inflating the numbers. For a user-generated content platform, distinguishing real users from computer bots can also help us better estimate the value of user-created content.
In this article, we explore a decentralized autonomous approach for the proof of human engagement on decentralized systems. Our method is capable of differentiating computer bots from human users without going through a centralized authority. This approach also solves the answer-stealing problem in collecting answers on a blockchain: if a computer bot fetches the answers given by other users, then it can choose the most popular answer which is likely to be the correct one. Meanwhile, it can also serve as a marketplace for collecting data for AI tasks.
The proof-of-human-engagement model we propose is based on the assumption that there exist hard AI problems which are easy for humans but hard for AI programs to solve. The most popular method for defending online systems is CAPTCHA (von Ahn et al., 2003) or Google’s reCAPTCHA (Google, 2018). CAPTCHA is a type of challenge-response test used in computing to determine whether or not the user is a human. It is a cryptographic protocol based on hard AI problems. Google’s reCAPTCHA is an improved protocol of CAPTCHA that additionally assists the digitization of text, annotation of images, and building of machine learning datasets. Elson et al. proposed Asirra, a CAPTCHA that asks users to identify the cats in a set of 12 photographs of both cats and dogs (Elson et al., 2007). In 2007, computers had little chance (less than 1/54,000) of solving this CAPTCHA. Today, however, deep learning based models have demonstrated performance comparable to that of average human users in image recognition (He et al., 2015), and Vicarious has announced that its AI programs break existing CAPTCHAs (Vicarious, 2013). There is no way to prove that there exist AI-hard problems that no AI program can solve perpetually. Therefore, our proof-of-human-engagement model may not work once AI programs can solve all human intelligence tasks.
Currently, many AI researchers are willing to pay to collect labeled data for AI-hard problems via crowdsourcing. Amazon Mechanical Turk (Amazon, 2018) is such a marketplace for AI researchers to collect large-scale data from workers all over the world. We are inspired by this platform and suggest a protocol that distinguishes human users from computer bots and, at the same time, serves as a marketplace for tasks that require human intelligence.
The proof-of-human-engagement model involves multiple entities working together. We will discuss their roles in the model and how they interact with each other. Similar to the Steem blockchain (Steem, 2017), our model assumes that no one can create unlimited user accounts, because creating an account is not free.
Requesters. Requesters propose human intelligent tasks by paying some amount of reward and deposit. When proposing a task, the requester needs to specify the number of users who will participate in the task (at least 20 users). The priority of a task is defined as the ratio of the total reward to the number of participants. When all the answers have been collected from the users, the requester needs to unveil the consensual answer within 7 days to get the deposit back (explained in the Attack-Resistance paragraph).
Human Intelligent Tasks. Human intelligent tasks are multiple-choice questions which are easy for humans to solve but hard for AI programs. We assume that the tasks are multiple-choice questions with 4 to 9 choices. Obtaining massive numbers of answers to these tasks can also help build large-scale datasets for challenging AI tasks. There is a huge demand for such a crowdsourcing marketplace: Amazon Mechanical Turk has become a popular platform for AI researchers to collect data.
Task-to-user Distribution. A task with the highest priority will be chosen and distributed to online users. A computer user is considered online if it has just triggered a transaction on the blockchain. The distribution continues until the minimum number of answers has been collected, or stops after 10,000 transactions have elapsed since the task was selected. To maintain decentralization of the consensual answer, no single answer can be weighted more than ε = 10% of all the answers. Let S_k be the stake of answer k and suppose that we are selecting the k-th user; S_k should be chosen to satisfy

S_k / (Σ_{i=1}^{k} S_i + (n − k)·S̄) ≤ ε,    (1)

where n is the minimum number of answers to collect and S̄ is the expected stake of a remaining participant. The intuition is that the expected weight of S_k should be no greater than ε.
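The stake-cap check described above can be sketched as follows. Treating each of the n − k remaining slots as carrying the average stake S̄ is this sketch's reading of the expected-weight intuition; the function names are illustrative, not part of the protocol.

```python
def max_allowed_stake(collected_stakes, n, avg_stake, eps=0.10):
    """Upper bound on the stake S_k of the next (k-th) participant so that
    its expected weight does not exceed eps. `collected_stakes` holds
    S_1..S_{k-1}; each of the n-k remaining slots is assumed to carry
    avg_stake (an assumption of this sketch)."""
    k = len(collected_stakes) + 1
    remaining = (n - k) * avg_stake
    # Solve S_k / (sum(S_1..S_{k-1}) + S_k + remaining) <= eps for S_k:
    # S_k * (1 - eps) <= eps * (prefix + remaining).
    denom_without_sk = sum(collected_stakes) + remaining
    return eps * denom_without_sk / (1.0 - eps)

def can_invite(user_stake, collected_stakes, n, avg_stake, eps=0.10):
    """Whether an online user's stake fits under the eps weight cap."""
    return user_stake <= max_allowed_stake(collected_stakes, n, avg_stake, eps)
```

For example, with n = 20, unit average stake, and no answers collected yet, the first participant may hold at most 0.1 · 19 / 0.9 ≈ 2.11 units of stake.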
Consensual answer. After all the answers are collected, the answer with the most weighted stake is chosen as the consensual answer for the task. The consensual answer is regarded as valid if it accounts for more than 50% of the stake-weighted answers.
Users. Occasionally, a user may receive a task to complete. After the task is distributed to the user, the user can choose to participate in the task to earn a reward. If the user participates, an answer is chosen and broadcast to the blockchain. The weight of the answer equals the stake the user holds when the question is selected. If the user chooses the consensual answer, the user is rewarded; otherwise the user is penalized.
Attack-Resistance. A blockchain is an open public ledger that everyone can access without privileges. If a computer bot knows the answers given by other users, it can easily cheat by choosing the popular answer (in this case, the bot is likely to be regarded as a human). To prevent this attack, the choices for a task are permuted and sent to users directly by the requester. When proposing a task, the requester first chooses a random number called the salt and broadcasts its hash along with the task. When a user participates in the task, the requester computes a permutation by hashing the salt and the user ID. The requester then sends the permuted choices to the user, and the user broadcasts his or her answer to the blockchain within 10 minutes. When enough answers are collected, the requester needs to put the salt on the blockchain within 7 days to unveil the underlying answers. To ensure the requester publishes the salt, a deposit (10% of the reward) is collected. If the requester does not broadcast its salt, the deposit is not returned to the requester. If the requester broadcasts the salt but the consensual answer is not reached, only 50% of the deposit is returned to the requester. Figure 1 shows the overall flow chart of the proof-of-human-engagement model. To further deter attacks, each computer user is allowed at most one transaction every three seconds.
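The commit-reveal and per-user permutation mechanism can be sketched as follows. SHA-256 and the function names are assumptions of this sketch; the article does not fix a particular hash function.

```python
import hashlib
import random

def commit_salt(salt: bytes) -> str:
    """Requester broadcasts H(salt) with the task; the salt stays secret."""
    return hashlib.sha256(salt).hexdigest()

def permute_choices(salt: bytes, user_id: str, choices: list) -> list:
    """Derive a permutation from H(salt || user_id). Each user sees the
    choices in a different order, so raw answer indices on the chain
    reveal nothing until the salt is published."""
    seed = hashlib.sha256(salt + user_id.encode()).digest()
    rng = random.Random(seed)
    permuted = choices[:]
    rng.shuffle(permuted)
    return permuted

def reveal_and_align(salt: bytes, commitment: str, user_id: str,
                     answer_index: int, choices: list):
    """After the requester reveals the salt, verify it against the
    commitment and map a user's on-chain answer index back to the
    canonical choice."""
    if hashlib.sha256(salt).hexdigest() != commitment:
        raise ValueError("salt does not match the broadcast commitment")
    return permute_choices(salt, user_id, choices)[answer_index]
```

Because the permutation is deterministic in (salt, user ID), the requester can always re-derive each user's view during the reveal phase without storing anything beyond the salt.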
Incentive Analysis. When a requester submits a task, it is expected to be a problem that current AI programs cannot solve easily. The collected answers can thus be used to improve the performance of existing AI models. If the task can be easily solved by state-of-the-art AI programs, the collected answers will not help improve the AI system, and requesters have little reason to pay for answers an AI program could produce. From the users' viewpoint, computer users who provide incorrect answers on two consecutive tasks will be regarded as computer bots and will be excluded from any reward allocation process for a lengthy period. Users are thus encouraged to provide correct answers.
Task Distribution Algorithm
- A human intelligent task with the highest priority is picked from the mempool. The requester who proposed the task generates a salt (a random number) and submits its hash to the blockchain. The salt is used to determine the permutation of the choices for the task.
- The task is assigned to online users. When a user performs a transaction on the blockchain, the user is regarded as online and has a chance to be chosen to complete the task. An online user can be invited to solve a task if constraint 1 is satisfied.
- The requester sends the permuted choices to the user in an order determined by the hash of the salt and the user ID. Note that different users receive choices in different orders for the same task.
- Users broadcast their answers to the blockchain.
- Our task-distribution algorithm guarantees that no single answer carries more than 10% of the weighted stake. The task distribution stops after 10,000 transactions have elapsed since the task was selected, in case collecting answers takes too long.
- If enough answers for the tasks are successfully collected, the answer validation process will start.
- Go back to step 1.
- Within 7 days after all the answers for a task are collected, the corresponding requester broadcasts the secretly kept salt so that answers from different users can be aligned.
- If more than 50% of the stake-weighted users provided the same answer, this answer will be defined as the consensual answer and all other choices will be defined as incorrect answers.
- If no more than 50% of the stake-weighted users provide the same answer, the task will be discarded.
- Computer users who provide the consensual answer will be recognized as human users. In contrast, users who provide incorrect answers for two consecutive tasks will be recognized as computer bots for a lengthy frozen period. After the frozen period, they will be recognized as human users again once they provide the consensual answers for two consecutive tasks.
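The validation steps above (stake-weighted majority with a 50% threshold) can be sketched as a small helper; the function name and the (choice, stake) pair representation are assumptions of this sketch.

```python
def consensual_answer(answers):
    """answers: list of (choice, stake) pairs. Returns the stake-weighted
    majority choice if its weight exceeds 50% of the total stake, or None
    if no consensual answer is reached (the task is then discarded)."""
    total = sum(stake for _, stake in answers)
    weights = {}
    for choice, stake in answers:
        weights[choice] = weights.get(choice, 0) + stake
    best = max(weights, key=weights.get)
    return best if weights[best] > 0.5 * total else None
```

For instance, with answers [("A", 3), ("A", 3), ("B", 2)], choice "A" carries 6 of 8 stake units and becomes the consensual answer; with a 50/50 stake split, no choice exceeds the threshold and the task is discarded.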
We analyze the performance of the proof-of-human-engagement algorithm by simulation on the Steem blockchain with about 700K accounts. We exclude the accounts with zero stake. We assume that a real human user has a probability of 90% of choosing the correct answer, and a computer bot has a probability of 1/nChoice, where nChoice, the number of choices, is between 4 and 9. We assume we need to collect 20 answers for each task.
We run the simulation for 10,000 tasks. Our algorithm collects 20 answers for each task with no answer weighted more than 10%. Further, we examine the performance of our algorithm with different percentages of computer bots and different numbers of choices. Table 1 summarizes the stake-weighted percentage of users choosing the consensual answer and the probability that the consensual answer is found. On average, 20 answers are collected per task after 226 transactions are posted on the blockchain when nChoice = 9.
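A simplified version of this simulation can be sketched as follows. Uniform unit stakes and an exact stake-weighted bot fraction per answer slot are simplifying assumptions of this sketch, not of the article's experiment on real Steem stake data.

```python
import random

def simulate_task(n_answers=20, bot_frac=0.15, n_choice=9,
                  p_human_correct=0.9, rng=random):
    """One task: each answer slot comes from a bot with probability
    bot_frac (stake-weighted); humans answer correctly with probability
    0.9, bots uniformly at random among n_choice options. Returns True
    if the correct answer wins a stake-weighted majority (unit stakes
    assumed)."""
    correct_votes = 0
    for _ in range(n_answers):
        if rng.random() < bot_frac:                    # computer bot
            correct = rng.random() < 1.0 / n_choice
        else:                                          # human user
            correct = rng.random() < p_human_correct
        correct_votes += correct
    return correct_votes > n_answers / 2

def consensus_rate(trials=10_000, **kw):
    """Fraction of simulated tasks where the correct consensual answer
    is reached."""
    rng = random.Random(0)  # fixed seed for reproducibility
    return sum(simulate_task(rng=rng, **kw) for _ in range(trials)) / trials
```

Under these assumptions the expected per-answer accuracy is 0.85 · 0.9 + 0.15/9 ≈ 0.78, so a 20-answer majority succeeds well over 99% of the time, consistent with the reported result.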
Under our assumptions, the computer bots will be identified as bots in the long run. When the consensual answer is reached, a computer bot has only the probability

(1/nChoice)^k

to pass k consecutive tests, and this probability vanishes as k grows, since lim_{k→∞} (1/nChoice)^k = 0.
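As a quick check of this decay, a one-line helper (illustrative, not part of the protocol):

```python
def bot_pass_probability(n_choice, k):
    """Probability that a bot guessing uniformly among n_choice options
    matches the consensual answer on k consecutive tasks."""
    return (1.0 / n_choice) ** k
```

For example, with nChoice = 9 a bot survives 50 consecutive tests with probability below 10^-40, so in the long run guessing bots are frozen out.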
Proof of human engagement on decentralized networks is an emerging but challenging task. In this article, we propose an algorithm that applies traditional challenge-response systems, such as reCAPTCHA, on decentralized networks. This algorithm is carefully designed so that a consensual answer is not revealed until the requester intends to reveal it. Furthermore, our algorithm can be integrated into a blockchain consensus protocol to identify and prevent botnet activities. Potentially, a marketplace similar to Amazon Mechanical Turk can be established on decentralized networks for AI researchers to collect useful labeled data.
Amazon. Amazon mechanical turk, 2018. URL https://www.mturk.com/.
Elson, Jeremy, Douceur, John R., Howell, Jon, and Saul, Jared. Asirra: a CAPTCHA that exploits interest-aligned manual image categorization. In ACM Conference on Computer and Communications Security, pp. 366–374. ACM, 2007.
Google. recaptcha: Easy on humans, hard on bots, 2018. URL https://www.google.com/recaptcha/intro/.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In ICCV, pp. 1026–1034. IEEE Computer Society, 2015.
Steem. An incentivized, blockchain-based, public content platform, 2017. URL https://steem.io/SteemWhitePaper.pdf.
Vicarious. Vicarious ai passes first Turing test: Captcha, 2013. URL https://www.vicarious.com/press/vicarious-ai-passes-first-turing-test-captcha/.
von Ahn, Luis, Blum, Manuel, Hopper, Nicholas J., and Langford, John. CAPTCHA: using hard AI problems for security. In EUROCRYPT, volume 2656 of Lecture Notes in Computer Science, pp. 294–311. Springer, 2003.