Why you can’t game the Lunyr system
Let’s address a common question that’s central to the success of Lunyr.
“How do you prevent people from gaming the system?”
Understandably, many worry that advertisers, political groups, large companies, and even Godzilla will seek to control their public image by manipulating the Lunyr platform. While no good protection exists against an attacker with unlimited resources, we can mitigate the efforts of all other attackers. The basic idea is that even if every user is trying to manipulate the information, they aren’t all trying to manipulate it in the same way, so we can play them against each other and they will cancel each other out.
We model an attacker as a well-funded entity that wishes to bypass peer-review in order to misinform readers. If you’ve spent any time around the internet, you’re probably familiar with the notion of a paid shill or a sock puppet. That’s what we are talking about here. There are other attack scenarios, but we will address this one first.
To prevent such attacks, we make the cost of an attack increase with the size of the attack. In total, we have five layers of defense.
Defense Layer 1: Contribute before you review
Contributors don’t get to review other submitted material until their own contribution gets past peer review. This means that if somebody attempts to perform a Sybil attack, every account they create will need to
1) Convince their peer reviewers that they are human
2) Convince their peer reviewers that they have something meaningful to contribute
3) Pay a small amount of ether to post a transaction
These tasks are difficult for bots, but straightforward for humans to do. It just takes time and money.
Defense Layer 2: Randomness
Some of you might know about stack smashing. It’s a technique of exploiting software by overwriting the return address on a stack frame in order to redirect the flow of computation into malicious code. One technique to mitigate this is called Address Space Layout Randomization. This isn’t a silver bullet (no such thing), but it definitely makes it harder to hack things.
I bring this up because it’s a good analogy to what we do with peer-reviewer selection. When somebody submits a contribution, we measure it against the contributions of all the other peers and select a set of ~100 candidates whose contributions are most similar to the one being reviewed. From there, we randomly select the 5 peer reviewers. There are “100 choose 5” or 75287520 possible ways to choose 5 peer reviewers from a pool of 100, so if an attacker manages to get 10 accounts into that pool, the probability of the attacker getting all 5 reviewer spots is “10 choose 5” divided by “100 choose 5”, which comes out to 252 / 75287520, or about 0.000003347. An attacker would have to control 88 of the 100 closest peers in order to have a greater than 50% chance of having total control of the review process for that piece of content.
An attacker might be able to do some damage by controlling, say, only 2 of the reviewers. It would take just 32 attackers in the pool to reach a 50% probability of having at least 2 attackers in review. If we increase the size of the pool, the number of attackers required for this threshold increases roughly linearly, so for a pool of 1000, it would take 314 attackers in the pool to have a 50% chance of controlling at least 2 of the peer reviewers. Pretty sweet! Check out the script where I got these numbers here.
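You can reproduce these numbers with a few lines of Python. This is just a sketch of the calculation (not the linked script itself), treating reviewer selection as a hypergeometric draw:

```python
from math import comb

POOL = 100       # candidate reviewers closest to the submission
CHOSEN = 5       # reviewers actually selected

def p_total_control(a, pool=POOL, chosen=CHOSEN):
    """Probability that an attacker with `a` accounts in the pool
    captures ALL of the reviewer slots."""
    return comb(a, chosen) / comb(pool, chosen)

def p_at_least(k, a, pool=POOL, chosen=CHOSEN):
    """Probability of landing at least `k` attacker accounts among
    the chosen reviewers (hypergeometric tail)."""
    total = comb(pool, chosen)
    return sum(comb(a, i) * comb(pool - a, chosen - i)
               for i in range(k, chosen + 1)) / total

print(comb(POOL, CHOSEN))      # 75287520 ways to pick 5 of 100
print(p_total_control(10))     # ~0.000003347
print(p_at_least(2, 32))       # ~0.515 -- 32 attackers cross 50%
print(p_at_least(2, 31))       # ~0.495 -- 31 do not
```

Running it confirms the thresholds quoted above: 88 attackers in a pool of 100 are needed for a better-than-even shot at total control, and 32 for a better-than-even shot at 2 reviewer slots.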
Defense Layer 3: Breadth Control
The above analysis assumes that all subsets of reviewers are equally likely, and we all know the only uniform distributions are the dead ones. Can we do better by skewing the probability distribution against the attackers? It is a well-studied fact that people are not great at writing differently when they try to maintain multiple identities. So suppose we introduce a parameter called, say, breadth, and require that the distance between each pair of reviewer-vectors chosen from the pool be above our breadth parameter. This forces attackers to create accounts that are simultaneously experts in the given topic and in other topics, making their vectors diverge. By raising the breadth parameter, we can make this requirement as strict as we want, until the pool no longer satisfies the constraint, at which point we can simply increase the pool size.
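As a rough sketch of how a breadth constraint could work (the vector representation, distance metric, and rejection-sampling approach here are all assumptions for illustration, not the production design):

```python
import random
from itertools import combinations

def distance(u, v):
    """Euclidean distance between two relevance vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def satisfies_breadth(vectors, breadth):
    """True if every pair of vectors is at least `breadth` apart."""
    return all(distance(u, v) >= breadth
               for u, v in combinations(vectors, 2))

def pick_reviewers(pool, breadth, k=5, max_tries=10000):
    """Randomly sample k reviewers whose vectors are pairwise at
    least `breadth` apart; returns None if the pool can't satisfy
    the constraint (the remedy is then to grow the pool)."""
    for _ in range(max_tries):
        sample = random.sample(pool, k)
        if satisfies_breadth([vec for _, vec in sample], breadth):
            return [name for name, _ in sample]
    return None
```

Sybil accounts that "talk the same" end up with nearly identical vectors, so any sample containing two of them fails the pairwise check and gets rejected.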
This is probably a good idea to do regardless, to avoid having an echo-chamber-confirmation-bias effect. Having a reasonable breadth parameter will make it likely that the content is reviewed by people from various backgrounds, encouraging connectivity in the Lunyr mind-sphere.
Defense Layer 4: Pay Not to Sybil
We give a slight economic incentive for users to post all their contributions from one account by making the reward function slightly convex.
CBN = quality * log( 1 + quality ), with log base 2 in the examples below
This means that, for example, if one user has one account with a total quality score of 10, then that account will get
10 * log(1 + 10) = 34.59431618637297 CBN
If that user had split that account up into 10 accounts each with total quality score of 1, then the score would be
10 * (1 * log( 1 + 1 )) = 10 CBN
This is a pretty big difference, and we can tune it by taking the log in different bases. A necessary side effect of this convexity is that your 5th quality point will pay out more than your 4th one. This introduces something of a seniority effect, but it only lasts until the end of the reward period, so it’s not a very strong effect.
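The arithmetic above is easy to check. Note that the quoted figure 34.59431618637297 only comes out if the log is taken base 2; that base is inferred from the example, not stated as final:

```python
from math import log2

def cbn(quality):
    """Convex reward: quality * log2(1 + quality)."""
    return quality * log2(1 + quality)

one_account = cbn(10)        # ~34.594 CBN for one account of quality 10
ten_accounts = 10 * cbn(1)   # 10.0 CBN for ten accounts of quality 1
print(one_account, ten_accounts)
```

Splitting the same total quality across Sybil accounts costs the attacker roughly 70% of the reward, which is exactly the incentive this layer relies on.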
In general, we want to avoid the early-adopters-take-all patriarchal approach to the reward structure. However, for certain things like reputation, it really helps to have a long memory. For this reason, the payout has a short memory (your CBN are only good for one reward period, and then they are forgotten), and the peer review structure has a long memory (your contributions are remembered longer for the purposes of eligibility and relevance). Your HNR will last for a year or until you spend them on dispute and resolution. The point of this defense layer is that attackers will have to give up LUN in order to attack, in addition to making the tokens less valuable by attacking.
Another similar effect is that putting all of your submissions in one account makes your relevance vectors more accurate, so that you are selected to review topics that are more relevant to you. This is a minor incentive.
Defense Layer 5: Require Frequent Submissions
We require that contributors submit content if they would like to be peer reviewers, since you can only peer review if you have contributed something accepted as valuable. So for example, for every one submission you make, you may not peer review more than six submissions. This adds maintenance costs to anybody using a bunch of sock-puppets to increase their reviewing ability.
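A minimal sketch of that quota, assuming a simple per-contributor counter (the class and field names are hypothetical, not Lunyr's actual data model):

```python
class Contributor:
    """Tracks the 6-reviews-per-accepted-submission quota."""

    REVIEWS_PER_SUBMISSION = 6  # assumed ratio from the post

    def __init__(self):
        self.accepted_submissions = 0
        self.reviews_done = 0

    def may_review(self):
        # Each accepted submission buys up to 6 review slots.
        quota = self.accepted_submissions * self.REVIEWS_PER_SUBMISSION
        return self.reviews_done < quota
```

A sock-puppet account that never gets anything through peer review has a quota of zero, so it can never review at all.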
Other questions about security:
How do we know Lunyr can’t game the system?
While we wait for the appropriate technologies that will enable full decentralization, Lunyr is a hybrid system, with part of the code running on the Ethereum blockchain and part of it running off-chain. Because of this, some of the code will have a higher degree of transparency than other parts. Anything that touches machine learning will be too gas intensive to put on-chain, so one might argue that Lunyr could rig the model to pick certain peers for certain topics. This will be difficult for Lunyr to do without getting caught because all of the code, models, and peer history will be open source, including the random seeds. (User information will not have personal identities in it.) We will put the models and data on IPFS, so that anybody can download them, run them against our public data and come up with the same answers. In case someone worries that we searched through random seeds until we found a good one, we can base it off of the hash of the latest Ethereum block header. Thus gaming the seed would require us to be able to influence miners in some way that is likely unprofitable for them, since they normally choose the first nonce that is valid.
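To make the seed derivation concrete, here is a hedged sketch of how a verifiable seed could be built from a block hash. The hashing scheme and function names are assumptions for illustration; the block hash below is a placeholder, not real chain data:

```python
import hashlib
import random

def seed_from_block(block_hash_hex, submission_id):
    """Derive a deterministic seed from an Ethereum block hash plus
    a per-submission identifier, so anyone can recompute it."""
    raw = bytes.fromhex(block_hash_hex.removeprefix("0x"))
    digest = hashlib.sha256(raw + submission_id.encode()).digest()
    return int.from_bytes(digest, "big")

def draw_reviewers(pool, block_hash_hex, submission_id, k=5):
    """Reproducible reviewer draw: same inputs, same reviewers."""
    rng = random.Random(seed_from_block(block_hash_hex, submission_id))
    return rng.sample(pool, k)
```

Because every input is public, any third party can re-run the draw and verify that the published reviewer set matches, which is the transparency property the paragraph above is after.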
What if people try to sell their accounts?
CBN and HNR are not transferable. While CBN being transferable would not make much of a difference, since the LUN they translate to are transferable, we refrain from making CBN transferable for the sake of simplicity. HNR are not transferable to discourage selling HNR, although in practice it’s very difficult to prevent people from selling something they own when there’s a demand for it. The important thing is that HNR does not allow you to edit content, so accumulating HNR illicitly or otherwise does not give someone the ability to bypass the peer review system.
What if users re-submit the same content multiple times?
We can limit the rate of user submissions based on the similarity to and quality of previous submissions. Higher quality, different submissions are prioritized in peer review relative to lower quality or rejected submissions. We will also have a limit on the number of un-reviewed submissions per user at a given time, to mitigate denial-of-service attacks.
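The un-reviewed-submission cap is the simplest of these to sketch. The cap value and queue structure here are assumptions, just to show the shape of the check:

```python
from collections import defaultdict

MAX_PENDING = 5  # assumed cap on un-reviewed submissions per user

pending = defaultdict(list)  # user -> queue of un-reviewed submissions

def try_submit(user, submission):
    """Reject new submissions once a user's un-reviewed queue is
    full, mitigating denial-of-service by mass submission."""
    if len(pending[user]) >= MAX_PENDING:
        return False
    pending[user].append(submission)
    return True
```

The similarity- and quality-based prioritization would sit on top of this, ordering the review queue rather than gating entry to it.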
How will we know if a Sybil attack has occurred?
The dispute and resolution process will have an option to report bias, which might indicate a Sybil attack. This way the community can be self monitoring. If Sybil attacks prove to be a persistent problem, we can adjust various parameters in our defense layers to increase the difficulty.