Attack Vectors in P2P Reputation Systems
TL;DR
P2P reputation systems:
- Are slow to propagate awareness of bad actors
- Leak private information that can be used against both individuals and the network as a whole
- Are often open source, which makes attack evolution roughly 10x faster
- Place high reputation in the hands of fallible individuals (same goes for any social network, but so much worse when the network is being leveraged to detect phishing/scam sites)
- Can’t distinguish which side is truthful whenever mixed scores (both good and bad) exist for a given actor
1. Slow Propagation Attack
P2P reputation systems, like any social network, form “local clusters”. Information concerning bad actors propagates quickly between nodes within a local cluster due to the high number of node-to-node connections, but it is slow to jump between clusters, and the time to propagate across the whole network is unbounded.
In this attack, once awareness of a bad actor has penetrated a given local cluster to the point that the whole cluster is effectively insulated, the attacker jumps ahead of the “propagation front” and targets a different cluster with no direct connections to the original cluster. This extends the effective lifetime of the bad actor.
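The cluster dynamic above can be sketched with a toy gossip model: two densely connected clusters joined by a single bridge edge. The topology and the one-round-per-hop gossip rule are illustrative assumptions, not a model of any real network.

```python
# Toy gossip model: two fully connected clusters joined by one bridge edge.
# Topology and gossip rules are hypothetical, for illustration only.
from collections import defaultdict

def build_network():
    cluster_a = list(range(0, 6))    # nodes 0-5
    cluster_b = list(range(6, 12))   # nodes 6-11
    graph = defaultdict(set)
    for cluster in (cluster_a, cluster_b):
        for i in cluster:
            for j in cluster:
                if i != j:
                    graph[i].add(j)  # fully connected within a cluster
    graph[5].add(6)                  # single bridge between clusters
    graph[6].add(5)
    return graph

def rounds_to_reach(graph, start, target):
    """Count gossip rounds until `target` has heard a warning from `start`."""
    informed = {start}
    rounds = 0
    while target not in informed:
        rounds += 1
        informed |= {n for i in informed for n in graph[i]}
    return rounds

graph = build_network()
print(rounds_to_reach(graph, 0, 5))    # same cluster: 1 round
print(rounds_to_reach(graph, 0, 11))   # other cluster: 3 rounds (via the bridge)
```

Even in this tiny example, a warning reaches the whole origin cluster in one round but takes three to cross into the far side of the other cluster; with more clusters and sparser bridges, the gap widens, which is exactly the window the attacker exploits.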
2. Harvesting Infiltrator Attack
A P2P network relies on broadcasting reputations to a semi-trusted local cluster.
In this attack, a passive spy infiltrates a local cluster and listens/queries the reputation of other nodes, harvesting data about the activity and interests of nodes on the network. This data can be sold to advertisers.
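A minimal sketch of the harvesting step, assuming the spy can observe plaintext reputation queries within its cluster (the message shape and node names are hypothetical):

```python
# A passive spy aggregates observed reputation queries into per-node
# interest profiles. Message format and node names are made up.
from collections import Counter, defaultdict

class HarvestingSpy:
    def __init__(self):
        self.profiles = defaultdict(Counter)

    def observe(self, querier, domain):
        # Each reputation query leaks which domain a node is about to visit.
        self.profiles[querier][domain] += 1

    def top_interests(self, querier, n=3):
        return [d for d, _ in self.profiles[querier].most_common(n)]

spy = HarvestingSpy()
for querier, domain in [
    ("node_a", "shoes.example"), ("node_a", "shoes.example"),
    ("node_a", "bank.example"), ("node_b", "crypto.example"),
]:
    spy.observe(querier, domain)

print(spy.top_interests("node_a"))  # a per-node interest profile, ready to sell
```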
3. Targeting Infiltrator Attack
All reputation networks leverage the ability of a few “discerning” individuals to identify bad actors and thus protect the rest of the network.
In this attack, a passive spy infiltrates a local cluster and listens/queries the reputation of other nodes, collecting data about which nodes are most likely to detect a phishing domain (based on their activity and blacklists). This allows the attacker to sidestep “discerning” individuals and avoid detection for a longer period of time.
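One way the spy could rank targets, sketched under the assumption that it can see which node first flagged each domain and which domains later landed on a shared blacklist (all names and data are invented):

```python
# Rank observed nodes by how often they were first to flag a domain that
# was later confirmed bad. All data here is illustrative.
from collections import Counter

def rank_discerning(first_flagger_by_domain, confirmed_bad):
    """first_flagger_by_domain: domain -> node that flagged it first."""
    hits = Counter(
        node for domain, node in first_flagger_by_domain.items()
        if domain in confirmed_bad
    )
    return [node for node, _ in hits.most_common()]

first_flags = {
    "phish1.example": "node_c",
    "phish2.example": "node_c",
    "phish3.example": "node_d",
    "benign.example": "node_e",
}
confirmed_bad = {"phish1.example", "phish2.example", "phish3.example"}

# node_c is the most "discerning"; the attacker avoids exposing new
# phishing domains to it for as long as possible.
print(rank_discerning(first_flags, confirmed_bad))
```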
4. Simulation Attack
Many P2P reputation networks, especially decentralized networks, are open source.
In this attack, a developer creates an entire network of nodes on their local machine, testing and evaluating the effectiveness of various strategies with extremely short iteration times.
The attacker also uses the open source rule system to make informed design choices, rather than relying on educated guesswork.
The attacker can take this approach further and unit test their strategy, allowing them to respond quickly to any protocol changes.
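The unit-test step might look like the following sketch. The reputation rule here is a stand-in for whatever rule the open source protocol publishes, not any real system, and the trust threshold is invented:

```python
# Sketch of the simulation attack: the attacker reimplements the open
# source reputation rule locally and unit-tests a Sybil strategy against it.
def reputation(ratings):
    """Stand-in rule: average rating, clamped to [0, 1]."""
    if not ratings:
        return 0.0
    return max(0.0, min(1.0, sum(ratings) / len(ratings)))

def sybil_boost(num_sybils):
    """Strategy under test: num_sybils fake identities each rate 1.0."""
    return [1.0] * num_sybils

def test_strategy_clears_trust_threshold():
    # If the protocol changes the rule, this test fails and the attacker
    # immediately knows the strategy needs rework.
    assert reputation(sybil_boost(10)) >= 0.8

test_strategy_clears_trust_threshold()
print("strategy still works")
```

Rerunning a suite like this against each new protocol release is what makes attack evolution so fast: a rule change that silently breaks a strategy is caught in seconds, not discovered in the field.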
5. Positive Reputation Attack
Once reputation is gained in a P2P reputation network, that reputation can be leveraged to influence others and gain access to various services/resources. Given that this reputation is visible to a passive spy (see above), this places a target on the back of high-reputation individuals.
In this attack, a high-reputation identity is either purchased or stolen, and used to influence a local cluster. This may include promoting a phishing or scam site, or an app/site/service with embedded spyware/malware.
Any P2P reputation network that weights ratings by the reputation of the rater compounds this attack: it may take time before sufficient “true” ratings have accumulated to cancel out the fraudulent positive rating and protect the local cluster.
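The arithmetic of that delay can be made concrete. Assuming a simple reputation-weighted average (the weights below are illustrative, not from any real protocol), a single stolen high-weight endorsement takes many honest low-weight reports to cancel:

```python
# How long a stolen high-reputation identity props up a scam site under
# reputation-weighted averaging. The weight of 50 is an assumption.
def weighted_score(ratings):
    """ratings: list of (rating, rater_weight); rating is +1 or -1."""
    total_weight = sum(w for _, w in ratings)
    return sum(r * w for r, w in ratings) / total_weight

# One stolen identity with weight 50 endorses the scam site (+1).
ratings = [(+1, 50)]
honest_reports = 0
# Honest raters (weight 1 each) report the site as bad (-1) until the
# aggregate score finally goes negative.
while weighted_score(ratings) >= 0:
    ratings.append((-1, 1))
    honest_reports += 1

print(honest_reports)  # honest reports needed to outweigh one fake rating
```

Here 51 weight-1 reports are needed to drag the score below zero; every victim in that window is the cost of weighting by rater reputation.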
6. Negative Reputation Attack
A P2P network that attempts to protect against positive reputation attacks may look to place more emphasis on negative ratings; however, this opens the door to a different kind of attack.
In this attack, an incumbent for a given service establishes presence in a number of target local clusters and issues bad ratings against a newly launched competitor in an attempt to prevent them from doing business.
Because “length of reputation” cannot be relied on for new services, this attack is effectively indistinguishable from a positive reputation attack: given mixed ratings on a new actor, the network cannot tell whether the positive ratings are fake or the negative ones are.
Any attempt to tweak the rules of the game will be quickly overcome by a simulation attack.
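The indistinguishability claim can be illustrated directly: in both scenarios below the ground truth differs but the observable rating record is identical, so any rule that sees only the ratings must return the same verdict for both. The scenarios and values are invented for illustration:

```python
# Mixed ratings on a new service. In scenario A the negatives are fake
# (an incumbent smearing a legitimate newcomer); in scenario B the
# positives are fake (boosting a scam). Ground truth differs; the
# observable record does not.
def observable(fake, honest):
    """The network only ever sees the combined, unlabeled ratings."""
    return sorted(fake + honest)

scenario_a = observable(fake=[-1, -1, -1], honest=[+1, +1])  # negative attack
scenario_b = observable(fake=[+1, +1], honest=[-1, -1, -1])  # positive attack

print(scenario_a == scenario_b)  # True: identical observables
```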