Spam Filter Idea for Nostr

Tim Pastoor
4 min read · Feb 20, 2023


This article has been retracted. What’s described in the original article already exists. :)

“That option already exists. The options are out there. github.com/fiatjaf/relayer and github.com/hoytech/strfry are fully customizeable and make it very easy to implement your suggested policy. Aside from that we just need clients that give users more flexibility in how to pick their relays. I wrote briefly about this here: fiatjaf.com/3f106d31.html (although in my mind this was a pretty basic and obvious “vision” I wrote it anyway).”

— fiatjaf

Original article:

Nostr, the rapidly growing decentralized social media network, is starting to face a growing spam problem as it scales. Spammers are taking advantage of its open and decentralized structure to flood the network with irrelevant content, which may turn malicious at some point. Spam not only clutters up users’ feeds, but also detracts from the overall user experience, causing frustration and perhaps even driving some users to leave the network altogether.

Source: Martinus Scriblerus

There are four ways I know of that may help solve this issue:

1.) Content moderation by the team behind individual client apps;
2.) Content moderation by owners of individual relays;
3.) Paid relays;
4.) Graph filtering.

The first two solutions, content moderation by the teams behind individual client apps or by relay owners, are problematic due to the centralization of control they entail. Nostr was built to be a decentralized, user-driven social network, and solutions that rely on centralized control fly in the face of that approach.

Another possibility is the introduction of paid relays. In this model, users would have to pay a periodic subscription or small fee to access a relay and the broader Nostr network. While this would provide some deterrent against spamming, its monetary cost could also discourage the kinds of broad participation that make Nostr so attractive in the first place.

One solution in line with Nostr’s core philosophy is the use of graph filtering. Instead of relays storing all publicly received and broadcasted messages, a server could maintain a graph of its users, and store only messages from those users and those they follow, or other (trusted) paid servers.
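To make the idea concrete, here is a minimal sketch of what such a relay-side policy could look like. This is purely illustrative: the function names and data structures are my own assumptions, not an actual Nostr relay API (on a real relay, follow lists would be derived from kind-3 contact-list events).

```python
# Hypothetical relay-side graph filter: store an event only if its
# author is one of the relay's own users, or followed by one of them.

def build_allowed_set(local_users: set[str], follows: dict[str, set[str]]) -> set[str]:
    """Pubkeys whose events the relay agrees to store: the relay's own
    users plus everyone those users follow."""
    allowed = set(local_users)
    for user in local_users:
        allowed |= follows.get(user, set())
    return allowed

def should_store(event_pubkey: str, allowed: set[str]) -> bool:
    """Policy check applied to each incoming event's author pubkey."""
    return event_pubkey in allowed

# Example: the relay serves "alice", who follows "bob".
follows = {"alice": {"bob"}}
allowed = build_allowed_set({"alice"}, follows)
print(should_store("bob", allowed))      # True: alice follows bob
print(should_store("spammer", allowed))  # False: unknown pubkey is rejected
```

The key property is that the relay's storage policy is derived entirely from its own users' stated preferences, so no central moderator has to decide what counts as spam.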

Having relays filter by their own graphs would make it more difficult for spammers to carry out Sybil attacks. In centralized networks, spammers can easily create many fake accounts (for free, or at the cost of an email alias) and manipulate the network to their advantage. However, in a network consisting of many individual overlapping graphs, where people only listen to those they follow (or even friends of friends: the accounts their follows follow), spammers would have an increasingly hard time creating fake accounts that manipulate multiple graphs simultaneously, making it much harder for them to carry out a successful Sybil attack (read: spread any kind of irrelevant information or disinformation).
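The friends-of-friends variant can be sketched as a bounded breadth-first expansion of the follow graph. Again, this is a hypothetical illustration with assumed names: the point is that a Sybil key followed by no one inside the graph never enters the allowed set, no matter how many keys the spammer generates.

```python
# Sketch: expand the set of accepted pubkeys up to `hops` steps along
# follow edges. hops=1 is "people my users follow"; hops=2 adds
# friends of friends.

def allowed_within_hops(roots: set[str], follows: dict[str, set[str]], hops: int) -> set[str]:
    """Breadth-first expansion of the follow graph from `roots`."""
    allowed = set(roots)
    frontier = set(roots)
    for _ in range(hops):
        next_frontier: set[str] = set()
        for user in frontier:
            next_frontier |= follows.get(user, set())
        next_frontier -= allowed          # only genuinely new pubkeys
        allowed |= next_frontier
        frontier = next_frontier
    return allowed

# alice follows bob; bob follows carol. A spammer follows nobody and
# is followed by nobody, so no hop count ever lets them in.
follows = {"alice": {"bob"}, "bob": {"carol"}}
one_hop = allowed_within_hops({"alice"}, follows, hops=1)   # alice, bob
two_hops = allowed_within_hops({"alice"}, follows, hops=2)  # alice, bob, carol
print("spammer" in two_hops)  # False
```

To breach a graph like this, a spammer has to convince a real, already-trusted account to follow them, which is exactly the kind of per-target cost that makes mass spamming uneconomical.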

Additionally, this approach would make spamming less economically feasible, as spammers rely on the reach and visibility of their content to drive revenue. By limiting the reach of their spam, spammers would be less likely to carry out these attacks, as they would not see the same return on investment. This, in turn, would disincentivize spammers and further preserve the integrity of the network.

This should also increase efficiency and scalability. Metcalfe’s law holds that a network’s value grows roughly with the square of its nodes, and with overlapping follow graphs the network’s Sybil resistance scales in a similar way, making Nostr a more robust social network with every user signing up, contrary to centralized networks that become more prone to Sybil attacks as they grow. Just think of all the content moderation required by Facebook or Twitter to moderate billions of messages each day. Then think of how many spammers you or your friends actually have in your contact lists …

Of course, relays could still offer global timelines, and only relays that opt in to graph filtering would decline to store more information than strictly specified by their users. It could also be mixed with paying a subscription or fee to the relay, or even to its users, to have your messages received. There are many options, but not paying should remain a viable alternative if Nostr wants to compete with services that do offer a free option.

The reason I wanted to get this idea out there is because I believe it will be one of the best ways to ensure Nostr remains a trusted and user-friendly social media network for years to come. By implementing a graph filter approach, Nostr could ensure that only high-quality content reaches users, without sacrificing the key principles of decentralization and user control that make it so compelling in the first place.

Note: There may be spam-mitigation techniques existing within Nostr relays or client apps currently that I’m unaware of. If any of what I’ve written above is wrong, please do correct me. Also, feel free to let me know if you have any questions. I’m happy to help where I can and this short read was meant as constructive feedback in the hope it helps someone, somehow.


Rants about Bitcoin, P2P Identity & Reputation, and Intermediaries