Thanks for the informative article.
Even though each lookup is constant time, O(1), filtering the articles is still O(n), where n is the number of candidate articles. That candidate set would have to be a severely diminished subset of all articles for the pass to be performant or feasible.
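For concreteness, here’s a minimal sketch of the kind of filtering I mean (the `filter_unread` name and the article shape are my own assumptions, not from the article):

```python
def filter_unread(candidates, read_ids):
    """Keep only articles the user hasn't read.

    candidates: list of article dicts (hypothetical shape with an 'id' key)
    read_ids:   set of already-read ids, so each membership test is O(1)
    """
    # One O(1) set lookup per candidate, so the whole pass is O(n)
    # in len(candidates), regardless of how many articles exist overall.
    return [a for a in candidates if a["id"] not in read_ids]

articles = [
    {"id": 1, "title": "intro"},
    {"id": 2, "title": "tradeoffs"},
    {"id": 3, "title": "scaling"},
]
unread = filter_unread(articles, read_ids={2})
```

So the set buys you cheap lookups, but only shrinking `candidates` upstream changes the O(n) cost of the pass itself.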
Like you said, it’s all about tradeoffs. One question could be: would we rather have the already-read articles filtered out before they even get to this point? I’d be interested in how you guys are computing your relevance scores for recommendations :)