Breaking radicalization with novelty

What is the central affordance of YouTube? Naturally, this differs between groups of users. However, the platform itself is predisposed to certain types of interaction, whether by early design decisions (e.g. being video-based) or by running the service as a business that wants to keep people coming back. (The same holds for services like Netflix, Medium, or anything else that relies on a recommendation engine.)

I posit (though it certainly isn't a hot take) that the central engine of YouTube is its never-ending, personalized recommendation of content to its users. The natural question in response is: what does it mean for the content to be personalized?

Usually, this means that the recommended content is similar in some manner to the content already consumed. This operates under the assumption that if you consumed item X, you would like to consume more items similar to X.

The affordance, when phrased this way, should sound eerily familiar to anyone with a cursory knowledge of machine learning: it sounds like the k-nearest-neighbors algorithm with a (binary) classifier on top. And one can certainly imagine that very early incarnations of YouTube's and Netflix's algorithms functioned like this in a prototype phase. Famously, Netflix offered a one-million-dollar prize (paid out in 2009) to researchers who could come up with a better recommendation algorithm for it to use.
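For illustration, here is a minimal sketch of what such a similarity-first recommender might look like. The feature vectors, distance measure, and function names are all assumptions made for the example, not a description of anything YouTube or Netflix actually ships:

```python
import numpy as np

def recommend_similar(watched: np.ndarray, catalog: np.ndarray, k: int = 5) -> np.ndarray:
    """Toy k-nearest-neighbors-style recommender.

    watched: (n_watched, d) feature vectors for videos the user has consumed.
    catalog: (n_catalog, d) feature vectors for candidate videos.
    Returns indices of the k catalog videos closest to the user's history,
    i.e. "more items similar to X".
    """
    taste = watched.mean(axis=0)                     # crude user profile
    dists = np.linalg.norm(catalog - taste, axis=1)  # distance from profile to each candidate
    return np.argsort(dists)[:k]                     # the k most similar candidates

# Hypothetical usage with made-up 8-dimensional "video features".
rng = np.random.default_rng(0)
history = rng.random((20, 8))
candidates = rng.random((500, 8))
print(recommend_similar(history, candidates))
```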

From a normal consumer’s point of view, this is desirable. I can enjoy the content I consume and at the same time get offered more content in that vein without needing to arduously search for it or rely on recommendations from people who know my tastes.

However, even as these recommendation algorithms have iterated and improved over the years, they have still proved vulnerable to unwanted outside influence. For example, white nationalists have influenced YouTube's algorithm with self-referential loops (and other tools) such that "[extreme right] YouTube video[s] are likely to recommend further ER content, leading to immersion in an ideological bubble in just a few short clicks" [1][2], or simply by letting videos run on auto-play (which hands control of what plays next directly to the recommendation algorithm).

This hijacking of YouTube's algorithm follows directly from the affordance of recommending content similar to what has previously been consumed. As such, the next natural question is: how might YouTube change if we changed that affordance? And in what manner might we change the recommendation process?

There exists an appealing option outside of typical recommendation-engine work: the Novelty Search algorithm [3]. This algorithm abandons the desire to optimize toward a goal (e.g. time spent on YouTube) and instead simply looks for solutions (e.g. videos) that are sufficiently different from what the algorithm has already explored. Naturally, such a drastic change to the core of YouTube would break the site and the very reason many people go there; to be clear, it would be a terrible business decision. But it would also break one of the primary methods of radicalization used by white nationalists.
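To make the contrast concrete, here is a minimal sketch of a novelty-style scorer in the spirit of Lehman and Stanley's algorithm [3]: instead of ranking candidates by closeness to the watch history, it ranks them by how far they sit from it (their average distance to their nearest neighbors in that history). Again, the vector representation, distance measure, and names are assumptions for the example, not YouTube's actual pipeline:

```python
import numpy as np

def novelty_scores(candidates: np.ndarray, archive: np.ndarray, k: int = 15) -> np.ndarray:
    """Novelty-search-style sparseness score.

    A candidate's novelty is its mean distance to its k nearest neighbors
    in the archive of already-watched videos; a high score means
    "unlike anything this user has seen so far".
    """
    # Pairwise distances from every candidate to every archived item.
    dists = np.linalg.norm(candidates[:, None, :] - archive[None, :, :], axis=2)
    k = min(k, archive.shape[0])
    nearest = np.sort(dists, axis=1)[:, :k]
    return nearest.mean(axis=1)

def recommend_novel(watched: np.ndarray, catalog: np.ndarray, n_recs: int = 5) -> np.ndarray:
    """Recommend the candidates that are least like the watch history."""
    scores = novelty_scores(catalog, watched)
    return np.argsort(scores)[::-1][:n_recs]  # highest novelty first
```

Swapping recommend_similar for recommend_novel in the earlier sketch would be the entire change: same inputs, same interface, but the ranking is inverted from "closest to your history" to "farthest from it".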

What might YouTube look like under this change? None of the observable systems would need to change (layout, auto-play, and so on). However, the videos that get recommended would, by design, lose their connection to what has previously been watched. As such, the image below would change from its current recommendations to new ones.

Below is the same image as above, but with new recommendations. These were chosen at random, by taking a word from a random word generator and picking the first video it surfaced in search:

[Image: the same watch page as above, now showing the randomly chosen recommendations]