Source illustration. Logo courtesy of Google.

For the YouTube Algorithm to Get Better, It Needs to Become Worse

It’s time to recognize YouTube’s algorithm as a failed experiment.

A. Khaled
Jul 21 · 8 min read

YouTube’s recommendation algorithm has been at the center of discussions surrounding online radicalization because of its much-loathed “rabbit-hole effect”: its tendency to push audiences ever deeper into a den of extremist content through the recommendation tab, even when the starting point was something completely benign, or even remarkably apolitical.

The algorithm was very obviously built to support YouTube’s aspiration to better capitalize on a business that used to cost as much money as it generated. The platform kept searching for avenues of growth, and one of them had to involve getting users to consume more content. The implementation of such measures hasn’t always been transparent, but YouTube marched forth and finally hit the ever-so-elusive goal of one billion hours of watch time per day that it had been chasing since 2012. YouTube now looked like a much more sensible business venture for Google, but its unconditional stride toward that goal started to show its wrinkles: YouTube hadn’t only found a way to make people watch more of its videos, it had also helped create a solid front of extremist content that played its recommendation algorithm like a damn fiddle.

At the heart of YouTube’s mission to drive more traffic toward its site is a sophisticated form of crowd-sourced psychoanalysis: data that tells the platform which approaches have been most successful at getting users to watch more content, so it can plan its recommendations accordingly. Essentially, the data YouTube was collecting on watching habits wasn’t only serving its most obvious use as an effective ad-targeting method; it was also being used to reverse-engineer users’ mental processes while watching videos, so their behaviors could be better catered to. If assigned to humans, this task would have taken an astronomically long time, and working off a single dataset while adapting it to YouTube’s constantly changing landscape would prove a nightmare for even the most skilled teams of data analysts. YouTube’s reliance on AI here wasn’t just a conscious choice — it was very much a necessity for growing the business, lest the company continue burning through cash on hosting while not making enough to sustain the operation.
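
To make the shape of that objective concrete, here is a minimal, purely illustrative sketch of a recommender that ranks candidate videos by nothing other than predicted watch time. It is not YouTube’s actual system; the Video fields, the scoring formula, and the recommend function are all assumptions made for the sake of the example.

```python
# Illustrative sketch only: a toy engagement-maximizing ranker, not YouTube's real system.
from dataclasses import dataclass


@dataclass
class Video:
    video_id: str
    avg_watch_minutes: float      # hypothetical signal: how long this video held past viewers
    similarity_to_history: float  # hypothetical signal: 0..1 closeness to the user's viewing habits


def predicted_watch_time(video: Video) -> float:
    # The only objective is engagement: content that resembles what the user
    # already watches, and that historically holds attention, scores highest.
    return video.avg_watch_minutes * video.similarity_to_history


def recommend(candidates: list[Video], k: int = 3) -> list[Video]:
    # Rank purely by predicted watch time. Nothing here asks whether a video
    # is accurate, borderline, or harmful; only whether it keeps the user watching.
    return sorted(candidates, key=predicted_watch_time, reverse=True)[:k]
```

Under an objective like this, a conspiracy video and a cooking tutorial are interchangeable: whichever holds attention longer wins the slot.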

But because the incentive for improving YouTube’s algorithm was trapping audiences within the platform, pondering its deeper moral implications became an afterthought. Google’s chief concern wasn’t making a morally conscious algorithm — it was to boost the algorithm’s capabilities and worry about its problematic dimensions later.

Unfortunately, since the algorithm’s machinations are veiled in secrecy, it’s very hard to make any definitive determination about its intended goal. However, ex-Googler Guillaume Chaslot — founder of the AlgoTransparency project — recounts a version of events from his time helping create the algorithm that is eerily similar to tech reporters’ best attempts to speculate on its nature.

Chaslot’s research through AlgoTransparency made some headway when it suggested, back in early 2018, that the pattern of recommendations for political videos during the 2016 American presidential election was heavily laced with conspiracy theories and blatant pro-conservative bias. When the Guardian approached YouTube to comment on the findings, the company brushed off the methodology as flawed, further claiming that “[they’re] attempting to shoehorn research, data, and their incorrect conclusions into a common narrative about the role of technology in last year’s election”.

YouTube is right about the latter part of that comment — when Chaslot revealed his findings, Russian interference in the 2016 American presidential election was still the hot talk of the day. Major platforms were under heavy scrutiny for their inability to properly enforce their policies, often favoring an inaccurate or falsified version of a story over an authoritative source. That’s not to say YouTube was necessarily favoring, say, a conservative read of a story over a liberal one — it was outright recommending “news content” that lived up to only the second half of that label. As the platform grew big enough to sustain an audience of news-seeking individuals, its responsibility for distributing a remotely accurate version of the news became significant. In the age of the internet, the split was often between a widely shared version of a story, automatically dubbed true, and a less popular one deemed, as a consequence, unworthy of trust.

Chaslot’s dedication to blowing the lid off the YouTube algorithm’s nefarious goals was unfazed. He came back a year later as a presenter at the DisinfoLab Conference in Brussels, and where his criticism once targeted a specific function of the YouTube algorithm — its role in misinformation — it had now morphed into an overall statement about YouTube’s moral role in moderating the content people have access to. Chaslot told The Next Web’s Már Másson Maack that “it isn’t inherently awful that YouTube uses AI to recommend video for you, because if the AI is well tuned it can help you get what you want. This would be amazing,” but he tempered that enthusiasm by adding that “the problem is that the AI isn’t built to help you get what you want — it’s built to get you addicted to YouTube. Recommendations were designed to waste your time”.

“Wasting time” shouldn’t be met here with a mere eyebrow raise. If the YouTube algorithm was designed to keep users glued to their screens, clicking one video after the next, then it must not have paid great consideration to what type of content users were actually consuming. This came into sharp focus when two blockbuster reports — one on YouTube’s radicalizing power, the other on its tendency to aid child predators by recommending suggestive content featuring minors — came out with concrete data on the algorithm’s ineffectual treatment of problematic content and its lack of awareness in suggesting the morally defunct. The allegations relating to pedophilia were grave enough to prompt an FTC investigation into the company, one the two parties then closed with a meager multimillion-dollar settlement.

Shortly before the FTC settlement was finalized, Chaslot published an op-ed for WIRED outlining the major issues at the core of YouTube’s algorithm. He remarked that suggestive content featuring minors was being proposed to YouTube users in perpetuity — since YouTube’s AI had detected a trend in those watching habits, it treated them like any other and sought to funnel an influx of new audiences toward them to maximize engagement. Working backwards from that finding, it’s very easy to conclude that YouTube’s AI hasn’t been calibrated to discriminate against content that violates its terms of service — quite the contrary, in fact. The mix of the platform’s propensity to suggest content that borders on its community guidelines, users not questioning the algorithm’s choices, and YouTube’s hands-off (human) approach to content moderation has created a space in which the weaponized ire of a very few can easily get any single video taken down, while the clear problems a great majority of users have with the platform are simply impossible to single out because of the sheer volume of what gets uploaded to YouTube every single day.

So, what’s the solution then? The FTC’s track record of keeping Big Tech’s power in check is quite simply abysmal. In mid-June, the agency fined Facebook $5 billion over privacy violations — a sum that is pocket change for the company, and that only accomplished raising Facebook’s market value by well more than the fined amount. The agency’s settlement with Google over YouTube’s pedophilia scandal wasn’t much brighter, and it raises the question of whether US law-enforcement agencies come anywhere close to making tech companies — chief among them Facebook and Google — tremble at their mention. Donald Trump’s recently organized social media summit included a list of attendees whose finest accomplishments are a direct result of the very criticism social media platforms are subjected to every passing minute of their existence. So if legislative will is lacking, and regulatory oversight is nigh-nonexistent, what is the escape? What will right the course of an already off-the-rails YouTube? The more immediate change will have to come from within, and it has to intentionally make the algorithm worse.

The algorithm is designed to prey upon users’ most primitive desire to consume content. We’re creatures bred by curiosity, born to sate it, and we were sold, through the promise of scientific inquiry, on the idea that more of our vices would be accommodated, without pondering what the consequences of an unconditional appeal to what we want could be.

Photo by NOAA on Unsplash

The threat of YouTube’s algorithm is akin to that of nuclear might at the height of the atomic age. We kept pushing and pushing, not realizing an arsenal of weapons of unparalleled destruction would soon become the leverage by which peace is maintained. Things reached such a dire state that “Mutually Assured Destruction” became synonymous with peace, not fright. YouTube is in such a position now — if it wants to continue to exist, it will have to do less of what its original mission intended it to accomplish. Just as nuclear energy has become one avenue for reducing fossil fuel use, YouTube could use its AI’s computational might to think consciously about what it puts before its users. It is imperative to acknowledge now that YouTube’s experiment in increasing user engagement has gone terribly awry, and that the focus should be on fixing it while the opportunity is still there.

The YouTube algorithm isn’t bad because it failed its mission — it is bad precisely because it is very good at it. It was instructed to drive more traffic toward the site, and it successfully did that. Now, its creators have to come up with a way to divert its attention away from harmful content and toward what once made YouTube great — content that adds to the user experience, with less fodder for extremists and harm-seekers to feed on.
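
What “diverting its attention” could look like, continuing the toy sketch from earlier, is keeping the watch-time prediction but subtracting a cost for content that a separate policy classifier flags as borderline. The penalty weight, the risk score, and the classifier itself are hypothetical; this is a sketch of the trade-off the article argues for, not a prescription.

```python
# Illustrative sketch only: trade raw watch time for safety, extending the toy ranker above.
def adjusted_score(predicted_minutes: float, borderline_risk: float,
                   penalty_weight: float = 10.0) -> float:
    # borderline_risk: assumed 0..1 output of a separate, hypothetical policy classifier.
    # Deliberately "worse" at pure engagement: the same watch-time prediction,
    # minus a cost for content that skirts the platform's guidelines.
    return predicted_minutes - penalty_weight * borderline_risk
```

By design, this ranker gives up some watch time whenever the classifier raises a flag; accepting that loss is precisely the point.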

The platform’s legacy has become that of a onetime safe haven full of cat videos and DIY content turned into the main destination for the young radical. It can be hard to draw a clear line between what YouTube should and shouldn’t recommend, but the effect of its decisions should be the easiest metric to follow — if your platform produces fewer terrorists, pedophiles, and tragedy-grifters, and less deep emotional pain from creative burnout, then you’ll have done your job successfully. Otherwise, it’s time to go back to the drawing board and try again and again, until the algorithm becomes wholly aware of the massive cost it has inflicted, and will continue to inflict, on humanity if its power stays unchecked.
