Something is Rotten on YouTube
I recently read a bewildering post on Medium by James Bridle, “Something is wrong on the internet,” that casts a light on some dark corners of YouTube. (No, it’s not that. It’s far stranger.)
You can go read the post, but be prepared to be left feeling numb and perplexed. In short, people are posting weird-ass videos on YouTube that have the potential to cause all manner of unexpected harm: the videos target young children with familiar cartoon characters and sing-song melodies intermingled with creepy behaviors and outright violence (among other things). Once your kid stumbles upon this mess, YouTube’s recommendation algorithm and autoplay function may lock them into a long session of such videos. As one who suffered frequent, terrifying nightmares in my youth without such fodder, I fear what such viewing might do to kids in the same shoes, and more importantly, what lasting effects it might have on their psyches.
As Bridle puts it:
What we’re talking about is very young children, effectively from birth, being deliberately targeted with content which will traumatise and disturb them, via networks which are extremely vulnerable to exactly this form of abuse. It’s not about trolls, but about a kind of violence inherent in the combination of digital systems and capitalist incentives.
This, I think, is my point: The system is complicit in the abuse. (emphasis mine)
The bolded point above is a recurring one of late. What effects did Facebook and Twitter have on recent elections? Did outside actors use them to leverage existing biases and further divide countries in dire need of common ground? And are there any adults in the room, or are the inmates running the asylum?
More YouTube Tomfoolery
That gets me to the weird discovery I made today. A file I use to keep track of links for a writing project became corrupted, so I was wading through it, trying to salvage as much as I could. I was searching for a Guardian article on possible food-cost inflation after Brexit via its title, “Food prices would soar after no-deal Brexit, warns major dairy boss,” when I made my discovery. In Googling that text, I found the link I was looking for, but I also found a couple of YouTube links of the same name (appended at the end of this post).
From the skipping around that I did, it seems that both of these videos were generated automatically by pulling the text of the article (and some images) and then embedding it in a video via an algorithm. (It might have been done manually, but it certainly feels automated.)
Maybe I’ve just missed the boat on this, but I can’t recall ever seeing anything like it before. I don’t see it as a desirable replacement for reading the original article, but I don’t suspect that’s the aim. Rather, it seems another ham-fisted attempt at grabbing ad revenue via an automated system. Anyway, I posted the tweet below in the hopes of alerting The Guardian team in case they weren’t yet aware of it.
With that, the real concern needs to be with YouTube. What controls does it have in place to protect copyrighted materials? And what is it doing to improve its performance along these lines? More importantly, what is it doing to help protect children from inappropriate content?
If, as currently practiced, ad-revenue chasers can post videos to YouTube bearing the same title as the original article without issue (the article in question is seven days old; the videos, five and six days), what hope does YouTube have of protecting copyrights when such approaches become more sophisticated? Will it have to shift to greater human intervention, with measures to ensure that content providers are trusted parties? And will advertisers lose faith in the system?
I don’t know the answers to these questions. But I think they’re all worth asking, especially in our current environment in which these systems are increasingly being perverted and leveraged against us.
The last video in Bridle’s post, the one he called out as highly problematic, is just over a month old. It has received over 300k views. The account that posted it has nearly 400k subscribers. It’s obviously in violation of multiple copyrights, and yet it is still up. The video does carry an age-restricted content notice, but how does that notice function as a mechanism to keep children from viewing the content? And how can an account that posts age-restricted content get away with having a handle like “Animals For Kids”?
I’ll go back to Bridle’s key point:
This, I think, is my point: The system is complicit in the abuse.
Does anyone disagree? I certainly don’t.
Something is rotten on YouTube. Let’s demand that they do something about it.
Here are the YouTube renditions of the Guardian article.
Originally published at Chris Oestereich.