Like Cat Videos? YouTube Recommends: Ultraviolence.

The scary truth behind YouTube’s rabbit-hole algorithm that’s radicalizing nations.

Machine learning now controls what we see and how we think, and it is getting smarter. People are forming opinions from false narratives spewed out by automated sources that know what their targets want to hear and construct a false reality to appease them.

Individuals are now creating hyper-biased, propaganda-driven websites that link to other fake news sites, with YouTube at the center of the network. Many of these sites funnel audiences toward a new breed of website that lines up with their confirmation biases and ideology.

A 2017 article in The Times reported that ISIS used YouTube to entice new recruits:

“Islamic State has flooded YouTube with hundreds of violent recruitment videos since the terrorist attack in London last week in an apparent attempt to capitalise on the tragedy, The Times can reveal.”

Google, the owner of YouTube, failed to block the films, despite dozens being posted under obvious usernames such as “Islamic Caliphate” or “IS Agent”. Many are produced by the media wing of ISIS and show, in high definition, beheadings and other extreme violence, including by children.

It should be noted that YouTube later began redirecting viewers of some of these videos toward anti-terrorist content.

The internet has opened the door to information chaos, and malicious groups around the world are jumping on the bandwagon to manipulate your mind. You might consider yourself a critical thinker who disregards the outrageous and bizarre, but what about people with mental disorders? They fall prey to all manner of misinformation that makes the average psychological thriller seem tame. To them, it is all real, and YouTube is one of the worst offenders because of the visual nature of its content.

YouTube Preys on Children

YouTube launched in 2005, but when it was purchased by the tech giant Google, it took on a whole new face. The engineers Google hired built an algorithm to increase profits by keeping users glued to videos, recommending additional ones before the current video was even done. The goal was not to offer relevant videos but to keep you watching non-stop. The business model worked: the platform now draws around two billion users a month, and a majority of them are young.
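
To see why optimizing for watch time is not the same as optimizing for relevance, consider a minimal sketch of an engagement-maximizing ranker. This is purely illustrative, with hypothetical names and scores; YouTube's actual system is proprietary and vastly more complex.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    relevance: float             # how on-topic the video is (hypothetical score)
    predicted_watch_time: float  # estimated seconds this user will keep watching

def recommend(candidates: list[Candidate], top_n: int = 10) -> list[Candidate]:
    """Rank purely by predicted watch time; relevance is never consulted.

    This is the essence of an engagement-maximizing recommender: whatever
    keeps the viewer glued to the screen outranks what is merely on-topic.
    """
    return sorted(candidates, key=lambda c: c.predicted_watch_time, reverse=True)[:top_n]
```

Nothing in a ranking like this rewards accuracy or safety; the most captivating candidate wins, whatever its content happens to be.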

A 2018 Pew study found that 85% of teens in the United States use YouTube, making it the most popular online platform among the under-20 crowd. That might sound harmless enough when you are watching the antics of puppies, but it takes a darker turn when the topic is extremist or political content. YouTube is designed to place people in filter bubbles of the algorithm's making, with no way offered to escape. A kind of dopamine reinforcement attracts and holds kids' attention, while influencer accounts profit by selling advertising on their channels. Other elements designed to attract kids include alluring music that burns itself into the brain of anyone who listens. Baby Shark, anyone? Appealing to multiple senses is one of the surest ways to keep viewers on a video, and kids will replay them over and over again.

Every generation raises red flags over the behavior of the one that follows. We look back and laugh at the "dangers" of Elvis Presley swiveling his hips or the long hair of the Beatles. But YouTube offers a platform for anyone to film and upload, and it runs the gamut from eating laundry pods and drinking urine all the way to suicide. For the youngest children, some of these videos generate outright fear and trauma.

An article in Wired UK, “Children’s YouTube is still churning out blood, suicide, and cannibalism,” states:

“WIRED found videos containing violence against child characters, age-inappropriate sexualisation, Paw Patrol characters attempting suicide and Peppa Pig being tricked into eating bacon. These were discovered by following recommendations in YouTube’s sidebar or simply allowing children’s videos to autoplay, starting with legitimate content.”

“Recommendations are designed to optimize watch time, there is no reason that it shows content that is actually good for kids. It might sometimes, but if it does it is coincidence,” says former YouTube engineer Guillaume Chaslot, who founded AlgoTransparency, a project that aims to highlight and explain the impact of algorithms in determining what we see online. “Working at YouTube on recommendations, I felt I was the bad guy in Pinocchio: showing kids a colorful and fun world, but actually turning them into donkeys to maximize revenue.”

Algorithms That Encourage Extremism

In a New York Times article, sociologist Zeynep Tufekci described watching countless YouTube videos of Donald Trump, after which the site recommended videos “that featured white supremacist rants, Holocaust denials and other disturbing content.” YouTube becomes directly responsible for creating the kind of deviant rabbit hole that some might turn away from, but that others find themselves strangely drawn to watch.

Research group Data & Society released a scathing report, “Alternative Influence: Broadcasting the Reactionary Right on YouTube,” by researcher Rebecca Lewis:

“Alternative Influence presents data from approximately 65 political influencers across 81 channels to identify the ‘Alternative Influence Network (AIN)’: an alternative media system that adopts the techniques of brand influencers to build audiences and ‘sell’ them political ideology.

Alternative Influence offers insights into the connection between influence, amplification, monetization, and radicalization at a time when platform companies struggle to handle policies and standards for extremist influencers. The network of scholars, media pundits, and internet celebrities that Lewis identifies leverages YouTube to promote a range of political positions, from mainstream versions of libertarianism and conservatism, all the way to overt white nationalism.”

Alternative Influence goes on to say:

“YouTube monetizes influence for everyone, regardless of how harmful their belief systems are. The platform, and its parent company, have allowed racist, misogynist, and harassing content to remain online — and in many cases, to generate advertising revenue — as long as it does not explicitly include slurs. YouTube also profits directly from features like Super Chat, which often incentivizes ‘shocking’ content.”

White supremacists and KKK members are protected under free speech, draw tens of thousands of views and subscribers on YouTube, and are often recommended to viewers on the basis of largely unrelated videos.

Free Speech vs. Societal Harm

The landscape of freedom experienced on the net has a downside for those who suffer from mental illness. The danger with serious mental health conditions is that they cover a wide span of symptoms, and the temptation of YouTube can pull sufferers down a spiraling vortex of no return.

A small sampling of YouTube conspiracy videos

A case that brought this front and center was that of a 26-year-old man who killed his brother because voices told him his brother was a lizard. Buckey Wolfe exhibited early signs of mental illness, but it was his experience with YouTube videos that carried him deeper into his sickness. The YouTube algorithm kept offering Wolfe additional videos until he finally embraced far-right extremist movements such as QAnon and the Proud Boys. His delusional fantasies eventually solidified into the conviction that his brother was a lizard, and he killed him. In Wolfe's case, the YouTube algorithm was doing precisely what it was programmed to do, and the results were catastrophic.

The recommendation algorithm YouTube designed never took human emotion, reactions, or mental conditions into consideration.

Using AI, the organizations carrying out this massive brainwashing through YouTube scan the news and create false reports, and in many cases videos that incite violence.

Parent company Google has been called to task over YouTube radicalization, and while it has tightened its terms of service (TOS), the changes go only far enough to protect the organization itself. If a video receives complaints, YouTube will review and may remove it, but this relies entirely on users registering complaints. YouTube has also indicated that it will scale back automated moderation, leaving decisions about accountability and acceptability to humans alone. That alternative is not an answer either, as it puts those decisions in the hands of people who are not qualified to make them.

Worst-case scenarios may involve viewers with mental health difficulties, but the problem extends far beyond that. The volume of AI-generated fake news on YouTube was great enough that the fallout influenced the 2016 presidential election and the Brexit vote, and amplified false political voices.

Artificially Generated Content on Social Media

In his article “FakeTube: AI-Generated News on YouTube,” Jonathan Albright states:

“Beyond YouTube’s role in hosting videos through embeds on political websites, after reading a piece on AI…I thought I’d look more into why it increasingly feels YouTube, like Google, is being harnessed as a powerful agenda booster for certain political voices.”

Investigating the latest incarnation of fake news, called “A Tease,” Albright located 6,902 AI-generated YouTube videos that portrayed themselves as “news.”

“Curated, Republished, and AI-Narrated — Each “Tease” AI-generated video consists of a progressive “slideshow” of still images related to the title of the video, which originate from various websites, WordPress blogs, and content delivery networks across the internet. It’s notable that the last frame of these “Tease” videos seem to have a “copyright” disclaimer.”

“A computerized voice ‘reads’ out text from published news articles and other news-related content as the slideshow progresses. The narration is surprisingly accurate and coherent, except for the brief pauses during the transitions between the sources from which the video derives its ‘story.’”
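
The structure Albright describes, near-static slideshows with synthetic narration, is distinctive enough that even a crude heuristic can flag likely candidates. Below is a minimal, hypothetical detection sketch using OpenCV; the sampling interval and the threshold for calling a frame pair "static" are assumptions of mine, not values from Albright's analysis.

```python
import cv2  # OpenCV: pip install opencv-python
import numpy as np

def slideshow_score(video_path: str, sample_every: int = 30, threshold: float = 0.02) -> float:
    """Return the fraction of sampled frame pairs that are near-identical.

    Slideshow-style "Tease" videos hold each still image on screen for
    seconds at a time, so most sampled frame pairs barely change; real
    footage changes continuously. Both parameters are assumptions.
    """
    cap = cv2.VideoCapture(video_path)
    prev_gray, static_pairs, total_pairs, frame_idx = None, 0, 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % sample_every == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev_gray is not None:
                # Mean per-pixel difference between sampled frames, normalized to [0, 1]
                diff = float(np.mean(cv2.absdiff(gray, prev_gray))) / 255.0
                static_pairs += int(diff < threshold)
                total_pairs += 1
            prev_gray = gray
        frame_idx += 1
    cap.release()
    return static_pairs / total_pairs if total_pairs else 0.0
```

A score near 1.0 suggests slideshow-style content. A real countermeasure system, of the kind argued for below, would combine a signal like this with synthetic-voice detection and tracing of the narrated text back to its sources.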

We are looking at the next evolution of AI-generated, so-called “news” videos, which exploit YouTube's algorithms at a scale where no one is in charge of what any of us see. Albright is continuing to tally the videos and puts the running count at around 80,000 so far.

The time has come to educate ourselves about AI-driven influence over our minds. Just as YouTube engineered an algorithm to maximize the profitability of video viewing, we need education and AI countermeasure systems that are incentivized not by ad dollars but by the search for truth. Solutions must be brought forward that can cut through the lies, misinformation, and misrepresentation.

This Is Why We Fight,
Blackbird.AI 
www.blackbird.ai