Watch Yourself: The Techlash Comes To YouTube

Another digital ad giant faces increasing public censure as content safety issues surface

Richard Yao
IPG Media Lab
7 min read · Apr 4, 2019


Move over, Facebook! There is a new subject of scrutiny and censure in town: YouTube.

Over the past month or so, a number of high-profile reports have exposed the toxic, inappropriate, and questionable video content that can exist and spread on the Google-owned site. From a Wired investigation in late February that shed light on the hundreds of thousands of sexually suggestive comments left on YouTube videos featuring children, to the exposé Bloomberg published on Tuesday alleging that YouTube executives willfully ignored toxic content (including harmful misinformation, conspiracy theories, and extremist material) and allowed it to spread for the sake of boosting user engagement, YouTube is now unmistakably following in Facebook’s footsteps as it watches the tide of public opinion turn. If this round of backlash holds up, how much will it hurt YouTube’s position as the №1 destination for digital video?

Old Problems, New Context

Ironically, all these revelations about the questionable content YouTube harbors aren’t exactly new. For years, pediatricians and advocacy groups have been raising concerns over child safety on YouTube. To its credit, the company has tried to make YouTube a safe space for young viewers, launching YouTube Kids, a dedicated app for children’s content, four years ago. But that didn’t really solve the problem: videos on YouTube Kids remain corruptible by bad actors, and, partly due to a lack of awareness, most kids end up on regular YouTube anyway.

Beyond children’s content, videos pushing harmful, toxic material such as anti-vaccination misinformation and extremist propaganda have long been hiding in plain sight on YouTube, all just one simple search or algorithmic recommendation away. A range of bad actors, from white supremacists to ISIS and other terrorist groups, have long used YouTube as a recruitment tool. Also, remember Pizzagate? That was over two years ago.

What’s new about these issues is that the larger environment has changed. The ongoing “techlash” continues to sour consumer sentiment toward tech companies, providing a new context that has enabled the media to build momentum around content safety issues that previously failed to break through to public consciousness. Following what felt like a year-long news barrage about Facebook’s reprehensible data privacy practices and its incompetence at curbing the harmful misinformation spreading on the News Feed and in private Groups, more and more consumers are starting to turn a suspicious eye toward other social content platforms as well. Although the media scrutiny and souring sentiment have yet to hurt Facebook’s bottom line in any substantial way, eroding consumer trust is costing the social network some valuable growth opportunities, such as expanding into the smart home space.

These new reports are also coming out at an interesting time. People are getting tired of hearing about yet another Facebook security issue, and Mark Zuckerberg has not only declared a new privacy-focused mission statement for Facebook but also written an op-ed in The Washington Post calling on governments worldwide to regulate Facebook through cooperation. As a result, the media narrative has come to a pause while the ball sits in the court of regulators and policy-makers, and the watchful eye of public scrutiny has moved on to YouTube and its many long-standing problems.

YouTube’s Original Sin

At the end of the day, YouTube’s problems stem from the same root as Facebook’s: a reckless pursuit of user engagement, largely shaped by their shared business model and exacerbated by their massive scale. As content aggregators that operate globally, their success depends on the zero distribution cost enabled by the internet, which unfortunately means that harmful, toxic content gets the same free, frictionless distribution as everything else. Following the horrific Christchurch mosque shooting last month, graphic footage of the attack live-streamed by the shooter was downloaded and re-uploaded online faster than tech companies could respond. Facebook alone says it removed 1.5 million videos within the first 24 hours of the attack. And those are just the clips it was able to catch.

Here, the blame is not completely on YouTube, although it could have done a better job of moderating the content uploaded to its platform. YouTube reportedly took down more than 58 million videos and 224 million comments during the third quarter of 2018, but it still faces a bigger challenge with material promoting hateful rhetoric and dangerous behavior, where the problematic content is harder for algorithms to discern.

What is on YouTube, though, is its robust, algorithm-driven recommendation engine, which is designed to maximize user engagement and will keep recommending similar videos once you have watched one. So if someone stumbles upon, say, an anti-vaccination video, YouTube’s algorithm, as it works today, will keep pushing misinformed content about vaccination to that user, shaping their views on the issue. As long as the incentives do not change, YouTube will likely keep a system that ends up spreading questionable content for the sake of virality and engagement.
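To make that feedback loop concrete, here is a minimal, hypothetical sketch of an engagement-driven recommender. Every field, weight, and function name below is invented for illustration; YouTube’s actual system is proprietary and far more complex.

```python
# A minimal, hypothetical sketch of an engagement-driven recommender.
# All fields and weights are invented; this is not YouTube's system.
from dataclasses import dataclass


@dataclass
class Video:
    video_id: str
    topic: str
    predicted_watch_time: float  # model's engagement estimate, in minutes


def recommend_next(watch_history: list[Video],
                   candidates: list[Video],
                   top_n: int = 5) -> list[Video]:
    """Rank candidates purely by predicted engagement, boosting topics
    the viewer has already watched."""
    seen_topics = {v.topic for v in watch_history}

    def score(video: Video) -> float:
        # Content similar to past views gets an (assumed) 2x boost.
        boost = 2.0 if video.topic in seen_topics else 1.0
        return video.predicted_watch_time * boost

    return sorted(candidates, key=score, reverse=True)[:top_n]
```

Nothing in this objective asks whether a topic is, say, medical misinformation; engagement is the only signal, so a single anti-vaccination view doubles the score of every similar candidate on the next pass.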

This systemic issue points to a worrying structural problem we outlined in our 2019 Outlook. In the section titled “Media Haves and Have-nots,” we laid out how today’s media consumption is bifurcating as the landscape fragments into paid subscriptions and ad-supported sites. As a result, high-quality content retreats behind paywalls, leaving the people outside the paywall gate, who are often underprivileged, under-educated, and low in media literacy, to battle an onslaught of fake news, misinformation, and toxic content spread on digital platforms whose algorithms are designed for engagement, not content quality.

Of course, this problem also reflects on the ad industry. Major advertisers, which flocked to digital platforms in recent years for their efficient audience targeting and global reach, have sometimes found their brands tarnished by appearing next to hate-group propaganda or violent crime. Broadcast and cable TV ad sales chiefs have recently taken to hammering YouTube and Facebook over this “brand safety” issue. If YouTube is fundamentally unable to address it effectively, would audiences and advertisers start to look elsewhere for a safer, calmer site on which to watch short videos?

Dismantling YouTube? Not So Fast

While YouTube has been taking some serious reputational hits, it is far too early to talk about dismantling YouTube as the destination for “snackable” short-form videos. Sure, short-form video consumption is already spreading to other platforms like TikTok and, more importantly, the quickly rising Stories format on Snapchat (which just introduced the ability for users to share their Stories to other apps like Tinder and Houseparty), Instagram, and other Facebook properties. But if Facebook’s trajectory is anything to go by, what is happening to YouTube in the court of public opinion won’t necessarily hurt its bottom line just yet.

That’s not to say there’s no competitor waiting in the wings to take YouTube’s place. IGTV, Instagram’s standalone video offshoot, could be an intriguing alternative, although as it scales it would inevitably face the same issues plaguing Facebook today unless it adopts a different approach to content recommendation. Twitch, the leading video game streaming site owned by Amazon, is also reportedly itching to branch out into non-gaming content, including scripted programming, although the Twitch brand is so intrinsically tied to gaming at this point that Amazon may be better off starting from scratch if it is serious about building a general-purpose UGC video platform.

Therefore, instead of YouTube being replaced by another video platform, a much likelier outcome of this brouhaha is some sort of regulation that forcefully stops YouTube’s mighty but flawed algorithms from indiscriminately helping all types of content spread globally. To get there, YouTube, like Facebook, will need to hire many more human moderators (Facebook currently employs roughly 7,500) and direct more resources toward improving its algorithms, both to detect toxic content and to build a more nuanced recommendation engine. Of course, this will be a costly operation, and therein lies the irony of regulation: in tech and media today, new rules often end up helping incumbents strengthen their lead and shut out smaller competitors, since incumbents are the ones with enough money and resources to comply.

For people to truly desert YouTube and embrace a new platform for their interstitial video consumption, tech firms will need to figure out a new way to review and regulate user-generated content. Perhaps a new video platform may opt to impose a tight verification process determining who can upload videos, which would no doubt limit supply but improve content quality. Or, as our Chief Brand Safety Officer Joshua Lowcock suggested, existing platforms should consider using technologies such as automatic content recognition (ACR) and geo-fencing to improve proactive risk management around content, especially with live streams.
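As a rough illustration of what ACR-style screening could look like at upload time, here is a minimal sketch. The exact-file hash is a deliberate simplification (real ACR systems compute perceptual fingerprints from decoded frames so that re-encoded or trimmed copies still match), and every name below is hypothetical.

```python
# A minimal sketch of ACR-style screening at upload time. The exact-file
# hash is a simplification of real perceptual fingerprinting, and all
# names here are hypothetical.
import hashlib

# Fingerprints of footage that moderators have already flagged (assumed
# to be populated elsewhere).
KNOWN_HARMFUL_FINGERPRINTS: set[str] = set()


def fingerprint(video_bytes: bytes) -> str:
    """Derive a fingerprint for uploaded content. A production system
    would hash decoded frames, not the raw file, so that re-encoded
    copies still match."""
    return hashlib.sha256(video_bytes).hexdigest()


def screen_upload(video_bytes: bytes) -> str:
    """Block known-bad re-uploads outright; everything else proceeds to
    the usual mix of algorithmic checks and human review."""
    if fingerprint(video_bytes) in KNOWN_HARMFUL_FINGERPRINTS:
        return "blocked"
    return "queued_for_review"
```

The appeal of proactive matching is speed: a re-upload of already-flagged footage is stopped before it is ever served, rather than hours later once viewer reports trickle in.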

Until there’s a viable solution or alternative, YouTube will remain the biggest online video platform, with a mature suite of ad products that help creators monetize their content and brands reach their audiences. Facebook’s bad press has yet to cut into its profits, and only time will tell how much this round of backlash against YouTube will hurt Google’s video ad revenue.
