Can we finally trust Artificial Intelligence on Brand Safety matters?

Paul Chaumont
Published in Context Insights
7 min read · Jan 22, 2021

Brand Safety applied to video media is a topic we have discussed many times on this blog, and one that has become central to the digital advertising conversation. In 2019, Alphabet was already spending hundreds of millions of dollars a year on content review just to moderate about 300K terrorist videos per month on YouTube. A very high cost for modest results.

As time goes by and human moderation seems (only seems) to slowly fade away, one could think that Artificial Intelligence is now ready for a complete takeover. But is it truly? And most importantly, what does Brand Safety really cover? It has become a catch-all term that many companies thrive on without having the technology to actually deliver it.

Brand Safety, Brand Suitability or Brand Equity?

When addressing the matter of Brand Safety, people commonly use several related terms, each carrying a specific meaning and thus a specific service. Three should be considered in the context of Brand Image and content moderation: Brand Safety, Brand Suitability and Brand Equity.

Brand Equity refers to the value of a brand, as determined by consumer perception. It can be positive or negative and affects attributes of your brand such as perceived quality, credibility, or consideration.

Brand Safety refers to all practices ensuring that an advertising campaign never appears linked to dangerous or illegal content. This mostly covers content featuring violence, hate speech, drugs, alcohol, or any form of nudity and sex.

Brand Suitability refers to all practices ensuring that an advertising campaign never appears linked to content that would be judged irrelevant or inappropriate for the brand. This broader notion is often summed up as follows: if content is aligned with your brand image, guidelines and strategy, it is brand suitable.

With that said, we can sum it up as follows: regarding digital content, Brand Suitability ensures that a brand appears within relevant and appropriate content, Brand Safety included, in order to protect or improve its Brand Equity. As Facebook (60%) and YouTube (31%) remain the top places where consumers encounter non-brand-safe content, negative consumer perception is now a key topic for brands: 50% of social media users admit having changed their opinion of a brand whose ads were displayed next to inappropriate content.

A single classification to rule them all?

As Brand Suitability can vary a lot from one brand to another, we will limit the following to Brand Safety matters. The main issue, then, is to agree upon a common framework that determines what is safe and what is not, and that introduces degrees of severity. Actors such as the Global Alliance for Responsible Media (GARM) or Integral Ad Science have issued specific classifications and taxonomies to try to draw the line on what could be deemed unsafe. Solution providers such as AWS also offer their own classifications. Most of them undergo rigorous auditing and are accredited by the Media Rating Council, giving them credibility on the advertising market.
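To make this concrete, here is a minimal sketch, in Python, of what such a taxonomy might look like as a data structure. The category names and severity tiers below are purely illustrative, loosely inspired by GARM-style risk levels; they do not reproduce any vendor's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    """Illustrative severity tiers, loosely inspired by GARM-style risk levels."""
    FLOOR = 0   # never monetizable (e.g. depictions of terrorism)
    HIGH = 1
    MEDIUM = 2
    LOW = 3

@dataclass(frozen=True)
class Category:
    name: str
    risk: Risk

# A toy slice of a Brand Safety taxonomy -- real ones
# (GARM, IAS, AWS) are far larger and formally audited.
TAXONOMY = [
    Category("terrorism", Risk.FLOOR),
    Category("adult_explicit_sexual_content", Risk.HIGH),
    Category("alcohol", Risk.MEDIUM),
    Category("online_piracy", Risk.LOW),
]
```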

But all of them still face two major issues that prevent a universal Brand Safety taxonomy from emerging. First, most taxonomies barely consider the video format, favoring instead a broad, one-size-fits-all treatment of media that mixes text, images, website origins and, in some cases, audio. This explains why categories such as “Adult & Explicit Sexual Content” sit alongside “Spam” or “Online Piracy”. For video media, these classifications are often irrelevant: they provide too much detail on marginal categories and too little on major ones.

Hence the second issue with existing classifications: each defines its own level of sensitivity on major topics such as “terrorism” or “gore”, yet these are fine lines that would truly benefit from a definition shared by all parties. On topics such as nudity, once again, what is deemed acceptable depends on the country, the culture of the brand and, above all, the context. In the case of video, many of these considerations are still largely ignored.

Understanding video like humans do

Understanding the context of appearance thus becomes key to Brand Safety, which explains why automated video understanding is the ultimate tool for media stakeholders. An actor playing a terrorist in a comedy could dress exactly like a real one, but would that video be considered glorification of terrorism? Whether nudity appears in a serious documentary or an erotic movie completely changes our perception of it and of its severity. As for weapons, footage of a school shooting cannot be put on the same level as military troops marching with guns during a parade. It is all about context of appearance.
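As a toy illustration of this idea, the sketch below conditions an unsafe-probability on a (detected object, scene context) pair. All the labels and numbers are hypothetical; a real system would learn these relationships rather than hard-code them.

```python
# A minimal sketch of context-conditioned severity: the same detected
# object maps to very different Brand Safety outcomes depending on the
# scene it appears in. Labels and scores are illustrative only.
SEVERITY = {
    # (detected object, scene context) -> probability the content is unsafe
    ("weapon", "school_shooting_footage"): 0.99,
    ("weapon", "military_parade"):         0.15,
    ("nudity", "erotic_movie"):            0.95,
    ("nudity", "medical_documentary"):     0.20,
}

def assess(detected_object: str, scene_context: str) -> float:
    """Return an illustrative unsafe-probability for an (object, context) pair."""
    return SEVERITY.get((detected_object, scene_context), 0.5)  # unknown -> uncertain

print(assess("weapon", "military_parade"))  # 0.15: same object, different verdict
```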

This is where the game becomes tricky. As of today, some companies do a fairly good job using A.I. to analyze text, images or sound and obtain a fair assessment of the context. Video is the next level, as it requires combining many different kinds of learning, from the easiest cases (a plastic gun used by a kid is not a real gun) to the toughest ones (someone could just as well drink whisky from a water glass and water from a wine glass; how do you tell the difference?).
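One simple way to picture that combination step is a weighted fusion of per-modality signals. The sketch below assumes hypothetical upstream models already produce unsafe-probabilities for vision, audio and text; the fixed weights are illustrative, as production systems typically learn them.

```python
# A minimal sketch of multimodal fusion over per-modality
# unsafe-probabilities. Weights are illustrative, not learned.
def fuse_modalities(vision: float, audio: float, text: float,
                    weights=(0.5, 0.25, 0.25)) -> float:
    """Weighted combination of per-modality unsafe-probabilities."""
    scores = (vision, audio, text)
    return sum(w * s for w, s in zip(weights, scores))

# A gun detected on screen, but the transcript and laugh track
# suggest comedy: fusion tempers the visual signal.
print(fuse_modalities(vision=0.9, audio=0.2, text=0.1))  # 0.525
```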

Let us take a single textbook example of Brand Safety to understand the issues raised by video and context: the promotion and advocacy of alcohol use. Alcohol bottles and branded products are rather easy to identify: most of the time, shapes, logos and colors help the algorithm detect what is alcohol and what is not. But when it comes to drinking from other containers, behavioral understanding becomes necessary. Drinking shots at a party, playing a beer game on a ping-pong table, or simply pouring a translucent liquid into a plain glass are all examples that require understanding that the context most likely involves alcohol. Identifying someone being drunk without even identifying alcohol creates new difficulties, relying on body movement, facial expressions, voice and overall context.

Alcohol identification is not just a matter of detecting beer glasses
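As a rough illustration of how behavioral cues could complement object detection, here is a hedged sketch. None of the signal names correspond to a real model; they stand in for the outputs of hypothetical upstream classifiers.

```python
# A hedged sketch: combine direct evidence (a recognizable bottle)
# with behavioral context when no bottle is visible. All inputs are
# hypothetical classifier outputs in [0, 1].
def alcohol_likelihood(bottle_detected: float,
                       drinking_gesture: float,
                       party_context: float,
                       beer_pong_table: float) -> float:
    """Combine direct and contextual evidence into one probability."""
    # Direct evidence dominates when present; otherwise behavioral
    # context carries the estimate, discounted for uncertainty.
    contextual = max(drinking_gesture * party_context, beer_pong_table)
    return max(bottle_detected, 0.8 * contextual)

# Translucent liquid in a plain glass at a party: no bottle detected,
# but behavior still pushes the probability up.
print(alcohol_likelihood(0.05, 0.9, 0.8, 0.0))  # 0.576
```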

In the end, it all comes down to levels of understanding, just as it does for us humans. Eventually, algorithms can help brands make informed assumptions about the context of a video, and thus assess whether that video truly needs to be rejected.

Towards levels of safety and customized variations

In theory, Brand Safety applied to video is quite a simple affair: reject weapons, reject nudity, reject drugs and alcohol. In practice, though, once you have blocked the “simple” cases, you are left with a vast majority of “grey area” cases for which Brand Safety becomes relative, or subjective.

Again, let us take a simple example: the events that occurred at the U.S. Capitol a few days ago. Most videos of this event display weapons, violence, street riots, hateful flags and signs. Whether you judge them brand safe or not will depend on many factors: the level of violence displayed, the country where the footage airs, the way the information itself is handled, or even the context of publication. All of this affects how comfortable a brand can be with being associated with such videos.

Brand Safety applied to video cannot simply be reduced to a narrow classification. Any reasonable solution offering real A.I.-powered Brand Safety monitoring (and not pretending to, by hiring external human moderation partners) should build its approach on two major cornerstones:

  • Always give probabilities of content safety, not binary answers. Sometimes content is 100% safe or unsafe, but most of the time it is far more realistic to provide a tendency than an assertion.
  • These tendencies must be weighed against the brand’s own guidelines and tolerance for each Brand Safety topic. And even for a single brand, they should be adaptable to local constraints: nudity appearances, for instance, should rely on customizable thresholds depending on needs and markets, as the sketch below illustrates.
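Here is a minimal sketch of what such per-brand, per-market guidelines could look like in practice. The structure, names and thresholds are entirely hypothetical, not an actual API.

```python
# A minimal sketch of brand-specific, market-adaptable guidelines.
# All names and numbers are hypothetical.
DEFAULT_THRESHOLDS = {"alcohol": 0.6, "nudity": 0.4, "weapons": 0.3}

BRAND_OVERRIDES = {
    # A spirits brand tolerates alcohol, but is stricter on nudity
    # in some markets.
    "acme_spirits": {
        "global": {"alcohol": 0.95},
        "US":     {"nudity": 0.2},
    },
}

def is_suitable(brand: str, market: str, scores: dict) -> bool:
    """Accept a video only if every category score stays under the
    brand's threshold, after applying global and market overrides."""
    thresholds = dict(DEFAULT_THRESHOLDS)
    overrides = BRAND_OVERRIDES.get(brand, {})
    thresholds.update(overrides.get("global", {}))
    thresholds.update(overrides.get(market, {}))
    return all(scores.get(cat, 0.0) < thr for cat, thr in thresholds.items())

print(is_suitable("acme_spirits", "US", {"alcohol": 0.7, "nudity": 0.1}))  # True
```

The same probability scores can thus yield different verdicts for different brands, or for the same brand in different markets, which is exactly the point: the model outputs tendencies, and the brand's guidelines turn them into decisions.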

In the end, Artificial Intelligence provides a rational framework for what remains a subjective interpretation. That subjectivity is each brand’s to define, and our role as A.I. solution providers is to give them the means to make their own choices.

There are still many complexities in video understanding, and at Context we work every day to strengthen our clients’ ability to define their own Brand Safety (and Suitability) guidelines. The final argument for the advent of A.I. applied to Brand Safety is that it opens a new era for programmatic advertising: the ability to monitor pre-bid, at scale, and with a faster response time. That is what will eventually make the difference between manual moderators (and A.I. wannabes), mostly offering post-bid reports, and real A.I.-powered Brand Safety solutions such as the one we offer at Context.
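To give a sense of what pre-bid monitoring implies technically, here is a hedged sketch assuming videos are scored ahead of time: at bid time there is no room to run heavy models, only to look up a precomputed verdict within the exchange’s timeout. Everything here is illustrative.

```python
# A hedged sketch of pre-bid filtering against a cache of scores
# computed offline. Video IDs and scores are illustrative.
PRECOMPUTED_SCORES = {
    # video_id -> per-category unsafe-probabilities (computed offline)
    "vid_123": {"alcohol": 0.1, "weapons": 0.05},
    "vid_456": {"weapons": 0.9},
}

def prebid_check(video_id: str, thresholds: dict) -> bool:
    """Return True if it is safe to bid. Unknown videos are skipped:
    failing closed is the conservative choice pre-bid."""
    scores = PRECOMPUTED_SCORES.get(video_id)
    if scores is None:
        return False
    return all(scores.get(cat, 0.0) < thr for cat, thr in thresholds.items())

print(prebid_check("vid_123", {"alcohol": 0.6, "weapons": 0.3}))  # True -> bid
print(prebid_check("vid_456", {"alcohol": 0.6, "weapons": 0.3}))  # False -> skip
```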
