Brand Safety and video understanding: a matter of life and death for the media industry (Part 1)

Paul Chaumont
Jul 15 · 7 min read

Brand safety has exploded over the past couple of years and has become a vital priority for the media industry. Advertisers cannot afford a single mistake that gives the impression they endorse inappropriate content, and content providers are dealing with such large inventories that it has become mandatory to sort the good from the bad as fast as possible. However, even though text-based Brand Safety has been automated through semantic understanding, video Brand Safety still follows very archaic standards, both in categorization and automation. This can easily be explained by the difficulty of assessing each item and event occurring in a video.

Many Brand Safety “experts” thus rely on platforms dedicated to human moderation, with barely satisfying results. Video understanding is without a doubt the ultimate key to helping brand safety become a fully automated and verified process. Without it, would the media industry be able to survive the tsunami of unverified video content it is already facing, and will face even more in the future?


Brand Safety 101: what does it cover?

This might be the trickiest question of all. Brand Safety, in its broad meaning, is a set of measures aiming to protect a brand’s image from any content that might deteriorate it in an advertising context. In other words, making sure that your brand is not associated with any content that might ruin its effect. Brand Safety has evolved alongside web content: not only has it become wider and trickier to delimit, but the exponential growth of content and its various forms have made it an exhausting yet vital task for all.

When speaking about Brand Safety, one usually follows the same broad categories that we all know, such as violence, alcohol, drugs, hate speech, porn and offensive content. Over time, though, advertisers and brands have raised their expectations in order to adapt to new forms of “offensive” content: illegal downloads and, most recently, fake news are now a priority for the industry, as streaming platforms and social networks have taken over the internet.

Investing in Brand Safety should not be reduced to protecting a brand from any negative impact: it actually has much broader consequences, especially if taken lightly.


One mistake to ruin a whole campaign

Always remember that it takes one mistake to ruin a brand image in the long term. Studies have shown that consumers view advertising on a piece of content as intentional on the brand’s part, whatever the context. If a brand were to appear on a YouTube channel with racist undertones, consumers would most likely conclude that the brand endorsed the content, and is therefore a bit racist itself.

Though hard to measure, a recent study has tried to quantify the direct effect on viewers of placing an ad next to offensive content. The study indicates a 200% decline in brand purchase intent after watching inappropriate content, as brand quality perception is 7 times worse. Meanwhile, consumers are 450% more likely to feel the brand does not care about them.

Brand Safety failures can have catastrophic financial consequences, as they have a real impact on brand perception and future purchases.


Human moderation: a costly and exhausting mission

Brand Safety processes still involve a lot of human tasks. Although many specialized companies claim they have high-performing algorithms to moderate any offensive content, they usually rely on outsourced platforms of hundreds of human eyes binge-watching images and videos, sorting the good from the bad. Very far from full automation, then.

But all this has a real cost. First, because humans cannot watch more content than there are hours in their workday: the more content, the more resources needed to monitor it. Second, because humans make mistakes: if you want to know what it’s like to make a decision on a piece of content after watching eight straight hours of it, you’d be surprised how quickly you lose any rationality. Third, because of cultural bias: many moderation platforms are outsourced and depend on ever-changing cultural references. A human eye will always have biases that prevent it from performing fully objective moderation, even with clear guidelines.

Last but not least, because it is the worst job on the planet. And how could it be otherwise: imagine spending hours watching violent content, porn or incitement to terrorism, all day long, every day. In this kind of business, turnover is very high, and employees usually burn out pretty quickly.


Preventing rather than reporting

Since companies still rely on human-based moderation, even though they like to hide behind obscure technological solutions, they usually don’t have the capacity to prevent inappropriate content from being published. Every minute, 400 hours of content are uploaded to YouTube alone. Assuming you wanted to do a proper job and ensure full brand safety, imagine how many human platforms would be required to handle this. This is exactly the kind of problem companies like Facebook are facing right now, trying to keep inappropriate content from remaining on their platforms for longer than a few minutes. As prevention would create an infinite waiting line and eventually kill the magic of instant publishing, the only solution remains reporting: taking the risk of pushing inappropriate content out there and waiting for an advertiser to fall into the trap.

Brand Safety automation, with algorithms adapted to detect any irregularity, would not only secure the whole process but ensure that at-risk content is never published.
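To make the idea of a pre-publish gate concrete, here is a minimal sketch (all names, categories and thresholds are invented for illustration, not Reminiz’s actual pipeline). It assumes a video-understanding model has already scored each segment of a video against a set of unsafe categories; the gate blocks publication as soon as any segment trips a threshold:

```python
# Hypothetical sketch of an automated pre-publish Brand Safety gate.
# `segment_scores` would come from a video-understanding model that
# scores each segment of a video against unsafe categories (0.0-1.0).

UNSAFE_THRESHOLDS = {   # assumed per-category confidence thresholds
    "violence": 0.80,
    "hate_speech": 0.70,
    "adult": 0.90,
}

def gate_video(segment_scores):
    """Return (approved, reasons): reject the video before publication
    if any segment exceeds a category threshold."""
    reasons = []
    for i, scores in enumerate(segment_scores):
        for category, threshold in UNSAFE_THRESHOLDS.items():
            if scores.get(category, 0.0) >= threshold:
                reasons.append((i, category, scores[category]))
    return (len(reasons) == 0, reasons)

# Example: the second segment trips the violence threshold.
approved, reasons = gate_video([
    {"violence": 0.10, "adult": 0.02},
    {"violence": 0.95, "hate_speech": 0.10},
])
print(approved)  # False
print(reasons)   # [(1, 'violence', 0.95)]
```

The point of the sketch is the workflow, not the model: because the gate runs before publication rather than after a report, at-risk content never reaches the inventory in the first place.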


Towards modular Brand Safety…

As long as Brand Safety remains a matter of human-based activity, it will never get the chance to be as accurate as brands would like it to be. Sure, any brand can tell you it does not want to appear next to gun shootings, ISIS flags or Nazi salutes. But Brand Safety is much broader than this and, more importantly, specific to each brand. Sponsors of football star Neymar, struggling to benefit from his current image, would probably want to avoid strengthening that endorsement on videos featuring Neymar’s off-field issues. At least for a while. On the contrary, if you’re a non-profit fighting animal cruelty, you might have an interest in appearing on those exact videos featuring animal cruelty, to make your point. In other words, Brand Safety has as many forms as there are companies and communication objectives.

This is why video understanding allows the media industry to move towards modular Brand Safety. Depending on your brand, it is the promise of activating or deactivating any criterion that might harm your brand image.
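A modular policy can be pictured as a simple per-brand set of toggled criteria. The sketch below uses invented category names and is not an actual product API; it just shows how the same detected content can be safe for one brand and unsafe for another:

```python
# Sketch of modular, per-brand Brand Safety policies.
# Category names and detections are illustrative assumptions.

ALL_CRITERIA = {"violence", "alcohol", "drugs", "hate_speech",
                "adult", "fake_news", "animal_cruelty"}

class BrandPolicy:
    def __init__(self, blocked):
        # Each brand activates only the criteria it cares about.
        self.blocked = set(blocked) & ALL_CRITERIA

    def is_safe(self, detected_categories):
        """A video is safe for this brand if none of its detected
        categories appear on the brand's blocked list."""
        return self.blocked.isdisjoint(detected_categories)

# A mainstream advertiser blocks every category...
advertiser = BrandPolicy(ALL_CRITERIA)
# ...while an animal-welfare non-profit deliberately allows
# animal-cruelty footage, to run awareness ads against it.
nonprofit = BrandPolicy(ALL_CRITERIA - {"animal_cruelty"})

video = {"animal_cruelty"}        # categories detected in one video
print(advertiser.is_safe(video))  # False
print(nonprofit.is_safe(video))   # True
```

The same detection output feeds every policy; only the toggles differ, which is what makes the approach modular rather than one-size-fits-all.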


…And inventory optimization

Finally, the power of video understanding should not be forgotten when it comes to automating the sorting of the thousands of videos published every day and offered to brands as advertising inventory. The quicker and more accurately a publisher can sort good advertising material from bad for a client, the higher the chances of monetizing a larger share of the inventory over a longer period.

Indeed, one last thing to take into account is ever-changing context. For instance, Harvey Weinstein was perfectly “Brand Safe” 18 months ago, and any association between advertising and him would not have been a problem. Only Brand Safety automation allows you to evolve your standards on what is Brand Safe in the present context.
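This is where automation pays off most visibly: when a standard changes, an indexed back catalogue can be re-screened instantly, something human review cannot do at scale. A minimal sketch, with hypothetical entity names and data:

```python
# Sketch: re-screening a catalogue after Brand Safety standards change
# (e.g. a public figure becomes controversial overnight).
# Names and data are illustrative assumptions.

blocklist = {"person_x"}                  # entities newly deemed unsafe

catalogue = {
    "video_1": {"person_x", "person_y"},  # entities detected per video
    "video_2": {"person_y"},
}

def rescreen(catalogue, blocklist):
    """Return the videos that are no longer Brand Safe under the
    updated blocklist."""
    return {vid for vid, entities in catalogue.items()
            if entities & blocklist}

print(rescreen(catalogue, blocklist))  # {'video_1'}
```

Because the entities were indexed once at ingestion, updating the standard is a cheap set operation over metadata, with no need to re-watch a single frame.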


Deepfake: the end of human-based monitoring

Technology has already evolved to the point where human monitoring, whatever the amount of money you put into it, won’t be enough to prevent inappropriate content. Fake news has opened dangerous new doors in terms of content safety. The latest development is the deepfake: a technique for human image synthesis based on artificial intelligence, used to produce or alter video content so that it presents something that never occurred. Replacing one face with another, adding an item to a video, making people say something they did not say: deepfakes create infinite new ways to “play” with content and mess with the integrity of each piece.

Without a single doubt, deepfakes will soon become a Brand Safety priority for any media company. That is, of course, only if video understanding based on true artificial intelligence has started replacing armies of human eyes checking thousands of hours of content.

In the second part of this article, we’ll tell you more about the Reminiz Brand Safety algorithm and how it has been designed to operate according to every company’s requirements.

Reminiz Insights

Reminiz is a world-pioneering video understanding technology offering real-time facial and logo recognition. Augmented Content for a never-before-seen viewer experience.

Paul Chaumont

Written by

Product Manager at Reminiz

