Brand Safety & AI: Everything You Need to Know

Unitary · May 10, 2023 · 3 min read

With the public release of ChatGPT, artificial intelligence has hijacked the news agenda. This technically impressive, authentic-sounding chatbot has made AI accessible to the masses — and for many people, ChatGPT has become synonymous with artificial intelligence itself.

Different types of AI

Not all artificial intelligence is the same, even if ChatGPT is the first AI-powered system many people have consciously used. ChatGPT, like DALL-E, is an example of generative AI: a system trained to create new content in response to user input. Ask ChatGPT to write a poem, and it will draw on patterns learned from its training data to generate a composition that passes for poetry.

Artificial intelligence has been in use for years, even if the general public was not fully aware of it. Much of this is down to the relative mundaneness of these specialised applications. Apple and Google, for instance, have long used machine learning to automatically identify and tag people in their photo apps. Similarly, image recognition technology has been trialled for automatically identifying cancerous growths in medical imaging.

In these instances, AI is used for a very specific and narrow task. It is not intended to be interactive, nor is it supposed to create anything. And in reality, AI will continue to operate ‘behind the scenes’ in the majority of use cases.

Different AI use cases in our everyday life. Image via TechVidvan.

AI and brand safety

The issue of generative AI and its potential misuse is creating headaches for many brand safety professionals. However, AI can also help to make their job a lot easier.

Take Unitary, for instance. Our proprietary machine learning and artificial intelligence algorithms dramatically simplify the process of monitoring content against brand safety industry standards. They can even assess video, checking that every element (visuals, audio, subtitles and so on) is fully compliant, and they do so automatically.
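To make that concrete, here is a minimal sketch of what a multimodal moderation pipeline can look like. It is illustrative only, not Unitary's actual implementation: the classifiers are trivial stand-ins for trained vision and text models, and the 0.8 threshold is an assumed value.

```python
# Minimal sketch of a multimodal brand-safety check for a video.
# The classifiers below are trivial stand-ins for trained models;
# nothing here is Unitary's actual API.
from dataclasses import dataclass, field

@dataclass
class Verdict:
    compliant: bool
    flagged: list = field(default_factory=list)

def classify_frame(frame) -> float:
    """Stand-in for a trained image classifier; returns a 0-1 risk score."""
    return 0.0

def classify_text(text: str) -> float:
    """Stand-in for a trained text classifier; returns a 0-1 risk score."""
    return 0.0

def moderate_video(frames, transcript, subtitles, threshold=0.8) -> Verdict:
    """Check every element of a video: sampled frames, audio transcript, subtitles."""
    flagged = []
    # Visual track: score each sampled frame.
    for i, frame in enumerate(frames):
        if classify_frame(frame) >= threshold:
            flagged.append(("visual", i))
    # Audio track: score the speech transcript as text.
    if classify_text(transcript) >= threshold:
        flagged.append(("audio", "transcript"))
    # Subtitle track: score each caption line.
    for i, line in enumerate(subtitles):
        if classify_text(line) >= threshold:
            flagged.append(("subtitles", i))
    return Verdict(compliant=not flagged, flagged=flagged)

print(moderate_video(frames=[b"..."], transcript="...", subtitles=["..."]))
```

The key point the sketch captures is that each modality is scored independently, so a video passes only when every track clears the policy threshold.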

As well as providing content assessment that accurately emulates a human moderator's, AI goes beyond simple keyword categorisation. Once trained, it can analyse content against other, more contextual factors, such as cultural cues or innuendo.

AI can also facilitate adherence to industry standards, including GARM's Brand Safety Floor & Suitability Framework, the latest industry-agreed guidelines for safe advertising. Once those high-level guidelines are translated into detailed content policies, machine learning can classify content against each policy with great speed and accuracy.
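One way to picture that translation step is bucketing per-category model scores into the framework's suitability tiers. The sketch below is a loose illustration: the category names are paraphrased from GARM, and the thresholds are arbitrary assumptions rather than anything prescribed by the framework.

```python
# Rough sketch of mapping classifier output to GARM-style risk tiers.
# Categories are a paraphrased subset of the GARM framework; the
# thresholds are illustrative assumptions, not GARM-defined values.

def risk_tier(score: float) -> str:
    """Bucket a 0-1 category score into a suitability tier (stand-in logic)."""
    if score >= 0.9:
        return "floor"       # never suitable for advertising
    if score >= 0.7:
        return "high risk"
    if score >= 0.4:
        return "medium risk"
    return "low risk"

def classify_content(scores: dict) -> dict:
    """Map per-category model scores to GARM-style suitability tiers."""
    return {category: risk_tier(score) for category, score in scores.items()}

# Hypothetical scores from an upstream content classifier.
print(classify_content({
    "Hate Speech & Acts of Aggression": 0.92,
    "Arms & Ammunition": 0.15,
}))
```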

Doing more, faster

Like the Apple and Google examples above, AI content moderation systems are optimised for a particular task. With the right infrastructure behind them, AI algorithms can process more data and make more decisions, faster, than their human equivalents. In some cases that means processing 25,000 frames of video every second, or 3 billion images each day: far more than a human moderator could hope to get through in several years.
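Taking those headline figures at face value, the arithmetic is consistent:

```python
# Sanity check on the throughput figures quoted above.
frames_per_second = 25_000
seconds_per_day = 24 * 60 * 60               # 86,400

frames_per_day = frames_per_second * seconds_per_day
print(f"{frames_per_day:,} frames per day")  # 2,160,000,000

# Sustained, 25,000 frames/s works out to roughly 2.2 billion frames a day,
# the same order of magnitude as the quoted 3 billion images per day.
```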

And because specialised algorithms are constantly being refined and improved, capabilities continue to evolve at a similarly rapid pace. In this way, AI will also help strengthen brand safety efforts in the face of new and emerging threats.

Clearly the rise of AI has the potential to create new, previously unthinkable brand safety issues. However, not all AI is the same — and the right AI tools can actually help brand safety specialists carry out their duties more quickly and effectively.

To learn more about Unitary and how artificial intelligence can turbocharge your brand safety efforts, please contact us.
