How AIWORK Ensures Content Safety for Viewers

AIWork · Mar 31, 2022

Even as one of the pioneering crypto projects merging artificial intelligence (AI) with blockchain, we have to stay vigilant about the kind of content posted in our open marketplace. Yes, we will go the extra mile to improve the video content ecosystem, but never at the expense of protecting our users from harmful content online.

For those just joining us: AIWORK is a decentralized, open-source blockchain protocol and ecosystem built on a consensus network of Artificial Intelligence (AI) computing resources and a community of human experts, used to generate normalized and enhanced metadata for video content.

As it stands, the video content arena is booming, with users consuming more content now than ever before. Many might not realize it, but this sector is highly concentrated, leaving consumers with a limited number of platforms for viewing videos online. The established platforms that came before us also raise barriers that make it hard for new entrants to compete effectively.

As a new video content project on the blockchain, we plan to stand out from our predecessors with an innovative product offering that existing platforms do not provide.

The Existing Video Content Platforms and their Content Safety Drawbacks

YouTube

This platform is by far the largest in the video content sector, with more than 4.3 million videos viewed every minute. Despite all this growth, experts have warned that brand safety on YouTube might be impossible to guarantee. This poses a challenge for advertisers, since YouTube cannot guarantee that ads are placed in a brand-safe environment.

In 2017, multiple high-profile brands were up in arms against YouTube after their ads were placed against unsafe content such as terrorist video uploads.

TikTok

This platform is a fast-rising star in the video content ecosystem, becoming one of the most prominent social media success stories of the last five years. That said, on TikTok the regular rules do not apply, which forces content creators and advertisers to do extensive groundwork before posting anything there.

The biggest issue related to brand safety on TikTok is creators taking advantage of branded hashtag challenges for their own gain. Another is that the platform is turning into a digital playground where content creators experiment freely and, in the process, may post harmful content.

Facebook

Within just a few years, advertisers are already raising concerns about how crowded Facebook’s in-stream video program has grown. With the surge in pages able to monetize their videos through this feature, it is becoming clear that the brand safety measures Facebook has added to its in-stream program lag behind the standards expected of a video content platform of that size.

We commend the growth Facebook has experienced thus far, but it should not prioritize maximizing inventory at the expense of making its platform brand safe.

AIWORK Rising Above the Rest

For us, content access must be managed at the user level, with additional restrictions that define access for each piece of video content. It may sound like a mouthful, but with ContentGraph, our trademark content safety index, this is straightforward to achieve.

By leveraging the power of AI, this content safety index defines a confidence score for each of several content safety attributes, such as nudity, adult themes, offensive language, hate speech, violence, guns, alcohol, religion, and so on. These scores then let ContentGraph make a programmatic determination of content suitability, thereby fostering content safety. ContentGraph is also distinctive in being extensible: we can add new content safety attributes as they emerge. If the AI can be trained to recognize a new safety attribute in video content, that attribute can be added to ContentGraph by adding a single digit, without affecting the other attributes or earlier uses of the graph.
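To make this concrete, here is a minimal, hypothetical sketch of what a ContentGraph-style record could look like. The attribute names, the 0-to-1 score range, and the `SAFETY_ATTRIBUTES` list are illustrative assumptions on our part, not the actual ContentGraph schema.

```python
# Illustrative sketch only -- attribute names and structure are assumptions,
# not the actual ContentGraph schema.

# Each content safety attribute gets an AI-generated confidence score in [0, 1],
# where a higher value means the attribute is more likely present in the video.
SAFETY_ATTRIBUTES = [
    "nudity", "adult", "offensive_language", "hate_speech",
    "violence", "guns", "alcohol", "religion",
]

def build_content_graph(ai_scores: dict[str, float]) -> dict[str, float]:
    """Normalize raw AI outputs into a per-attribute confidence record."""
    return {attr: round(ai_scores.get(attr, 0.0), 2) for attr in SAFETY_ATTRIBUTES}

# Extensibility: recognizing a new attribute only means appending one more
# entry; existing attributes and earlier records are unaffected.
SAFETY_ATTRIBUTES.append("gambling")  # hypothetical new attribute
```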

Since we have AI behind us and a community of human experts working behind the scenes, some of the issues seen on the established video content platforms mentioned above can be avoided. Take YouTube as an example: content that does not align with your brand can be detected and flagged as inappropriate. ContentGraph will then give it a low safety score and label it as rejected to avoid any future mishaps.
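As a rough illustration of how such a programmatic determination might work, the sketch below rejects content whose confidence for any attribute crosses a threshold. The threshold value and the "approved"/"rejected" labels are assumptions for the example, not AIWORK's actual policy.

```python
def evaluate_content(graph: dict[str, float], threshold: float = 0.8) -> str:
    """Return 'rejected' if any attribute's confidence crosses the threshold."""
    flagged = [attr for attr, score in graph.items() if score >= threshold]
    return "rejected" if flagged else "approved"

# Example: high confidence of hate speech leads to rejection.
print(evaluate_content({"hate_speech": 0.93, "violence": 0.41}))  # -> rejected
```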

Furthermore, if you as a content creator need to customize your content safety settings, we have a feature that comes in handy: AIWORK and ContentGraph can be used to offer content safety filters that viewers apply when searching for safe and appropriate content.
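A viewer-side filter built on top of those per-attribute scores might look roughly like the following. The catalog shape and the per-attribute viewer limits are again illustrative assumptions, not AIWORK's actual API.

```python
def filter_catalog(
    catalog: list[dict],              # each item: {"title": ..., "graph": {attr: score}}
    viewer_limits: dict[str, float],  # per-attribute maximum the viewer tolerates
) -> list[dict]:
    """Keep only videos whose attribute scores stay within the viewer's limits."""
    def is_safe(item: dict) -> bool:
        return all(item["graph"].get(attr, 0.0) <= limit
                   for attr, limit in viewer_limits.items())
    return [item for item in catalog if is_safe(item)]

# Example: a viewer who filters out violent and adult content.
safe = filter_catalog(
    [{"title": "Cooking 101", "graph": {"violence": 0.02}},
     {"title": "Action clip", "graph": {"violence": 0.88}}],
    {"violence": 0.2, "adult": 0.1},
)
print([v["title"] for v in safe])  # -> ['Cooking 101']
```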

At AIWORK, we are committed to keeping the health and diversity of video content as our number one priority, while staying on the innovative edge of other emerging matters in this ecosystem, such as brand safety and the suitability of social video advertising.

Follow our socials for more in-depth project insights:

Website | Telegram | Twitter | Medium
