How AI is helping to save the media industry
Few industries have been hit as hard by the technological changes of recent times as the media industry.
The same trends that have improved the lives of billions — the growth of the internet, the spread of social media, and the proliferation of smartphones — have instead disrupted the business models of every major media company, diluting their ability to sustainably fund their core operations.
Most media companies have identified a ‘shift to video’ as a critical pathway out of this digital dilemma. Digital video content is five times more engaging for consumers and four times more valuable for advertisers than text content alone.
However, despite huge investments by media companies in increased video production, these shifts to video have so far failed to deliver meaningful bottom-line results.
That’s because driving video performance depends on accurately matching highly relevant content to the right audience. While production has scaled, the matching technology required has not: media companies still rely on legacy approaches such as manual keyword tagging and human curation. This has left their vast video libraries both under-utilized and under-valued.
AI provides the solution.
Advances in deep learning technology can be applied to ‘read’ vast libraries of video to determine what’s in them, and ultimately be trained to ‘understand’ which videos are most relevant in a given context. Suddenly, media companies have a technological solution that can scan and surface highly relevant video to the right audiences at scale.
“This is a perfect example of AI’s ability to deliver direct benefit to business performance,” says Dr. Michael Barnathan, a former Google engineer and current CTO of ViewX, a start-up that’s built an AI used by multiple major media companies to help them transform their video performance.
ViewX uses deep learning to analyze the full scope of information contained in any premium video — combining sources such as face detection, audio transcription, scene segmentation, scene captioning, and optical character recognition (OCR) — and then applies multiple proprietary innovations, tailored to the needs of the media industry, to improve how that information is prioritized. The result: a step-change in quality that moves past traditional performance limits, reaching up to 98% agreement with human ground truth.
These innovations span both metadata generation and the company’s proprietary relevance engine. For example, their OCR processor captures on-screen text by analyzing patterns in a video’s morphological gradients to distinguish programmatically overlaid text, natural text in the background, and the underlying background image. A Long Short-Term Memory (LSTM) neural network then processes the segmented gradients, translating each stream into far more accurate human-readable text than was previously possible.
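To make the first step concrete: a morphological gradient is simply dilation minus erosion, which responds strongly at sharp intensity edges — the kind produced by crisp, programmatically overlaid captions — and stays near zero in flat regions. The sketch below is not ViewX’s implementation (their segmentation and LSTM decoding stages are proprietary); it is a minimal, pure-Python illustration of the gradient computation itself, with the frame values invented for the example.

```python
# Minimal sketch: a morphological gradient (dilation minus erosion)
# responds at sharp edges such as overlaid text in a video frame.
# This is an illustrative toy, not ViewX's actual pipeline.

def morphological_gradient(img, k=1):
    """Return dilation(img) - erosion(img) over a (2k+1)x(2k+1) window."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [
                img[yy][xx]
                for yy in range(max(0, y - k), min(h, y + k + 1))
                for xx in range(max(0, x - k), min(w, x + k + 1))
            ]
            out[y][x] = max(window) - min(window)  # local contrast
    return out

# A flat background (gray value 10) with one bright "caption" stroke (200):
frame = [[10] * 8 for _ in range(5)]
for x in range(2, 6):
    frame[2][x] = 200

grad = morphological_gradient(frame)
# Flat regions yield zero; pixels at the stroke's edges yield large values,
# giving a later stage (e.g. an LSTM decoder) a clean text-edge signal.
```

In a real system the gradient maps would be computed per frame with an optimized library, segmented into candidate text regions, and only then handed to a sequence model for decoding.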
ViewX’s relevance engine breaks new ground in NLP and vector-based matching techniques, improving the accuracy with which videos are found and recommended based on their relevance to any input — whether articles, videos, or audience types. Users’ selection data is collected as feedback and aggregated across multiple media organizations to drive continuous optimization of results.
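The core idea behind vector-based matching is straightforward: represent both the input (say, an article) and each candidate video as numeric embedding vectors, then rank candidates by how closely their vectors point in the same direction. The sketch below uses cosine similarity with hand-made three-dimensional vectors purely for illustration; real systems use learned embeddings with hundreds of dimensions, and the video names here are invented.

```python
# Minimal sketch of vector-based relevance ranking via cosine similarity.
# Embeddings and names below are hypothetical examples, not real data.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Embedding of the article a video should be matched against:
article = [0.9, 0.1, 0.0]

# Candidate video embeddings from the metadata pipeline:
videos = {
    "election-night-recap": [0.8, 0.2, 0.1],
    "cooking-show-highlights": [0.1, 0.1, 0.9],
}

# Rank candidates by similarity to the article, best match first:
ranked = sorted(videos, key=lambda v: cosine_similarity(article, videos[v]),
                reverse=True)
```

Feedback data — which suggested video an editor actually selects — can then be used as a training signal to adjust the embeddings, which is one plausible reading of the continuous optimization described above.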
For Dr. Barnathan, the opportunity to apply AI directly to a critical industry challenge is the most validating part of ViewX’s journey: “A thriving media industry has never been more important nor more under threat. It’s exciting to be working with the biggest brands in media to harness the incredible power of AI to unlock the value of their video.”
With ViewX and the power of AI on their side, media companies can finally look forward to a new wave of technological advancement that will enhance rather than dilute their ability to generate sustainable profits and power the vital social function they perform.
Written by Daniel Burke and Alexander Gould