Media, Entertainment, and Artificial Intelligence

AI LA Community · Published in The AI Collective · 5 min read · Feb 15, 2019

Technical & Business Corner Series by Christian Siagian.

The key to media and entertainment has always been content: content that engages the audience and keeps them coming back for more. The business of media and entertainment has changed considerably in the past decade, in the internet and social media era. For one, content variety has increased to an unprecedented level, and so has the number of channels through which to consume it.

At the same time, on-demand content has become the norm, leading to new consumption behaviors such as binge-watching. The business model has also changed: revenue now comes from sources other than just advertising. Nowadays, basic content is almost always free, with premium or tailored content and services being the primary source of revenue. Because of the explosion of choices and fierce competition, audience targeting is critical for survival.

In terms of basic mechanisms, media and entertainment still consists of content creation, content discovery, and content delivery. It is just that all three phases are undergoing rapid change due to accelerated development in Artificial Intelligence (AI) and technology in general, and to how quickly society has chosen to adopt them. One critical component that should be emphasized here is data. In modern algorithms, some form of learning is a must, and techniques that do not make use of data or learning are most likely inferior to the state of the art. Every company should have a strategy for capturing and using its consumers’ behavior (past selections, frequency of consumption, how the products are used, etc.), as well as data about its business processes and know-how (how to efficiently process orders, etc.).
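
To make the idea of capturing consumer behavior concrete, here is a minimal sketch of an interaction-logging record in Python. The field names and the `log_event` helper are hypothetical illustrations, not a prescription for any particular platform; a real pipeline would add identity resolution, consent handling, and a proper event store.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import io
import json


@dataclass
class ConsumptionEvent:
    """One hypothetical record of a consumer interacting with a piece of content."""
    user_id: str
    content_id: str
    action: str          # e.g. "play", "pause", "finish", "skip"
    channel: str         # e.g. "mobile_app", "smart_tv", "web"
    timestamp: str       # ISO-8601, UTC
    watch_seconds: float = 0.0


def log_event(event: ConsumptionEvent, sink) -> None:
    """Append the event as one JSON line; `sink` is any file-like object."""
    sink.write(json.dumps(asdict(event)) + "\n")


if __name__ == "__main__":
    # Example: record that a user finished an episode on a smart TV.
    buffer = io.StringIO()
    log_event(
        ConsumptionEvent(
            user_id="u_123",
            content_id="show_42_ep_7",
            action="finish",
            channel="smart_tv",
            timestamp=datetime.now(timezone.utc).isoformat(),
            watch_seconds=2520.0,
        ),
        buffer,
    )
    print(buffer.getvalue().strip())
```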

We’ll start with content creation. Generally speaking, there are two types: live content, such as sports or artistic performances, and content that has been prepared well in advance, such as movies or music videos. In the former, the act no longer has to be performed by polished, world-class athletes or artists. Demand for an authentic, everyday experience through platforms such as YouTube has increased dramatically. Also, the new trend of increased update frequency (from daily to 24-hour non-stop) can take a toll on the creator. This is why there are now many services to help content creators. For example, Bytedance, the highest-valued unicorn out of China ($75B), provides AI-based special effects and editing suggestions to help creators improve video quality.

As for the (elite) performers, the pertinent technology revolves around coaching and development using virtual/augmented reality, smart sensors, smart clothing, smart room technology, and more. Along the lines of talent discovery and evaluation, there are also plenty of analytics and new techniques in computer vision, natural language processing, and voice or text understanding to discover the next great singer. Further down the line of content creation, we’ll have more realistic graphics and animation, automated pre- and post-production such as auto-syncing, and machine learning (ML) for creating the most efficient budgeting and scheduling timelines.

However, the most important part of content creation is the content itself. It should touch the heart. Creating content that is current and fresh yet familiar is crucial. Many times, this means analytics about what kind of content resonates with your audience, in order to predict the return on investment. Analytics, by nature, is information about the content. Research by Elaine Chew at Queen Mary University of London, on the other hand, goes deeper by analyzing the content itself, modeling the structure and expressivity of music. This is still difficult to do for movies because of the sheer volume of video data, and because an efficient algorithm for representing a movie is still unavailable.

While predicting revenue has a lot to do with characterizing a large audience, content discovery is more about understanding the life and nuances of an individual. This is where recommender systems, a maturing technology, come into play. Companies such as Spotify, YouTube, and Netflix, and just about any entity with a large number of products in its inventory, make use of recommender systems.

In their most basic form, these systems address the problem indirectly: user A just finished looking at item 1; previous knowledge and data show that many users look at both items 1 and 2; so user A should also be shown item 2. Many companies add knowledge-based improvements to these algorithms, such as ratings, social media inputs, and so on. But what they would really like is a larger context: the user’s current situation or intent. Improving situational awareness is probably where smart speaker and companion technology becomes a gateway for this information. Assistants such as Amazon’s Alexa and Apple’s Siri are in a prime position to provide that context to their corporate partners.
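
To make that basic co-occurrence idea concrete, here is a minimal sketch in Python. The interaction data and the `recommend` helper are hypothetical; production recommenders add weighting and normalization (e.g. cosine similarity) plus the knowledge-based signals mentioned above.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical viewing histories: user -> set of items they consumed.
histories = {
    "user_a": {"item_1"},
    "user_b": {"item_1", "item_2"},
    "user_c": {"item_1", "item_2", "item_3"},
    "user_d": {"item_2", "item_3"},
}

# Count how often each pair of items is consumed by the same user.
co_counts = defaultdict(lambda: defaultdict(int))
for items in histories.values():
    for x, y in combinations(sorted(items), 2):
        co_counts[x][y] += 1
        co_counts[y][x] += 1


def recommend(seed_item, already_seen, top_k=3):
    """Suggest items that most often co-occur with the seed item."""
    candidates = co_counts.get(seed_item, {})
    ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
    return [item for item, _ in ranked if item not in already_seen][:top_k]


# User A just finished item_1, so item_2 (its most frequent companion) comes first.
print(recommend("item_1", already_seen={"item_1"}))  # ['item_2', 'item_3']
```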

And so we arrive at content delivery. This topic is primarily a lucrative, large-scale platform play, where the payoff is an ecosystem of businesses providing revenue. Technologies such as virtual reality, smart rooms, and the above-mentioned smart speakers fit the bill. Along with hardware form factors and adoption strategies, there is a lot more to be developed here, both in perceiving the user and in presenting content.

On the other hand, content delivery can also mean creating an optimal consumption setting. Think better acoustics for a movie theater or sports stadium, with content-based, intelligent decisions about how to improve immersion. Another example is smart bundling of products, such as adding fantasy sports or legalized gambling within the game or broadcast context, allowing users to bet on just about anything.

In the end, the purpose of technology in media and entertainment is to present content to an audience with ever-shifting interests. And as we describe different technical topics in more detail, we always have to come back to the question: is this something that the audience wants and is willing to pay for?

Christian Siagian received his Ph.D. in Computer Science from USC in 2009 and was a neuroscience postdoc at Caltech. He has authored work and holds patents in 3D printing, 3D scanning, and medical robotics, and he co-founded a 3D printing and scanning start-up named AIO Robotics. His interests span many Artificial Intelligence fields as well as the business world: building teams and products and predicting the future directions of technology.

If you are in Los Angeles, please come out to one of our upcoming activities: https://AI_la.eventbrite.com


AI LA Community · The AI Collective
We educate and collaborate on subjects related to Artificial Intelligence (AI) with a wide range of stakeholders in Los Angeles. #longLA #AIforGood