A streamlined workflow for video import, transcoding, encryption, delivery and playback is an important part of any VOD service. Most available transcoding solutions support the common import formats and containers and produce roughly the same quality, formats and codecs for playback. These parts are important, but there are more pieces to the puzzle.
While high-quality playback and delivery matter to end users, connecting the video assets to high-quality metadata, subtitles and images is the key to building recommendation engines and effective search, and to presenting the content in the best possible way to increase end-user consumption.
While video formats have settled around a few competing standards, subtitles and metadata are a different story. The lack of a unified standard for metadata and subtitles makes ingest hard: content providers each have their own standards, or none at all, delivering oddly structured Excel spreadsheets with the required information. Systems that process, quality-assure and convert metadata are often disconnected from the video processing workflow. SYNQ Media can map metadata to custom schemas or convert directly between common formats such as CableLabs, Public Schedule and TV-Anytime. We provide a unified, connected way to keep all source assets (video, metadata, subtitles, images and any other information) together throughout the processing workflow. We achieve this by creating what we call video objects.
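As a rough illustration of that kind of mapping, the sketch below renames a provider's spreadsheet-style columns into a simple target schema. The column names, target fields and helper function are hypothetical, chosen for the example; they are not SYNQ's actual schemas or formats.

```python
# Hypothetical sketch: map one provider's spreadsheet-style row to a
# simple internal metadata schema. All field names are illustrative.

PROVIDER_FIELD_MAP = {
    "Programme Title": "title",
    "Synopsis (short)": "description",
    "Year of Production": "year",
    "Runtime (min)": "duration_minutes",
}

def map_provider_row(row: dict) -> dict:
    """Rename known provider columns to the target schema; drop the rest."""
    mapped = {}
    for source_key, value in row.items():
        target_key = PROVIDER_FIELD_MAP.get(source_key.strip())
        if target_key is not None:
            mapped[target_key] = value
    return mapped

row = {
    "Programme Title": "The Example Film",
    "Synopsis (short)": "A short synopsis.",
    "Year of Production": "2016",
    "Runtime (min)": "92",
    "Internal Ref": "XYZ-1",  # unknown column, dropped by the mapper
}
print(map_provider_row(row))
```

A real converter would also validate types and required fields against the target schema before the metadata is attached to the video object.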
A video object is a unique entity represented as a JSON object. When new content providers are on-boarded, they can transfer their content in any format using any file transfer method. SYNQ manages the on-boarding and communication with the content providers, who are free to deliver any video, metadata, image and subtitle format. During import, each connected file generates a source asset inside the same unique video object. The asset holds a path to the file on our storage solution, a type, a state and other connected information. Based on the configured workflow and asset types, each asset type goes through individual processing. SYNQ Media is built on a microservice architecture: each asset type has a separate microservice in charge of converting it to the distributor's specified output, quality-assuring the information and files, and generating new output assets within the same video object.
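To make the idea concrete, here is a simplified, hypothetical shape for such a video object with a few source assets. The field names are illustrative only; the actual SYNQ API response differs (an example is linked at the end of this section).

```python
# Simplified, hypothetical shape of a video object holding source assets.
# Field names are illustrative, not the real SYNQ API response.
import json

video_object = {
    "video_id": "abc123",
    "state": "processing",
    "assets": [
        {"type": "video",    "role": "source", "state": "complete",
         "path": "/storage/abc123/source.mov"},
        {"type": "subtitle", "role": "source", "state": "processing",
         "path": "/storage/abc123/subs_en.ttml", "language": "en"},
        {"type": "metadata", "role": "source", "state": "complete",
         "path": "/storage/abc123/metadata.xlsx"},
    ],
}

# Because everything lives in one JSON object, it can be serialized,
# passed between microservices and fetched over the API as a unit.
print(json.dumps(video_object, indent=2))
```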
All files and information stay connected throughout the workflow. Distributors are notified by webhooks on state changes and can easily fetch the entire video object (with all source and output assets) using the SYNQ API and GET Video, or individual assets using GET Asset. Each individual asset has a state that the workflow acts on accordingly. When an asset's state is "complete", you can be sure that it has been quality assured and is compliant with the output specification.
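A distributor reacting to a webhook, or polling after GET Video, might inspect asset states along these lines. The object shape and the `incomplete_assets` helper are hypothetical, not part of the SYNQ API.

```python
# Hypothetical consumer-side check on a fetched video object: which
# assets are not yet quality assured? (Object shape is illustrative.)

def incomplete_assets(video_object: dict) -> list:
    """Return the assets that are not yet in the 'complete' state."""
    return [a for a in video_object.get("assets", [])
            if a.get("state") != "complete"]

video_object = {
    "video_id": "abc123",
    "assets": [
        {"type": "video",    "state": "complete"},
        {"type": "subtitle", "state": "processing"},
    ],
}

pending = incomplete_assets(video_object)
print([a["type"] for a in pending])  # → ['subtitle']
```

Once this list is empty, every asset has reached "complete" and the package is safe to publish.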
Below is an example of the response from fetching a video object through the API. The uploaded source video was in .mov format. The metadata was supplied as an Excel file and mapped to a predefined metadata schema (under "metadata"); subtitles in three languages were supplied in .ttml format, along with two images (one .png poster and one .jpeg). The source video has been transcoded to fMP4 at five different resolutions and bitrates and packaged for MPEG-DASH, HLS and Smooth Streaming. Parts of the information in the response have been removed for security reasons. The "GET Video" example response is available here: https://gist.github.com/paltorg/707e20a7349e5b738e70330149c8e5fc