Augmented live match experiences

Augmented Experience of Media Presentation Events

George Krasadakis
4 min read · Jan 5, 2017

Typical coverage of a live competitive event, such as a sports match, focuses on (and is limited to) live broadcasting and commentary. Yet a growing share of users watching a live match on television also use a companion smart device (e.g., a tablet, mobile phone, or laptop) to search for additional information within the context of the live match. The same behavior applies to non-live media events such as movies and television programs. The detailed patent application can be found here

As users watch the live match, the tablet/device seamlessly synchronizes and presents highly related metadata in near real time, timed to match what is happening on screen. This enables powerful 'companion apps' for sports events targeting a massive worldwide audience and unlocks significant user-engagement scenarios.
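
The application does not prescribe a wire format or a synchronization mechanism, so the following is only a minimal sketch of the idea: a hypothetical second-screen client that buffers time-stamped metadata packets and displays each one once the (estimated) match clock catches up. All names here (MetadataPacket, CompanionClient) are illustrative, not from the patent.

```python
import time
from dataclasses import dataclass
from queue import Queue, Empty

# Hypothetical shape of a metadata packet pushed by the server;
# the patent does not specify a wire format.
@dataclass
class MetadataPacket:
    match_clock_s: float  # moment in the match this packet relates to
    entity: str           # e.g., a player, team, or stadium
    content: str          # fact/statistic to display

class CompanionClient:
    """Minimal second-screen client: buffers incoming packets and
    shows each one once the (estimated) match clock reaches it."""

    def __init__(self):
        self.inbox: Queue[MetadataPacket] = Queue()
        self.start = time.monotonic()

    def match_clock(self) -> float:
        # In practice this offset would come from broadcast sync;
        # here we approximate it with wall-clock time since start.
        return time.monotonic() - self.start

    def run_once(self) -> None:
        try:
            packet = self.inbox.get_nowait()
        except Empty:
            return
        # Hold the packet until its moment arrives, then display it.
        delay = packet.match_clock_s - self.match_clock()
        if delay > 0:
            time.sleep(delay)
        print(f"[{packet.match_clock_s:6.1f}s] {packet.entity}: {packet.content}")

if __name__ == "__main__":
    client = CompanionClient()
    client.inbox.put(MetadataPacket(1.0, "Player #10", "3 goals in the last 5 matches"))
    client.inbox.put(MetadataPacket(2.5, "Stadium", "Capacity 54,000; opened 2006"))
    for _ in range(2):
        client.run_once()
```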

The architecture provides an augmented user experience with a smooth, enriched, and personalized information flow during a live competitive event such as a sports game, or during non-live media presentations. A user whose device has the application components installed experiences automatic synchronization of content with the entities, activities, and moments occurring in the live match being watched. This is achieved through logic applied to a combination of inputs: entities, activities, and moments continuously identified using, among other techniques, natural language processing. The user experience associated with the media presentation of the event on a first user device is augmented by the automatic identification of the live match, the teams, the stadium, the players, and so on, and by the generation and presentation of highly related content on a second user device with which the user is currently interacting.
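
To make the identification step concrete, here is a minimal sketch of entity and moment spotting over a commentary snippet, using a toy gazetteer and a keyword list. A production system would rely on trained named-entity recognition over transcribed audio rather than string matching; everything below (KNOWN_ENTITIES, scan_commentary) is illustrative.

```python
import re
from dataclasses import dataclass

# Toy gazetteer standing in for a knowledge base of known entities;
# a real system would use trained NER models instead.
KNOWN_ENTITIES = {
    "ronaldo": ("player", "Cristiano Ronaldo"),
    "real madrid": ("team", "Real Madrid"),
    "bernabeu": ("stadium", "Santiago Bernabéu"),
}

MOMENT_KEYWORDS = {"goal", "penalty", "red card", "kick-off"}

@dataclass
class Detection:
    kind: str      # "entity" or "moment"
    label: str
    position: int  # character offset in the commentary text

def scan_commentary(text: str) -> list[Detection]:
    """Spot known entities and event moments in a commentary snippet."""
    found = []
    lowered = text.lower()
    for surface, (etype, canonical) in KNOWN_ENTITIES.items():
        for m in re.finditer(re.escape(surface), lowered):
            found.append(Detection("entity", f"{etype}:{canonical}", m.start()))
    for kw in MOMENT_KEYWORDS:
        for m in re.finditer(re.escape(kw), lowered):
            found.append(Detection("moment", kw, m.start()))
    return sorted(found, key=lambda d: d.position)

print(scan_commentary("Goal! Ronaldo scores for Real Madrid at the Bernabeu!"))
```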

The disclosed architecture enables an automated flow of complementary information (content) while viewing a media presentation, whether a live event or a non-live event. For example, during a live sporting event (or game), while the user enjoys the live broadcast on a television, additional content can be generated and presented on another user device (e.g., a device the user is currently holding or interacting with), such as facts, statistics, and content about the live event and the entities associated with it.

The architecture can be realized using one or more client applications and a server infrastructure, where the client applications communicate with the server infrastructure to enable:

- the unattended (i.e., free of user input) identification of entities (e.g., people, teams, sports, leagues, places, things) as these entities are mentioned by the commentator of the event;
- the synthesis of the most appropriate package of content to be served to the user;
- the unattended identification of a commentator, based on speech patterns;
- the unattended identification of the live event the user is watching (or listening to);
- the continuous multi-stage optimization of content sets; and
- a distributed quality-based model for entity extraction.
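
To make that client-server flow concrete, here is a condensed, purely illustrative Python pipeline: commentary audio goes in, entities come out, and a content package is synthesized and pushed to the companion device. Every function is a stub standing in for a real speech-to-text, entity-extraction, or content service; the patent describes capabilities, not these implementations.

```python
from dataclasses import dataclass

@dataclass
class ContentPackage:
    entity: str
    items: list[str]

# All stages below are stubs: the patent names the capabilities
# (transcription, entity extraction, content synthesis) without
# prescribing concrete algorithms or services.

def transcribe(audio_chunk: bytes) -> str:
    """Speech-to-text stand-in; a real system would call an ASR service."""
    return "Goal by player ten, assisted from the right wing"

def extract_entities(transcript: str) -> list[str]:
    """Entity-extraction stand-in (e.g., an NER model over the transcript)."""
    return ["player ten"]

def synthesize_package(entity: str) -> ContentPackage:
    """Assemble the 'most appropriate' content for an entity; normally this
    would query stats/knowledge services and rank the results."""
    return ContentPackage(entity, [f"Season stats for {entity}",
                                   f"Recent highlights of {entity}"])

def push_to_companion(package: ContentPackage) -> None:
    """Delivery stand-in: push the package to the second-screen device."""
    print(f"-> companion device: {package}")

def handle_audio(audio_chunk: bytes) -> None:
    """End-to-end flow: commentary audio in, enriched content out."""
    for entity in extract_entities(transcribe(audio_chunk)):
        push_to_companion(synthesize_package(entity))

handle_audio(b"\x00\x01")  # fake audio payload for the demo
```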

The set of content is automatically generated and served to the user as the next "best" mix of content for the protagonist (e.g., the player(s), the actor, the singer, depending on the class of the event), as associated with a moment (defined as a point in time associated with the event, or an occurrence within the event) and/or other entities. In the context of a game, for example, the content mix can refer to a player, but can also (or alternatively) describe additional entities such as the team, the stadium, the league, or the referee. The content mix is synthesized dynamically for a given moment in time or for the moment (action) itself, a process triggered by event moments identified in real time.
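
The "next best mix" is, at its core, a ranking problem. The application speaks of continuous multi-stage optimization without fixing a formula, so the scoring below is purely illustrative: candidates are weighted by relevance to the triggering moment and by freshness, with a boost for items about the protagonist.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    entity: str       # who/what the item describes
    relevance: float  # 0..1 relevance to the triggering moment
    freshness: float  # 0..1, higher = newer / not yet shown
    text: str

def best_mix(candidates: list[Candidate], protagonist: str, k: int = 3) -> list[Candidate]:
    """Pick the next 'best' k items for the current moment.

    Illustrative weights only; a production system would tune or
    learn them as part of the multi-stage optimization."""
    def score(c: Candidate) -> float:
        boost = 0.3 if c.entity == protagonist else 0.0
        return 0.5 * c.relevance + 0.2 * c.freshness + boost
    return sorted(candidates, key=score, reverse=True)[:k]

mix = best_mix(
    [Candidate("player ten", 0.9, 0.8, "Hat-trick last week"),
     Candidate("stadium", 0.4, 0.9, "Attendance record tonight"),
     Candidate("referee", 0.3, 0.5, "Officiated the 2014 final")],
    protagonist="player ten",
)
for item in mix:
    print(item.text)
```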

The same experience extends beyond live sports to other forms of live and non-live media presentation, such as movies, videos, video clips, audio files, and audio clips. A user device with the application components installed synchronizes seamlessly with the match (or presentation) being watched, driven by logic applied to a combination of inputs and by entities continuously extracted using natural language processing (NLP) technologies, for example from verbal or textual commentary.

The user simply puts the device into the appropriate operating mode, and the rest happens automatically and seamlessly: the identification of the match being presented on another device, the sport, the league, the teams, the stadium, the players, and so on. An always-on mode is also supported, in which the user activates the application on the device once, and from then on the identification of what the user is watching happens automatically, with no user input (unattended).
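
How the device determines, unattended, what the user is watching is left open by the text; audio fingerprinting is one plausible technique. The toy always-on loop below samples audio periodically and looks each sample up in a fingerprint index, with raw hashes standing in for robust acoustic fingerprints.

```python
import hashlib
import time

# Toy "fingerprint index" mapping audio hashes to known broadcasts;
# a real system would use robust acoustic fingerprints, not raw hashes.
BROADCAST_INDEX = {
    hashlib.sha256(b"sample-of-el-clasico").hexdigest(): "La Liga: Real Madrid vs Barcelona",
}

def capture_audio_sample() -> bytes:
    """Microphone-capture stand-in; returns a fixed sample for the demo."""
    return b"sample-of-el-clasico"

def identify_broadcast(sample: bytes) -> str | None:
    return BROADCAST_INDEX.get(hashlib.sha256(sample).hexdigest())

def always_on_loop(max_iterations: int = 3, interval_s: float = 0.1) -> None:
    """Unattended mode: keep sampling until the broadcast is recognized."""
    for _ in range(max_iterations):
        match = identify_broadcast(capture_audio_sample())
        if match:
            print(f"Identified: {match}; starting content sync")
            return
        time.sleep(interval_s)
    print("No known broadcast identified")

always_on_loop()
```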
