Family photo stories by A.I. agents

Artificial intelligence can now autonomously build stories from your family's experiences

Imagine a connected-home scenario in which the system knows the family (and can recognize its members by voice and facial analysis) as well as the family's patterns and preferences. Imagine a system that can extract context from casual conversations inside the house and dynamically present relevant stories on the right screen, while measuring how engaged the audience is.

Or imagine allowing family members to search for and discover ‘moments’ as a series of photos and/or videos, all via voice, gesture and gaze recognition.

Imagine if you could speak over photos and videos to seamlessly add metadata, indexing them for more effective discovery in the future, all through natural user interfaces.

This patent application describes a range of related methods and scenarios.

Individuals, as well as families and other groups of individuals, are increasingly generating and storing large collections of media data files in digital form, including but not limited to photos, videos, audio and related rich media.

These media data files are captured using multiple computer devices and are stored in multiple computer storage systems, including but not limited to non-removable storage devices in computers, removable storage devices, online storage systems accessible by computers over computer networks, and online services, such as social media accounts. Such media data are also being transmitted and shared among individuals through multiple transmission and distribution channels.

The large volume of media data, and the distribution of media data files across multiple storage systems and multiple transmission and distribution channels, can make overall management, administration, retrieval and use of media data files both difficult and time-consuming for individuals or groups.

While some systems can index large volumes of media data, such systems are generally limited to processing the media data itself, or to responding to explicit user instructions, in order to generate metadata about the media data or the media data files.

As a result, management, administration, retrieval and use of such media data files is also generally limited to the metadata available for the media data and the media data files.

A computer system automatically organizes, retrieves, annotates and/or presents media data files as collections of media data files associated with one or more entities, such as individuals, groups of individuals or other objects, using context captured in real time from a viewing environment.

The computer system presents media data from selected media data files on presentation devices in the viewing environment and receives and processes signals from sensors in that viewing environment.

The processed signals provide context, which can be used to select and retrieve media data files, and can be used to further annotate the media data files and/or other data structures representing collections of media data files and/or entities.
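The select-and-annotate loop described above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation; all names here (`MediaFile`, `Context`, `select_media`, `annotate`) are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class MediaFile:
    path: str
    tags: set = field(default_factory=set)  # metadata accumulated over time

@dataclass
class Context:
    entities: set   # e.g. people recognized in the viewing environment
    keywords: set   # e.g. topics extracted from conversation

def select_media(library, ctx):
    """Rank files by overlap between their tags and the current context."""
    scored = [(len(f.tags & (ctx.entities | ctx.keywords)), f) for f in library]
    return [f for score, f in sorted(scored, key=lambda s: -s[0]) if score > 0]

def annotate(selected, ctx):
    """Feed the captured context back into the library as new metadata."""
    for f in selected:
        f.tags |= ctx.keywords

library = [MediaFile("beach.jpg", {"alice", "beach"}),
           MediaFile("ski.jpg", {"bob", "ski"})]
ctx = Context(entities={"alice"}, keywords={"vacation"})
picks = select_media(library, ctx)   # beach.jpg matches "alice"
annotate(picks, ctx)                 # beach.jpg now also tagged "vacation"
```

The key design point the application describes is this feedback loop: context is used both to choose what to show and to enrich the metadata for future retrieval.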

In some implementations, the computer system can continually process signals from sensors in the viewing environment to continuously identify and use context from that environment.
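That continual processing might be structured as a loop over an incoming sensor stream. A hedged sketch follows, assuming a trivial keyword extractor; `process_signals` and the signal format are invented for illustration:

```python
def process_signals(signal):
    """Derive a minimal context (here: the last spoken word as a topic)."""
    speech = signal.get("speech", "")
    return {"topic": speech.split()[-1]} if speech else {}

def run(sensor_stream, on_context):
    """Continuously turn raw signals into context and act on it."""
    for signal in sensor_stream:
        ctx = process_signals(signal)
        if ctx:
            on_context(ctx)  # e.g. re-select the stories being presented

seen = []
run([{"speech": "remember our trip to paris"}, {"noise": 1}],
    on_context=seen.append)
# Only the speech signal yields context: [{'topic': 'paris'}]
```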

Microsoft Technology Licensing, LLC
US · Family ID: 1000001895608 · Appl. No.: 14/993035 · Filed: January 11, 2016
