Running on OPENRNDR
Detecting and matching poses from film fragments
This story is part of a series of case studies about OPENRNDR, an open source framework for creative coding written in Kotlin and Java 8. OPENRNDR simplifies writing real-time interactive audio-visual software. The framework is designed and developed with two goals in mind: prototyping and the development of robust, performant audio-visual applications.
A non-linear way of experiencing movies
Imagine you could use your body as the starting point for a search through a movie. Imagine finding film scenes similar to Rocky’s victory dance at the top of the steps just by raising your arms.
Film is by definition linear: the narrative unfolds over time. Using motion tracking, however, we can introduce a new way to interact with film footage. Rather than surrendering to the screen as a passive viewer, users can enter an archive of movies from a new perspective. This subversion of the traditional film/viewer relationship brings unexpected juxtapositions and a more engaging interaction.
The unpredictable nature of physicality brings an element of freedom into an often static and methodical activity. In its most complex form, after archival footage has been analysed, the user’s body position is recorded in real time and matched directly with film clips.
While films tend to be linear, databases are more unpredictable. Unforeseen associations emerge as one navigates them, much like strolling through a physical archive yields chance discoveries. Yet databases lack precisely this embodied, strolling dimension. Each path within the database is amplified by cinematic expansions of color values and acoustic enhancements that trigger nothing but a fiction of mastery, a fiction that obscures the actual experience of the archive.
The point of departure was the idea of exploring the inherent cinematic quality of the database while pushing film away from its linear self. By transforming the ‘filmbase’ into a mirror that responds to the body as a whole, the experience of the films and that of their selection are “folded” together. The database is reincarnated, endowed with the bodily dimension of the archive and of cinema, while films take on the archive’s collage-like structure, jumping across scenes and shots alike. A posture initiates a search request, a movement a transition between two films, and information reveals itself insofar as it is given a body: the user’s.
Navigator
In a more functional form, motion tracking can also be used as a navigation tool. You use your hands and body to search through the archive. Essentially you are the playhead, cutting a path through vast amounts of imagery. Context is challenged and new narratives are pieced together from fragments, triggering the mind in new ways.
After an analysis has been carried out on archival footage, the user’s body position is recorded and matched with film clips. You make a movement and that movement is repeated in film.
Facial tracking
Using a more intimate system, expressions can be tracked and matched with footage, to mirror your expressions or even provide opposing reactions. Especially interesting with documentary footage, this allows a ‘conversation’ between the user and the people recorded within the archive.
The code set-up in OPENRNDR
The software that handles the interaction is built with the open source framework OPENRNDR, using a few tools developed over the past years. OPENRNDR allows us to write more efficient and semantic code than ever before.
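To give a sense of what this looks like, a minimal OPENRNDR program in Kotlin is sketched below. This is not the Camera Postura code itself, just the basic skeleton every OPENRNDR application shares: a window is configured and a draw loop renders each frame in real time (details such as the clear call may differ slightly between framework versions).

```kotlin
import org.openrndr.application
import org.openrndr.color.ColorRGBa
import kotlin.math.sin

fun main() = application {
    configure {
        // window size of the installation screen (illustrative values)
        width = 1080
        height = 1920
    }
    program {
        extend {
            // the draw loop runs every frame, in real time
            drawer.clear(ColorRGBa.BLACK)
            drawer.fill = ColorRGBa.WHITE
            // a circle drifting with time, just to show the idiom
            drawer.circle(width / 2.0, height / 2.0 + 100.0 * sin(seconds), 50.0)
        }
    }
}
```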
Editor
To be able to add and edit film clips we wrote an online tool that cuts up huge movie files into smaller clips and adds or scrapes the right metadata for the movies. From the generated clips we reconstruct the human postures and gestures by recording actors in front of a Kinect. The skeleton data is saved for every single clip and loaded into the program together with the metadata.
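As a rough illustration of what is stored per clip, the sketch below uses hypothetical Kotlin data classes (the names and fields are ours, not the editor's actual format): a few tracked joints per frame plus the clip's metadata, loaded together at startup.

```kotlin
import java.io.File

// Hypothetical structures: the actual editor format may differ.
data class Joint(val name: String, val x: Double, val y: Double, val z: Double)

data class SkeletonFrame(val timeSeconds: Double, val joints: List<Joint>)

data class Clip(
    val file: String,                 // path to the generated clip
    val title: String,                // movie title from the scraped metadata
    val skeleton: List<SkeletonFrame> // recorded Kinect skeleton, one entry per frame
)

// Load all clips referenced by a simple index file, one clip path per line.
// In the real set-up the skeleton and metadata would be parsed from the
// editor's output files; here we only create placeholders.
fun loadClips(indexFile: String): List<Clip> =
    File(indexFile).readLines()
        .filter { it.isNotBlank() }
        .map { path ->
            Clip(file = path, title = File(path).nameWithoutExtension, skeleton = emptyList())
        }
```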
USE CASE: Camera Postura
Camera Postura is an experimental project by LUST and LUSTlab: an interactive installation presented during the Netherlands Film Festival 2014. The installation allowed visitors to translate their gestures into scenes from 20 of the festival’s most popular films, creating a unique poster with each pose.
Camera Postura tries to match your body language to scenes with similar poses in the movies. These scenes are then augmented with additional information, such as actor names, locations, tweets and reviews of the film. Each pose results in different matched scenes, creating unique film posters at each visit.
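How could such a match work? The actual matching code is not shown here, but the sketch below illustrates one plausible approach: reduce each skeleton to a normalised pose vector and rank clips by the distance between the live pose and the poses recorded for each clip.

```kotlin
import kotlin.math.sqrt

// A pose reduced to a flat array of joint coordinates, assumed to be
// normalised already (e.g. centred on the torso, scaled by shoulder width).
typealias Pose = DoubleArray

// Euclidean distance between two normalised poses of equal length.
fun poseDistance(a: Pose, b: Pose): Double {
    var sum = 0.0
    for (i in a.indices) {
        val d = a[i] - b[i]
        sum += d * d
    }
    return sqrt(sum)
}

// Find the clip whose recorded poses come closest to the live pose.
// clipPoses maps a clip identifier to its recorded poses, one per frame.
fun bestMatch(live: Pose, clipPoses: Map<String, List<Pose>>): String? =
    clipPoses.entries.minByOrNull { (_, poses) ->
        poses.minOfOrNull { poseDistance(live, it) } ?: Double.MAX_VALUE
    }?.key
```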
Content & design of posters
Users can try out or imitate poses to get their favourite clips on the screen. When they hold a pose for a few seconds they generate a new, personal and unique poster. The posters consist of the following elements:
Movie title
The movie title is rendered in a typeface that fits the style or content of the movie, and it is the only typographic element of the poster that is animated. The title finishes the poster and gives it a unique look.
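In OPENRNDR, rendering an animated title along these lines takes only a few lines of Kotlin. The sketch below is purely illustrative: the font path and the motion are placeholders, not the festival's actual typefaces or animations.

```kotlin
import org.openrndr.application
import org.openrndr.color.ColorRGBa
import org.openrndr.draw.loadFont
import kotlin.math.sin

fun main() = application {
    program {
        // placeholder font; each movie would load a typeface fitting its style
        val titleFont = loadFont("data/fonts/default.otf", 96.0)
        extend {
            drawer.clear(ColorRGBa.BLACK)
            drawer.fontMap = titleFont
            drawer.fill = ColorRGBa.WHITE
            // a simple animation: the title gently bobs over time
            drawer.text("MOVIE TITLE", 80.0, height / 2.0 + 20.0 * sin(seconds))
        }
    }
}
```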
Movie reviews
What is a poster without reviews? Camera Postura shows one or two reviews per generated poster (depending on the available space) from the review selection. If there are no reviews yet because the film has not been released, Camera Postura falls back to a short plot summary instead.
Custom labels
Some films deserve a special label. For the Dutch Film Festival (NFF), for instance, the Gold or Platinum film status was shown, which is awarded to films that attract a great number of visitors.
Nominations and awards
What is a festival without awards? During the Dutch Film Festival (NFF) the nominations were added to the poster as soon as they were announced.
Screenings
Live screening information was added to every poster. The system checked for up-to-date screening information every hour by scraping the screening content from the festival website.
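A minimal sketch of that kind of hourly refresh is shown below, assuming a hypothetical plain-text feed at an example URL; the real system scraped the festival website's HTML pages.

```kotlin
import java.net.URL
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// Hypothetical endpoint; the real system scraped the festival website's pages.
const val SCREENINGS_URL = "https://festival.example/screenings.txt"

// Latest screening information, shown on the posters.
var latestScreenings: String = ""

fun startScreeningUpdates() {
    val scheduler = Executors.newSingleThreadScheduledExecutor()
    scheduler.scheduleAtFixedRate({
        try {
            // fetch up-to-date screening information once per hour
            latestScreenings = URL(SCREENINGS_URL).readText()
        } catch (e: Exception) {
            // keep the previous screenings if the festival site is unreachable
            println("screening update failed: ${e.message}")
        }
    }, 0L, 1L, TimeUnit.HOURS)
}
```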
Credit block
Important cast and crew members deserve a place in the credit block. CP Editor gets the most important information from various online databases.
Tweets
Tweets from visitors containing predefined hashtags show up instantly on the screen.
Tech specs
Camera Postura integrates two Microsoft Kinect II sensors, one on each side, and two Intel NUC i7 computers. The Kinect II provides a better and sharper depth image of the surroundings and of the user(s) standing in front of Camera Postura. Microsoft introduced the Kinect in 2010 as a wireless controller for the Xbox, allowing people to control games by jumping, leaning, clapping and waving. The success of the sensor was overwhelming and soon artists and designers started to experiment with it and hack into it. Unfortunately Microsoft has stopped production of the Kinect, but Intel released a similar device, the RealSense.
Credits:
Concept, coding and design: LUSTlab & LUST
Furniture design: Gebr. Bosma, Utrecht