Choreographic Coding with Effect

Florian Jenett
Motion Bank
May 28, 2019
CCL Mainz 2019 at Kunsthalle Mainz; Photo: Vanessa Liebler

TL;DR — While preparing for the recent Choreographic Coding Lab (CCL) in Mainz, we thought it would be nice to give the more code-oriented part of our audience an introduction to the “Effect data” and how they might go about exploring it.

Effect by Taneli Törmä, Trailer by Andreas Etter for tanzmainz

For the 2nd Mainz CCL (April 28th – May 2nd, 2019) we offered a data set collected as part of a larger project called Between Us (with Staatstheater Mainz and Kunsthalle Mainz). The data stems from a new piece called Effect by Finnish choreographer Taneli Törmä and consists of multiple parts. First, it contains the documentation (videos and annotations) of the whole creation process of the piece. Second, sound, video and full body motion capture data of one full run are included. And finally, we added interviews with the choreographer, the dancers, Motion Bank staff and our project partners.

There is a lot more data and background information to share about the Between Us project and the piece itself. Since that is not the focus of this post, we will just note that the exhibition at Kunsthalle Mainz is still running until June 16th ’19 and that there is an Online Score for Effect that you should visit.

The process documentation

The key to understanding the piece Effect was being able to visit and document its creation. Our dance researcher David Rittershaus got to spend almost all studio days with the tanzmainz dancers Bojana Mitrović, Amber Pansters, Milena Wiese, Zachary Chant, Finn Lakeberg and choreographer Taneli Törmä during the rehearsals.

The documentation was done using methods and tools that we develop as part of our research at Motion Bank. The part of our system that was built to support process documentation is called Piecemaker. It was started by David Kern inside The Forsythe Company Frankfurt around 2008 and has been in development ever since. It allows us to link time-based annotations with video materials.

Video of rehearsal on September 13th 2018

The documentation of the rehearsal process consists of 61 videos with annotations, captured during the creation of Effect at Staatstheater Mainz between September 12th and November 19th 2018.

These videos not only capture the way choreographer Taneli Törmä set up the creation process, they also hold the language that arose between him and the dancers of tanzmainz. A lot of this language consists of names/labels that were invented to introduce, discuss and reference elements of the choreography. These are often not descriptive but rather point to shared memory or knowledge inside the team. The videos and annotations also contain background information about all the details that are part of Effect and that are sometimes not even noticeable on stage.

About 60 minutes of Effect (unfolded) on the walls of the tower of Kunsthalle Mainz

Motion Bank used this information to create an index into Effect inside the stairwell of the new tower of Kunsthalle Mainz. Over 2.5 turns one can walk along a timeline on the walls that represents the ~60 minutes of the piece. It consists of the times and names of the elements of the choreography with additional descriptions and visualisations. The index will not only allow the visitors of the exhibition to study the piece but also help establish a language and (mental) images for each part of the choreography to support communicating about the piece with others.

The player, which we are introducing below, will send the names of these core elements alongside the movement data as it plays. A full list of all elements with times can be found here.

The movement data

In 2017 the design department of Hochschule Mainz, where Motion Bank has been based since 2016, decided to invest in technology that would allow students to explore and learn about spatial interaction. After some research, it opted for a marker-less motion capture system (see The Captury), which has successfully been used in educational settings at Bauhaus University Weimar and the Academy of Fine Arts Vienna. The system uses state-of-the-art machine learning technology (y’know, that AI/ML stuff) to calculate full body skeletons (29 joints) for up to 3 movers in real time.

A video snap from one of our test recording sessions.

Recording Effect pushed the system to its limits though. Classic motion capture (“mocap”) is mostly set up to record small chunks of movement that are then used for CGI or as movement samples in computer games. When recording a full ~60 minute dance piece that, of course, is not a viable strategy, because there is no way of splitting the piece into recordable chunks and putting it back together afterwards. When explaining the difference to classic mocap we like to compare our situation to a video documentary, where one just sets up a camera and lets it record, as opposed to a film set, where one records single scenes. For Effect, we had to exceed the spatial assumptions of the system to cover the 8 x 8 meter dance area plus some extra space around it to capture dancers entering and leaving. Recording 5 dancers at once is also beyond what the system is used to “seeing”. After two longer test recordings, we found a camera setup that gave us good recognition results and then spent an additional month after the recording manually improving and re-tracking parts of the movement data. The final result can be seen below.

The full ~60 minute recording showing the movement data through stylised avatars and additional choreographic aspects as coloured graphs. Created and rendered using Three.js

Effect Player

During our first CCL in the context of the Between Us project (September 2018), we found that participants had a hard time understanding the shape of the movement data, coping with the formats (BVH / FBX) and handling the sheer amount of data we offered. Further labs and the exhibition needed something else: low-threshold, direct access that gets people started quickly.

Screenshot of the Effect Player

We created a data player that can be downloaded and that holds the movement data and core annotations of one full run, recorded in November 2018. The “Effect Player” allows one to play back the movement data — similar to a video player — and study it through a simple 3D representation. At the same time it streams the data on the local machine through two very popular, standard protocols: Open Sound Control (OSC) and WebSockets. A simple filter system allows one to select what will be streamed, for example limiting it to one performer or just a selection of joints.
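If you just want to peek at what the player streams before building anything, a few lines of JavaScript with osc.js (the same library and local WebSocket address used in the walkthrough below) are enough to log every incoming message. Treat this as a quick sketch rather than one of the official examples:

// minimal "what is the player sending?" sketch
// uses osc.js over the player's local WebSocket endpoint (see below)
const oscPort = new osc.WebSocketPort({
  url: "ws://127.0.0.1:8888",
  metadata: true
})

oscPort.on("message", function (message) {
  // prints e.g. "/Amber/Hips (7 args)"
  console.log(message.address + " (" + message.args.length + " args)")
})

oscPort.open()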

The player and data can be downloaded here: effect.motionbank.org

To get people started quickly we compiled a collection of examples using popular frameworks like Processing/P5.js, Three.js, A-Frame and others. As a side note, I’d like to mention that for our own work at Motion Bank we have stopped using most of these, since core web technology like SVG already offers most of the functionality we need and integrates a lot better with our tool stack.
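To give a rough idea of what that looks like, here is a small, unofficial sketch (assuming the player is streaming as described below and osc.js is loaded) that moves a plain SVG circle with the incoming joint data, no drawing framework involved:

// unofficial sketch: follow one joint with a plain SVG <circle>
const SVG_NS = "http://www.w3.org/2000/svg"

const svg = document.createElementNS(SVG_NS, "svg")
// stage coordinates scaled down by 10 and centred, like the P5 example below
svg.setAttribute("viewBox", "-400 -400 800 800")
svg.setAttribute("width", 800)
svg.setAttribute("height", 800)
document.body.appendChild(svg)

const dot = document.createElementNS(SVG_NS, "circle")
dot.setAttribute("r", 5)
svg.appendChild(dot)

const oscPort = new osc.WebSocketPort({ url: "ws://127.0.0.1:8888", metadata: true })
oscPort.on("message", function (message) {
  // x and z in millimetres, top view as in the P5 example
  dot.setAttribute("cx", message.args[0].value / 10)
  dot.setAttribute("cy", message.args[2].value / 10)
})
oscPort.open()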

Getting started

A simple P5.js sketch showing a dot and a line

To get you started, let us walk you through a simple beginner’s example based on P5.js. It draws one joint of one performer as a dot with a trail … Wooohoo! Feel free to download and hack it. For starters, I recommend using Brackets as a code editor. You should also go and download, unzip and start the Effect Player now if you have not done so already. Then also download the Effect data (same site), unzip it and load it into the player.

There are a couple of aspects that you need to grasp:

  • the framing consisting of HTML, CSS and a basic “P5 sketch” (JavaScript)
  • the part of the code that receives the data through WebSockets
  • how the data is being drawn

The framing is a simple HTML page with a tiny amount of CSS to reset some browser defaults and center the drawing area. It also includes the basics of a P5 sketch.

<!DOCTYPE html>
<html>
  <head>
    <title>Simple Dot Trail in P5</title>
    <script src="js/p5.min.js" defer></script>
    <script src="js/osc-browser.min.js" defer></script>
    <script>
      // ...
      function setup () {
        createCanvas( 800, 800 )
      }
      // ...
      function draw () {
        background( 235 )
        translate( width/2, height/2 )
        // ...
      }
      // ...
    </script>
    <style>
      body {
        width: 100vw;
        height: 100vh;
        margin: 0;
        padding: 0;
        display: flex;
        align-items: center;
        justify-content: center;
      }
    </style>
  </head>
  <body><!-- P5 sketch goes here --></body>
</html>

Next, we need to set up the sketch to receive the OSC data through the WebSocket connection of the player. First, the player needs to be configured to do so. Make sure you check “Send OSC packets” and also “Use WebSockets”. Uncheck “Send annotations over OSC” as we are not using them.

The motion data — which is contained inside the player — consists of 5 skeletons, one for each performer, with 29 joints each. The recording is ~60 minutes long and was recorded at 50 frames per second. When playing, one OSC message per joint and performer will be sent by the player at the set frame rate (see the “Skip frames” option). Since we are only interested in one single point, you should now set the filter in the player to only send that: pick one performer (Amber) and one joint (Hips). The message that you will now receive has the following address signature: /<PerformerName>/<JointName> (i.e. /Amber/Hips) and contains 7 float values. The first 3 of these are an absolute coordinate [X,Y,Z] (right-handed system) and the last 4 are a quaternion (which we ignore for now).
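As a quick reference, this is one way the seven values of such a message could be unpacked (plain JavaScript, matching the layout just described; the helper name is ours, not part of the player or the examples):

// split the 7 float arguments of a joint message into position + rotation
// message.args is delivered by the osc.js setup below as an array of { type, value }
function unpackJoint (message) {
  const [ , performer, joint ] = message.address.split("/") // "/Amber/Hips"
  const [ x, y, z, qx, qy, qz, qw ] = message.args.map(a => a.value)
  return {
    performer,                                // e.g. "Amber"
    joint,                                    // e.g. "Hips"
    position: { x, y, z },                    // millimetres, centred on the stage
    rotation: { x: qx, y: qy, z: qz, w: qw }  // quaternion, ignored in this tutorial
  }
}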

We are using a JavaScript library called osc.js as a middleman between the normal WebSocket API of the browser and our sketch.

// we need a variable to store our data in
let positions = []

// ... and a maximum length for the trail (tweak as you like)
const maxPositions = 100

// then we set up the port for the player
const oscPort = new osc.WebSocketPort({
  url: "ws://127.0.0.1:8888",
  metadata: true
})

// next we define a function to handle incoming messages
const onWebSocketMessage = function (message) {

  // console.log(message) will give something like:
  // message {
  //   address: '/<Performer>/<Joint>',
  //   args: [ x,y,z, x,y,z,w ]
  // }

  // store the x,y part of the message in two handy variables ...
  // note that we are using Z as Y here to look at the stage
  // from atop
  const x = message.args[0].value / 10
  const y = message.args[2].value / 10

  // clip our positions array if it has too many values
  if (positions.length > maxPositions) {
    positions = positions.slice(1) // remove first item
  }

  // now add the new x,y values as object to our array
  positions.push({ x, y })
}

// finally we register our callback function
oscPort.on('message', onWebSocketMessage)

// ... and start listening
oscPort.open()

The final part of our sketch is drawing the data. The dance floor that the piece was recorded on was 8 x 8 meters. All incoming coordinates are in millimetres (an 8000 x 8000 area) and are centred on that stage (so values range from -4000 to 4000). As you can see above we set our sketch to be 800 x 800 pixels, hence we divide all incoming values by 10 to make them fit (4000 mm → 400 px). Drawing the data now is pretty simple …

// using the Processing/P5 "drawing loop"
function draw () {
  background(235)

  // nothing to draw until the first message has arrived
  if (positions.length === 0) return

  // translate to the center
  translate( width/2, height/2 )

  // draw the trail as a line through all stored positions
  stroke(0)
  noFill()
  for (let i = 1, k = positions.length; i < k; i++) {
    const p1 = positions[i-1]
    const p2 = positions[i]
    line(p1.x, p1.y, p2.x, p2.y)
  }

  // draw the dot at the most recent position
  noStroke()
  fill(0)
  const last = positions.length - 1
  ellipse(positions[last].x, positions[last].y, 10, 10)
}
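As a possible next step (not one of the published examples), you could widen the filter in the player and keep the latest position per OSC address, which turns the single dot into one dot per joint and performer. The draw() below would replace the one above:

// sketch of an extension: one dot per incoming joint, keyed by OSC address
const latest = new Map()

oscPort.on('message', function (message) {
  latest.set(message.address, {
    x: message.args[0].value / 10,
    y: message.args[2].value / 10 // again using Z as Y for the top view
  })
})

function draw () {
  background(235)
  translate( width/2, height/2 )
  noStroke()
  fill(0)
  for (const p of latest.values()) {
    ellipse(p.x, p.y, 10, 10)
  }
}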

That is pretty much it. There are examples using other frameworks available on GitHub (and we are still working on more).

Links

Between Us Online Score and exhibition
Choreographic Coding Labs
Motion Bank

Between Us is funded by the German Federal Cultural Foundation.

Florian Jenett is a professor at Hochschule Mainz, former director of Motion Bank, and head of KITeGG.