Would you dare Vogue?

A Voguing inspired dance game

ICM Finals @ ITP


The idea of transforming the dance battle known as Voguing into a digital experience came to me in the middle of the semester. At that point my approach was very experimental and (maybe) naive, using three flex sensors to track the movements of the dancer’s arm. This time, with a greater emphasis on the visuals and storytelling, my goal was to create something as seamless as a game.

Lots of fun and fierceness in this project.

The concept was simple: the user would see the image of a dancer on the screen and would try to follow the movements in sync. To make this happen, I divided the project into five main tasks:

  1. Develop a program to track the positions of a dancer while recording a video.
  2. Develop a program to make the recorded video easy for the user to follow. This will serve as the reference for the gamer.
  3. Develop a program that captures the user and overlays that feed on top of the dancer’s video, so the user can compare the positions of both.
  4. Combine the previous two programs and compare the distance between the dancer’s and the gamer’s positions.
  5. Add incentives and gamification aspects to improve user engagement.
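The comparison in step 4 can be sketched in plain Java (Processing sketches compile to Java). This is an illustrative version, not the project’s actual code: average the Euclidean distance between matching joints of the dancer and the player, where a lower number means better sync. The array layout and method names are my assumptions.

```java
public class PoseCompare {
    // dancer and player are arrays of [x, y] joint positions, same joint order.
    // Returns the average per-joint distance; lower means the player is closer
    // to the dancer's pose on this frame.
    static double averageJointDistance(float[][] dancer, float[][] player) {
        double sum = 0;
        for (int i = 0; i < dancer.length; i++) {
            double dx = dancer[i][0] - player[i][0];
            double dy = dancer[i][1] - player[i][1];
            sum += Math.sqrt(dx * dx + dy * dy);
        }
        return sum / dancer.length;
    }
}
```

Summed over a whole song, a running average like this is also one plausible basis for an end-of-game score.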

As I went through those steps, I had to change the techniques applied and adapt my original scope.

Let’s get technical:

I started with the Kinect to capture the video of the dancer. My departure point was an example found on the web: the Record/Play sketch. I cleaned up the code a bit so that I kept only the “Record” part when filming the dancer. After adding a few lines to draw the skeleton, I drew ellipses on every joint and exported the joints’ positions to a spreadsheet (code at the bottom of the page).
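The export itself boils down to writing one spreadsheet row per frame, with an x/y pair for each tracked joint. A minimal sketch of that idea in plain Java, assuming a simple CSV layout (the real sketch’s format may differ):

```java
public class JointCsv {
    // Builds one CSV row: frame number, then x,y for each joint.
    static String toCsvRow(int frame, float[][] joints) {
        StringBuilder sb = new StringBuilder(String.valueOf(frame));
        for (float[] joint : joints) {
            sb.append(',').append(joint[0]).append(',').append(joint[1]);
        }
        return sb.toString();
    }
}
```

One row per draw() call, appended to a file, is all it takes to replay the ellipses later.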

One of the videos from the Kinect camera that was saved correctly
What it looks like when a video does not play correctly (cries!!!)

If the recording part went well, I cannot say the same for the “Play”. I recorded nine videos in total, and only two of them played as expected. The example uses the SimpleOpenNI library, and for (I guess) some versioning reason it doesn’t always work smoothly with the Kinect. The videos were saved in the .oni format, and judging by the file sizes they were definitely video files, but I couldn’t find out why they were not being displayed.

Luckily, I had attached a 5D camera on top of the Kinect, so I could work with the beautiful images taken with it. Using Processing’s background removal example, I created the black-and-white silhouette aesthetic that I envisioned as the reference for the gamer.
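The background-removal idea is simple to sketch in plain Java on grayscale pixel arrays: store one frame of the empty room, then mark any pixel that differs from it by more than a threshold. The threshold value here is an assumption; the real Processing example works on color pixels, but the logic is the same.

```java
public class BackgroundRemoval {
    // background and frame are grayscale pixel values (0-255).
    // Returns a silhouette image: white (255) where the current frame
    // differs from the stored background by more than the threshold.
    static int[] silhouette(int[] background, int[] frame, int threshold) {
        int[] out = new int[frame.length];
        for (int i = 0; i < frame.length; i++) {
            out[i] = Math.abs(frame[i] - background[i]) > threshold ? 255 : 0;
        }
        return out;
    }
}
```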

You Can’t Always Get What You Want

When I tackled the task of syncing the joint positions, I faced the most difficult and frustrating moment of the project.

First, because it was hard to sync the video from the 5D camera with the ellipses drawn from the Kinect’s exported data. After time-consuming attempts and a few successes, I was shocked to realize that the spreadsheets held data for only a few frames, which caused the ellipses to be drawn for just a few seconds after the video started. I suppose this was the result of a “keyPressed” function I created to better control the moment the data export would start. The issue is that if the Kinect’s skeleton detection fails (and it does), I would need to click a second time to start saving the data again. Of course, I did not remember this while recording Javier Ninja, the professional voguer who agreed to help with the project.
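One way to avoid that gap (a sketch of the fix, not the original code): keep a single “recording” flag toggled by the key press, and write a row on every frame where the skeleton is actually tracked, so a lost-and-regained skeleton resumes exporting automatically without a second click.

```java
public class RecordingGate {
    // Decides whether to write this frame's joint data.
    // recording: toggled once by keyPressed; skeletonTracked: reported
    // by the Kinect each frame. No second key press is needed after a
    // tracking dropout, because the gate re-opens on its own.
    static boolean shouldWrite(boolean recording, boolean skeletonTracked) {
        return recording && skeletonTracked;
    }
}
```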

But you just might find what you need!

Knowing that I would need to record Javier again to get new data, I decided to change gears and work on the front-end / storytelling part of the game. I applied a different pixel effect to the video captured live while the user plays. Called motion detection, this technique reveals the extremities of the user’s body as he or she moves.

The user’s silhouette is drawn in pink when he or she moves.
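Motion detection is a close cousin of background removal: instead of comparing against a stored background, you compare each pixel to the previous frame, so only pixels that just changed (the moving edges of the body) light up. A plain-Java sketch of that per-pixel test, with an assumed threshold:

```java
public class MotionDetect {
    // prev and curr are grayscale pixel values (0-255) for two consecutive
    // frames. A pixel counts as "moving" when it changed by more than the
    // threshold; in the sketch, those are the pixels tinted pink.
    static boolean[] movingPixels(int[] prev, int[] curr, int threshold) {
        boolean[] moving = new boolean[curr.length];
        for (int i = 0; i < curr.length; i++) {
            moving[i] = Math.abs(curr[i] - prev[i]) > threshold;
        }
        return moving;
    }
}
```

Because a still body produces no frame-to-frame change, only the extremities in motion show up, which is exactly the effect in the image above.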

To improve performance, I avoided applying pixel effects to both the dancer’s and the user’s videos at the same time. Instead, I saved each frame of the dancer’s Processing sketch and made a new video from them, which then became the background of the final sketch.

The final touch to help engage the user came in the form of incentive/fun words drawn on the screen, for example: Fierce, Extravaganza, Bam!, and other slang from the voguing scene. I also added a (not entirely accurate) score at the end, so the user gets some reward for the effort. Based on the reactions and feedback of the people who tried it out, I can say the experience is engaging. And you? Would you dare voguing?

By Edson Soares


Code for recording and tracking/saving joint positions (needs a Kinect):

Final code (needs a computer w/ webcam):
