Vitasnella | Full body projection mapping

Saatchi Milan challenged UNIT9 to create a hardware and software solution that projects real-time ‘body deformations’ back onto a live subject. It goes a step beyond anything attempted so far, in that we aimed to add and subtract volume from the subject, as well as projecting simultaneously onto body and face. The final output of the exercise was a viral video promoting the health benefits of the Vitasnella product.

Final result | PR Story

Introduction

The challenge in this project was to create a tool that projects real-time 3D graphics onto a live model, controlled wirelessly via a mobile app and tracking body and head movements. To achieve that, we reconstructed the model's body and head in 3D, rigged it with a skeleton, and controlled the shapes and transformations of the body from a remote tablet in real time. The result was sent to a projector, which mapped it back onto the model's body.

Technical Approach

In our initial approach, we tested two machines connected via TCP, to gain finer control over the 3D rendering and the projection mapping software. However, we found that latency accumulated rapidly, for the reasons below:

  • Kinect latency to send data over the wire
  • Skanect latency to detect faces
  • Unity3D latency to render the 3D in real time
  • TCP latency to send the image to be projected over the wire
  • Projection mapping software latency to receive the data and project

Based on the above, we decided to use only one machine: we rendered the 3D in Unity and used the Syphon framework to share the image output directly with the projection mapping software (MadMapper), with no network hop in between.
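For illustration, the Unity side of this single-machine pipeline could be sketched as below. The camera renders into an off-screen texture that a Syphon plugin then publishes to MadMapper; the Syphon server component mentioned in the comments stands in for whichever plugin is actually used, and the resolution is assumed to match the projector.

```csharp
using UnityEngine;

// Minimal sketch (not the production code): render the deformation
// camera into a RenderTexture that a Syphon plugin can publish to
// MadMapper running on the same machine, avoiding any network hop.
[RequireComponent(typeof(Camera))]
public class ProjectionOutput : MonoBehaviour
{
    // Assumed to match the projector's native WUXGA resolution.
    const int Width = 1920;
    const int Height = 1200;

    RenderTexture output;

    void Start()
    {
        output = new RenderTexture(Width, Height, 24);
        output.Create();

        // The camera draws the deformed body into the texture; a Syphon
        // server component (from whatever plugin is used) then shares
        // this texture with MadMapper.
        GetComponent<Camera>().targetTexture = output;
    }

    void OnDestroy()
    {
        if (output != null)
            output.Release();
    }
}
```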

Marker-Based Motion Tracking (Omote approach)

After exploring this route, we realised that the cost and the technical and time requirements were beyond the scope of this project. Moreover, the model would have had markers on her face, which would have affected the overall aesthetics of the piece.

Kinect + Faceshift tracking

We also exhaustively tested the Microsoft Kinect hardware to detect faces and head movement, but unfortunately we reached the limits of the technology without much success. The head/face detection was unmanageably unstable and could not be normalised sufficiently. Whenever we blasted light from the projector onto the model's face, Kinect and Faceshift got lost: they could not distinguish the actual face from the projected one, making the solution unstable and unusable.

Arduino: gyro, accelerometer and magnetometer

Once Kinect was ruled out as a viable approach, we started exploring different scenarios involving Arduino-driven sensors. The clear advantage of physically sensing the movement was the potential absence of any visible tracking rig or markers on or around the model (the Kinect solution implied a rig in the camera's line of sight, 1.2 m from the subject).

We tested a 9-axis sensor and tried several data normalisation techniques, such as lerping across an array of values, averaging a fixed number of consecutive values, and even AHRS sensor fusion (the Madgwick quaternion filter), a technique used in drone flight controllers. Unfortunately, the raw data was more erratic than we had originally envisioned, and none of the techniques proved stable and sensitive enough for the purposes of this project.
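To make the first two techniques concrete, here is a minimal sketch of that kind of smoothing: a sliding-window average over consecutive readings, with the applied value lerped towards that average. The window size and lerp factor are illustrative, not the values used on set.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of two of the normalisation techniques tried on the raw
// 9-axis data: averaging a window of consecutive readings, then
// lerping the applied value towards that average to damp jitter.
public class SensorSmoother
{
    readonly Queue<Vector3> window = new Queue<Vector3>();
    readonly int windowSize;
    Vector3 smoothed;

    public SensorSmoother(int windowSize = 10)  // illustrative size
    {
        this.windowSize = windowSize;
    }

    // Feed one raw reading (e.g. gyro-derived Euler angles) per frame.
    public Vector3 AddReading(Vector3 raw, float lerpFactor = 0.2f)
    {
        window.Enqueue(raw);
        if (window.Count > windowSize)
            window.Dequeue();

        // Average the readings currently in the window.
        Vector3 sum = Vector3.zero;
        foreach (var v in window)
            sum += v;
        Vector3 average = sum / window.Count;

        // Lerp towards the averaged value rather than jumping to it.
        smoothed = Vector3.Lerp(smoothed, average, lerpFactor);
        return smoothed;
    }
}
```

The trade-off is visible in the two parameters: a larger window or smaller lerp factor hides the jitter but makes the tracking lag behind the model, which is why no setting proved both stable and sensitive enough.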

Mobile Phone sensors (iPod)

The idea of using Arduino-driven motion sensors had first come up when we tested smartphone sensors early in the discovery process. Having failed to replicate the same level of accuracy with Arduino and its sensors, we came back to the idea of using a simple smartphone to track the movements. Thanks to the signal processing done by the device's OS, this solution proved more stable and suffered less interference from the environment.

Finally, we decided to use an Apple iPod (5th generation) rigged to the model's head to track the movement. The device is lightweight, only about 88 grams, and incredibly thin. The iPod was connected to the Unity3D application via a socket, sending the smoothed-out sensor values, which were replicated in 3D in real time.
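A rough sketch of the Unity side of that link is shown below. The comma-separated "pitch,roll,yaw" packet format, the port number, and the axis mapping are assumptions made for this example, not the documented on-set protocol.

```csharp
using System.Globalization;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;
using UnityEngine;

// Sketch of receiving the iPod's smoothed attitude values over a UDP
// socket and applying them to the virtual head. The packet format
// ("pitch,roll,yaw" in degrees) and port 9000 are assumptions.
public class HeadTrackingReceiver : MonoBehaviour
{
    public Transform head;         // root bone of the reconstructed head
    public float smoothing = 10f;  // higher = snappier response

    UdpClient client;
    Thread receiveThread;
    volatile float pitch, roll, yaw;

    void Start()
    {
        client = new UdpClient(9000);
        receiveThread = new Thread(Receive) { IsBackground = true };
        receiveThread.Start();
    }

    void Receive()
    {
        var remote = new IPEndPoint(IPAddress.Any, 0);
        while (true)
        {
            byte[] data = client.Receive(ref remote);
            string[] parts = Encoding.ASCII.GetString(data).Split(',');
            if (parts.Length < 3) continue;
            pitch = float.Parse(parts[0], CultureInfo.InvariantCulture);
            roll  = float.Parse(parts[1], CultureInfo.InvariantCulture);
            yaw   = float.Parse(parts[2], CultureInfo.InvariantCulture);
        }
    }

    void Update()
    {
        // Slerp towards the latest attitude so any residual jumps are
        // invisible in the projection.
        Quaternion target = Quaternion.Euler(pitch, yaw, roll);
        head.rotation = Quaternion.Slerp(
            head.rotation, target, Time.deltaTime * smoothing);
    }

    void OnDestroy()
    {
        if (client != null) client.Close();
    }
}
```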

Adjusting the iPod on the model's head

Final setup diagram

Remote control

Projection map output

MadMapper to fine-tune the projection mapping

Final body projected

CG approach

In order to have full control of the model's body, we created a 3D model using a very sensitive body scanner, the Artec MHT 3D. This method allowed us to create a detailed 3D model that we could use for any modifications. The models went through topology clean-up and re-meshing so they could be used in any 3D software.

During the 3D scanning, the model stood in a “half T-pose” so we could get the best scan possible. She wore a swimming cap to cover her hair, and seamless underwear.

3D model workflow

Once the 3D model was delivered to us, we took a few different steps to prepare the files for projection:

Step one — 3D Mesh preparation

  • removing all unnecessary elements from the model
  • re-meshing the model to create a clean and smooth mesh topology
  • separating the head from the body
  • preparing the eyelids for animation
  • unwrapping the whole model

Step two — Texture preparation
As the Artec 3D scanner is not able to capture body texture in high resolution, we painted the texture manually, based on photos of the model from the session. Despite being time-consuming, this method gave us the best results and full control over the textures.

Step three — 3D Model modifications

Face modifications:

  • cheeks modifications
  • eyes modifications
  • nose modifications
  • mouth modifications

Body modifications:

  • legs modifications
  • torso modifications
  • bottom modifications
  • breast modifications

Step four — Asset preparation
Once all the modifications were ready, we turned each of them into a blend shape that could be easily animated and controlled in real time by code in Unity3D, as sketched below.
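A minimal sketch of that control path, assuming a hypothetical blend shape named "CheeksSlim" driven by a 0–1 slider value arriving from the tablet (the actual shape names and remote protocol are not documented here):

```csharp
using UnityEngine;

// Sketch of driving one body-modification blend shape from a remote
// control value. "CheeksSlim" is a hypothetical shape name; Unity
// blend shape weights run from 0 to 100.
public class BlendShapeController : MonoBehaviour
{
    public SkinnedMeshRenderer body;        // the scanned, rigged mesh
    public string shapeName = "CheeksSlim";

    int shapeIndex;

    void Start()
    {
        shapeIndex = body.sharedMesh.GetBlendShapeIndex(shapeName);
    }

    // Called whenever a new slider value (0..1) arrives from the tablet.
    public void OnRemoteValue(float normalized)
    {
        body.SetBlendShapeWeight(shapeIndex, Mathf.Clamp01(normalized) * 100f);
    }
}
```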

Rigging, mounts, braces and projector

Model Brace

To ensure the model's body stayed static during our experiment, we created a brace structure to hold her in position. This kept the model's body in the correct place and let us achieve the shots we needed. The brace also incorporated a removable head rig, which was taken off when we shot the head moving and re-attached when we needed perfect shots of the face. The brace wasn't visible from the front view of the model, ensuring it didn't interfere with the shoot.

Projector Model: Panasonic PT-DZ110, 3-chip DLP, 10,000 ANSI lumens, WUXGA

Behind the Scenes

Credits

Agency: Saatchi&Saatchi Milan
Brand: Vitasnella
Creative Director: Francesco Bernabei
Executive Producer: Marc D’Souza
Live action Producer: Nick&Gabs
Photography / Director: JP
Project Manager: Martin Jowers
Technical Director: Silvio Paganini
Tech Lead on set: David Hartono
Unity3D Developer: Andrew Oaten
Hardware support: Christian Bianchini, Mateusz Marchwicki
3D artists: Karol Góreczny, Sophie Langohr
Art Director: Karol Góreczny
Production Company: UNIT9