Lessons of Auschwitz: VR-project created with the help of Volumetric capture. Part 4

Published in PHYGITAL by Phygitalism Inc · Aug 10, 2020 · 10 min read

On January 27 we released a video art project that we had been working on for 2.5 months. This series of articles is devoted to the concept of our project, both its meaning and its technology. We will also show you why you should not fear the unknown.

Part 1 \ Part 2 \ Part 3 \ Part 4

Introduction

In the previous articles we told you about the concept behind our project, about the research we conducted, about the process of recording, and about the equipment that we used (cameras, VR headsets).

The next stage of our pipeline was processing the recorded material and producing CG content using effects built with Unity's VFX Graph.

We divided the content we needed to process into three stages and made a script:

At the first stage (Intro), close-ups of the children's faces were captured with a conventional camera and a depth camera. This is how we achieved the effect of digital capture.

At the second stage (the Main one) the previously filmed/recorded children painted in VR, and effects were applied to them.

The third stage (World) involved creating a VR gallery with the children's pictures. Every stage was to be logically structured.

The pipeline was formed with the technical and conceptual aspects of every part taken into consideration. You will learn more about the stages in this article.

Export from Depthkit

Exporting the recorded material from Depthkit to Unity is not as easy as it might sound. We created a Depthkit project for every child, containing all the takes with him or her. We had 10 projects in total (nine for our little artists and one for the composer Peter Theremin). All recorded material was carefully organized into a hierarchy so that nothing got mixed up.

So, let's start with export from Depthkit. After we had chosen a take to work with, we created a refinement mask for it in After Effects and applied it in Depthkit. This step helped us improve the quality. By the way, Depthkit has a manual on how to create a refinement mask in various video-editing tools. Then we trimmed the video on the timeline, cropped it, and exported it as an obj sequence. (Read more about it here.)

Example of mask in action

Preparing the material for import into Unity

After a video was exported, we had a folder with an array of materials: an mp4 video, a txt file with depth data, a set of obj models with textures for them, and a preview image. We needed to perform several manipulations with these files in order to stick to the hierarchy we had created in Unity:

  1. We converted the mp4 video to mov using the HAP codec.
  2. We deleted the mp4 video, because we did not need it anymore.
  3. We deleted the preview image.
  4. We erased all the suffixes in the file names that appeared during export.
  5. We created folders, named them, and sorted the files into them.

We created a PowerShell script to avoid repeating all these steps over and over again — the script did it for us with one click.

Folder hierarchy and how the PowerShell script works
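The script itself was written in PowerShell; purely as an illustration, the same steps could look like this in C#, assuming ffmpeg with its HAP encoder is available on the PATH and that the export suffix is something like "_export" (both are assumptions).

using System.IO;
using System.Diagnostics;

static class TakePreparation
{
    // Converts the mp4 to a HAP-encoded mov, removes files we no longer need,
    // and strips the export suffix from the remaining file names.
    public static void Prepare(string takeFolder)
    {
        string mp4 = Directory.GetFiles(takeFolder, "*.mp4")[0];
        string mov = Path.ChangeExtension(mp4, ".mov");

        // ffmpeg's HAP encoder; the exact arguments are an assumption.
        Process.Start("ffmpeg", $"-i \"{mp4}\" -c:v hap \"{mov}\"").WaitForExit();

        File.Delete(mp4);                                    // step 2: drop the mp4
        foreach (var preview in Directory.GetFiles(takeFolder, "*preview*"))
            File.Delete(preview);                            // step 3: drop the preview image

        // Step 4: erase the export suffix ("_export" is illustrative) from file names.
        foreach (var file in Directory.GetFiles(takeFolder))
            if (file.Contains("_export"))
                File.Move(file, file.Replace("_export", ""));
    }
}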

Then we placed the files, sorted according to our hierarchy, into the Unity project folder. We named it DepthkitSources and added it to .gitignore, so that git would ignore it. As the source files were rather bulky, we needed this for resource optimization: otherwise, processing and saving every update would take too much time and disk space. Besides, it is a real struggle to work with such bulky projects in git. Our Unity project took more than 100 GB.

Import to Unity and settings

The graphics of our project were based on volumetric video (depth video) with visual effects applied to it. For this purpose we used the Unity 3D engine and one of its tools, VFX Graph. To combine volumetric video with VFX Graph, we used the mechanics devised by Keijiro.

Essentially, we have two components in a scene: Hap Player and Converter. The first one replays the video (which we exported from Depthkit and converted to mov with the HAP codec). The second component, Converter, extracts the colour and depth data from the frame currently shown by the Hap Player and writes them into Render Textures in Unity that we prepared beforehand. Every volumetric effect created in VFX Graph is based on two textures: the first one, the Depth render texture, gives VFX Graph the information about the particles' locations; the second one, Color, gives VFX Graph the information about their colour.
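Conceptually, feeding the effect looks roughly like the sketch below: the two Render Textures filled by the Converter are pushed into exposed texture properties of the VFX Graph. The property names ColorMap and DepthMap are illustrative and depend on how the graph is set up.

using UnityEngine;
using UnityEngine.VFX;

public class VolumetricVfxBinder : MonoBehaviour
{
    [SerializeField] VisualEffect _effect;       // the VFX Graph instance in the scene
    [SerializeField] RenderTexture _colorMap;    // filled by the Converter with colour data
    [SerializeField] RenderTexture _depthMap;    // filled by the Converter with depth data

    void Start()
    {
        // Exposed texture properties of the graph (names are illustrative).
        _effect.SetTexture("ColorMap", _colorMap);
        _effect.SetTexture("DepthMap", _depthMap);
    }
}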

It is also important to mention that the metadata (the information about the recording from the text file, which we got when exporting from Depthkit) also had to be taken into account. In practice, it is something like a JSON file with a variety of parameters that need to be considered in order to interpret the depth data correctly.
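As a rough illustration, such a metadata file can be read with Unity's JsonUtility into a plain serializable class; the field names below are made up for the example and do not reproduce Depthkit's actual format.

using System.IO;
using UnityEngine;

[System.Serializable]
public class TakeMetadata
{
    // Illustrative fields only: the real Depthkit metadata has its own set of parameters.
    public float nearClip;
    public float farClip;
    public int textureWidth;
    public int textureHeight;
}

public static class MetadataLoader
{
    public static TakeMetadata Load(string path)
    {
        // JsonUtility maps the JSON fields onto the serializable class above.
        return JsonUtility.FromJson<TakeMetadata>(File.ReadAllText(path));
    }
}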

Scene component connection diagram

So, we needed to create a scene that would be used as a template. It contained all the components and links mentioned above. Not only did we need this template for every scene, we also needed to adjust it for the particular take we were going to work with. This included writing the path to the MOV file in the HapPlayer, replicating the sample metadata file and parsing into it the text file with the settings produced when exporting from Depthkit, and then assigning the new metadata to the Converter.

It was of vital importance that we stick to the hierarchy of the folders for takes, scenes, metadata, and other files that contained the content.

The process of adding one take manually (6x)

Here, too, we automated the repetitive steps. We reached the point where all we needed to do was choose a folder with the necessary scene and all the settings for a particular take, and press a button. This also helped us maintain the hierarchy of our files.

Automated process of adding one take
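Roughly, the one-click setup can be done with a small Unity editor script. The sketch below only outlines the idea: the menu name and folder layout are assumptions, and the actual wiring of the HapPlayer and Converter components (which come from third-party packages) is left out.

using System.IO;
using UnityEditor;
using UnityEngine;

public static class TakeSetupTool
{
    [MenuItem("Tools/Add Depthkit Take")]   // menu path is an assumption
    static void AddTake()
    {
        string folder = EditorUtility.OpenFolderPanel("Select take folder", "Assets", "");
        if (string.IsNullOrEmpty(folder)) return;

        // Locate the converted video and the metadata text file inside the take folder.
        string mov = Directory.GetFiles(folder, "*.mov")[0];
        string meta = Directory.GetFiles(folder, "*.txt")[0];

        // Here the template scene would be duplicated, its HapPlayer pointed at `mov`,
        // and the parsed metadata assigned to the Converter.
        Debug.Log($"Configuring take: video = {mov}, metadata = {meta}");
    }
}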

Volumetric video player

One more challenge that we faced was imitating the children painting in VR. To make the animation process convenient, we devised our own tool in Unity with all the functions a player needs. At first it was an obj-sequence player in which animations could be easily fitted to the children in volumetric. We also experimented with mesh obj sequences, which could be used for writing shaders and combined with particles. We also synchronised our player with the HapPlayer, which made the creation process easier and more convenient.

Volumetric player

Besides, we added an audio track to our player, something Unity does not provide for mesh sequences out of the box. This allowed us to align the recorded events with the composition of our future video. This tool played a significant part in the Main stage of our project.
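Our tool itself is not shown here, but the core of an obj-sequence player with an audio track can be sketched as follows; the frame rate, field names and the way the meshes are stored are assumptions.

using UnityEngine;

[RequireComponent(typeof(MeshFilter), typeof(AudioSource))]
public class ObjSequencePlayer : MonoBehaviour
{
    [SerializeField] Mesh[] _frames;        // the obj sequence imported as individual meshes
    [SerializeField] float _frameRate = 30f;

    MeshFilter _meshFilter;
    AudioSource _audio;

    void Start()
    {
        _meshFilter = GetComponent<MeshFilter>();
        _audio = GetComponent<AudioSource>();
        _audio.Play();                      // the audio track drives the timeline
    }

    void Update()
    {
        // Pick the mesh frame from the audio playback time so sound and geometry stay in sync.
        int frame = Mathf.Min((int)(_audio.time * _frameRate), _frames.Length - 1);
        _meshFilter.sharedMesh = _frames[frame];
    }
}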

Trail setting

One more tool used in the creation of our project was the Bezier curve. It allowed us to animate objects that needed to move along a trajectory, for example, a camera passing through the scene. It was mostly used for the trajectories that are called trails in Unity, and for their animation. Later you will see that it was used, alongside other tools, to imitate VR painting.

Bezier tool
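For reference, here is a minimal sketch of evaluating a cubic Bezier curve from four control points; the BezierCurve(float) method called in the spectrum code below is assumed to work along these lines, though the project's actual implementation may differ.

using UnityEngine;

public class BezierTrail : MonoBehaviour
{
    [SerializeField] Transform _p0, _p1, _p2, _p3;   // four control points of a cubic curve

    // Returns the point on the cubic Bezier curve for t in [0, 1].
    public Vector3 BezierCurve(float t)
    {
        float u = 1f - t;
        return u * u * u * _p0.position
             + 3f * u * u * t * _p1.position
             + 3f * u * t * t * _p2.position
             + t * t * t * _p3.position;
    }
}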

Transition

The key moment of our project was transition from the Intro stage to the Main stage. At this moment we left the physical world and, accompanied by the fascinating sounds of theremin, we entered the world of new technologies. Apart from having conceptual meaning behind it, transition to the Main stage was the time when we tested out the mechanics, methods, and concept chosen for the project: the effect of digital capturing of the children, volumetric effect, Theremin’s music, audio visualization, imitation of VR-painting.

Fragment of the transition from physical to digital

In addition to the volumetric effects, one more technical peculiarity was that a girl was supposed to draw a line visualizing the sound spectrum of the theremin as it was played. Bezier curves were used to set the trail along which the line was drawn, and everything was synchronised with the girl's movements with the help of the volumetric video player.

Visualization of the audio spectrum of Theremin's composition

To visualize the sound spectrum, we added one more piece of logic. In a nutshell, we created an array of a fixed size and, in the Update() method (i.e. every frame), filled it with the spectrum data of the audio playing in the AudioSource component, using AudioSource.GetSpectrumData(float[] samples, int channel, FFTWindow window). These values were then converted into vertical offsets for the points of the curve we created.

// Fields assumed on the component: _audioSrc (AudioSource), _samples (float[] of fixed size),
// _samplePos (Vector3), _lineRenderer (LineRenderer), _power (float) and BezierCurve(float t).
private void Update()
{
    // Fill the array with the current spectrum of the playing audio source.
    _audioSrc.GetSpectrumData(_samples, 0, FFTWindow.BlackmanHarris);

    for (int i = 0; i < _samples.Length; i++)
    {
        float t = (float)i / _samples.Length;   // position along the curve, 0..1
        Vector3 p = BezierCurve(t);             // base point on the Bezier trail

        // Lift the point vertically by the scaled and clamped spectrum amplitude.
        _samplePos.Set(
            p.x,
            p.y + Mathf.Clamp(_samples[i] * (50 + i * i), 0, 50) * _power,
            p.z);

        _lineRenderer.SetPosition(i, _samplePos);
    }
}

Imitation of painting in Unity

We had four different methods of imitating VR-painting:

  1. Trail Renderer

It is a Unity component used to leave trails behind objects in the scene as they move. By setting the time (the lifetime of the trail) to a very long period, we achieved an effect that looked like painting in Tilt Brush. Moreover, we could apply brush materials from Tilt Brush to the trail. We attached this component to an object and animated it along Bezier curves, which we built based on smears imported from Tilt Brush models and matched with the children's movements.

Trail animation example
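A minimal sketch of the idea, reusing the BezierCurve evaluation sketched earlier; the trail lifetime and stroke duration values are assumptions.

using UnityEngine;

[RequireComponent(typeof(TrailRenderer))]
public class TrailPainter : MonoBehaviour
{
    [SerializeField] BezierTrail _curve;     // provides BezierCurve(float t)
    [SerializeField] float _duration = 3f;   // how long one "stroke" takes

    float _t;

    void Start()
    {
        // A very long lifetime keeps the trail on screen, like paint in Tilt Brush.
        GetComponent<TrailRenderer>().time = 1000f;
    }

    void Update()
    {
        _t = Mathf.Clamp01(_t + Time.deltaTime / _duration);
        transform.position = _curve.BezierCurve(_t);   // move along the curve, leaving a trail
    }
}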

2. Texture Offset

Some of the Tilt Brush brushes work in the following way: a 3D smear is produced by a single movement that starts when the button is pressed and ends when it is released, and for many brushes the whole smear occupies the entire UV space along the U coordinate (from 0 at the beginning to 1 at the end).

This lets us scroll the texture in UV space and thus move it evenly along the 3D stroke.

Offset texture in Blender
Offset texture in Unity

If we animate this offset and synchronise it with the children's movements, we get a result that looks very much like painting in Tilt Brush.
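The scrolling itself boils down to animating the material's texture offset along U. A rough sketch of the idea is below; note that under HDRP the main map may need to be addressed by its own shader property name rather than through mainTextureOffset.

using UnityEngine;

public class StrokeReveal : MonoBehaviour
{
    [SerializeField] Renderer _smear;       // the 3D smear imported from Tilt Brush
    [SerializeField] float _duration = 2f;  // time for the stroke to "draw itself"

    float _offset = 1f;                     // start fully offset, i.e. nothing visible yet

    void Update()
    {
        // Sliding the offset from 1 to 0 scrolls the brush texture along the U coordinate,
        // which reads as the stroke being painted from start to end.
        _offset = Mathf.MoveTowards(_offset, 0f, Time.deltaTime / _duration);
        _smear.material.mainTextureOffset = new Vector2(_offset, 0f);
    }
}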

3. Object Scale

In some cases we took certain 3D smears, combined them together, and animated their scale. But this worked only for straight lines or for a set of strokes, like the fiery eyes.

Lights animation
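For such cases a simple scale animation is enough; a minimal sketch, where the axis and timing are assumptions:

using UnityEngine;

public class StrokeScale : MonoBehaviour
{
    [SerializeField] float _duration = 1f;
    float _t;

    void Update()
    {
        // Grow the combined smear along its local X axis, as if the line were being drawn.
        _t = Mathf.Clamp01(_t + Time.deltaTime / _duration);
        transform.localScale = new Vector3(_t, 1f, 1f);
    }
}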

4. Shaders

There were two cases when we had to write our own shaders.

In the first case the shader was used to show the dead souls. It was based on a vertical gradient with transparency, combined with light animation and tiling of the smear texture.

Soul manifestation effect
Shader nodes

In the second case we worked with a large number of smears arranged in a circle. To avoid selecting and animating each of them manually in a 3D editor, we decided to create one shader for all of the smears. We created two sphere masks of different sizes; subtracting the smaller sphere mask from the bigger one gave us the area where the painted layer would appear. And as the boy was painting in a circle, we needed to add a radial fade/gradient using the arctangent.

Circular Fade shader
Circular Fade shader nodes
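The shader itself was built with nodes, but its per-pixel logic can be written out roughly as follows, in plain C# purely to illustrate the math; the radii, the centre and the reveal angle are assumptions.

using UnityEngine;

public static class CircularFade
{
    // Returns the opacity of the painted layer at `point`: visible only between the two
    // sphere masks, and revealed radially up to the current angle of the circular stroke.
    public static float Alpha(Vector3 point, Vector3 center,
                              float innerRadius, float outerRadius, float revealAngle)
    {
        float d = Vector3.Distance(point, center);

        // Difference of the two sphere masks: 1 inside the ring, 0 elsewhere.
        float ring = (d > innerRadius && d < outerRadius) ? 1f : 0f;

        // Radial gradient via arctangent: angle of the point around the centre, 0..360 degrees.
        float angle = Mathf.Atan2(point.z - center.z, point.x - center.x) * Mathf.Rad2Deg;
        if (angle < 0f) angle += 360f;

        // Only the part of the ring that has already been "painted" is shown.
        return angle <= revealAngle ? ring : 0f;
    }
}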

From Tilt Brush to Unity

VR-paintings in Tilt Brush were essential for the narration, especially for the World stage. It was difficult to integrate them into Unity, because we used HDRP, and most of the brush shaders from Tilt Brush SDK could not be rendered correctly.

We swapped some of the brushes that would not work for Unity Lit/Unlit shaders, but we could not do this for all of them. For brushes like the running neon, we had to recreate the mechanics ourselves in shaders.

Conclusions

It was an interesting experience to create such VFX and to do postproduction right in the 3D engine in real time.

  • Apart from running all the effects in Unity in real time, we also used Unity for compositing and colour correction.
  • The production process was new to us: instead of the usual software build at the end, the result was a video, which we captured with Unity Recorder.

Still, there were some challenges, since the engine is meant to be used for other purposes. But in many cases the engine is like clay that you can mold into anything you want; you just need to think everything through and write the right code.

Art project “Lessons of Auschwitz”

Part 1 \ Part 2 \ Part 3 \ Part 4

Written by Vladislav Krutenyuk

CG / Tech Artist

instagram: @kvy_cg

kvy@phygitalism.com
