Volumetric video technology on display at Siggraph 2019, cinematic-quality digital avatars, and next-gen social media influencers add up to industry-disrupting new technology.

What are we going to learn about the latest advances in cinematic quality volumetric video at Siggraph?

Silicon Valley Global News (SVGN.io) · Jul 28, 2019

Article by Micah Blumberg, http://vrma.io

It’s Saturday, July 27th. I’m a journalist and a software architect, and I’m packing my bags to go to Siggraph 2019 in Los Angeles tomorrow.

For the first time I am bringing a depth camera with me: Microsoft’s new Azure Kinect DK, which has been on sale now for over a month.

The camera is aimed specifically at developers. For example, it has a command-line interface for recording sensor data; it is not consumer-ready; it doesn’t record sound out of the box; you can’t simultaneously record and preview what you are recording with the default software; and, like other depth cameras, it needs to be plugged into a PC to operate.

Fortunately, Microsoft has partnered with Depthkit, whose beta is ready now to help you capture depth video with the Azure Kinect DK (with a pro license).

Depthkit partners with Microsoft

Eventually there will be other software options: Brekel, a suite of affordable motion-capture tools, has already pledged to support the Azure Kinect in the near future.

Imagine that you wanted to combine sensor feeds from multiple Kinects into a single model. Camera alignment, calibration, and fusing 3D captures from multiple cameras is an extremely hard problem in general. To our knowledge, Brekel’s beta software is the only off-the-shelf software that does it.
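To give a sense of what multi-camera fusion involves at the geometry level: once calibration has produced an extrinsic transform between two cameras (the genuinely hard part, which software like Brekel automates), merging the point clouds is a matter of mapping one camera’s points into the other’s coordinate frame. The sketch below, with hypothetical function names and a made-up example transform, assumes calibration has already been done and uses plain numpy:

```python
import numpy as np

def make_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 rigid transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def fuse_point_clouds(primary_pts: np.ndarray,
                      secondary_pts: np.ndarray,
                      extrinsics: np.ndarray) -> np.ndarray:
    """Map the secondary camera's points into the primary camera's frame,
    then concatenate the two clouds into one."""
    n = secondary_pts.shape[0]
    homogeneous = np.hstack([secondary_pts, np.ones((n, 1))])  # N x 4
    aligned = (extrinsics @ homogeneous.T).T[:, :3]            # back to N x 3
    return np.vstack([primary_pts, aligned])

# Hypothetical calibration result: the second camera sits 1 m to the right
# of the first and is rotated 90 degrees about the vertical (Y) axis.
angle = np.pi / 2
R = np.array([[np.cos(angle), 0.0, np.sin(angle)],
              [0.0,           1.0, 0.0],
              [-np.sin(angle), 0.0, np.cos(angle)]])
extrinsics = make_transform(R, np.array([1.0, 0.0, 0.0]))

cloud_a = np.array([[0.0, 0.0, 1.0]])  # one point seen by the primary camera
cloud_b = np.array([[0.0, 0.0, 1.0]])  # one point seen by the secondary camera
fused = fuse_point_clouds(cloud_a, cloud_b, extrinsics)
```

The real difficulty that commercial tools solve is estimating `extrinsics` robustly from the sensor data itself (and dealing with noise, occlusion, and sensor interference); this sketch only shows the transform step that follows.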

Even as I begin to experiment with volumetric filmmaking as a journalist, I am surprised by what I am already learning from companies coming to Siggraph that have been emailing me (my email is micah@vrma.io) to share their press releases: technology that is going to change what I do in the future as a journalist.

Think about computer-generated influencers for a moment. Lil Miquela, for example, despite not being a real flesh-and-blood person, has attracted a real following: millions of people are influenced by her on social media, and it’s a big business for the companies that advertise with her.

The existence of these computer-generated influencers threatens the future incomes of real influencers.

Lil Miquela is an example of how companies are already using high-resolution CG models for sales and marketing.

So what does that have to do with volumetric filmmaking?

Well, while the Microsoft Azure Kinect DK is currently the best of the best for its combination of sensors and price, there is a new technique, combining two older techniques, that is producing cinematic-quality volumetric video: it animates a 3D computer-generated model of a person with motion capture achieved using depth-sensing cameras.

I have two examples of this:

Dynamixyz

I received a video from a company called Dynamixyz.

What Dynamixyz has shown is a proof of concept demonstrating how facial tracking trained directly from scans, along with a direct-solve rig in Maya, can deliver high-fidelity raw results. In other words, you create a digital double of an actor, extract key poses from the scanned data, and you get next-level volumetric video.

The other example is from a company called ICVR.

They are showcasing their own approach to creating a photorealistic human. In this case they collaborated with “The Scan Truck,” a company that builds a 3D model of a person using cameras pointed at the subject from every direction. They took 30,000 photographs and rendered the result in real time in Unreal Engine.

Jason L. White, a collaboration between ICVR and The Scan Truck.

This technology again uses motion capture to animate the scanned model, creating volumetric video with far superior image quality to what you can capture with a Microsoft Azure Kinect DK and its default sensor recording configuration.

The technique of the future might be to use depth cameras like the Azure Kinect as motion-tracking devices for animating 3D models that you have captured with high-resolution cameras. There is a lot more to it than that, but it’s exciting to see what I think is an industry shift toward turning reality into a cinematic-quality object first, before bringing it to your screen, AR/VR device, or holographic monitor.

Combine advanced motion capture techniques with digital influencers like Shudu Gram, and you have disruptive new technology that will change all the media industries.
