3D Sensors — What Is Out There for Volumetric Capture?

Ieva Stelingyte
4 min read · Jun 16, 2016

--

With the fast expansion of the 3D market and the increasing demand for 3D sensors, it’s no surprise that this hardware is becoming more accessible. Sensors ranging from $99 to $4,500 are developed to acquire 3D data in the form of a depth image. When dealing with 3D sensors, it’s important to understand the several technologies behind them. First I will talk about which sensors used to be applied for volumetric capture, and then we will go into the new Azure Kinect, which is a pretty great one for volumetric video.

On the one hand, the most popular technology seems to be based on IR lasers, where the depth map is constructed by analyzing a speckle pattern of infrared laser light. This technique follows the structured-light principle, where depth is inferred from the deformation of a projected pattern: the further away an object is, the blurrier and more spread out the pattern becomes. Quite popular sensors such as the Kinect v2, Intel RealSense (R200 and F200), Asus Xtion, Orbbec Astra PRO, DUO MLX and SoftKinetic DS525 rely on active IR illumination (strictly speaking, the Kinect v2 is a time-of-flight camera rather than a structured-light one). These sensors are quite mature and can track people up to 4 m away from the sensor, except for the Intel RealSense F200, which has a range of 0.2–1.2 m, the SoftKinetic DS525 (0.15–1.0 m) and the Orbbec Astra, which can track a greater distance (0.4–8 m). Even though these sensors offer a good resolution for their RGB image (up to 1920×1080, 30 FPS), their depth image resolution is considerably lower (up to 640×480, 60 FPS). The user also needs to keep in mind that depth resolution usually decreases as the frame rate increases. Moreover, these sensors only work indoors, which can be a problem if a user wants to apply them outdoors.

The Microsoft Kinect v2 used to be the best-known sensor for volumetric capture and the most powerful one in terms of built-in features: it is capable of tracking up to six skeletons, doing gesture detection and training, and facial recognition. Besides, just like us at EF EVE ™, many other researchers used this sensor for their testing and development. That was before the new Azure Kinect came out; now we can connect two or four of these into a great-quality volumetric capture studio using our automatic calibration functionality. We will talk about this later in the text.
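To make the working ranges above concrete, here is a minimal sketch (assuming Python with NumPy; the range table uses the figures quoted above, plus an assumed 0.5 m near limit for the Kinect v2, which the text does not state) of masking out depth readings that fall outside a sensor’s usable range:

```python
import numpy as np

# Usable ranges in metres, from the figures above (Kinect v2 near limit assumed).
SENSOR_RANGES = {
    "kinect_v2": (0.5, 4.0),
    "realsense_f200": (0.2, 1.2),
    "softkinetic_ds525": (0.15, 1.0),
    "orbbec_astra": (0.4, 8.0),
}

def mask_depth(depth_m: np.ndarray, sensor: str) -> np.ndarray:
    """Zero out depth readings outside the sensor's usable range."""
    near, far = SENSOR_RANGES[sensor]
    valid = (depth_m >= near) & (depth_m <= far)
    return np.where(valid, depth_m, 0.0)

# A tiny 2x2 "depth frame": readings closer than 0.4 m or beyond 8 m are dropped.
frame = np.array([[0.1, 1.0], [3.5, 9.0]])
print(mask_depth(frame, "orbbec_astra"))
```

In a real pipeline this kind of range mask is typically the first cleaning step applied to each raw depth frame before fusion.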

On the other hand, stereo vision is another technique used to obtain 3D images. The ZED Stereo Camera is a sensor that relies on stereo vision and provides a disparity map in which the depth information is stored. Stereo vision has the advantage of working outdoors, but current cameras are still not very precise. Besides this, stereo-vision data takes a long time to process, and the algorithms are usually computationally heavy.
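The conversion from a disparity map to depth can be sketched with the standard pinhole-stereo relation Z = f·B/d (depth = focal length × baseline / disparity). The focal length and baseline below are illustrative numbers, not calibration values for any specific camera:

```python
import numpy as np

def disparity_to_depth(disparity_px: np.ndarray,
                       focal_px: float,
                       baseline_m: float) -> np.ndarray:
    """Convert a disparity map (pixels) to a depth map (metres) via Z = f*B/d."""
    disparity = np.asarray(disparity_px, dtype=float)
    depth = np.zeros_like(disparity)
    valid = disparity > 0  # zero disparity means no match / point at infinity
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Illustrative calibration: 700 px focal length, 12 cm baseline.
d = np.array([[35.0, 70.0], [0.0, 140.0]])
print(disparity_to_depth(d, focal_px=700.0, baseline_m=0.12))
```

Note how depth is inversely proportional to disparity: small disparities map to large depths, which is one reason stereo precision degrades quickly with distance.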

There are other quite interesting 3D imaging systems on the market, such as the Matterport camera and the Pelican PiCam. The Matterport camera is a system that lets users scan houses and automatically creates a 3D image of the property; it is a camera designed especially for real-estate purposes. The Pelican PiCam is a fascinating device built to be connected to a camera on, for example, a mobile phone or tablet. Its output image has a resolution of 8 MP and a range of up to 5 m. This 4×4 sensor array allows the user to obtain images with a 3D perspective, perfect for art and everyday applications.

So now on to the Azure Kinect for volumetric capture. As it stands, the Azure Kinect is the best option if you want volumetric capture without spending thousands, in a portable, quick-to-set-up package. It stands out with an RGBD resolution of 4096×3072, which generates an astonishing 12 million points every 66.6 milliseconds. This is a breakthrough in volumetric capture compared to its predecessor, the Kinect v2, which gives only 2 million points per frame. It also comes with hardware synchronisation via 3.5 mm jacks, allowing the frames of multiple Azure Kinect cameras to be synced together. However, the software it ships with mainly lets users inspect the depth and RGB streams, so that’s definitely not enough to use it as volumetric capture software. You can try to find some open-source code, but none of it will be in one place to actually solve the two main issues: cleaning the raw data in real time and auto-calibrating the sensors to get a quality capture. That’s where EF EVE Volumetric Capture comes in and does it in seconds.
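As back-of-the-envelope arithmetic from the figures quoted above (4096×3072 points per frame, one frame every 66.6 ms, i.e. roughly 15 FPS; the Kinect v2 figure of 2 million points per frame is taken at an assumed ~30 FPS), the point throughput of the two sensors can be compared:

```python
def points_per_second(points_per_frame: float, frame_interval_ms: float) -> float:
    """Point throughput given a per-frame point count and frame interval."""
    return points_per_frame * 1000.0 / frame_interval_ms

azure_points = 4096 * 3072                            # ~12.6 million points per frame
azure_pps = points_per_second(azure_points, 66.6)     # one frame every 66.6 ms (~15 FPS)
kinect_v2_pps = points_per_second(2_000_000, 33.3)    # ~2M points at an assumed ~30 FPS

print(f"Azure Kinect: {azure_pps / 1e6:.0f} M points/s")
print(f"Kinect v2:    {kinect_v2_pps / 1e6:.0f} M points/s")
```

Even at half the frame rate, the Azure Kinect delivers roughly three times the raw point throughput of its predecessor on these numbers.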

If you would like to learn more about volumetric capture, visit EF EVE ™.
