Microsoft Kinect Indoor Scene in Rerun

How to visualize a recording from the NYU Depth V2 dataset with RGB and depth channels

Andreas Naoum
Rerun-io
2 min read · May 13, 2024


Microsoft Kinect Indoor Scene | Image by Author

This tutorial focuses on visualization: it provides complete code for visualizing a Microsoft Kinect indoor scene with the open-source visualization tool Rerun.

If you’re eager to give the example a try: Try it in browser

Background

The NYU Depth V2 dataset consists of synchronized pairs of RGB and depth frames recorded by the Microsoft Kinect in a variety of indoor scenes. It is a rich source of data for object recognition, scene understanding, depth estimation, and more; this example visualizes one scene from it.

Logging and visualizing with Rerun

The visualizations in this example were created with the following Rerun code:

Timelines

All data logged using Rerun in the following sections is connected to a specific time. Rerun assigns a timestamp to each piece of logged data, and these timestamps are associated with a timeline.

rr.set_time_seconds("time", time.timestamp())
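
For context, here is a minimal sketch of how this call might sit in a frame loop. load_nyud_frames is a hypothetical loader, not part of the Rerun API:

import rerun as rr

rr.init("rerun_example_nyud", spawn=True)

# Hypothetical loader yielding (capture_time, rgb, depth) per frame.
for time, img_rgb, img_depth in load_nyud_frames():
    # Every rr.log call after this is placed at the frame's timestamp
    # on the "time" timeline.
    rr.set_time_seconds("time", time.timestamp())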

Image

The RGB image of each frame is logged as an Image to the world/camera/image/rgb entity.

rr.log("world/camera/image/rgb", rr.Image(img_rgb).compress(jpeg_quality=95))

Depth image

A pinhole camera model is used to achieve the 3D view and camera perspective; it is logged with the Pinhole archetype.

rr.log(
    "world/camera/image",
    rr.Pinhole(
        resolution=[img_depth.shape[1], img_depth.shape[0]],
        focal_length=0.7 * img_depth.shape[1],
    ),
)
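
Note that the focal length here is approximated as 0.7× the image width rather than taken from calibrated Kinect intrinsics, and the principal point defaults to the image center. Written out as an explicit intrinsics matrix, the same model looks roughly like this (a sketch, assuming the Kinect's 640×480 depth resolution):

import numpy as np
import rerun as rr

width, height = 640, 480  # assumption: the Kinect's VGA depth resolution
focal = 0.7 * width

rr.log(
    "world/camera/image",
    rr.Pinhole(
        image_from_camera=np.array([
            [focal, 0.0, width / 2],
            [0.0, focal, height / 2],
            [0.0, 0.0, 1.0],
        ]),
        resolution=[width, height],
    ),
)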

Then, the depth image is logged as a DepthImage to the world/camera/image/depth entity.

rr.log("world/camera/image/depth", rr.DepthImage(img_depth, meter=DEP
