ROS 2 Live Depth Cam / Point Cloud Visualization with Rerun
When working with robots, or really anything that interacts with the physical world, visualizing what has happened and what is happening in real-time becomes essential. While several observability tools focus solely on telemetry (Grafana, Datadog, etc.), the harsh reality with physical systems is that you need to see them in their environment alongside their metrics.
In the ROS 2 ecosystem, rviz2 and Foxglove are the two de facto tools for visualization. Rviz2 is native to ROS 2, working seamlessly out of the box, though it has some limitations when it comes to reconstructing more complex visual scenes. Foxglove, a newer solution targeting the same ROS 2 community, comes with tight ROS integration too. However, it can also operate standalone, using its own WebSocket protocol or the MCAP file format with its ROS-inspired message types.
Rerun enters the game
Rerun seems to take a different approach, not focusing primarily on ROS 2 integration. Instead, it began by building core data structures to log and visualize multimodal data from the ground up, without mimicking existing types as Foxglove did. This approach has both pros and cons: while it allows any application to easily log data to the Rerun platform, it doesn’t natively support ROS 2 systems without implementing conversion rules or adding extra metadata.
There are two methods to visualize ROS 2 data in Rerun:
- A DIY solution: subscribe to the topics you want to visualize and log them in Rerun (like in this example; see the sketch after this list)
- Using Rerun’s example ROS 2 bridge and setting up conversion/ingestion rules there
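To make the DIY route concrete, here is a minimal, untested sketch of what such a node could look like in Python, using rclpy, cv_bridge, and the Rerun SDK. The node name, app id, and entity path are my own placeholders; the topic name matches the RealSense defaults used later in this post.

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import rerun as rr

class RerunImageLogger(Node):
    def __init__(self) -> None:
        super().__init__("rerun_image_logger")
        self.bridge = CvBridge()
        self.create_subscription(Image, "/camera/camera/color/image_raw", self.on_image, 10)

    def on_image(self, msg: Image) -> None:
        # Tie the ROS header stamp to a Rerun timeline (the exact call may differ across SDK versions).
        rr.set_time_seconds("ros_time", msg.header.stamp.sec + msg.header.stamp.nanosec * 1e-9)
        # Convert the ROS image to a numpy array and log it under an entity path of our choosing.
        rgb = self.bridge.imgmsg_to_cv2(msg, desired_encoding="rgb8")
        rr.log("camera/color/image", rr.Image(rgb))

def main() -> None:
    rr.init("realsense_diy", spawn=True)  # spawn a local Rerun viewer and stream to it
    rclpy.init()
    rclpy.spin(RerunImageLogger())

if __name__ == "__main__":
    main()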
Using the Rerun ROS 2 bridge
The Rerun ROS 2 bridge is one of many Rerun examples showcasing a potential, more general integration solution. It subscribes to all topics with supported types (Image, PointCloud2, Odometry, Pose, etc.), converts the messages into Rerun archetypes, and logs the data to Rerun.
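To illustrate what one of those conversion rules could look like, here is a rough sketch of my own (not the bridge's actual implementation) that turns a PointCloud2 message into a Rerun Points3D archetype:

import numpy as np
import rerun as rr
from sensor_msgs.msg import PointCloud2
from sensor_msgs_py import point_cloud2

def log_point_cloud(entity_path: str, msg: PointCloud2) -> None:
    # Extract xyz values, skipping the NaN points depth cameras commonly produce.
    points = point_cloud2.read_points(msg, field_names=("x", "y", "z"), skip_nans=True)
    positions = np.array([[p[0], p[1], p[2]] for p in points], dtype=np.float32)
    # Color handling (e.g. unpacking the packed "rgb" field) is omitted for brevity.
    rr.log(entity_path, rr.Points3D(positions))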
The mapping between ROS 2 frames and Rerun entity paths is configured in a parameter file, where additional transformations or settings can also be specified.
Logging data from Intel RealSense D435i
To demonstrate how the integration works, I chose a relatively simple and common task: visualizing topics from my depth camera. The camera publishes two feeds — one color and one depth image, along with their intrinsic parameters — and a point cloud representing points in 3D space.
I used the default configuration to launch the camera, without any additional transformations or custom settings.
ros2 launch realsense2_camera rs_launch.py pointcloud.enable:=true
This will produce the following topics, which need to be mapped to Rerun entities:
❯ ros2 node info /camera/camera
[..]
Publishers:
/camera/camera/color/camera_info: sensor_msgs/msg/CameraInfo
/camera/camera/color/image_raw: sensor_msgs/msg/Image
/camera/camera/depth/camera_info: sensor_msgs/msg/CameraInfo
/camera/camera/depth/color/points: sensor_msgs/msg/PointCloud2
/camera/camera/depth/image_rect_raw: sensor_msgs/msg/Image
/camera/camera/depth/metadata: realsense2_camera_msgs/msg/Metadata
/camera/camera/extrinsics/depth_to_color: realsense2_camera_msgs/msg/Extrinsics
/tf_static: tf2_msgs/msg/TFMessage
The mapping is handled through standard parameter files, specifying which frame_id will be logged under which entity path. The challenge is that ROS 2 messages only provide the frame_id, while Rerun requires all data to be logged with the full entity path.
tf:
  update_rate: 0.0 # set to 0 to log raw tf data only (i.e., without interpolation)
  tree:
    base_link:
      camera_link:
        camera_depth_frame:
          camera_depth_optical_frame:
          points:
        camera_color_frame:
          camera_color_optical_frame:
topic_options:
  /camera/camera/depth/color/points:
    colormap: rgb
    colormap_field: rgb
    entity_path: /base_link/camera_link/camera_depth_frame/points
  /camera/camera/color/image_raw:
    entity_path: /base_link/camera_link/camera_color_optical_frame
  /camera/camera/color/camera_info:
    entity_path: /base_link/camera_link/camera_color_optical_frame
  /camera/camera/depth/image_rect_raw:
    entity_path: /base_link/camera_link/camera_depth_optical_frame
  /camera/camera/depth/camera_info:
    entity_path: /base_link/camera_link/camera_depth_optical_frame
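The frame-to-path lookup mentioned above essentially amounts to walking up this tree. Purely as an illustration (not how the bridge is implemented), resolving a frame_id to its full entity path could look like this, with the parent map derived from the configuration above:

# Child -> parent relations taken from the tf tree in the parameter file above.
PARENTS = {
    "camera_link": "base_link",
    "camera_depth_frame": "camera_link",
    "camera_color_frame": "camera_link",
    "camera_depth_optical_frame": "camera_depth_frame",
    "camera_color_optical_frame": "camera_color_frame",
}

def entity_path_for(frame_id: str) -> str:
    # Walk up to the root and join the segments into a Rerun entity path.
    segments = [frame_id]
    while segments[-1] in PARENTS:
        segments.append(PARENTS[segments[-1]])
    return "/" + "/".join(reversed(segments))

# entity_path_for("camera_depth_frame") -> "/base_link/camera_link/camera_depth_frame"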
In addition to the mapping, I added an RGB color map for the points in the point cloud. There’s also an option to apply a turbo colormap based on any point cloud values (such as the z-coordinate).
Since my camera wasn’t facing upward, I needed to define a base_link to camera_link transformation as well.
extra_transform3ds:
  - entity_path: "base_link/camera_link"
    transform: [1,0,0,0,0,0,-1,0,0,1,0,1,0.0,1.0,0.5]
    from_parent: true
After starting the node with:
ros2 launch rerun_bridge realsense.launch
the viewer started to stream the messages into the UI.
Everything worked, except for some flakiness due to the default blueprint (dashboard layout) values. Here are the changes I made:
Blueprint settings
In my 3D view, I wanted only the pinhole projection from the color camera, not the one from the depth camera as well. To achieve this, I removed the camera_depth_frame from the entities listed in the Entity path filter on the view's property page.
I also set the ImagePlaneDistance to a fixed value, which controls how far from the camera the pinhole image projection is drawn in 3D space. This value isn't available in the ROS 2 messages but is necessary for the visualization.
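These tweaks can also be captured in code instead of clicking through the UI. The sketch below is my own approximation of the entity-filter part using the Rerun Python blueprint API (I made the actual changes in the viewer, and the exact expressions depend on your entity mapping): a 3D view rooted at base_link whose entity path filter drops the depth frame.

import rerun as rr
import rerun.blueprint as rrb

blueprint = rrb.Blueprint(
    rrb.Spatial3DView(
        origin="/base_link",
        # Entity path filter: keep everything under base_link, but drop the
        # depth frame so only the color camera's pinhole projection is shown.
        # (Use ".../camera_depth_frame/**" instead to exclude its whole subtree.)
        contents=[
            "+ /base_link/**",
            "- /base_link/camera_link/camera_depth_frame",
        ],
    ),
)

rr.init("realsense_blueprint", spawn=True)
rr.send_blueprint(blueprint)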
If you’re streaming live data, it may be wise to set a memory limit so that older data is dropped once the threshold is reached. By default, Rerun uses up to 75% of the available memory, so be careful.
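If you start the viewer yourself, the limit can be passed on the command line; the value here is just an example:

rerun --memory-limit 2GB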
Summary
Rerun is a super neat tool to visualize multimodal data, but unlike rviz2, it requires some configuration to work with ROS 2. While there’s no official integration with the framework, you can set up your visual observability pipeline using the provided examples in just 10–15 minutes.
For me, it was worth it.