Raspberry Pi + ROS 2 + Camera

Sander van Dijk
The Startup

--

I have written before about running ROS 2 on different Single Board Computers (SBCs), including an Odroid-XU4 and a Coral Edge TPU dev board. Recently I got my hands on a Raspberry Pi 4b and of course thought: “let’s put ROS 2 on it!”

There are several resources on running ROS 2 on a Raspberry Pi using Ubuntu as your OS, but I started out with Raspberry Pi OS (formerly known as Raspbian), and I will describe here how to install ROS 2 onto that. And, as a bonus, I will also show how to use the Raspberry Pi Camera Module V2 in this setup. Let’s get started!

Left: Raspberry Pi 4b with Camera Module v2. Right: Output of camera using ROS 2 + V4L2 camera driver, shown in RQT

Initial setup & building

I assume you already have your Raspberry Pi up and running with Raspberry Pi OS; mine uses the latest version at the time of writing, from February 2020, based on Debian Buster.

There are no binary packages available for this OS at the moment, so we will have to build ROS 2 from scratch. Start out by following the first steps of the official instructions for building ROS 2 Foxy, until you reach the ‘Build the code in the workspace’ step. We have to perform some additional steps before we can actually build everything successfully.
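
For reference, here is a rough sketch of those first steps, as I remember them from the Foxy source-build guide; follow the official page for the authoritative tool and dependency lists:

mkdir -p ~/ros2_foxy/src
cd ~/ros2_foxy
wget https://raw.githubusercontent.com/ros2/ros2/foxy/ros2.repos
vcs import src < ros2.repos
# on a fresh install: sudo rosdep init && rosdep update
rosdep install --from-paths src --ignore-src -y
# (the official guide adds a few --skip-keys here for packages it handles differently)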

First, we will follow the note in the instructions to ignore some subtrees. We won’t build RViz and the other visualization tools, mainly because they will fail to build, but also because you usually want to run those kinds of monitoring tools on a more powerful external machine anyway. We will also skip all the system tests, to save us some further time. Run the following commands to make this so:

cd ~/ros2_foxy/
touch src/ros2/rviz/AMENT_IGNORE
touch src/ros-visualization/AMENT_IGNORE
touch src/ros2/system_tests/AMENT_IGNORE

Next, we need to set some additional build flags to make all builds succeed. We will save these as Colcon defaults so they will also be used automatically when building our own packages in the future. Create a file at ~/.colcon/defaults.yaml and edit it to contain the following:

build:
  cmake-args:
    - -DCMAKE_SHARED_LINKER_FLAGS='-latomic -lpython3.7m'
    - -DCMAKE_EXE_LINKER_FLAGS='-latomic -lpython3.7m'
    - -DCMAKE_BUILD_TYPE=RelWithDebInfo

You can leave out the last line, or use a different build type, but I always find RelWithDebInfo to give a good balance between optimization and debuggability.
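
If you prefer to create the file from the command line, something along these lines produces the same result (the ~/.colcon directory likely does not exist yet; adjust the Python version in the flags if yours differs from 3.7):

# create the Colcon config directory and write the defaults file
mkdir -p ~/.colcon
cat > ~/.colcon/defaults.yaml << 'EOF'
build:
  cmake-args:
    - -DCMAKE_SHARED_LINKER_FLAGS='-latomic -lpython3.7m'
    - -DCMAKE_EXE_LINKER_FLAGS='-latomic -lpython3.7m'
    - -DCMAKE_BUILD_TYPE=RelWithDebInfo
EOF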

With these preparations done, you can now continue with the official build instructions where you left off… Except that the build will still fail on the mimick_vendor package, which tells us that Architecture 'armv7l' is not supported. Luckily it actually is supported, and we just have to tell it so by running this command:

sed -i 's/(ARM32 "^(arm|ARM|A)(32)?$")/(ARM32 "^(arm|ARM|A)(32|v7l)?$")/' \
build/mimick_vendor/mimick-ext-prefix/src/mimick-ext/CMakeLists.txt

We couldn’t have done this before running Colcon, because the mimick_vendor package only clones the Mimick source code during build time. Finally, run Colcon again as before and the build should now successfully complete.
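
To be explicit, ‘as before’ means re-running the build command from the official guide, which, if I remember it correctly, boils down to:

cd ~/ros2_foxy/
colcon build --symlink-install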

When compilation and installation are done, the official instructions give some examples to try out and test whether everything works. But if you have a Raspberry Pi Camera Module, we can do something better!

Install & run camera driver

The base ROS 2 installation comes with a cam2image node with which you can quickly test your camera. If you source the ROS 2 workspace and run ros2 run image_tools cam2image, it will start capturing and publishing images out of the box.
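
If your Pi is headless and you cannot open a viewer locally, a quick way to confirm that images really are being published is to check the topic from a second terminal. If I remember the defaults correctly, cam2image publishes on the image topic:

source ~/ros2_foxy/install/setup.bash
ros2 topic list       # the image topic should show up here
ros2 topic hz /image  # reports the rate at which frames arrive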

This node does not give you much flexibility though: it does not let you change the brightness, the image format, or any of the other controls that the camera supports. Instead, we will use our v4l2_camera package, which uses Video4Linux2 to expose all of these. On top of that, it supports image_transport, allowing you to stream images over a network much more efficiently, which helps a lot if your Pi is headless and/or you just work through SSH.

First, we will create a new workspace, clone the code for the v4l2_camera package and some of its dependencies, and install any further dependencies using rosdep (as always, make sure to have the main ROS 2 workspace in ~/ros2_foxy sourced first):

mkdir -p ~/ros2_ws/src && cd ~/ros2_ws/src
git clone --branch foxy https://gitlab.com/boldhearts/ros2_v4l2_camera.git
git clone --branch ros2 https://github.com/ros-perception/vision_opencv.git
git clone --branch ros2 https://github.com/ros-perception/image_common.git
git clone --branch ros2 https://github.com/ros-perception/image_transport_plugins.git
cd ..
rosdep install --from-paths src -r -y

Now build everything we need and source the new workspace:

colcon build --packages-up-to v4l2_camera image_transport_plugins
source install/local_setup.bash

And finally you can run the camera node:

ros2 run v4l2_camera v4l2_camera_node

The node will give a lot of output listing all the capabilities of the Raspberry Pi Camera Module.
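
In a second terminal (with both workspaces sourced), you should now also see the camera topics; since we built the image_transport plugins as well, the output should look something like this:

ros2 topic list
# /camera_info
# /image_raw
# /image_raw/compressed
# ...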

Update 26 Feb 2023: New Raspberry Pi OS installations use a new open source camera stack based on libcamera. As part of this, the new ‘Broadcom Unicam’ drivers expose the camera and the image signal processing hardware as separate devices, which the v4l2_camera package was not designed to support. Luckily, it is possible to revert to the legacy V4L2 drivers by setting the following options in /boot/config.txt:

camera_autodetect=0
start_x=1
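
A minimal sketch of doing that, assuming the two keys are not already present in the file (otherwise edit them in place), followed by the reboot needed for the change to take effect:

# re-enable the legacy V4L2 camera stack
echo "camera_autodetect=0" | sudo tee -a /boot/config.txt
echo "start_x=1" | sudo tee -a /boot/config.txt
sudo reboot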

You can now go ahead and:

  • View the camera output, for instance by running the RQT image viewer: ros2 run rqt_image_view rqt_image_view.
    Remember that we turned off building visualization tools like this one on the Raspberry Pi earlier, so you should run this on another, more powerful machine with a full installation of ROS 2. When doing so, you can select the /image_raw/compressed topic rather than the base raw one, so that the images are streamed much more efficiently and at a higher rate.
  • Change the image size, like: ros2 param set /v4l2_camera image_size [1280,720]. The camera supports resolutions up to 3280 x 2464, or 8 megapixels!
  • Check out all the controls supported by the camera. The driver exposes these as ROS parameters, and you can list all of them with: ros2 param list /v4l2_camera. You can further use the ROS 2 CLI to see what each one is for and which values it accepts, with e.g.: ros2 param describe /v4l2_camera 'white_balance_auto_&_preset'. You can get and set their values dynamically with ros2 param get and ros2 param set; see the short example right after this list.
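
For example, a parameter session could look something like this (brightness is just an illustration here; the exact parameter names depend on the controls your camera reports):

ros2 param list /v4l2_camera
ros2 param describe /v4l2_camera brightness
ros2 param get /v4l2_camera brightness
ros2 param set /v4l2_camera brightness 60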

Next steps

I hope this was useful for setting up ROS 2 Foxy on your Raspberry Pi and getting started with the camera module. Next, you of course want to do something useful with that stream of images in your own nodes.

You can use cv_bridge, which we already installed as a dependency of the V4L2 camera driver, to convert ROS image messages to OpenCV images, giving you access to the full OpenCV ecosystem. Its ROS 2 documentation is a bit limited at the time of writing, but you should be able to get going with the ROS 1 tutorials.

Alternatively, you can also check out our ROS 2 Tensorflow Lite package, which lets you run modern deep learning models directly on your Raspberry Pi.

Have fun and let me know about any cool Raspberry Pi + ROS 2 projects you end up building!
