Robots at Work: A story of Decentralized Collaboration

Divye Singh
Thoughtworks: e4r™ Tech Blogs
Jan 18, 2024

by Divye Singh, Bhajneet Singh Bedi, Antariksh Ray

The increasing demand for autonomous systems has led to multiple robots working together in shared environments, ushering robotics into the era of multi-robot systems. This transition promises increased efficiency, but the true potential of a multi-robot system is realised through collaboration among the robots. This can be achieved in two ways:

  1. Centralised collaboration, where a single central entity is responsible for guiding each robot.
  2. Decentralised collaboration, where robots share responsibilities and make their own decisions without the need for a central controller.

Collaborative robots (cobots) have shown tremendous potential in addressing various challenges in intralogistics. So much so that many logistics giants are moving towards deploying cobots in their warehouses for tasks like material handling and point-to-point transportation, to name a few. Furthermore, there is increasing demand for decentralised autonomy, as decentralised systems are often more flexible, adaptable, and resilient. They also address various vulnerabilities of centralised systems, such as the single point of failure and limited scalability.

In this blog, we will look at how we deployed a decentralised collaborative multi-robot system for heterogeneous object segregation.

Simulation setup

We created our simulation using Ignition Gazebo Fortress and Robot Operating System (ROS) 2 Humble Hawksbill. Ignition Gazebo is an open-source 3D robotics simulator, and ROS is a set of software libraries and tools that help you build robot applications. A robot control system in ROS is usually made up of many nodes, which are processes that perform computation. These ROS nodes communicate with one another and with Gazebo using ROS topics, services, and actions.

ROS-Gazebo

Arena

The arena is a 2.5-metre square with black walls on all four sides, containing randomly spawned red, blue, and green cubical boxes, each side measuring 3 cm. There are three rovers in the arena. It also has red, blue, and green areas (warehouses) along the centre of three of its sides for collecting the boxes of the corresponding colour. To capture the visual feed of the arena, a camera is placed at the centre, 5.3 m above the ground, looking down at the arena. This camera continuously publishes its feed on a ROS topic named top_camera/image_raw.

Image captured by top camera

The objective of the simulation is for the rovers to pick up boxes and place them in their respective colored warehouses without interfering with each other. To achieve this, the following tasks need to be performed:

  1. Capture arena: Extract the location of the boxes, warehouses, and bots from the overhead image.
  2. Localization: Localize each rover within the arena.
  3. Selection of the box and the warehouse: Select an appropriate box-warehouse pair.
  4. Navigation: Navigate rovers in the arena with obstacle avoidance.
  5. Box handling: Pick and drop the boxes.

In the following sections, we will delve deeper into how we achieved these tasks.

Capturing arena

For extracting the location of different entities in the arena, we created a ROS node called world_mapper that subscribes to the topic top_camera/image_raw to get arena images. To process the image and extract location information, we used OpenCV, an open-source computer vision library. The image we get from the top camera also contains an area outside our arena, which is not required, so we cropped the image to keep only the arena. This also lets us fix a coordinate system confined within the bounds of the arena.

Local coordinate system

Then we created a binary mask for each of the colours (red, blue, and green), and from these binary masks found the coordinates of the centres of the boxes and warehouses using contours and their moments. Boxes and warehouses were distinguished by their contour areas. A similar exercise was done to locate the rovers. However, for the rovers, along with their coordinates, we also found their orientation (𝜙) using the black picking mechanism at the front of each bot. The orientation was measured counterclockwise from the horizontal line extending from the centre of the rover toward the right of the image. The extracted location information was captured in JSON format, as shown below, and published on the ROS topic map/json as a string message.

{
  "coordinates": {
    "boxes": {
      "red": [(x,y)...],
      "green": [(x,y)...],
      "blue": [(x,y)...]
    },
    "warehouses": {
      "red": [(x,y)...],
      "green": [(x,y)...],
      "blue": [(x,y)...]
    },
    "bots": [(x,y,𝜙)...]
  }
}
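The mask-and-moments step can be sketched as follows. This is a minimal illustration in plain Python rather than OpenCV (the actual node would use functions like cv2.inRange, cv2.findContours, and cv2.moments); the function name and the synthetic mask are our own:

```python
def centroid_from_mask(mask):
    """Centre of one colour's binary mask via raw image moments
    (m10/m00, m01/m00) -- the same quantities cv2.moments yields
    for a filled contour."""
    m00 = m10 = m01 = 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                m00 += 1   # zeroth moment: pixel count
                m10 += x   # first moment in x
                m01 += y   # first moment in y
    if m00 == 0:
        return None        # no blob of this colour in the frame
    return (m10 / m00, m01 / m00)

# Synthetic 10x10 mask with a "box" occupying rows 2-4, columns 6-8
mask = [[1 if 2 <= y <= 4 and 6 <= x <= 8 else 0 for x in range(10)]
        for y in range(10)]
print(centroid_from_mask(mask))  # (7.0, 3.0)
```

In the real pipeline the same computation runs per contour, so several boxes of one colour each get their own centre.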

Localization

A rover’s localization is carried out via its ROS node named <bot_name>/localization, which subscribes to the topic map/json to get coordinate data from the world_mapper node. The localization process can be divided into two stages. The first stage is performed once at the beginning of the simulation, when the rovers have no information about their pose (location + orientation). The second stage is continuously tracking the rover’s pose during the simulation. In the first stage, each rover rotates in place while the others remain stationary; at the end of the rotation, the coordinate that was changing is assigned to the rotating rover. Once the initial pose of a rover is found, pose tracking is done by finding, out of the three available poses, the one closest to the last known pose.
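The second stage, nearest-pose tracking, amounts to a single distance comparison per update. A minimal sketch (names are illustrative, not the actual node's API):

```python
import math

def track_pose(last_pose, candidate_poses):
    """Stage-two tracking: out of the (x, y, phi) tuples reported on
    map/json, pick the one whose position is closest to the rover's
    last known pose."""
    return min(candidate_poses,
               key=lambda p: math.hypot(p[0] - last_pose[0],
                                        p[1] - last_pose[1]))

candidates = [(1.0, 1.0, 90.0), (2.0, 0.5, 10.0), (0.2, 0.3, 45.0)]
print(track_pose((0.0, 0.0, 40.0), candidates))  # (0.2, 0.3, 45.0)
```

This works because rovers move only a small distance between successive camera frames, so the closest reported pose is almost always the rover's own.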

This node has the following additional responsibilities:

  • To filter out the rover’s own pose from the location information on the topic map/json
  • To publish the filtered information on the rover’s own topic <bot_name>/filtered/map/json, which is further used for navigating the rover in the arena.

Box and warehouse selection

A rover needs to decide which box to pick, so we went with the greedy approach of choosing the box closest to the rover’s current position. The warehouse is determined by the colour of the selected box. Once a rover decides which box to pick, it starts navigating towards it. However, multiple rovers may approach the same box. To deal with this, a rover keeps checking whether the box is still available, based on the coordinates in the location information message on the topic map/json. If the box becomes unavailable (i.e., it has been picked by another rover), the rover searches for another box to pick.
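The greedy choice and the availability re-check described above can be sketched like this (function names and the data layout mirror the map/json structure but are our own, illustrative ones):

```python
import math

def pick_box(rover_xy, boxes_by_colour):
    """Greedy selection: nearest box of any colour; the target
    warehouse then follows from the chosen colour."""
    best = None
    for colour, boxes in boxes_by_colour.items():
        for box in boxes:
            d = math.hypot(box[0] - rover_xy[0], box[1] - rover_xy[1])
            if best is None or d < best[0]:
                best = (d, colour, box)
    return (best[1], best[2]) if best else None

def still_available(colour, box, boxes_by_colour):
    """Re-check against the latest map/json snapshot; if the box is
    gone, another rover got there first and we must reselect."""
    return box in boxes_by_colour.get(colour, [])

boxes = {"red": [(4, 4)], "green": [(1, 2)], "blue": [(5, 0)]}
print(pick_box((0, 0), boxes))  # ('green', (1, 2))
```

Because each rover runs this locally on the shared map/json feed, no central allocator is needed: conflicts resolve themselves when the slower rover sees the box disappear and reselects.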

Navigation

Navigation for each rover is handled via the Nav2 package for ROS. Nav2 works with an occupancy map, which represents the free and obstructed areas of the arena as a grid: a cell value of 0 represents free space, while a value of 100 signifies an obstacle at that location. The occupancy map is created using the location information from the topic <bot_name>/filtered/map/json, in the form of a nav_msgs/OccupancyGrid message, and is published on the topic <bot_name>/occupancy_map.
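Filling the grid from the filtered coordinates can be sketched as follows. This is a simplified, dependency-free illustration: the real node would populate a nav_msgs/OccupancyGrid message (its data field is exactly such a flat row-major list), and the obstacle radius here is an assumed inflation value, not the one used in the project:

```python
def build_occupancy_grid(side_m, resolution_m, obstacles, radius_m):
    """Flat row-major grid matching nav_msgs/OccupancyGrid.data:
    0 = free, 100 = occupied. Cells whose centre lies within
    radius_m of an obstacle (a box, a wall point, or another
    rover) are marked occupied."""
    n = int(side_m / resolution_m)
    grid = [0] * (n * n)
    for r in range(n):
        for c in range(n):
            x = (c + 0.5) * resolution_m   # cell-centre coordinates
            y = (r + 0.5) * resolution_m
            for ox, oy in obstacles:
                if (x - ox) ** 2 + (y - oy) ** 2 <= radius_m ** 2:
                    grid[r * n + c] = 100
                    break
    return grid

# 2.5 m arena at 0.5 m/cell, with a single obstacle in the middle
grid = build_occupancy_grid(2.5, 0.5, [(1.25, 1.25)], 0.3)
```

Because each rover's grid is built from its own filtered map (with its own pose removed), a rover never treats itself as an obstacle.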

Handling boxes

When a box is picked up, it needs to be moved from the ground to the rover, and vice versa when dropping. Ignition Gazebo does not provide a built-in way to do this, so we created our own custom Gazebo plugin, which removes the box model from the arena and recreates it on the rover while picking. Similarly, while dropping, it removes the box model from the rover and spawns it back in the arena.

All of the tasks discussed above are tied together to form a simulation flow using a behavior tree, which models the flow of processes and action triggers in the form of a hierarchical structure. The tree also takes care of recovery strategies in case of a task failure.
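The behaviour-tree idea can be illustrated with a minimal hand-rolled sketch (not the actual library or tree used in the system): a Sequence ticks its children in order and fails fast, while a Fallback supplies the recovery branch when a task fails.

```python
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Sequence:
    """Succeeds only if every child succeeds, ticked in order."""
    def __init__(self, *children): self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:
    """Tries children in order until one succeeds (recovery branch)."""
    def __init__(self, *children): self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

class Action:
    """Leaf node wrapping a task; fn() returns True on success."""
    def __init__(self, name, fn): self.name, self.fn = name, fn
    def tick(self): return SUCCESS if self.fn() else FAILURE

# Hypothetical flow: pick -> navigate -> drop, with box reselection
# as the recovery if picking fails (here pick is forced to fail).
tree = Fallback(
    Sequence(Action("pick_box", lambda: False),
             Action("navigate", lambda: True),
             Action("drop_box", lambda: True)),
    Action("reselect_box", lambda: True),
)
print(tree.tick())  # SUCCESS, via the recovery branch
```

In the actual simulation each leaf triggers a ROS action or service call, and the recovery branches implement the failure-handling strategies mentioned above.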

Conclusion

In this blog, we have shown how a decentralized collaborative multi-rover system can be deployed for an object segregation task. Since each rover makes its own decisions and adapts to the other rovers, the system scales to many more rovers with minimal effort. This kind of decentralized collaborative system can be used for everything from warehouse management to space exploration, providing autonomy with a touch of flexibility and resilience.

Disclaimer: The statements and opinions expressed in this blog are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
