Robots Leading Multiple Teammates to Multiple Goals

Brian Reily
Published in Mines Robotics
Aug 26, 2020

This post summarizes our research paper “Leading Multi-Agent Teams to Multiple Goals While Maintaining Communication” by Brian Reily, Christopher Reardon, and Hao Zhang. This work was presented at Robotics: Science and Systems 2020.

In many real-world applications, robots must be able to influence teammates to follow them. For example, robots may explore unknown environments and then possess information that human teammates do not have. In disaster response, robots may find survivors who do not know how to proceed to a safe location. In both cases, robots must be able to behave in a way that leads these followers to goal positions while remaining connected as a group: if a drone is leading disaster survivors, it must stay within visual range so that the survivors are able to follow it to safety. In large environments, multiple robots must work as a team to lead multiple groups of followers to multiple goals.

Existing research has largely been limited to leading agents to a single goal. This has been evaluated in the context of robots leading automated tours, as well as herding groups of animals. While the problem of maintaining connectivity between robotic teammates has been addressed by a number of approaches, it has not been studied in the context of leading followers to multiple goals. We propose the research problem of multiple leaders guiding a multi-agent team to multiple goal positions while maintaining communication.

We propose an approach to adaptively lead a multi-agent team to multiple goal positions while maintaining communication. This approach generates an action for each leader at each time step, reacting to possible changes in the environment and allowing each leader to make progress towards a goal while maintaining communication with its followers.

We begin by defining three utilities that motivate our leader robots. First, we consider the utility value of a leader moving in the direction of a goal - for a leader to be effective, it must lead its followers somewhere useful. Second, we consider the utility of a leader maintaining communication with its followers. We assume that agents within a certain range are able to communicate - e.g., disaster victims are able to see a drone within some visual distance. We use this range to construct a communication graph, and base this communication utility on the algebraic connectivity of the graph. Third, we consider the utility of a leader maintaining communication with other leaders, as robots must often be able to coordinate as a team.
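To make the communication utility concrete, here is a minimal sketch (in Python, not the code from our implementation) of how algebraic connectivity can be computed from agent positions and an assumed communication range; the function name, the range threshold, and the example values are illustrative only.

```python
import numpy as np

def algebraic_connectivity(positions, comm_range):
    """Second-smallest eigenvalue of the graph Laplacian (the Fiedler value)
    of a range-based communication graph; a value > 0 means the graph is connected."""
    # Agents i and j can communicate if they are within comm_range of each other.
    dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    adjacency = (dists <= comm_range).astype(float)
    np.fill_diagonal(adjacency, 0.0)
    # Graph Laplacian L = D - A.
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    eigenvalues = np.linalg.eigvalsh(laplacian)  # sorted in ascending order
    return eigenvalues[1]

# Example: one leader and two followers, all mutually within range.
agents = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.5]])
print(algebraic_connectivity(agents, comm_range=2.0))
```

A leader's communication utility can then reward actions that keep this value above zero, i.e., that keep the communication graph connected.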

We define an optimization-based approach to assign weights to each of these identified utilities. We also introduce constraints to limit the allowed values of these weights, as well as regularization terms to influence how weights are assigned. These regularization terms encourage leaders to focus attention on only a single goal, for only a single leader to maintain communication with each follower, and for leaders to remain consistent in their choices (e.g., if a leader is moving towards one goal, it is counterproductive for it to switch to a different one). Our full formulation is below, with W, V, and U representing weight matrices and R, Q, and S representing the three defined utilities.

Our objective function to learn weight matrices W, V, and U that identify an optimal combination of the three identified utilities.
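As a rough schematic of the structure of that objective (not the paper's exact equation or notation), it trades off the three utilities against the regularization terms, roughly along these lines, where the λ weights and Ω regularizer symbols are placeholders:

```latex
% Illustrative sketch only -- see the paper for the exact objective and constraints.
\max_{\mathbf{W},\,\mathbf{V},\,\mathbf{U}}\;
    \operatorname{tr}\!\left(\mathbf{W}^{\top}\mathbf{R}\right)
  + \operatorname{tr}\!\left(\mathbf{V}^{\top}\mathbf{Q}\right)
  + \operatorname{tr}\!\left(\mathbf{U}^{\top}\mathbf{S}\right)
  - \lambda_{1}\,\Omega_{\text{goal}}(\mathbf{W})
  - \lambda_{2}\,\Omega_{\text{follower}}(\mathbf{V})
  - \lambda_{3}\,\Omega_{\text{consistency}}(\mathbf{W},\mathbf{V},\mathbf{U})
  \quad\text{subject to constraints on } \mathbf{W},\,\mathbf{V},\,\mathbf{U}
```

Here the Ω terms stand in for the regularizers described above: focusing each leader on a single goal, assigning a single leader to each follower, and keeping leaders consistent over time.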

We then introduce a solution method based on augmented Lagrangian multipliers, which we prove converges to the optimal solution. We can then use the identified weights to generate actions for each leader. For full details on our approach and solution, see our paper.
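As a quick illustration of the general technique (not our actual solver or derivation), an augmented Lagrangian method alternates between minimizing a penalized objective and updating the multipliers. The toy problem, step sizes, and penalty parameter below are invented for illustration:

```python
import numpy as np

def augmented_lagrangian(f_grad, h, h_grad, x0, rho=10.0,
                         outer_iters=20, inner_iters=200, lr=0.01):
    """Generic augmented Lagrangian loop for min f(x) subject to h(x) = 0.
    Inner loop: gradient descent on the augmented Lagrangian.
    Outer loop: multiplier update mu <- mu + rho * h(x)."""
    x, mu = x0.astype(float), 0.0
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            # Gradient of f(x) + mu * h(x) + (rho / 2) * h(x)^2
            grad = f_grad(x) + (mu + rho * h(x)) * h_grad(x)
            x = x - lr * grad
        mu = mu + rho * h(x)  # dual (multiplier) update
    return x

# Toy example: minimize ||x||^2 subject to x[0] + x[1] - 1 = 0.
f_grad = lambda x: 2 * x
h = lambda x: x[0] + x[1] - 1.0
h_grad = lambda x: np.array([1.0, 1.0])
print(augmented_lagrangian(f_grad, h, h_grad, x0=np.array([0.0, 0.0])))
# Converges near [0.5, 0.5], the analytic optimum.
```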

We evaluated our approach in a high-fidelity simulator, where we defined leader robots that used our proposed approach and follower robots that moved to stay close to neighboring robots. The figure below shows qualitative results from one such simulation, with drones acting as leader robots and Husky robots acting as followers. This simulation defined two goals, with leaders reaching each in the bottom two frames.

Qualitative results from a high-fidelity simulator.
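The follower behavior in these simulations is intentionally simple. The sketch below shows one common way to write such a rule (a small step toward the centroid of in-range neighbors); the range and step size are chosen arbitrarily rather than taken from our simulation code.

```python
import numpy as np

def follower_step(follower_pos, all_positions, comm_range=2.0, step_size=0.1):
    """Move a follower a small step toward the centroid of the agents
    it can currently communicate with (those within comm_range)."""
    dists = np.linalg.norm(all_positions - follower_pos, axis=1)
    in_range = all_positions[(dists > 0) & (dists <= comm_range)]
    if len(in_range) == 0:
        return follower_pos  # isolated: hold position
    direction = in_range.mean(axis=0) - follower_pos
    norm = np.linalg.norm(direction)
    return follower_pos if norm == 0 else follower_pos + step_size * direction / norm
```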

We also collected extensive quantitative results by simulating our approach in MATLAB. We defined three scenarios, consisting of 3 leaders / 9 followers / 3 goals, 4 / 16 / 4, and 5 / 25 / 5. Below, we report the average distance from each follower to the nearest goal, as well as the number of time steps required for agents to reach goals in each scenario. Our full approach is shown in blue, with red indicating our approach without the introduced regularization terms, tan indicating a greedy approach where leaders move directly towards goals, and black indicating a random approach.

We introduced the research problem of multiple robots tasked with leading teammates to multiple goals while maintaining communication. We proposed an approach based on regularized optimization to find an optimal balance of the competing utilities that influence a leader’s actions, enabling leaders to make progress towards goals while preserving communication with followers and fellow leaders. Simulation results show that our approach is effective at enabling robots to lead multi-agent teams.

Full Paper: Leading Multi-Agent Teams to Multiple Goals While Maintaining Communication. Brian Reily, Christopher Reardon, and Hao Zhang. Robotics: Science and Systems (RSS), 2020.

Brian Reily
PhD Candidate in robotics at the Colorado School of Mines, working with the Human-Centered Robotics Lab. See my work @ brianreily.com