MARVIN’s Head Pt. 2

Gene Foxwell
5 min read · May 20, 2018


continued from previous article …

In this article I will briefly go over the custom ROS code that I have written for MARVIN. Details of each module will be explored in future articles.

Keep in mind that all of these modules are currently a work in progress: the functionality is present, but it has not been optimized, and in some cases I already have plans to change it (especially the “SubsumptionCtrl” module, as I am not convinced it fully captures the abstraction I was originally aiming for).

The Blackboard

At the center of MARVIN’s design is the Blackboard. As the name suggests, this serves as the central data structure for MARVIN’s implementation of the Blackboard design pattern.

Blackboard Graph

Internally, the Blackboard represents the data as a simple graph where each node represents an area on the map. New nodes are added to the Blackboard as MARVIN moves around in the world. When observations occur (say a user names the area that MARVIN is in, or an Object Detector detects some objects), this data is written to the node that corresponds to MARVIN’s current location on the map.
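The article doesn’t show the Blackboard’s internals, so here is a minimal sketch of what such a graph might look like in Python. The field and method names (`AreaNode`, `observe`, etc.) are my own assumptions for illustration, not MARVIN’s actual code:

```python
from dataclasses import dataclass, field

@dataclass
class AreaNode:
    """One area on the map; observations made there are stored on the node."""
    node_id: int
    x: float
    y: float
    name: str = ""                               # e.g. "Kitchen", set by a user
    objects: set = field(default_factory=set)    # labels from an object detector
    neighbors: set = field(default_factory=set)  # ids of adjacent areas

class Blackboard:
    """Central store: a graph of areas keyed by node id."""
    def __init__(self):
        self.nodes = {}
        self.current_id = None

    def add_area(self, node_id, x, y):
        """Add a new area as the robot moves, linked to the area it just left."""
        self.nodes[node_id] = AreaNode(node_id, x, y)
        if self.current_id is not None:
            self.nodes[self.current_id].neighbors.add(node_id)
            self.nodes[node_id].neighbors.add(self.current_id)
        self.current_id = node_id

    def observe(self, name=None, objects=()):
        """Write an observation to the robot's current area."""
        node = self.nodes[self.current_id]
        if name is not None:
            node.name = name
        for label in objects:
            node.objects.add(label)
```

Because observations always land on the current node, queries like “is there a couch here?” reduce to a single dictionary lookup.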

Why do this? MARVIN is by design very location oriented — most of the information it is interested in at any given time is related to what is present where it currently is. Am I in the Kitchen? Is there a couch here? Is Aunt May or Uncle Peter in the room? By structuring the Blackboard so that information queries about the current location are simple I hope to make the rest of MARVIN’s processes more efficient.

Keeping the Blackboard in this format has the added bonus of letting me layer a “higher” level path-finding algorithm on top of the ROS Navigation stack. This works around a limitation of the Navigation stack: it requires a cost map in order to do its planning, and it is not memory efficient to keep a large cost map of the entire world active at any given time. I would rather keep the cost maps somewhat small. By constructing the Blackboard such that any two adjacent nodes can be reached using a smaller cost map, we avoid having to keep a full cost map of the entire environment. Whenever MARVIN wants to go to a location outside its current cost map, it can use the map generated by the Blackboard to produce a list of nodes to follow from its current location to its goal, with the guarantee that the ROS Navigation stack can handle the path planning between the individual nodes.
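As a sketch of that higher-level planner, a plain breadth-first search over the Blackboard’s adjacency structure is enough to produce the node list described above. The function and argument names are hypothetical, and the adjacency is shown as a simple dict of node id to neighbor ids:

```python
from collections import deque

def plan_node_path(adjacency, start_id, goal_id):
    """Breadth-first search over the Blackboard graph.

    Returns a list of node ids from start to goal, or None if the goal
    is unreachable. The local ROS Navigation stack is assumed to handle
    each hop between adjacent nodes using its smaller cost map.
    """
    frontier = deque([start_id])
    came_from = {start_id: None}
    while frontier:
        current = frontier.popleft()
        if current == goal_id:
            # walk back through predecessors to reconstruct the path
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return list(reversed(path))
        for nxt in adjacency[current]:
            if nxt not in came_from:
                came_from[nxt] = current
                frontier.append(nxt)
    return None
```

Dijkstra or A* over edge lengths would be the natural upgrade if hop counts prove too crude a cost.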

Exploration

Autonomous mapping is performed using the Exploration module. This is based on the ideas in the paper A Frontier-Based Approach for Autonomous Exploration. The Exploration node subscribes to the Occupancy Map published by RTAB-Map along with the robot’s current pose estimate. Next, a convolution is done over the Occupancy Map to extract frontier points. Roughly speaking, these are points that meet the following criteria:

  1. The point is a “free point” on the Occupancy Map.
  2. The point has at least one neighbor that is an “Unknown” point on the Occupancy Map.
  3. The point has no neighbors that are “Occupied” on the map.
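The three criteria can be checked with a straightforward neighborhood scan. This naive sketch is equivalent to what the convolution computes, though far less efficient; the cell-value constants follow the usual nav_msgs/OccupancyGrid convention, and the names are my own:

```python
import numpy as np

# Standard nav_msgs/OccupancyGrid cell values
FREE, OCCUPIED, UNKNOWN = 0, 100, -1

def extract_frontier(grid):
    """Return (row, col) cells that are free, border at least one
    unknown cell, and have no occupied neighbors."""
    frontier = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue  # criterion 1: must be a free point
            has_unknown, has_occupied = False, False
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nr, nc = r + dr, c + dc
                    if (dr, dc) == (0, 0) or not (0 <= nr < rows and 0 <= nc < cols):
                        continue
                    if grid[nr, nc] == UNKNOWN:
                        has_unknown = True   # criterion 2
                    elif grid[nr, nc] == OCCUPIED:
                        has_occupied = True  # violates criterion 3
            if has_unknown and not has_occupied:
                frontier.append((r, c))
    return frontier
```

In practice the same result falls out of two convolutions (one counting unknown neighbors, one counting occupied neighbors) over masks of the grid.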

Once the frontier has been extracted, the closest frontier point beyond a pre-set threshold distance from the robot’s current believed location is chosen. This location is transformed into world coordinates and then fed into the system as the current goal.
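Goal selection as described might look like the following sketch, where `min_dist` stands in for the pre-set threshold; all names here are assumptions of mine:

```python
import math

def choose_goal(frontier_cells, robot_cell, min_dist):
    """Pick the closest frontier cell that is at least min_dist away
    from the robot's believed position; None if no cell qualifies."""
    best, best_d = None, float("inf")
    for cell in frontier_cells:
        d = math.dist(cell, robot_cell)
        if min_dist < d < best_d:
            best, best_d = cell, d
    return best
```

The threshold keeps the robot from repeatedly selecting frontier points right under its own footprint.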

If the goal is “close enough” it is fed directly to the ROS Navigation system. Otherwise a path is planned using the pre-existing Blackboard map from the current location to a node close enough to the frontier point that the ROS Navigation system can take over.

Remote Control

MARVIN’s remote control

Next on the list is MARVIN’s remote control system. The ROS node created for this is fairly simple: it takes in string messages corresponding to five different actions (“Forward”, “Backwards”, “Turn Left”, “Turn Right”, and “Stop”), converts each action to a Twist message, and publishes the command on the /cmd_vel topic. In simulation this message is received by the differential drive Gazebo plugin, which performs the corresponding motion on the simulated robot. On hardware it is intercepted by the ca_autonomy package and translated into motor control commands for the iRobot Create 2.
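The string-to-Twist conversion can be sketched as a pure function like the one below. The velocity values are illustrative, not MARVIN’s actual constants; in the real node the returned pair would fill the `linear.x` and `angular.z` fields of a geometry_msgs/Twist published on /cmd_vel:

```python
# Hypothetical mapping from command strings to (linear.x, angular.z)
# velocities in m/s and rad/s.
COMMAND_VELOCITIES = {
    "Forward":    (0.2, 0.0),
    "Backwards":  (-0.2, 0.0),
    "Turn Left":  (0.0, 0.5),
    "Turn Right": (0.0, -0.5),
    "Stop":       (0.0, 0.0),
}

def command_to_twist(command):
    """Translate a remote-control string into velocity components.

    Unrecognized commands map to a stop, which is the safe default
    for a robot driving around a home.
    """
    return COMMAND_VELOCITIES.get(command, (0.0, 0.0))
```

Keeping the mapping in a dict makes it trivial to tune speeds or add new actions without touching the publishing logic.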

Map to Image and Camera Capture

These are two minor utility nodes that collect data from MARVIN and convert it into a format that can be used by the user interface.

The Map To Image node (publishing on the projMapToImage topic), as the name suggests, converts the occupancy map provided by RTAB-Map into an image using OpenCV. In addition to creating an image of the map, this node subscribes to the robot’s current pose and marks MARVIN’s position in the world with a little red square. This is intended to make it easier for users to track the robot when they are not in the same room (as would occur during telepresence).
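A minimal version of that conversion, using NumPy array operations in place of the OpenCV calls the node actually makes; the color choices and marker size here are my own assumptions:

```python
import numpy as np

# Standard nav_msgs/OccupancyGrid cell values
FREE, OCCUPIED, UNKNOWN = 0, 100, -1

def grid_to_image(grid, robot_cell):
    """Render an occupancy grid as an RGB array: white free space,
    black obstacles, grey unknown, with a red square at the robot."""
    h, w = grid.shape
    img = np.full((h, w, 3), 128, dtype=np.uint8)  # grey = unknown
    img[grid == FREE] = (255, 255, 255)
    img[grid == OCCUPIED] = (0, 0, 0)
    r, c = robot_cell
    # small red marker centered on the robot's cell
    img[max(0, r - 1):r + 2, max(0, c - 1):c + 2] = (255, 0, 0)
    return img
```

The resulting array can be encoded as PNG/JPEG (e.g. with OpenCV’s `imencode`) before being shipped to the UI.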

Camera Capture should be self-explanatory — it captures the RGB input of MARVIN’s camera (either simulated, or the Orbbec Astra) and converts it into a format that the user interface can display.

Output from both of these nodes can be seen in the screenshot of the UX provided in the previous section of this article.

I had originally intended to discuss the Subsumption Controller module in this article, but I have decided to dedicate the next article to it instead: first because it is in active development, and second because it is integral to understanding my approach to designing MARVIN’s decision-making systems. This is a big topic (by my standards), so I felt it deserved an article of its own.

