MARVIN’s Head Pt 1.

Gene Foxwell
7 min read · May 19, 2018


ROS rqt_graph output for MARVIN’s current configuration (running in a Gazebo Simulation)

continued from previous article …

My Approach to Robotics

This title is a bit of a misnomer; it would be more accurate to label this "Rodney Brooks' approach to robotics that I stole and claimed as my own". The design principles for MARVIN's software are inspired by the paper "Intelligence without representation" and the work described in Minimalist Mobile Robotics.

https://youtu.be/YtNKuwiVYm0

Quoting from "Intelligence without representation", the biggest takeaways are these guidelines:

• A Creature must cope appropriately and in a timely fashion with changes in its dynamic environment.

• A Creature should be robust with respect to its environment; minor changes in the properties of the world should not lead to total collapse of the Creature’s behavior; rather one should expect only a gradual change in capabilities of the Creature as the environment changes more and more.

• A Creature should be able to maintain multiple goals and, depending on the circumstances it finds itself in, change which particular goals it is actively pursuing; thus it can both adapt to surroundings and capitalize on fortuitous circumstances.

• A Creature should do something in the world; it should have some purpose in being.

It is of course not a perfect mapping: I am clearly using representations of the environment, and I am clearly not using a minimalist approach everywhere. What this does provide is a framework for my thinking. The idea going forward will be to layer different behaviors on top of each other, with complex behaviors overriding simple behaviors.

Unlike the linked work, however, I won't (for the most part) consider the real world directly as input for these behaviors; instead, I will treat the pre-processed output from the various modules of the robot's perception system as input. These outputs will be shared on a structured blackboard (which will be explained in the next article) and will be available for the modules to use at any given time based on the current state of the robot.
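To make that concrete, here is a rough Python sketch of the kind of structured blackboard I have in mind. The class name, fields, and "laserscan" entry are placeholders of my own, not MARVIN's actual API; the real implementation is covered in the next article.

```python
# Rough sketch of a structured blackboard: a simple dictionary-backed
# store keyed by perception module name. All names here are placeholders.
import threading
import time


class Blackboard(object):
    """Thread-safe store for pre-processed perception outputs."""

    def __init__(self):
        self._lock = threading.Lock()
        self._entries = {}

    def post(self, source, data):
        # Each perception module posts its latest output under its own key.
        with self._lock:
            self._entries[source] = {"data": data, "stamp": time.time()}

    def read(self, source):
        # Behaviors read whatever is currently available; they never block
        # waiting for a perception module to produce fresh data.
        with self._lock:
            entry = self._entries.get(source)
            return entry["data"] if entry else None


# A behavior layer consults the blackboard rather than the raw sensors:
bb = Blackboard()
bb.post("laserscan", {"min_range": 0.45})
scan = bb.read("laserscan") or {}
if scan.get("min_range", float("inf")) < 0.5:
    print("obstacle close by: the simple avoidance layer takes over")
```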

First things first, however: I shall describe the ROS packages that my work so far has been built on.

MARVIN and ROS

At its core, MARVIN uses ROS to interact with and control the behavior of the robot. That nice big diagram at the start of this article is a representation of the current version of MARVIN's ROS nodes and how they all fit together (mostly). MARVIN's software stack has a lot of moving parts, too many to fit into a single article (at least in my opinion), so I am going to eat the elephant one bite at a time, so to speak, and split it up into several parts.

In the remainder of this article I am going to briefly go over each of the ROS packages that I have used so far in MARVIN’s construction.

In the next article I will provide a brief overview of all the custom ROS Nodes and services that went into this robot.

Finally, in the third article, I'll go over the UX and the basic middleware that connects the React front end with the ROS backend to allow the user to control the robot.

It’s a big task (for me anyway), so let’s get started!

Primary Package — rover_platform

The central package in MARVIN’s ROS setup is the rover_platform package. I’ll admit the name isn’t exactly inspiring; I started work on this before I really had a name or even a goal for the robot, and it was originally imagined as a rover, not a TurtleBot derivative.

I’ll be leaving most of the details about this package to the next article, as a lot of what’s in here is custom code intended to interact with pre-existing ROS packages, but there are a few things to note here:

MARVIN Mapping its world
  1. This is where the URDF used to simulate MARVIN in the Gazebo environment is kept.
  2. Parameters and launch files for rviz and the ROS Navigation Stack are also present in this package.
  3. Shell scripts and launch files required to run MARVIN in either simulation or “hardware” mode live here as well (a sketch of starting a launch file programmatically follows below).
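As a hedged illustration of how those launch files get used, the snippet below starts a launch file through the roslaunch Python API. The file path is a placeholder, not the actual layout of rover_platform; in practice a shell script calling roslaunch does the same job.

```python
# Sketch: starting one of the rover_platform launch files from Python
# via the roslaunch API. The path below is a placeholder.
import roslaunch

uuid = roslaunch.rlutil.get_or_generate_uuid(None, False)
roslaunch.configure_logging(uuid)
launch = roslaunch.parent.ROSLaunchParent(
    uuid, ["/path/to/rover_platform/launch/simulation.launch"])  # placeholder path
launch.start()
try:
    launch.spin()  # block until shutdown (Ctrl-C)
finally:
    launch.shutdown()
```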

ROS Navigation Stack

Path planning for MARVIN is handled by the ROS Navigation Stack. This stack provides local and global planning services, along with generating the associated cost maps based on the currently available Occupancy Map. Furthermore, it provides a nice API that allows other nodes to set new navigation goals and monitor their progress via its action server.
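As a minimal example of that API, the snippet below sends a single goal to the move_base action server and waits for the result (the goal pose itself is arbitrary):

```python
# Minimal example: sending a navigation goal to the ROS Navigation Stack's
# move_base action server and waiting for the result.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("send_nav_goal")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0   # arbitrary goal: 1 m ahead in the map frame
goal.target_pose.pose.orientation.w = 1.0

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo("Navigation finished with state %d", client.get_state())
```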

The Navigation Stack does not work “out of the box” and did require some configuration. The configuration files used can be found here. Most of these parameters have been tuned based on the behavior seen in the Gazebo simulation and will need to be updated again once the physical robot is constructed. Of key importance, however, is the rolling_window parameter. This was set to true to allow the cost map to be updated as the map is being built; without it, I’d need to have the robot build the map first and only then use the Navigation Stack to move it around, which does not make for a good user experience in my opinion.
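As a small sanity-check sketch (assuming the default move_base and costmap namespaces, which may differ from my actual configuration), a node can confirm the setting at runtime:

```python
# Sketch: verifying the rolling_window setting on the local costmap.
# The parameter path assumes the default move_base / costmap_2d namespaces.
import rospy

rospy.init_node("check_costmap_config")
rolling = rospy.get_param("/move_base/local_costmap/rolling_window", False)
if not rolling:
    rospy.logwarn("local costmap is not a rolling window; "
                  "mapping and navigating at the same time will not work well")
```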

RTAB Map

Short for Real-Time Appearance-Based Mapping, this module is being used to perform SLAM (Simultaneous Localization and Mapping) for MARVIN. It is capable of generating 3D maps, by combining the depth clouds collected at each of its graph nodes into a full 3D model, as well as generating Occupancy Maps. I will mostly be using this module for its ability to generate Occupancy Maps.

Typically, based on what I’ve seen, folks would use the gmapping package in this case; however, I found I wasn’t getting satisfactory results from that package. The downside of this choice is that RTAB Map takes considerably more resources than gmapping, but as I am running this all on a Jetson I appear (at present) to have sufficient computational cycles to handle it.
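Downstream nodes treat the result like any other occupancy grid. Here’s a small example subscriber, assuming the grid has been remapped to the conventional /map topic (RTAB Map’s default topic name may differ):

```python
# Example: consuming the occupancy grid published by RTAB Map.
# Assumes the grid is available on the conventional /map topic.
import rospy
from nav_msgs.msg import OccupancyGrid


def on_map(grid):
    known = sum(1 for cell in grid.data if cell >= 0)  # -1 marks unknown cells
    rospy.loginfo("map %dx%d at %.2f m/cell, %d cells explored",
                  grid.info.width, grid.info.height,
                  grid.info.resolution, known)


rospy.init_node("map_listener")
rospy.Subscriber("/map", OccupancyGrid, on_map)
rospy.spin()
```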

Occupancy Map Generated by RTAB Map

Create Autonomy

This package is used to provide an interface between the Jetson TX2 and the iRobot Create 2. It also provides the mesh for the iRobot Create 2 in the Gazebo simulation.

Its primary use is controlling the linear and angular velocities of the Create 2 via the /cmd_vel topic, which is published to by both the remote control system and the ROS Navigation Stack. It is also used for local collision detection by subscribing to the front IR sensors and the two front bumpers. The music-playing abilities and the various LED-related functions are not used.
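For example, any node that wants to drive the base only needs to publish geometry_msgs/Twist messages on /cmd_vel; a minimal sketch:

```python
# Example: commanding the Create 2 base through the /cmd_vel topic.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("drive_forward")
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)

rate = rospy.Rate(10)  # the base expects a steady stream of commands
cmd = Twist()
cmd.linear.x = 0.1    # m/s forward
cmd.angular.z = 0.0   # rad/s
while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```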

Depth Image to Laserscan

As mentioned in the article on hardware, I am trying to avoid having to place an actual laser scanner on the robot in order to keep the total costs down. However, I still need laser scan input of some form in order to generate an occupancy map.

That’s where the depthimage_to_laserscan package comes in. This package consumes the output from the Orbbec Astra depth camera and outputs a simulated 2D laserscan. This laserscan is then published on the /scan topic for use by RTAB Map and the ROS Navigation stack.
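From the consumer’s point of view this looks just like a real lidar; for example, a simple node can watch /scan for nearby obstacles:

```python
# Example: reading the simulated 2D scan produced by depthimage_to_laserscan.
import rospy
from sensor_msgs.msg import LaserScan


def on_scan(scan):
    # Ignore out-of-range returns, then report the closest obstacle in view.
    valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
    if valid:
        rospy.loginfo("closest obstacle: %.2f m", min(valid))


rospy.init_node("scan_listener")
rospy.Subscriber("/scan", LaserScan, on_scan)
rospy.spin()
```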

ROS Astra Camera

The ros_astra_camera package is used to provide an interface between ROS and the depth camera. It publishes all the data RTAB Map requires to generate the occupancy map (and 3D map as needed), as well as publishing a standard RGB image which can be used by the object recognition systems, the standard UX, and the telepresence modules.
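Any of those consumers can pick up the RGB stream with cv_bridge. The topic name below assumes the driver’s default /camera/rgb namespace, which may be remapped in practice:

```python
# Example: grabbing RGB frames from the Astra camera for downstream use.
# The topic name assumes the driver's default /camera/rgb namespace.
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()


def on_image(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    rospy.loginfo_throttle(5, "got %dx%d RGB frame" % (frame.shape[1], frame.shape[0]))


rospy.init_node("rgb_listener")
rospy.Subscriber("/camera/rgb/image_raw", Image, on_image)
rospy.spin()
```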

Rethink Robotics ROSNodejs

This package, provided by Rethink Robotics, is used to provide a simple API between the React web app I am using for the front end and the standard ROS nodes being used to control the robot. Currently this system makes use of nodejs, expressjs, and socket.io to perform the various functions needed for the user interface.

ROS Deep Learning

This module, provided by NVIDIA, is used to allow ROS to interact with TensorRT, or with Caffe models built with NVIDIA DIGITS. I haven’t fully utilized these yet (for the current version I simply intend to use the out-of-the-box object recognition model), but the goal is to integrate a more advanced version of the inference project I did for my Udacity Nanodegree here. The idea is to use a deep learning model to allow MARVIN to make an educated guess about what “class” of room it happens to be in at any given time. This would allow the user to give commands like “Go to the Living Room” without having to explicitly tell MARVIN where the living room is; it can simply use the exploration algorithm to search until the deep learning model finds something that matches.
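To sketch the room-guessing idea in Python: the topic name, class ids, and label-to-room table below are all placeholders of my own; none of this wiring exists yet.

```python
# Sketch of the room-guessing idea: map classifier output to a room label
# the user can refer to. Topic name, class ids, and room table are placeholders.
import rospy
from vision_msgs.msg import Classification2D

ROOM_BY_CLASS_ID = {831: "living room", 827: "kitchen"}  # placeholder ids, illustrative only


def on_classification(msg):
    if not msg.results:
        return
    best = max(msg.results, key=lambda r: r.score)
    room = ROOM_BY_CLASS_ID.get(best.id)
    if room:
        rospy.loginfo("this looks like the %s", room)


rospy.init_node("room_guesser")
rospy.Subscriber("/imagenet/classification", Classification2D, on_classification)  # placeholder topic
rospy.spin()
```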

That pretty much summarizes the out-of-the-box packages being used for MARVIN. I will likely integrate more as time goes on, but as of this article the list is reasonably exhaustive. The next article will deal with the custom code that was written on top of this stack.
