EZ-RASSOR 2.0: GPS-Denied Autonomous Navigation
Over the course of the Fall 2019 and Spring 2020 academic semesters, we (a team of five Computer Science students at the University of Central Florida) developed an extensive update to the EZ-RASSOR software suite. This project was done in conjunction with the Florida Space Institute and NASA’s Swamp Works.
What is EZ-RASSOR?
EZ-RASSOR is a software suite for controlling regolith-mining robots. These robots are designed to traverse terrain on other worlds, like the Moon or Mars, and collect regolith. Regolith (the equivalent of dirt on other worlds) has a variety of uses for future long-term planetary missions: water can be extracted from it, it can be mixed with plastic for use in 3D printing, and it can be used as a radiation shield for bases. The EZ-RASSOR software currently runs on a simulated robot, but in the future it will run on low-cost physical robots for education and research.
What did NASA want?
The main goal of this project was to add GPS-denied autonomous navigation to EZ-RASSOR. GPS-denied means that a robot running the EZ-RASSOR software should be able to determine its location without using GPS. This is necessary because there aren’t any GPS satellites orbiting other planets. Autonomous navigation means that a robot running the EZ-RASSOR software should be able to drive itself to a destination without the need for human intervention.
What did we deliver?
Our project can be broken up into three major components:
- Localization: determines the robot’s location
- Obstacle detection: determines what obstacles in the environment need to be avoided
- Path planning: navigates the robot to a destination while avoiding detected obstacles
As shown in the figure above, the path planning system receives a target location, an estimate of the robot’s current location, and the detected obstacles in front of the robot. It then produces movement commands to move the robot along a path to the target location.
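To make that data flow concrete, here is a minimal sketch in Python. None of these types or names come from the actual EZ-RASSOR code (which exchanges this information as ROS messages); they are hypothetical and only illustrate what each component produces and consumes.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Pose:
    """Estimated robot pose produced by the localization system."""
    x: float        # meters, in the world frame
    y: float        # meters, in the world frame
    heading: float  # radians

@dataclass
class Obstacle:
    """Nearest obstacle in one viewing direction, from obstacle detection."""
    direction: float  # angle relative to the robot's heading (radians)
    distance: float   # meters to the nearest obstacle in that direction

def plan_step(target: Tuple[float, float],
              pose: Pose,
              obstacles: List[Obstacle]) -> str:
    """Path planning: turn a target location, a pose estimate, and the
    detected obstacles into the next movement command ("forward",
    "turn-left", ...). The WedgeBug logic described later fills in this body."""
    ...
```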
Localization
Without using GPS, accurately determining the location of the robot is a difficult problem. We implemented our localization system by combining two types of localization:
- Relative localization (odometry): measures how far the robot has moved using sensors onboard the robot
- Absolute localization: determines the location of the robot without using prior knowledge of where it has been
To understand what odometry and absolute localization are, it’s best to imagine real-world scenarios where they could be used.
Odometry is like measuring how far your car has moved based on the number of times (and in which direction) its wheels have turned. If you know where the car was when you started counting, you can estimate your current location.
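As a rough sketch of that dead-reckoning idea (not the EZ-RASSOR implementation; the wheel radius and axle length below are made up), estimating a new position from wheel rotations could look like this:

```python
import math

WHEEL_RADIUS = 0.15  # meters (hypothetical)
AXLE_LENGTH = 0.50   # meters between left and right wheels (hypothetical)

def update_pose(x, y, heading, left_rotations, right_rotations):
    """Dead-reckon a new (x, y, heading) for a differential-drive robot
    from how many times each wheel turned since the last update
    (negative values mean the wheel spun backwards)."""
    left_dist = 2 * math.pi * WHEEL_RADIUS * left_rotations
    right_dist = 2 * math.pi * WHEEL_RADIUS * right_rotations
    distance = (left_dist + right_dist) / 2            # forward travel
    heading += (right_dist - left_dist) / AXLE_LENGTH  # change in heading
    x += distance * math.cos(heading)
    y += distance * math.sin(heading)
    return x, y, heading

# Starting at the origin facing +x, both wheels turn once: the robot moves straight ahead.
print(update_pose(0.0, 0.0, 0.0, 1.0, 1.0))
```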
Absolute localization, on the other hand, is like determining your location from landmarks you encounter: if you’re lost walking around a city and end up passing your favorite restaurant, you now know where you are.
Odometry
In our project, we implemented and combined three types of odometry:
- Wheel odometry: estimates the movement of the robot based on the number of times and in which direction its wheels spin
- IMU odometry: estimates the movement of the robot using the robot’s IMU (Inertial Measurement Unit) sensor
- Visual odometry: estimates the movement of the robot based on changes between frames of the robot’s camera (i.e., if an object becomes larger between frames, meaning the robot has moved closer to the object, we can infer the robot moved forward)
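Each of these sources drifts in its own way, so their estimates are combined into one. In practice this kind of fusion is typically done with a Kalman-style filter; the weighted average below is only a simplified illustration, and all the numbers and weights are made up.

```python
def fuse_positions(estimates):
    """Combine several ((x, y), weight) position estimates into one,
    where a higher weight means we trust that odometry source more.
    A real system would typically use an extended Kalman filter."""
    total = sum(weight for _, weight in estimates)
    x = sum(px * weight for (px, _), weight in estimates) / total
    y = sum(py * weight for (_, py), weight in estimates) / total
    return x, y

wheel = ((10.2, 4.9), 1.0)   # wheel odometry: slips in loose regolith
imu = ((10.0, 5.1), 2.0)     # IMU odometry
visual = ((10.1, 5.0), 3.0)  # visual odometry
print(fuse_positions([wheel, imu, visual]))
```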
Absolute Localization
We implemented absolute localization in two parts: Cosmic GPS and Park Ranger.
Cosmic GPS
Cosmic GPS refers to estimating the robot’s location based on the positions of stars in the sky, similar to how sailors determined their location at sea before the invention of GPS.
An upward-facing camera takes an image of the sky, identifies the stars that are present in the image, then uses the relative positions of the stars to calculate the location of the robot. Since there is data freely available about where the stars are expected to be at any given time, the timestamp of the camera’s image can be used to determine where certain stars should be relative to each other when the image was taken.
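One small piece of that pipeline is figuring out which stars appear in the image by comparing measured angular separations against a catalog. The toy sketch below shows only that matching idea; the real star-identification and position-fix steps are considerably more involved, and the three-star catalog is just an illustration.

```python
import itertools
import math

# Toy catalog: star name -> (right ascension, declination) in degrees.
CATALOG = {
    "Sirius": (101.3, -16.7),
    "Canopus": (96.0, -52.7),
    "Vega": (279.2, 38.8),
}

def angular_separation(a, b):
    """Angular distance in degrees between two sky positions."""
    ra1, dec1, ra2, dec2 = map(math.radians, (*a, *b))
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

def candidate_pairs(measured_separation, tolerance=0.5):
    """Return catalog star pairs whose separation matches the angle
    measured between two unidentified stars in the camera image."""
    return [
        (name1, name2)
        for (name1, pos1), (name2, pos2) in itertools.combinations(CATALOG.items(), 2)
        if abs(angular_separation(pos1, pos2) - measured_separation) < tolerance
    ]

# Sirius and Canopus are roughly 36 degrees apart in the sky.
print(candidate_pairs(36.0, tolerance=2.0))
```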
Park Ranger
Park Ranger refers to estimating the robot’s location by comparing its surroundings to an overhead map.
To illustrate how Park Ranger works, consider a real-life example: you wake up lost in the middle of a forest, but you have a map of the area. To determine your location, you start walking around and observing your surroundings. At some point, you stumble upon a river, so you know you must be along one of the rivers on the map. As you keep walking, you slowly narrow down the possible places you could be. Park Ranger uses the same idea, except that instead of looking for landmarks, the robot compares the terrain it can see to an overhead map of the area taken by satellites.
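Here is a heavily simplified sketch of that matching step, assuming the overhead map and the robot’s local view are both grids of terrain heights. It uses a brute-force search purely for illustration, whereas the real system refines its estimate over time, much like the forest example.

```python
import numpy as np

def locate_on_map(overhead_map, local_patch):
    """Slide the robot's local elevation patch across an overhead
    elevation map and return the (row, col) offset that matches best.

    overhead_map: 2D array of terrain heights from satellite data.
    local_patch:  smaller 2D array of heights built from the depth camera.
    """
    best_score, best_pos = float("inf"), None
    rows, cols = local_patch.shape
    for r in range(overhead_map.shape[0] - rows + 1):
        for c in range(overhead_map.shape[1] - cols + 1):
            window = overhead_map[r:r + rows, c:c + cols]
            score = np.mean((window - local_patch) ** 2)  # lower = better match
            if score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Tiny example: find a 2x2 crater-like dip inside a 4x4 map.
terrain = np.array([[0, 0, 0, 0],
                    [0, -1, -1, 0],
                    [0, -1, -1, 0],
                    [0, 0, 0, 0]], dtype=float)
patch = np.array([[-1, -1],
                  [-1, -1]], dtype=float)
print(locate_on_map(terrain, patch))  # (1, 1)
```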
Obstacle Detection
For our project, obstacle detection involves processing an image of the environment in front of the robot to find the nearest obstacles in every direction the robot can see. Our robot uses a depth camera to obtain a 3D view of the surrounding environment.
Detecting obstacles in a moon-like environment is challenging due to the uneven terrain; many obstacle detection algorithms assume the robot is moving along flat ground. To detect the nearest obstacle in a certain direction, we look at changes between points in that direction. We use two methods for detecting obstacles:
- Hike: if there is a large gap in distance between points in a direction, there could be a hole.
- Slope: if there is a large change in height over a short distance between points in a direction, there could be an obstacle.
To determine the closest obstacle in a given direction, we combine the two methods by taking the closer of the obstacles detected by the hike and slope methods in that direction.
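A simplified version of those two checks, applied to a single ray of depth points ordered from nearest to farthest, might look like the sketch below. The thresholds are made up, and the real system works over the whole depth image rather than one ray at a time.

```python
def nearest_obstacle(points, hike_threshold=0.5, slope_threshold=0.6):
    """Return the distance to the nearest obstacle along one viewing
    direction, or None if no obstacle is detected.

    points: list of (distance, height) pairs ordered outward from the robot.
    """
    for (d1, h1), (d2, h2) in zip(points, points[1:]):
        gap = d2 - d1
        if gap > hike_threshold:
            return d1  # "hike": large jump in distance suggests a hole
        if gap > 0 and abs(h2 - h1) / gap > slope_threshold:
            return d1  # "slope": steep change in height suggests a rock or wall
    return None

# Flat ground, then a sudden 0.8 m jump in distance: likely a crater edge at 1.4 m.
ray = [(1.0, 0.0), (1.2, 0.0), (1.4, 0.01), (2.2, -0.3)]
print(nearest_obstacle(ray))  # 1.4
```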
Path Planning
For our project, path planning can be defined as simply getting from point A to point B while avoiding obstacles like rocks and craters. We used the WedgeBug algorithm to accomplish this. WedgeBug is a path planning algorithm that mimics how bugs navigate only using what they know about their immediate surroundings.
The WedgeBug algorithm works as follows:
- If there are no obstacles detected, move directly towards the destination.
- If there are obstacles detected, move in the direction that will avoid obstacles while minimizing the distance to the destination.
- If no visible direction is safe to move in, turn in place to evaluate more potential directions.
This process is repeated until the destination has been reached.
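One iteration of that loop might look like the following sketch. The robot methods used here (detect_obstacles, bearing_to, drive_toward, turn_in_place) are hypothetical placeholders, not the actual EZ-RASSOR API.

```python
def wedgebug_step(robot, target, safe_distance=1.5):
    """Run one iteration of a WedgeBug-style decision loop.

    `robot` is assumed to report the nearest obstacle in each visible
    direction and to accept simple movement commands (hypothetical API).
    """
    obstacles = robot.detect_obstacles()       # one entry per visible direction
    target_bearing = robot.bearing_to(target)  # angle from current heading to the target

    if all(o.distance > safe_distance for o in obstacles):
        robot.drive_toward(target_bearing)     # nothing in the way: head straight there
        return

    # Otherwise drive in the safe direction that deviates least from the target bearing.
    safe_directions = [o.direction for o in obstacles if o.distance > safe_distance]
    if safe_directions:
        robot.drive_toward(min(safe_directions, key=lambda d: abs(d - target_bearing)))
    else:
        robot.turn_in_place()                  # no safe direction in view: look around
```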
Outcome
By the end of the project, the simulated robot was able to successfully navigate moon-like environments autonomously without the use of GPS:
Who worked on the project?
This project was a team effort by five UCF Computer Science students. Everyone led some component of the project and assisted with others. Our team consisted of:
- Jordan Albury: Odometry Lead, Obstacle Detection, Path Planning
- Shelby Basco: Project Manager, Absolute Localization, Odometry
- John Hacker: Obstacle Detection Lead, Path Planning
- Mike Jimenez: Path Planning Lead, Obstacle Detection
- Scott Scalera: Absolute Localization Lead, Path Planning
What’s next?
This is just the second year of EZ-RASSOR development. Many more university teams and researchers will work on improving the software for years to come. Some of the functionality that could be implemented in the immediate future includes:
- Integrating the EZ-RASSOR software with a physical robot
- Integrating the EZ-RASSOR software and simulated robot with the Simulation Exploration Experience (SEE) project
- Implementing mission-specific behavior, such as coordinating robots to perform a specific task once they reach a destination
Acknowledgments
We’d like to thank Mike Conroy of the Florida Space Institute and Kurt Leucht of NASA for meeting with us regularly and providing feedback throughout the project. We’d also like to thank Ron Marrero and Tiger Sachse from the original EZ-RASSOR team for answering any questions we had about the original EZ-RASSOR codebase.