AMAC — Another Step Towards an Autonomous Future

Travis Brashears
6 min read · Jan 21, 2019


We are a team of Berkeley students building autonomous mobile robots.

Autonomous Motion at Cal (AMAC) is a team of Berkeley students working to change the way autonomy is pursued. We aim to accelerate autonomous innovation through our development of revamped mobile robots that will be able to navigate the densely populated UC Berkeley campus.

There is currently a huge influx of capital and research resources in the autonomous vehicle space. Unfortunately, the barrier to accessing this capital and these resources is often very steep for undergraduate students, as these assets are typically funneled into research labs. We wanted to work on actualizing these advances while making them more accessible to students.

We’ve built a 1/8th-scale autonomous RC car from the ground up as an initial project. Our initial goal is for the vehicle to navigate multiple distinct routes and various terrains around campus. These cars should be durable, adaptable, and agile. Our end goal is multi-agent cooperation across a fleet of mobile robots.

Current Progress

All of the code for the AMAC car is available on GitHub, and you can find more info on our website.

Hardware Included

Ouster LiDAR, Inertial Sense IMU, 3 RGB camera assembly, Intel NUC, BLDC motor, and ESC (motor controller)

High Level Scheme

All Implemented Software Modules

For this car, we prioritized sensors that would aid in environment mapping, localization, and nearby object detection, so we chose a LiDAR, an Inertial Navigation System (INS), and 3 cameras. Most of the RC car components were purchased off the shelf so we could focus on sensing, planning, and actuation. The computer, an Intel NUC, was essential for processing all of the sensor data into vehicle control commands. The battery (7-cell, 29.4 V, 6,000 mAh, 144 Wh) was selected to account for the heavy power demand from the computer, drivetrain, and sensors. The design criteria for the car include robust components as well as features such as path planning and object detection, following, and avoidance.

Implementation Details

Constructing the Vehicle

We added wiring for drive-by-wire capability, 3D printed a sensor chassis, created a power budget, and wired everything to a single battery using multiple buck-boost converters.

Power Budget
3D Print of Electronics Chassis

Drive-by-Wire Communication

This is one of the most crucial components, since it ties the whole system together. Throttle and steering values from the remote control are read by the onboard microcontroller. If the vehicle is in RC mode, these values are passed straight through to the car’s servo and motor. If the vehicle is in autonomous mode, the servo and motor instead receive values from the communication node.
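A minimal sketch of that mode-switching logic, written in Python for readability even though the real logic runs in microcontroller firmware; the mode constants, function name, and PWM values here are hypothetical stand-ins rather than our actual pin mapping.

```python
# Hypothetical sketch of the drive-by-wire mode mux (not the real firmware).
# RC mode: pass remote-control throttle/steering straight through.
# Autonomous mode: forward the values coming from the communication node.

RC_MODE, AUTONOMOUS_MODE = 0, 1

def select_outputs(mode, rc_throttle, rc_steering, auto_throttle, auto_steering):
    """Return the (throttle, steering) PWM values to send to the ESC and servo."""
    if mode == RC_MODE:
        return rc_throttle, rc_steering
    return auto_throttle, auto_steering

if __name__ == "__main__":
    # Example: in autonomous mode, the communication node's values win.
    throttle, steering = select_outputs(AUTONOMOUS_MODE, 1500, 1500, 1600, 1400)
    print(throttle, steering)  # -> 1600 1400
```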

Drive-by-Wire zooms

The Software Stack

ROS Modules

LiDAR

We chose a 64-channel LiDAR to determine the pose of objects in the surrounding environment. We use its 3D point cloud (as seen in the video above) both to localize our vehicle via the GTSLAM algorithm and to project 3D objects onto the 2D plane of the ground. This allows us to generate a cost map of “good” and “bad” locations to drive.
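As an illustration of the projection step, here is a minimal NumPy sketch that flattens a 3D point cloud onto a 2D grid and marks cells containing points above the ground as obstacles. The grid size, resolution, and height threshold are assumptions for the example, not our tuned parameters.

```python
import numpy as np

def pointcloud_to_costmap(points, resolution=0.1, grid_size=200, height_thresh=0.15):
    """Project an (N, 3) array of LiDAR points (x, y, z in meters, vehicle at the
    grid center) onto a 2D cost map. Cells containing points higher than
    height_thresh above the ground are marked as obstacles (cost 100)."""
    costmap = np.zeros((grid_size, grid_size), dtype=np.uint8)
    half = grid_size // 2

    # Keep only points tall enough to count as obstacles.
    obstacles = points[points[:, 2] > height_thresh]

    # Convert metric x/y coordinates into grid indices.
    cols = (obstacles[:, 0] / resolution).astype(int) + half
    rows = (obstacles[:, 1] / resolution).astype(int) + half

    # Discard points that fall outside the grid.
    valid = (rows >= 0) & (rows < grid_size) & (cols >= 0) & (cols < grid_size)
    costmap[rows[valid], cols[valid]] = 100
    return costmap

# Example: a single point 2 m ahead and 0.5 m tall becomes an obstacle cell.
grid = pointcloud_to_costmap(np.array([[2.0, 0.0, 0.5]]))
```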

We’d also like to thank Ouster for supporting our team with a LiDAR. They’ve been incredibly helpful in providing any support we’ve needed.

INS

The Inertial Navigation System (INS) publishes a custom message type, “ins”, which we convert to an odometry message, “odom”. The ROS package gmapping then performs 2D SLAM using the LaserScan and odometry messages, publishing an occupancy grid and estimating our relative position within the environment. The INS is also used in conjunction with our Ackermann steering model to determine the car’s position as it moves.
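A minimal rospy sketch of that conversion, assuming a hypothetical “ins” message with position, orientation, and velocity fields; the actual message type and field names in the Inertial Sense driver may differ.

```python
#!/usr/bin/env python
# Sketch of converting a custom "ins" message into nav_msgs/Odometry.
# The InsMsg type and its field names are hypothetical placeholders.
import rospy
from nav_msgs.msg import Odometry

odom_pub = None

def ins_callback(ins):
    odom = Odometry()
    odom.header.stamp = rospy.Time.now()
    odom.header.frame_id = "odom"        # fixed world frame
    odom.child_frame_id = "base_link"    # moving vehicle frame

    # Copy pose (position + quaternion orientation) from the INS solution.
    odom.pose.pose.position.x = ins.position.x
    odom.pose.pose.position.y = ins.position.y
    odom.pose.pose.position.z = ins.position.z
    odom.pose.pose.orientation = ins.orientation

    # Copy linear velocity; gmapping mainly needs the pose, but the full
    # message is useful for other consumers.
    odom.twist.twist.linear.x = ins.velocity.x
    odom.twist.twist.linear.y = ins.velocity.y

    odom_pub.publish(odom)

if __name__ == "__main__":
    rospy.init_node("ins_to_odom")
    odom_pub = rospy.Publisher("odom", Odometry, queue_size=10)
    # Subscriber to the INS driver topic; the message class is a placeholder:
    # rospy.Subscriber("ins", InsMsg, ins_callback)
    rospy.spin()
```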

Shout out to InertialSense for supporting us with their sensor!

URDF

Below are pictures of our Unified Robot Description Format (URDF) model and TF tree, which are used to keep track of the relative poses of all of the car’s electronics and wheels; a short example of querying the TF tree follows the figures.

URDF Capture
TF Tree
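Any node can ask the TF tree for the current transform between two links. A minimal sketch is below; the frame names “base_link” and “lidar_link” are illustrative rather than taken from our exact URDF.

```python
#!/usr/bin/env python
# Minimal sketch of querying the TF tree for a relative pose.
# The frame names "base_link" and "lidar_link" are illustrative.
import rospy
import tf

if __name__ == "__main__":
    rospy.init_node("tf_example")
    listener = tf.TransformListener()
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        try:
            # Translation (x, y, z) and rotation (quaternion) of the LiDAR
            # relative to the vehicle base, at the latest common time.
            trans, rot = listener.lookupTransform("base_link", "lidar_link", rospy.Time(0))
            rospy.loginfo("LiDAR offset: %s", trans)
        except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
            pass
        rate.sleep()
```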

The navigation stack builds local and global cost maps from the occupancy grid and the “tf” transforms, and uses a TEB (Timed Elastic Band) planner configured for an Ackermann steering model. When a navigation goal is selected, the global and local planners compute a path and publish velocity commands (as a Twist) on the cmd_vel topic. These commands are then converted to PWM values for our steering servo and drive motor. The drive-by-wire communication node subscribes to these values and sends the corresponding signals to the ESC and servo.
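The sketch below shows roughly what that conversion looks like: a node that subscribes to cmd_vel, turns the Twist’s linear and angular velocity into a steering angle using the bicycle approximation of the Ackermann model (delta = atan(L * omega / v)), and maps both onto PWM pulse widths. The wheelbase, speed limits, PWM ranges, and output topic names are assumptions for illustration, not our calibrated values.

```python
#!/usr/bin/env python
# Sketch: convert cmd_vel (Twist) into steering/throttle PWM commands.
# Wheelbase, PWM ranges, and topic names below are illustrative assumptions.
import math
import rospy
from geometry_msgs.msg import Twist
from std_msgs.msg import UInt16

WHEELBASE = 0.33       # meters, assumed 1/8-scale wheelbase
MAX_STEER = 0.5        # radians, assumed servo limit
MAX_SPEED = 3.0        # m/s, assumed top speed in autonomous mode
PWM_CENTER, PWM_RANGE = 1500, 400   # neutral pulse width and swing (microseconds)

steer_pub = throttle_pub = None

def cmd_vel_callback(twist):
    v = twist.linear.x
    omega = twist.angular.z

    # Bicycle approximation of Ackermann steering: delta = atan(L * omega / v).
    if abs(v) > 1e-3:
        steer_angle = math.atan(WHEELBASE * omega / v)
    else:
        steer_angle = 0.0
    steer_angle = max(-MAX_STEER, min(MAX_STEER, steer_angle))

    # Map steering angle and speed onto symmetric PWM pulse widths.
    steer_pwm = int(PWM_CENTER + (steer_angle / MAX_STEER) * PWM_RANGE)
    speed = max(-MAX_SPEED, min(MAX_SPEED, v))
    throttle_pwm = int(PWM_CENTER + (speed / MAX_SPEED) * PWM_RANGE)

    steer_pub.publish(UInt16(steer_pwm))
    throttle_pub.publish(UInt16(throttle_pwm))

if __name__ == "__main__":
    rospy.init_node("cmd_vel_to_pwm")
    steer_pub = rospy.Publisher("steering_pwm", UInt16, queue_size=10)
    throttle_pub = rospy.Publisher("throttle_pwm", UInt16, queue_size=10)
    rospy.Subscriber("cmd_vel", Twist, cmd_vel_callback)
    rospy.spin()
```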

Ackermann Steering

Cameras

3 Cameras working on one USB port

The current implementation of our object detection algorithm uses OpenCV. It can detect a set of predefined known objects and respond with one of four actions: forward, reverse, turn, and stop. For example, the vehicle stops when it comes into close proximity to a person. This information is published to the “cmd_vel” topic as a Twist and processed by our drive-by-wire communication node to move the car.
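A minimal version of that kind of behavior is sketched below, using OpenCV’s built-in HOG person detector as an illustrative stand-in (the post doesn’t spell out our exact detector or object set); the bounding-box height threshold used as a proximity proxy is also an assumption.

```python
#!/usr/bin/env python
# Sketch: stop the car when a detected person appears close (large in frame).
# Uses OpenCV's stock HOG person detector as an illustrative stand-in.
import cv2
import rospy
from geometry_msgs.msg import Twist

STOP_HEIGHT_FRACTION = 0.6  # assumed "close" proxy: box taller than 60% of frame

if __name__ == "__main__":
    rospy.init_node("person_stop")
    cmd_pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture(0)  # one of the three cameras
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        ok, frame = cap.read()
        if not ok:
            continue
        rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))

        cmd = Twist()
        close_person = any(h > STOP_HEIGHT_FRACTION * frame.shape[0]
                           for (_, _, _, h) in rects)
        if not close_person:
            cmd.linear.x = 1.0  # assumed cruise speed; a zero Twist means "stop"
        cmd_pub.publish(cmd)
        rate.sleep()
```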

Software and Visualization

Team:

  1. Travis Brashears, Engineering Physics
  2. Philipp Wu, MechE and EECS
  3. Malhar Patel, EECS
  4. Bradley Qu, EECS
  5. Gan Tu, CS
  6. Amanda Chung, Poli-Sci and Journalism
  7. David Yang, MechE
  8. Daniel Shen, MechE
  9. Carl Cante, MechE
  10. Andy Meyers, MechE

Thanks for all the love and support from our friends in Supernode!

Next Steps

This project is all open source and on our GitHub so anyone can join the race to autonomy! We will be periodically updating the software and showcasing this initial car. After building this prototype we will utilize what we have learned here to create other mobile robots in the near future. More info to come soon :)

We believe that the road for autonomous motion is endless. In the future, we even hope to have a fully autonomous rover on the moon. While our work is largely focused on the ground right now, we’re excited to shoot for the stars in the near future.

If you have any thoughts, suggestions or comments about what we’re working on, please feel free to contact us at trbrashears@berkeley.edu and malhar@berkeley.edu. We’re also looking for additional support (contributors and sponsors) so let us know if you’d like to get involved!

Other Cool Projects

If you’re interested in reading about our first step, check it out below.
