Building a 3D-printed robot that uses SLAM for autonomous navigation
An autonomous robot using Jetson Nano, Arduino, and the Nav2 library in ROS 2
In my previous article, I talked about SLAM (Simultaneous Localization and Mapping), how it works, and its enormous potential in various areas. It is the way robots locate themselves in an unknown area, somewhat similar to the way humans do it: they note distinctive features of the environment and combine data from their motors with data from another sensor (usually a LiDAR) to create a map and estimate their location in it. There is a lot of math behind how the algorithm works, but it becomes extremely enlightening once you understand it (check out Probabilistic Robotics if you want to go deep into the subject).
However, being a maker and having an extreme passion for building projects, I wanted to get hands-on with it, explore its various possibilities, and find ways to make something useful out of it. The first step: build a robot.
You can check out the complete technical specifications and code of the project on GitHub: https://github.com/pliam1105/3D-Printed-ROS-SLAM-Robot
The idea and some parts of the chassis design and electronics were inspired by James Bruton’s ROS robot, but I designed and built everything from scratch, as that is the main purpose of this project.
Building the robot
It was my first experience building a robot from scratch: until now I had only assembled the mechanical parts of DIY kits and then integrated the electronics. It was a new experience, but one I had been looking forward to for quite some time, since I wanted to learn how to design a chassis, make it, research the motors, encoders, batteries, regulators, and all the other electronics needed for it to work, assemble everything, and figure out all the emerging issues on my own. This process is the reason I love engineering and projects, and I use it as a learning experience.
The configuration I planned for the electronics is the following:
- An NVIDIA Jetson Nano as the main onboard computer, chosen for its capabilities, especially its GPU, since I want to later integrate a camera and add some AI capabilities.
- An Arduino Mega, responsible for controlling the motors and reading the encoder speeds. It is in frequent communication with the Jetson Nano, so the two are connected via USB serial.
- An RPLiDAR A1, which provides the sensor data the Jetson Nano needs for SLAM.
- 2 12V DC planetary gear motors, whose gear ratio (19.2:1) and mechanism provide high torque at low speed, exactly what we need.
- 2 BTS7960 motor drivers to control the motors using PWM pulses from the Arduino.
- 2 incremental rotary encoders, used to control the speed of the motors and provide the odometry data also required for SLAM.
A 12V (3S) LiPo battery powers the motors, as well as the encoders after passing through an adjustable step-down module set to 5V. I also used a power bank to power the Jetson Nano when it’s on the move, and the Arduino is powered via USB from the Jetson Nano.
Finding the parts was a bit of a hassle. I was able to find the encoders, batteries, and all the other electronics with a small delivery time, but the motors needed to come from the UK. Due to Brexit and the resulting changes in customs, they ended up arriving (the second set of motors, after the first was lost on the way) a month later; not ideal.
In the meantime, I took up Fusion 360 for the component design. I got used to the interface and workflow (with the sketches, components, and various tools) quite quickly, as it was intuitive, with some tips from online tutorials.
I started designing the motor and encoder mounts, the platforms, the gears, and the wheels, for each side of the robot (it was the same design rotated 180° horizontally). I then combined them with aluminum extrusions, added caster wheel mounts (for the robot to not tip over), and a platform higher up, where the LiDAR and all the electronics would be placed, connected with 4 parts to the extrusions below.
During that process, I used calipers and the online specifications of the parts to note their dimensions and be able to make an accurate design that can fit together perfectly. It thankfully ended up as intended (except for the voltage regulator holes), as confirmed during the assembly.
After finishing the design, it was finally time for my 3D printer to become useful. I printed all the parts in about 5 days (maybe even a week), using a little less than 1 kg of PETG filament (plus the wheels, which had to be reprinted since I had made an alignment error in Fusion). The motors still hadn’t arrived at that point, so in the meantime I ordered the screws, nuts, rods, shafts, and spacers the assembly needed (which took a couple of orders, since I realized I needed more than I thought). Finally, the motors arrived, and it was time for the assembly.
I assembled all the mechanical parts, and put on the electronics (twice, since the first time I put the top platform on the wrong side, and realized the holes were wrong). I then wired them all together, which needed quite some soldering, but I finally completed the hardware part and was ready to move on to the coding.
Controlling the motors and encoders
First, I needed to make the Arduino measure the encoder speeds and drive the motors at speeds specified at runtime. I started by figuring out how to send PWM (Pulse Width Modulation, a way to simulate analog signals on digital devices) pulses to the motor drivers to control the motors’ speed (really their voltage for now; we’ll get to exact speed control in a moment).
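As a rough sketch (the pin numbers are placeholders rather than my actual wiring, and the driver’s enable pins are assumed to be tied high), driving one motor through a BTS7960 looks like this:

```cpp
// Minimal sketch for one motor on a BTS7960 driver.
const int RPWM = 5;  // PWM pin for one rotation direction (placeholder)
const int LPWM = 6;  // PWM pin for the other direction (placeholder)

void setup() {
  pinMode(RPWM, OUTPUT);
  pinMode(LPWM, OUTPUT);
}

// pwm in [-255, 255]: the sign selects the direction, the magnitude the duty cycle
void driveMotor(int pwm) {
  if (pwm >= 0) { analogWrite(LPWM, 0); analogWrite(RPWM, pwm); }
  else          { analogWrite(RPWM, 0); analogWrite(LPWM, -pwm); }
}

void loop() {
  driveMotor(128);  // roughly 50% duty cycle in one direction
}
```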
I also learned how encoders work, and implemented a way to measure their speed based on the pulses of two pins: A and B. A visual, intuitive explanation is provided in the image below; in code, it translates to attaching an interrupt to the event that pin A changes its signal from LOW to HIGH (rising signal = pulse) and checking the B signal to determine the direction.
To measure the speed, we just divide the number of pulses in a specific time interval by that interval, and convert it from pulses/millisecond to meters/second. That last part required a conversion from meters to pulses, which I found using a measuring tape: I moved the robot a known distance and checked the number of pulses recorded (a value I later adjusted due to inconsistencies between the measured speed and the PID controller’s intended speed).
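Here is a minimal sketch of that measurement for one encoder; the pin numbers and the pulses-per-meter constant are placeholders (the calibrated values live in the repo):

```cpp
// Incremental encoder reading on an Arduino Mega (pins 2/3 are interrupt-capable).
const int ENC_A = 2;                     // placeholder pin
const int ENC_B = 3;                     // placeholder pin
const double PULSES_PER_METER = 3000.0;  // placeholder, found with a tape measure

volatile long pulseCount = 0;

void onRisingA() {
  // On a rising edge of A, the level of B tells us the rotation direction
  if (digitalRead(ENC_B) == HIGH) pulseCount++;
  else                            pulseCount--;
}

void setup() {
  Serial.begin(115200);
  pinMode(ENC_A, INPUT_PULLUP);
  pinMode(ENC_B, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(ENC_A), onRisingA, RISING);
}

void loop() {
  static long lastCount = 0;
  static unsigned long lastTime = millis();
  unsigned long now = millis();

  if (now - lastTime >= 50) {            // measure every 50 ms
    noInterrupts();
    long count = pulseCount;             // atomic copy of the ISR counter
    interrupts();

    double meters = (count - lastCount) / PULSES_PER_METER;
    double speed = meters / ((now - lastTime) / 1000.0);  // m/s
    Serial.println(speed);

    lastCount = count;
    lastTime = now;
  }
}
```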
Now we have two parts ready: controlling the motor voltage (output) and measuring the motor speed (input). To make the motors move at the desired speed at each moment, we need to connect those two. This is where the PID (Proportional, Integral, Derivative) controller comes into play. It is a closed-loop controller, that is, it adjusts the output so that the input reaches a specific setpoint. It builds the output by summing the error (setpoint - input) multiplied by a gain KP, the integral of the error (the errors accumulated over time) multiplied by KI, and the derivative of the error (the difference between the last error and the previous one, divided by the time difference) multiplied by KD.
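In equation form, with $e(t)$ the error at time $t$, the controller output is:

$$
u(t) = K_P\,e(t) + K_I \int_0^t e(\tau)\,d\tau + K_D\,\frac{de(t)}{dt}
$$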
To that end, I used the PID_v1 Arduino library and tweaked the KP, KI, and KD parameters until the motors reached the desired speed smoothly, quickly, and without deviating from it.
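The glue between the two pieces, sketched with the PID_v1 API (the gains here are placeholders, not my tuned values; input and output are the variables filled and consumed by the encoder and motor-driver sketches above):

```cpp
#include <PID_v1.h>

// One wheel's control loop. 'input' is filled from the encoder speed
// measurement and 'output' is what gets written to the BTS7960 driver.
double setpoint = 0.0;  // desired wheel speed in m/s (sent by the Jetson Nano)
double input = 0.0;     // measured wheel speed in m/s
double output = 0.0;    // PWM value in [-255, 255]
double Kp = 2.0, Ki = 5.0, Kd = 0.01;  // placeholder gains

PID speedPID(&input, &output, &setpoint, Kp, Ki, Kd, DIRECT);

void setup() {
  speedPID.SetOutputLimits(-255, 255);  // negative output = reverse direction
  speedPID.SetSampleTime(20);           // recompute every 20 ms
  speedPID.SetMode(AUTOMATIC);          // turn the controller on
}

void loop() {
  // input = <latest speed from the encoder measurement>;
  speedPID.Compute();                   // nudges 'output' toward the setpoint
  // driveMotor((int)output);           // send the result to the motor driver
}
```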
However, the speed commands from ROS don’t come in that format (left wheel speed and right wheel speed), but as linear and angular velocity components, which requires some calculations, as shown below.
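Assuming the standard differential-drive model, with $v$ the commanded linear velocity, $\omega$ the angular velocity, and $L$ the distance between the wheels (my notation, not necessarily the repo’s), the conversion is:

$$
v_{\text{left}} = v - \frac{\omega L}{2}, \qquad v_{\text{right}} = v + \frac{\omega L}{2}
$$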
Connecting Arduino with ROS
This project mainly relies on ROS (Robot Operating System) to connect its different parts. ROS is a framework that allows different components (nodes) to work together, where each component provides a different piece of functionality. Despite its name, it is not exactly an operating system; it runs on top of Ubuntu (I used ROS 2 Foxy and Ubuntu 20.04, installed on both the Jetson Nano and my PC).
Communication between the Arduino and the Jetson Nano
In order to connect the Arduino with the rest of the system, I needed to make a node (called arduino_serial) that transfers data, via serial, between the Arduino and the Jetson Nano. Because we want 2-way data communication, and it works with buffers of bytes, we need two things: a protocol, and a way to synchronize the data transfer between the two devices.
For the protocol part, I used the SerialTransfer library on the Arduino side and its pySerialTransfer Python counterpart on the Jetson Nano, which both convert variables and containers (like structs) to byte buffers and ensure accurate data transfer over serial.
However, synchronizing the data transfer between two devices with different cycle rates was a bit of a problem, as we can’t do it asynchronously (unless we had two serial ports). After trying out different methods, I settled on the following approach, which worked:
- At the start of both the Arduino and the Jetson Nano programs, they perform a handshake: the Jetson Nano sends a message (retrying every second) and waits for a response from the Arduino.
- At every cycle of the Jetson Nano, I first send the velocity commands, then wait to receive the encoder speeds from the Arduino.
- On the Arduino side, I first wait to receive the velocity commands, then send the encoder speeds.
One key point that made this work is to always send data, even when there is nothing new (just resend the last available values); otherwise the received data stop matching up with the data that was sent.
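On the Arduino side, the wait-then-reply cycle looks roughly like the sketch below; the struct layout is illustrative only (the real message format and the handshake live in the repo):

```cpp
#include <SerialTransfer.h>

SerialTransfer transfer;

// Packed structs keep the byte layout predictable for the Python side.
struct __attribute__((packed)) VelCmd  { float left; float right; };
struct __attribute__((packed)) EncData { float left; float right; };

VelCmd cmd = {0.0f, 0.0f};
EncData enc = {0.0f, 0.0f};

void setup() {
  Serial.begin(115200);
  transfer.begin(Serial);
}

void loop() {
  // 1. Wait for a velocity command packet from the Jetson Nano
  if (transfer.available()) {
    transfer.rxObj(cmd);

    // ... use cmd.left / cmd.right as the PID setpoints and put the
    //     latest measured wheel speeds into enc ...

    // 2. Always reply with the most recent encoder speeds,
    //    even if they haven't changed since the last cycle
    uint16_t size = 0;
    size = transfer.txObj(enc, size);
    transfer.sendData(size);
  }
}
```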
Publishing and receiving data on the Jetson Nano
The arduino_serial node, which runs on the Jetson Nano and talks with the Arduino, needs to communicate with the rest of the system, and in ROS this is done using topics. A node can publish or subscribe to a topic, to send or receive data accordingly. In this case, the node needs to subscribe to the cmd_vel topic to receive velocity commands (using the format described above) and publish to the odom topic to send odometry data (estimated velocity and position/orientation).
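Here is that topic wiring, sketched as a minimal ROS 2 node (written in C++ for illustration; the actual node in the repo is Python, since it uses pySerialTransfer):

```cpp
#include <cmath>
#include <rclcpp/rclcpp.hpp>
#include <geometry_msgs/msg/twist.hpp>
#include <nav_msgs/msg/odometry.hpp>

// Sketch of the arduino_serial node's topic interface.
// The serial I/O itself is left out (see the repo for the real logic).
class ArduinoSerialNode : public rclcpp::Node {
public:
  ArduinoSerialNode() : Node("arduino_serial") {
    cmd_sub_ = create_subscription<geometry_msgs::msg::Twist>(
        "cmd_vel", 10,
        [this](geometry_msgs::msg::Twist::SharedPtr msg) {
          // Convert msg->linear.x and msg->angular.z to wheel speeds
          // (formulae above) and forward them to the Arduino over serial.
          (void)msg;
        });
    odom_pub_ = create_publisher<nav_msgs::msg::Odometry>("odom", 10);
  }

  // Called whenever new encoder data has been integrated into a pose estimate
  void publishOdometry(double x, double y, double yaw, double v, double w) {
    nav_msgs::msg::Odometry odom;
    odom.header.stamp = now();
    odom.header.frame_id = "odom";
    odom.child_frame_id = "base_link";
    odom.pose.pose.position.x = x;
    odom.pose.pose.position.y = y;
    odom.pose.pose.orientation.z = std::sin(yaw / 2.0);  // yaw-only quaternion
    odom.pose.pose.orientation.w = std::cos(yaw / 2.0);
    odom.twist.twist.linear.x = v;
    odom.twist.twist.angular.z = w;
    odom_pub_->publish(odom);
  }

private:
  rclcpp::Subscription<geometry_msgs::msg::Twist>::SharedPtr cmd_sub_;
  rclcpp::Publisher<nav_msgs::msg::Odometry>::SharedPtr odom_pub_;
};

int main(int argc, char **argv) {
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<ArduinoSerialNode>());
  rclcpp::shutdown();
  return 0;
}
```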
Also, ROS has something called transforms, which convert data (points in one coordinate frame) into another coordinate frame. To achieve localization and mapping, 3 core frames are used:
- map is the frame that represents the real-world environment of the robot, which is supposed to be stationary
- odom is the frame that stays fixed with respect to the starting position of the robot and is used to depict the robot's odometry location estimate
- base_link is the frame that represents the robot's position
The map to odom transform is computed by the localization node, which I will explain later on; it basically depicts the alignment between the inaccurate (due to drift) odometry data and the sensor data over time.
What I needed to implement myself is the odom to base_link transform, which represents the difference between the estimated current position and orientation and the initial ones.
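Broadcasting that transform from the integrated odometry pose looks roughly like the following C++/tf2 sketch (again, the real node is Python; the structure is the same, but the names and values here are illustrative):

```cpp
#include <cmath>
#include <rclcpp/rclcpp.hpp>
#include <tf2_ros/transform_broadcaster.h>
#include <geometry_msgs/msg/transform_stamped.hpp>

// Broadcast odom -> base_link from the integrated odometry pose (x, y, yaw).
void broadcastOdomTf(const rclcpp::Node::SharedPtr &node,
                     tf2_ros::TransformBroadcaster &broadcaster,
                     double x, double y, double yaw) {
  geometry_msgs::msg::TransformStamped t;
  t.header.stamp = node->now();
  t.header.frame_id = "odom";       // fixed at the robot's starting pose
  t.child_frame_id = "base_link";   // the robot itself
  t.transform.translation.x = x;
  t.transform.translation.y = y;
  // Planar robot: only yaw matters, so build the quaternion directly
  t.transform.rotation.z = std::sin(yaw / 2.0);
  t.transform.rotation.w = std::cos(yaw / 2.0);
  broadcaster.sendTransform(t);
}

int main(int argc, char **argv) {
  rclcpp::init(argc, argv);
  auto node = std::make_shared<rclcpp::Node>("odom_tf_broadcaster");
  tf2_ros::TransformBroadcaster broadcaster(node);
  // In the real node this runs every time new encoder data arrives
  broadcastOdomTf(node, broadcaster, 0.0, 0.0, 0.0);
  rclcpp::spin(node);
  rclcpp::shutdown();
  return 0;
}
```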
Therefore, the two aspects that we need to derive from the encoders’ speeds (acquired from the Arduino) are: linear & angular velocity and position & orientation.
I compute the (linear, angular) velocity components using the following formulae:
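These are the standard differential-drive relations, with $v_{\text{left}}$, $v_{\text{right}}$ the wheel speeds and $L$ the distance between the wheels (my notation):

$$
v = \frac{v_{\text{right}} + v_{\text{left}}}{2}, \qquad \omega = \frac{v_{\text{right}} - v_{\text{left}}}{L}
$$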
I also time-integrate the left and right wheel speeds to get the estimated position & orientation using the following formulae (applied at each time step with new encoder data):
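One common form of that update, with $\Delta t$ the time since the last encoder reading and $v$, $\omega$ computed from the formulae above:

$$
\theta_{k+1} = \theta_k + \omega\,\Delta t, \qquad
x_{k+1} = x_k + v\cos\theta_k\,\Delta t, \qquad
y_{k+1} = y_k + v\sin\theta_k\,\Delta t
$$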
I tested the velocity command transmission and the odometry pose estimation by driving the robot around the room with teleop_twist_keyboard and checking the drift of the LiDAR data (which we will talk about below) with respect to a fixed coordinate frame (odom), visualized with RViz 2 (from ROS 2 on my PC). This confirmed that the communication between the Jetson Nano and the Arduino, and the transmission of the correct data, was working, so I could move on to making SLAM work with the robot.
Integrating Simultaneous Localization and Mapping
This is the part where everything above, along with the LiDAR, comes together to map an area and localize the robot in it. But first, we need to integrate the LiDAR with ROS. Thanks to the RPLiDAR ROS package, this is as easy as launching a node and viewing the messages on the scan topic, which we can also visualize in RViz 2.
However, before we get to SLAM, we need to set up a few more things. As mentioned above, ROS coordinates work using transforms, which allow us to convert coordinates from one frame to another. This can also be done between robot components, and it is the basis of the Robot State Publisher and Joint State Publisher nodes. The robot state publisher uses joints to describe the relationship between different robot components, e.g. the translation & rotation to get from the base_link frame (the robot center) to the laser frame (which the LiDAR data use). We describe these joints in a URDF (Unified Robot Description Format) file (basically an extended XML file) and publish them using the robot_state_publisher node. We also publish the joint states (e.g. the orientation of the wheels) with the joint_state_publisher node (and we can also change them using a GUI).
We can finally implement SLAM on the robot. To do that, we use slam_toolbox, which subscribes to the respective topics and transforms, combines the odometry and sensor data, and creates a map of the area while estimating the location of the robot in it.
We also add a bonus to this project: making the robot navigate autonomously to specific goals/points on the resulting map. This is done using the navigation2 (Nav2) package, which contains various nodes for the different functions required for navigation.
For that purpose, a node first takes the map computed before (which is also updated constantly) and creates another one, a costmap, representing how risky or safe it is to traverse each area (also accounting for the robot’s width), in the form of a cost function. Then a planner node takes that costmap and computes the most efficient path to the goal, and a controller node converts that path into instructions for the robot, in the form of the velocity commands described above. There are also various recovery behaviors to get the robot out of an uncertain or risky situation. All of the above are updated each time a new map and location estimate is computed.
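As an illustration of that interface, a navigation goal can also be sent programmatically through Nav2’s NavigateToPose action (a minimal sketch with a placeholder target; goals can just as well be sent from RViz 2):

```cpp
#include <rclcpp/rclcpp.hpp>
#include <rclcpp_action/rclcpp_action.hpp>
#include <nav2_msgs/action/navigate_to_pose.hpp>

using NavigateToPose = nav2_msgs::action::NavigateToPose;

// Ask Nav2 to drive the robot to a point expressed in the map frame.
int main(int argc, char **argv) {
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("send_goal");
  auto client =
      rclcpp_action::create_client<NavigateToPose>(node, "navigate_to_pose");

  client->wait_for_action_server();

  NavigateToPose::Goal goal;
  goal.pose.header.frame_id = "map";   // goals live in the map frame
  goal.pose.header.stamp = node->now();
  goal.pose.pose.position.x = 1.0;     // placeholder target, in meters
  goal.pose.pose.position.y = 0.5;
  goal.pose.pose.orientation.w = 1.0;  // facing along +x

  auto goal_future = client->async_send_goal(goal);
  rclcpp::spin_until_future_complete(node, goal_future);

  rclcpp::shutdown();
  return 0;
}
```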
Robot in action
Here is a GIF of the robot mapping and navigating around my room, to specific goals that I send to it from my PC:
Conclusion
Here’s a summary of what I’ve accomplished during the implementation of this project:
- Designed and built the robot chassis from scratch, and wired all the electronics
- Figured out how to control the motors & measure the encoder speeds, calibrated the encoders from pulses to meters, and tuned a PID controller to achieve constant speeds on demand
- Implemented 2-way serial communication between Arduino and Jetson Nano, and converted encoder speeds to the desired odometry position & velocity estimations
- Learned about ROS and created a node to integrate the above functionality with the rest of the system
- Integrated the LiDAR node, set up the robot & joint state publishers, and integrated the SLAM and navigation nodes to achieve autonomous navigation.
This is the first step to making a robot that can accomplish useful tasks in various aspects of our lives. SLAM is a foundation that all the other functions of a robot depend on, as the robot needs to know where it is. Thus, the next step is for me to explore these useful purposes that a robot can have, and apply them to this project. I am thinking of adding a camera onto the robot and utilizing the enhanced AI capabilities of the Jetson Nano, possibly with some manipulator and end effector to interact with the environment around it.
You can check out my monthly updates (including my work on the next part of this project) in my newsletter: https://panagiotisliampas.substack.com/