How to Build a Mobile Robot Platform

Alexander Savinkin · Published in GeekForge.Academy
Mar 6, 2019 · 10 min read

What is your background? Can you please tell us a little bit about your career path?

I am pursuing a bachelor’s in Engineering Product Development, with a specialization in robotics, and a Master of Science in Technology Entrepreneurship at the Singapore University of Technology and Design (SUTD).

But I’ve forged my own path beyond academics by self-teaching and pursuing robotics and computer vision development in my own free time. I do this by preparing tutorials and notes and building projects to really lock in the concepts. I also contribute actively to the open-source community, sharing my tutorials and writing clients and APIs.

I’ve been serving as President of the SUTD Organisation of Autonomous Robotics (SOAR) for the better part of two years, building the club almost from the ground up and instituting sustainable practices and programs.

I’ve been hired to train engineers in the Robot Operating System (ROS) framework, and I implemented autonomous wheelchairs and other sensor and IoT solutions for the Government Technology Agency of Singapore for some time. I’m also currently building up some startups while keeping my club afloat.

What is your current specialization?

I have been mainly focusing on autonomous ground vehicles (AGVs) with ROS and computer vision. But of course, since those specialties actually span a very wide array of disciplines, my actual range of specialties is quite broad, covering hardware, electronics, and software. I have a fair bit of familiarity with the ROS framework and I know my way around its standard navigation stack and mapping algorithms.

A lot of my time has been spent writing tutorials, workshops, and courses for general programming, ROS, and computer vision (particularly with OpenCV). I usually do this not only to help grow teams and keep my club sustainable, but also to use as reference material to keep me refreshed as I constantly hop between projects that might use very different disciplines and frameworks.

But I do practical work too, of course! My two most recent major projects have been:

1. MOMObot, a MObile, MOdular service AGV.

My team and I built this robot platform entirely from scratch (that means mechanical designs, fabrication, electronics, and software!), and I implemented the ROS navigation stack on it.

It can handle crowded, dynamic environments (like an exhibition) and weaves through crowds fairly robustly. It has all the standard bells and whistles of an AGV: a fairly robust navigation and localization stack, well-tuned motor controls, and a middling payload capacity of about 80 kg.

(I also wrote custom ROS adapters for use with ultrasonic beacons that could optionally be added in for additional robust localization, and I wrote a proper tutorial for integrating them with ROS’ sensor fusion package since there wasn’t a comprehensive tutorial for that!)
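At its core, localization from ultrasonic beacons comes down to turning range readings from fixed beacons into a position estimate that the sensor fusion stack can consume. Here is a minimal 2D trilateration sketch of that idea (purely illustrative; the actual adapters wrap the beacon vendor's own solver and publish ROS messages rather than solving the geometry by hand):

```python
import math

def trilaterate_2d(b1, b2, b3, r1, r2, r3):
    """Solve for (x, y) given three beacon positions and measured ranges.

    Subtracting pairs of circle equations cancels the quadratic terms,
    leaving a 2x2 linear system we can solve directly.
    """
    (x1, y1), (x2, y2), (x3, y3) = b1, b2, b3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-9:
        raise ValueError("beacons are collinear; position is ambiguous")
    x = (c1 * a22 - c2 * a12) / det
    y = (a11 * c2 - a21 * c1) / det
    return (x, y)

# Robot at (1, 1) with beacons at three corners of a 4 m square:
beacons = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
ranges = [math.hypot(1 - bx, 1 - by) for bx, by in beacons]
print(trilaterate_2d(*beacons, *ranges))  # ≈ (1.0, 1.0)
```

In practice you'd wrap a position estimate like this in a `PoseWithCovarianceStamped`-style message and hand it to the fusion package alongside odometry.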

However, the more interesting thing about MOMO is that I actually leveraged a couple of UX tricks to enhance the experience surrounding it. I installed a screen for MOMO and wrote an emotion expression module for it, so people around it could see MOMO expressing, as well as see where MOMO intends to move, since the eyes actually follow the strafing, pivoting, and steering commands for the motor control.
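The eye-following trick is simpler than it sounds: it's essentially a mapping from the velocity command to an on-screen gaze offset. A rough sketch of that mapping (function name and scaling factors are illustrative, not MOMO's actual module):

```python
# Hypothetical sketch: map a ROS-style cmd_vel (linear x/y, angular z)
# to a normalized on-screen eye offset so the eyes "look" where the
# robot intends to move.

def eyes_from_cmd_vel(lin_x, lin_y, ang_z, max_lin=0.5, max_ang=1.0):
    """Return (x, y) eye offsets in [-1, 1].

    Horizontal gaze blends strafing (lin_y) and pivoting (ang_z);
    vertical gaze follows forward/backward motion (lin_x).
    """
    def clamp(v):
        return max(-1.0, min(1.0, v))

    x = clamp(lin_y / max_lin + ang_z / max_ang)  # strafe + pivot
    y = clamp(lin_x / max_lin)                    # forward/back
    return (x, y)

# Driving straight forward: eyes look "up" (ahead).
print(eyes_from_cmd_vel(0.5, 0.0, 0.0))  # (0.0, 1.0)
# Pivoting on the spot: eyes look toward the turn.
print(eyes_from_cmd_vel(0.0, 0.0, 1.0))  # (1.0, 0.0)
```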

I found this to be quite effective, since it immediately makes the normally internalized intentions of the robot visible: when the robot achieves what it intends to do, it feels a lot smarter than if there were no expectations of intention. Additionally, since it makes the robot look cute, the occasional slip-up feels less like the robot messing up and more like a child failing adorably!

Additionally, adding location-based sound triggers that play cute voice lines at specific spots on the robot’s map, along with putting a peach plush toy on top of the robot (momo, 桃, also means peach in Japanese), tremendously increases the robot’s appeal.
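A location-based sound trigger like this can be sketched as checking the robot's map pose against a set of trigger zones, with a latch so each voice line fires once per visit (names and geometry are illustrative, not the deployed code):

```python
import math

# Hypothetical sketch of a location-based voice-line trigger: each zone
# fires once when the robot enters its radius, then re-arms after the
# robot leaves. Coordinates are map-frame metres.

class SoundTrigger:
    def __init__(self, zones):
        # zones: {name: (x, y, radius)}
        self.zones = zones
        self.inside = set()  # zones the robot is currently in

    def update(self, x, y):
        """Return names of zones entered since the last pose update."""
        fired = []
        for name, (zx, zy, r) in self.zones.items():
            in_zone = math.hypot(x - zx, y - zy) <= r
            if in_zone and name not in self.inside:
                fired.append(name)         # entered: play the line once
                self.inside.add(name)
            elif not in_zone:
                self.inside.discard(name)  # left: re-arm the trigger
        return fired

trigger = SoundTrigger({"entrance": (0.0, 0.0, 1.0), "booth": (5.0, 0.0, 1.0)})
print(trigger.update(0.2, 0.1))  # ['entrance']
print(trigger.update(0.3, 0.0))  # []  (still inside, no re-fire)
print(trigger.update(5.0, 0.0))  # ['booth']
```

On the robot this would be driven by the localization output (the estimated pose in the map frame) at whatever rate the pose updates arrive.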

It’s really been interesting to see how simple, cheap, and low-effort solutions like these can help to increase the perceived effectiveness of your localization and navigation algorithms, especially for a service AGV. And it’s even nicer seeing it actually be deployed in exhibitions and events. We found that people were more likely to take brochures and other distributables from the robot than if they were given out by people or at booths! We intend for MOMO to eventually be able to act as an autonomous navigator.

2. Distributed People Counting

I also had the privilege of being the project director for a people-counting web service that’s built to be fully scalable and easily installable so we could market it to non-technical customers. It was quite a challenge settling the full software architecture and making sure everything was loosely coupled and modularized, and even more so with a fairly inexperienced team.

I built the system around a couple of people-counter models I’ve messed around with and tweaked, and I exposed a fair number of pre- and post-processing parameters to the end user in a simplified form. We eventually decided to leverage the Google Cloud Platform (GCP) to handle our live data reporting, databasing, and security layers. I remember having to write a few adapter clients for GCP, since a couple of functionalities weren’t implemented at the time in the client libraries we were using, along with a whole bunch of other server-side functions (like sharding request queues) to ensure our framework would remain stable when brought to scale.
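The request-queue sharding mentioned here boils down to consistently mapping each client's requests onto one of N worker queues, so load spreads across workers while per-client ordering is preserved. A minimal in-memory sketch of that idea (hypothetical names; the real system sat behind managed GCP services):

```python
import hashlib
from collections import deque

class ShardedQueue:
    """Hash each request's client key onto one of N shards so that
    requests from the same client always land on the same queue."""

    def __init__(self, num_shards=4):
        self.shards = [deque() for _ in range(num_shards)]

    def shard_for(self, client_id):
        # Stable hash so the mapping survives restarts (unlike hash()).
        digest = hashlib.sha256(client_id.encode()).hexdigest()
        return int(digest, 16) % len(self.shards)

    def enqueue(self, client_id, request):
        self.shards[self.shard_for(client_id)].append((client_id, request))

    def drain(self, shard_index):
        """Pop everything off one shard, in arrival order."""
        out = []
        while self.shards[shard_index]:
            out.append(self.shards[shard_index].popleft())
        return out

q = ShardedQueue(num_shards=4)
for i in range(3):
    q.enqueue("camera-A", f"frame-{i}")
# All of camera-A's requests land on one shard, in order:
shard = q.shard_for("camera-A")
print(q.drain(shard))
```

The same hashing idea applies whether the "queues" are in-process deques, Pub/Sub topics, or separate worker pools.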

It was a good exercise getting the team up to speed, and it was my first time properly managing a software team instead of just being a technical lead. I’m proud to say that at the end of it we have a fairly robust platform with a few clients already, and we’re hoping to continually grow it and add more features, like a proper analytics dashboard and additional quality-of-life measures to make the framework more intuitive for our clients!

What are the most important problems your customers face?

I think the main issue for most robotics, computer vision, and associated AI solutions is the fact that our clients usually aren’t going to be very technically proficient. So, building solutions to scale and to be as robust and easily maintainable as possible will always be a challenge. Additionally, learning to forecast regulatory issues and write in mitigation measures will always be a good way to gain client confidence and make your team feel more professional.

Take, for instance, the people-counting framework. We had to bake in censoring algorithms to ensure that the images we captured didn’t include any identifiable features, in order to address privacy concerns and comply with regulations.
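The censoring idea is simple in principle: once a detector has boxed a person (or face), you destroy the identifying detail inside the box before anything is stored. A pure-Python pixelation sketch on a tiny grayscale image (illustrative only; the production pipeline worked on real camera frames with a proper detector and image library):

```python
def pixelate_region(image, x0, y0, x1, y1, block=2):
    """Pixelate image rows y0..y1, cols x0..x1 in place by averaging
    block x block cells. `image` is a list of rows of grayscale values."""
    for by in range(y0, y1, block):
        for bx in range(x0, x1, block):
            cells = [(y, x)
                     for y in range(by, min(by + block, y1))
                     for x in range(bx, min(bx + block, x1))]
            avg = sum(image[y][x] for y, x in cells) // len(cells)
            for y, x in cells:
                image[y][x] = avg  # detail inside the box is destroyed

img = [[10, 20, 30, 40],
       [50, 60, 70, 80],
       [90, 100, 110, 120],
       [130, 140, 150, 160]]
pixelate_region(img, 0, 0, 2, 2)  # censor the top-left 2x2 "face" box
print(img[0][:2], img[1][:2])  # [35, 35] [35, 35]
```

The key property is that the operation is irreversible and applied before storage, so no identifiable image ever reaches the database.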

Or take MOMO: safety concerns lent further justification for making the robot’s navigational intentions easy to read and understand by the people around it. This let the robot path in an expected way and, interestingly enough, led people to begin collaboratively giving way to the robot, tangibly increasing the effectiveness of our navigation stack.

On our side, it’s very important not only to try to forecast client needs (and gotchas), but to also include the client when iterating on our designs to ensure our projects are as easy to operate as possible (and to make sure we protect our clients against any hiccups or mishandling of the projects once deployed). Working with fairly lightweight AGVs and cloud frameworks tends to be more forgiving, since if those systems fail it’s not as catastrophic as when a self-driving car fails, but it always pays to be safe.

I think the main takeaway is to ensure that you always include the client in your design process, and to ensure that the proper measures are taken to consult them on a regular basis to keep up with their needs. (Then, on your end, to write in failsafes to ensure that if the customer does manage to mess up how they’re meant to operate your product, it’ll get caught and corrected before it does any damage. That way you get usability as well as robustness out of your product!)

What is the most remarkable experience you’ve had during your professional career in this field?

I am actually fairly new to my field, so I really had to hit the ground running and pick up as many of the necessary skills as I could to get to where I am. It’s been really interesting to see that even though there’s great depth to be had in robotics applications (there’s really a huge array of research into the topic), as well as the multitude of associated disciplines (think of all the research in control systems, perception, navigation, mechanical designs, batteries, and many other fields), the only barrier to entry for getting started is truly just being able to grapple with the insane breadth of the material. The very development of the skills necessary to do robotics is a project in and of itself, and I feel that it’s something you can never fully become a master at.

That single insight was what allowed me to get started on my journey, and I tackled the various disciplines bit by bit, until everything started to link together and click. That’s the beautiful thing about a cross-disciplinary field like robotics — almost anything you learn can be applied and give some insight or additional tools in your palette to apply to a new kind of robot or a new kind of application. But it’s important to really break things down and take it all in a bit at a time so one doesn’t get overwhelmed.

I feel that it’s really great to be excited to learn more about novel frameworks, algorithms, and sensors and try to think of ways to implement them in new robots, and even more so to know that there’s going to be so much more to learn and do. This feeling of always progressing and never really reaching a plateau is what keeps me driven to keep learning and to keep enjoying every new project that comes along. And the feeling of completing a project is always worth the slog to get to that point.

I’m also very grateful and lucky to have met a mentor who helped guide me at the start of my journey (the developer of the Linorobot), and a stint at the Government Technology Agency of Singapore helped me lock in and master what I now know about ROS through working with several AGVs and creating an autonomous wheelchair. I’m also very impressed with the open-source community ROS has developed and the wealth of information available.

What are the sources of knowledge you use to improve your skills?

There’s really a lot of material to cover if you want to pick up robotics, and I’d generally advise people who want to do it to first pick up a couple of prerequisites, depending on what they intend to do. (If you want to build robots and don’t have a team, you’ll need CAD, fabrication, electronics, and software skills. If you don’t want to mess around with physical robots, you can focus on software alone and play around with ROS’ Gazebo simulation suite.)

- SOLIDWORKS/Equivalent CAD software — Look for YouTube tutorials!

- General hardware assembly or fabrication (you can outsource some of this.)

- Electronics — If you’re new to this and want to pursue robotics at a hobbyist to intermediate level, consider looking at Arduino.

- Learn about the different sensors and actuators as well! Robots are just machines that take in inputs and produce actuations. LIDARs, radars, ultrasonic beacons, motors, servos, encoders, IMUs, and more!

- Programming languages (Python and C++ in particular); I have tutorials for these.

- Linux — I have tutorials for this too!

Then you should pick a field of robotics you want to focus on and expand your horizons from there. (AGVs, perception, navigation and planning, robotic arms, swarm robotics, legged robots, etc.) I think AGVs are a good start, but pick what you’re really interested in. It’s going to be a long road, and curiosity will help a lot.

Luckily enough, there’s a very strong open-source community that is brimming with packages to use, tutorials, and support! If you ever need additional help, you can always go on the ROS forums.

For me personally, I had a good start following full-build tutorials with the Linorobot project.

But to go beyond, it’s good to master the Robot Operating System. The ROS Wiki has a good tutorial series, but I also have a fairly comprehensive tutorial, together with template starter code that will really help to get your projects up and running!

Once you do that, it really depends on what sort of subfield you’re looking at.

For perception, it’s good to start picking up computer vision, so things like:

- Convolutional neural networks in the machine-learning framework of your choice

- OpenCV — I have a tutorial for that

For AGVs and AGV navigation:

- The Linorobot tutorial should be good enough

- Looking at papers about navigation tuning helps a lot

- Exploring the alternative navigation stacks and packages in ROS helps, too (RTAB-Map is a very compelling alternative.)

- Looking into sensor fusion with Kalman filters will also help a lot in robot localisation — I have a tutorial for that

- If you want to delve deeper into a proper self-driving stack, then consider Autoware

  - Though you will also need all the knowledge gained from the perception subfield mentioned above
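To give a feel for the sensor-fusion item above: at its simplest, a Kalman filter keeps a state estimate plus an uncertainty, predicts both forward with a motion model, and corrects them with each noisy measurement. A minimal 1D sketch (real robot localisation uses the multivariate form, e.g. the EKF in ROS’s sensor fusion packages):

```python
class Kalman1D:
    """Minimal 1D Kalman filter: track a position from noisy readings."""

    def __init__(self, x0, p0, q, r):
        self.x = x0  # state estimate (e.g. position in metres)
        self.p = p0  # estimate variance
        self.q = q   # process noise (motion-model uncertainty per step)
        self.r = r   # measurement noise (sensor variance)

    def predict(self, u=0.0):
        self.x += u       # apply commanded motion (odometry step)
        self.p += self.q  # motion adds uncertainty

    def update(self, z):
        k = self.p / (self.p + self.r)  # Kalman gain: trust in the sensor
        self.x += k * (z - self.x)      # correct toward the measurement
        self.p *= (1.0 - k)             # measurement reduces uncertainty
        return self.x

kf = Kalman1D(x0=0.0, p0=1.0, q=0.01, r=0.25)
for z in [0.9, 1.1, 1.0, 0.95]:  # noisy range readings near 1.0 m
    kf.predict(u=0.0)
    est = kf.update(z)
print(round(est, 2))  # estimate pulled toward the true ~1.0 m
```

The same predict/update loop, generalised to state vectors and covariance matrices, is what lets a fusion node blend wheel odometry, IMU, and beacon fixes into one pose estimate.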

For robot arms:

- ROS has a useful library for most purposes called MoveIt!

- But for more industrial arms, consider ROS-I’s (ROS Industrial) tutorial series

- Or this git book

For swarm robotics:

- I don’t think I’m familiar enough with it to offer advice, but do look at research papers covering the field

Make sure you stay abreast of developments, and remember that most robotics companies out there use ROS only as a prototyping tool before moving on to develop proprietary stacks. Still, it’s a very good way to get started!

Good luck, and go forth and build marvellous things!

Questions asked by Alex Savinkin

Former number cruncher in investment funds & strategy consulting. One of Geekforge Founding Fathers. Blockchain and technical singularity true believer.
