Designing a Home Service Robot (Pt 1)

Gene Foxwell
Coinmonks
8 min read · Jul 10, 2018


MARVIN

Overview

This is the start of a series of articles in which I will cover what could loosely be described as my design philosophy for building home robots. In keeping with the other articles I have provided on this blog, I will use my own robot “MARVIN” as the primary example. I believe, however, that these design principles can be extended and applied to any home robotics project you would like to undertake, so please don’t feel constrained by my example.

The basic principles of a good home service robot (in my opinion) are as follows:

  1. Usefulness — it should provide a useful service to the user in either a physical or emotional capacity.
  2. Modularity — any part should be easily replaceable, and when appropriate easily swapped out for an equivalent component.
  3. Controllability — while ideally the robot should be able to behave autonomously, it should be under the user’s control at all times. This means it only performs tasks it is sure the user has requested or authorized in advance, and it provides a mechanism to stop its current action.
  4. Transparency — it must be clear via design, behavior, or both that this device is a machine. That is, there should be no attempt by the designer to make the robot appear to be anything other than a robot.

I do not insist that these are the only design principles one could follow for a home robotics platform; they are merely my own. They are derived from the basic premise that a robot should serve the needs of the human it “belongs” to, and while doing so cause as little harm as possible. As it’s not yet practical to simply endow a robotic creature with the Three Laws of Robotics, these problems have to be solved by way of design.

The rest of this article will be split into two parts. In the part that follows I will look at applying the first two principles delineated here — Usefulness and Modularity — to the design of a home robot (MARVIN in my case). In the second part I will complete the exercise by looking at Controllability and Transparency.

That’s the plan, so let’s get started!

Usefulness

Before deciding what features the robot should have it helps to ask:

what can a simple robot do that could be helpful for people?

For my purposes I suggest two primary use cases (others exist):

  1. It could serve as a smart table — delivering and transporting objects around the user’s house as needed. This application would be helpful for people who have mobility issues, or, as has been seen in a few select cafés around the world, as a delivery device for food and beverages in restaurants that have limited staffing options.
  2. Telepresence — a home robot could also serve as a telepresence device, allowing the owner’s friends and family to communicate in a meaningful way over long distances. This is especially helpful in retirement homes, where the elderly may live far away from their friends and family but still wish to interact. A robot provides a more dynamic interaction (in my opinion) than a simple Skype call.

So what do we need to have on our robot to satisfy the usefulness constraint?

Well, if we are looking at building a “smart table”, the simplest solution is to give the robot a table-like surface that the owner can place objects on. This is a simple constraint to satisfy — we just need a flat surface on the top of the robot.

We want the robot to be able to transport the objects on it, so we need a method of localization. Moving around requires a rich enough input to navigate the environment and possibly follow the user around. We’ll provide that input using a 3D camera.

In addition to being able to move around, it would be helpful for the robot to understand its environment in some respect. This would make it easier for a person to direct the robot to different locations within a home or business environment. To solve that we can use a SLAM package, allowing the robot to map its environment and then refer back to that map when executing user requests.
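To make “refer back to that map” a little more concrete, here is a minimal sketch of how a node might look up where the robot currently is on that map. It assumes ROS 1 with the tf package and the conventional map and base_link frame names; those specifics are my assumptions rather than anything stated above.

```python
#!/usr/bin/env python
# Minimal sketch: look up the robot's pose in the SLAM map frame.
# Assumes ROS 1, a running SLAM node broadcasting the "map" -> "base_link"
# transform, and those conventional frame names (all assumptions).
import rospy
import tf

rospy.init_node('where_am_i')
listener = tf.TransformListener()
rate = rospy.Rate(1.0)

while not rospy.is_shutdown():
    try:
        # Translation is (x, y, z) in metres, rotation is a quaternion.
        trans, rot = listener.lookupTransform('map', 'base_link', rospy.Time(0))
        rospy.loginfo("Robot is at x=%.2f y=%.2f in the map frame", trans[0], trans[1])
    except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
        pass  # transform not available yet
    rate.sleep()
```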

What about telepresence? For this application we have a few additional requirements. For telepresence to work, the operator needs to be able to hear and see what the robot sees. The previous requirements provide us with a camera, a method of movement, and mapping functionality (so the user can keep track of where the robot is). All that’s missing is a microphone, a screen to broadcast an image, and some speakers. We can solve all of these problems by simply allowing the user to place their cellphone into a cradle attached to the robot and providing a web app they can turn on when it’s time for telepresence.
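To give a feel for the web-app side of this, below is a rough sketch of one way the robot’s camera could be exposed to a remote viewer as a simple MJPEG stream. It assumes ROS 1 with cv_bridge, OpenCV, and Flask, plus a camera topic named /camera/rgb/image_raw; all of those specifics are assumptions on my part, not part of the design above.

```python
#!/usr/bin/env python
# Telepresence sketch: re-serve the robot's camera topic as an MJPEG
# stream a browser can open. Topic name, port, and the use of Flask are
# assumptions for illustration only.
import cv2
import rospy
from cv_bridge import CvBridge
from flask import Flask, Response
from sensor_msgs.msg import Image

app = Flask(__name__)
bridge = CvBridge()
latest_jpeg = None

def on_image(msg):
    global latest_jpeg
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    ok, buf = cv2.imencode('.jpg', frame)
    if ok:
        latest_jpeg = buf.tobytes()

def mjpeg():
    while True:
        if latest_jpeg is not None:
            yield (b'--frame\r\nContent-Type: image/jpeg\r\n\r\n'
                   + latest_jpeg + b'\r\n')
        rospy.sleep(0.05)

@app.route('/video')
def video():
    return Response(mjpeg(), mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    rospy.init_node('telepresence_stream', disable_signals=True)
    rospy.Subscriber('/camera/rgb/image_raw', Image, on_image)
    app.run(host='0.0.0.0', port=8080)
```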

Using only the usefulness criterion, then, we already have a lot of different requirements for our design — I’ll put it all together into a diagram for the case of the MARVIN robot example:

Modularity

Usefulness guides what capabilities and basic components our robot will need. Modularity helps us determine exactly what hardware and software we should use to meet those requirements. Our guideline here should be to choose hardware and software that are “loosely coupled” to each other — that is, we want as few hard dependencies as we can manage.

This won’t be the case for all robotics applications — there are many applications where high speed processing, or being as close to 100% reliable, or high precision are the dominating factors. I find these are less important in the slow paced world of the home robot where at most the machine should move at the pace of a some what fast walking average human.

To this end, I feel ROS provides a good framework for home robotics software. It allows for the software components to be modeled as nodes that only have a passing knowledge of the other nodes on the system via communication over ROS topics. If you have not yet had any experience with ROS I suggest you check out their beginner tutorials to get a feel for what I mean here.
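To give a feel for that loose coupling, here is a minimal sketch of a publisher node: it only knows a topic name and a message type, and it neither knows nor cares which nodes (if any) are listening. The topic name here is made up purely for illustration.

```python
#!/usr/bin/env python
# Loose coupling in ROS: this node publishes on a topic without any
# knowledge of its consumers. The topic name is hypothetical.
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher('/marvin/status', String, queue_size=10)
    rospy.init_node('status_reporter')
    rate = rospy.Rate(1)  # 1 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data='battery ok, idle'))
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
```

Any other node can subscribe to /marvin/status with `rospy.Subscriber('/marvin/status', String, callback)`; either side can be replaced without touching the other.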

ROS provides the robot with packages for navigation and mapping that are reasonably easy to configure and swap out. For the example robot, MARVIN, I’ve made use of the standard navigation stack for controlling the robot’s movement, and the RTAB-Map package for real-time mapping of the robot’s environment.
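As a sketch of how a user request might turn into motion on that map, the snippet below sends a single goal to the navigation stack’s move_base action server. The frame name and coordinates are placeholders for a real waypoint.

```python
#!/usr/bin/env python
# Sketch: ask the navigation stack (move_base) to drive to a point on
# the map built by the SLAM package. Coordinates are placeholders.
import actionlib
import rospy
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_kitchen_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0   # hypothetical "kitchen" waypoint
goal.target_pose.pose.position.y = 1.5
goal.target_pose.pose.orientation.w = 1.0  # face along the map's x axis

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo("Navigation finished with state %d", client.get_state())
```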

If we are going to use ROS then we will want an onboard computing unit of some form. Since I am also looking to use some deep learning models to recognize the user and follow them around (among other possibilities), I have decided on the Jetson TX2. Other computing units could be used here as well — the Raspberry Pi is a popular low-cost unit you could use if you aren’t looking to do too much computationally heavy work. A laptop could also be worked into the robot.
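The deep-learning follower itself is beyond the scope of this article, but the sketch below shows the general shape of a “follow me” node, using OpenCV’s classical HOG person detector as a lightweight stand-in for a learned model. Topic names and control gains are assumptions.

```python
#!/usr/bin/env python
# Person-following sketch. MARVIN's design calls for deep-learning models
# on the TX2; this uses OpenCV's classical HOG person detector as a
# stand-in. Topic names and gains are assumptions.
import cv2
import rospy
from cv_bridge import CvBridge
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
bridge = CvBridge()

def on_image(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(boxes) == 0:
        return  # nobody detected, do nothing
    # Steer toward the horizontal centre of the first detected person.
    x, y, w, h = boxes[0]
    error = (x + w / 2.0) - frame.shape[1] / 2.0
    cmd = Twist()
    cmd.linear.x = 0.2              # creep forward slowly
    cmd.angular.z = -0.002 * error  # proportional turn toward the person
    cmd_pub.publish(cmd)

rospy.init_node('follow_me')
cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
rospy.Subscriber('/camera/rgb/image_raw', Image, on_image)
rospy.spin()
```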

Onboard computers need power. From a Modularity standpoint this is actually a challenging concern to meet. Each system will have its own power requirements and its own space constraints, and typically these will not be “easily” swappable without a serious rethinking of the robot’s requirements. To simplify MARVIN, I’ve chosen to solve this problem with an off-the-shelf battery pack from Talent. This connects via a barrel plug directly to the TX2 to provide power. While I can swap it out for another equivalent power source with the right size of barrel plug, this component remains the least flexible part of the entire robot.

Note: Theoretically, I could try to draw power from the iRobot Create’s power supply; however, this would require a custom setup that would almost certainly not be easily swappable for a new component. Furthermore, it would tightly couple the robot’s design to the iRobot itself, which is the opposite of what I am trying to achieve with the Modularity principle.

As mentioned in the previous section, we want to use a 3D camera to allow our robot to navigate its environment. We could also have chosen a LIDAR scanner for this purpose. The advantage of a 3D camera is that we can use it both as a standard camera (which the telepresence application needs) and, via a ROS package, as a source of laser scans converted from its depth output. This reduces the total cost of the robot and the number of components we need, at the tradeoff of doing a bit more computing on the onboard computer.
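In practice an existing package (for example depthimage_to_laserscan) handles that conversion; the sketch below shows the idea in miniature by publishing the middle row of the depth image as a rough laser scan. Topic names, the depth encoding, and the camera’s field of view are assumptions, and a real converter would also correct for the projection geometry.

```python
#!/usr/bin/env python
# Illustration of turning a 3D camera's depth image into a fake laser
# scan (a package such as depthimage_to_laserscan does this properly).
# Topic names, encoding (assumed 32FC1, metres) and FOV are assumptions.
import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image, LaserScan

bridge = CvBridge()
FOV = 1.05  # assumed horizontal field of view in radians (~60 degrees)

def on_depth(msg):
    depth = bridge.imgmsg_to_cv2(msg)                 # depth in metres
    row = np.asarray(depth[depth.shape[0] // 2, :], dtype=np.float32)

    scan = LaserScan()
    scan.header = msg.header
    scan.angle_min = -FOV / 2.0
    scan.angle_max = FOV / 2.0
    scan.angle_increment = FOV / max(len(row) - 1, 1)
    scan.range_min = 0.45
    scan.range_max = 8.0
    # Rough: treats forward depth as range along each ray.
    scan.ranges = row.tolist()
    scan_pub.publish(scan)

rospy.init_node('depth_to_scan_sketch')
scan_pub = rospy.Publisher('/scan', LaserScan, queue_size=1)
rospy.Subscriber('/camera/depth/image_raw', Image, on_depth)
rospy.spin()
```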

There are a few options on the market for 3D cameras, and it’s a bit out of the scope of this article to review them all (I may do this in a future article). A few notable examples are the Microsoft Kinect, Intel RealSense, ZED, and the Orbbec Astra. For MARVIN I chose the Orbbec Astra to satisfy this requirement: it has a good maximum range (which is important as it also serves as the laser-scan source), a reasonable resolution, and a fair price point. As a bonus, it also has a built-in microphone, which will allow the robot to hear spoken commands without a cellphone cradled in it first.

We still need hardware to actually move the robot around. Theoretically we could build our own drive system, and if that strikes your fancy, feel free to do so. For my purposes, however, it was far simpler to use the iRobot Create 2 as the drive system. This handy little low-cost robot provides its own wheel encoders, drive system, and basic sensors for obstacle avoidance. It also exposes a very simple USB serial interface, which means that when needed it can be swapped out for either a new and improved custom drive system or a better version of the same robot.
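For a sense of how simple that serial interface is, here is a minimal sketch that drives the Create 2 forward for a couple of seconds using the published Open Interface opcodes. The serial port name and the speeds are assumptions.

```python
#!/usr/bin/env python
# Minimal sketch of driving the iRobot Create 2 over its USB serial
# interface using the Open Interface opcodes (Start=128, Safe=131,
# Drive Direct=145). Port name and speeds are assumptions.
import struct
import time

import serial

port = serial.Serial('/dev/ttyUSB0', baudrate=115200, timeout=1)

port.write(struct.pack('B', 128))   # Start: enter Passive mode
port.write(struct.pack('B', 131))   # Safe mode: accept actuator commands
time.sleep(0.1)

# Drive Direct: right and left wheel velocities in mm/s as signed
# 16-bit big-endian values -- here a gentle forward crawl.
port.write(struct.pack('>Bhh', 145, 100, 100))
time.sleep(2.0)

port.write(struct.pack('>Bhh', 145, 0, 0))  # stop
port.close()
```

In the actual robot, a ROS driver node for the Create 2 wraps this interface and exposes the usual velocity-command topic, which is what keeps the drive base swappable.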

This accounts for the major components that will be needed for the robot, but how can we connect them together? Hardwiring all the components together does not seem ideal, as it would make it more difficult to update the design later. Fortunately, all of the hardware chosen so far (with the exception of the battery) can communicate via a USB port. Thus, the Modularity principle can be honored by simply using a USB hub to connect all the components to the onboard computer. (The Jetson TX2 development board only has one standard USB port; your onboard computer may not have this restriction.)

Summary

Applying the first two principles, we have the start of a fairly modular robot that could potentially perform a useful service. It can sense its world, create a map, and has sufficient hardware to respond to user inputs. There is still some work to do, however, to come up with a complete home robotics solution, and that’s where the final two principles come into play.

Next week I’ll complete this robotics design exercise by applying to the final two principles to derive the MARVIN robot’s prototype specification. Until then,

Share and Enjoy!

These principles are of course of my own devising, and I am open to suggestions, refutations, or extensions of them as people read through these articles.
