And two more steps that’ll get you even further.

A robot Hand without a robot Arm is, most of the time, useless. At Shadow we have a long history of interfacing different robot arms with our software and hardware. Across the projects we’ve run over the years, we’ve written software for arms from Universal Robots, Denso, KUKA, Stäubli… We’ve also developed a few intriguing arms internally, from an arm actuated by air muscles to a lightweight arm that picks up strawberries.

Shadow’s historical muscle Arm

On this journey, we’ve learned a few things. Let me share some tips on what it takes to quickly write a good interface for a robot arm.

Using a great framework: ROS

Developing software for a robot can be a daunting task if you start from scratch. As you can see from our roadmap, deploying a useful robot means developing cutting-edge capabilities — drivers, controllers, planners, image analysis, interfaces… This is where ROS — the Robot Operating System — comes in handy.

What’s ROS — from www.ros.org

ROS is a modular framework: using the plumbing and the tools in the drawing above, it’s possible to swap in different implementations of the same capability without changing the rest of your code.

Let’s take a concrete example: say we have two different ways of computing grasp quality metrics. With ROS, you can decide to use either implementation A or implementation B at runtime — as long as they present the same interface.
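As a purely hypothetical sketch of what that looks like in practice, imagine both implementations sitting behind the same interface: a made-up grasp_quality topic publishing a std_msgs/Float64. The node that consumes the metric never needs to know which one is running.

```python
#!/usr/bin/env python
# Hypothetical sketch: two interchangeable grasp-quality implementations
# sharing one ROS interface (a "grasp_quality" topic of type Float64).
import random
import rospy
from std_msgs.msg import Float64


class EpsilonQualityNode(object):
    """Implementation A: stands in for an epsilon-style quality metric."""
    def compute(self):
        return random.uniform(0.0, 1.0)  # placeholder for the real computation


class VolumeQualityNode(object):
    """Implementation B: stands in for a volume-style quality metric."""
    def compute(self):
        return random.uniform(0.0, 1.0)  # placeholder for the real computation


if __name__ == "__main__":
    rospy.init_node("grasp_quality")
    # Pick the implementation at runtime, e.g. via a private ROS parameter.
    impl = EpsilonQualityNode() if rospy.get_param("~use_epsilon", True) else VolumeQualityNode()
    pub = rospy.Publisher("grasp_quality", Float64, queue_size=1)
    rate = rospy.Rate(10)  # publish the metric at 10 Hz
    while not rospy.is_shutdown():
        pub.publish(Float64(impl.compute()))
        rate.sleep()
```

Because subscribers only ever see the grasp_quality topic and its message type, either implementation can be swapped in without touching the rest of the system.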

In my opinion, the biggest asset of ROS is that it gathers contributions from a vibrant community. Thanks to its openness, you can — almost — always find state-of-the-art implementations of a given capability, developed by leading experts in that domain.

Combining those capabilities to form a robust system is an art.

I have an arm, where do I start?

Interfacing a new arm to ROS is actually very easy — as long as the arm has a sensible interface. To get started, you simply need to do these two things:

  • create a model of the arm — a URDF, an XML-based description. This description is used throughout the system: it contains the list of your robot’s joints, its geometry, its collision model, various pieces of information for the simulation… A minimal sketch follows this list.
A simple R2D2 visualised on http://mymodelrobot.appspot.com/
  • develop a ROS driver. This depends heavily on the interface offered by the arm you want to connect to ROS. The most basic driver will read and publish the current joint states — each joint’s name, position, velocity and torque. It will also make it possible to send commands to the different joints — for example joint position commands. A minimal sketch of such a node appears a bit further down.
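To make the model step a bit more concrete, here is a rough sketch of what a minimal URDF could look like for a made-up one-joint arm (all names and dimensions are invented), wrapped in a small Python script that loads it onto the parameter server, where tools like rviz and robot_state_publisher expect to find it. In a real project the URDF usually lives in its own .urdf or .xacro file.

```python
#!/usr/bin/env python
# Rough sketch: a minimal, made-up URDF loaded onto the parameter server.
import rospy

MINIMAL_ARM_URDF = """
<robot name="my_arm">
  <link name="base_link">
    <visual>
      <geometry><cylinder length="0.1" radius="0.05"/></geometry>
    </visual>
  </link>
  <link name="upper_arm">
    <visual>
      <geometry><box size="0.05 0.05 0.4"/></geometry>
    </visual>
  </link>
  <joint name="shoulder_pan" type="revolute">
    <parent link="base_link"/>
    <child link="upper_arm"/>
    <origin xyz="0 0 0.1"/>
    <axis xyz="0 0 1"/>
    <limit lower="-1.57" upper="1.57" effort="10.0" velocity="1.0"/>
  </joint>
</robot>
"""

if __name__ == "__main__":
    rospy.init_node("load_description")
    # rviz, robot_state_publisher and friends read the model from this parameter.
    rospy.set_param("robot_description", MINIMAL_ARM_URDF)
```

You can sanity-check a URDF file with the check_urdf tool from urdfdom before loading it.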

After going through those two steps, you can control your arm and — for example — display it using one of the greatest ROS tools: rviz, the all-powerful ROS visualiser.

Shadow Hand and muscle arm visualised in rviz.
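To make the driver step more concrete too, here is a minimal, hypothetical sketch of such a node in Python. The joint names and the position_command topic are made up, and the node simply keeps the state in memory; a real driver would talk to the arm’s own API or fieldbus instead.

```python
#!/usr/bin/env python
# Minimal, hypothetical arm driver: publishes joint states, accepts position commands.
# A real driver would read from and write to the arm's own API or fieldbus.
import rospy
from sensor_msgs.msg import JointState
from std_msgs.msg import Float64MultiArray

JOINT_NAMES = ["shoulder_pan", "shoulder_lift", "elbow"]  # made-up joint names


class MinimalArmDriver(object):
    def __init__(self):
        self.positions = [0.0] * len(JOINT_NAMES)
        self.state_pub = rospy.Publisher("joint_states", JointState, queue_size=1)
        rospy.Subscriber("position_command", Float64MultiArray, self.command_cb)

    def command_cb(self, msg):
        # Here we only store the command; real hardware would be driven instead.
        if len(msg.data) == len(self.positions):
            self.positions = list(msg.data)

    def spin(self):
        rate = rospy.Rate(100)  # publish the joint states at 100 Hz
        while not rospy.is_shutdown():
            state = JointState()
            state.header.stamp = rospy.Time.now()
            state.name = JOINT_NAMES
            state.position = self.positions
            self.state_pub.publish(state)
            rate.sleep()


if __name__ == "__main__":
    rospy.init_node("minimal_arm_driver")
    MinimalArmDriver().spin()
```

With a node like this running alongside robot_state_publisher and a URDF on the parameter server, rviz can already display a live model of the arm.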

Going further

The two steps above are the bare minimum to get your arm into ROS. However, you can unlock many more capabilities by spending a bit more time on these two additional steps…

Once you have a driver and a model, it’s worth spending a few more minutes interfacing with MoveIt!. MoveIt! is a ROS library for mobile manipulation. After going through a quick setup wizard, you’ll get a great interface to interact with your robot, inverse kinematics and collision-free planning, and it can even use live updates from a 3D sensor.
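As a small illustration, assuming the setup wizard generated a planning group called "arm" (the group name and target pose below are placeholders), MoveIt!’s Python interface lets you plan and execute a motion in a handful of lines:

```python
#!/usr/bin/env python
# Small illustration: planning and executing a motion with MoveIt!'s Python API.
# Assumes a planning group called "arm" was created in the setup assistant.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("moveit_example")

group = moveit_commander.MoveGroupCommander("arm")

# A made-up, hopefully reachable pose for the end effector.
target = Pose()
target.position.x = 0.4
target.position.y = 0.1
target.position.z = 0.4
target.orientation.w = 1.0

group.set_pose_target(target)
group.go(wait=True)          # plan to the pose and execute the trajectory
group.stop()                 # make sure there is no residual movement
group.clear_pose_targets()
```

Behind that handful of lines, MoveIt! runs the inverse kinematics solver and planner you picked in the wizard and checks the result against the collision model from your URDF.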

left: MoveIt! setup assistant / right: a collision free plan in MoveIt!

A ros_control interface is also very nice to have — if your robot interface is fast enough. Let’s assume the driver you’ve written sends torque commands to the robot at a fast rate — for example 1 kHz, as is the case for our hardware. With ros_control, you can dynamically load or swap different controllers: position control, trajectory control, torque control… This is especially useful when working with complex robots such as our Hands. For an arm, the main benefit is that multiple robots can share the same control loop, so they can be strictly synchronised to perform a single action. This makes it much easier to coordinate an arm and a hand, for example.
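For illustration, once your robot exposes a ros_control interface, swapping controllers at runtime goes through the controller_manager’s switch_controller service. In this sketch the controller names are placeholders for whatever you have configured on your robot.

```python
#!/usr/bin/env python
# Illustration: swapping ros_control controllers at runtime via the controller_manager.
# The controller names below are placeholders for your own configuration.
import rospy
from controller_manager_msgs.srv import SwitchController, SwitchControllerRequest

rospy.init_node("controller_switcher")
rospy.wait_for_service("controller_manager/switch_controller")
switch = rospy.ServiceProxy("controller_manager/switch_controller", SwitchController)

req = SwitchControllerRequest()
req.start_controllers = ["trajectory_controller"]   # controller to activate
req.stop_controllers = ["position_controller"]      # controller to deactivate
req.strictness = SwitchControllerRequest.STRICT     # fail if anything cannot be switched
switch(req)
```

Whether a given controller can actually be started depends on the hardware interfaces (position, velocity, effort) your driver exposes.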

Final words

Once you have all those capabilities for your robot arm, you just need to choose which other capabilities you want for that robot of yours. Object recognition? A mobile base? Voice interaction? ROS can offer all those and much more!

Anything I missed? Or do you want help wrapping your hardware in ROS? Let’s connect on Twitter @ugocupcic.


Well done for making it all the way to the end! If you enjoyed it, how about liking/sharing this article?