Creating a Gazebo Simulation with ROS2 for your own robot

Section 1: Introduction

The need for a simulator for any robot cannot be dismissed. Robots are getting more and more complex and are therefore harder to develop. Having a simulation in which you can test your robot and gather data and experience is valuable, but setting one up is easier said than done.

Gazebo is a powerful and accurate simulator in which you can define robots and worlds and fully control them. Sadly, the documentation of Gazebo is inconsistent, and you have to read and research your way through many different tutorials before you finally get a Gazebo simulation running for a custom robot and world, even for small projects.

In this article we take you through developing a Gazebo simulation for a self-driving RC car and connecting it to ROS2. We will first explain the basics of Gazebo and how to install everything. Next we will describe how to model a robot in Gazebo, and after that how to create a world for the robot to act in. We will then show you how to connect your robot simulation to ROS2 and implement your robot's logic with it. Lastly we will give some troubleshooting advice and a short insight into how Gazebo worked in our custom autonomous RC car project.

This article is intended for people who already have experience with ROS or ROS2. All examples refer to the development of a simulation for self-driving RC cars and were built and tested on Ubuntu 18.04.

Section 2: Gazebo

Gazebo is a simulation environment with a built-in physics engine (ODE by default) that allows you to simulate your robot with realistic physics. This comes in handy when you try to detect flaws in the hardware and software design of your robot. Gazebo also integrates very well with ROS: you can easily switch between simulated and real hardware without needing to change the brain of your robot. The power and accuracy of a Gazebo simulation comes from its Simulation Description Format (SDF). Sadly, SDF is not very intuitive, because you have to know a lot about physics and the properties of your robot to create a suitable SDF file. That is why we take you through the development of a custom Gazebo robot in SDF in this article.

Section 3: Installation

To connect Gazebo with ROS2 you will have to install a few things. Let us start with Gazebo itself. You can get it via one simple call in your terminal. We recommend installing the newest version of Gazebo (which is 11 at the time this article was published). Get yourself Gazebo with

$ curl -sSL http://get.gazebosim.org | sh

Next you need the libraries for communication between your ROS version and your Gazebo version. The general installation scheme is as follows:

$ sudo apt install ros-<version>-gazebo<version number>-*

In our case we have ROS2 Dashing Diademata and Gazebo 11. This means that we have to execute

$ sudo apt install ros-dashing-gazebo11-*

Lastly, you can optionally install Unified Robot Description Format (URDF) and XML Macros (XACRO) parsers. These follow the same installation scheme as above. Therefore you need to run

$ sudo apt install ros-<version>-urdf

and

$ sudo apt install ros-<version>-xacro

For this article you will need to run

$ sudo apt install ros-dashing-urdf

and

$ sudo apt install ros-dashing-xacro

Section 4: Creating a Gazebo Robot

Section 4.1: URDF vs SDF

There are two different ways to model a robot in Gazebo. If you already have experience with ROS or ROS2, you will know URDF. URDF is used to describe the kinematic and dynamic properties of a robot; it is written in XML and commonly used in ROS applications. However, if you want to use Gazebo, you will quickly realize that URDF files are not designed for it: additional simulation-specific properties must be defined, for example so that the position of the robot can be set in the simulation.

SDF is the other option that can be used for modelling robots. It was specifically developed for Gazebo to describe robots and simulation worlds. It does everything that URDF can do, with the difference that the simulation-specific properties are added directly. For this article we will introduce and use SDF.

Gazebo converts a URDF file into an equivalent SDF file, once the simulation-specific properties have been inserted. If you already have a completed URDF file, it is more convenient to use it instead of writing a new SDF. But if you want to build your robot model from scratch and to use it later in Gazebo, we recommend using SDF.

The goal of the next sections is to provide you a basic understanding of how to use SDF. We will first introduce the basics and then cover some important tags. Subsequently, we are going to show you a real example of how to create a robot model with SDF.

We also highly recommend the official SDFormat.org page, if you want to learn more about how to use the different tags.
If you are interested in preparing URDF files for Gazebo, we refer to the tutorial made by Gazebo.

Section 4.2: SDF — The basics

We will first discuss three important components of an SDF file that are used to describe a robot model:

  • A link is used to describe a body. Physical properties such as inertial, visual and collision elements can be assigned to links.
  • The joint, on the other hand, is used to connect two links. One link is defined as a parent and the other one as a child.
  • Plugins allow you to extend the functionalities in Gazebo by adding software that either comes from third parties or is written by yourself. However, plugins are optional, so they are not necessarily needed for robot modeling.

Let us continue with the properties of a link. The visual properties (<visual>) are responsible for the visual presentation of the link. These can be simple geometric figures, such as spheres or cubes, or meshes that can be imported. These meshes can be provided by other developers or designed by yourself. We recommend using software such as Blender to create the meshes, which can then be exported as Collada (.dae) or Wavefront OBJ (.obj) files.

The collision properties (<collision>) are implemented to enable collision detection at the link. As with the visual properties, either simple geometric figures or meshes can be used. In this case we recommend keeping the shape of the collision object as simple as possible to reduce computation time.

The <inertial> tag is used to define mass-related properties, so that the model is treated correctly by the Gazebo physics engine. The mass of the model in kilograms is set with <mass>. With the <pose> tag you can specify where the center of mass of the object is located. Finally, the moment of inertia of the object can be defined with <inertia>. If a real version of the robot already exists, these properties can be determined from the robot parts, for example by weighing them. Alternatively, you can estimate the values or leave them at their defaults if they are not important for your use case. However, an unfavorable mass distribution can cause the robot to behave in undesirable ways.
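As a rough illustration, an <inertial> block for a small chassis could look like the following sketch. The numbers are made-up placeholders, not values from our project, and should be replaced with measured or estimated values for your own robot.

<inertial>
  <mass>2.0</mass>                 <!-- mass in kilograms -->
  <pose>0 0 0.02 0 0 0</pose>      <!-- center of mass relative to the link -->
  <inertia>                        <!-- moments of inertia in kg*m^2 -->
    <ixx>0.002</ixx>
    <ixy>0.0</ixy>
    <ixz>0.0</ixz>
    <iyy>0.012</iyy>
    <iyz>0.0</iyz>
    <izz>0.013</izz>
  </inertia>
</inertial>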

Section 4.3: SDF — Example

Now that we know the basic components of an SDF file, we will look at a practical example in this section. We will go through the code for the mule_car model step by step and go into detail about the individual properties for it. The Mulecar is an autonomous RC car which was developed at the University of Bremen.

The following tags enclose the entire SDF document and define which versions of XML and SDF are used and the name of the robot model. Make sure that your Gazebo version is compatible with the SDF version, otherwise problems may appear. You can check the changelogs to see which SDF versions are supported.

The <static> tag ensures that the robot model is not affected by the physics engine, if it is set to true. This is useful if you have not placed the components of the car in the right place yet, as the car will stand still. The default assignment of <static> is false.

Minimal example for an SDF file
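In sketch form, such a minimal file could look like the following; the SDF version and the model name are examples and may need to be adapted to your own Gazebo version and model.

<?xml version="1.0"?>
<sdf version="1.6">
  <model name="mule_car">
    <static>false</static>
    <!-- links, joints and plugins for the model go here -->
  </model>
</sdf>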

Let us continue with links. We can define the position of these links in the world with <pose>. The parameter order is:

<pose>x y z roll pitch yaw</pose>

The x, y and z coordinates are used to define the position of the link and the roll, pitch, yaw parameters can be used to determine the orientation of the link.

The collision properties of the link are defined with the <collision> tag. This tag carries the name of the collision, which must be unique if you use multiple collision elements inside a link.

In addition, it is mandatory to define a <geometry> tag to describe the shape of the object. Within this tag you can define simple geometric figures, such as box, sphere, cylinder and plane, or more complex ones, like mesh, heightmap and image. As we mentioned before, stick to simple geometric figures to reduce the computation time of collision detection. We have created a box with the dimensions 0.275 m x 0.1 m x 0.05 m for our collision object.

It is also possible to define additional <pose> tags in the collision properties as well as in the visual properties. These poses are interpreted relative to the pose of the link. In our case we placed the box-shaped collision object slightly higher than the car by increasing the z coordinate. Otherwise the floor and the box would have touched each other constantly, causing permanent collisions.

The structure of the visual properties is similar to the collision properties. Inside the <visual> tag we decided to import our own mesh instead of using simple figures. For this we are using the <uri> tag to specify the path where our car model mesh can be found, which we have created with Blender beforehand.

Example of creating a link in SDF
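Putting the pose, collision and visual elements together, a chassis link along these lines could be sketched as follows. The pose values and the mesh path are illustrative assumptions and need to be adapted to your own model.

<link name="chassis">
  <pose>0 0 0.05 0 0 0</pose>

  <collision name="chassis_collision">
    <!-- simple box instead of the full mesh to keep collision checking cheap -->
    <pose>0 0 0.01 0 0 0</pose>
    <geometry>
      <box>
        <size>0.275 0.1 0.05</size>
      </box>
    </geometry>
  </collision>

  <visual name="chassis_visual">
    <geometry>
      <mesh>
        <uri>model://mule_car/meshes/chassis.dae</uri>  <!-- path to the mesh created in Blender -->
      </mesh>
    </geometry>
  </visual>
</link>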

Joints are used to connect different links together and to define kinematic and dynamic properties. There are many different types of joints you can use for this. In this tutorial made by SDFormat.org the types are discussed in more detail.

In our case we use the type revolute for the wheels. It allows us to rotate the wheels around a single axis, which is specified with the <xyz> tag inside the <axis> tag. For our wheels we have set up a rotation around the z axis.

When you define joints, you must also specify which of the links is the parent and which one is the child with the <parent> and <child> tags respectively. This affects the movement of the joint as the child moves relative to the parent. Here we determine that the link front_left_wheel is the child of the chassis.

Example of defining a joint in SDF
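A revolute joint connecting the chassis to the front left wheel could be sketched as follows; the limits are an assumption chosen so wide that the wheel can effectively spin freely.

<joint name="front_left_wheel_joint" type="revolute">
  <parent>chassis</parent>
  <child>front_left_wheel</child>
  <axis>
    <xyz>0 0 1</xyz>                 <!-- rotation around the z axis -->
    <limit>
      <lower>-1e16</lower>           <!-- effectively unlimited rotation -->
      <upper>1e16</upper>
    </limit>
  </axis>
</joint>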

We can also attach various sensors to the robot to obtain information from the environment. Gazebo already provides a variety of different sensors for this purpose, which can be added with the <sensor> tag. The type of sensor is then defined in the <sensor> tag with the type attribute. So if we want to add a camera for example, it would look like this:

Minimal description of defining a Sensor in SDF
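A minimal camera sensor attached to a link could be sketched like this; the sensor name is arbitrary.

<sensor name="front_camera" type="camera">
  <!-- general sensor elements and camera-specific configuration go here -->
</sensor>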

In addition, several elements in the <sensor> tag can be configured to describe the behavior of the sensor. With <update_rate> we define how often sensor data is generated per second, and with <always_on> we determine whether the sensor is continuously updated at that rate. We enable a graphical visualization of the sensor output with <visualize>, and with <topic> we can specify the topic on which the sensor information should be published.

General description of defining a Sensor in SDF
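With the general elements filled in, a sensor definition could look roughly like this sketch; the numbers and the topic name are placeholders.

<sensor name="front_camera" type="camera">
  <update_rate>30</update_rate>        <!-- how often data is generated, in Hz -->
  <always_on>true</always_on>          <!-- keep updating at the given rate -->
  <visualize>true</visualize>          <!-- show the sensor output in the Gazebo GUI -->
  <topic>/front_camera/image</topic>   <!-- topic the data is published on -->
  <!-- type-specific configuration (e.g. <camera> or <ray>) goes here -->
</sensor>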

In the following we take a closer look at a specific sensor as an example. Using a laser sensor, we can detect objects in the environment and perform collision detection. To add such a sensor you must enter ray as the type attribute of the <sensor> tag.

In the <ray> tag you have to specify how the data should be collected and how large the range should be. With <scan> you can decide whether you want to perform horizontal (<horizontal>) or vertical (<vertical>) measurements. Then you have to define how many measurements are taken per scan cycle with <samples>. Furthermore, the resolution must be specified with <resolution> and the angle range must be selected, which goes from <min_angle> to <max_angle>.

The range of the laser is set within the <range> tag, where the range extends from <min> to <max> in meters. In addition, the resolution for this must also be specified here.

We have also added a plugin to create an interface of the output from the laser sensor in ROS. We have specified the topic on which we want to publish in <argument> and the type of the message in <output_type>.

Full SDF block for defining a LIDAR
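Putting the previous paragraphs together, a ray-based LIDAR with a ROS2 interface could look roughly like the following. The numeric values (sample count, angles, range) are placeholders, and the plugin shown is the gazebo_ros ray sensor plugin; the topic remapping is an assumption you would adapt to your own setup.

<sensor name="lidar" type="ray">
  <pose>0.1 0 0.1 0 0 0</pose>
  <update_rate>10</update_rate>
  <always_on>true</always_on>
  <visualize>true</visualize>
  <ray>
    <scan>
      <horizontal>
        <samples>360</samples>            <!-- measurements per scan cycle -->
        <resolution>1</resolution>
        <min_angle>-3.14159</min_angle>   <!-- angle range in radians -->
        <max_angle>3.14159</max_angle>
      </horizontal>
    </scan>
    <range>
      <min>0.1</min>                      <!-- range in meters -->
      <max>10.0</max>
      <resolution>0.01</resolution>
    </range>
  </ray>
  <plugin name="lidar_ros_interface" filename="libgazebo_ros_ray_sensor.so">
    <ros>
      <argument>~/out:=scan</argument>    <!-- publish on the scan topic -->
    </ros>
    <output_type>sensor_msgs/LaserScan</output_type>
  </plugin>
</sensor>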

To learn more about the different sensors that Gazebo provides, we recommend the Gazebo tutorials and the SDFormat.org specification page.

If you have looked up the pages for the other sensors, you may have noticed that we did not use the Gazebo LIDAR sensor type, but built our own LIDAR with the help of the ray type. In our experience the Gazebo LIDAR sensor simply did not work in our case, and we would like to save you some hours of work with this workaround.

Section 5: How to create a World

Now that we have created our robot model, we want to simulate how it behaves under different physical conditions. For this we need a world in which we can load the robot model. When we start Gazebo, a default world is already loaded. However, the default world of Gazebo is a ground plane, which not only looks completely uninteresting, but also has little to do with realistic environments.

Default world in Gazebo

Certain functionalities of a robot cannot be sufficiently simulated in this world. For instance, let’s take a look at the LIDAR in the images below, which can be used for collision detection. The default world of Gazebo contains no buildings, walls or other objects, which means that the LIDAR cannot detect anything.

LIDAR scanning without obstacles (left) and with obstacles (right).

Fortunately, Gazebo offers us the possibility to import worlds and even provides us with tools to build our own worlds directly. Let’s get started with that!

Section 5.1: Exploring the Building Editor

Make sure that you start Gazebo with root permissions (sudo), otherwise you might not be able to save the created world, because the dialog box for this will not appear. If you are interested in a workaround for this particular problem, check out the troubleshooting in section 8.

Let’s first open the Building Editor. You can enter it under the Edit menu by clicking Building Editor or simply by pressing Ctrl+B. You will now see three different areas. On the left side is a palette where you can choose walls, doors, windows or stairs. It is also possible to assign colors or textures to these components. In the upper area you see a 2D view from the top-down perspective. The components from the palette can be inserted here. In addition, a floor plan can be imported, which is then displayed in this view (you will see more about this later). Finally, in the area below you can then see how the building is displayed in the simulation.

Building editor of Gazebo

With the help of an example, we want to make the functionality of the Building Editor easier to understand. Select the wall on the palette and move your cursor to the 2D view area. Click on a spot and move your cursor to drag the wall to the desired length. You may notice that you cannot place the wall completely freely: the length changes only in 0.25 m steps and the direction in 15° steps. But do not worry! To get full control, press Shift while dragging the wall. If you are not satisfied with an object you placed, you can remove it by selecting it and pressing Delete.

Placing walls, windows and doors in Gazebo

After finishing your first room, you want to get inside somehow, right? Click on the door on the palette to create an opening in any wall. And since we are on the subject of creating openings, let’s add some windows, too. Please note that you do not insert “real” doors and windows, because Gazebo does not offer those yet; for now you just see holes in the wall.

You may not agree with the size of the windows, doors or walls and want to change them. For this, every component on the palette has its own inspector, where you can configure different parameters for each individual object.

For instance, one thing you cannot influence while placing a window is its size. Open the window inspector by double-clicking the window or by right-clicking it and choosing “Open Window Inspector”. Here you can set the size to the value you desire.

You can also add stairs and therefore more floors. To do this, click on the stairs in the palette and place them somewhere in the 2D view. After you have placed the stairs, you can click on the + symbol above the 2D view to add another floor. Make sure that you have already placed walls on the ground floor before adding another floor, as those walls will be copied to the next level. You can also select the copied walls afterwards and delete them with Delete if you want to place new walls.
To decide which 2D view we want to edit now, click on the drop down menu (left of the - and + symbols) and select the desired level.
If you don’t want to use the created level anymore, you can remove it with the - symbol.

Building additional floors in Gazebo

If you have a floor plan of a building, you can import it and display it in the 2D view. If you need a building based on a real example, this is a great option: you can simply drag the walls along the template and place doors and windows in the right spots.
To do so, click Import on the palette and choose the file with the floor plan you want to import.

Import a floorplan

Then you have to set the correct scaling so that the walls have the right length; otherwise the measurements will not be correct. If you already know the resolution in px/m, you can enter it directly. Alternatively, you can select a wall whose length you know to determine the resolution. Click Ok and the floor plan is displayed with the correct scaling in the background of the 2D view.

Scaling the floorplan
Tracing the walls

When you finish your work, it is time to save your world. First, name your model on the palette. Afterwards you can save your world by pressing Ctrl+S or by selecting File in the menu and choosing Save As. Before exiting the Building Editor, make sure you created everything you wanted, because you will not be able to edit your model once you exit. This is a major drawback of Gazebo: no model or world is perfect on the first iteration, there will always be things to improve, and not being able to re-edit a building is very unfortunate and leads to more work.

If everything is alright and you are satisfied with your world, click File and select Exit Building Editor.

Section 5.2: Adding more objects

We now have a robot and a world in which it can move. But a world can consist of much more than just walls. We also want to show you how to add different objects to the Gazebo world, ranging from simple geometric figures to complex models from the Gazebo model database.

Geometric figures added in the world

The toolbar of the Gazebo window already lets us select simple geometric figures and place them in the world. Click on the cylinder and move your mouse into the scene to place it in the world. Afterwards you can translate, rotate or scale objects by selecting the corresponding icons in the toolbar or by pressing t (translate), r (rotate) or s (scale) and then selecting the object.

Placing a cylinder
Translate, rotate and scale the cylinder

Besides the simple figures you can also select different models from the Gazebo model database. To get there, click on the Insert tab in the selection menu on the left to open the database. Then you can choose from a variety of different models. For example you can select a table and then place it in the world.

Placing a table

If you are not satisfied with some objects, you can also delete them by selecting them and pressing the Delete button. Alternatively you can right-click and delete the object.

When you have placed your objects and want to leave, you can save your world by clicking on File and then Save World As, or by pressing Ctrl+S and saving the world.

Section 6: Implementing your robot in ROS2

When you implement your robot in ROS2 you are basically forced to choose an appropriate architecture (that is another implicit advantage of ROS2). In ROS2 every software component works independently of the others; the only relevant interfaces are the topics. Because topics have fixed message types, you get a fixed interface description for your code. You will quickly find that you are also strongly encouraged to write your software independently of the existing software, meaning that your software communicates only via topics, both internally and externally. Therefore the abstract architecture of your robot will probably look like this:

General description of a closed loop controller

You will have sensors which publish their measurements over some topics to your decision-making unit, i.e. the brain of your robot. The brain will then output actions for your robot to execute. The actuators will then feed back some information, like their new pose, to your brain. This architecture is often referred to as a closed-loop controller, because the action feedback and the action selection form an infinite closed loop.

The implementation of your sensors is in most cases not done by yourself, because most companies provide a ROS node for their sensor. Therefore you just install the node and get the sensor information. For the actuators it gets a bit more complicated. The common way is to send a control message to your actuators; the actuators then interpret the control instruction and behave accordingly. The way your control message is designed depends on the robot you are building. Thankfully, most control messages are already implemented in ROS, so you just need to choose one that is suitable for your use case. For example, if you want to control a car you could write a custom message containing the throttle and the steering the car should perform. This is a bad idea, because most ROS software works better with standard messages, so you would miss out on a lot of benefits by using a custom message. You could instead use the geometry_msgs/Transform message. This message holds a translation as a 3D vector and a quaternion for the rotation. You can control the velocity of the car via the x coordinate of the 3D vector and the steering via the z rotation of the quaternion. This message is often used for motion tasks and therefore has a lot of support for visualisation and handling (especially in RViz).
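As a small illustration of this idea, a minimal rclpy node that publishes such a Transform-based control command could look like the sketch below. The node and topic names are made up for this example and are not taken from our project, and a real implementation would convert the steering angle into a proper unit quaternion.

import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Transform


class CarController(Node):
    """Publishes throttle and steering encoded in a geometry_msgs/Transform."""

    def __init__(self):
        super().__init__('car_controller')
        # The topic name 'car_control' is an arbitrary example.
        self.publisher = self.create_publisher(Transform, 'car_control', 10)

    def send_command(self, throttle, steering):
        msg = Transform()
        msg.translation.x = throttle   # forward velocity
        msg.rotation.z = steering      # steering encoded in the quaternion's z component
        msg.rotation.w = 1.0           # sketch only: not normalized to a unit quaternion
        self.publisher.publish(msg)


def main():
    rclpy.init()
    controller = CarController()
    controller.send_command(0.5, 0.1)
    rclpy.shutdown()


if __name__ == '__main__':
    main()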

If you end up with a similar architecture that separates control from acting and sensing in the same way, that is fine too. The benefit is that you can now simply replace your sensors and actuators with simulated ones, and your brain will not notice. This leads us to the next point:

Section 7: Connecting Gazebo with ROS2

Now we want to bring the previously defined SDF files and the implemented robot logic together. Usually you do not want to start every process by hand, so you write launch files to start the whole robot. Unfortunately the documentation of both ROS2 and Gazebo is a bit unclear about the most convenient way to invoke one from the other in launch files. Therefore we will show the way we did it, which works but may not be the intended one. We added the following command to our launch file, which launches a Gazebo server and client.

ROS2 launch file command for launching Gazebo
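This was roughly equivalent to the following ExecuteProcess action inside a standard ROS2 Python launch file; the world file path and the two gazebo_ros system plugins (needed for the spawn service) are assumptions based on the usual gazebo_ros setup.

from launch import LaunchDescription
from launch.actions import ExecuteProcess


def generate_launch_description():
    return LaunchDescription([
        # Starts the Gazebo server and client; replacing 'gazebo' with 'gzserver'
        # runs the simulation headless. The world path is an example.
        ExecuteProcess(
            cmd=['gazebo', '--verbose', 'worlds/mule_world.world',
                 '-s', 'libgazebo_ros_init.so', '-s', 'libgazebo_ros_factory.so'],
            output='screen'),
    ])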

After that we added a second command to our launch file to spawn our robot.

ROS2 launch file command for spawning a model in Gazebo
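The spawn command was roughly a launch_ros Node action like the one below, appended to the actions list of the launch description above; the entity name and the SDF path are examples. Note that on ROS2 Dashing the parameter is called node_executable, while newer distributions use executable.

from launch_ros.actions import Node

spawn_mule_car = Node(
    package='gazebo_ros',
    node_executable='spawn_entity.py',   # 'executable' on newer ROS2 distributions
    arguments=['-entity', 'mule_car', '-file', '/path/to/mule_car/model.sdf'],
    output='screen')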

This spawns our custom-built Mulecar into the Gazebo simulation.

A slight modification can be made in the first code snippet: you can change ‘gazebo’ to ‘gzserver’ to start the simulation without rendering it. This will improve your performance by a lot.

Section 8: Troubleshooting

As we mentioned before, it may be the case that your dialog box for saving the world does not appear. In the following, we want to show you how you can still manage to save your world.

For this we need to introduce some background knowledge about Gazebo. Whenever you start Gazebo, you start two processes: a parent process called gzserver and a child process called gzclient. The gzserver handles all the calculations and stores the state of the world; the gzclient just renders it. So if you have run Gazebo without root permissions and your saving dialog does not show, you need to close the client from your terminal. This can be done by opening a new terminal and entering the following commands:

$ ps -a

Then search for a process called gzclient. Remember the process id and call

$ kill <process_id>

Then your gzclient will close and you may think you lost everything. But the gzserver is still running. You can check that again via

$ ps -a

and looking for a process called gzserver. Next you need to call

$ sudo gzclient

Then the Gazebo GUI will show up again, but this time with root permissions. When you try to save the world now, the saving dialog will appear.

A few interesting conclusions can be drawn from the explanation above. First and most important, you can speed up your simulation drastically by not starting a gzclient. Gazebo is very inefficient at rendering, so the gzclient process will consume most of the computing power, and as a result the simulation has to slow down to keep everything correct. Without the gzclient the ROS interface will still work perfectly fine. A second observation is that you can have multiple clients for one gzserver, so you can render different parts of the simulation on different computers if needed.

Section 9: Conclusion

In this article we have shown how to create a custom Gazebo simulation for a robot that works with ROS2. We have gone through the basics and intentions of describing robots in SDF and how to use plugins provided by Gazebo. We have also shown how to create a world for your robot. Lastly, we went through some troubleshooting for problems that may occur when you are new to Gazebo. If you are interested in a project where we used Gazebo as a simulator for an autonomous RC car, you can check out the project website and the corresponding Git repositories for the Mulecar simulation and Mulecar implementation.

Picture of the Mulecar
