Testing Ansible role of a systemd-based service using Molecule and Docker

Tomasz Klosinski
5 min read · May 16, 2017
Img source: https://www.ansible.com/hubfs/2016_Images/Blog_Headers/Ansible-Docker-Blog-2.png

For the last two years, I have been writing Ansible roles for services running on top of CentOS 6 and 7. That included provisioning VMs, bootstrapping operating systems and deploying software like Cassandra, Elasticsearch, Kibana, Grafana, Fluentd, monit, nginx, HAProxy, MariaDB, in-house Java and Scala apps, and many, many others. Since the release of CentOS 7, systemd manages most of these services.

When Docker was introduced, the first thing that came to my mind was that I should start using it for testing my Ansible code. Some time later I found a tool that actually facilitates that approach — it’s called Molecule.

The only problem is that it’s not easy to run a systemd-based service within a Docker container. To make systemd and Docker work together, one has to overcome a number of issues and abuse the configuration of both.

First of all, because systemd requires the kernel capability CAP_SYS_ADMIN, your container has to be privileged (which makes it less secure). That’s not a disaster for a development/testing environment, but it rules out potential production deployments.

Secondly, systemd needs /sys/fs/cgroup mounted in order to start, so you have to mount it into the container as a Docker volume from the host system.

Next, systemd by default starts lots of services that you might not need for testing one little service in one little container. Thus, you have to figure out how to stop it from doing that automatically.

And finally, due to the annoying relationship between the systemd developers and the Docker developers, neither project really aims at making users’ lives easier when deploying systemd-based services within Docker containers. They just don’t care.

Here’s an example of a Dockerfile, created by Dan Walsh, a famous Red Hat developer, that overcame these difficulties and actually made it possible to run systemd within a container.
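It goes roughly like this (reproduced from memory of Walsh’s widely circulated example, so treat the exact list of removed unit files as illustrative rather than canonical):

```dockerfile
FROM centos:7
ENV container docker
# Strip out the default unit files so that systemd boots almost nothing
# besides what our role later installs (the exact list is illustrative)
RUN (cd /lib/systemd/system/sysinit.target.wants/; \
    for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
    rm -f /lib/systemd/system/multi-user.target.wants/*; \
    rm -f /etc/systemd/system/*.wants/*; \
    rm -f /lib/systemd/system/local-fs.target.wants/*; \
    rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
    rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
    rm -f /lib/systemd/system/basic.target.wants/*; \
    rm -f /lib/systemd/system/anaconda.target.wants/*
# systemd expects the cgroup hierarchy and must run as PID 1
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
```

Note the three tricks we will have to replicate in Molecule: the container environment variable, the /sys/fs/cgroup volume, and /usr/sbin/init as the startup command.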

Now we can have a look into Molecule to figure out how to replicate that Dockerfile in its configuration. Let’s walk through an example.

First we need an Ansible role. I’ve decided to fork a popular nginx role from Ansible Galaxy: jdauphant.nginx. Once we have an Ansible role on our local machine, we need to install the testing framework — Molecule (and Docker SDK for Python to help it interact with Docker):

$ pip install molecule docker

In the next step, we enter our role directory structure and generate the Molecule configuration crafted for Docker:

$ molecule init --driver docker

This command will generate the molecule.yml configuration file, a playbook.yml file and a tests directory (later, running Molecule will generate the .molecule directory with temporary stuff — you might want to add it to .gitignore, if you don’t want to push it to your git repo).

Default values in the configuration file should already work, although they are set for Ubuntu and assume a role for a non-systemd-based service. To run it on CentOS 7, and provide all the configuration that a systemd-based service requires, we have to replace this part of molecule.yml:

- name: ansible-role-nginx
  image: ubuntu
  image_version: latest
  ansible_groups:
    - group1

with:

- name: ansible-role-nginx
  image: williamyeh/ansible
  image_version: centos7
  ansible_groups:
    - nginx
  port_bindings: { 80: 80 }
  privileged: True
  volume_mounts:
    - "/sys/fs/cgroup:/sys/fs/cgroup:rw"
  command: "/usr/sbin/init"
  environment: { container: docker }

To clarify what is happening here, let’s discuss it line by line. First, we start with a name for our container — this is how we can identify it with the docker ps command.

Then, we have a Docker image and its version. williamyeh/ansible is a container image that comes pre-configured to run Ansible plays. The main reason I used it is that it let me gather all Ansible facts — including ansible_default_ipv4, which is crucial for most roles that depend on the network. That was unfortunately not possible with the official centos:7 image.

UPDATE: Actually, it is possible to use the official centos Docker image. To make Ansible collect ansible_default_ipv4, one has to add a pre_task to the playbook.yml file that installs the iproute package. This adds the ip command to the system, which Ansible uses to fetch the networking facts. Why do basically all container images (debian, ubuntu, alpine, etc.) ship that tool, but centos doesn’t? I guess they take the Docker philosophy too seriously and went over-minimalistic.
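A minimal sketch of that pre_task fragment (note that fact gathering runs before pre_tasks, so we re-run setup afterwards to pick up the new facts):

```yaml
# Fragment of playbook.yml
pre_tasks:
  # The official centos image has no `ip` binary, so install it first
  - name: Install iproute for network fact gathering
    yum:
      name: iproute
      state: present
  # Facts were gathered before iproute existed; collect them again
  - name: Re-gather facts so ansible_default_ipv4 is populated
    setup:
```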

The next configuration option is ansible_groups, which indicates the Ansible inventory group the container should belong to. port_bindings should be familiar to every Vagrant user — it’s a way to forward ports from the container to the Docker host system.

The last four options are a direct translation of Dan Walsh’s Dockerfile into Molecule configuration. They specify that the container has to be privileged, that it should mount /sys/fs/cgroup read-write from the host machine, that it should execute /usr/sbin/init at startup, and that the container environment variable should be set to docker.

If you remember the Dockerfile, it included one more thing: the RUN instructions that pre-configure systemd not to start its additional services. You can do that, but you don’t have to. It’s up to you whether you’re fine with a longer run of your play (with a complete systemd), or a slightly shorter one with the bare minimum.

Unfortunately, Molecule doesn’t have a section for running pre-tasks, but we can add them to the playbook.yml file that Molecule uses to run the Ansible play:
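Here is a sketch of such a playbook.yml, assuming Ansible 2.2+ for the systemd module; since the container has already booted, we stop and mask units at runtime rather than deleting their symlinks as the Dockerfile did (the unit name is an example, not a required list):

```yaml
---
- hosts: all
  pre_tasks:
    # Illustrative: stop and mask a service systemd started that the
    # test does not need, mirroring the Dockerfile's RUN cleanup
    - name: Stop and mask udev inside the container
      systemd:
        name: systemd-udevd.service
        state: stopped
        masked: yes
  roles:
    - ansible-role-nginx
```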

Furthermore, if we have dependencies on external roles from Ansible Galaxy, we can ask Molecule to download them automatically. Unfortunately, listing them in the meta/main.yml file is not enough; we have to list them again in a requirements.yml file and tell Molecule where that file is. For example, if our role depends on the EPEL role, we specify it in the tests/requirements.yml file:

- src: geerlingguy.repo-epel

and then add that file to molecule.yml:

dependency:
  name: galaxy
  requirements_file: tests/requirements.yml
  options:
    ignore-certs: True
    ignore-errors: True

Et voilà, you can now run your tests for your Ansible role in a Docker container:

$ molecule test
(Screenshot: execution of “molecule test” on Mac OS X with Docker for Mac.)

The last fancy thing you might like to do is integrate your role’s tests into a CI pipeline. In this example I will use Travis CI — a popular CI service that integrates well with GitHub repositories.

For this purpose you can use a pretty default .travis.yml configuration file:
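A sketch of such a file, assuming Molecule and the Docker SDK for Python are the only dependencies (adjust the Python version to whatever your Molecule release supports):

```yaml
---
language: python
python: "2.7"
sudo: required
services:
  # Travis provides a Docker daemon for Molecule's docker driver
  - docker
install:
  - pip install molecule docker
script:
  - molecule test
```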

Just in case you’re not familiar with Travis CI: it’s a service that will run molecule test every single time you commit a change to your role and push it to your git repo. Then you only have to monitor its status to see whether your code passes the tests.

That’s it. I hope you enjoyed gluing together Ansible, its testing framework, Docker containers and Travis CI. I’m looking forward to your opinions about this awesome DevOps stack. Let me know in a Medium response (or an e-mail) how you test your Ansible roles and how you run them for development/testing.

Here you can find a git repo with the nginx role that I worked with in this article. If you have any questions, drop me a line.
