Validate Ansible roles through molecule delegated driver

Fabio Marinetti
9 min read · Apr 7, 2020

Molecule is a great tool for testing Ansible roles: it carries out a robust and flexible validation flow that helps ensure a good level of role quality. Almost all of the Molecule documentation focuses on the docker driver, where tests run against a containerized backend. Although this is a good choice in most use cases, there are scenarios where it is useful to switch to an external cloud backend through the delegated driver.

Unfortunately, the delegated driver documentation basically consists of only a few lines in the official docs, whereas a clearer explanation and some examples could be a huge help to developers who want to use Molecule this way.

This post is based on my experience in developing a simple Ansible role from zero to Galaxy, and focuses on the use of the delegated driver integrated with Google Cloud Platform. As a starting point, I took the following useful references for my project:

Delegated driver: what does Molecule doc say?

One of the reasons that pushed me to write this tutorial is this statement in the official Molecule documentation:

the developer must adhere to the instance-config API. The developer’s create playbook must provide the following instance-config data, and the developer’s destroy playbook must reset the instance-config.

The question is: what is the instance-config, and which data must the developer provide?

Instance-config is an Ansible fact stored in a YAML file in the Molecule cache ( $HOME/.cache/molecule/<role-name>/<scenario-name>/instance_config.yml), which has the following structure:

- address:
  identity_file: /home/fabio/.ssh/id_rsa  # mutually exclusive with password
  instance: millennium_falcon
  port: 22
  user: hansolo
  # password: ssh_password  # mutually exclusive with identity_file
  become_method: sudo  # optional
  # become_pass: password_if_required  # optional

This block is repeated for each instance you have to test against. For those who need to deal with Windows nodes, the documentation also provides the equivalent structure for WinRM.
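For reference, a WinRM entry in the instance-config has a similar shape. The snippet below is only an illustrative sketch — the values are placeholders and the exact keys should be checked against the official Molecule documentation:

```yaml
- address: 10.10.15.1        # WinRM endpoint of the Windows node
  instance: windows_node     # placeholder instance name
  user: administrator        # placeholder user
  password: winrm_password   # placeholder password
  port: 5986
  connection: 'winrm'        # tells Ansible to use WinRM instead of SSH
```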

The create.yml file

Once we have clarified what the instance-config is, we can take a step forward in understanding what we need to do and how to implement the process that populates it. Fortunately, Molecule helps us take an additional step ahead by providing scenario template files through the molecule init command, e.g.:

molecule init scenario --driver-name=delegated

which creates the following directory structure:

├── INSTALL.rst
├── converge.yml
├── create.yml
├── destroy.yml
├── molecule.yml
└── verify.yml
  • molecule.yml is the Molecule configuration file, which defines variables, states the phase sequence, and holds the configuration for each phase
  • create.yml contains the Ansible code for creating the instances on the cloud platform and storing their data in the instance-config
  • destroy.yml contains the Ansible code for destroying the instances on the cloud platform and removing them from the instance-config
  • converge.yml executes the role
  • verify.yml runs the verification test suite
  • INSTALL.rst gives instructions for installing the dependencies required to run the Molecule tests

Let’s focus now on the file create.yml that Molecule has generated:

- name: Create
  hosts: localhost
  connection: local
  gather_facts: false
  no_log: "{{ molecule_no_log }}"
  tasks:

    # Developer must implement.
    # Developer must map instance config.
    # Mandatory configuration for Molecule to function.

    - name: Populate instance config dict
      set_fact:
        instance_conf_dict: {
          'instance': "{{ }}",
          'address': "{{ }}",
          'user': "{{ }}",
          'port': "{{ }}",
          'identity_file': "{{ }}", }
      with_items: "{{ server.results }}"
      register: instance_config_dict
      when: server.changed | bool

    - name: Convert instance config dict to a list
      set_fact:
        instance_conf: "{{ instance_config_dict.results | map(attribute='ansible_facts.instance_conf_dict') | list }}"
      when: server.changed | bool

    - name: Dump instance config
      copy:
        content: "{{ instance_conf | to_json | from_json | molecule_to_yaml | molecule_header }}"
        dest: "{{ molecule_instance_config }}"
      when: server.changed | bool

The three tasks (populate, convert, and dump) ultimately produce the instance_config.yml file. The commented section is a placeholder for the Ansible code that should create the cloud resources and return the server variable (containing the instance details) as a registered variable or fact. The following code snippet, taken from this GitHub issue, provides an example of the above for a VMware context:

 7 - name: Create molecule instance(s)
 8   vmware_guest:
 9     hostname: "{{ molecule_yml.driver.hostname }}"
10     esxi_hostname: "{{ molecule_yml.driver.esxi_hostname }}"
11     username: "{{ molecule_yml.driver.username }}"
12     password: "{{ molecule_yml.driver.password }}"
13     datacenter: "{{ molecule_yml.driver.datacenter }}"
14     validate_certs: "{{ molecule_yml.driver.validate_certs }}"
15     resource_pool: "{{ molecule_yml.driver.resource_pool }}"
16     folder: "{{ molecule_yml.driver.folder }}"
17     name: "{{ }}"
18     template: "{{ item.template }}"
19     hardware:
20       memory_mb: "{{ item.memory | default(omit) }}"
21       num_cpus: "{{ item.cpu | default(omit) }}"
22     wait_for_ip_address: "yes"
23     state: poweredon
24   register: server
25   with_items: "{{ molecule_yml.platforms }}"

27 - name: Populate instance config dict
28   set_fact:
29     instance_conf_dict: {
30       'instance': "{{ item.instance.hw_name }}",
31       'address': "{{ item.instance.ipv4 }}",
32       'user': "vagrant",
33       'port': "22",
34       'identity_file': "{{ molecule_yml.driver.ssh_identity_file }}",
35     }
36   with_items: "{{ server.results }}"
37   register: instance_config_dict
38   when: server is changed

The code invokes the vmware_guest module (lines 7–23) to create a VM on the VMware backend. This is done for each element of the platforms array defined in the molecule.yml file (line 25). As you can see, the variables defined in the molecule.yml file are accessed through the molecule_yml fact.

The values returned by each vmware_guest call are registered as elements of the server array (line 24), which is in turn used to populate the instance-config (lines 27 ff.). Note that the update of the instance-config fact is skipped when the server variable hasn't changed.

Working with Google Cloud Platform (GCP)

Now that I have clarified what the developer needs to do when dealing with the delegated driver, and how, I'm going to share the work done for my docker-secured Ansible role. For this role I chose GCP as the cloud backend for the delegated driver. Ansible provides the GCP module family for dealing with this cloud provider, and I hope you can easily adapt my code in case you need to switch module family and cloud provider.

For this project I used these tool versions:

  • python 2.7
  • ansible 2.9.6
  • molecule 3.0.2
  • ansible-lint 4.2.0
  • yamllint 1.20.0
  • flake8 3.7.9 (mccabe: 0.6.1, pycodestyle: 2.5.0, pyflakes: 2.1.1) CPython 2.7.17 on Linux

where yamllint, ansible-lint, and flake8 are the code-linting tools; linting is included in the Molecule phases.
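In Molecule 3, the linters are wired in through the lint key of molecule.yml, which takes a shell snippet run as a single command. A sketch consistent with the tool list above (the exact commands in the repo may differ) is:

```yaml
# molecule.yml (fragment): run all three linters, failing fast on the first error
lint: |
  set -e
  yamllint .
  ansible-lint
  flake8
```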

The docker-secured role

The role to test installs Docker on a node, exposes the Docker APIs, and secures them with SSL. The procedure I followed is described in these two links from the Docker documentation:

The repository contains ready-to-use SSL certificate files for testing purposes, but you can also provide your own if you have them.

You can look at my project by cloning my GitHub repo:

git clone

Preliminary steps for GCP

First of all, I had to create a GCP project and a service account, and download the associated key. These steps are out of the scope of this tutorial; you can refer to the official GCP docs for the overall procedure. As a good reference, I also found this link useful for everything related to making Ansible and GCP work together.

For this role I created the project ansible-272015 and the service account service; its key is stored in the file secret.json.

The molecule.yml file

In this section I will show and comment on the relevant sections of my molecule.yml file.

The project, authentication type, and secret key go in the molecule.yml file under the driver section. In the same section I also added all the other parameters that remain constant through the create and destroy phases, i.e. the GCP region and zone, the SSH user and identity file, and the network parameters, since the VMs are assumed to be in the same network, which is created ad hoc for the duration of the test. All these values can be accessed from a playbook through the molecule_yml fact (e.g. molecule_yml.driver.region for accessing the region).

20 driver:
21   name: delegated
22   gcp_service_account_key: ${GOOGLE_APPLICATION_CREDENTIALS}
23   gcp_project_id: ansible-272015
24   region: us-east1
25   zone: us-east1-c
26   ssh_user: ${SSH_USER}
27   ssh_pub_key_file: "${SSH_ID_FILE}.pub"
28   ssh_key_file: "${SSH_ID_FILE}"
29   network_name: ansible-network
30   subnet_name: ansible-subnet
31   firewall_name: ansible-firewall
32   ip_cidr_range:

The platforms section of the molecule.yml file contains an array of parameters (name, image, type, size…) for the instances I want to test against. My test coverage includes CentOS 7, Ubuntu Xenial 16.04, and Ubuntu Bionic 18.04. These machines are grouped by OS type (i.e. CentOS or Ubuntu) to take advantage of inventory groups when executing Ansible.

41 platforms:
42   - name: "ds-centos7-${TRAVIS_BUILD_ID}"
43     image_family: projects/centos-cloud/global/images/family
44     machine_type: n1-standard-1
45     size_gb: 200
46     groups:
47       - centos
48   - name: "ds-ubuntu-bionic-${TRAVIS_BUILD_ID}"
49     image_family: projects/ubuntu-os-cloud/global/images/family
50     machine_type: n1-standard-1
51     size_gb: 200
52     groups:
53       - ubuntu
54   - name: "ds-ubuntu-xenial-${TRAVIS_BUILD_ID}"
55     image_family: projects/ubuntu-os-cloud/global/images/family
56     machine_type: n1-standard-1
57     size_gb: 200
58     groups:
59       - ubuntu

The other sections of molecule.yml define the test sequence and the configuration for each phase, where they differ from the defaults.
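For instance, the phase sequence is controlled by the scenario section. A sketch of what such a section can look like (the exact sequence in the repo may differ) is:

```yaml
# molecule.yml (fragment): phases executed, in order, by `molecule test`
scenario:
  test_sequence:
    - lint
    - create
    - prepare
    - converge
    - idempotence
    - verify
    - destroy
```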

The create phase and the create.yml file

As already stated, create.yml is the playbook that drives the create phase. Here I made wide use of the gcp module family for managing resources on the cloud provider (GCP). The GCP modules need some fixed parameters such as the project ID, the authentication type, and the path to the secret key; to avoid repeating these values at each module invocation, I set them as module_defaults for the whole gcp group.

 7   module_defaults:
 8     group/gcp:
 9       project: "{{ molecule_yml.driver.gcp_project_id }}"
10       auth_kind: serviceaccount
11       service_account_file: "{{ molecule_yml.driver.gcp_service_account_key }}"

Unlike the VMware case we saw before, creating an instance in GCP is not just a matter of using a single module, but a process made of multiple steps: creating the boot disk, assigning the IP address, and creating the instance itself. This implies that, to loop over the platforms, I needed to put the creation tasks into a separate file and include it within the cycle:

16 - name: create instances
17   include_tasks: tasks/create_instance.yml
18   loop: "{{ molecule_yml.platforms }}"

The file create_instance.yml contains the tasks for reserving the IP address, creating the boot disk, and creating the instance. The way I invoke the related modules is pretty standard and would change only if you switch to another cloud provider, so I won't discuss them further; instead, I want to spend a few words on how to return the instance data that feeds the instance-config population tasks.

 7 - name: initialize instance facts
 8   set_fact:
 9     instance_created:
10       instances: []
11   when: instance_created is not defined

... create the instance and register the instance variable ...

56 - name: update instance facts
57   set_fact:
58     instance_created:
59       changed: "{{ instance.changed | bool }}"
60       instances: "{{ instance_created.instances + [ instance ] }}"
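The elided middle part is where the GCP resources actually get created. A minimal sketch of those steps, assuming the module_defaults shown above and with resource names chosen purely for illustration (the actual tasks in the repo may differ), could look like:

```yaml
# tasks/create_instance.yml (sketch) -- item comes from the loop over molecule_yml.platforms
- name: reserve an external ip address
  gcp_compute_address:
    name: "{{ item.name }}-ip"
    region: "{{ molecule_yml.driver.region }}"
  register: address

- name: create the boot disk
  gcp_compute_disk:
    name: "{{ item.name }}-disk"
    size_gb: "{{ item.size_gb }}"
    source_image: "{{ item.image_family }}"
    zone: "{{ molecule_yml.driver.zone }}"
  register: disk

- name: create the instance
  gcp_compute_instance:
    name: "{{ item.name }}"
    machine_type: "{{ item.machine_type }}"
    zone: "{{ molecule_yml.driver.zone }}"
    disks:
      - auto_delete: true
        boot: true
        source: "{{ disk }}"          # attach the disk registered above
    network_interfaces:
      - access_configs:
          - name: External NAT
            nat_ip: "{{ address }}"   # use the reserved address
            type: ONE_TO_ONE_NAT
    metadata:
      ssh-keys: "{{ molecule_yml.driver.ssh_user }}:{{ lookup('file', molecule_yml.driver.ssh_pub_key_file) }}"
  register: instance
```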

The instance_created fact is then used after the platform loop to populate the instance-config:

20 - name: Populate instance config dict
21   set_fact:
22     instance_conf_dict: {
23       'instance': "{{ }}",
24       'address': "{{ item.networkInterfaces[0].accessConfigs[0].natIP }}",
25       'user': "{{ molecule_yml.driver.ssh_user }}",
26       'port': "22",
27       'identity_file': "{{ molecule_yml.driver.ssh_key_file }}", }
28   with_items: "{{ instance_created.instances }}"
29   register: instance_config_dict
30   when: instance_created.changed

Here, this task is executed only if one of the instances changed, just as in the VMware case where the clause server is changed was specified.

Finally, I tested the create phase by issuing the command:

molecule create --scenario-name=gcp

Once I had verified that the resources were correctly created, I went ahead with the pipeline and executed/tested the remaining phases:

  • lint, which runs the code linting
  • prepare, which prepares the instances for the role application; in this case it is just an update of the package sources for the ubuntu group
  • converge, which simply applies the role
  • idempotence, which applies the role a second time to ensure it is idempotent
  • verify, which verifies that the results of the role application match the expectations

Each phase was run with:

molecule <phase> --scenario-name=gcp

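As an illustration, a minimal prepare.yml for the package-source update mentioned above (a sketch, not necessarily the exact playbook from the repo) could be:

```yaml
# prepare.yml (sketch): refresh the apt cache on the ubuntu group before applying the role
- name: Prepare
  hosts: ubuntu
  become: true
  tasks:
    - name: update apt package sources
      apt:
        update_cache: true
```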
In this case, given the simplicity of the role and its limited requirements, I did not have to change much with respect to what Molecule generated when I initialized the scenario.

As a last step, I wrote a destroy.yml playbook for removing the created resources from the project (and from my bill as well 😄). The code for destroying the resources follows the same philosophy as the code that creates them. Obviously, the test was done by issuing:

molecule destroy --scenario-name=gcp
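Following that philosophy, a minimal destroy.yml sketch (assuming the same gcp module_defaults as in create.yml; the repo version also removes the network resources) might be:

```yaml
# destroy.yml (sketch): remove the instances and reset the instance-config
- name: Destroy
  hosts: localhost
  connection: local
  gather_facts: false
  no_log: "{{ molecule_no_log }}"
  tasks:
    - name: destroy molecule instances
      gcp_compute_instance:
        name: "{{ item.name }}"
        zone: "{{ molecule_yml.driver.zone }}"
        state: absent
      loop: "{{ molecule_yml.platforms }}"
      register: server

    # reset the instance-config, as required by the delegated driver contract
    - name: Populate instance config
      set_fact:
        instance_conf: {}

    - name: Dump instance config
      copy:
        content: "{{ instance_conf | to_json | from_json | molecule_to_yaml | molecule_header }}"
        dest: "{{ molecule_instance_config }}"
      when: server.changed | bool
```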

Once all phases were correct and gave no errors I could test the whole end to end process with the command:

molecule test --scenario-name=gcp


In this post, I explained how to use the Molecule delegated driver and showed how I implemented it with GCP. It should be easy to adapt the same code to other cloud providers (AWS, Azure, DigitalOcean…), and I expect you'll benefit from using Molecule for sure. Please give me feedback.

All opinions expressed in this post are my own and not necessarily the views of my current or past employers or their clients.


