My first automated workflow from GitHub to Galaxy

How to write and publish your Ansible collection with modules

George Shuklin
OpsOps
Jul 23, 2020


It’s my first success. I can’t say if it’s ‘best practice’ or not. It works and I love it.

Goal: automated CI with sanity, unit, and integration tests for an Ansible module, with automatic publication to Galaxy when a release is ready.

Result: https://github.com/amarao/collection_ip/blob/master/.github/workflows/ansible-test.yaml

What’s inside?

The main idea there is that ‘publish or not’ is decided by a match between the content of galaxy.yml and a git tag. If the version in galaxy.yml is 1.2.3 and the tag is 1.2.3, then we upload to Galaxy (if tests are ok). If there is a mismatch, no upload, just testing.

My workflow implements this.

Extracting the version (GitHub Actions CI):

outputs:
  version: ${{ steps.version.outputs.version }}
steps:
  ...
  - name: Extract version
    id: version
    run: |
      version=$(grep '^version' galaxy.yml | awk '{print $2}')
      echo "::set-output name=version::$version"
    working-directory: ansible_collections/amarao/ip
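For the github.ref comparison below to ever match, the workflow has to be triggered by tag pushes as well as ordinary pushes. A minimal trigger for that (a sketch, assuming you want to test every push and every tag) could be:

on:
  push:
    branches: ['*']
    tags: ['*']

When you push a tag 1.2.3, github.ref becomes refs/tags/1.2.3, so endsWith(github.ref, '1.2.3') is true; on a branch push it is refs/heads/<branch>, and the match normally fails.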

A condition on the operation (artifact upload):

- name: Save artifact
  if: ${{ endsWith(github.ref, steps.version.outputs.version) }}
  ...

and the same for the publish job:

publish:
  runs-on: ubuntu-latest
  needs: build
  if: ${{ endsWith(github.ref, needs.build.outputs.version) }}

It works like magic. There is a tag and the tests passed? Upload! There is no tag, or the tag does not match galaxy.yml? Skip!
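The steps of the publish job are elided above; a minimal sketch of what they could look like (assuming the build job saved the collection tarball as an artifact named ‘collection’ and your Galaxy API key lives in a GALAXY_API_KEY secret; both names are my illustration, not from the workflow):

steps:
  - name: Download artifact
    uses: actions/download-artifact@v2
    with:
      name: collection
  - name: Publish to Galaxy
    # the tarball name follows the namespace-name-version.tar.gz convention
    run: ansible-galaxy collection publish amarao-ip-*.tar.gz --api-key "${{ secrets.GALAXY_API_KEY }}"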

That was the CI grind. Now, more on ansible-test.

Sanity, units and integration

ansible-test was originally designed as Ansible’s testing utility (that is, the utility to test Ansible itself during its development). It’s heavily tied to Ansible’s existing CI setup (which uses shippable.io) and carries tons of assumptions. Recently it was adapted for collection testing, but it’s still really, really crude.

First: it requires a very specific path to run on a collection; the path must contain ‘ansible_collections’ (see the checkout sketch after this list).

Second: it does not allow you to use arbitrary ‘remotes’ (the remote option is almost exclusively dedicated to the Shippable integration).

Third: it needs tons of Python packages which are almost impossible to set up with the proper versions by hand. Luckily, there is a --requirements option for ansible-test to pip-install them.
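In CI, the path requirement means checking the repository out into a matching directory. A sketch with actions/checkout (the path mirrors this collection’s namespace and name):

- uses: actions/checkout@v2
  with:
    path: ansible_collections/amarao/ip

All later ansible-test steps then run with working-directory pointing at that path, as in the version extraction step above.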

Nevertheless, it gives you beautiful sanity tests for your modules, with tons of super-strict checks you’d better listen to.
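A typical sanity run is a single command from the collection directory (a sketch; ‘default’ is ansible-test’s standard test container):

# inside a container, with dependencies provided by the image
ansible-test sanity --docker default -v
# or on the local machine, pip-installing what's needed first
ansible-test sanity --local --requirements -v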

Also, it allows you to use Docker and local execution for tests, which solves the CI-vs-local-machine dilemma. The combination of --docker and --docker-privileged with Docker on a disposable VM (libvirt) is a really good way to test things that require root (like my modules for managing network interfaces and their properties). And --local is really good for GitLab CI disposable VMs.
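For example (a sketch; ‘some_target’ stands for a directory under tests/integration/targets/ and is not a real name from the repository):

# privileged container, e.g. with Docker on a disposable libvirt VM
ansible-test integration --docker default --docker-privileged -v some_target
# plain local execution, e.g. on a GitLab CI disposable VM
ansible-test integration --local --requirements -v some_target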

On top of that, it’s used by Ansible itself and by community.general, which opens a smooth transition to ‘upstream’ (if we can still call it that).

Lastly, it gives you the best framework for unit tests. It’s really, really hard to write good unit tests for a module because of the ‘AnsiballZ’ transformation a module undergoes on its way to a remote system; ansible-test units solves this. Moreover, it can theoretically test a module against multiple Python versions.
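Running the unit tests is the same kind of one-liner (a sketch; the Python version is just an example):

ansible-test units --docker default --python 3.8 -v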

The repository contains my findings on testing. Pay attention to the names, the directories, and the __init__.py files.
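For orientation, the layout ansible-test expects is roughly this (a sketch of the standard collection layout with placeholder names, not an exact listing of the repository):

ansible_collections/amarao/ip/
├── galaxy.yml
├── plugins/
│   └── modules/
│       └── <module>.py
└── tests/
    ├── unit/
    │   └── plugins/
    │       └── modules/
    │           ├── __init__.py
    │           └── test_<module>.py
    └── integration/
        └── targets/
            └── <target>/
                └── tasks/
                    └── main.yml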

It’s still in a “work in progress” state, so there is a 99% chance of a broken HEAD at any given moment. Nevertheless, it’s a working collection-module pipeline (even if some modules have bugs).
