Writing tests for docs for tests

Wut? It’s less crazy than you think

George Shuklin
OpsOps
2 min read · May 4, 2021


We’ve had heated internal debates about the necessity of running QA tests at the end of provisioning on production servers. Should we run infra tests at the end of provisioning or not? I won’t discuss all the details here (they are tightly linked to business processes), but there was one argument which was very sound and rather general.

What if a QA test fails? What should operators do? Is there documentation for those tests? How can you be sure there won’t be obscure errors at the end of provisioning that cause delivery stalls and ambiguity for operators?

Part of the answer: make QA tests tightly scoped, with well-known solutions and documentation. That’s a ‘best effort’ approach, which is always welcome, but never enough.

Can we do better? It turns out there is one little thing we can delegate to robots.

Testing if there is documentation for the test

I can’t use robots to check if docs are sane and helpful (at least not at the current level of pseudo-AI toys we have for now), but I definitely can check whether EVERY test in QA is mentioned in the documentation.

This little test assures that if someone adds a new test to the QA stage and forgets to write an explanation and instructions for it, that test won’t pass CI. A red mark in the review (from a robot), sympathetically nodding reviewers, a new commit for the docs, and the issue is solved before being delivered. Docs are a little less rotten. People are a bit less miserable.

That’s the general idea. Now, to the implementation details.

Pytest test discovery

QA tests are in the qa/ folder. They are written in testinfra. CI tests are in a different place. QA docs are in the docs/qa.md file.
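
Roughly, the layout looks like this (the test file name is illustrative):

.
├── qa/                # QA tests, written in testinfra
│   └── test_cpu.py
└── docs/
    └── qa.md          # one ‘## <test>’ header per QA test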

We run all tests from the root of the project. The goal is to check that every test from the qa/ folder has a corresponding header in docs/qa.md.

The key issue here is that we can have more than one test in a test file. We need to check that every test is mentioned in the headers, not every ‘test file’.
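
For example (a hypothetical file; the second test name is made up for illustration), one file can yield several node ids, and each of them needs its own docs header:

# qa/test_cpu.py — two tests in one file means two node ids to document:
#   qa/test_cpu.py::test_no_hyperthreading
#   qa/test_cpu.py::test_turbo_boost_enabled

def test_no_hyperthreading(host):
    ...

def test_turbo_boost_enabled(host):
    ...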

So, we need to collect tests. My first attempt used pytest through a subprocess and output parsing, but it was inelegant and fragile. And slow.
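
A minimal sketch of what that subprocess approach looks like (not the original code):

import subprocess

def qa_tests_via_subprocess():
    # Run pytest in collect-only mode and scrape its console output.
    # Fragile: the output format is not a stable interface, and spawning
    # a fresh interpreter just for collection is slow.
    result = subprocess.run(
        ["pytest", "--collect-only", "-q", "qa/"],
        capture_output=True,
        text=True,
    )
    # With -q, collected node ids are printed one per line,
    # followed by a summary; keep only the node id lines.
    return [line for line in result.stdout.splitlines() if "::" in line]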

My second attempt is mostly based on code from this SO answer:

import pytest
import os


class NodeidsCollector:
    # A tiny pytest plugin: remembers the node id of every collected test.
    def pytest_collection_modifyitems(self, items):
        self.nodeids = [item.nodeid for item in items]


def qa_tests():
    # Collect (without running) all tests under qa/ and return their node
    # ids with the directory stripped,
    # e.g. 'test_cpu.py::test_no_hyperthreading'.
    collector = NodeidsCollector()
    pytest.main(
        ["--collect-only", "qa/"],
        plugins=[collector]
    )
    return list(map(os.path.basename, collector.nodeids))


@pytest.mark.parametrize("testname", qa_tests())
def test_all_qa_tests_has_docs(testname):
    # Every QA test must have a matching '## <node id>' header in the docs.
    assert f'## {testname}' in open('docs/qa.md').read()

The expected docs format is

## test_cpu.py::test_no_hyperthreading
(explanation of the test_no_hyperthreading test here, and what to do if it fails)

Caveat

It’s really important to have all extra plugins disabled during collection. That means no testinfra, etc.

The best way I found to do so is to run the qa_docs stage in CI with the PYTEST_DISABLE_PLUGIN_AUTOLOAD=1 environment variable.
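
For example, in GitLab CI (an assumption about the CI system; the job definition and script path are illustrative):

qa_docs:
  stage: test
  variables:
    # Disable testinfra and any other auto-loaded pytest plugins.
    PYTEST_DISABLE_PLUGIN_AUTOLOAD: "1"
  script:
    - pytest tests/test_qa_docs.py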

Conclusion

Taking this article into account, I just wrote documentation for a test for documentation for tests.

Should I stop here?
