How to write a thousand autotests in a couple of days

Mikhail Vasin · IT’s Tinkoff · Jun 24, 2024

Originally written by Vladimir Vasyaev

My name is Uncle Vova. I’m a lead test automation engineer and an unwavering fan of Robot Framework. I have even contributed to its source code and sometimes help newbies in the tool’s official Slack channel.

But, as I mentioned in one of my articles, it has one disadvantage compared to pytest: its test parameterisation is limited. To be fair, Robot Framework has an add-on that lets you generate tests from an external table, but that is not quite what we’re looking for.

Many people have asked me how exactly parametric generation of autotests is done. In this article I will answer that question.

Sample Project

Let’s assume our project has a class that depends on other classes. For simplicity, let’s take an example from a school genetics course: the pea case.

Peas can have a color (Color) and a type (Kind):

from colors import Color
from kinds import Kind


class Peas:
    color: Color
    kind: Kind

    def __init__(self, color: Color, kind: Kind):
        self.color = color
        self.kind = kind

    def __str__(self):
        return f"{self.color.name().capitalize()} {self.kind.name()} peas."

Let’s keep it simple: the class overrides the __str__() method to return full information about the properties of the pea variety. This will be quite enough for illustration.

Let’s add a base class Color and inherit the concrete colors Green and Yellow from it:

class Color:
    color_name: str = "unknown color"

    def name(self) -> str:
        return self.color_name


class Green(Color):
    color_name = "green"


class Yellow(Color):
    color_name = "yellow"

Now let’s describe the base kind and inherit the concrete kinds from it: Smooth and Brain (i.e. wrinkled, also known as marrowfat):

class Kind:
    kind_name: str = "unknown type"

    def name(self) -> str:
        return self.kind_name


class Smooth(Kind):
    kind_name = "smooth"


class Brain(Kind):
    kind_name = "brain"

Basic test

Let’s focus on the method that outputs the variety information. To cover all the variants, we can write four simple tests:

from colors import Green, Yellow
from kinds import Smooth, Brain
from peas import Peas


def test_green_smooth_peas():
    peas = Peas(Green(), Smooth())
    assert str(peas) == "Green smooth peas."


def test_yellow_smooth_peas():
    peas = Peas(Yellow(), Smooth())
    assert str(peas) == "Yellow smooth peas."


def test_green_brain_peas():
    peas = Peas(Green(), Brain())
    assert str(peas) == "Green brain peas."


def test_yellow_brain_peas():
    peas = Peas(Yellow(), Brain())
    assert str(peas) == "Yellow brain peas."

This all works great, but I live by the principle, “If you write something twice, you’re doing something wrong!”

Test parameterization

Let’s move the color and kind into parameters and generate the parameter sets as the Cartesian product (itertools.product()) of two lists.
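If product() is unfamiliar, it simply yields every combination of its input iterables, e.g.:

from itertools import product

list(product([1, 2], ["a", "b"]))
# [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')]

Here is the parameterised test module: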

from itertools import product
from typing import Tuple

from pytest import mark

from colors import Color, Yellow, Green
from kinds import Kind, Smooth, Brain
from peas import Peas

colors = [(Yellow(), "Yellow"), (Green(), "Green")]
kinds = [(Smooth(), "smooth"), (Brain(), "brain")]  # lowercase: __str__ does not capitalise the kind
peas_word = "peas"

sets = list(product(colors, kinds))


@mark.parametrize("color_info,kind_info", sets)
def test_peas_str(color_info: Tuple[Color, str], kind_info: Tuple[Kind, str]):
    color, color_str = color_info
    kind, kind_str = kind_info
    peas = Peas(color, kind)
    assert str(peas) == f"{color_str} {kind_str} {peas_word}."

Now we don’t duplicate anything, and we still have the same four tests.

If we extend the functionality (for example, by adding a new black color), we don’t have to write much more code. And remember, we are greatly simplifying here compared to real testing of real products.

Let’s add the color black:

class Black(Color):
    color_name = "black"

Now, to increase the number of tests to six, all you need to do is change the list of colors in the tests module:

colors = [(Yellow(), "Yellow"), (Green(), "Green"), (Black(), "Black")]

I hope the idea is clear: the class can expand in all directions, and you don’t need to add hundreds of tests by hand.
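A quick way to verify the count without running anything is pytest’s collect-only mode (assuming the tests live in test_peas.py); the output ends with something like:

$ pytest --collect-only -q test_peas.py
...
6 tests collected in 0.01s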

Case names

Perhaps the first thing you notice when running these tests is how unhelpful the case names are.
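Because the parameters are tuples, pytest cannot derive readable ids and falls back to argname-plus-index ones, roughly:

test_peas_str[color_info0-kind_info0]
test_peas_str[color_info0-kind_info1]
test_peas_str[color_info1-kind_info0]
test_peas_str[color_info1-kind_info1]
test_peas_str[color_info2-kind_info0]
test_peas_str[color_info2-kind_info1]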

To fix this nuisance, we generate a name for each case and pass it to the parameterisation, building it from the qualified names of the classes under test:

test_names = [
    f"{params[0][0].__class__.__qualname__} - {params[1][0].__class__.__qualname__}"
    for params in sets
]

Now let’s pass these names to the parameterisation:

@mark.parametrize("color_info,kind_info", sets, ids=test_names)

Quite a different picture now.
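The report now shows readable ids, roughly:

test_peas_str[Yellow - Smooth]
test_peas_str[Yellow - Brain]
test_peas_str[Green - Smooth]
test_peas_str[Green - Brain]
test_peas_str[Black - Smooth]
test_peas_str[Black - Brain]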

Extended test generation

The best is the enemy of the good, which is why using tuples as a crutch bothers me. It is also a good occasion to talk about extended test generation via a separate method.

Now we will get rid of tuples in the parameters of the test itself and put the strings into separate parameters.

pytest has a reserved method name, pytest_generate_tests. This method is called once for each test in the module:

def pytest_generate_tests(metafunc):
    args = []
    names = []
    for color_info, kind_info in product(colors, kinds):
        color, color_str = color_info
        kind, kind_str = kind_info
        args.append([color, color_str, kind, kind_str])
        names.append(f"{color.__class__.__qualname__} - {kind.__class__.__qualname__}")
    metafunc.parametrize("color,color_str,kind,kind_str", args, ids=names)


def test_peas_str(color: Color, color_str: str, kind: Kind, kind_str: str):
    peas = Peas(color, kind)
    assert str(peas) == f"{color_str} {kind_str} {peas_word}."

It is essentially equivalent to the version above, just a bit more readable. I am sure you will meet cases in your practice where this approach is necessary.

When advanced test generation is necessary

Sometimes developers, tired of the issues that pour out as if from a cornucopia while autotests are being written, ask me to disable certain cases so that the build does not fail, since they already know those problems exist.

To be honest, this is not the best practice, and you should not adopt it. But it is a good occasion to show why the “extended” generation approach is sometimes indispensable and cannot be replicated with the usual @mark.parametrize.

If you haven’t yet come across how to make a failing test gray instead of red (it doesn’t fail the build, but it does flag the issue), it’s very simple: use @mark.xfail. If needed, you can pass the issue id from your bug tracker as the reason parameter.
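As a standalone illustration (the issue id PROJ-123 is made up):

from pytest import mark


# A test that is known to fail; it will be reported as xfailed, not failed
@mark.xfail(reason="PROJ-123: not implemented yet")
def test_known_bug():
    assert 2 + 2 == 5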

So, let’s assume that according to the technical specification we should also have a purple pea color, but the developers made a mistake, and the tests for it fail. The developers ask us to keep the purple tests running without failing the build.
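The article does not show the Purple class itself; presumably it is defined like the other colors and added to the list:

class Purple(Color):
    color_name = "purple"

colors = [(Yellow(), "Yellow"), (Green(), "Green"), (Black(), "Black"), (Purple(), "Purple")]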

We just need to add a condition to determine whether a case should be grayed out. Let’s change the code that generates the arguments:

params = [color, color_str, kind, kind_str]
args.append(
    params if not isinstance(color, Purple)
    else pytest.param(*params, marks=pytest.mark.xfail(reason="Not implemented yet."))
)

The entire method will look like this:

import pytest  # needed for pytest.param and pytest.mark.xfail


def pytest_generate_tests(metafunc):
    args = []
    names = []
    for color_info, kind_info in product(colors, kinds):
        color, color_str = color_info
        kind, kind_str = kind_info
        params = [color, color_str, kind, kind_str]
        args.append(
            params if not isinstance(color, Purple)
            else pytest.param(*params, marks=pytest.mark.xfail(reason="Not implemented yet."))
        )
        names.append(f"{color.__class__.__qualname__} - {kind.__class__.__qualname__}")
    metafunc.parametrize("color,color_str,kind,kind_str", args, ids=names)

And in the execution results these cases will be grayed out and won’t fail the build in CI.
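With four colors and two kinds, the short report would read something like:

6 passed, 2 xfailed in 0.04s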

When advanced test generation is also needed

If you need to write a base class that uses parametric tests, you may need to pass properties of this class into the parameters.

The familiar @mark.parametrize("param", self.params) approach won’t work here, because self is not yet known at parameterisation time. So we come back to pytest_generate_tests, this time adding it to the class:

def pytest_generate_tests(self, metafunc):
    metafunc.parametrize("param", self.params)

This approach works.
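Here is a minimal sketch of such a base class (the class and test names are mine, not from the original project):

from typing import List


class BasePeasTest:
    # Subclasses fill params; the hook parameterises every test in the class
    params: List[str] = []

    def pytest_generate_tests(self, metafunc):
        metafunc.parametrize("param", self.params)

    def test_param_is_not_empty(self, param: str):
        assert param


class TestColorNames(BasePeasTest):
    params = ["green", "yellow", "black"]

pytest collects only TestColorNames (the base class name does not start with Test), so the inherited test runs once per entry in params.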

One more little trick. If you need to generate parameters only for a specific test, or to organise generation differently for different tests inside pytest_generate_tests (remember, it runs separately for each method whose name starts with test_), you can use its metafunc parameter: metafunc.function.__name__ holds the name of the test. For example,

from typing import List


class TestExample:
    params: List[str] = ["I", "like", "python"]

    def pytest_generate_tests(self, metafunc):
        # Parameterise only the matching test
        if metafunc.function.__name__ == "test_for_generate":
            metafunc.parametrize("param", self.params)

    def test_for_generate(self, param: str):
        print(param)

    def test_not_for_generate(self):
        # No "param" argument here: a non-generated test that still asked
        # for "param" would fail with "fixture 'param' not found"
        print("not generated")

…will generate tests for test_for_generate, but will not do so for test_not_for_generate.
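Collection would look roughly like this:

TestExample::test_for_generate[I]
TestExample::test_for_generate[like]
TestExample::test_for_generate[python]
TestExample::test_not_for_generate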

Conclusion

In my current project, there is a service that combines different filters and segments, each of which is a separate class, so pairwise testing could give blurred results: it would not be clear which class contains an error. By generating tests, I can achieve 100% coverage with tests that give clear results.

In total, there are almost 11,000 tests in this service at the moment. I can’t imagine how I would cover it all without the generation described here.

Wish you all green tests and 100% coverage!
