Automation Testing with Pytest

Harshil Modi
Published in Tenable TechBlog
7 min read · May 9, 2019

We live in an era where software is adopted very quickly, which puts a lot of stress on software development processes. High adoption rates and faster software delivery can be an intriguing business opportunity; however, they also raise questions about the quality of the software being shipped.

Why automated tests are necessary

Automated testing has many advantages, but here are three major R's:

Reusability: There is no need to write new scripts for every run, or even for a new OS release, unless it becomes necessary.

Reliability: Humans are prone to errors; machines are much less so. Automation is also faster when running repetitive steps/tests that can't be skipped.

Running 24/7: You can start your tests at any time you like, locally or remotely. Nightly runs test your software even while you are asleep.

A mature full-featured Python testing tool — pytest

There is a variety of testing frameworks and tools available currently. Flavors of such frameworks also vary, e.g. data-driven, keyword-driven, hybrid, BDD, etc. You can choose the one that fits your requirements best.

Having said that, Python and pytest take a huge share of this space. Python and its related tools are widely used, probably because they are more approachable for people with little or no programming expertise than other languages.

The pytest framework makes it easy to write small tests, yet scales to support complex functional testing for applications and libraries.

Some of the major features of Pytest:

  • Automatic discovery of test modules and functions
  • Effective CLI for fine-grained control over what you want to run or skip
  • Large third-party plugin ecosystem
  • Fixtures — different types, different scopes
  • Works with the traditional unittest framework
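To illustrate the last point, a plain unittest test case runs under pytest unchanged. Here is a minimal sketch (the class and method names are made up for this example):

```python
import unittest

# A traditional unittest test case; pytest discovers and runs it as-is,
# e.g. with `pytest test_legacy.py` -- no rewrite needed.
class TestStringMethods(unittest.TestCase):
    def test_upper(self):
        self.assertEqual("foo".upper(), "FOO")

    def test_split(self):
        self.assertEqual("a,b".split(","), ["a", "b"])
```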

Automatic and configurable Test Discovery

pytest by default expects to find tests in Python files whose names match test_*.py or *_test.py. It also expects test function names to start with the test_ prefix by default. However, this test discovery protocol can be modified by adding your own settings to one of pytest's configuration files.

# content of pytest.ini
# Example 1: have pytest look for "check" instead of "test"
# can also be defined in tox.ini or setup.cfg file, although the section
# name in setup.cfg files should be "tool:pytest"
[pytest]
python_files = check_*.py
python_classes = Check
python_functions = *_check

With that configuration in place, let's look at a very basic example:

class CheckClass(object):
    def one_check(self):
        x = "this"
        assert 'h' in x

    def two_check(self):
        x = "hello"
        assert hasattr(x, 'check')

Have you noticed something? No fancy assertEqual or assertDictEqual, just a plain and simple assert. There is no need to import special assertion functions for the simple operation of comparing two objects. assert is something Python already offers, so there is no need to reinvent the wheel. 😊
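For example, a plain assert handles any comparison, and pytest's assertion rewriting shows both sides of the comparison when it fails. A small illustrative sketch (the helper function is hypothetical):

```python
def normalize(word):
    # hypothetical helper used only for this illustration
    return word.strip().lower()

def test_normalize():
    # plain asserts -- on failure, pytest reports the actual vs. expected values
    assert normalize("  Hello ") == "hello"
    assert normalize("WORLD") == "world"
```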

Boilerplate? Don't worry, fixtures come to the rescue

Consider test functions that exercise the basic operations of, let's say, a Wallet application:

# test_wallet.py
from wallet import Wallet

def test_default_initial_amount():
    wallet = Wallet()
    assert wallet.balance == 0
    wallet.close()

def test_setting_initial_amount():
    wallet = Wallet(initial_amount=100)
    assert wallet.balance == 100
    wallet.close()

def test_wallet_add_cash():
    wallet = Wallet(initial_amount=10)
    wallet.add_cash(amount=90)
    assert wallet.balance == 100
    wallet.close()

def test_wallet_spend_cash():
    wallet = Wallet(initial_amount=20)
    wallet.spend_cash(amount=10)
    assert wallet.balance == 10
    wallet.close()

Hmm, interesting! Have you noticed? There is a lot of boilerplate. Another thing worth noticing is that each test does a few things apart from testing a piece of functionality, e.g. instantiating a Wallet and also closing it with wallet.close().

Now let's take a look at how we can get rid of the boilerplate with pytest fixtures:

import pytest
from _pytest.fixtures import SubRequest
from wallet import Wallet

# ==================== fixtures

@pytest.fixture
def wallet(request: SubRequest):
    param = getattr(request, 'param', None)
    if param:
        prepared_wallet = Wallet(initial_amount=param[0])
    else:
        prepared_wallet = Wallet()
    yield prepared_wallet
    prepared_wallet.close()

# ==================== tests

def test_default_initial_amount(wallet):
    assert wallet.balance == 0

@pytest.mark.parametrize('wallet', [(100,)], indirect=True)
def test_setting_initial_amount(wallet):
    assert wallet.balance == 100

@pytest.mark.parametrize('wallet', [(10,)], indirect=True)
def test_wallet_add_cash(wallet):
    wallet.add_cash(amount=90)
    assert wallet.balance == 100

@pytest.mark.parametrize('wallet', [(20,)], indirect=True)
def test_wallet_spend_cash(wallet):
    wallet.spend_cash(amount=10)
    assert wallet.balance == 10

Neat, isn't it? The test functions are now slim and do only what they intend to do. Setting up and tearing down (instantiating and closing the Wallet) are taken care of by the wallet fixture. Not only does this help you write reusable pieces of code, it also adds the essence of data separation. If you look carefully, the wallet amount is a piece of test data supplied from outside the test logic, not hard-coded inside the test function.

@pytest.mark.parametrize('wallet', [(10,)], indirect=True)

In a more controlled environment, you can have a test data file, e.g. test-data.ini, in your repository, along with a wrapper that reads it; your test functions can then invoke the wrapper's interface to read test data.
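For instance, such a wrapper could be a thin layer over the standard library's configparser. A hypothetical sketch (the file name and section layout are assumptions; read_string is used here only to keep the example self-contained):

```python
import configparser

# Hypothetical contents of a test-data.ini kept in the repository:
SAMPLE = """
[wallet]
initial_amount = 100
"""

def load_initial_amount(text):
    # In a real suite this would read from disk, e.g. parser.read('test-data.ini')
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return parser.getint("wallet", "initial_amount")
```

A test function would then call the wrapper instead of hard-coding the value.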

However, it is recommended to make your fixtures part of the special conftest.py file. This is a special file in pytest that lets tests discover global fixtures without importing them.
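A minimal conftest.py for the wallet example might look like this (a sketch; the Wallet class below is a stand-in for the real wallet module, which is not shown in this post):

```python
# conftest.py -- fixtures defined here are discovered automatically by
# every test module in this directory and below; no import is needed.
import pytest

class Wallet:  # stand-in for `from wallet import Wallet`
    def __init__(self, initial_amount=0):
        self.balance = initial_amount
    def close(self):
        self.balance = 0

@pytest.fixture
def wallet():
    w = Wallet()   # set up
    yield w        # hand the instance to the test
    w.close()      # tear down, runs even if the test failed
```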

But I have test cases to execute against many different data sets!

No worries, pytest has a cool feature for parameterizing your fixture. Let’s take a look at it with an example.

Let's say your product exposes a CLI interface for managing it locally. It also sets lots of default parameters on startup, and you want to validate the default values of all such parameters.

One can think of writing one test case for each of these settings, but with pytest it's much easier:

@pytest.mark.parametrize("setting_name, setting_value", [
    ('qdb_mem_usage', 'low'),
    ('report_crashes', 'yes'),
    ('stop_download_on_hang', 'no'),
    ('stop_download_on_disconnect', 'no'),
    ('reduce_connections_on_congestion', 'no'),
    ('global.max_web_users', '1024'),
    ('global.max_downloads', '5'),
    ('use_kernel_congestion_detection', 'no'),
    ('log_type', 'normal'),
    ('no_signature_check', 'no'),
    ('disable_xmlrpc', 'no'),
    ('disable_ntp', 'yes'),
    ('ssl_mode', 'tls_1_2'),
])
def test_settings_defaults(self, setting_name, setting_value):
    assert product_shell.run_command(setting_name) == \
        "The current value for '{0}' is '{1}'.".format(setting_name, setting_value), \
        'The {} default should be {}'.format(setting_name, setting_value)

Cool, isn't it? You just wrote 13 test cases (one for each setting), and in the future, if you add a new setting to your product, all you need to do is add one more tuple to the list 😌.
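Parametrize decorators can also be stacked, in which case pytest generates the full cross-product of values. A small sketch with made-up parameter names:

```python
import pytest

# Two stacked decorators: 2 browsers x 2 locales = 4 generated test cases.
@pytest.mark.parametrize("browser", ["chrome", "firefox"])
@pytest.mark.parametrize("locale", ["en", "de"])
def test_login_page(browser, locale):
    assert browser in ("chrome", "firefox")
    assert locale in ("en", "de")
```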

How does it integrate with Selenium UI tests and API tests?

Well, your product can have multiple interfaces: a CLI, like we discussed above, and similarly a GUI and an API. Before deploying your software, it is important to test all of them. In enterprise software, where multiple components are interdependent and coupled, a change in one part may affect the rest.

Remember, pytest is just a framework to facilitate testing, not a specific type of testing. So you can build your GUI tests with Selenium, or your API tests with Python's requests library, for example, and run them with pytest.
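As a sketch of what such an API test might look like (the ApiClient wrapper, the endpoint, and the FakeResponse stub are all hypothetical; in a real suite the transport would wrap something like requests.Session().get):

```python
class FakeResponse:
    # minimal stand-in mimicking the parts of a requests.Response we use
    def __init__(self, status_code, payload):
        self.status_code = status_code
        self._payload = payload
    def json(self):
        return self._payload

class ApiClient:
    def __init__(self, transport):
        # transport is any callable taking a path; injecting it keeps the
        # test fast and free of real network calls
        self._transport = transport
    def get_user(self, user_id):
        return self._transport("/users/{}".format(user_id))

def test_get_user_ok():
    client = ApiClient(lambda path: FakeResponse(200, {"id": 7}))
    response = client.get_user(7)
    assert response.status_code == 200
    assert response.json()["id"] == 7
```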

For example, at a high level this could be your test repository structure.

Such a layout gives a good separation of components:

  • apiobjects: A good place for creating wrappers that invoke API endpoints. You can have a BaseAPIObject and derived classes to match your requirements.
  • helpers: Write your helper methods here.
  • lib: Library files that can be used by different components, e.g. your fixtures in conftest, pageobjects, etc.
  • pageobjects: The PageObjects design pattern can be used for creating classes for different GUI pages. We at Tenable use Webium, a Page Object pattern implementation library for Python.
  • suites: You can write your pylint code verification suites here; they help build confidence in your code quality.
  • tests: You can have categorized test directories based on the flavor of your tests. It makes it easy to manage and explore them.

This is just for reference; the repository structure and dependencies can be laid out to match your needs.

I have plenty of test cases, want to run them in parallel 😉

You may have plenty of test cases in your test suite, and there may be times when you would like to run test cases in parallel and reduce overall test execution time.

Pytest offers an awesome plugin for running tests in parallel named pytest-xdist, which extends pytest with some unique execution modes. Install this plugin using pip:

pip install pytest-xdist

Let’s explore it quickly with an example.

I have an automation test repository, CloudApp, for my GUI tests using Selenium. It is constantly growing with new test cases and now has hundreds of tests. What I would like to do is run them in parallel and trim down my test execution time.

In the terminal, just typing pytest in the project root or tests folder executes all tests. To spread them across two parallel workers instead, pass the -n option:

pytest -s -v -n=2

pytest-xdist with tests running in parallel

This can also help you to run tests on multiple browsers in parallel.

Reporting

Pytest comes with built-in support for creating result files that can be read by Jenkins, Bamboo, or other continuous integration servers. Use this invocation:

pytest test/file/path --junitxml=path

This generates JUnit-style XML output, which the parsers of many CI systems can interpret.

Conclusion

Pytest's popularity is growing every year. It also has wide community support, which gives you access to a lot of extensions, e.g. pytest-django, which helps you write integration tests for your Django web apps. Remember, pytest supports running unittest test cases, so if you are using unittest, pytest is worth considering for the future. 😊

Resources

https://docs.pytest.org/en/latest/

http://pythontesting.net/framework/pytest/pytest-introduction/
