Test your Financial Model

Jeroen Bouma
4 min read · Mar 5, 2024


As defined in Setting up your Project, the model should always include a tests folder, which uses Pytest to run the tests. The tests folder mirrors the structure of the model itself; the only difference is that each individual module is prefixed with test_ so that Pytest can recognize the file as a test.
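
As an illustration, a model and its mirrored tests folder could be laid out as follows (the folder and file names here are placeholders, not the exact names from Setting up your Project):

```
financial_model/
    profitability_model.py
    solvency_model.py
tests/
    conftest.py
    test_profitability_model.py
    test_solvency_model.py
```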

This guide is part of a series related to building financial models:

As an example, when the Gross Margin functionality from profitability_model.py needs to be tested, a test function should be created with the same name as the function in the profitability_model.py file, prefixed with test_. This looks like the following:
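
A minimal sketch of such a test is shown below. The import path, the sample data, and the recorder fixture (defined in conftest.py, discussed later) are illustrative assumptions rather than the exact code from Structure your Model.

```python
import pandas as pd

from financial_model import profitability_model  # hypothetical import path


def test_get_gross_margin(recorder):
    """Capture the Gross Margin output so it can be compared to the stored CSV."""
    revenue = pd.Series([100.0, 200.0, 300.0])
    cost_of_goods_sold = pd.Series([40.0, 90.0, 120.0])

    # The recorder fixture (defined in conftest.py) writes the result to CSV in
    # rewrite mode and otherwise compares it against the previously stored file.
    recorder.capture(
        profitability_model.get_gross_margin(revenue, cost_of_goods_sold)
    )
```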

This is a test that would be created for the function that was defined in Structure your Model. Graphically, the process looks as follows:

The process above shows that the data is saved to CSV on rewrite. This means that if you reinitialise the test, it will overwrite the existing data. This is relevant, for example, when you have identified that the output needs to be adjusted.

To accompany this process, a conftest.py file is included in the tests folder. Working with a conftest.py can be quite a difficult concept to understand at first, so I'd recommend acquiring the conftest.py as found here to automatically be able to execute tests and write their output to the relevant file (read more here).
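
To give an idea of the mechanism, the sketch below is a simplified version of such a conftest.py: it registers a --record-mode option and exposes a recorder fixture that writes results to CSV in rewrite mode and compares them against the stored baseline otherwise. Treat it as a minimal sketch of the idea rather than the actual FinanceToolkit conftest.py; the fixture name and CSV layout are assumptions based on this article.

```python
# conftest.py -- simplified sketch of a CSV-recording fixture
from pathlib import Path

import pandas as pd
import pytest


def pytest_addoption(parser):
    # Register a --record-mode flag so baselines can be rewritten on demand.
    parser.addoption("--record-mode", action="store", default="none")


@pytest.fixture
def recorder(request):
    record_mode = request.config.getoption("--record-mode")
    csv_path = Path(request.fspath).parent / "csv" / f"{request.node.name}.csv"

    class Recorder:
        def capture(self, result):
            frame = pd.DataFrame(result)
            frame.columns = frame.columns.astype(str)

            if record_mode == "rewrite" or not csv_path.exists():
                # Write (or overwrite) the baseline CSV for this test.
                csv_path.parent.mkdir(parents=True, exist_ok=True)
                frame.to_csv(csv_path)
            else:
                # Compare the new result against the previously stored baseline.
                baseline = pd.read_csv(csv_path, index_col=0)
                pd.testing.assert_frame_equal(frame, baseline, check_dtype=False)

    return Recorder()
```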

To see this in action, download or clone the FinanceToolkit repository and install the required dependencies as follows:

  1. Install pipx: pip install pipx
  2. Install Poetry: pipx install poetry
  3. Install the dependencies: poetry install

You can run the tests by executing pytest tests. This runs all tests defined in the tests folder, which shows the following:

Within the conftest.py I have defined ways to rewrite the tests when there are differences that I can explain. This is done by defining the record-mode, which can be set to rewrite when the tests need to be redefined. Any output is stored in individual data files. This is a very powerful way to ensure that the results remain the same and, if they do not, to verify whether the changes are correct. This can be done through:
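
For example, assuming the record-mode option is exposed as a pytest command-line flag (as in the conftest.py sketch above):

```
pytest tests --record-mode=rewrite
```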

Or for individual tests:
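
For example, using pytest's built-in -k selector to target a single test with the same assumed flag:

```
pytest tests -k test_get_gross_margin --record-mode=rewrite
```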

This generates CSV files (tests/ratios/csv/test_get_gross_margin.csv) that are used to compare the result the next time the test is run. So if any changes are made to the calculation, it will automatically show me the difference between the two results, and only if I (or a tester) can validate that the new result is correct should the test be rewritten.

For example, if I change the Gross Margin formula from (revenue - cost_of_goods_sold) / revenue to (revenue - cost_of_goods_sold) / revenue - 1 (which is incorrect), the test will fail and show the following:
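
To see why the test fails, compare the two formulas with some illustrative numbers (these values are mine, not from the article's dataset):

```python
revenue, cost_of_goods_sold = 100, 40  # illustrative values, not actual data

correct = (revenue - cost_of_goods_sold) / revenue        # 0.6
incorrect = (revenue - cost_of_goods_sold) / revenue - 1  # -0.4
```

The stored CSV baseline still holds the results of the original formula, so the comparison in the recorder fixture surfaces the difference immediately.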


Jeroen Bouma

With experience and education in the area of Quantitative Finance, my ambition is to continuously improve in Quant Finance and Python programming.