How to build a Gitlab CI pipeline for Python tests reporting to Allure TestOps
As a QA engineer at Exness, creating and maintaining CI pipelines for tests is a part of my day-to-day job. While it’s become routine for me now, writing my first Gitlab CI pipeline involved a lot of googling for answers.
I’ve created this guide to share my experience and hopefully help you with writing your own.
It explains code, dependencies, and environment variables used in a pipeline. After reading, you should be able to create a functional Gitlab CI pipeline that runs Python tests and reports results to Allure TestOps. This guide should answer any questions you might have along the way.
I hope you’ll have fun writing your first pipeline.
The issue we need to solve here is quite simple: we have tests written in Python with pytest, and pipenv is used as the dependency manager.
- We need to run our tests in Gitlab CI.
- The test results should be collected and reported to Allure TestOps.
A brief look at the test code
Let’s take a look at how we are going to install dependencies and run tests in CI.
In our code we use pipenv as a dependency manager. This means there are Pipfile and Pipfile.lock files that store information about the dependencies used in the project.
The minimum packages declared in our Pipfile are:
[packages]
pytest = "*"
allure-pytest = "*"
Here pytest is our test framework, and allure-pytest is a package that generates the test report we are going to send to Allure TestOps.
In addition to defining dependencies, we can add scripts to the [scripts] section of the Pipfile. It’s super handy for running tests, linters, or any other useful scripts.
In our case the [scripts] section contains a tests_ci script that we will use in the pipeline later.
[scripts]
tests_ci = "pytest --alluredir=allure_results"
As you can see, the only difference from just running pytest is the --alluredir=allure_results option. It tells the allure-pytest package to generate report files and save them in the allure_results folder.
In CI we are going to install pipenv and the Python dependencies using the following commands:
- pip install pipenv
- pipenv sync
pipenv sync will install all dependencies from Pipfile.lock. As a result, we can be sure we get a reproducible environment.
And this is the command we are going to use to run tests:
- pipenv run tests_ci
How to tell Gitlab CI to run a pipeline?
Fortunately, it is pretty straightforward: all you need to do is create a .gitlab-ci.yml file in the root of your repo! Gitlab will check the file’s contents and run a pipeline.
A CI script is a set of instructions describing what should be done when a pipeline runs, when to do it, and how.
A pipeline consists of stages and jobs described within those stages. In our case, we will only have one stage and one job to run our tests.
Here is what we can write so far:
stages:
  - tests

test:
  stage: tests
  script:
    - pip install pipenv
    - pipenv sync
    - pipenv run tests_ci
While this script describes that we need to install dependencies and run tests, it is not enough. We also need to tell Gitlab where to run our code.
Where tests will run: the runner and the tag
The tags keyword in .gitlab-ci.yml expects a list of tags; the job will run on a runner that has those tags.
We already have a runner with the tag test_tag that we are going to use in our pipeline:
tags:
  - test_tag
In case you don’t have your own runner set up, you can use shared Gitlab CI runners. In the Gitlab web interface, go to Settings -> CI/CD -> Runners -> Expand to choose one.
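If the shared runners on your instance pick up untagged jobs (the usual setup on gitlab.com), you can simply omit the tags keyword; otherwise, point it at whatever tag your shared runners advertise. A minimal sketch, where the tag name is purely illustrative:
test:
  stage: tests
  tags:
    - docker   # illustrative tag name; check what your shared runners actually expose
  script:
    - pipenv run tests_ci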
Docker image for tests
Now that we know where our tests will run, it’s time to pick a Docker image! The image keyword is exactly for that.
We are going to simply use python:3.10, as it fits our needs perfectly. In a Gitlab script, it would look like this:
image: python:3.10
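If image size or pull time matters, the slim variant of the same image is usually enough, as long as your dependencies don’t need extra system packages; the rest of this guide sticks with python:3.10.
image: python:3.10-slim   # smaller official image; verify your dependencies install cleanly on it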
Let’s see what the updated version of the script looks like:
stages:
  - tests

test:
  image: python:3.10
  tags:
    - test_tag
  stage: tests
  script:
    - pip install pipenv
    - pipenv sync
    - pipenv run tests_ci
This script should already be working!
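As an optional speed-up, you could also cache the pipenv virtual environment between runs, keyed on Pipfile.lock. This is only a sketch: it assumes you let pipenv create the virtualenv inside the project by setting PIPENV_VENV_IN_PROJECT, which is not required for the rest of the guide.
variables:
  PIPENV_VENV_IN_PROJECT: "1"    # make pipenv create .venv inside the repo so it can be cached

test:
  image: python:3.10
  cache:
    key:
      files:
        - Pipfile.lock           # reuse the cache only while the lock file is unchanged
    paths:
      - .venv/
  script:
    - pip install pipenv
    - pipenv sync
    - pipenv run tests_ci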
Send report to Allure TestOps
We are almost there. Let’s find out what we need to post test results to Allure TestOps.
1. The report itself.
Fortunately, it is already generated by the allure-pytest package.
2. allurectl — the command-line tool for Allure TestOps that will upload the report.
We are going to download the latest version each time the job runs:
- wget https://github.com/allure-framework/allurectl/releases/latest/download/allurectl_linux_386 -O /usr/bin/allurectl
- chmod +x /usr/bin/allurectl
Alternatively, it is possible to create an image with allurectl preinstalled, or to keep the tool somewhere in the project.
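If you prefer reproducible builds over always taking the latest tool, you could pin a specific release instead. A sketch with an assumed ALLURECTL_VERSION variable that you would set yourself to an exact tag from the allurectl releases page:
# ALLURECTL_VERSION is an assumed variable; set it to an exact tag from the allurectl releases page
- wget "https://github.com/allure-framework/allurectl/releases/download/${ALLURECTL_VERSION}/allurectl_linux_386" -O /usr/bin/allurectl
- chmod +x /usr/bin/allurectl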
To upload the test results we will use the following command:
- allurectl upload $TEST_REPORT_DIR
where $TEST_REPORT_DIR is the folder with the test report.
3. Some environment variables set in the pipeline:
variables:
  ALLURE_LAUNCH_NAME: "${CI_PROJECT_NAME}_${CI_JOB_ID}"
  ALLURE_JOB_RUN_UID: "${CI_JOB_ID}"
ALLURE_LAUNCH_NAME — the name of the test run in Allure TestOps. We build it from the Gitlab project name and the unique job id.
ALLURE_JOB_RUN_UID — a job run identifier inside Allure. We define it as CI_JOB_ID, a unique identifier that Gitlab generates for every job run.
Defining a unique ALLURE_JOB_RUN_UID is optional. If it is not set in CI, test reports from several jobs in one pipeline will be combined into one Allure report. If ALLURE_JOB_RUN_UID is unique, each job inside one pipeline will have a separate test run in Allure TestOps.
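A small sketch of both options, matching the behavior described above:
# One Allure launch per job: give every job a unique UID
variables:
  ALLURE_JOB_RUN_UID: "${CI_JOB_ID}"

# One combined launch for all jobs in the pipeline: simply leave ALLURE_JOB_RUN_UID unset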
4. Several environment variables set in the Gitlab project settings.
To set environment variables for the project, go to Settings -> CI/CD -> Variables -> Expand.
ALLURE_ENDPOINT="https://your_allure_testops.com"
ALLURE_PROJECT_ID=33
ALLURE_TOKEN="very_secure_token"
ALLURE_ENDPOINT — the Allure TestOps URL.
ALLURE_PROJECT_ID — the Allure project identifier. The test run will appear in the project specified here. The ID can be found in the Allure URL: in https://your_allure_testops.com/project/33/dashboards it is 33.
ALLURE_TOKEN — the token that allows posting data to Allure TestOps. It can be created in Allure.
To create an Allure TestOps token, go to the Allure TestOps web interface, click the user avatar in the lower left corner of the screen -> Your profile -> API tokens -> Create.
Why is it better to store some environment variables in project settings and not in the pipeline script?
- There is no need to change them often.
- They contain sensitive data: passwords, tokens and so on. Storing them in Project settings allows you to mark them as “masked”, and job logs won’t contain this data.
- If variables are stored at the group level, they can be reused in different projects.
Let’s take a look at a few more things before creating the final .gitlab-ci.yml script.
rules and allow_failure
A Gitlab job can contain rules that define how and when the job is triggered.
rules:
  - if: '($CI_PIPELINE_SOURCE == "push") || ($CI_PIPELINE_SOURCE == "merge_request_event")'
    when: manual
  - when: always
allow_failure: false
The rules defined here work the following way:
- If the pipeline is created as a result of a push or a merge request, the test job won’t run automatically; it will wait for a manual user action.
- In all other cases, the test job will run automatically.
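As a variation, you might want the job to run automatically only on the default branch and stay manual everywhere else; here is a sketch using Gitlab’s predefined variables:
rules:
  - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
    when: always               # run automatically on the default branch
  - when: manual               # everywhere else, wait for a manual trigger
allow_failure: false           # as above, a failed run should not leave the pipeline green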
According to the Gitlab CI docs, allow_failure is true for manual jobs by default. In our case, that would mean the pipeline doesn’t fail regardless of the job’s exit code. We want the pipeline to be either red or green depending on the test results, so we do not allow failures.
before_script, script, after_script
These are essential parts of the job: they define a strict order of actions and describe what should actually be done.
before_script:
  - wget https://github.com/allure-framework/allurectl/releases/latest/download/allurectl_linux_386 -O /usr/bin/allurectl
  - chmod +x /usr/bin/allurectl
  - pip install pipenv
  - pipenv sync
script:
  - pipenv run tests_ci
after_script:
  - allurectl upload $TEST_REPORT_DIR
The before_script section is generally used to install dependencies and make some preparations. We’ve installed everything we need in this section.
script is the main part of the job: if this part finishes with errors, the whole job will be marked as failed. Only the tests run in this section.
Commands described in after_script will run regardless of the script exit code, so this part is great for reports, chat notifications, and so on.
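For example, you could append a chat notification after the upload. The CHAT_WEBHOOK_URL variable below is hypothetical and would have to be defined in your project settings, the payload format depends on your chat tool, and curl is assumed to be available in the image:
after_script:
  - allurectl upload $TEST_REPORT_DIR
  # CHAT_WEBHOOK_URL is a hypothetical variable you would define in the project settings
  - 'curl -s -X POST -H "Content-Type: application/json" -d "{\"text\": \"Tests finished in pipeline ${CI_PIPELINE_ID}\"}" "${CHAT_WEBHOOK_URL}"'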
The final script
Finally, we can combine all the pieces and get a shiny working CI pipeline.
stages:
  - tests

variables:
  ALLURE_LAUNCH_NAME: "${CI_PROJECT_NAME}_${CI_JOB_ID}"
  ALLURE_JOB_RUN_UID: "${CI_JOB_ID}"
  TEST_REPORT_DIR: "allure_results"

test:
  image: python:3.10
  tags:
    - test_tag
  stage: tests
  rules:
    - if: '($CI_PIPELINE_SOURCE == "push") || ($CI_PIPELINE_SOURCE == "merge_request_event")'
      when: manual
    - when: always
  allow_failure: false
  before_script:
    - wget https://github.com/allure-framework/allurectl/releases/latest/download/allurectl_linux_386 -O /usr/bin/allurectl
    - chmod +x /usr/bin/allurectl
    - pip install pipenv
    - pipenv sync
  script:
    - pipenv run tests_ci
  after_script:
    - allurectl upload $TEST_REPORT_DIR
This pipeline has only one stage, tests, and one job, test, belonging to that stage.
The test job consists of the image, tags, stage, and rules definitions, plus the blocks describing what should be done before, during, and after the script.
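Optionally, you could also keep the raw report files as Gitlab job artifacts, so they can be downloaded from the job page independently of the upload; this is not required for Allure TestOps itself:
test:
  # ...same job definition as above...
  artifacts:
    when: always            # keep the files even if tests fail
    paths:
      - allure_results/     # the folder produced by allure-pytest
    expire_in: 1 week       # optional retention period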
And that’s it! I hope this guide covers all your CI pipeline creation needs and saves you precious time on running Python tests.