Which pipeline tool to use?

GitHub Actions for GUI test runs!

Adopt GA and integrate with AWS, Docker & Slack.

Karishma
Technogise

--

Photo by Roman Synkevych on Unsplash

In this article, I will share my experience of working with GitHub Actions (GA) to trigger automated functional tests.

Read on to learn about the extensive capabilities that GA offers.

Background:

In the project ecosystem that I worked with, GitHub was being used as a version control system and AWS for code deployment. Hence, I evaluated running tests on AWS vs on GA.

The primary reason I went ahead with GA was that it required no additional setup; the GA environment closely matches that of my local machine.

GA is well documented by GitHub, and there's a strong community of users to help you out if needed.

Refer to the YAML below as I elaborate on it …
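As an orientation, a minimal sketch of such a workflow could look like the one below; the job name, runner and action versions are assumptions for illustration, and each piece is elaborated in the sections that follow.

name: Run functional sanity suite

on:
  schedule:
    # covered under "Triggers / Schedulers" below
    - cron: '5 8 * * 1-5'

jobs:
  functional-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # setup, Docker, test and reporting steps are covered section by section below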

Managing secrets :

The foremost thing we need to solve is managing secrets while running tests in a pipeline. GitHub Secrets can help you with that.

Note the salient features of the GitHub secrets vault :

  • Even the creator of a secret cannot see the saved value.
  • Only the owner (admin) of the repository (repo) can see the name of a secret and edit it.
  • Others cannot even see the names of the secrets present within the repo or the organisation.

Note: GitHub secrets are case sensitive.
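For instance, a saved secret is only ever referenced by its name inside the workflow; its value is injected at runtime and masked in the logs. A minimal sketch, with a hypothetical secret name:

env:
  # Hypothetical secret name; the value is injected at runtime and masked in logs
  TEST_USER_PASSWORD: ${{ secrets.TEST_USER_PASSWORD }}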

Triggers / Schedulers :

Well-known triggers, such as on push, on pull request and on merge, are used to schedule your GA workflows. Let's get to know another possible trigger.

The application I was setting this up for was a collection of microservices, where :

  • changes were pushed directly into master or merged into master at different times of the day.
  • some changes were pushed into one service and the remaining changes into other services, again at different times of the day.

This created a bit of complexity around when to trigger the tests. GUI tests need all the services to be in sync to give credible results. Hence, I looked for a trigger akin to cron.

Solution : It turns out that scheduling via cron is indeed possible! GitHub recommends planning this trigger at an odd interval within the hour, since the start of every 15-minute interval is a peak period that is best avoided. With this recommendation, and accepting a 3–10 minute delay in the execution start time, the trigger works smoothly. In this example, I have set it to 13:35 IST on weekdays.
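A minimal sketch of this trigger; note that GA evaluates cron schedules in UTC, so 13:35 IST translates to 08:05 UTC:

on:
  schedule:
    # 13:35 IST on weekdays; GA cron runs in UTC, hence 08:05 UTC, Monday to Friday
    - cron: '5 8 * * 1-5'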

Caching:

We all know the benefits of caching. Ever wondered if this is possible in GitHub workflows too?
Yes, it is! This is how I did it:

- uses: actions/setup-node@v2
  with:
    node-version: 'X.X.X'
    cache: 'yarn'

Cloning multiple repositories :

Should you need to work with another repo, you can clone it easily.

In our case, the application under test (AUT) was dockerised and located in a separate repo for modularity.

Solution : Provide the directory path to clone into. Give the repo path & branch name to clone. Create a secret which grants permission to clone the repo. Store this secret at organisation (org) or repo level within GitHub.
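With actions/checkout, that boils down to something like the sketch below; the repo path, branch, target directory and secret name are placeholders:

- name: Clone the AUT repo
  uses: actions/checkout@v2
  with:
    repository: my-org/aut-repo            # placeholder repo path
    ref: main                              # placeholder branch name
    path: aut                              # directory to clone into
    token: ${{ secrets.AUT_REPO_TOKEN }}   # secret which grants permission to clone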

Configuring AWS profile :

We needed GA to talk to AWS in order to pull the (frontend, backend) builds from specified AWS locations.

Solution : You need to configure the access key, secret key and region. Depending on your architecture, store these keys in your org-level or repo-level GitHub Secrets. The AWS CLI is pre-installed on GitHub-hosted runners, hence no installation steps are needed in the YAML.
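Since the CLI is already available, the profile can be configured directly in a run step. A sketch, with placeholder secret names and region:

- name: Configure AWS profile
  run: |
    # Keys come from GitHub Secrets; the region is a placeholder
    aws configure set aws_access_key_id ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws configure set aws_secret_access_key ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws configure set default.region ap-south-1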

Running docker containers :

My team needed to run three inter-connected containers in parallel: the frontend build, the backend build and the database.

Solution : There is no overhead to configure Docker within GA, such as installing and starting the Docker daemon. You can work directly with docker commands and bring up the containers.
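A sketch of what bringing up the three containers could look like; the image names, ports and network are placeholders:

- name: Start the application containers
  run: |
    # Detached (-d) so the step returns while the containers keep running
    docker network create aut-network
    docker run -d --name database --network aut-network postgres:13
    docker run -d --name backend  --network aut-network -p 8080:8080 my-org/backend:latest
    docker run -d --name frontend --network aut-network -p 3000:3000 my-org/frontend:latest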

Running headless & browser based tests :

Our tests were headless for the most part. A certain part of the suite needed the browser to open up, i.e. to log in. Both worked without any hiccups.

Solution : Headless as well as browser-based tests run flawlessly. They do not require any additional setup, such as Xvfb or an equivalent.

Shell commands :

You need to make sure that the GA run fails if any of the tests fail. By default that does not happen, because as far as GA is concerned, its instructions were triggered and finished without any issues. Shell commands help with this. Let's understand the theory of this implementation (a sketch follows the list) :

  • Initialise bash.
  • Capture the exit code of the test run.
  • Use this variable’s value as the exit code for GA.
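Putting those three points together, the test step could look roughly like this; the test command itself is a placeholder:

- name: Run functional tests
  shell: bash
  run: |
    # Run the suite and remember its exit code (placeholder test command)
    yarn test:functional || test_exit_code=$?
    # Reporting / cleanup commands can run here without masking a failure
    # Exit with the captured code so GA marks the step as failed when tests fail
    exit ${test_exit_code:-0}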

Accessing test artefacts, a.k.a. results :

Concepts to note (a sketch follows the list) :

  • Make sure to execute this step irrespective of the status of the previous steps.
  • The number of days for which the results of a given execution are stored is configurable; the default is 90 calendar days.
  • You can provide the paths of multiple files / folders to be exported and made available for download later.
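A sketch using the upload-artifact action; the artefact name, paths and retention period are placeholders:

- name: Upload test results
  if: always()                       # run even if the test step failed
  uses: actions/upload-artifact@v2
  with:
    name: functional-test-results
    path: |                          # multiple files / folders can be listed
      reports/
      screenshots/
    retention-days: 30               # default is 90 calendar days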

Alerting the team of the test results :

Here, you need to collaborate with the admin of Slack in your organisation.

Create a custom Slack application which has only "Incoming Webhooks" configured.

After this app is approved, configure the webhook to post a message to a private channel. A unique webhook URL is then generated and must be added as a GitHub Secret in the repo.

Make sure this step executes even when the earlier steps have failed.

You can choose to configure the alerts for success, failure or warning, as applicable. I have configured it to post only in case of failure.
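A sketch of the alerting step, posting to the webhook with a plain curl call; the message text and secret name are placeholders:

- name: Notify the team on Slack
  if: failure()                      # post only when the test run has failed
  run: |
    curl -X POST -H 'Content-type: application/json' \
      --data '{"text":"Functional test run failed: run ${{ github.run_id }}"}' \
      ${{ secrets.SLACK_WEBHOOK_URL }}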

I hope this gives you more confidence in embracing GA as your CI tool! Do let me know about your experience implementing GA.

Status badge:

You can add a status badge to your GitHub repository of tests. This gives an idea whether tests are passing or failing.

The badge code is added to the readme file and looks like this :

[![Run functional sanity suite](https://github.com/<path>/actions/workflows/<file>.yml/badge.svg)](https://github.com/<path>/actions/workflows/<file>.yml)

You can name the badge as per your requirements.

Alternatively, you can get the sample code from Actions tab within the repo.

PS : If your team's requirements are different, you could also try adding the E2E suite as a "check" on the PRs raised in your application.

--

Karishma
Technogise

QA Architect | Ops practitioner | System Design enthusiast