Quality Assurance, Testing and Tools

Thilo Haas
smartive
7 min read · Jul 31, 2018

In our day-to-day business as a web development agency we set up a variety of tools and processes to ensure the highest possible quality of our work. In the following sections I will give a short introduction to how we achieve outstanding quality by following some simple rules.

We can’t blame the technology when we make mistakes.
- Tim Berners-Lee

The following sections will cover our tools and processes for quality assurance in different topics:

  1. Development Process
  2. Continuous Integration
  3. Monitoring
  4. Logging and Notifications
  5. Manual Tests

The cultivation of fields as a metaphor for quality assurance

Development Process

It all starts with the development process. To enable efficient development cycles we rely heavily on the Scrum framework. Before development actually starts, the requirements are put into user stories with explicit acceptance criteria. Only really small technical tasks or bug fixes are allowed to go into development without specified acceptance criteria.

While developing we use the git flow branching model to keep feature, release and production branches in line while enabling easy hotfixes.
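As an illustration of that model, the day-to-day commands (assuming the git-flow extension is installed) look roughly like this:

```sh
# Start and finish a feature branch off develop
git flow feature start my-feature
git flow feature finish my-feature

# Cut a release branch, then merge it into master and tag it
git flow release start 1.2.0
git flow release finish 1.2.0

# Hotfix branches start from master for urgent production fixes
git flow hotfix start 1.2.1
git flow hotfix finish 1.2.1
```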

Code Linting, Automated Unit and Functional Tests

Automatic code linting as well as unit and functional tests allow fast development of new features, massive code rewrites and consolidations, while making sure that the changed product still works as expected and the code style stays consistent.

We do not strive for 100% test coverage but rather add tests for more complicated functionality where applicable. Which tests should be added where is the responsibility of both the developer and the reviewer. Whenever a bug arises, a test for that specific case is added to prevent it from happening again.
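As a hedged sketch of such a regression test, written with Jest as the test runner (the article does not prescribe one) and a hypothetical formatOpeningHours helper:

```typescript
import { formatOpeningHours } from './opening-hours'; // hypothetical helper

describe('formatOpeningHours', () => {
  // Regression test for an imagined bug: missing opening hours used to crash
  // the formatter instead of yielding an empty string.
  it('returns an empty string when no opening hours are provided', () => {
    expect(formatOpeningHours(undefined)).toBe('');
  });
});
```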

With a Git pre-commit hook we ensure that only code which passes the code linting can be committed and pushed to the remote. The hook could be extended to check for successful unit and functional tests as well.
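A minimal sketch of such a hook, saved as .git/hooks/pre-commit and made executable (it assumes lint and test scripts are defined in package.json):

```sh
#!/bin/sh
# Abort the commit if linting fails.
npm run lint || exit 1

# Possible extension: also run the unit and functional tests.
# npm test || exit 1
```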

Merge Requests and GitLab CI

As soon as a code change is ready, it is pushed to a remote Git branch and a merge request is opened. Direct pushes to the develop or master branch are disabled, so that merge requests are enforced. All merge requests must be reviewed by a second developer, so at least four eyes have looked at the code.

To enforce our quality measures, a merge request can only be merged if code linting as well as all unit and functional tests ran through successfully and if all discussions on the merge request are resolved.

An additional GitLab CI pipeline that tests for a successful build can save a lot of time by preventing broken production builds, which are not always uncovered by traditional unit and functional tests. For example, on our React and React Native projects, where we share some common business logic, we run a pipeline which continuously asserts that:

  1. Code styles are respected by performing code linting
  2. All unit and functional tests pass
  3. The final library can be built (e.g. all dependencies can be resolved)
  4. The React Native app on iOS can be built
  5. The React Native app on Android can be built

Due to this fully automated process, potential problems can be detected early on and are therefore much easier to spot and resolve.
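A minimal .gitlab-ci.yml sketch along these lines (job names and scripts are illustrative, not our actual configuration, and the iOS/Android build commands are hidden behind hypothetical npm scripts):

```yaml
stages:
  - lint
  - test
  - build

lint:
  stage: lint
  script:
    - npm ci
    - npm run lint

test:
  stage: test
  script:
    - npm ci
    - npm test

build-library:
  stage: build
  script:
    - npm ci
    - npm run build

build-ios:
  stage: build
  tags:
    - macos                  # iOS builds need a macOS runner
  script:
    - npm ci
    - npm run build:ios      # e.g. a wrapper around xcodebuild or fastlane

build-android:
  stage: build
  script:
    - npm ci
    - npm run build:android  # e.g. a wrapper around ./gradlew assembleRelease
```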

Continuous Integration

On all medium and large size projects we rely on a three-environment setup: DEV, QUAL and PROD.

The deployment is aligned with the git flow branching model and is done by GitLab CI to ensure consistency and to prevent failures and bugs caused by local file changes.

A car engine brings you steadily forward.

Only when all quality measures have succeeded is the given branch deployed onto the respective environment. The whole process is automated to achieve continuous integration.
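A hedged sketch of that branch-to-environment mapping in .gitlab-ci.yml (it assumes a deploy stage is added to the stages list above; deploy.sh is a hypothetical placeholder for the project-specific deployment):

```yaml
deploy-dev:
  stage: deploy
  script:
    - ./deploy.sh dev        # hypothetical deployment script
  environment:
    name: DEV
  only:
    - develop

deploy-qual:
  stage: deploy
  script:
    - ./deploy.sh qual
  environment:
    name: QUAL
  only:
    - /^release\/.*$/

deploy-prod:
  stage: deploy
  script:
    - ./deploy.sh prod
  environment:
    name: PROD
  only:
    - master
```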

Monitoring

We actively monitor our running applications and APIs, so we can react to problems before the customer notices them.

Monitoring the third-party APIs and external services our applications depend on is also very important for fast isolation and early detection of potential problems.

Active, continuous health checks are essential for early detection of problems

For simple uptime monitoring we use Pingdom, which only checks whether a server is responding and therefore alive. Additionally, we set up Runscope for an in-depth monitoring of our API responses, which checks their structure and content. With this we can make sure that the API delivers the expected number of entities and that the data it is serving is correct.
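The kind of assertion such a check performs can be sketched in a few lines of TypeScript; this is not Runscope's API, just an illustration of the idea, and the endpoint, expected count and field names are made up:

```typescript
import fetch from 'node-fetch';

// Illustrative API check: endpoint, expected count and field names are made up.
async function checkStoresApi(): Promise<void> {
  const response = await fetch('https://api.example.com/stores');
  if (!response.ok) {
    throw new Error(`Unexpected status code: ${response.status}`);
  }

  const stores = (await response.json()) as Array<{ id?: string; openingHours?: unknown }>;

  // Check the number of entities and the presence of critical fields.
  if (stores.length < 100) {
    throw new Error(`Expected at least 100 stores, got ${stores.length}`);
  }
  if (stores.some((store) => !store.id || !store.openingHours)) {
    throw new Error('Found stores with missing id or opening hours');
  }
}

checkStoresApi().catch((err) => {
  console.error(err);
  process.exit(1); // a non-zero exit lets the scheduler or monitoring raise an alert
});
```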

But for some applications this is not enough. Our storefinder API, for example, must not serve incorrect or missing data or opening hours at any time. Therefore we added additional data verification on the API and data importer level to ensure correct data before it hits production. For this task we use JSON schema validation.

We created a library that reuses the Swagger documentation to automatically generate the JSON schema out of the object definitions and facilitate validation. The library is open sourced as the giuseppe swagger plugin — have a look at the buildDefinitions function. These definitions are then validated using the Tiny Validator.
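A minimal sketch of that validation step with the Tiny Validator (tv4); in our setup the schema comes from the generated Swagger definitions, here a hand-written stand-in is used for illustration:

```typescript
import * as tv4 from 'tv4';

// Hand-written stand-in for a definition normally generated from the Swagger docs.
const storeSchema = {
  type: 'object',
  required: ['id', 'name', 'openingHours'],
  properties: {
    id: { type: 'string' },
    name: { type: 'string' },
    openingHours: {
      type: 'array',
      items: {
        type: 'object',
        required: ['day', 'open', 'close'],
        properties: {
          day: { type: 'string' },
          open: { type: 'string' },
          close: { type: 'string' },
        },
      },
    },
  },
};

export function validateStore(data: unknown): void {
  const result = tv4.validateMultiple(data, storeSchema);
  if (!result.valid) {
    // Reject the import before bad data can reach production.
    const messages = result.errors.map((e) => `${e.dataPath}: ${e.message}`);
    throw new Error(`Invalid store data: ${messages.join(', ')}`);
  }
}
```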

With these tools we can efficiently monitor all our applications and facilitate fast problem resolution when an issue occurs.

Logging and Notifications

While our monitoring tools ensure that the applications are running, we have logging in place to:

  • Make sure that application scripts and cronjobs are running as expected
  • Provide additional debug information when an issue arises
  • Log edge cases and application errors

Ensuring we are notified at the right moment when necessary

Our setup consists of application logs which are sent to a centralized Graylog or, depending on the client, to Logstash and Kibana. Anomalies within the log streams and specific notification rules trigger notifications on Slack and via email.
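Independent of the concrete logging library, the important part is that log entries are emitted as structured records that Graylog or Logstash can index and alert on. A rough TypeScript illustration (field and service names are made up):

```typescript
// Minimal structured logger: writes JSON lines to stdout, where a log shipper
// (for example a GELF or Logstash forwarder) picks them up.
type Level = 'debug' | 'info' | 'warn' | 'error';

function log(level: Level, message: string, context: Record<string, unknown> = {}): void {
  console.log(
    JSON.stringify({
      timestamp: new Date().toISOString(),
      level,
      message,
      service: 'store-importer', // hypothetical service name used for filtering and alerting
      ...context,
    })
  );
}

// Example: log an edge case with enough context to debug it later.
log('warn', 'Import skipped a store with missing opening hours', { storeId: '1234' });
```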

Manual Tests

All previous measures can be highly automated and are closely integrated into our daily development process. This makes sure that the basic quality requirements are always satisfied. To further improve the quality of our work we can manually focus on specific topics.

Regression Tests

To make sure that previously developed and tested software still behaves as expected after new functionality has been introduced, it is advisable to run regression tests. They can be compared to a step-by-step click-through of the core functionalities: the larger the deviation from the original implementation, the more extensive these tests become, and they must always cover all affected systems. Luckily there are tools like Selenium that can handle most of these testing routines, and if new test cases are maintained regularly alongside the evolving software, the regression test suite steadily grows with it.
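A hedged sketch of such an automated click-through with the selenium-webdriver package (the URL and selectors are invented for illustration):

```typescript
import { Builder, By, until } from 'selenium-webdriver';

// Regression check for one core flow: the store search must still return results.
async function storeSearchStillWorks(): Promise<void> {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://www.example.com/storefinder');

    // Type a query and submit the search form (selectors are made up).
    await driver.findElement(By.css('input[name="query"]')).sendKeys('Zurich');
    await driver.findElement(By.css('button[type="submit"]')).click();

    // The result list must appear within five seconds.
    await driver.wait(until.elementLocated(By.css('.store-result')), 5000);
  } finally {
    await driver.quit();
  }
}

storeSearchStillWorks().catch((err) => {
  console.error(err);
  process.exit(1);
});
```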

Load tests

Is the application capable of withstanding the production environment? Performing load and performance tests already at an early stage of the development cycle can detect potential bottlenecks early on. A well-proven tool for performance tests is Apache JMeter. In combination with a service like flood.io it makes it possible to test the limits of your application and setup.

Accessibility Tests

Although accessibility tests can be automated to some extent with tools like pa11y, that is never a substitute for real accessibility tests performed by experts. But it is also important as a developer to have a basic understanding of the accessibility requirements and to keep them in mind while developing. If you have the opportunity to watch an accessibility expert do their job live, it is the best insight you can get. We actively engaged in accessibility training and have therefore always performed very well in subsequent accessibility tests.
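For the automated part, a minimal pa11y run looks roughly like this (the URL is a placeholder):

```typescript
import pa11y from 'pa11y';

// Run pa11y's default accessibility checks against a page and fail on any issue.
async function checkAccessibility(url: string): Promise<void> {
  const results = await pa11y(url);
  if (results.issues.length > 0) {
    for (const issue of results.issues) {
      console.error(`${issue.code} at ${issue.selector}: ${issue.message}`);
    }
    throw new Error(`${results.issues.length} accessibility issues found on ${url}`);
  }
}

checkAccessibility('https://www.example.com').catch((err) => {
  console.error(err);
  process.exit(1);
});
```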

Constantly check that no unauthorized access is possible

Security audits

Does the application or infrastructure really only allow access for authorized users, or are there any gaps, bugs or backdoors? Security audits make sense for all medium or large sized projects and especially for applications with sensitive data. They should always be executed by a specialized third party to be reliable and independent and to also cover what the developers themselves have not thought of.

Summary

From manual processes up to automated checks and validations, quality assurance must cover a broad spectrum of processes and areas.

We have achieved the greatest increase in quality with the dual control principle through code reviews. But security audits and accessibility tests are almost as important, especially for medium and large projects, and can save a lot of effort and hassle by bringing up potential problems before the project is publicly released.

Finally, it is always advisable to have a third party look at the whole picture and bring in their outside point of view, which often catches what one overlooks oneself.
