Quality and Software Testing at AT Internet

Alexandre AUBERT
AT Internet
Feb 22, 2019

In a previous article, I discussed how automation is essential for testing regularly and effectively within an Agile development framework. Today, let’s take a moment to think about one of the fundamentals of software testing: the notion of quality.

Software quality

What does it mean to you?

  • Is it the number of bugs per line of code?
    Wrong. That is a dangerous metric that does not take into account, for example, the bugs that have not yet been discovered, nor the effort invested in the detection of bugs.
  • Is it the amount invested in application testing?
    Wrong again. Some companies would like there to be a direct link between the investment in (often manual) application testing and product quality, but this isn’t always the case. Such an investment can raise the quality level of a given version of a product, but it does not necessarily make it a “quality” product.

We could come up with a list of indicators to measure one specific aspect that is closely (or not so closely) linked to software quality, but this would still not answer the original question. In reality, there are multiple definitions of software quality, each one as valid as the next. The relevance of each definition greatly depends on context.

Nonetheless, some guidelines do exist for building a standardised model of this notion of quality:

Since 1992, the ISO 9126 standard has defined a common language and a model of software quality, with 6 major characteristics subdivided into 27 sub-characteristics:

  • Functionality (ability to meet requirements)
    Suitability, accuracy, interoperability, compliance, security, etc…
  • Reliability (over time and according to certain conditions of use)
    Maturity, fault tolerance, recoverability, etc…
  • Efficiency (cost of owning the technical infrastructure)
    Efficiency of the resources used and of completion times, etc…
  • Usability (ownership effort required)
    Operability, learnability, understandability, etc…
  • Maintainability (cost of development)
    Stability, changeability, analysability, testability, etc…
  • Portability (cost of a platform transfer)
    Ease of installation and migration, adaptability, compliance, etc…

This standard was replaced in 2011 by the ISO 25010 standard, which adjusted several definitions and, most importantly, promoted security to a top-level quality characteristic in its own right. (The long-debated question “Should security be considered a feature?” has thus finally been put to rest.)

This gives us a fairly comprehensive framework for thinking about elements that are sometimes unfairly set aside while we focus on features. It becomes clear that a piece of software’s features are only one angle among many to consider when assessing its quality.

On a final note, could it be that, in the end, the quality of a product is simply its ability to satisfy its users, regardless of the reasons for this satisfaction?

The principles of software testing

Now that we have a clearer idea of what software quality represents, let’s look at the methods we can use to evaluate quality and make improvements. Quality assessment requires the implementation of different kinds of tests, which can take many forms and be of different types. These allow you to cover the full breadth of software quality characteristics.

Despite the variety of possible tests, some principles remain true regardless of the test type or level. The International Software Testing Qualifications Board (ISTQB) has defined 7 basic principles for software testing. I’ll give a few real-life examples to illustrate how these principles arise on a daily basis for software publishers like us:

1. Testing shows the presence of defects: it cannot guarantee their absence

When it’s time to validate a feature, a PO (Product Owner) will often say, “The tests are green, so there are no more bugs, right?” Um… No, not exactly… 😊
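
To make this concrete, here is a minimal sketch (hypothetical code, not taken from our product) of a test suite that is entirely green while a defect still slips through, simply because nobody wrote the test that would expose it:

```python
# Hypothetical example: a discount function with a green but incomplete test suite.
def apply_discount(price: float, rate: float) -> float:
    """Return the price after applying a discount rate (0.25 == 25%)."""
    return price * (1 - rate)  # Bug: nothing prevents rate > 1, which yields a negative price.

# These tests all pass ("the tests are green")...
def test_quarter_discount():
    assert apply_discount(100.0, 0.25) == 75.0

def test_no_discount():
    assert apply_discount(100.0, 0.0) == 100.0

# ...yet they say nothing about the inputs nobody thought of,
# e.g. apply_discount(100.0, 1.5) == -50.0.
```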

2. Discover defects as soon as possible: the importance of testing during early phases

In 2016, we had come to understand the extra costs caused by the late discovery of defects. We then thought we’d found the ultimate solution: we set up a taskforce with a member from each team for one to two days during each new release, in order to limit the impact on our customers. We figured this was more effective than dealing with dissatisfied customers, and that it lessened the impact on our image. We have since understood that we can anticipate these problems by investing in good preventive testing in our internal environments. Everyone, from our customers to our developers, can therefore rest easy during releases (and these have also become much more frequent).

3. Exhaustive testing is impossible: the need to prioritise/tailor testing efforts

Our product Navigation, released in beta in March 2018, required the prioritisation of certain tests based on the risk assessment of different usage scenarios. We chose to implement only a selection of tests in order to best meet the expectations of our customers, who wanted a quality product but were also (very) eager to explore their data with this new interface.
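
As an illustration, a risk assessment of this kind can be as simple as scoring each usage scenario by likelihood × impact and automating tests for the highest scores first. The sketch below is hypothetical; the scenario names and scores are invented:

```python
# Hypothetical risk-based prioritisation: score = likelihood (1-5) * impact (1-5).
scenarios = [
    {"name": "load main traffic dashboard", "likelihood": 5, "impact": 5},
    {"name": "export data as CSV",          "likelihood": 3, "impact": 4},
    {"name": "rename a saved template",     "likelihood": 2, "impact": 1},
]

for s in scenarios:
    s["risk"] = s["likelihood"] * s["impact"]

# Automate tests first for the riskiest scenarios; defer or drop the rest.
for s in sorted(scenarios, key=lambda s: s["risk"], reverse=True):
    print(f'{s["risk"]:>2}  {s["name"]}')
```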

4. Aggregation of defects: the need to examine the distribution of observed defects in order to target testing efforts

[Charts: Aggregation of defects, 2016 and 2018]

In 2016, 18% of our production defects were concentrated in a single component of the data query engine. We therefore invested in testing this area, and by 2018, the same component generated 4 times fewer bugs. This has had a significant impact on our overall volume of operations. Thanks to these focused efforts, we have become more efficient and improved the quality of our solution.
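
The analysis behind this is straightforward: count production defects per component and concentrate test effort where they cluster. A minimal sketch, with invented component names and counts:

```python
from collections import Counter

# Hypothetical list of production defects, each tagged with the component it was found in.
defects = [
    "query-engine", "query-engine", "query-engine", "query-engine",
    "dashboard-ui", "data-collection", "query-engine", "export-api",
]

by_component = Counter(defects)

# The most defect-prone components are where additional test effort pays off first.
for component, count in by_component.most_common():
    share = count / len(defects)
    print(f"{component:<16} {count:>2} defects ({share:.0%})")
```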

5. Pesticide paradox: the need to update/maintain test sets

Features and customer usage are always evolving, so non-regression testing must evolve along with them!
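
In practice, this can be as simple as treating the list of regression cases as a living artefact. A hypothetical sketch, assuming a pytest-style parametrised suite (the function and cases are invented for illustration):

```python
import pytest

# Hypothetical non-regression suite: the parameter list is the part that must keep evolving.
# Whenever a new locale, device or usage pattern becomes significant, a case is added here;
# otherwise the same "pesticide" keeps being sprayed on bugs that are already gone.
REGRESSION_CASES = [
    ("fr-FR", "desktop"),
    ("en-GB", "desktop"),
    ("en-GB", "mobile"),   # added when mobile usage grew
]

def render_tagging_snippet(locale: str, device: str) -> str:
    """Stand-in for the feature under test."""
    return f"<script data-locale='{locale}' data-device='{device}'></script>"

@pytest.mark.parametrize("locale,device", REGRESSION_CASES)
def test_snippet_mentions_locale_and_device(locale, device):
    snippet = render_tagging_snippet(locale, device)
    assert locale in snippet and device in snippet
```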

6. Testing is context-dependent: the need to adapt practices and objectives to each context

7. Illusion of the absence of defects: finding and fixing bugs does not guarantee customer satisfaction, as the product must also meet their needs

We have developed and continue to maintain our testing strategies with these fundamentals in mind, as well as the specific context of each situation, each project, and each team.

Our approach to the quality of our products

“Quality is everyone’s responsibility,” according to W. Edwards Deming. This rings true at AT Internet, where everyone, at every level, works to improve product quality, the reliability of measured data, and, in turn, the satisfaction of our users.

Developers and testers optimise their unit testing and code analysis practices, refining the automated non-regression tests that evaluate new features. Product Owners continually increase their requirements in terms of quality and test automation. Meanwhile, management invests in and supports these efforts, and I act as a liaison, helping each team move forward in the way that best suits them.

Now that we have pooled our efforts, we can make progress for our clients, and quality is one of our top priorities. It is part of our DNA — an integral part of the inner workings of our company.

Originally published at blog.atinternet.com on February 22, 2019.

Alexandre AUBERT
AT Internet

Passionate about software testing, I implement test strategies and continuous integration processes in different Agile environments.