The Value of Quality Assurance

Jane Shiau
Theory and Principle
6 min read · Oct 1, 2019

When my boss first suggested that I write a blog post about the value of quality assurance, I stared blankly at her. Doesn’t every company producing a digital product — an app, a website, a software program — already involve QA? It was as if she’d asked me to justify the existence of puppy dogs or music.

[Image: my three chihuahuas in Halloween costumes, looking disgruntled]
Quietly hum a medley of Broadway tunes to yourself while Paco, Mina, and Pedro demonstrate how much they hate Halloween.

At Theory and Principle, we use quality assurance to refer to the process of verifying, both internally to our team and externally to clients, that the product we’ve created meets its pre-defined requirements. Some companies don’t have dedicated QA and shift this process to developers. Others wait to test until development is complete. How a company allocates its testing resources depends on its products, the type and size of the markets for those products, how easily the product can be updated once released, its budget, and so on.

For example, we design and build web and mobile applications exclusively focused on advancing the legal industry. Because our products may include legal commitments, we have a greater responsibility to deliver a bug-free experience than if we were building a free game that collects no personal information or money. In other words, the greater the chance of someone being adversely affected, financially or legally, by a bug in our programming, the more important pre-release testing becomes.

Many of our products are also intended to improve access to justice for all. We’re already a company that believes the most critical parts of any product are the user experience and user interface, but this user-centered approach becomes even more important when working to provide legal assistance to those who need it most. Testing plays an important role in ensuring that our web apps not only do what’s required of them, but do it as smoothly as possible to deliver the maximum value to users.

Another factor that shapes our testing strategy is whether a product is intended to go directly from delivery to wide public release, in which case it undergoes rigorous testing first. On the flip side, a product scheduled for an extensive beta testing period with a targeted user group might require less upfront bug testing, with our time focused instead on quickly delivering a minimum viable product for the client.

Though details vary depending on the specific product, our QA process generally involves the following steps.

Requirement Analysis

During discovery, our goal is to gain a full understanding of the problem our clients are trying to solve so that we can map out solutions and build an information architecture to serve as the framework for development. This process also allows us to determine the scope, priorities, and environment for testing the product.

Test Plan Documentation

A test plan is a higher-level document that outlines a project’s general testing procedure and documents the information we’ve gathered about testing during the requirement analysis. The specifics of what we cover in a test plan will change depending on the project, but at a minimum, we identify:

  1. What we plan to test (e.g., the product/platform/version)
  2. How we’ll test (e.g., any required software needed to accomplish the testing, who will do the testing, how bugs will be reported, what accessibility conformance level to test to)
  3. How we’ll define the outcomes (e.g., what deliverables will come out of the testing, how pass/fail will be defined for each tested issue)

A test plan can and should change throughout the project as new information comes in, so that it continues to meet its primary goal of defining the overall test process for everyone involved.
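
To make those three buckets concrete, here is a minimal sketch of how a plan’s key fields might be captured for a hypothetical project. Every name and value below is illustrative, not our actual template:

```typescript
// Illustrative only: the fields mirror the three questions above.
interface TestPlan {
  what: { product: string; platforms: string[]; version: string };
  how: {
    testers: string[];
    bugReporting: string;
    accessibilityTarget: string;
  };
  outcomes: { deliverables: string[]; passFailCriteria: string };
}

const plan: TestPlan = {
  what: {
    product: "Partner referral portal", // hypothetical product name
    platforms: ["Chrome", "Firefox", "Safari", "iOS Safari"],
    version: "1.2.0",
  },
  how: {
    testers: ["QA engineer", "developers (unit/integration)"],
    bugReporting: "Tickets filed in the project tracker with repro steps",
    accessibilityTarget: "WCAG 2.1 AA",
  },
  outcomes: {
    deliverables: ["test summary report", "open-defect list"],
    passFailCriteria:
      "A feature passes when every acceptance criterion is met with no open blocking bugs",
  },
};
```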

Test Execution

Because we’re an Agile development shop, we develop and release features in two-week sprints rather than in one fell swoop (or giant disaster) at the end of the engagement. So our active test phase is an iterative process: as features are developed, we test, we send them back as necessary to developers, we test again, we move them forward in the process, new features are developed, repeat. There are many names for all the different types of testing, but for the sake of simplifying the discussion, I’ll loosely group the type of tests we run during this part of the development by the job types that perform them — developers and QA testers.

Developer Tests

The developers building our projects run tests on a regular basis. These tests are generally automated: written once by a developer and then reused over the course of the project, with the specifics depending on the programming language and framework. Tests include the following (a small sketch of the first two appears after the list):

  • Unit testing: testing the smallest piece of code written, such as a function.
  • Integration testing: ensuring that combined units work well together.
  • Load testing: determining how the system will perform under varying conditions. (An app intended for public use that works perfectly well when one QA tester is navigating it is useless if it crashes as soon as more than ten people log on.)
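
Here is that sketch, using a hypothetical referral-fee calculator and a Jest-style test runner; the functions, names, and figures are all invented for illustration:

```typescript
// Assumes Jest's standard globals (describe, it, expect).

// Hypothetical unit under test: a referral-fee calculator.
function referralFee(amount: number, rate: number): number {
  if (amount < 0 || rate < 0) throw new Error("negative input");
  return Math.round(amount * rate * 100) / 100; // round to cents
}

// Hypothetical second unit: formats an invoice line for a partner.
function formatInvoiceLine(partner: string, fee: number): string {
  return `${partner}: $${fee.toFixed(2)}`;
}

// Unit tests: exercise the smallest piece of code in isolation.
describe("referralFee (unit)", () => {
  it("computes a fee rounded to cents", () => {
    expect(referralFee(1000, 0.125)).toBe(125);
  });

  it("rejects negative input", () => {
    expect(() => referralFee(-1, 0.1)).toThrow("negative input");
  });
});

// Integration test: verify the two units work well together.
describe("invoice line (integration)", () => {
  it("combines fee calculation and formatting", () => {
    expect(formatInvoiceLine("Acme Legal", referralFee(1000, 0.125)))
      .toBe("Acme Legal: $125.00");
  });
});
```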

QA Tests

This is what most people typically think of as quality assurance work, even though for our company, quality assurance really runs through the entire development cycle of a product. Prior to development, each feature and its acceptance criteria are defined in a user story: a brief description of a particular type of user, the action they want to take, and the goal that action serves (example: “As an administrator for a legal referral site, I want to be able to view a list of all the partners who have recently requested access so that I can approve them to use the site”).

Depending on the complexity of the user story, we might write a test case for it that details how to test the feature, any test data necessary, and what constitutes a pass or fail for the feature being tested. Features that fail, along with any other issues that come up during testing, are documented and returned to the developers for another round of coding before they go back to QA.
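
For the user story above, a test case might be recorded along these lines. The structure and field names are an illustrative sketch, not a real template from our tracker:

```typescript
// Illustrative test-case record for the partner-access user story.
interface TestCase {
  id: string;
  userStory: string;
  steps: string[];
  testData: Record<string, string>;
  passCriteria: string;
}

const viewPartnerRequests: TestCase = {
  id: "TC-042", // hypothetical identifier
  userStory:
    "As an administrator, I want to view a list of partners who have recently requested access",
  steps: [
    "Log in with an administrator account",
    "Navigate to the partner access requests page",
    "Confirm every pending request appears in the list",
  ],
  testData: { adminUser: "qa-admin@example.com" }, // hypothetical account
  passCriteria:
    "All pending partner requests are listed and each can be approved",
};
```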

QA testers are responsible for ensuring that:

  • Feature requirements are met, both functionally and visually.
  • The product behaves consistently across browsers and devices.
  • Bugs or unexpected outputs are captured and sent back to developers.

Accessibility Tests

Accessibility testing is the process of determining whether software, apps, and websites are usable by people with disabilities, such as those who have difficulty seeing or hearing, or who find it physically challenging to navigate with a keyboard or mouse. We use the Web Content Accessibility Guidelines (WCAG) and its conformance levels as a reference for adapting our web apps to be useful to the widest possible range of users.
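
Part of this testing can be automated. As one example of how that can look, a library like jest-axe (which wraps Deque’s axe-core engine) can flag common WCAG failures in rendered markup; the sketch below assumes a Jest environment with jsdom, and the markup is invented:

```typescript
import { axe, toHaveNoViolations } from "jest-axe";

// Assumes Jest's standard globals (it, expect) and a jsdom environment.
expect.extend(toHaveNoViolations);

it("sign-in form has no detectable accessibility violations", async () => {
  // Invented markup; in a real suite this would be rendered app output.
  const html = `
    <main>
      <form>
        <label for="email">Email</label>
        <input id="email" type="email" />
        <button type="submit">Sign in</button>
      </form>
    </main>`;
  expect(await axe(html)).toHaveNoViolations();
});
```

Automated checks like this catch only a subset of accessibility issues, so they supplement rather than replace manual testing with keyboards and screen readers.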

User Acceptance Tests

At the end of a sprint, the client reviews all the features that have passed QA during the previous two weeks and signs off that they meet the requirements. Because we’ve kept the client informed of everything we’ve done and given them opportunities to view and test the product themselves, our expectation is that there will never be any surprises when we complete a product and turn it over at the end of an engagement. If you forget everything else you’ve just read in this post, remember this: In the context of product development, SURPRISES ARE BAD.

In the context of Doctor Who episodes, surprises like the Weeping Angels are still pretty bad.

Regression Tests

Regression testing takes place prior to a release, or on a more regular basis for larger projects. During this kind of testing, we run through the software and check that our new changes haven’t broken any existing functionality. Even though older features have already been tested and passed QA, changes made for other reasons can have unintended consequences and affect seemingly unrelated sections. Because full regression testing can take up considerable time and resources, even with automated test suites, the specifics of each project determine how often we perform regression testing and which tests receive priority during it.
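
One lightweight way to handle that prioritization, assuming a Jest-style suite, is to tag critical-path tests by name and filter on the tag. The “[smoke]” convention and the tests below are illustrative placeholders:

```typescript
// Assumes Jest's standard globals; tests and data are placeholders.
describe("[smoke] sign-in", () => {
  it("returns a session token for valid credentials", () => {
    const session = { token: "abc123" }; // stand-in for a real login call
    expect(session.token).toBeTruthy();
  });
});

describe("report export", () => {
  it("exports the partner list as CSV", () => {
    const csv = "partner,status\nAcme Legal,approved"; // stand-in output
    expect(csv.split("\n")).toHaveLength(2);
  });
});
```

With a convention like this, `npx jest -t '\[smoke\]'` runs only the tagged critical-path tests on every change (Jest’s `-t` flag filters test names by regular expression), while the full suite runs before each release.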

Summary

Because we’re a design-first company, we take the user’s needs into consideration from the start of our process. These user needs underlie every single step of our development. QA, done well, is similarly integrated throughout the development process. In a design-first company, the purpose of a QA tester is not to ensure that the design works for the user — that’s what we have a designer for — but to verify that the translation of design, through development and to the screen for the user, is as seamless and accurate as possible.


Jane Shiau is a QA Engineer at Theory and Principle, a legal technology product design and development firm in Portland, Maine.