QA best practices for cloud and K8s solutions

Vladyslav Zagorodnyi
The Spot to be in the Cloud
4 min read · Jul 13, 2021

Delivering innovative and reliable solutions in any industry is complex and time-consuming. At my current company, Spot by NetApp, we help our customers (usually DevOps engineers) reduce the time and cost involved in managing their cloud compute infrastructure — whether in AWS, Azure or Google Cloud.

Behind our platform there is a lot of cutting-edge code and many machine learning algorithms. Of course, it’s impossible to develop software without any errors or bugs, and that is where my team, the QA (Quality Assurance) group, comes in. Here at Spot the QA team is completely integrated into the development lifecycle, as customer obsession and customer satisfaction are baked into everything we build and do.

In this post I’ll share some of the best practices we employ to ensure our products bring our customers the value they expect.

Manual and automated QA — complementary functions

As always, establishing processes and standards is key to success in any project, and most certainly in QA.

At Spot by NetApp, QA consists of two segments: QA automation and manual QA.

QA automation works to deliver information about software quality on a regular basis. We receive functional suitability reports 24/7/365 from various automation tools (e.g. Selenium WebDriver together with Jenkins) covering regression of our UI and APIs. In case of failures, a notification mechanism and escalation flow kick in, so we discover problems and start remediation before our customers experience any issue.
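As an illustration, a regression check in such a pipeline might look like the sketch below; the console URL, element ID and alert webhook are hypothetical placeholders, not our actual setup:

```python
# regression_smoke.py - minimal UI regression check with escalation (sketch)
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

CONSOLE_URL = "https://console.example.com/login"      # hypothetical URL
ALERT_WEBHOOK = "https://hooks.example.com/qa-alerts"  # hypothetical webhook

def check_login_page() -> None:
    driver = webdriver.Chrome()
    try:
        driver.get(CONSOLE_URL)
        # Fail fast if the login form is missing or broken
        assert driver.find_element(By.ID, "login-form").is_displayed()
    finally:
        driver.quit()

if __name__ == "__main__":
    try:
        check_login_page()
    except Exception as exc:
        # Escalate immediately so remediation starts before customers notice
        requests.post(ALERT_WEBHOOK, json={"text": f"UI regression failed: {exc}"})
        raise
```

A scheduler such as Jenkins would run a script like this around the clock and surface the alert through the escalation flow.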

Manual QA is indispensable when working on new product or feature releases. This is where it makes sense to invest human effort in activities that are not well suited to automation, such as:

  • Unstable versions
  • Retests
  • Edge cases
  • One time tests
  • Urgent tests
  • User experience evaluation

In short, we use automated and manual approaches wherever each is best suited. During planning, we evaluate which tasks call for human-led QA and which are best handled by machines. Our QA team is always aligned with our development teams, with jointly established KPIs to ensure we increase coverage of all products and features in a timely manner.

QA requirement planning

When a task gets assigned to a QA engineer, requirement analysis starts. This is the most sensitive part of a QA job, because the QA engineer must understand all the details and propose a foolproof solution in text, tabular or graph form. Here at Spot we pay attention to coverage, depth of testing, scope, consistency, preconditions and possible risks.

We use the “Test Decomposition” methodology for verifying product requirements (typically based on the PRD) and transforming them into these documents:

  • Test Plans — the general structure of the indicated level of testing, any required test design techniques and any estimations
  • Test Cases — the details of what will be tested and what the expected results are
  • Test Suites — well-built sequences of test scenarios (e.g. first test login and then test logout)
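To make that last item concrete, here is a minimal sketch of an ordered login-then-logout suite in pytest; the SpotConsole client is a hypothetical stand-in, not our real test harness:

```python
# test_session_suite.py - ordered login/logout scenario (illustrative sketch)
import pytest

class SpotConsole:
    """Hypothetical stand-in for a UI/API client used in tests."""
    def __init__(self):
        self.logged_in = False
    def login(self, user: str, password: str) -> bool:
        self.logged_in = True   # a real client would call the actual API
        return self.logged_in
    def logout(self) -> bool:
        self.logged_in = False
        return not self.logged_in

@pytest.fixture(scope="module")
def console() -> SpotConsole:
    # One shared session for the whole suite
    return SpotConsole()

def test_login(console):
    # Precondition for every later scenario in the suite
    assert console.login("qa-user", "secret")

def test_logout(console):
    # Runs after login, matching the suite's intended sequence
    assert console.logout()
```

pytest executes tests in the order they appear in the file, which is what gives the suite its well-built sequence.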

By detecting issues at an early stage of the process, we gain lots of benefits. In development practice it’s easier, faster and cheaper to fix issues at the very beginning than after the official production release. So we value the chance to work with product owners during the early stages of the software development lifecycle, when feature functionality and benefits are being established and requirements gathered.

Where QA automation shines

In general, software products have a great variety of use cases, different flows and millions of applicable values. This can make manual testing almost impossible. Say we add a new feature to our Spot Console that consists of:
- 3 mandatory drop-downs
- each dropdown has 100 values, all applicable for selection
- “confirm” button
- result window (assume result always “true”)

A straightforward calculation shows us:
All possible test scenarios = 100 (values of the 1st dropdown) × 100 (values of the 2nd dropdown) × 100 (values of the 3rd dropdown) = 1,000,000 possibilities.
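A quick sanity check of that arithmetic in code:

```python
# Exhaustive combination count: 3 dropdowns x 100 values each
from itertools import product

dropdowns = [range(100)] * 3          # three dropdowns, 100 values each
total = sum(1 for _ in product(*dropdowns))
print(total)                          # 1000000
```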

Testing all of them manually would be exhausting, time-consuming and risky due to human error. Fortunately, manual QA theory offers test design techniques. If you want to know how to decrease the list of all possible combinations by 60+% without a negative impact on coverage, read my upcoming article on the Pairwise test design technique, which we use here at Spot by NetApp.
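As a quick preview, pairwise tooling is readily available in the open-source world; the sketch below uses the allpairspy package, which is my illustrative choice rather than a statement about our internal tooling:

```python
# Pairwise reduction sketch: cover every pair of values across the
# three dropdowns instead of every triple.
from allpairspy import AllPairs

parameters = [list(range(100)), list(range(100)), list(range(100))]
cases = list(AllPairs(parameters))
print(len(cases))  # on the order of 10,000 cases, not 1,000,000
```

Every pair of dropdown values still appears in at least one case, which is why coverage of pairwise interactions is preserved while the test count collapses.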

Hand-made excellence with manual QA

To reach maximum quality, QA engineers make full use of the manual testing stack of test design techniques:

  • Equivalence classes
  • Boundary values
  • Decision Table
  • Pairwise
  • State Transition
  • Domain Analysis
  • Use case

In general, test design techniques allow testers to be more efficient, decreasing the number of tests while increasing coverage, so we can perform more testing activities and deliver more customer satisfaction.
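To give one concrete example from that list, equivalence classes and boundary values reduce a whole numeric range to a handful of representative checks; in the sketch below, the validate_instance_count function and its 1–1000 range are hypothetical:

```python
# boundary_values.py - equivalence classes + boundary values (illustrative)
import pytest

def validate_instance_count(n: int) -> bool:
    """Hypothetical validator: accepts 1..1000 instances."""
    return 1 <= n <= 1000

# One value per equivalence class plus the values around each boundary
@pytest.mark.parametrize("n,expected", [
    (0, False),     # just below the lower boundary
    (1, True),      # lower boundary
    (500, True),    # representative of the valid class
    (1000, True),   # upper boundary
    (1001, False),  # just above the upper boundary
])
def test_instance_count(n, expected):
    assert validate_instance_count(n) is expected
```

Five cases stand in for a thousand possible inputs, with the boundaries, where bugs cluster, checked explicitly.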

Evaluate and escalate

The most interesting things happen when QA detects an issue, as that is the moment we start to make our product really better. A bug report is the result of professional test design, and it’s no secret that bug reports must be organised just as professionally. Bug reports at Spot consist of:

- Clear title
- Consistent steps to reproduce
- Clear difference between actual and expected results
- Environment
- Assignee (the person handling the issue)

But one of the most important aspects is prioritising issues, to ensure that an urgent bug that blocks our clients is fixed immediately, while everything else waits in the queue without blocking urgent activities. We capture these rules with attributes such as severity, which denotes the technical impact of a bug on the system, and priority, which denotes its business impact.
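One way to picture severity and priority as independent attributes is the sketch below; the field names and enums are illustrative, not our actual tracker schema:

```python
# bug_report.py - severity vs. priority as independent ticket fields (sketch)
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):      # technical impact on the system
    CRITICAL = 1
    MAJOR = 2
    MINOR = 3

class Priority(Enum):      # business impact / fix order
    URGENT = 1
    HIGH = 2
    LOW = 3

@dataclass
class BugReport:
    title: str
    steps_to_reproduce: list[str]
    actual_result: str
    expected_result: str
    environment: str
    assignee: str
    severity: Severity
    priority: Priority

# A cosmetic typo is technically minor, yet if it sits on the page every
# client sees first, the business may still want it fixed urgently.
bug = BugReport(
    title="Typo on login page",
    steps_to_reproduce=["Open the login page"],
    actual_result="'Sing in' button label",
    expected_result="'Sign in' button label",
    environment="production / Chrome 91",
    assignee="qa-engineer",
    severity=Severity.MINOR,
    priority=Priority.URGENT,
)
```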

After the issue is fixed, we perform a retest and also verify backward compatibility, to ensure the fix didn’t affect or damage other functionality.

Stay tuned for our next post on QA best practices!
