GOAT app on iOS

QA Testing Process and Implementation for Startups

Rex Tan
GOAT Group Engineering

--

The dynamics of Test Engineering have shifted in the past decade. Most startups now implement Test Driven Development, and plenty of analysis has gone into finding the best tester-to-engineer ratios.

Big companies have plenty of QA Engineers assigned per team, with armies of outsourced manual testers behind them. However, after more than a decade in the industry, the one thing I’ve learned is to always tailor your QA testing process to what works for where you’re at.

One of the biggest QA challenges for GOAT, and for most startups, is distinguishing between quality on other people’s terms and quality as a QA person defines it. Most startups have to start lean in the earlier phases. Once things get more ambitious, they can scale accordingly.

In early iterations of tech products, there will usually be one principal tester to start. This brings us to the first strategy.

Automate from the Beginning: Should we though?

Automation is a huge benefit. The return on investment can be astronomical. Being able to run multiple processes and check for errors over and over again at the push of a button, without the risk of human error, is a must-have. Good QA Engineers can spin up an automation framework and have it ready to add tests in less than a day.

Here at GOAT, we use Cucumber with Ruby for web and backend because of its high industry adoption and human readability. For mobile, we use native Android in Kotlin and native iOS in Swift. We can support these now because we have the manpower to maintain them. But two years ago?
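To make that concrete, here is a minimal sketch of what a Cucumber step definition in Ruby can look like, assuming Capybara and RSpec matchers are wired up in the support files. The feature, step text, paths, and data-test-id attributes below are hypothetical examples, not steps from GOAT’s actual suite.

    # features/step_definitions/cart_steps.rb
    # Minimal, hypothetical Cucumber steps driven by Capybara.
    # Assumes features/support/env.rb loads Capybara and RSpec matchers.

    Given('I am on the product page for {string}') do |product_slug|
      visit "/products/#{product_slug}"  # Capybara navigates the browser
    end

    When('I add the product to my cart') do
      # Click a control located by a dedicated test identifier
      find('[data-test-id="add-to-cart"]').click
    end

    Then('the cart badge shows {int} item(s)') do |count|
      expect(page).to have_css('[data-test-id="cart-badge"]', text: count.to_s)
    end

The human-readable step text is what makes Cucumber attractive here: product, design, and QA can all read the scenario without reading the Ruby underneath it.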

Automation is tougher for early-stage startups because of cost, product iterations, and time. Early-stage startups are under heavy time constraints with sparse manpower. Release after release will change dramatically on the design and technical sides. Automation requires the correct data attributes and identifiers to assert against. The more tests you have, the more you have to maintain as those change.
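As a hypothetical illustration of that maintenance cost, compare two ways a Capybara assertion can find the same button. Both assume a Capybara/RSpec setup that is already configured, and the selectors are made up for this example.

    # spec/features/buy_button_spec.rb -- hypothetical, for illustration only
    require 'capybara/rspec'

    RSpec.feature 'Buy button', type: :feature do
      scenario 'is visible on the product page' do
        visit '/products/example-sneaker'  # illustrative path

        # Brittle: coupled to layout and CSS classes, so any redesign breaks it.
        # expect(page).to have_css('.hero > .cta-row .btn-primary', text: 'Buy Now')

        # Stable: coupled to a data attribute engineers have to add and preserve,
        # which is exactly the accommodation early-stage teams rarely have time for.
        expect(page).to have_css('[data-test-id="buy-now"]')
      end
    end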

You can have 10 test cases (which does not sound like much) that take people most of the week just to write, only to be out of date after a single PR.

Most startups don’t have the time to accommodate the data attributes required for QA to do their job, let alone slow down just to keep things automated. Automation only makes sense if things are stable and don’t change dramatically. When iterations slow down, and more testers are involved, that’s when automation makes more sense.

Starting with a manual tester who can execute tests and update test cases on the fly, while new builds are getting released, can definitely provide a higher level of quality. The cost is relatively lower and the coverage broader. They can test things automation typically cannot catch (performance issues only a human eye notices, layout issues, edge case testing).

They can also lay the foundation for future automation by documenting how the product works. Once automation engineers come on board, they can go over those test cases and automate what they can.

QA to Engineer ratios: Really??

In my time in the QA industry, I’ve often been asked, “What’s your QA to Engineer ratio?” This may make sense in larger corporations where people are viewed as numbers, but it does not in smaller startups.

Not all projects are equal. One Engineer can be on a project that requires a full regression run on all platforms, which can take days. Three Engineers can work on animations that can be tested in a matter of minutes. Arbitrary ratios should never be used to measure your quality. Measuring that cost should happen in sprint planning, story time, grooming, or any other meeting where you can dive into a ticket knowing what it entails.

At GOAT, we calculate the QA cost of testing in sprint planning. We also write test cases when stories get added to a release version, and attach those test cases to the stories. This way we know the added cost, we know exactly what we’re testing, and we can make sure we have coverage.

We schedule test case reviews for major features. We also run a QA triage every time we do a full regression, once features enter the integration phase alongside everything else going into a release. Startups may be small, but communication and visibility around issues cannot be taken for granted. This is especially true when scaling.

ATDD: How do you implement this?

Personally, I am a big fan of ATDD (Acceptance Test Driven Development). The benefit of this methodology is that QA, Design, and Engineering collaborate very early in the release process by establishing Acceptance Criteria. This cuts risk, scope creep, and mismatched expectations. It funnels design changes through one agreed-upon place, mitigates miscommunication, and plans around edge cases before a ticket even reaches Engineering. Automation can request its requirements up front. More accurate scope and cost can be established earlier in the process. It allows everyone, including manual and automation QA, to be proactive instead of reactive.
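One lightweight way to put this into practice (a sketch, not a prescription of how GOAT runs it) is to capture the agreed Acceptance Criteria as Cucumber steps at grooming time and leave them pending, so automation’s requirements (fixtures, identifiers, views) are spelled out before Engineering starts. The feature and step names here are invented for the example.

    # features/step_definitions/offers_steps.rb -- hypothetical ATDD sketch
    # Acceptance Criteria agreed on in grooming become Cucumber steps right away,
    # then stay pending until the feature and its test identifiers actually ship.

    Given('a seller has listed a pair of sneakers') do
      pending('needs the seller-listing fixture named in the Acceptance Criteria')
    end

    When('a buyer submits an offer below the asking price') do
      pending('needs the offer form and its data-test-id attributes')
    end

    Then('the seller sees the offer in their dashboard') do
      pending('needs the dashboard view described in the design spec')
    end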

The drawback is that the up-front costs can get high. While it can theoretically work, it requires a lot of diligence and commitment. Early-stage startups tend to run frantically in the beginning, and committing to a methodology like this requires a level of discipline that is hard to sustain in those early phases.

Around year 2 or 3 of a growing startup, hiring starts to get frantic. Departments tend to start filling up. Projects and initiatives become more ambitious. Growing pains run rampant. The struggle to integrate code, release products, and maintain the same cadence between platforms grows exponentially. ATDD can help with this, but how can you even introduce it at this stage of a company?

Feature Teams! GOAT is at the stage where introducing feature teams makes sense. Breaking individual features out to small engineering/design/testing groups is ideal. Engineering Managers can run point on establishing requirements in stories and writing acceptance criteria as a team. This reduces how much organization-wide discipline is required; each feature team only has to hold itself to the process.

The organizational challenge is then coordinating the cadence between the feature teams, architecting those projects, and ensuring the backlog isn’t ignored. This can be done by keeping platform leads and architects separate from the feature teams. They don’t need to be completely involved in every team’s process, but can split their time between the different feature teams. Release strategy can work off hard integration dates set per project.

The QA team’s responsibility in this model is to help drive quality: testing at the feature level, then coming together to run full regression at the integration level. Automation can sit apart from this, as long as it has its requirements. QA Leads can coordinate what gets automated and what should remain manual, and create a testing plan around it. QA Leads can also run the triages and play gatekeeper during the integration testing phase.

GOAT’s Engineering team is hiring!
Visit our careers page to learn more.
