Follow These 6 Steps and Your SaaS Product Quality Will Rock 'n' Roll

Ran Eitan · Published in The VP’s Dilemma · 8 min read · Feb 13, 2018

You are a true test-driven evangelist in your organization. You think you have it all figured out: your developers write unit tests, and you have SonarQube to prove it. Your automation tests cover every possible endpoint. You have built glorious UI tests implementing every story DoD as defined by the Product Owners. Yet your quality remains low. Customers consistently complain about severe bugs, broken functionality and system instability, and you have no clue how you got here.

What is REALLY going on here?

When working in Agile, be it Scrum or Kanban, most startups’ R&D teams implement automation as follows: for every Story defined, the PO (product owner) holds a discovery session with the team’s technical point of contact. That can be a Team Leader, a Technical Leader (aka tech-lead) or a subject-matter expert who provides feedback, asks challenging questions and comes up with Story estimations. All of this compiles into a well-defined, estimated Story.

Done right, this is an ongoing process performed just in time. In Scrum, it typically occurs during the current Sprint as a discovery session, completed before the next Sprint starts. In Kanban, it happens continuously: the team pulls the highest-priority “baked” stories from the backlog and grooms them jointly.

What typically happens next is that the team begins working on the groomed Stories, as part of the new Sprint or by priority in Kanban, breaking them down into development and QA-related tasks and assigning them to the relevant developers or testers.

That’s when the developers start working on their tasks: a quick napkin or whiteboard design/modeling phase, peer design reviews, code writing, unit tests, a pull request, a green build and PUSH!
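To make that flow concrete, here is a minimal sketch of the kind of unit test that gates the pull request. The `billing.apply_discount` module and function are hypothetical, used purely for illustration, and the tests assume a pytest-style runner:

```python
import pytest

from billing import apply_discount  # hypothetical module and function under test


def test_apply_discount_reduces_price():
    # Happy path: a 10% discount on 100.0 should yield 90.0.
    assert apply_discount(price=100.0, percent=10) == 90.0


def test_apply_discount_rejects_negative_percent():
    # Negative input should raise, not silently produce a price increase.
    with pytest.raises(ValueError):
        apply_discount(price=100.0, percent=-5)
```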

Related: Can I Become a VP R&D Unicorn Applying the 10,000 Hours Rule?

So far so good.

Testers, on the other hand, will typically start by breaking the Stories down into test-cases. If the PO was thorough, the tester’s life may be a bit easier, as the story DoD already covers many of the test-cases. Where feasible, most test-cases will be automated, leaving only a few for manual testing (those add technical debt we should try to avoid, and are a topic for an entirely separate post).
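As a hedged sketch of that breakdown, a story’s success and negative scenarios might be encoded as one parameterized automated test. The endpoint, payloads and expected statuses below are hypothetical:

```python
import pytest
import requests  # assumes the system under test exposes an HTTP API

BASE_URL = "http://localhost:8000"  # hypothetical test environment


@pytest.mark.parametrize(
    "payload, expected_status",
    [
        ({"email": "user@example.com", "plan": "pro"}, 201),      # success scenario
        ({"email": "not-an-email", "plan": "pro"}, 400),          # negative: invalid email
        ({"email": "user@example.com", "plan": "unknown"}, 400),  # negative: unknown plan
    ],
)
def test_signup_story_cases(payload, expected_status):
    # Each tuple above is one test-case derived from the story's DoD.
    response = requests.post(f"{BASE_URL}/signups", json=payload)
    assert response.status_code == expected_status
```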

When the automation actually runs depends on your startup’s specific continuous integration (CI) implementation.
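One common pattern (a sketch, not the only option) is to tag tests by tier and let the CI pipeline choose what runs and when. The `smoke` and `nightly` marker names below are hypothetical conventions, assuming pytest:

```python
import time

import pytest


@pytest.mark.smoke
def test_service_health():
    # Fast, cheap check: suitable for every push.
    assert 2 + 2 == 4  # placeholder for a real health-check call


@pytest.mark.nightly
def test_full_provisioning_flow():
    # Slow end-to-end scenario: scheduled nightly rather than per-push.
    time.sleep(0.01)  # placeholder for real multi-step API/UI actions
    assert True
```

The CI job then selects a tier per trigger, e.g. `pytest -m smoke` on every push and the full suite on a nightly schedule (custom markers would be registered in `pytest.ini` to avoid warnings).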

Not bad, right?

Well, “not bad” won’t cut it.


Unfortunately, I have learned over the years that while we, VPs of R&D, bring to the table a very strong background in the development world and focus on solution architecture, design, tech-stack, coding practices and quality, we leave out those same aspects of Quality Assurance: its architecture, design, tech-stack and automation coding practices.

Let me explain.

A technical leader working on a story will typically try to see the broader picture and understand how that single story falls into place with all the other stories. How that additional piece of code fits with all the other moving parts and pieces of the system. How she can design it to match the overall architecture and standards. How she maintains a clean API. How she reduces the overall system entropy with every line of code written.

She would even have a peer review of the architecture and design, and consult with the architect (or the VP R&D, if one doesn’t exist) to ensure that no concepts or standards have been broken.

Rarely have I seen the same paradigm applied to testing in startups.

In most cases, that unique point of view is lacking or overlooked.

As VPs of R&D, it’s our responsibility to ensure that the same mechanisms are applied to both our production code and the code that tests it. CODE is CODE is CODE.

QA should adhere to the same standards. Given a story, and before writing a single line of automation code, we should take a broader view and consider the Epic (a series of Agile stories) as a whole: both the stories we have already automated and those we haven’t yet, as well as the business use-cases we are trying to deliver to our customers and the ways in which customers will be using the product.

A Story by definition adds incremental value and increases the overall value proposition. With regard to Quality Assurance, though, one Scrum or Kanban story doesn’t tell the entire “story”.

Similar to the process developers follow, we have to educate our organization to apply the same principles and concepts to testing. Otherwise, you may have hundreds of tests coming up green every single time, yet still have low coverage of your customers’ use-cases.
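To illustrate the difference, here is a sketch of an epic-level test that walks one realistic customer journey across several stories, instead of asserting each story in isolation. All endpoints, payloads and the base URL are hypothetical:

```python
import requests

BASE_URL = "http://localhost:8000"  # hypothetical test environment


def test_owner_signs_up_invites_teammate_and_exports_report():
    session = requests.Session()

    # Story 1: the account owner signs up.
    r = session.post(f"{BASE_URL}/signups", json={"email": "owner@example.com"})
    assert r.status_code == 201
    account_id = r.json()["account_id"]

    # Story 2: the owner invites a teammate.
    r = session.post(f"{BASE_URL}/accounts/{account_id}/invites",
                     json={"email": "teammate@example.com"})
    assert r.status_code == 201

    # Story 3: the owner exports a report, the step customers actually pay for.
    r = session.get(f"{BASE_URL}/accounts/{account_id}/reports/export")
    assert r.status_code == 200
    assert r.headers.get("Content-Type", "").startswith("text/csv")
```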

The myth of the 100% test automation coverage


Personally, I don’t believe any type of testing can reach 100% coverage, though it does have a nice ring to it in a blog post title. But seriously, even if there were such a thing as 100% test automation coverage, I would never set it as a goal.

What does 100% coverage mean anyway?

Does covering 100% of every code line written with unit-tests fall into that definition? Does covering all public API endpoints and all possible JSON contract transformations with integration tests apply? Should your team have 100% unit-test, system-test, integration-test and UI-test coverage all at the same time?

The answer is clearly no.

As our main focus here is setting the right automation testing mindset in the R&D organization, I would say this: with very limited time, high pressure to meet deadlines, scarce resources and constantly changing priorities, perfect is the enemy of good, and aiming for that 100% coverage objective is a total waste of your teams’ time, effort and focus.


What can you DO TOMORROW to improve?

Assuming all automation development, provisioning, monitoring and reporting tools are already in place, here is what I suggest you do:

  1. Initiate Story Kickoffs - When a story, or multiple stories, are ready to be presented and discussed with the team, the PO initiates a Story kickoff. In Scrum this happens continuously, in preparation for the next sprint(s). In Kanban, it happens just in time, before work on these stories starts by priority. While some may think that Planning meetings are the right setting for this, I would argue otherwise. First, Story kickoffs typically produce additional inquiries and discovery tasks to be completed before work on these stories begins. Second, planning meetings should be short and efficient so that all stories can be covered, and they should end with a clear plan and team commitment; if you start your stories’ discovery sessions during planning, I assure you nothing good can come of it. Finally, ensure product story kickoffs include both the tech/team-lead and the relevant tester. The QA brainstorming about the story starts right there.
  2. Initiate Epic Kickoffs - Throughout this post, I’ve stressed the Story element of Scrum or Kanban. Keep in mind that Epics come first chronologically. Everything I’ve said about stories should also be applied to Epics. QA managers or testing leads should be heavily involved early in the process, considering how the entire testing plan will be architected and designed to address the epic(s).
  3. Ensure your team has well-defined Acceptance Criteria - Encourage the PO and the entire product team to be more engaged in the overall QA process. Work jointly with the product team to define better, more elaborate story Acceptance Criteria as part of the DoD. Ideally, elaborate acceptance criteria will include not only multiple success-scenarios but also multiple negative-scenarios, which can later be translated into negative testing (see the sketch after this list). Security, performance and load-related criteria and SLAs may also find their way into the story acceptance criteria. Having the team review and accept these criteria is a critical step and a great development practice. I would even consider adding a mandatory checkbox to ensure you don’t skip this important step.
  4. Don’t skip the Design Review step - Encourage automation design reviews by the PO and the team. This can be a meeting in which the tester presents the test scenarios, considerations and assumptions BEFORE story implementation. That way, the entire team is aware of how the system and product are tested, and can provide feedback, insights and corrections where needed. I hold this meeting in high regard, as the cross-fertilization has developers rethink and consider aspects of the design and development they may have overlooked or missed.
  5. Perform Cleanup - Even though your build system shows GREEN, you know that it’s anything but. There are probably tens or hundreds of “low quality” tests running with every code push and build cycle; they miss the most important use-cases performed by real customers, and the result is production bugs. Ideally, a task-force made up of a tester and a product manager would review all legacy tests, sorting out what is relevant and what isn’t, what should be redefined and rewritten, and what should be trashed, saving time in the build cycle. Start with the areas customers complain about the most and move gradually into other areas. Trust me on that: done well, it will be worthwhile in the long run.
  6. Double down on Recruiting - Some companies aim very high when hiring developers, but when it comes to testers, they settle for mediocre. That is a great strategy if your goal is to FAIL fast; do whatever you can to avoid it. Hiring great automation-developers (aka testers) is equally important, if not more so, to ensuring the quality of your services and solutions is high.
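As promised in step 3, here is a sketch of acceptance criteria translated directly into automated checks, including negative scenarios and an SLA-flavoured criterion. The endpoint, token and latency budget are hypothetical:

```python
import requests

BASE_URL = "http://localhost:8000"              # hypothetical test environment
AUTH = {"Authorization": "Bearer valid-token"}  # hypothetical test credential


def test_create_project_succeeds_for_authenticated_user():
    # Positive criterion: an authenticated user can create a project.
    r = requests.post(f"{BASE_URL}/projects", headers=AUTH, json={"name": "Q1 Launch"})
    assert r.status_code == 201


def test_create_project_rejected_without_token():
    # Negative criterion: anonymous requests must be rejected.
    r = requests.post(f"{BASE_URL}/projects", json={"name": "Q1 Launch"})
    assert r.status_code == 401


def test_create_project_meets_latency_sla():
    # SLA criterion: the endpoint answers within the agreed 500 ms budget.
    r = requests.post(f"{BASE_URL}/projects", headers=AUTH, json={"name": "Q1 Launch"})
    assert r.elapsed.total_seconds() < 0.5
```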

If you enjoyed reading this article, please click the 👏 button and share it to help others find it! Feel free to leave me a comment or contact me at @raneitan. I take the time to closely read every comment and answer any question asked.
