Merging Product Engineering and QA

Ürgo Ringo
Inbank product & engineering
7 min read · Mar 8, 2024


Photo by Lance Grandahl on Unsplash

Merging the product engineer and QA roles is one of the most impactful levers a product team has for improving its delivery efficiency. Just like another role — the business analyst — QA is a specialization that more often than not ends up slowing the team down. It creates an artificial segregation of skills and responsibilities that gets in the way of delivering high-quality software through high-frequency releases.

In an analysis we did at Inbank, we found that teams were able to reduce their cycle time by up to 45% by investing in acceptance test automation and removing handoffs in testing.

This article does not cover domains such as gaming or products with a heavy hardware component, both due to their specialized nature and because I have no first-hand experience with them.

Embedded dedicated QA

When I was working in software consulting at the beginning of the 2000s, we always tried to have an embedded QA person as part of our development team. At the time this was innovative, as many of our bigger clients had a separate QA group working outside the development teams.

The setup we had was the following:

  • lots of unit tests
  • some integration tests — mostly for DB access
  • no component or any other higher-level tests
  • releasing every 1–2 weeks

Note: I’m using the microservice test classification described by Toby Clemson from ThoughtWorks, which I have found suitable even outside a microservices context.

With this setup, which relied heavily on manual testing, having a dedicated QA per team of 3–4 engineers worked out quite well.

Moving to no dedicated QA

When I joined Wise at the end of 2013, we had 1 QA per ~15 engineers (the whole company was at ~60 people). At one point that QA person left the company and we never hired a replacement.

This setup was initially quite alien to me. I often found myself looking for reasons why a dedicated QA would have been beneficial. However, over time I realized that a separate QA is just a band-aid. It allows “outsourcing” part of the accountability for the quality of an engineer’s work.

Similarly to practices like database-centric design, which at some point were quite reasonable, the dedicated QA role has lost its value due to changes in the software engineering context.

The main factors supporting the merging of QA and engineering roles are:

  • simpler high-level test automation
  • product engineering culture
  • increase in release frequency

High-level test automation

When I started my career as a software engineer, even unit testing was not as ubiquitous as it is today. My first job as a software developer was in Visual FoxPro. It didn’t have any support for unit testing, so we had to build a basic JUnit clone ourselves. Being able to automate integration or component-level testing was something we didn’t even dream of at the time.

Before the widespread use of rich web clients, UI rendering was part of the backend logic. Also, most backends were monoliths, and there was no API against which to run component-level tests. Pretty much the only option for high-level testing was running tests via the UI. This was expensive to maintain, slow to run, and cumbersome to set up (to a large extent that is still the case today).

Nowadays unit and integration testing are obvious parts of software development. Most developers would not imagine working without writing some kind of tests for their code. What has changed more recently is how easy it has become to automate component-level tests (running the whole microservice/application with its external dependencies stubbed out). This is significant because these tests are a better replacement for manual acceptance testing than lower-level tests at the integration or unit level.

Today pretty much any backend application has an API, thanks to the frontend being separated into its own application and/or systems being split into smaller, separately deployed components (microservices). Because of this, most frameworks provide out-of-the-box support for component testing, which means the complexity of writing a component test is similar to that of writing an integration test. In Spring Boot, all I need to do is add the @SpringBootTest annotation to my test and I can start calling my application via its API. In the microservice our team is currently working on, the whole suite of 155 component-level acceptance tests runs in ~11 seconds (on an Apple M2 Pro), so the runtime penalty of running tests against the whole application is negligible.
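To make this concrete, here is a minimal sketch of what such a component-level test can look like in Spring Boot. The endpoint, request/response types, and scenario are hypothetical (not the actual Inbank API), and in a real suite the application’s external dependencies would additionally be stubbed out, e.g. with WireMock.

```java
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.http.HttpStatus;

import static org.junit.jupiter.api.Assertions.assertEquals;

// Boots the whole application on a random port and calls it via its real API.
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class LoanApplicationAcceptanceTest {

    @Autowired
    TestRestTemplate restTemplate; // HTTP client pre-configured for the running app

    @Test
    void acceptsValidLoanApplication() {
        // Hypothetical endpoint and payloads, for illustration only
        var response = restTemplate.postForEntity(
                "/v1/loan-applications",
                new LoanApplicationRequest("EE", 1000),
                LoanApplicationResponse.class);

        assertEquals(HttpStatus.OK, response.getStatusCode());
        assertEquals("ACCEPTED", response.getBody().status());
    }

    record LoanApplicationRequest(String countryCode, int amountEur) {}
    record LoanApplicationResponse(String status) {}
}
```

Because the test exercises the application through the same API its clients use, it doubles as executable acceptance documentation rather than a test of implementation details.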

Product engineering — owning the outcome

For me, the main difference between a software developer and a product engineer mindset is that the former focuses on output while the latter also thinks about the business outcome of their work.

I believe the majority of software engineers care about their work and want to do a good job. And just like working in a high-trust culture or owning part of the business inspires us to give our best, it is truly empowering to realize that no one else owns the quality of your work.

Having a separate QA role gets in the way of this thinking. Especially if the organization carries the baggage of a low-trust culture, there is sometimes a fear that the person who implements the solution cannot be trusted to test it.

The thinking is somewhere along the lines of:

Developers are lazy or lack sufficient product knowledge and hence cannot be trusted to do a good job at verifying their work.

This is a classic example of Theory X thinking. I do agree with the laziness part, though. Thankfully there are two solutions: automation and cultivating the mindset that engineering is not only about building things but about solving problems. Verifying that the solution not only works as specced but actually solves the problem is part of the same job description.

As for the lack of product knowledge: if we believe that a separate QA person can gain that product understanding, then surely an engineer can as well? Of course, for that they cannot be treated as precious mushrooms locked away in a cellar where nobody can disturb them. They have to be involved in product discovery and business decision-making. Luckily this is something the organization has a choice on — it can either encourage this way of working or make it harder.

Finally, coming up with the acceptance test scenarios and automating them are two different activities. Nothing stops the team from collaborating on figuring out the scenarios (in fact, that is a good idea for covering as many perspectives as possible). Once this is done, the engineer can automate them as part of implementing the new feature (ideally before writing the production code), as in the sketch below.
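One lightweight way to make that split visible is to capture the team’s agreed scenarios as empty, failing test stubs, which the implementing engineer then automates while building the feature. The feature and scenario names below are made up for illustration.

```java
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.fail;

// Scenarios agreed on by the whole team, written down before implementation.
// Each stub fails until it is automated alongside the feature itself.
class RefundAcceptanceTest {

    @Test
    void fullRefundRestoresTheOriginalCreditLimit() {
        fail("scenario agreed, not yet automated");
    }

    @Test
    void partialRefundRecalculatesRemainingInstallments() {
        fail("scenario agreed, not yet automated");
    }

    @Test
    void refundRequestAfterContractEndIsRejected() {
        fail("scenario agreed, not yet automated");
    }
}
```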

Release frequency

While at the beginning of the 2000s it was pretty impressive for a team to be able to release every 1–2 weeks, nowadays high-performing teams release multiple times per day.

A faster release cycle means faster learning. It also means higher-quality software, as we are not accumulating lots of changes (and therefore risk) in a single deployment.

As per research shared in the “Accelerate” book:

Astonishingly, these results demonstrate that there is no tradeoff between improving performance and achieving higher levels of stability and quality. Rather, high performers do better at all of these measures [deployment frequency, lead time for change, mean time to recovery and change failure rate].

Having a separate QA person on the team will inevitably slow down its release frequency. First, it allows the team to invest less in test automation. Second, even if the QA person writes the acceptance tests, this still introduces an additional handoff into the development cycle. Any handoff means loss of contextual information (no matter how much documentation the team writes). In the best case there is a single handoff per task, but if QA finds a bug there will be another back-and-forth between the engineer and the QA, resulting in additional context switching for both.

Of course, neither high-level test automation nor a product engineering culture means that nobody besides the engineer who implemented the feature should ever review it. Depending on the complexity of the feature and the maturity of the team, it can be perfectly reasonable for the product manager or the designer to do a final review of the functionality. However, this should be done on a case-by-case basis and not become a permanent part of the regular process.

Another thing to keep in mind is that absolutes rarely make sense, and this also applies to test automation. There are areas like e2e testing, UI testing (not including frontend logic testing), or exploratory testing where automation may not fit, either because of the cost/value ratio or simply because humans do that type of testing better than machines.

For more reading on this topic, I recommend checking out the following articles by Gergely Orosz. They are part of The Pragmatic Engineer newsletter, which is not free but worth the money.


Ürgo Ringo

In software engineering for 20+ years. Worked as IC/tech lead at Wise. Currently tech lead at Inbank. Interests: product engineering and org culture.