Why Product Managers Should Do All Their Own QA Testing
When was the last time you heard a product manager say something like this: “I don’t have time to QA, so I need more QA resources if we want to ship faster”? I have certainly said similar things myself in the past.
As I have gotten more experienced and somewhat wiser, I have since become a strong advocate of QA-less product development teams. With my current team, we don’t have QA Analysts or Engineers. PM candidates interviewing to join the team often ask how we manage to get anything done without dedicated QA people. There are a ton of reasons why we don’t have them: we’re too small, it’s too much process and operational overhead, or we don’t have the budget.
However, chief among the reasons why we don’t have a QA person on my team: I just don’t want to.
I feel very strongly about not having a dedicated QA position on my team, because I don’t see “QA” as a role or a person. Too often, “QA” becomes “tester,” and what it truly stands for—Quality Assurance—is forgotten.
To me, quality assurance is a way of product development. It is a mindset the team as a whole adopts, not just the action of testing or a single person’s role and responsibility.
Why QA Exists
The arguments for having a dedicated QA function are numerous.
Having a dedicated QA person is optimization through division of labor and specialization. QA testers are lower cost per headcount. Testing is a time-sensitive function and therefore should have dedicated bandwidth. Testing is not the best use of a PM’s time.
While those are all true statements in their own right, they paint a very singular and specific perspective around a resourcing problem while entirely ignoring the broader implications of having a dedicated, specialized QA function. If viewed only from the lens of resourcing, having a dedicated QA function illustrates one of the most common pitfalls of large organizations: silo-ing.
Think about a 15-person start-up: How often do you find a QA engineer or tester among the team? Very rarely, and that’s because the function is commonly distributed among various folks like engineers, designers, and product managers within a small team.
So why do large organizations tend to silo? Because it’s easy. The larger the team, the more people there are to tend to and the stronger the inertia towards silo-ing becomes. Silos are typically a result of organic growth of an organization, whereby a function that was only occupying someone’s partial bandwidth becomes too much work; this is when PMs start to complain about spending too much time QA testing.
Unfortunately, among a PM’s responsibilities, testing is the dirty work that’s first to go.
Instead of solving potential quality assurance problems at their roots, the easiest solution is often to hire someone to do the testing. This is a short-sighted solution along the path of least resistance.
The Quality Mindset
So you may ask: if I’m in fact spending most of my time testing as a PM, what should I do?
First, you must reset your thinking by doing the following:
Distinguish between “testing” and “quality assurance.”
Quality assurance can very easily become an afterthought. You must understand that true quality assurance does not start when something is ready to be tested—it starts at a product’s conception.
A quality mindset must be present throughout every portion of the product development cycle: requirements gathering, design, development, deployment. Every member of the team has to be involved in making quality happen and chip in to do all the things needed to reduce testing overhead: write requirements with acceptance criteria, create better design specs that are easily verified, conduct more unit or local testing before deploying to a testing environment.
Doing the necessary planning across the team allows for smarter, faster, and less tedious product testing. Quality derives not from having certain pairs of eyes on the product for testing once it’s close to deployment — it’s about having everyone’s eyes on the product throughout the process.
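One concrete way a team reduces testing overhead, as described above, is to turn acceptance criteria from the requirements directly into automated checks that run on every build instead of being verified by hand. A minimal sketch of the idea—the `apply_promo` function and the discount-cap criterion are hypothetical examples, not from any real spec:

```python
# Hypothetical acceptance criterion from a requirements doc:
# "a promo code never discounts more than 50% of the cart total."
# Expressed as a unit test, it is verified automatically on every
# build rather than checked manually in a testing environment.

def apply_promo(cart_total: float, discount_pct: float) -> float:
    """Apply a promo discount, capped at 50% of the cart total."""
    capped = min(discount_pct, 50.0)
    return round(cart_total * (1 - capped / 100.0), 2)

def test_discount_is_capped_at_half():
    # The acceptance criterion, stated as an assertion.
    assert apply_promo(100.0, 80.0) == 50.0

def test_normal_discount_applies():
    assert apply_promo(200.0, 10.0) == 180.0

if __name__ == "__main__":
    test_discount_is_capped_at_half()
    test_normal_discount_applies()
    print("acceptance checks passed")
```

The point is less the code than the handoff: when the PM writes the criterion precisely enough to become an assertion, the engineer can automate it, and nobody has to re-verify it by hand each cycle.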
Product Management and Quality Management
As a product manager, there is no one who knows the product as well as you do. Therefore, I believe no one can test and ensure quality as well as the product manager.
Product managers, by virtue of their role, have the broadest knowledge across all the components of the product. Even lacking specific depths such as UX design or engineering, good PMs can spot issues across all the functional disciplines. This in turn keeps testing cycles tight and reduces the number of touch-points needed to get something fixed.
Testing also keeps the product manager close to their product.
If the product has simply gotten too big to be tested by the PM owner, consider breaking the greater product into smaller distinct pieces with a fully staffed, cross-functional product development team to support each piece. Alternatively, consider hiring an APM (rather than a QA engineer) into the same team to alleviate the QA test load and take on various products and features.
Ownership and Accountability
While PMs are best suited to do QA work, engineering is still by far the single biggest point of leverage for assuring overall quality. So, where does engineering fit into this paradigm?
Certain organizations put 100% of the accountability and ownership of quality solely on the engineer (e.g. Facebook, see below). Engineers are absolutely required to conduct unit tests and local tests, but at the end of a long sprint to build the product, it’s often hard for someone to be critical of the details.
I make a habit of not speaking on behalf of the engineering teams I work with, so I’ll speak only in anecdotes based on what I’ve observed between the best and worst engineering teams I’ve worked with:
- The worst code requires the most testing. Having a codebase you can be proud of goes a long way in reducing testing overhead. Invest in making the foundational technologies better. Good technologists also build in lots of automated testing into their code as guard rails to flag issues early and minimize potential bugs.
- The best technology organizations don’t rely on testing to catch issues. I believe Facebook runs with minimal QA (if at all) because their culture places accountability on the individual engineers to ship bug-free code with complex processes and safe-guards around the scope of any impact (e.g. with phased roll-out to greater and greater groups of users).
- Bring everyone into the quality process. At the game studio I worked at, throughout the product development cycle, the entire cross-functional team (up to 30 artists, designers, engineers, and producers) would conduct weekly play tests. Everyone would get into a room for an hour, play the latest version of the game, and record all the bugs on a big whiteboard wall. Not only did this guide iterative development, it also gave each team member regular feedback on their own work (e.g. that art asset doesn’t look right) and an opportunity to fix issues before the next play test.
- If you have QA people at all, have them test the product like a user would. The mobile gaming studio I worked for used a small centralized team of highly skilled QA generalists that acted as a SWAT team for tactical testing on an as-needed basis rather than always being part of the development cycle. Their job was to try and break the game. However, they played the games as if they were players—without any context of what the requirements were—and pointed out issues as an end-user would. It’s a very effective way to have “fresh eyes” on the product before you ship it.
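The phased roll-out mentioned above can be sketched in a few lines: gate the new code path by a rollout percentage so a bug’s blast radius is limited to a small cohort before the feature reaches everyone. This is an illustrative sketch only—the names (`ROLLOUT_PHASES`, `is_enabled`) are made up, not any particular company’s feature-flag API:

```python
# Illustrative sketch of a percentage-based phased roll-out.
# A deterministic bucket (user_id % 100) means a given user stays
# in or out of the cohort as the rollout widens, so a regression
# in phase 1 touches ~1% of users instead of all of them.

ROLLOUT_PHASES = [1, 5, 25, 100]  # percent of users, widened over time

def is_enabled(user_id: int, rollout_pct: int) -> bool:
    """Return True if this user falls inside the rollout cohort."""
    return (user_id % 100) < rollout_pct

# Phase 1: only ~1% of users see the new code path.
exposed = [u for u in range(1000) if is_enabled(u, ROLLOUT_PHASES[0])]
```

Real systems typically hash the user ID together with a per-feature salt so different features get different cohorts, but the containment idea is the same: quality failures are caught by monitoring a small exposed group, not by a QA pass.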
Going from 90% to 100%
In all my observations as a PM, a combination of engineering excellence and product-centric requirements validation covers 90% of quality assurance needs.
The remainder of the testing can be done by either a lightweight, centralized QA team (per above) or other functions within the company that should know the product very well (marketing, sales, customer service) but aren’t involved in the day-to-day building of the product. They can offer the fresh eyes needed to work out the last kinks.