Software Quality from a PM Perspective

Kayvon Ghaffari
Published in Thus Spoke Kayvon
6 min read · Sep 28, 2022

So, a colleague reminded me of a term recently: executable requirements.

An executable requirement (also called an executable specification) goes beyond a mere user story — it describes what a new feature needs to do, or how a system needs to work, in such a way that the tech team can easily write automated tests against it. This often manifests in Gherkin format (given/when/then) or done-when style acceptance criteria for user stories, but not necessarily — the important part is that the team can read and understand the behavior of a requested feature and can create automation to make sure it works as expected.
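
For example, here's a minimal sketch of what one might look like in Gherkin (the feature, email addresses, and messages below are invented purely for illustration):

  Feature: Password reset

    # Each step below maps to a test step the team can automate
    Scenario: Registered user requests a password reset
      Given a registered user with the email "ada@example.com"
      When a password reset is requested for "ada@example.com"
      Then a reset link is emailed to "ada@example.com"
      And the page shows "Check your email for a reset link"

Anyone on the team, technical or not, can read that and agree on what done looks like.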

I am a big fan of thinking of acceptance criteria as a basic test plan per story, because it makes me think like a software quality tester.

Recently, one of the quality engineers on my team gave me one of the best encapsulations of the scope of their software testing work¹ I’ve ever heard:

  1. Does it work the way it’s supposed to?
  2. Can I make it not work the way it’s supposed to?

These points are simple, but extremely important to keep in mind for someone who creates specs and describes functionality. If I can’t adequately describe what a feature is supposed to do, how would someone be able to test if it’s built correctly? If I can’t describe failure states and/or non-happy path behavior, how will we establish guard rails for novice (or malicious) user behavior that might break it?
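
To make that second question concrete, the same hypothetical feature can carry non-happy-path scenarios right alongside the happy path (again, every detail here is invented for illustration):

  Scenario: Reset requested for an unregistered email
    Given no account exists for "nobody@example.com"
    When a password reset is requested for "nobody@example.com"
    Then no reset link is sent
    # Showing the same message either way is a guard rail: the form
    # can't be used to probe which email addresses are registered
    And the page still shows "Check your email for a reset link"

  Scenario: Too many reset requests in a short window
    Given a registered user with the email "ada@example.com"
    When 10 password resets are requested for "ada@example.com" within a minute
    Then further requests are rejected with "Too many attempts, try again later"

Spelling these out in the spec gives the quality team something concrete to try to break on purpose.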

When I thought more about this, it made me appreciate how much overlap PMs and QA² have in job function and attitude. And so here I am, writing an article about it.

What do PMs and quality engineers have in common?

We represent the voice of the customer

First and foremost, we’re both specifically in the business of shipping high quality software that our customers will love (or won’t hate).

Every good QA leader I’ve worked with has said some flavor of this statement:

Quality is everyone's responsibility, not just the test team's

This doesn’t merely mean that our teams are responsible for testing what we produce; it means quality must be a team and company value, and everything we do should be seen through the lens of providing good experiences, both via software and via process, for our customers and the people we work with.

As PMs, our role is to help define what quality means — what are we measuring against? Do we have real world scenarios mapped out? Can everyone from the CEO, to the marketing teams, to our actual customers understand what we’re asking the team to deliver? Removing ambiguity about what we’re building and why we’re building it is a pillar of high quality software delivery.

We embrace process

I’ve noticed that PMs and QA tend to be in favor of implementing processes. Like, sure, we can all focus on writing quality user stories or testing features in a vacuum, but if we see recurring problems, defects, bugs, and broader trends that hurt quality as a whole, we’re both down to take a step back and ask how the system could be set up better in general.

Fixing problems is good; fixing systems to avoid those problems in the first place is great.

And of course, we have our unspoken alliance against the dev team

So, I originally wrote that sentence as a joke, but I think there’s a kernel of truth there that I want to explore. I’ve worked in organizations that seem to think of the delivery team as just the developers. And the rest of us? We’re seen as being here to support the developers instead of participating as full partners on a team.

That feeling sucks, and I’m sure my QA friends also know it all too well. But fellow PMs, let me ask you this — despite knowing how bad it feels to be an afterthought, how often have you personally treated quality or testing processes as an afterthought?

I know I’m not the only one raising my hand in shame.

Where do PMs and QA have conflict?

Okay fine, it’s not all rainbows and kittens between PM and QA. Do our roles ever butt heads? Absolutely.

We’re incented differently

I’d say in general, a PM is incented to complete projects and deliver products to customers. QA is incented to call out when a project isn’t actually complete, and block the delivery of sub-optimal products to customers.

In an ideal world, these incentives are in harmony. If I’m the PM on a team that delivers something broken or frustrating, that’s bad for me — not to mention the users. But every PM has felt the intense pressure to hit a date, and the urge to say “come aaahhhn… it’ll be fine,” or to handwave a “we’ll fix it with a fast-follow release,” takes over. Despite what the quality team says — even if we know they’re right — we’ve all attempted to use black magic to get to green status.

And the worst part is this isn’t just the PMs — everyone is guilty of this. Leadership, marketing, the developers, and kids on Discord are all pissed off when a release is delayed.

It takes immense courage to stand up for quality in the face of disappointed coworkers and customers. A PM worth their salt, even if they are annoyed or panicked, will stand with you — we should know that the alternative is worse. As Mr. Miyamoto said, “A delayed game is eventually good, but a rushed game is forever bad.”

Definition of bugs and defects

It feels like I’ve spent literal months in meetings discussing whether some newly discovered software behavior counts as a bug or a missed requirement.

Do I cover every possible scenario or edge case in my spec/stories? No; I’ll do my best, but it’s just not possible. Most quality engineers know this, and a healthy team understands that it’s the nature of attempting to build software. But I’ve worked with some sticklers who take joy in pointing out minutiae on unhealthy teams.

This is what I like to call the Air Bud Principle. There’s that scene where it’s pointed out that there “ain’t no rule that says the dog can’t play basketball.” Are they technically correct³? Yes. Are they ridiculous? Also, yes.

A QA coworker of mine once told me a good tester thinks about testing scenarios in terms of coverage of risks, and that makes a lot of sense — they can’t test everything, either. A good PM shouldn’t waste their time codifying that a dog can’t play basketball as a requirement, and a good tester won’t waste their time testing whether a dog can play basketball. We’re all on the same dogless-basketball team here.

But back to arguing about bugs. Here’s how I think of it — who cares if a ticket is technically not a bug? Remember the principle above about how quality is the responsibility of all of us? If you’re arguing about whether something that sucks is a bug or not, you’re doing it wrong⁴. The path forward for the team should be to embrace the feedback, account for the fix/change in the schedule, and get it done for our users.

Final thoughts

I think I could go on and on (and maybe I’ll make a part 2?), but I’ll stop here with a couple of questions for both PM and QA:

  1. What do you wish your counterparts knew about your role?
  2. What behaviors have you seen from your counterparts that fostered a healthy relationship between PM and QA?

Kayvon Ghaffari is some guy who does his very best to care about quality

Footnotes:

  1. There’s way more that goes into being a quality engineer as a career than just these points, but I’m specifically talking about testing activities.
  2. I’m just gonna use QA/quality engineer/tester interchangeably in this article, because I’ve heard multiple preferences. So you’ll either all be happy or all offended. Sorry in advance… or at the end, I guess? But to be fair, I’m using PM to cover product/program/project, and people have a lot of feelings about that, too.
  3. In this case, it is not the best kind of correct. Sorry, Hermes.
  4. Yeah, I know that some places use bug/defect occurrences as serious metrics. Luckily for me (and you) I don’t work on airplanes.
