Eight Signs Your Agile Testing Isn’t That Agile

Questionable testing approaches in Agile development

Blake Norrish
Slalom Build
8 min read · Jul 27, 2021


Agile software development comes in many flavors, and nobody exclusively owns its definition. Unscrupulous consultants love this, because it means they can make bundles of money selling their version, or coaching clients on how to be “more” agile.

I have also been a consultant, and I have opinions about what is Agile and what isn’t. So here are eight signs that your Agile testing is not as Agile as you think it is.

Only your testers test

What is a tester? As Joel Spolsky famously wrote, they are cheap resources you hire because you don’t want your developers testing. This opinion is anathema to Agile testing — if you hire cheap resources to ‘test’, you aren’t testing with an Agile mindset.

Testing is not a role, it’s an activity. It’s an activity that everyone on a team should participate in. Even if you have a role on your Agile team with “quality” in the title (and I think you absolutely should!), they should not be the only ones who test.

“But developers can’t test their own code!” some might say. Well, they can’t be the only ones that test their code, but they certainly can and should test.

“Developers can’t test, it’s not how they think!” others might argue. That’s an interesting opinion, but not one I subscribe to. While I agree that there is some confirmation bias introduced by knowing how something is built, this does not preclude those who helped build the software from testing it. In fact, I would argue that testing, thinking destructively, finding edge cases, etc. are all critical skills for software developers to learn.

I would even go so far as to say that true Deep Testing, as defined by James Bach and Michael Bolton, is a skill that all roles on a team should develop and practice.

Drawing a hard line between development and test activities by assigning all testing to a special group of testers worked in waterfall development, but it is not compatible with Agile methodology.

You create defects for everything

When you have a story in a sprint, and you find an issue with that story, what do you do? For many teams, the answer is still “file a defect.”

In waterfall development, test teams would get access to a new build with new features all at once. They would then start a day-, week-, or even month-long testing cycle. Given the number of defects that would be found and the long delay between discovery and fixing, it was critical to document every single one.

This documentation is not necessary in Agile development.

When you find an issue, collaborate with the developer and get it fixed right then and there: the same day, or at least within the same sprint. If you need to persist information about the defect, put it in the original story. There is no need to introduce separate, additional documentation.

There are only two reasons you should create a defect.

One: an issue was found for previously completed work, or for something that is not tied to any particular story. This issue needs to be recorded as a defect and prioritized. (But, see next topic!)

Two: an issue was found in a story, and the product owner decides that resolving it is significantly lower priority than completing the story and that the story can be accepted as-is. In this case a defect is created to capture the remaining work, and the current story is moved to done.

Creating defects for every issue found for in-flight work is a holdover from the waterfall testing days of yore.

PS: this is still true even if you hide your defects as sub-tasks.

You assign a priority to defects

So, you have a defect for a valid reason. (See previous section!) The waterfall tester would immediately assign that defect both a severity and priority. “Just found a pri-1, sev-1!” was a common exclamation in the days of waterfall testing.

What is priority in Agile? It is simply the order the defect is placed in the backlog. Whether that defect is high or low priority, or something in between, is the product owner’s decision and communicated by its relative position among all the other stories and defects in the backlog. Giving every defect a separate and redundant priority, recorded in a special field, is incompatible with the idea of a prioritized backlog.

Severity is less egregious, but still redundant. The severity of the defect should be obvious given the description recorded. If you really feel you need to summarize this into a single numeric value, fine, but it’s probably going to be ignored by everyone except executives reading vanity reports.

You find a significant number of defects for every story

In waterfall development, there was a mindset of “developers build it, testers test it.” Thus, it was expected that a significant number of defects would be found when a new build was given to the test team.

For many, this mentality has seeped into their Agile development. A story is developed, passed to a QA, and many issues are found. The QA returns the story to the developer to fix the issues. This process is repeated.

Finding significant defects in each story is an indication you still think of testing as a post-development activity, and not something that is done continuously as the story is being implemented. A story’s lifecycle across an Agile board should be thought of as a process of continually increasing confidence. If significant issues are always being found in one of the last stages, something is wrong in an earlier stage. Adjust your testing process to find these issues earlier, rather than treating your two-week sprint as a two-week waterfall.

You exhaustively enumerate test cases in a test case manager

When a huge number of features were dumped on a manual test team, it was great to have a plan for executing all those tests. While developers were off building that first deployment, there wasn’t much for testers to do anyway. Thus, big exhaustive test plans.

Agile stories should be small — it’s the ‘s’ in INVEST. The testing of a single story should not require its own test plan or an enumeration of all test cases.

Does this mean no test documentation? Absolutely not. It’s still important to document in the story what was tested, any test infrastructure that was required, testing challenges that were encountered, etc. If you really feel it’s necessary, you can use external test management tools (Zephyr, TestRail, etc.) to document some of this, but this is often an indication you are falling back into waterfall test case documentation.

Test planning, documenting test concerns and approach, etc. are important when testing in Agile. Exhaustively documenting each and every test case is not.

You automate test cases

Given that we’ve already said exhaustively enumerating test cases is bad, automating those test cases is doubly bad.

“WHAT! Automation is NECESSARY in Agile!” naysayers will say. Absolutely it is, but you shouldn’t automate test cases.

I’ll repeat that because it’s so foreign to some Agile teams: you shouldn’t automate test cases.

Automating test cases (taking the 152 test cases from your test plan and turning them into 152 new automated tests in your ever-growing suite) is a surefire way to build an inverted test pyramid. If you didn’t know, inverted pyramids are bad.

The problem with test cases is that they are usually high-level, whole-application descriptions of expected behavior, whereas we want automation to exist at the lowest level possible.

What should happen instead: from the handful of stories being delivered in the current sprint, several hundred (or even thousand) very low-level unit tests are written, hundreds of component or API tests (sometimes grouped as “subcutaneous” tests) are written, and maybe a handful of new or existing high-level E2E automated tests are written or updated. You should have WAY FEWER automated E2E tests than you have test cases.
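To make the distinction concrete, here is a minimal sketch in Python with pytest. The validation function and all the names are hypothetical, invented for illustration; the point is that one high-level test case (“users can’t register with a bad email”) fans out into many cheap checks at the lowest level possible.

```python
import re

import pytest

# Hypothetical low-level unit; stands in for whatever code your story touches.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")


def is_valid_email(address: str) -> bool:
    """Return True if the address looks like a plausible email."""
    return bool(EMAIL_RE.match(address))


# One high-level test case becomes many fast, precise unit tests.
@pytest.mark.parametrize("address", [
    "user@example.com",
    "first.last@sub.example.org",
])
def test_accepts_valid_addresses(address):
    assert is_valid_email(address)


@pytest.mark.parametrize("address", [
    "",               # empty
    "no-at-sign",     # missing @
    "two@@ats.com",   # malformed domain
    "spaces in@x.y",  # whitespace
])
def test_rejects_invalid_addresses(address):
    assert not is_valid_email(address)
```

These run in milliseconds and pinpoint failures exactly. A single E2E test can then confirm the registration form is actually wired to the validator; it does not need to re-walk every address variation through the browser.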

Agile teams must actively assess their automation holistically, from unit to E2E, to ensure that the automated tests, all combined, provide the necessary coverage and confidence for new features. Teams should aggressively trim test suites by eliminating redundant tests or pushing tests down the pyramid.

Hearing someone brag about the number of automated E2E tests they have in an Agile shop is a sure sign they are not testing (or automating) with an Agile mindset.

Another great indicator that you have automated too many test cases: you find yourself running suites of tests overnight to get once-per-day feedback. Twelve-hour automated suites were fine when we deployed twice a year; not so much if we want to deploy twice an hour.
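One way to claw back fast feedback while you trim the suite, sketched here with pytest markers (the marker name and tests are hypothetical): tag the few long-running E2E tests and exclude them from the per-commit run.

```python
# In pytest.ini, register the marker so pytest doesn't warn:
#
# [pytest]
# markers =
#     slow: long-running end-to-end tests, excluded from the per-commit run

import pytest


@pytest.mark.slow
def test_full_checkout_flow_end_to_end():
    """Drives the whole deployed application; minutes, not milliseconds."""
    ...


def test_cart_total_includes_tax():
    """Fast low-level check; runs on every commit."""
    ...
```

Every commit then runs `pytest -m "not slow"` for feedback in minutes, while the slow suite runs on a schedule. But treat the schedule as a stopgap: the goal is to shrink that overnight suite, not institutionalize it.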

You need significant regression testing before prod deployments

You just finished a sprint! All your stories were successfully completed! Your product owner wants to deploy to prod! Can you?

If you need a “regression sprint” before you are comfortable pushing to production, your testing can’t really be called Agile. The more testing you need, the less Agile it is.

For compliance, security, or enterprise bureaucracy reasons, it’s not always possible to deploy on-demand (Continuous Deployment, or even Continuous Delivery); sometimes it’s not even possible to deploy every sprint. However, the goal of Agile testing should always be to get all completed work production-ready, as part of the story. The bigger the delta between completed stories and production-ready, the less you can call your testing Agile.

A different way to look at this is to evaluate how done the “done” is in your story’s Definition of Done. It is very easy to start cutting things out when schedule pressure hits. “Well, we don’t really have to do performance testing as part of each story… let’s do that before deployment,” etc. The more you cripple your “done,” the less Agile you are becoming.

You separate testing sprints from development sprints

Developers develop a bunch of stories (in collaboration with QA!) but there are always testing or automation tasks left undone at the end of the sprint. Rather than fixing the root problem (story sizing, estimation, dev-QA collaboration, etc.) the team decides on a strategy of “follow-up” test sprints: the stories are developed in one sprint, then the testing and automation of those stories happens in sprint + 1.

Follow-up test sprints are an admission of failure. They take your process in exactly the opposite direction it needs to go: toward a more siloed, serial division of labor between development and test activities.

If you are a proponent of follow-up test sprints I won’t be able to convince you of their lunacy here. I do feel sorry for the developers who get stories returned to them for work they did four weeks prior. I usually can’t remember what I did yesterday.

No, YOU’RE not Agile!

Even if you disagree with some (or most?) of these, hopefully an awareness of alternative thinking at least encourages reflection on how your approach fits within the ecosystem of Agile testing.

If you want even more detailed opinions on Agile, I guarantee that there are people out there willing to give them to you. For $400 an hour.
