How I avoid rolling over tickets from one sprint to another due to the ‘testing wall’

If you have ever worked in an agile development team, you may have come across the ‘testing wall’. This is where, at the end of your sprint, you find that the QA engineer does not have sufficient time to complete testing of the remaining tickets, causing them to roll over to the next sprint. You decide to ask your software engineers to do more testing. In this article I examine the factors to consider when making this decision.

Martin Hodges
14 min read · May 23, 2024
The testing wall

Sprint roll overs

You have probably been there. The end of the sprint comes around and the developers are asking for more tickets to be added to the sprint whilst the Quality Assurance (QA) engineer desperately tries to catch up.

The result is that the tickets that did not complete testing, along with any last-minute add-ins, all get rolled over to the next sprint. This is demoralising, ineffective and problematic. I call this the ‘testing wall’.

When you face the problem of the testing wall at the end of your sprint, you need to consider a different approach to testing.

It would seem obvious to simply say ‘software developers need to test more and take the burden off the QA engineers’ but, in this article, I will show you why it is not as easy as it seems and why you need to consider a number of factors when making this decision.

For instance, you must understand the differences between software and QA engineering when it comes to testing, including:

  1. Their roles and accountabilities
  2. Their approach
  3. The purpose of their testing
  4. Their testing scope
  5. Your team

Finally, there may be other options to choose.

1. Roles and accountabilities

The first, fundamental difference between software engineers and Quality Assurance (QA) engineers is their roles and associated accountabilities in the development process.

Quality Assurance

Quality assurance is the process by which a person, with appropriate authority, provides a statement about the quality of the software. This is the role your QA engineer plays.

They provide that assurance through testing of the solution. They may do the testing themselves or review the results of what others have done.

It is by the authority of the Quality Assurance (QA) engineer that the software is deemed fit to be deployed so it can be used. They assure the quality.

Quality assurance does not modify the software directly but may result in defects being raised that the software engineers have to fix.

Ensuring Quality

Unlike assurance, ensuring quality is the task of the development process that creates the code. It makes sure that the developed solution will pass the QA engineer’s quality standards (ie: the QA tests).

This is the process followed by the software engineers and they are accountable for ensuring the quality of the code.

Already we have a clear split in accountability between the two disciplines. Software engineers ensure quality whilst QA engineers assure quality.

2. The approach to testing

The difference in accountabilities is reflected in a difference between the engineering disciplines.

QA engineers

QA engineers test with the aim of being able to say if the solution is of suitable quality to deploy.

They are effectively putting their name against the stamp of approval. For this reason, they need to make as few assumptions about the actual quality of the code as they can.

This means designing and executing tests that provide the coverage that allows them to make a statement about the quality.

Therefore, the QA engineer approaches their work on the assumption that the code is inherently defective until proven working.

Software engineers

Anyone who knows me will have heard me say ‘I believe software engineers make poor testers’.

This must be put into context. As a software engineer, I do two things more than writing lines of code:

  • I read code (this is so important it should be part of your code reviews)
  • I test code

The point about testing is important. I do not know any software engineer who writes code and does not test it. If they do, they are not a software engineer!

So, software engineers do test as part of their development process as this helps ensure quality.

They write some code, test it and fix it, eliminating all known defects at each iteration.

The problem they face is that, being so close to the code, they make assumptions about that code. They inherently assume that the code works until it is proven defective.

With all this in mind, I will revise my previous statement to ‘software engineers make poor QA engineers’.

We now see another difference in approach to testing between the disciplines; one assumes working until proven defective and the other assumes defective until proven working.

3. Purpose of testing

We have ascertained that both engineering disciplines do testing but they have different approaches to their testing. We now need to look at why they test and how this affects the testing they do.

Verification and validation

Ultimately all the testing that is carried out is done to prove that the solution is fit for use and fit for purpose.

Let’s examine these two terms a little closer.

Fit for use … This means that the code has been developed in a way that it does what the requirements and design say. It has been built correctly and without defects. It works.

Fit for purpose … This means that the code will solve the problem that the user wants it to solve. It does what is needed.

Using quality assurance terminology, I like to think of these as:

Fit for use is verification.

Fit for purpose is validation.

Validation may require testing with the end user to ensure the solution solves their problem. This may involve User Acceptance Testing (UAT), either by the user or on their behalf.

Here is another difference between the disciplines:

  • QA engineers are responsible for both verification and validation.
  • Software engineers test mainly for the purpose of verification.

Progression vs regression

When a software engineer adds, changes or deletes code, they do so for a specific reason, generally a change in requirements or the fixing of a defect. Testing that these requirements have been met is referred to as progression testing.

When the software engineer makes these changes, those changes can have unexpected and unforeseen side-effects, such as breaking features that appear to have nothing to do with the change.

For this reason, whenever a change is made, it is vital that the existing solution is tested to ensure nothing has been broken. This is referred to as regression testing.

So progression testing assures the quality of the new development and regression testing assures the quality of the existing solution.

Typically, due to the assumptions being made, the software engineer carries out limited regression testing and mainly focuses on progression testing. On the other hand, the QA engineer must focus equally on both.

A final note on regression testing. It is the more problematic of the two and takes disproportionately longer. Because it must show that nothing has broken, it needs to cover the majority of features in your solution, rather than just the one being progressed.

It is likely that your testing wall is built from the need to regression test.
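This split between progression and regression is often encoded directly in the test suite. Here is a minimal, self-contained sketch (the function, tests and tagging mechanism are all hypothetical; frameworks such as pytest provide the same idea through markers):

```python
# Hypothetical sketch: tagging tests so that progression and
# regression runs can be separated. Done by hand here to stay
# self-contained; pytest markers achieve the same thing.

REGRESSION_SUITE = []


def regression(test_fn):
    """Tag a test as part of the regression suite."""
    REGRESSION_SUITE.append(test_fn)
    return test_fn


def apply_discount(price: float, percent: float) -> float:
    """The function just changed: it now supports percentage discounts."""
    return round(price * (1 - percent / 100), 2)


# Progression test: verifies the new requirement being delivered.
def test_discount_applies_percentage():
    assert apply_discount(100.0, 10) == 90.0


# Regression tests: existing behaviour that must not break.
@regression
def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(100.0, 0) == 100.0


@regression
def test_discount_rounds_to_cents():
    assert apply_discount(19.99, 15) == 16.99


# A quick progression check during development runs only the new
# test; the pre-release run also executes everything tagged @regression.
test_discount_applies_percentage()
for check in REGRESSION_SUITE:
    check()
```

Notice the asymmetry: the progression side is one test for the one change, while the regression side grows with every feature the solution has ever shipped, which is exactly why the regression effort dominates.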

4. Scope of testing

When a change is made, how much testing is required and who will do it?

The testing pyramid

The testing that is to be performed is often depicted as a triangle or pyramid. Each layer represents a type of testing, and the width of each layer depicts the expected amount of that type of testing compared to the other layers. For instance, there should be more unit testing than integration testing.

Testing pyramid

These diagrams can show many types of testing but for discussion purposes, I have just shown the common three:

  • Unit testing … test each bit of code written to ensure it performs as expected
  • Integration testing … test that the bits of code work together as expected
  • Behavioural testing … test that the solution works as expected

The idea is that each layer builds on the results of the lower layers. It does not need to retest the lower layers but needs to test the aspects of the solution that were not tested by the lower layer.

In other words, each layer confirms the assumptions made by the previous/lower layer.

In this way, the amount of testing can be reduced in each subsequent layer.

It can be seen that the top of the pyramid represents validation of the solution whilst the lower layers represent verification.
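As a rough illustration of the three layers, here is a hypothetical sketch (the function, class and checks are invented for the example). Note how each layer trusts the results of the layer below and only tests what that layer could not see:

```python
# Hypothetical sketch of the three pyramid layers.

def calc_tax(amount: float, rate: float) -> float:
    """A single unit of code."""
    return round(amount * rate, 2)


class Cart:
    """Integrates units: a list of item prices plus the tax calculation."""

    def __init__(self, tax_rate: float):
        self.tax_rate = tax_rate
        self.items = []

    def add(self, price: float):
        self.items.append(price)

    def total(self) -> float:
        subtotal = sum(self.items)
        return round(subtotal + calc_tax(subtotal, self.tax_rate), 2)


def receipt(cart: Cart) -> str:
    """The user-visible outcome of the whole solution."""
    return f"Total due: ${cart.total():.2f}"


# Unit test: the tax calculation in isolation.
assert calc_tax(100.0, 0.1) == 10.0

# Integration test: cart and tax calculation working together
# (it assumes calc_tax is correct -- the unit test proved that).
cart = Cart(tax_rate=0.1)
cart.add(40.0)
cart.add(60.0)
assert cart.total() == 110.0

# Behavioural test: the outcome stated in user terms -- a customer
# buying $100 of goods at 10% tax is asked to pay $110.00.
assert receipt(cart) == "Total due: $110.00"
```

The behavioural check does not re-verify the arithmetic; it confirms the assumption the lower layers left open, namely that the pieces produce the right user-visible result.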

Who does what?

What this pyramid does not show is who is responsible for each layer.

Because of the detailed technical nature of the lower layers and their purpose in verification, it makes sense that the software engineer does the unit and integration testing, whilst the QA engineer focuses their testing on the behavioural layer. There are exceptions to this but on the whole, for a software development project, this holds true.

Again, the split in responsibilities reflects the separation of the disciplines. You can also see that the amount of testing done by the software engineers is, collectively, greater than the amount of testing done by the QA engineer.

It is important to remember, though, that the QA engineer is responsible for assuring the quality of the solution. They are not just assuring quality from their own test results but also from a ‘whole of project’ perspective. In assuring the quality, they must be equally happy with:

  • the test results obtained by others
  • the quality framework being used to ensure quality
  • their own testing results

Out of scope

A wise person once told me that ‘scope is defined by that thin veneer of that which is out of scope’.

In all testing, there is going to be a certain amount of testing that is out of scope and that will not be done. This may be because the value of the testing is too low, because it represents an acceptable risk to quality, because it is too expensive to carry out, or a combination of these.

You should not move testing out of scope simply because your project is late or because you need to move your tickets to done to complete your sprint.

If anything is to be moved out of scope, it needs to be agreed by all stakeholders with a vested interest in the solution.

You need to be able to justify any decision to move tests out of scope.

5. The team

We started by proposing that ‘software developers need to test more and take the burden off the QA engineers’.

This fits with the agile model in which the team is self-organising and allows anyone to do any task in order to complete the work.

It would seem that ‘testing’ is being done by both software and QA engineering disciplines already and that software engineers are already doing most of the testing. It seems logical, therefore, that we can just ask the software engineers to do a bit more testing.

Hopefully, by now, you understand that the differences in testing between the disciplines may affect this decision. These include differences in:

  • accountabilities
  • working assumptions
  • scopes

Breaking down the testing wall requires team members to understand these differences and, if they take on QA testing, to do so from the point of view of quality assurance, not ensuring quality.

The fact that there is a QA engineering role at all, shows that this is a distinctly different role to software engineering. In allocating a QA task to a software engineer, this distinction needs to be made clear. It is not ‘more testing’ but actually different testing.

So long as this is clear — and the engineer concerned is capable of the change in mindset and has the skills required — then QA engineering testing tasks can be delegated to a software engineer.

When this happens, the QA engineer must also understand that they are not off-the-hook and that they are still accountable for the quality assurance of the solution. They must understand, guide, review and approve what the software engineer is doing.

Manual testing

So far I have not talked about manual vs automated testing as this is a subject in its own right.

However, we need to consider the case of executing manual testing tasks.

There is one golden rule when it comes to manual testing:

Testing must always be carried out by a different person from the one who made the change, regardless of whether this is progression, regression, verification or validation testing.

When people create something, they are likely to make assumptions about what they have created (the assumption of working). These assumptions can be avoided with a second set of eyes that is not directly linked to the original work.

6. Other options

Before you charge off and start delegating QA engineering tasks to your software engineers, you should also consider a number of other factors and options that can affect your testing wall.

Change scope

It stands to reason that if the changes being made are smaller, the development and testing required is smaller. This makes it more likely that the height of your testing wall will be reduced as changes can be tested throughout the sprint rather than at the end.

You should always look to try and achieve this but I know that regression testing can be a killer. I have seen a change of a few lines of code require weeks of regression testing but this can be avoided by other options.

When you encounter this regression problem, you need to look at different approaches, such as the ones mentioned below.

Quality is an economic decision

You may not realise that the level of quality in your final solution is actually an economic business decision. One that is typically made subconsciously.

If your quality is too low, your users will be disappointed and may demand you improve the quality, at your cost.

On the other hand, the cost of ensuring and assuring a high level of quality will require additional effort and your costs go up accordingly.

This is a fact.

The higher the level of quality required, the higher the cost.

As with most businesses, getting the balance right between acceptable quality and the cost of ensuring that level of quality is a business decision.

That business decision is played out in your team every day. It comes in terms of project timelines, amount of features being developed (sprint velocity) and team size, including the ratio of QA engineers to developers you have.

The height of your testing wall will be dependent on all these factors and you may find that you simply require additional QA engineers to assure the quality is at the level required.

Impact on quality level

I have shown that there are many differences between the software and QA engineering disciplines when it comes to testing.

Like all activities, you will have team members that are at different stages in their professional development and that have different professional ambitions and skills.

It is possible that delegating the QA testing tasks may result in a sub-optimal set of tests and/or test results that results in lower quality. You may take a business decision to accept the potential risk of lower quality (or you may not).

If you do take the approach of delegating these tasks, you should monitor your quality in production and look out for any signs that quality is deteriorating below acceptable levels.

Effect on team velocity

Breaking down the testing wall should improve the velocity (ie: throughput) of the team. Fewer tickets will be rolled over and more tickets completed within the sprint. It would appear that this is exactly what you are looking for.

However, when you delegate QA testing tasks to software engineers, you will impact your development throughput, lowering your velocity.

Theoretically, your average velocity should not be impacted by such delegation. In practice, the software engineer may not be as proficient at the QA role as a QA engineer and the result will be a reduction in velocity.

Of course, the reverse may also be true!

You may decide that a level of professional development is required to make sure that, when delegating QA testing tasks, the software engineers know what to do and can do it proficiently.

Or you might make the conscious decision to lower your velocity in order to increase your level of quality, possibly by increasing the length of your sprints.

Like quality, you should monitor your velocity for significant and unacceptable changes when you delegate QA testing tasks.

Automation

I have not mentioned automation as that is a subject in its own right and I will address it in a different article.

However, I mention it here as it is a key factor in reducing the effort required in regression testing as well as possibly improving the quality of your regression tests.

It could be that your testing wall is completely demolished by using automated regression testing!

Adjusting the level of quality

I stated earlier that the level of quality is directly linked to the cost of ensuring and assuring that level.

If after taking the actions above (ticket scope, professional development, testing delegation and automation), your velocity is still not where it needs to be to get features to market, then you may need to consider a lower level of quality.

This may mean less focus on extreme edge conditions, rarely used features or regression testing whilst accepting the corresponding reduction in quality in your delivered solution and the associated risk to your business.

It may not seem a logical statement to make when discussing quality but remember that quality is an economic business decision and so that is one of the levers you have.

Breaking down the wall

Hopefully you can now see that breaking down the testing wall is not as simple as asking your software engineers to ‘do more testing’.

You need to consider:

  1. How you break down your work
  2. The difference in skill sets and approach between disciplines
  3. Whether you are delegating QA tasks or accountabilities
  4. Potential impacts on quality
  5. The effect it may have on velocity
  6. Alternative approaches to assuring quality such as automation
  7. The level of quality you are trying to attain
  8. How much quality you can afford

Whether you decide to delegate assurance testing, lower quality targets, introduce automation or lower your velocity, there is one thing you need to remember.

Your QA engineer is ultimately responsible for approving the solution for deployment. They must sign off on any testing done by a software engineer and assure the level of quality.

One final word of advice: make sure that the QA engineer does not delegate all testing to the software engineers. This can easily happen and can result in your testing wall being replaced by a development wall.

Summary

If you have been reading this article, it is likely that you have a problem with a testing wall that is causing your tickets to be rolled over from sprint to sprint.

We have looked at whether the solution is simply to ask your software engineers to do more testing.

Hopefully you can see that this is not a simple decision as the QA and software engineering disciplines have distinct differences. There are also side-effects to asking your software engineers to do different testing. Quality can be affected and velocity may be impacted.

So there is a lot to think about when asking your software engineers to do QA testing.

I hope you enjoyed this article and that you have extended your skills by learning something new, even if it is only one small thing.

If you found this article of interest, please give me a clap as that helps me identify what people find useful and what future articles I should write. If you have any suggestions, please add them as notes or responses.
