Many software development teams deliver increments each iteration (or sprint) in order to collect user feedback as soon as possible. Some stories are finished with unexpected flows that do not block the feature but are postponed to future sprints. In the following sprint, the team works in parallel on new features and bug fixes, and new bugs may appear. If the team continues this way, it eventually has to dedicate a full sprint just to handling bugs.
Several factors can cause this:
- Stories are written without showing how users will use the feature;
- Developers working without a clear view of requirements;
- Few discussions about possible edge cases of features/system usage;
- Defects escaping into the system after new feature implementation.
The effects of these problems are time spent on bug fixes, possible extra time to reach target dates (for instance, defined milestones), and extra work to review and fix previously working features. This becomes worse when it is no longer possible to add features due to technical debt.
To avoid this, there are some actions your team can take: write acceptance criteria as scenarios, have Quality Assurance (QA) members work closely with developers, and include automated tests.
Acceptance criteria written as scenarios help the team understand what needs to be done for a user of the system. Borrowed from Behavior-Driven Development (BDD), the idea is to use examples that show the expected user actions and outcomes when interacting with the system. This helps development teams understand the expected flows, reducing rework caused by misunderstandings.
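As a sketch of how a Given/When/Then scenario can drive a test, here is a hypothetical "cart discount" story; the `Cart` class and its discount rule are invented for illustration, not taken from any real product.

```python
# Hypothetical story: "Orders of 100 or more get a 10% discount."
# The acceptance criterion is written as a Given/When/Then scenario
# and checked by a plain test function.

class Cart:
    """Illustrative stand-in for a shopping cart with integer prices."""

    def __init__(self):
        self.items = []

    def add_item(self, price):
        self.items.append(price)

    def total(self):
        subtotal = sum(self.items)
        # Scenario rule: subtotals of 100 or more get a 10% discount.
        if subtotal >= 100:
            return subtotal - subtotal // 10
        return subtotal


def test_discount_applied_for_large_orders():
    # Given a cart with items totaling 120
    cart = Cart()
    cart.add_item(70)
    cart.add_item(50)
    # When the user checks the total
    total = cart.total()
    # Then a 10% discount is applied
    assert total == 108


def test_no_discount_for_small_orders():
    # Given a cart with items totaling 80
    cart = Cart()
    cart.add_item(80)
    # When the user checks the total, then no discount is applied
    assert cart.total() == 80
```

Because the test mirrors the scenario wording, anyone reading the story can check that the code does what the example promised.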
QA members working closely with developers help them visualize the possible flows a user can take through the feature before coding starts. Mixing the team's expertise is a win-win: developers see flows they had missed before writing a single line of code, and QA members learn which flows are possible given how the system is built.
Automated tests for any new code (where applicable), focused on the scenarios defined in the stories, help reveal whether a new implementation breaks existing product behavior. The idea behind basing automated tests on scenarios is to avoid rewriting test code after changes that are unrelated to user flows.
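To make that concrete, here is a minimal sketch (the `UserRegistry` class is hypothetical) of a scenario-based test that asserts only what the user observes; because it never touches internal state, refactoring the internals does not force a test rewrite.

```python
# Hypothetical scenario: "A user who registers twice with the same
# email sees an error." The test checks behavior, not implementation.

class UserRegistry:
    def __init__(self):
        # Internal storage is an implementation detail; the test below
        # never reads it, so it can change freely during a refactor
        # (dict today, a database tomorrow) without breaking the test.
        self._users = {}

    def register(self, email):
        if email in self._users:
            raise ValueError("email already registered")
        self._users[email] = True

    def is_registered(self, email):
        return email in self._users


def test_user_cannot_register_twice():
    # Given a registered user
    registry = UserRegistry()
    registry.register("ana@example.com")
    # When she tries to register again, then she sees an error
    try:
        registry.register("ana@example.com")
        assert False, "expected a duplicate-registration error"
    except ValueError:
        pass
    # And she remains registered
    assert registry.is_registered("ana@example.com")
```

Only a change to the scenario itself (the user-visible rule) would require touching this test.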
Applying these concepts in the teams I have worked with, we achieved some benefits:
- Less rework after 30% or 50% feature feedback, thanks to the development team's better understanding of the feature requirements;
- Improved QA-related skills across the team, as members become more familiar with the mindset;
- Improved coding skills for QA members, as they are more involved in the coding phase;
- Less work rewriting automated tests after implementing non-functional changes (or refactorings) in code.
These achievements increase the team’s throughput and collective code ownership, which helps in situations like vacations or other periods when not all team members are available.