How to keep your scrum agile

John Coleman
Aug 27, 2021 · 9 min read


When Paul Cowan wrote ‘Why scrum has become irrelevant’, he diagnosed a number of significant issues with agile development(1). I’ve seen them before at businesses that say they are “agile”. So what to do?

The Scrum Framework (image: Dr Ian Mitchell, own work)

What you do with your stories

Jira and story point counting. Jira can become a Pandora’s box of over-complication. It can end up looking like a flight deck, burdening teams with unnecessary clutter; that is not what agile is about, and it doesn’t even suit scrum well anyway. Companies use it to try to measure developer performance by how many tickets developers close, a process that gets gamed and reduces code quality, which in turn lowers velocity. Velocity is movement in the right direction.

Look for a simple platform to manage your backlog, or if you can get away with just using a spreadsheet or database, that’s fine. Remember, agile teams are supposed to be autonomous, not confined by dependency silos; look closely and you will see how Jira is used as a crutch for siloing and a block on autonomy.

Test and reward your developers’ abilities using formal testing approaches, where you get to objectively observe both the quality and the quantity of work done, and through team feedback on an individual basis. Stop counting their closed tickets, which are confounded by team- and project-dependent factors, and accept that you cannot easily compare one developer’s business value to another’s based purely on metrics.

Story Size

Super-size-me stories and overloaded sprints. Start your sprint planning by setting the goal, based on the next business objective(s) that has been prioritised.

User Story is a small (actually, the smallest) piece of work that represents some value to an end user and can be delivered during a sprint.
Andrii Bondarenko (2)

A story should be the smallest amount of work you can define, and the story points should reflect relative complexity, not estimated completion time; such estimates usually turn out to be low by a factor of 2 or 3. If you are struggling to get realistic estimates from your story points, you might need to shift to objective story points. A big story must be broken down into smaller, separate stories so that each fits within a sprint and there are no super-size stories that no one wants to pick up lest their perceived productivity suffer.

Development of a story should be driven by implementing complete functional units of business capability (modules), rather than being broken down by the details of the technical implementation. Make your software architecture model the business domain. If you can do full-stack development, so much the better.
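
As a rough illustration, here is one way such a layout could look when organised around business capabilities; all package and file names are hypothetical:

```python
# Hypothetical layout: one package per business capability ("vertical slice"),
# rather than one package per technical layer.
#
#   app/
#       invoicing/
#           rules.py      # business logic for invoicing
#           storage.py    # persistence used only by invoicing
#           api.py        # endpoints exposed by invoicing
#       shipping/
#           rules.py
#           storage.py
#           api.py
#
# A story such as "apply late-payment interest" then touches only app/invoicing,
# which helps keep it small enough to fit within a sprint.
```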

The process of story refinement will help to draw out all the details, so that smaller, well defined and complete stories can be produced with clearly stated acceptance criteria. Sufficient refinement is often neglected because product owners are not available; this will kill your productivity from the outset. It is critical for the Product Owner to work with the team to clearly define the stories and their acceptance criteria such that they can be developed and tested without further clarification during the sprint, i.e. the story meets the definition of ready.

Refinement can be wasteful when run as a whole-team activity. Instead, you can run refinement breakouts where stories are reviewed, refined and estimated by a pair of developers. This can cut your refinement session times in half and feels much more engaging for all participants, but make sure you have an objective and common method for estimation, so again, refer to objective story points.

The Product Owner should not push up the size of a sprint to pressure developers into achieving more; quality, and therefore velocity, will drop, and technical debt will start to accrue as the code develops into a ‘big ball of mud’. Do not count on Pull Requests to control this; the team may simply lower their standards. Let the team decide for themselves what they are comfortable with: it’s their sprint.

Why are you doing PRs?

Developing using pull requests and feature branches, even when those are short lived, is easier for individual contributors but overall a suboptimal strategy for a team.
— Mattia Battiston (3)

Pull requests were explicitly designed to create a workflow to handle distrust. They are a gate-keeping mechanism for onboarding code from strangers. So it seems strange to introduce such a mechanism within what should be a close-knit team. Do we really need a process that treats our team members the same way we’d treat strangers?
— David Masters (4)

The Pull Request process is itself anti-agile, especially when combined with super-size-me stories. At the end of the sprint, right before you want to prepare for a demo, you may be surprised by a non-productive and painful ‘code merge from hell’. Instead, do trunk-based development without branches, and have each team member go through a one-on-one peer review with a lead developer before they push code. This also avoids the committee of opinions that collective Pull Request reviews lead to, which slows your merges. Note that trunk-based development is best done when you have automated your tests in a CI pipeline.
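
As a minimal sketch of gating pushes to trunk on the automated tests, here is a git pre-push hook written in Python; it assumes pytest and a tests/ directory, both hypothetical names, and a real setup would normally run the same suite again in the CI pipeline:

```python
#!/usr/bin/env python3
# Minimal sketch of a .git/hooks/pre-push hook that blocks a push to trunk
# when the automated tests fail. Assumes pytest and a tests/ directory.
import subprocess
import sys

def main() -> int:
    # Run the fast automated tests; a non-zero exit code blocks the push.
    result = subprocess.run(["pytest", "-q", "tests"])
    if result.returncode != 0:
        print("Tests failed; push blocked. Fix or revert before pushing to trunk.")
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```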

Make sure your lead developers do know what they are doing, because I’ve seen a few who didn’t know fundamentals such as encapsulation and the Single Responsibility Principle!

Why so much testing?

Testing can become unsustainable, so be test-lean and don’t forget that the business is interested in delivering working features, not tests. Automated testing is critical for scaling up a product’s code, and sustainability is a principle of agile development. You do not need to unit test everything if your software architecture is well structured. It is unsustainable and futile to unit test ball-of-mud code (see above for the cause).

…if your code is basically obvious — so at a glance you can see exactly what it does — then additional design and verification (e.g., through unit testing) yields extremely minimal benefit, if any.
Steve Sanderson (5)

Now that I’ve mentioned the “ball of mud”: code should be kept clean through good practices and refactoring, and generally it should fit into three elementary forms:

  • aggregate functions that only compose lower-level functions
  • business logic, which ideally consists only of abstracted and detached algorithms
  • trivial units of code, such as model classes or simple pure functions

Business logic (or complete features) is the most worthy of unit testing, and it is likely only a small proportion of the code. If you can isolate and package the business logic into abstractions, that really helps in distributed systems and makes testing easier. Because of this, using unit testing to drive coverage is not a great idea: what is the correct amount of coverage, and will developers write tests in a style that merely achieves coverage rather than really testing the code?
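
Here is a minimal sketch of the three forms in Python; all names are hypothetical, and only the business-logic function really earns its own unit test:

```python
from dataclasses import dataclass

# 3) Trivial unit: a plain model class with no behaviour worth unit testing.
@dataclass
class Order:
    subtotal: float
    country: str

# 2) Business logic: a pure, detached function; the part most worth unit testing.
def calculate_total(order: Order, vat_rate: float) -> float:
    return round(order.subtotal * (1 + vat_rate), 2)

# 1) Aggregate function: only composes lower-level functions, so it is better
#    exercised by a high-level feature test than by its own unit test.
def checkout(order: Order, lookup_vat_rate) -> float:
    vat_rate = lookup_vat_rate(order.country)
    return calculate_total(order, vat_rate)
```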

Whenever reward is tied to measured performance, metric fixation invites gaming.
Jerry Z. Muller (6)

Breaking code into discrete modules (packages/libraries) and keeping the code clean by refactoring is also a great way to prevent a spaghetti-monster monolith from developing, and it helps to keep tests easy to maintain.

The word “refactoring” should never appear in a schedule. Refactoring is not a story or a backlog item. Refactoring is not a scheduled task. Refactoring is immediate and continuous. It’s like washing your hands in the bathroom. You always do it.
Uncle Bob Martin (7)

Aggregate functions and trivial functions should be declaratively devoid of sources of error where possible; leverage functional programming techniques to reduce sources of error if you can. Focus on developing high-level integration tests that exercise the entire codebase and produce the coverage report from those; achieving 100% coverage with end-to-end feature tests is far more valuable. What you don’t test is where the bugs will be.
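
For instance, a minimal sketch of producing that coverage report from the high-level feature tests alone, assuming pytest and coverage.py; “myapp” and “tests/e2e” are hypothetical names, and the pytest-cov plugin is the more usual way to wire this up:

```python
# Measure how much of the application the end-to-end feature tests exercise.
import coverage
import pytest

cov = coverage.Coverage(source=["myapp"])   # hypothetical application package
cov.start()
exit_code = pytest.main(["tests/e2e"])      # run only the feature tests
cov.stop()
cov.save()
cov.report(show_missing=True)               # untested lines are where the bugs will be
```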

Today it is also possible to harness AI to help generate unit tests and reduce the burden of coding, although the results will likely need polishing. Another promising approach is property-based testing, which generates test inputs at test runtime to check a hypothesis, with branching values informed by code inspection. Examples of this kind of approach include QuickCheck for Haskell, which has also been ported to other languages.
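
For example, here is a minimal sketch of that style in Python using the Hypothesis library, a QuickCheck-inspired port; the discount function is a hypothetical piece of business logic:

```python
from hypothesis import given, strategies as st

def apply_discount(price: float, percent: int) -> float:
    """Hypothetical business rule: discount a price by a whole percentage."""
    return round(price * (100 - percent) / 100, 2)

# Hypothesis generates many (price, percent) pairs at test runtime and
# checks the property holds for all of them.
@given(st.floats(min_value=0, max_value=1_000_000),
       st.integers(min_value=0, max_value=100))
def test_discount_never_increases_price(price, percent):
    discounted = apply_discount(price, percent)
    assert 0 <= discounted <= round(price, 2)
```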

There may be other things you can automate as well. Is your persistence layer reverse engineered from the platform’s models, for example? Does your persistence platform already incorporate an API? No one should be writing a data-access layer if it’s avoidable. Similarly, automate the build of client code from a specification to reduce effort and human error, and thus maintain focus on delivering valuable business features.
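
As one illustration of not hand-writing a data-access layer, here is a sketch that reflects mapped classes from an existing database schema; it assumes SQLAlchemy 2.x, an existing “users” table with an “active” column, and a hypothetical connection URL:

```python
from sqlalchemy import create_engine, select
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session

engine = create_engine("postgresql+psycopg://app@localhost/appdb")  # hypothetical URL

Base = automap_base()
Base.prepare(autoload_with=engine)   # reverse engineer mapped classes from the schema
User = Base.classes.users            # assumes a "users" table with a primary key

with Session(engine) as session:
    active_users = session.scalars(
        select(User).where(User.active.is_(True))   # "active" column is assumed
    ).all()
```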

QA as a Source of Delay

It is not uncommon for the development process to be subjected to a separate QA process run by an external or separate team. This is problematic for various reasons: it violates the autonomy of the development team and is therefore anti-agile.

Here are some of the common points of confusion and delay:

  • the QA team invent tests not defined in story acceptance criteria
  • the QA team define tests during a sprint instead of during story refinement - the sprint is therefore not fully planned
  • the QA team introduce excessive tests during the sprint delivery leading to delay
  • if the QA team perform their acceptance testing at the end of the sprint it can delay your delivery, if they do it afterwards it could fail the delivery

The ideal development of each story should: 1) start by automating all the acceptance criteria, 2) write the application code until it passes all the tests, and then 3) review and push the code, at which point the story is done. There should be no to-and-fro between test development and application code development.
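
As a minimal sketch of step 1, an acceptance criterion agreed during refinement can be turned straight into an automated test before any application code exists; the criterion, module and function names here are hypothetical, and pytest is assumed as the test runner:

```python
# Acceptance criterion (from refinement):
#   "Orders of £50 or more get free delivery; smaller orders pay £4.99."
from myapp.checkout import delivery_charge  # does not exist yet: written to make these pass

def test_orders_of_50_or_more_get_free_delivery():
    assert delivery_charge(order_total=59.99) == 0

def test_smaller_orders_pay_standard_delivery():
    assert delivery_charge(order_total=12.50) == 4.99
```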

The risk of delay and confusion from the above can be reduced:

  • no external teams should break the developers autonomy — QA is a responsibility developers need to own
  • a story should define all the acceptance criteria at the outset (including any acceptance criteria from an external QA team)
  • all acceptance criteria should be converted into automated tests prior to application development
  • the story is not done and code will not be pushed until all automated tests pass
  • the passing of all automated tests green lights the code for release (hence no separate end of sprint QA check & approve is required)

Fully automated end-to-end testing is a scalable solution to verifying your changes do not create any regression issues and that all the layers integrate successfully. Such an approach offers the highest value test outcomes to the business. Be aware that scaling and implementing such a thorough test approach is non-trivial, but it can be done!

It’s not just QA that can be a regular impediment to fully automating the development and release process; other external teams, such as database admins and “DevOps” teams, may also act in silos that delay product development and deployment. Wherever possible and practical, such roles should be internalised within the agile team, and any external teams should act only as checks if necessary. When doing UI development, make sure designs are finalised before starting so that changes do not occur during delivery; this should be a checkbox on your definition of ready.

You don’t have to stand on ceremony

Yes, retrospectives can become boringly repetitive. Why not deal with pain points as they arise, in real time? Identify the problem, note it down and take action to resolve it there and then. At the end of the sprint, have someone assess whether the solutions were effective and identify the actions required if they were not. You will then only need retrospectives when solutions have failed and require further action. The way you use your agile framework should also be agile: adapt it to suit, and choose and evolve your way of working to fit prevailing conditions.

Further reading

(1) Why scrum has become irrelevant — Cowan, Paul

(2) How to Write a Good User Story: with Examples & Templates — Bondarenko, Andrii

(3) Why I love Trunk Based Development (or pushing straight to master) — Battiston, Mattia

(4) Are Pull Requests Holding Back Your Team? — Masters, David

(5) Selective Unit Testing — Costs and Benefits — Sanderson, Steve

(6) The Tyranny of Metrics — Muller, Jerry Z.

(7) Tweet, Jul 31, 2018 — Martin, Bob
