Kanplan Scrumban
After reading a really good article by Atlassian, I felt inspired to share my previous team’s hybrid approach to Scrum and DevOps.
In any project, user needs shift and evolve. Think of a winding path that veers either toward short-term delivery speed or toward long-term sustainability.
At times the team members pushing to ship product are right; at other times, the people advocating for building better systems are right.
What matters is that the conversation between these two camps keeps going, so the team doesn’t fall off the winding path.
Where did this conversation take our team? Let's break this down into the three ways of DevOps.
Our team found it handy to think in these terms because it's simple, direct to outcomes, and direct to how the business moves forward. As a result, we track tasks and stories using three Jira boards — one for each of the three ways:
Board #1: Main Project Board (Systems Thinking)
This is the space for tracking work in progress that is tied directly to feature development and the growing business logic of the product.
This is a normal Scrum board with ship dates, run on a sprint schedule (as per tradition).
Board #2: Bugs Board (Feedback Loops)
This is our day-to-day Kanban board. Because we are huge nerds, we used One Punch Man Hero Disaster levels instead of T-shirt sizes for bug tasks (Wolf, Tiger, Demon, Dragon, God).
Each threat level is a quick estimate of the potential damage to our users’ experience and trust, which helps us prioritize according to user needs.
Wolf: hardly noticeable, non-problematic (cosmetic issues and the like).
Tiger: minor annoyances (small rendering issues, user friction points).
Demon: issues that can lose customers (functional mistakes, lengthy processes).
Dragon: issues that can lose large groups of customers (security holes, core functionality defects).
God: issues that can lose all of our customers (security breaches, system-wide shutdowns).
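As a rough illustration, the threat-level scale can be modeled as a sortable severity enum so the bug queue is always triaged biggest-threat-first. This is a minimal Python sketch, not our actual tooling; the `Bug` class, `triage` helper, and sample bug titles are all hypothetical:

```python
from dataclasses import dataclass
from enum import IntEnum

class ThreatLevel(IntEnum):
    """One Punch Man threat levels, ordered by potential damage."""
    WOLF = 1    # hardly noticeable (cosmetic issues)
    TIGER = 2   # minor annoyances (small rendering issues, friction)
    DEMON = 3   # can lose customers (functional mistakes)
    DRAGON = 4  # can lose large groups of customers (security holes)
    GOD = 5     # can lose all customers (breaches, system-wide outages)

@dataclass
class Bug:
    title: str
    level: ThreatLevel

def triage(bugs):
    """Order the bug queue so the biggest threats get slain first."""
    return sorted(bugs, key=lambda b: b.level, reverse=True)

queue = [
    Bug("Button misaligned on mobile", ThreatLevel.WOLF),
    Bug("Signature upload fails intermittently", ThreatLevel.DEMON),
    Bug("Session tokens leaked in logs", ThreatLevel.DRAGON),
]
for bug in triage(queue):
    print(f"[{bug.level.name}] {bug.title}")
```

Using an `IntEnum` keeps the levels comparable, so "which threat do we fight first" is just a sort.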
We often have fun with this scheme for bug estimates. When a bug shows up, you might hear one of us say, “Incoming threat-level Demon!”, which seems to make the team want to eliminate it more quickly (it feels good to say you’ve slain a threat-level Demon today).
While this board started off as a normal bugs board, we’ve since added task types that expand its scope to cover feedback more generally.
One such type is the “improvement” task: work that is not necessarily a bug but either responds to direct user feedback or gives us better visibility into how users are engaging with our system.
For example, an improvement task can be as simple as:
“Allow users to click the signature box to apply signatures directly”
or more heavyweight, such as:
“Add a view counter to public templates, with functions to increment the count, so we can determine the most popular templates.”
If tasks respond to feedback or increase feedback loops, they fit this category.
Board #3: DevOps Board (Continual Experimentation)
This Kanban board was originally dedicated to operations, but over time, as we built our DevOps culture, it became the hub for both operations and experimentation.
Conclusion
What is handy about abstracting the three ways in this fashion is that it gives us insight into our team’s overall DevOps health.
From there, we can ask ourselves more interesting questions:
- How much effort is being dedicated to each of the three ways?
- Does increasing effort on feedback and experimentation pay back in higher overall velocity (acceleration)?
- Is there a sweet spot that provides sustainable team velocity while addressing sustainment activities and maintaining a well-designed architecture?
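The first question is easy to make concrete: tally completed story points per board over a period and look at the shares. A tiny sketch, assuming made-up board names and numbers purely for illustration:

```python
def effort_share(points_by_board):
    """Return each board's share of total completed points, as percentages."""
    total = sum(points_by_board.values())
    return {board: round(100 * pts / total, 1)
            for board, pts in points_by_board.items()}

# Hypothetical quarter: story points completed on each of the three boards.
last_quarter = {
    "Main Project (flow)": 55,
    "Bugs (feedback)": 30,
    "DevOps (experimentation)": 15,
}
print(effort_share(last_quarter))
```

Tracking this split over several sprints is what lets you probe the other two questions: whether shifting effort toward feedback and experimentation actually accelerates the main board.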
This has been a lightweight method of determining our DevOps effectiveness, and I hope it is useful for you too!
If you found this valuable or entertaining, please follow the blog, and I’ll continue to post more tech goodness. Thanks for reading!