Lean Management and Monitoring

Digital Transformation Build-out Lessons

Ravi Shankar
5 min read · Aug 4, 2020


Lean Management and Monitoring: Part 2

Continuing from where we left off in our previous story...

Beyond the first couple of sprints, where the focus is on kickoff and gaining the necessary initial momentum, priorities quickly shift to velocity and quality outcomes for the overall build-out. This brings into context our next pain point:

PAIN-4 (Context: the inter-sprint/build-out cycle): Clear, succinct, real-time, and consistent answers, visible to all, to the questions:

  • “We might be doing fine within the current Sprint, but how are we doing w.r.t. overall development completion?”
  • “We might be doing well w.r.t. story points, but how are we doing on our MVP feature set’s readiness and stability?”
  • “How are we doing on our readiness for SIT/UAT entry criteria?”

Here as well, the initial focus is on fine-grained speed and quality measures like story-point and estimate burn-ups, but it needs to shift quickly towards coarser-grained aspects: feature completion, stability, and UAT readiness.

Getting on-demand, accurate answers to the above questions needs two things: trustworthy data and incremental automation. We developed a Google Apps Script and Sheets based tool to automate answers to the above questions, and much more.

Our intent with the tool was two-fold:

“Eliminate status-curation toil, and reach a single view of status that is available in real time.”

We did the following, pretty much in sequence through the engagement, as the asks evolved through the build-out:

  • Before starting the build-out, we created an initial view of the target delivery sprints for all the features. This served as the reference for POs, BAs, Demand Leads, and Engineering Leads on timelines for grooming milestones and feature delivery. It was this ordering and these timelines that our demand leads ran with, as mentioned in Team Setup and Organization.
  • By the end of Sprint 4 of the 11-Sprint journey, the BAs had broken down all features into stories, assigned story points to each story, and persisted them in JIRA. This enabled us to extract:
  1. Planned vs. actual velocity for past sprints.
  2. Target velocity for upcoming sprints.
  3. Team requirements for each POD per Sprint, derived by matching developers’ efficiency against target velocity.
(Figure: PODs’ completion status w.r.t. story points)

The figure above is a pivot showing, in real time, the completion status of PODs w.r.t. story points planned vs. developed vs. achieved. This data comes in very handy for revisiting efficiency trends across current and past Sprints: how team-size changes and people movements impact PODs’ velocities, and how developed vs. achieved SPs (story points) map to failed stories.

Client stakeholders, at any point, could see the planned and actual SPs (only for past or current Sprints) for the total build-out, at a POD level.
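For a sense of how such a pivot could be derived, here is a minimal sketch in Google Apps Script, assuming rows of [pod, sprint, storyPoints, status] have already been pulled from JIRA; the status names and the rule that “achieved” means passed testing are illustrative assumptions, not the exact ones from our tool:

```javascript
// Minimal sketch: tally planned vs. developed vs. achieved story points
// per POD. Assumes rows of [pod, sprint, storyPoints, status]; the status
// names ('Dev Complete', 'Done') are illustrative assumptions.
function podStoryPointPivot(rows) {
  var pivot = {}; // pod -> { planned, developed, achieved }
  rows.forEach(function (r) {
    var pod = r[0], sp = r[2] || 0, status = r[3];
    if (!pivot[pod]) pivot[pod] = { planned: 0, developed: 0, achieved: 0 };
    pivot[pod].planned += sp;                         // everything assigned to the POD
    if (status === 'Dev Complete' || status === 'Done') {
      pivot[pod].developed += sp;                     // development finished
    }
    if (status === 'Done') {
      pivot[pod].achieved += sp;                      // passed testing as well
    }
  });
  return pivot;
}
```

The gap between developed and achieved SPs is exactly the failed-stories signal called out above.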

  • Moving into Sprint 6, the focus shifted towards feature completion and stability, rather than story-point based velocity. At this point, we added another tab to the sheet, displaying feature status. This data reflected each feature’s completion, based on the completion of its stories and the status of the defects mapped to it:
(Figure: Features’ Completion Status)
(Figure: Features’ Completion Pivot)

The views above show features’ completion status. A feature, at any point, could be in one of four statuses: Ready for UAT, In Stabilization, Development in Progress, and Not yet Started. Each status lends itself to a different set of priorities, the reporting of which the tool automated:

  1. Ready for UAT — Clearly the most stable features. Base stability of the platform was a function of a minimum percentage of MVP and non-MVP features being in this state, something we had to ensure at all times.
  2. In Stabilization — Features that are complete from a development standpoint, but still have either failed stories or critical defects against them.
  3. Development in Progress and Not yet Started — Features that are still being actively worked on between the Product and Engineering teams.

Just as with stories, the number of features In Stabilization and Development in Progress needs to stay minimal, to limit parallel work and ensure tangible progress towards completion and stability.
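To make the derivation concrete, here is a minimal sketch of how a feature’s status could be computed; the four labels are the ones above, while the story/defect field names and the critical-defect rule are illustrative assumptions:

```javascript
// Minimal sketch: derive a feature's status from its stories and mapped
// defects. Field names (devComplete, testStatus, severity) are assumptions.
function featureStatus(stories, defects) {
  var devDone = stories.filter(function (s) { return s.devComplete; }).length;
  var failed = stories.filter(function (s) { return s.testStatus === 'Failed'; }).length;
  var criticalOpen = defects.filter(function (d) {
    return d.severity === 'Critical' && d.status !== 'Closed';
  }).length;

  if (devDone === 0) return 'Not yet Started';
  if (devDone < stories.length) return 'Development in Progress';
  if (failed > 0 || criticalOpen > 0) return 'In Stabilization';
  return 'Ready for UAT';
}
```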

  • Through Sprints 8 to 11 and into UAT, we made the necessary enhancements to the tool:
  1. Flagging features and/or developers with high defect density (a sketch of this follows the list)
  2. Highlighting features stuck on third-party dependencies
  3. Target dates for feature readiness, as development moved towards its tail end
  4. Re-testing prioritization for UAT testers, based on defect fixes
  5. Defect find-and-fix projections
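As an example of the first enhancement, here is a minimal sketch of the defect-density flagging; measuring density as mapped defects per story point, and the 0.5 threshold, are illustrative assumptions:

```javascript
// Minimal sketch: flag features whose defect density (defects per story
// point) crosses a threshold. The 0.5 cutoff is an illustrative assumption.
function highDefectDensityFeatures(features) {
  return features
    .map(function (f) {
      return { name: f.name, density: f.defectCount / Math.max(f.storyPoints, 1) };
    })
    .filter(function (f) { return f.density > 0.5; })
    .sort(function (a, b) { return b.density - a.density; }); // worst first
}
```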

One obvious enhancement to this tool would be to persist the data incrementally and analyze it over a period of time, to measure and anticipate variations in PODs’ and individuals’ deliverables (across the speed and quality axes).
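A minimal sketch of that persistence idea, assuming a “History” tab in the same sheet and the POD pivot computed earlier; appending a timestamped snapshot on every run builds up the time series to analyze:

```javascript
// Minimal sketch: append a timestamped snapshot of each POD's numbers to a
// 'History' tab on every run. Sheet and column layout are assumptions.
function snapshotPodStatus(pivot) {
  var history = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('History');
  var now = new Date();
  Object.keys(pivot).forEach(function (pod) {
    var p = pivot[pod];
    history.appendRow([now, pod, p.planned, p.developed, p.achieved]);
  });
}
```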

The logic in Google Apps Script is extremely simple. All the data is present in stories and defects within JIRA. The Google sheet pulls both of these data sets in an automated way, then massages the data to present the needed status across multiple axes (story points and/or features and/or defects).
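For illustration, here is a minimal sketch of what that pull could look like in Google Apps Script, assuming a JIRA Cloud instance with basic-auth API tokens; the instance URL, JQL, story-points custom field ID (which varies per JIRA instance), and sheet name are all placeholders, and pagination is omitted for brevity:

```javascript
// Minimal sketch: pull stories/defects from JIRA's REST search API into a
// sheet. URL, credentials, JQL, field IDs, and sheet name are placeholders.
function pullIssuesFromJira() {
  var baseUrl = 'https://yourcompany.atlassian.net'; // hypothetical instance
  var jql = encodeURIComponent('project = BUILD AND issuetype in (Story, Bug)');
  var url = baseUrl + '/rest/api/2/search?jql=' + jql +
      '&fields=summary,status,customfield_10026&maxResults=100'; // no pagination here
  var response = UrlFetchApp.fetch(url, {
    headers: {
      // JIRA Cloud basic auth: base64 of 'email:apiToken'
      'Authorization': 'Basic ' + Utilities.base64Encode('me@example.com:API_TOKEN'),
      'Accept': 'application/json'
    }
  });
  var issues = JSON.parse(response.getContentText()).issues;

  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('RawData');
  sheet.clearContents();
  sheet.appendRow(['Key', 'Summary', 'Status', 'Story Points']);
  issues.forEach(function (issue) {
    sheet.appendRow([
      issue.key,
      issue.fields.summary,
      issue.fields.status.name,
      issue.fields.customfield_10026 // story-points field ID varies per instance
    ]);
  });
}
```

A time-driven Apps Script trigger can run a pull like this on a schedule, which is what keeps the status “current and real” without anyone curating it.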

Code for this sheet is present at:

https://github.com/ravishank8/InterSprintStatusJIRA

This sheet is what we presented to our executive and client stakeholders in our weekly sync-ups; they also had the link to it, so they could open it and check status at any time. Needless to say, this took a lot of the pain out of curating status for various stakeholders.

Everyone, from the POD teams to the executive stakeholders, had the same view of status — “current and real” and automated.

Reflecting on our purpose to “Eliminate (not avoid) toil, and reach a single view of status that is available in real time,” I would say we achieved version 0.1 of it.

While I have open-sourced the code for this strawman, just as I did for the Tracking Tool for Intra-Sprint Status, it is not really the tool that is of any consequence, but the mindset of eliminating toil and waste from management functions.

I’d rather have all scrum masters/Release Train Engineers (whichever methodology…whatever team structure... those keep changing quickly btw...) be ENGINEERS who can code themselves out of toil/wastage.

In the next story, we will look at the next big pain point, and how we used Visual Management to solve it:

PAIN-5: All organizations/teams/engagements are flush with tools to measure and display metrics. Plenty of trends, graphs, and reports. But what do they really mean to the engineers in the PODs? How do we connect each developer to the uber velocity-quality-value goals, so they can clearly see the impact of their work and deliverables on the bigger picture?

More on this in the next story...
