How SQUAD Drives Efficiency Through Quality of Products

Poorvi Ladha · DBS Tech Blog · 7 min read · Aug 31, 2022

Consolidate tracked metrics to build superior products that embody quality and efficiency


While most companies have processes and frameworks that measure quality or productivity, few deliver outcomes that add value to product quality or the software development process. Commonly tracked quality and productivity metrics, such as test coverage, test execution status, defect leakage, velocity, and frequency of commits, do not provide actionable insights when analysed individually. Instead, correlations between seemingly disjointed metrics should be identified, continuously measured, and analysed to drive improvements in quality and efficiency.

Some argue that while quality has a strong bearing on delivering an exceptional customer experience, its impact on efficiency or productivity is not as obvious. I firmly believe otherwise: a sustained focus on quality drives efficiency. Together, they bring businesses closer to the end goal of building top-quality products (or services) that deliver an exceptional customer experience while maintaining optimal levels of efficiency and productivity.

Why Tracking Individual Metrics May Not Be in Your Best Interest

In product development, productivity is usually measured by the number of product features released, or the number of code commits/check-ins made during a specified period. While this measures the velocity of the team, it does not account for how many of those deployments or commits were successful and defect-free.

If a deployment or check-in introduces defects into a system, additional and avoidable effort must be spent resolving them. Measuring the velocity of new feature development in conjunction with the number of new defects introduced therefore gives a far more accurate picture of a team's true productivity. This and similar simple measures should shift left and be tracked from the moment new features are first exercised during unit testing.
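To make this concrete, a defect-adjusted view of velocity might look like the sketch below. This is a minimal Python illustration; the record fields and the one-to-one discount are assumptions for the example, not a formula prescribed by SQUAD.

```python
from dataclasses import dataclass

@dataclass
class SprintRecord:
    stories_delivered: int   # user stories released in the sprint
    defects_introduced: int  # new defects traced back to those stories

def defect_adjusted_velocity(sprint: SprintRecord) -> float:
    """Discount raw velocity by the share of stories that produced defects.

    A story that introduces a defect creates avoidable rework, so it
    counts for less than a defect-free one. The one-to-one discount
    used here is a deliberately simple assumption.
    """
    if sprint.stories_delivered == 0:
        return 0.0
    clean_ratio = max(0.0, 1 - sprint.defects_introduced / sprint.stories_delivered)
    return sprint.stories_delivered * clean_ratio

# A sprint that ships 20 stories but introduces 5 new defects is treated
# as roughly equivalent to 15 defect-free stories.
print(defect_adjusted_velocity(SprintRecord(20, 5)))  # 15.0
```

The exact weighting matters less than the principle: velocity and defect counts are read together rather than in isolation.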

The journey of improvement should start with measuring meaningful metrics that provide information on the quality of an application, enabling data-driven decision-making. Under DBS' Data-Driven Operating Model (DDOM), teams follow a disciplined approach to maintaining software development data. This may seem like additional toil, but if teams diligently capture this data as part of their daily rituals, it not only simplifies day-to-day progress tracking, but also creates opportunities for data-based improvements: eliminating employee toil, improving quality, and raising efficiency and the overall customer experience.

Many product management and test management tools track day-to-day progress but lack the ability to provide meaningful insights that can be converted into actionable improvement plans. In the world of agile, test automation, and Continuous Integration/Continuous Deployment (CI/CD), where teams use multiple tools to complete different tasks during the software development lifecycle (SDLC), it has become vital to collate the data generated across these project, development, and test management tools. Though some of these tools integrate with one another, none focuses on quality.

Figure 1: SQUAD's consolidated dashboard view

Driving Efficiency and Productivity Through SQUAD

To meet our need for a quality control tower that provides insights based on data integrated from different sources, the Middle Office Tech's Legal, Compliance Secretariat & Audit (MOT's LCS & Audit) platform quality assurance team created SQUAD, the Site Reliability Engineering & Quality Assurance Unified Dashboard. This control tower is an in-house, data-driven visualisation tool that provides at-a-glance views of key metrics across the SDLC. Project, development, and testing data is collated from three key sources: Jira, SonarQube, and Jenkins. Quality metrics are tracked and presented sequentially through the phases of the SDLC, helping product owners, scrum masters, developers, and testers glean insights into the effectiveness of the software development process and the overall quality of their applications.
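SQUAD itself is internal, but the collation pattern is straightforward to sketch. The snippet below shows how the three named sources could be polled through their public REST APIs; the hosts, credentials, project keys, and job names are placeholders, and the error handling is deliberately simplified, so treat this as an illustration of the pattern rather than SQUAD's actual implementation.

```python
import requests

# Placeholder hosts, credentials, and keys; substitute your own.
JIRA = "https://jira.example.com"
SONAR = "https://sonarqube.example.com"
JENKINS = "https://jenkins.example.com"
AUTH = ("svc_squad", "api-token")

def released_stories(project: str, version: str) -> list[dict]:
    """User stories in a released version, via Jira's issue-search API."""
    jql = f'project = {project} AND fixVersion = "{version}" AND issuetype = Story'
    resp = requests.get(f"{JIRA}/rest/api/2/search",
                        params={"jql": jql, "fields": "key,status"},
                        auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()["issues"]

def unit_test_coverage(component_key: str) -> float:
    """Unit test line coverage, via SonarQube's measures API."""
    resp = requests.get(f"{SONAR}/api/measures/component",
                        params={"component": component_key,
                                "metricKeys": "coverage"},
                        auth=AUTH, timeout=30)
    resp.raise_for_status()
    measures = resp.json()["component"]["measures"]
    return float(measures[0]["value"]) if measures else 0.0

def last_build_result(job: str) -> str:
    """Latest pipeline outcome, via Jenkins' JSON API."""
    resp = requests.get(f"{JENKINS}/job/{job}/lastBuild/api/json",
                        auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()["result"]  # e.g. "SUCCESS" or "FAILURE"
```

Once the raw records land in one store, every metric described below becomes a simple aggregation over them.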

SQUAD’s Captured Markers of Quality (Data points)

SQUAD provides insights into an application’s quality through five key metric categories:

Figure 2: SQUAD’s five key metric categories

1) Features Released in Production measures the count of user stories released to production, and feeds trend analysis across the product versions that have been released.

2) Unit Test Coverage measures the lines of code covered by unit tests, along with the bugs reported, for all source code repositories deployed to production. Through easy-to-follow red-amber-green (RAG) status coding, teams are alerted when their coverage falls below the 80% code coverage target (a calculation sketch follows this list).

3) SIT & UAT Test Coverage and Effectiveness measures the following metrics:

a) % of released user stories covered by SIT & UAT test cases

b) % of released user stories impacted by defects

c) % of functional test cases automated in-sprint

Data collected for the above metrics is further used for:

a) Defect ratio trend analysis, which compares the trend of defects per change over each quarter of the year

b) Calculating defects per change, which keeps track of the production incidents introduced in the system as it goes through changes

c) Defect leakage trend analysis, which shows the trend of defect leakage as an application matures and grows with each successive release

d) Defect analysis based on root cause, which classifies defects by category and root cause. This is a powerful tool that helps developers and testers zoom in on areas of improvement and implement corrective measures

4) Continuous Automated Regression measures the percentage of regression test cases that have been automated, broken down by component. Regression defects are tracked to measure the effectiveness of the regression tests.

5) Toil Hours Saved measures the manual effort eliminated through continuous automated regression.
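To make the arithmetic behind these categories concrete, here is a minimal Python sketch of how a few of the metrics might be computed from raw story and defect counts. The function names, phase labels, and the amber band are illustrative assumptions; only the 80% coverage target comes from the text above.

```python
def pct(part: int, whole: int) -> float:
    """Safe percentage helper."""
    return round(100 * part / whole, 1) if whole else 0.0

def coverage_rag(coverage_pct: float) -> str:
    """RAG-code unit test coverage against the 80% target.

    The 60% amber floor is an assumed convention, not SQUAD's.
    """
    if coverage_pct >= 80:
        return "GREEN"
    return "AMBER" if coverage_pct >= 60 else "RED"

def sit_uat_coverage(stories_released: int, stories_with_tests: int) -> float:
    """% of released user stories covered by SIT & UAT test cases."""
    return pct(stories_with_tests, stories_released)

def defect_leakage(defects_by_phase: dict[str, int]) -> float:
    """% of defects that escaped SIT and surfaced in UAT or production."""
    leaked = defects_by_phase.get("UAT", 0) + defects_by_phase.get("PROD", 0)
    return pct(leaked, sum(defects_by_phase.values()))

def defects_per_change(production_incidents: int, changes: int) -> float:
    """Production incidents introduced per change deployed."""
    return round(production_incidents / changes, 2) if changes else 0.0

print(coverage_rag(72.5))                               # AMBER
print(sit_uat_coverage(40, 34))                         # 85.0
print(defect_leakage({"SIT": 2, "UAT": 2, "PROD": 1}))  # 60.0
print(defects_per_change(3, 25))                        # 0.12
```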

SQUAD Use Cases

Control towers should be part of regular team rituals, so that improvement initiatives can be prioritised based on the insights gleaned. Four persona-based examples below depict how SQUAD is used in the MOT's LCS & Audit platform's rituals.

Figure 3: SIT Test Coverage

Persona #1 — Tester/Test Lead
Test leads and testers track all metrics at the end of a sprint or release, in weekly QA meetings, stand-ups, and retrospectives. The first thing to catch a tester's or test lead's attention is the low SIT test coverage (above). Such insights trigger conversations about the squad's QA practices, drawing the entire squad's focus to quality and leading to improvements in test effectiveness.

Figure 4: Unit Test Coverage

Persona #2 — Developer/Developer Lead
Developers track unit test coverage, defect categories, and defect root cause analysis in sprint stand-ups and retrospectives. In the example above, more than 35% of released user stories had defects, which indicates either that the code lacked adequate unit testing or that there was a gap in the developers' understanding of requirements. Here, the unit test coverage is indeed low, so the issue could be addressed by increasing unit test coverage in the next release.

Figure 5: Defect Leakage

Persona #3 — Scrum Master
Scrum masters track SIT & UAT coverage and defects per change at the end of a sprint or release, in sprint retrospectives. In the highlighted releases, the defect leakage into UAT and production exceeds 60% (above). Even though the absolute count of defects is low, the team must investigate the root cause and take corrective action for future releases.

Figure 6: Defect Ratio Trend

Persona #4 — Tribe Lead (Project Delivery Lead)
Tribe leads should track the defect ratio trend and defects per change in sprint retrospectives and big-room planning. The defect ratio trend compares defects per change across the quarters of the year. Squads and tribes should aim to reduce this ratio quarter on quarter and year on year, and tribe leads should use this insight to set achievable quarterly reduction targets for their squads.
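As an illustration of the aggregation (with made-up figures and an assumed per-release record shape), the quarterly ratio could be derived like this:

```python
from collections import defaultdict

# (quarter, defects, changes) per release; all figures are illustrative.
releases = [
    ("2022-Q1", 12, 40), ("2022-Q1", 9, 35),
    ("2022-Q2", 8, 42), ("2022-Q2", 6, 38),
]

def quarterly_defect_ratio(rows):
    """Aggregate defects per change by quarter to expose the trend."""
    totals = defaultdict(lambda: [0, 0])
    for quarter, defects, changes in rows:
        totals[quarter][0] += defects
        totals[quarter][1] += changes
    return {q: round(d / c, 3) for q, (d, c) in sorted(totals.items())}

print(quarterly_defect_ratio(releases))
# {'2022-Q1': 0.28, '2022-Q2': 0.175} -- a falling ratio, as targeted
```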

The Need for Continuous Measurements

Adopting new processes and implementing new productivity tools will not create remarkable improvement unless the impact is measured continuously and adjustments are trialled through small experiments that run no longer than a usual development sprint.

Figure 7: SQUAD Leaderboard ranking based on six quality metrics

Experiment outcomes are automatically visible on SQUAD, and teams can choose to adopt the adjustments that enhance their processes. To embed this in our platform's culture, we have repurposed our primal competitive instinct as a source of motivation: teams in LCS & Audit strive for high scores (the maximum achievable is 9) across six quality metrics, which marks a significant improvement in their applications' quality and overall customer experience.
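The scoring bands are not published here, so the sketch below only shows the mechanics under one plausible assumption: each of the six metrics earns 0 to 1.5 points against ascending thresholds, so a perfect team scores 6 × 1.5 = 9.

```python
# Hypothetical banding: each metric earns 0, 0.5, 1.0 or 1.5 points,
# so six metrics give a maximum score of 9. SQUAD's real bands and
# thresholds may differ; this only illustrates the leaderboard idea.
METRIC_BANDS = [0.0, 0.5, 1.0, 1.5]

def metric_score(value: float, thresholds: tuple[float, float, float]) -> float:
    """Map a metric value to a band given three ascending thresholds."""
    band = sum(value >= t for t in thresholds)
    return METRIC_BANDS[band]

def team_score(metrics: dict[str, float], rules: dict[str, tuple]) -> float:
    """Sum banded scores over every metric named in the rules."""
    return sum(metric_score(metrics[name], rules[name]) for name in rules)

# Two of the six metrics, with assumed thresholds, for brevity.
rules = {"unit_coverage": (50, 65, 80), "sit_coverage": (60, 75, 90)}
print(team_score({"unit_coverage": 82, "sit_coverage": 70}, rules))
# 2.0 of a possible 3.0 for these two metrics
```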

Conclusion

A line famously attributed to Aristotle says, "Quality is not an act, it is a habit". With SQUAD acting as the checkpoint authority, the consolidated metrics paint an increasingly high-resolution picture of quality as the software passes through multiple stages of rigorous testing and debugging.

The impact of quality on efficiency and productivity need no longer be hidden: the correlation becomes evident when metrics are viewed together as a whole. Continuously measuring and improving the effectiveness of development processes and practices builds a culture that empowers agile teams to succeed in driving efficiency through quality.

Poorvi is a Senior Vice President at DBS and spends most of her time helping teams improve the quality of their software products.
