3 Practical Engineering Metrics to Understand Quality and Efficiency

Keshav Vasudevan
Published in Product Stories · Feb 13, 2020

Many forward-thinking product companies are obsessed with customer metrics, ranging from marketing click-through rates to product engagement metrics.

But operational efficiency within an organization is also important to measure and optimize for. Choosing the right metrics that incentivize good habits and provide meaningful insight can be challenging.

Engineering is the lifeblood of a product-driven company. Entire business units in product-centric companies organize themselves around engineering release cycles to do their jobs better. Demand generation teams create go-to-market plans for upcoming releases, product marketers and bloggers write articles and determine how to message customer-facing engineering deliverables, and sales tries to prospect new leads and sell to existing ones based on what engineering has delivered.

This is why CEOs and other executives ask engineering leaders for productivity metrics to assess their contribution to the business. Most good engineers know it’s hard to measure productivity, especially given the uncertainty involved in the art of building. There is no “set path” to create, and asking engineering to measure their activity-based productivity may seem bureaucratic and archaic in the modern age.

Engineering Metrics that Matter

You can measure almost anything, but you can’t pay attention to everything. The more metrics you track, the less importance you give each of them.

Instead of focusing attention on activity-based metrics like the time spent on each ticket, the focus can be on value delivered to the customer. This emphasizes attributing work to, and measuring how efficiently you deliver, the capabilities that improve the life of the customer. This is in line with outcome-based innovation, wherein the entire business team focuses its attention on optimizing the value delivered to customers.

At SmartBear, product managers sit with engineering leaders to figure out what's really important to the business. Based on some of the work we've done, we've identified key metrics that help us understand engineering efficiency and engineering's contribution to the overall business.

Value of Delivery

The value of delivery focuses on identifying where engineering is spending most of its time. You can group engineering work into the following categories:

  • Feature: New product functionality, a new product capability, a product enhancement, or an architectural element.
  • Voice of Customer (VOC): Bugs for issues reported by customers that require further work by the development team.
  • Enhancements: Improvements to existing features that deliver value to the end user on a smaller scale.
  • Maintenance: Technical engineering tasks and debt to support the product, e.g., creating end-to-end tests before release.

Based on the above definitions, you could start tracking the time or story points delivered in each category over a specific time period. Below is an example of how this could look per month.
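If your issue tracker can export resolved tickets with a category label and story points, a few lines of scripting are enough to produce this breakdown. Below is a minimal sketch in Python; the field names and sample tickets are hypothetical placeholders, not any specific tool's export format.

```python
# Sum delivered story points per month and per category.
# The ticket dicts below are hypothetical; adapt the fields to your own export.
from collections import defaultdict

tickets = [
    {"category": "Feature",     "points": 8, "resolved_month": "2020-01"},
    {"category": "VOC",         "points": 3, "resolved_month": "2020-01"},
    {"category": "Enhancement", "points": 5, "resolved_month": "2020-02"},
    {"category": "Maintenance", "points": 2, "resolved_month": "2020-02"},
]

breakdown = defaultdict(int)
for t in tickets:
    breakdown[(t["resolved_month"], t["category"])] += t["points"]

for (month, category), points in sorted(breakdown.items()):
    print(f"{month}  {category:<12} {points} pts")
```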

Ideally, you’d like to maximize time on features and enhancements, keep maintenance-related work at a steady rate, and minimize VOC.

Quality of Delivery

The quality of delivery metric focuses on the caliber of the product delivered every release cycle. This means tracking the bugs reported by customers over a time period.

You can group these bugs by priority, from minor up to critical and blocker, and analyze trends in the number of critical or high-priority bugs opened by customers that were not caught by QA during testing.
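One way to spot that trend is to count customer-reported bugs per priority per month and watch the critical and blocker counts. Here is a minimal sketch under the same caveat as above: the fields and sample data are hypothetical, so adapt them to your tracker's schema.

```python
# Count customer-reported bugs per (month, priority); rising Critical/Blocker
# counts suggest issues are slipping past QA. Sample data is hypothetical.
from collections import Counter

customer_bugs = [
    {"priority": "Blocker",  "month": "2020-01"},
    {"priority": "Critical", "month": "2020-01"},
    {"priority": "Minor",    "month": "2020-02"},
    {"priority": "Critical", "month": "2020-02"},
]

trend = Counter((b["month"], b["priority"]) for b in customer_bugs)
for (month, priority), count in sorted(trend.items()):
    print(f"{month}  {priority:<8} {count}")
```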

Another interesting way to assess quality of delivery, and how quickly your engineering team resolves customer-reported bugs, is a cumulative chart showing the number of created bugs versus the number of resolved bugs.

A high and growing area between the two lines indicates that the team is not resolving issues as fast as they are opened.

Growing area indicates customer bugs aren’t fixed as fast as they’re detected

On the other hand, a narrowing area between the two lines indicates a healthy appetite for resolving customer-related issues and deploying the fixes.

Narrowing area indicates customer bugs are promptly fixed and released

Ideally, if you have a continuous deployment process set up with sufficient test coverage, you’ll be in a better position to resolve issues and push fixes faster.
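If you want to build this cumulative created-vs-resolved chart from your own data, here is a minimal sketch. It assumes a hypothetical list of customer bugs with created and resolved dates; the two running totals it prints are the series you would plot as the two lines.

```python
# Build cumulative created vs. resolved bug counts per month.
# The bug records are hypothetical; still-open bugs have resolved=None.
from datetime import date
from itertools import accumulate

bugs = [
    {"created": date(2020, 1, 5),  "resolved": date(2020, 1, 20)},
    {"created": date(2020, 1, 12), "resolved": None},
    {"created": date(2020, 2, 3),  "resolved": date(2020, 2, 10)},
]

months = ["2020-01", "2020-02"]
created = [sum(1 for b in bugs if b["created"].strftime("%Y-%m") == m) for m in months]
resolved = [sum(1 for b in bugs if b["resolved"] and b["resolved"].strftime("%Y-%m") == m) for m in months]

# A growing gap between these two series means bugs pile up faster than they
# are fixed; a narrowing gap means the team is catching up.
cumulative_created = list(accumulate(created))
cumulative_resolved = list(accumulate(resolved))
print(cumulative_created, cumulative_resolved)  # [2, 3] [1, 2]
```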

Say-Do Ratio

The say-do ratio measures how well the team scopes and estimates big feature tickets before committing to their delivery.

Before the start of, say, a quarter, product management and engineering can determine which valuable features can realistically be accomplished and delivered to customers. The number of features engineering “said” it would deliver versus what it actually delivered is the say-do ratio.

For example, if engineering committed to delivering 10 features but could only deliver 8 by the end of the cycle, the say-do ratio would be 8/10 = 80%.
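The calculation itself is trivial. Here is a minimal sketch, assuming you track committed and delivered feature identifiers per quarter (the ticket keys below are hypothetical).

```python
# Say-do ratio: share of committed features actually delivered in the cycle.
# Feature keys are hypothetical placeholders.
committed = {f"FEAT-{i}" for i in range(101, 111)}   # 10 committed features
delivered = committed - {"FEAT-104", "FEAT-109"}      # 2 slipped, 8 delivered

say_do_ratio = len(delivered & committed) / len(committed)
print(f"Say-do ratio: {say_do_ratio:.0%}")  # Say-do ratio: 80%
```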

This is, in my opinion, a much better way of measuring scope estimation for value-delivering features than, say, burn-down charts, which seek to measure scope estimation and delivery of every single ticket committed to. Burn-down charts can be too granular without providing enough actionable insight. As Errietta Kostala mentions in a blog post:

Management sometimes use burn down charts to tell people to do more work in the same amount of time without fixing any of the things that slow people down. Yes, even in good companies.

Teams sometimes brag about the number of points they achieved and compete with other teams. Yes, really. This is completely meaningless because one team’s 5 is different from another team’s 5. “Why is X startup doing 100 points a sprint and we only do 70!?”. As good a question as “why does Jane have apples and Chloe have oranges?”

Closing Notes

As you can see, there are quite a few good practices for measuring software engineering quality and efficiency without being bureaucratic and archaic. I may have missed some; if so, please share your own in the comments below!

Special thanks to the authors whose work I referenced when writing this post.

