Running With the Lemmings In Metrics Hell

If you’re going to use a metric to drive an outcome, be sure that the metric 100% reflects the desired outcome.

Jeff Enderwick
3 min read · Apr 10, 2014

I was doing useless work on a Saturday — writing up an email to my SVP explaining why my software development metrics were bad. With every keystroke, I thought of replacing the entire email with “Fuck it. I quit.” I respected the guy. He was doing his best to run a $2B business, and he didn’t need the heat that I was bringing in with my bad “MTTR” (mean time to resolution) number.

My employer, a very large company, had a problem with customer-found bugs not getting fixed in a timely manner. Or so they thought. I thought that the problem was actually that customers weren’t getting the bug fixes that they cared about in a timely manner — not the same thing. I was okay with “customer-found bugs” as a proxy for “bug fixes that customers cared about”, and I made sure to deliver those fixes to the customer ASAP.

So why was my “MTTR” for customer-found bugs so bad? Because I didn’t care when the bugs got fixed, as long as they made it out in the next release. You see, the corporate metric tracked when a supposed fix was committed to the source code repository. The corporate metric didn’t track when the fix passed QA and actually reached the customer via a release. There were teams inside the company “fixing” customer-found bugs in a week, but not getting a release out with that fix for more than a year and a half!
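To make the gap concrete, here’s a minimal sketch of the two ways to compute that MTTR number. The bug records and dates are hypothetical, invented purely for illustration; the point is just that the same bugs score radically differently depending on whether “resolved” means “fix committed” or “fix shipped.”

```python
from datetime import date

# Hypothetical customer-found bugs (invented dates): when each was
# reported, when a fix landed in the source repo, and when that fix
# actually shipped to customers in a release.
bugs = [
    # (reported,         fix_committed,      fix_released)
    (date(2012, 1, 9),   date(2012, 1, 16),  date(2013, 8, 1)),
    (date(2012, 3, 5),   date(2012, 3, 12),  date(2013, 8, 1)),
    (date(2012, 6, 4),   date(2012, 6, 18),  date(2013, 8, 1)),
]

def mttr_days(bugs, resolved_at):
    """Mean time to resolution, in days, for a given definition of 'resolved'."""
    return sum((resolved_at(b) - b[0]).days for b in bugs) / len(bugs)

# The corporate metric: a bug counts as "resolved" when the fix is committed.
print(f"commit-based MTTR:  {mttr_days(bugs, lambda b: b[1]):6.1f} days")  # ~9 days

# What the customer experiences: resolved only when the fix ships.
print(f"release-based MTTR: {mttr_days(bugs, lambda b: b[2]):6.1f} days")  # ~500 days
```

Same bugs, same fixes; one definition says the team turns bugs around in about a week, the other says customers wait well over a year.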

When I manage a project, I am driven by dependencies and risks. Make that dependencies and RISKS. If I have a trade-off between a low-risk, customer-found issue and a high-risk, internally-found issue, the latter comes first. We can just make sure that the customer-found issue gets fixed prior to release.

The email was sent. I got lukewarm agreement, and I was sent off to the metrics people to see if they’d fix their metric. That made me even more pissed, because I didn’t have spare time to spend chasing goats.

Short, sad story — the metrics team was not at all interested in fixing the metric to reflect the customer’s interests. Pretty much because they were lazy. What really shocked me was that the powers-that-be were ready to get upset about a “bad score”, but didn’t flip out of their chairs when it was made clear that ALL the data was bogus.

What lessons are here?

  • If you’re going to use a metric to drive an outcome, be sure that the metric 100% measures the desired outcome. Be prepared to rectify or discard your metric if it is flawed.
  • Metrics drive behavior. If the behavior isn’t aligned with the desired outcome, you’re wasting time, and the smart people are getting frustrated. Goats are happier, because gaming a metric is easier than kicking ass.
  • Sometimes metrics are better as tools at a lower level in the organization than they are at a higher level. The reason is that at a lower level, a metric can be evaluated in context, to see whether there is a problem or not. Either way, the metric is meaningful in that it drives the examination — it causes you to look. At a higher level in the organization, the context isn’t apparent, and that examine-in-context pattern doesn’t scale.

