Part 2: What you don’t measure might be hurting you

Paul Hudson Mack · Published in Integral · Jun 3, 2020 · 5 min read

Looking at what’s not measured can be key to managing risk and making better use of your data

Photo by Vladimir Proskurovskiy on Unsplash

A few years ago…{checks calendar}…ok, a lot has happened…it appears it was only a few months ago that I started as a product manager at Integral, a Detroit-based consulting firm. The organization is dedicated to a solid core group of practices I knew fairly well (lean startup, human-centered design, agile) and a few I was less familiar with (balanced teams, extreme programming, test-driven development). As mentioned in Part 1 of this two-part series, I came to this role with a diverse set of experiences, which have taught me how important “Who?” is in metrics.

The last few months have been a whole different kind of learning: seeing a new combination of practices implemented in organizational leadership and consulting contexts I know well has been eye-opening and fun. It has also helped me develop increasing clarity on a lot of things I’ve thought for years. Watching communities, organizations, institutions, businesses, and even myself, I’ve often noticed how much the things people avoid get in the way of achieving their goals.

Relative to metrics, I often prioritize this question: What isn’t being measured? This isn’t contrarianism or a demand that we measure everything possible; I think it’s important to pick a limited set of measures that our brainpower and organizational structures can manage well. However, the things we aren’t measuring don’t stop existing, and they can have significant impacts on success. I find it wise to check in on what is not included in the chosen set of metrics, both to be aware of the risks and to pressure-test that these are the right data (and analytics / data-driven processes) to help reach the goals.

Here are a few examples of things that are often left unmeasured and ignored, and that I find useful to pay attention to:

The work of getting the data — This may seem obvious, but what leader hasn’t been guilty of imagining what data would be perfect for an experiment, dashboard, program, etc., without giving much thought to how to get that data or what to do with it? If the team avoids disagreeing with the leader (perhaps because of personality; more likely because most humans tend to avoid disagreeing with strong leaders) and/or doesn’t have a culture of everyone raising risks and sharing context, the team might not raise how much work is involved in gathering that brilliant data set, or how little of the analytics is actually used to make better decisions. They might keep producing that perfect data and analytics at an exorbitant cost, and leaders won’t know until a lot of damage has been done. I’ve found the best solution for this is making sure that ‘what data,’ ‘how to get the data,’ and ‘what to do with it’ are all inextricably linked: always talk about exactly where data will come from and how it will be used before agreeing to use any metric.

The work of planning — Entire planning systems (looking at you, agile…) have been built to minimize the amount of planning overhead required to get work done. However, if you’ve ever joined a program-wide stand-up with all of its five (12? 25? 42?) teams, or sat through a global organization’s program increment planning, you can begin to wonder whether planning ramps up into a lot of wasted talking and document creation. It’s valuable to consider how much planning overhead is really needed. If our planning systems aren’t directly leading to better outcomes, or get in the way of speed to results, it’s worth looking at how to cut the fat. If you’re not sure how valuable your planning systems are, ask a junior team member to give specific examples of how the systems have helped them. When I find myself in systems with wasted planning, I will sometimes raise this concern with the group or the appropriate authorities. Other times it’s valuable to simply skip the meetings, give that planning tool the minimal attention required to avoid getting fired, focus on immediately needed work, build things, make shit happen — get outcomes, and let those results speak.

Low value work — Alright, so I may have just taken planning to the whipping post, but I find planning is often most valuable when it looks a bit less at exactly what work will get done and more at ‘optimizing for the work not done’ (credit to my colleague Jeff for introducing me to this phrase). I helped one client coordinate across dozens of independent projects all focused on the same social impact issue; they were spending millions of dollars on similar projects and missing out on economies of scale, shared learning across projects, and the chance to tell a big collective story about the impact they were having. At another client, there were no fewer than three large-scale eCommerce projects (two of them already in production) all seeking to sell the same product, with no coordination or sharing at all between the projects. Organizations that want to avoid these situations need to:

  • Do enough pre-work/planning to be clear with every team member on what is in and out of bounds
  • Keep communicating as the work proceeds
  • Hold everyone accountable for avoiding rework and duplicated work

If planning and communications focus on optimizing for the work not done, organizations can avoid a ton of work that doesn’t add or capture value (or whose value isn’t aligned to the organization’s goals), find more opportunities to collaborate and work more efficiently, and ultimately capture higher-value outcomes.

Status quo — anyone who’s confronted a stubborn naysayer will find this check useful. It’s often assumed that the status quo is free and/or minimally harmful. For example, I’ve heard talent initiatives (e.g., Teach for America or a HiPo leadership accelerator) criticized for putting underqualified people in roles, with the (often unspoken) assumption that without the program, much more highly qualified people would be found and paid enough to make the roles worth taking. In reality, it’s very hard to recruit highly talented people, and high-need roles are often instead filled by ‘qualified’ people who don’t ultimately get the needed results, or who even do more harm than good. It’s also common, particularly at more traditional companies, for executives to balk at the expenses associated with innovation, and then watch while a start-up eats up a market where the incumbent previously had stable dominance. Don’t compare innovations solely to the risk of failure in the effort; compare them to the risks of the other options, including the status quo. Question the assumption that the status quo is stable or that all its costs and risks are known.

What about you? I’m curious: how have you seen these examples play out? What other things have you noticed don’t get measured, yet can have a significant impact on getting results and reaching goals? How do you account for the things that aren’t being measured?
