Measuring employee impact

Aaron · story-time.io
Dec 14, 2021

Helping your staff achieve results

While leading product engineering teams in various capacities, one key metric I’ve found challenging to surface is the impact my staff bring. How are we bringing value to the business? Are we working on the right thing that will help move the needle?

One of the challenges around empowering the team is that, outside of engineering-led organisations, my teams are seen as cost centres: a mere necessity of doing business, not a partner in building something great.

Have you tried OKRs?

… yes, and I’m yet to work in an environment where they truly work.

My experience with OKRs typically falls into three categories:

  • the OKR is so far removed from customer value that value becomes difficult to measure,
  • they are yes/no measures that dictate the goal precisely. These are demotivating because there’s no opportunity to experiment and try to make a difference,
  • the worst I find are vague enough that any achievement means the team has achieved success. Participation-trophy style, and nobody is truly motivated here.

Measure individual output?

When people recommend we measure an individual’s output, they often drop the “how many lines of code are they writing?” Any developer worth their salt knows that lines of code do not equal value. Early in my career I’d comfortably churn out thousands of lines a day; were they bringing the business value? At the time, yes. Could I have solved the client’s problem in fewer lines? With the experience I have now, that’s a resounding yes!

For an example from sales: should we reward people for the number of cold emails they send? Of course we shouldn’t. A salesperson just sending a pitch to anyone with an email address is not how we reward value; the value is in the work to validate a customer, find the right fit, prepare the pitch, and close. The quality of the engagement matters, not the number of engagements.

“What about the number of features?” This is defined by the rest of the business. What are the customers asking for? What does sales have in the pipeline that would matter? Do these features deliver customer value, or are they captain’s calls? I’ve worked in teams that delivered huge features, taking over a dozen people months to implement, that were left to gather dust or adopted only after years, never recouping the initial outlay. Too many features like this and the team becomes demotivated and leans towards sand-bagging deliverables. “Under commit, over deliver” is the phrase you’ll hear.

Table stakes

Before you start looking at how I measure value, you need to understand the Accelerate metrics. In any modern engineering organisation these are the table stakes:

  • Lead Time — time from code check-in to customer use
  • Deployment Frequency — how often code is deployed
  • Mean Time to Restore — how fast the team can recover from an incident
  • Change Fail Percentage — how often changes break the production environment

Here’s a short article that highlights them (https://www.holistics.io/blog/accelerate-measure-software-development/), but it’s also worth grabbing the book: https://www.goodreads.com/book/show/35747076-accelerate
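To make the four metrics concrete, here is a minimal sketch of how they could be computed from a deployment log. The record fields (`committed`, `deployed`, `failed`, `restored`) and the sample data are illustrative assumptions, not any real tool’s schema:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records; field names and values are made up for illustration.
deploys = [
    {"committed": datetime(2021, 12, 1, 9), "deployed": datetime(2021, 12, 1, 15),
     "failed": False, "restored": None},
    {"committed": datetime(2021, 12, 2, 10), "deployed": datetime(2021, 12, 3, 11),
     "failed": True, "restored": datetime(2021, 12, 3, 12)},
    {"committed": datetime(2021, 12, 6, 14), "deployed": datetime(2021, 12, 7, 9),
     "failed": False, "restored": None},
]

# Lead Time: commit-to-deploy, averaged across deployments
lead_times = [d["deployed"] - d["committed"] for d in deploys]
avg_lead = sum(lead_times, timedelta()) / len(lead_times)

# Deployment Frequency: deployments per week over the observed window
window = max(d["deployed"] for d in deploys) - min(d["deployed"] for d in deploys)
per_week = len(deploys) / max(window.days / 7, 1)

# Change Fail Percentage: share of deployments that broke production
fail_pct = 100 * sum(d["failed"] for d in deploys) / len(deploys)

# Mean Time to Restore: average duration of failed-deploy incidents
restores = [d["restored"] - d["deployed"] for d in deploys if d["failed"]]
mttr = sum(restores, timedelta()) / len(restores) if restores else None
```

In practice these numbers would come from your CI/CD and incident tooling rather than a hand-written list, but the arithmetic is this simple.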

Having said that:

How do I measure impact?

Being in a cost centre, having the business define direction and need, I look inwards at what we can influence. The top 3 I look for are:

“Velocity”

Is the team ensuring that we are able to deliver on the need quickly, and creating initiatives that enforce this? I’m looking at things like design systems, a minimal-value focus (YAGNI), and architectural decisions aligned to the “Now, Next” part of the roadmap. The type of decisions that I know will help us achieve what we need in the short term and accelerate us through the needs coming up next. The measurement here is that we are completing more business requests of similar difficulty, faster.
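The “similar difficulty, faster” measurement can be sketched as a trend of median delivery time per difficulty bucket. The quarters, buckets, and day counts below are invented for illustration:

```python
from statistics import median

# Hypothetical delivery log: (quarter, difficulty bucket, days from start to done).
completed = [
    ("Q1", "medium", 12), ("Q1", "medium", 15), ("Q1", "medium", 11),
    ("Q2", "medium", 9),  ("Q2", "medium", 10), ("Q2", "medium", 8),
]

def median_days(quarter, bucket):
    """Median days-to-deliver for one difficulty bucket in one quarter."""
    return median(d for q, b, d in completed if q == quarter and b == bucket)

# The signal: same-difficulty work completing faster quarter over quarter.
print(median_days("Q1", "medium"))  # 12
print(median_days("Q2", "medium"))  # 9
```

Medians are used rather than averages so one outlier ticket doesn’t swamp the trend.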

Another view is seeing who leverages the work of others, because that is work done once, helping others move. Building pieces that are used by other members of the team, and by other divisions, is a clear metric for impact. (Hint: this is where Story Time plays)

“Quality”

Quality really depends on the organisation. Is UI/UX paramount for your company? Then I’d focus on that. Are they delivering the right customer experience with minimal need for rework?

There are also bugs. Are they delivering more bugs than they should be? This is a double-edged sword, because introducing lots of bugs can mean the team is moving too fast, making more mistakes under excess pressure, or simply cutting too many corners. If you estimate your tickets in an agile environment, I propose you don’t point bugs. A bug typically arises because a feature wasn’t correctly implemented the first time. Your team has paid the cost once; now they need to resolve it.

This ties into “Change Fail Percentage” above, but I try to think more broadly around process to see it. I look at this from customer support requests: the team should have testing in place to catch issues, and I want to see what makes it out.
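The “what makes it out” view can be reduced to one number: the share of known bugs that reached customers before the team caught them. The counts and sources here are purely illustrative:

```python
# Hypothetical counts for one release window.
bugs_caught_internally = 18   # found by tests/QA before release
bugs_from_support = 4         # reached customers, came back via support requests

# Escaped-defect rate: share of all known bugs that made it to customers.
total = bugs_caught_internally + bugs_from_support
escape_rate = 100 * bugs_from_support / total
print(f"{escape_rate:.1f}% of bugs escaped to customers")
```

A rising rate suggests the testing net has holes; a falling one suggests the process is catching issues before customers do.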

“Coaching”

When I think of coaching, it’s not just in the space of “helping other engineers improve” but also “do our stakeholders understand what they’re asking?” If the other areas understand more of what we do and the cost of delivery, they will hopefully make different decisions so we can deliver more value.

I’m looking at the features a PM is asking for. Do they understand the ask will require large-scale rework? Have they had other suggestions from engineers to bring value that gets close to the mark? Is the design team asking for pixel-pushing perfection? Are they aware of the time cost of breaking from standards?

This measurement requires pulse surveys or stakeholder referrals. If they have confidence in our work then we are delivering.

So have I missed anything?

How do you measure performance? Do you have even more explicit ways to automate this?

This article was first published on https://story-time.io
