What Are They?
In our search for the best developer performance metrics, we're sharing the results of a recent campaign.
Measuring developer productivity has long been debated.
A simple search on the matter will yield many results from developers who adamantly oppose any form of quantitative metric for assessing developer performance.
“The best way to be a 10x developer is to help 5 other developers be 2x developers.” — Eric Elliott
“Any kind of quantitative metrics for software developers tend to actually reduce overall productivity. They also have negative impact on motivation, and will eventually drive good people out.” — vartec
While the above views hold some merit, there HAS to be some way of measuring the quality of individual developers. Clearly, measuring developer performance is an inexact science with a lot of controversy, but how do you know where to focus if you don’t measure? And for a CEO, focus is key.
Making matters worse, non-technical folks like me face even more pushback from developers, who typically feel we don’t know enough about their work. But, ultimately, this is one reason why clear methods of measurement need to be implemented and clearly communicated. If the entire team is on the same page, understanding the metrics for success, the company is better prepared to excel!
Developers — What say you?
In search of the best developer performance metrics, we reached out to our Twitter followers (developers only), asking them to offer up their favorite performance metrics.
To our surprise, we had over 300 developers respond!
Here are the results:
While I wasn’t surprised with the variety of answers, I was a bit shocked at the order in which these metrics scored — especially the top two! I’m not alone in this sentiment…
I personally don’t think these are great metrics. Compatibility isn’t an actual performance metric. And frankly, cultural compatibility is something that should be tackled before the hire is made.
Breaking it down
With so many different responses (metrics), I decided to take a closer look at the top 3 — pulling out direct responses from our surveyed developers. * Note: We cover “Other” below.
1) Speed of Developer
Speed — Yes and No… It’s important, but #1 on the list? I can’t get behind that.
I was a bit shocked at the overwhelming number of responses similar to this. I knew speed would be common feedback, but to see it at #1 was a complete surprise to me.
Yes, speed is important, but setting this as the #1 performance indicator is extremely short-sighted. Having your development team be constantly measured by how quickly they get the job done, in my eyes, will lead to burnout — quickly.
2) Developer Personality
I strongly disagree with this metric.
As I mentioned above — I think it’s really important that cultural compatibility is a main factor when initially evaluating a developer for hire. But in terms of performance metrics, personality isn’t a part of the mix. It’s not a quantitative metric.
Moreover, just like any other hire, it’s critical to continue to evaluate if the person is a good culture fit once they’ve started working.
I wasted the first year and a half of my first startup on developers who were great friends, worked 24/7 and always had a smile on their face. But the code was junk, and as a non-technical founder I didn’t know any better. They built our product on a proprietary Java stack, and it took weeks for new hires to figure out how to start contributing quality code.
The mobile app kept crashing, and the loading button became our company mascot: the “Spinny of Death.”
So, yes, it’s great to have happy, shiny people working with you, but it’s just not a measurable indicator of developer productivity.
3) Lines of Code
We’re making progress with this one.
This metric carries some weight for me (maybe not top 3, but relevant), but not necessarily because it’s a legit metric to measure a developer’s productivity. For me, it’s more about how emotional developers get about it.
People either love or hate this metric — There is no in between!
Still, despite all the haters, plenty of people use it as a basis for assessing developer performance.
Before Bliss, my one secret non-techie hack to keep tabs on developer performance was to dive into the code commits on GitHub to see which developers were active.
I would go to every active repository and manually write down the number of code commits from my developers.
More often than not, stretches of inactivity or a material drop-off in code commit frequency were a good indicator that something was going on with the developer (e.g. girlfriend troubles, moonlighting, fatigue with the business).
So, while I agree that Lines of Code, alone, shouldn’t be the key metric of performance, I believe it is a fundamental piece of the equation.
[Note: I realize some languages require more lines of code than others (e.g. Java vs. Ruby — Java developers are going to have ~10x more lines). I’m not suggesting comparing one developer against another. But seeing an individual’s total commits and lines of code added/removed over a long period of time (3 months+) can be a good indicator of productivity.]
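For what it’s worth, this kind of tally doesn’t have to be done by hand on GitHub. As a rough sketch (not how Bliss actually does it), the output of `git log --format='@%an' --numstat` can be aggregated into per-developer added/removed line counts; everything below, including the sample log text, is illustrative.

```python
from collections import defaultdict

def tally_numstat(log_text):
    """Aggregate lines added/removed per author from
    `git log --format='@%an' --numstat` style output."""
    totals = defaultdict(lambda: [0, 0])  # author -> [added, removed]
    author = None
    for line in log_text.splitlines():
        if line.startswith("@"):
            author = line[1:]  # commit header: author name
        elif author and line.strip():
            added, removed, _path = line.split("\t", 2)
            # binary files show up as "-", so only count numeric entries
            if added.isdigit() and removed.isdigit():
                totals[author][0] += int(added)
                totals[author][1] += int(removed)
    return dict(totals)

# Hypothetical log output for one developer
sample = "@Ian\n120\t15\tapp/main.rb\n-\t-\tlogo.png\n@Ian\n30\t2\tapp/util.rb\n"
print(tally_numstat(sample))  # {'Ian': [150, 17]}
```

Run over a 3+ month window per the note above, the same tally gives the long-term added/removed totals without any manual copying from the GitHub UI.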
What’s behind ‘other’?
Good point! But, with such a large number of responses with one-off metrics I couldn’t ignore all of them. If anything, this slice of the pie highlights the inefficiencies and vagueness around measuring developer performance.
Some of the metrics/responses lumped into the “Other” category include: number of features, ability to ship code, QA, unit tests, memory usage, CPU utilization and ability to meet deadlines.
To me, this is a good indicator there will never be one set of metrics applicable to all companies. Every business is different and will want to have its own key developer performance metrics.
My personal favorite — Technical Debt
Tech debt didn’t have a strong showing in this research campaign with just 7.2% of respondents listing it as #1, but for me it is the key metric to developer performance.
Right now Ian & team are running up huge amounts of tech debt and it’s awesome!
Out of Ian’s ~4k lines this month, ~74% of it is technical debt!
Ok, maybe this is a tad aggressive (I’ll have a talk with Ian on Monday AM), but we are testing a lot of new features and we don’t want to over-engineer on these features until we know our customers are clearly engaging with them.
If technical debt continues to rise and we still haven’t proven these new features out, we’ll know to slow down on other new features in our product roadmap.
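The decision rule here is simple enough to sketch in a few lines. The numbers below are hypothetical (chosen to mirror the ~4k lines and ~74% figures above), and the 50% threshold is an arbitrary stand-in, not an actual Bliss policy.

```python
def debt_ratio(debt_lines, total_lines):
    """Share of a month's new lines flagged as technical debt."""
    return debt_lines / total_lines

# Hypothetical monthly counts mirroring the article's figures
total_new_lines = 4000
debt_lines = 2960

ratio = debt_ratio(debt_lines, total_new_lines)
print(f"{ratio:.0%}")  # 74%
if ratio > 0.5:  # assumed threshold for this sketch
    print("Tech debt is high: slow new features until current ones are proven")
```

The useful part isn’t the arithmetic; it’s tracking the ratio month over month and tying a concrete roadmap decision to it.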
Quantitative measuring is available and necessary!
While we can debate which metrics are key indicators of developer performance, it’s tough to stand behind the argument that they shouldn’t be measured at all.
In the first article referenced above, the author, Eric Elliott, shares his #1 method of measurement — interviewing peers about how helpful and knowledgeable a developer is… Now, his way may not be my first choice, but I do like his methodology. More importantly, though, I appreciate that he has a clear, strategic way of measuring performance.
That is key to it all — have a method of measurement!
In order to do so, it’s important to understand the stage of your company. If you’re very early-stage with a short runway, then speed and tech debt may be key metrics. If you’re further along, test coverage and quality might be 1 & 2. Point being, know where you are as an organization, set a plan, implement a strategy and MEASURE YOUR DEVELOPERS’ PERFORMANCE!
The tools and metrics are available and they can and should be tracked. Determine which are the most important for moving your startup forward and begin measuring.