Some great insights here, but I don't think they tell the whole story. Let me share the key practices I have seen drive success in performance measurement.
Good performance rating systems are incredibly difficult to create. They are, by necessity, a combination of qualitative (subjective) and quantitative (objective) measures, so we first have to accept that this is not a perfect science. Let me offer some suggestions I have seen work in sophisticated systems, some of which were created by the exec teams of reputable companies over an 18-month project.
I'm not at all convinced you should reward only teams, because at some point you have to tease out the star performers and reward them proportionately. And if you can only promote one person, how do you choose the right one when it's the 'team' that has been successful? Anyway, here are the most important factors in building a great system:
- You have to think very deeply about the behaviors and metrics that drive success. It's both. Just as important is anticipating the counter-productive activity you might cause as a side effect; that's your test for whether you're selecting the right things. It requires a lot of time and combines technical skills with 'soft' skills such as collaboration. It's qualitative and quantitative at once, which makes things tricky right from the get-go.
- You are likely to break performance measurement down into components: a portion rewarded for the company achieving its goals, a portion for achieving team/department goals, and a portion for individual goals. You alter the proportions based on the person's level. You also have to test that company, team, and individual goals do not work against each other, as they sometimes do, sadly.
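To make the component idea concrete, here is a minimal sketch of a level-weighted composite score. The level names, weights, and numbers are all illustrative, not a recommendation:

```python
# Hypothetical sketch: a composite performance score blended from
# company, team, and individual components, with weights that shift
# by seniority level. All names and weights here are illustrative.

# More senior roles carry more company-level weight.
WEIGHTS = {
    "junior":    {"company": 0.10, "team": 0.30, "individual": 0.60},
    "manager":   {"company": 0.30, "team": 0.40, "individual": 0.30},
    "executive": {"company": 0.60, "team": 0.25, "individual": 0.15},
}

def composite_score(level, scores):
    """Weighted blend of goal-attainment scores (each in 0.0-1.0)."""
    w = WEIGHTS[level]
    return sum(w[part] * scores[part] for part in w)

# A manager who hit 80% of company, 90% of team, 70% of individual goals:
print(composite_score("manager", {"company": 0.8, "team": 0.9, "individual": 0.7}))
# 0.3*0.8 + 0.4*0.9 + 0.3*0.7 = 0.81
```

The point of writing it down this explicitly is that the weights become a visible, debatable choice rather than an implicit one.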
- You have to paint a clear picture of what success looks like up front and share it with your direct reports at the start of the cycle. That's easy for the quantitative stuff: accuracy and time measures allow for low/medium/high performance ratings. What we miss is the qualitative measures. I have built and used, with great success, a spreadsheet that demonstrates different levels of performance for qualitative skills. For example, say influencing skills are important in your company. You get a certain amount of credit for presenting an idea to an exec and persuading them to adopt your thinking. You get MORE credit if the exec had an ingoing bias against your thinking and you changed their mind. You have to think at this level of detail for every soft skill. I have done it. It takes a lot of time, but my teams have loved it. I don't think I have seen a single company try this, and it's a huge part of the issue.
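A sketch of the kind of rubric described above, using the influencing example. The graded descriptions and point values are illustrative placeholders, not a standard:

```python
# Hypothetical rubric sketch: each qualitative skill gets concrete,
# graded levels of demonstrated performance, defined before the cycle
# starts. Skill names, levels, and points are all illustrative.

RUBRIC = {
    "influencing": [
        (1, "Presented an idea clearly to an exec"),
        (2, "Presented an idea and the exec adopted it"),
        (3, "Changed the mind of an exec with an ingoing bias against the idea"),
    ],
}

def credit(skill, level):
    """Return (points, description) for a demonstrated level of a skill."""
    for points, description in RUBRIC[skill]:
        if points == level:
            return points, description
    raise ValueError(f"no level {level} defined for {skill}")
```

Whether it lives in a spreadsheet or in code, the value is the same: the levels are written down before the cycle, so credit is assigned against agreed definitions rather than year-end memory.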
- You need systems and discipline to track contributions. Most managers are too lazy and wait until year end, so they have little fodder to evaluate, and they capture the wrong things, and not enough things in general. I charge both myself and my direct reports with keeping track of all of their contributions, qualitative and quantitative, based on the system of measurement I devised, so they know, minute by minute and by their own evaluation, when they did something great. Say they are in a meeting, the team is trying to solve a problem, and there's an impasse. One team member comes up with a way to break the impasse, and that leads to forward progress. That needs to be captured, probably in real time. How are you doing that? If you don't have such a system, you are not capturing and archiving evidence of valuable contribution. Again, that's at the core of the problem.
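The tracking system can be as simple as a timestamped log. A minimal sketch, with field names and the example entry entirely my own invention:

```python
# Hypothetical sketch of a lightweight contribution log: each entry is
# captured close to real time and tagged qualitative or quantitative,
# so there is an evidence trail at review time. Fields are illustrative.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Contribution:
    who: str
    what: str          # e.g. "broke an impasse in the roadmap meeting"
    kind: str          # "qual" or "quant"
    skill: str         # which rubric skill it evidences
    when: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

log: list[Contribution] = []

def record(who, what, kind, skill):
    """Capture a contribution the moment it happens."""
    entry = Contribution(who, what, kind, skill)
    log.append(entry)
    return entry

record("alice", "proposed the framing that broke the impasse", "qual", "influencing")
```

The medium matters far less than the habit: the entry is made when the contribution happens, not reconstructed in December.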
- You're probably thinking, "Well, it's hard to measure the impact of causing a breakthrough in a meeting." Yup, this is why it's not a perfect science. Get over it. Sometimes it's easy, such as when an expression of creativity leads to a new product that generates $XXX in sales in its first year; that is easy to measure and reward. Usually, though, the qualitative stuff can't be measured directly, and here is how you can best deal with it: you need a way to calibrate performance, at least by comparing the net qualitative and quantitative contributions of person A and person B at the same level. I worked for a large company that did this. It compiled the entire book of work for the year for each person doing the same job at the same level with the same performance expectations. That information was laid out side by side, and the managers got together in a meeting to discuss each person at length and 'calibrate'. It took two days, which was entirely worth it, because we believed we needed to invest in our people as an essential resource. Once you look at each person side by side, you get a good sense of each person's impact relative to the others, so you can roughly bell-curve the employees and rate them accordingly. Don't get caught up on the negative connotations of my 'bell' terminology; we didn't force a percentage into the lower tiers if we felt everyone had made substantial contributions. We were allowed to rate everyone exceptional if we could justify it. It was a great system that kept things fair.
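The calibration step, reduced to its skeleton, assuming each person's book of work has been scored against a rubric like the one above. The names, scores, and the crude sum-of-points ordering are illustrative; in the real process the ranking is debated in the room, not computed:

```python
# Hypothetical calibration sketch: people at the same level are laid
# side by side with their scored "book of work" and ordered by total
# contribution. The sum is a starting point for discussion, not a verdict.

books = {
    "alice": [3, 2, 1],   # rubric points per logged contribution
    "bob":   [2, 2, 1],
    "carol": [3, 3, 2],
}

ranked = sorted(books, key=lambda name: sum(books[name]), reverse=True)
print(ranked)  # ['carol', 'alice', 'bob']  (totals 8, 6, 5)
```

The two-day meeting exists precisely because the numbers alone can't capture everything; the side-by-side layout just makes the discussion concrete.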
When I hire, I want to know only three things, and one is: "What impact can I expect from putting you on my team? How will I know the team is better as a net result of you joining it?" We need to be able to answer that about ourselves. I often get people answering STAR questions with some form of "Well, I was on this team, and yadda yadda yadda, we grew sales by 20%," to which I reply: OK, if I removed you from the team and put any one of your colleagues in your place, would the team still have achieved this result? Pretty much every time, I stump the candidate, because companies are not insightful enough about individual contribution, and thus employees have no idea how they are contributing either. A circular lack of insight that impedes our progress.
My system (not truly mine, but a concoction of things I invented plus parts of everything I have seen work) works about as well as a system can.
Hmmmm, maybe I need to turn this into a blog post…