Spider-Man, gaming systems and why your measures are ruining your teams
I have a confession to make.
I’m a huge data geek. I love metrics and measures.
In one job I was responsible for pulling together all of the other Test Managers’ weekly test progress reports and presenting them in a summary sheet that went off to the CTO and other senior managers.
I spent hours refining and automating a simple spreadsheet to collect data from the other managers’ reports and plot crazy graphs and trends. It was great fun, until I realized I’d made it so complicated that no one else had a chance in hell of running it if I was away. This meant that for almost a year I couldn’t take a Friday off work, because I was the only one who could do it… Genius.
The longer I work in software, the more I realize the vast majority of metrics we use with teams are ultimately pointless.
They either drive hugely detrimental behaviors in those asked to follow them or simply get gamed by teams who have far more important things to worry about. In the worst cases they make the ability of teams to produce valuable work secondary to the focus on hitting measures and targets.
Many years ago I picked up a phrase from one of those leaders you very rarely have the opportunity to work with. At the time I was working in financial services, and this leader was the type of person who tested our car insurance customer journey by buying a cheap car and reversing it into a tree, just so he could map the customer workflow through our systems.
The phrase he used has stuck with me ever since:
What gets measured, gets done
Take note of this phrase.
What does it mean?
It means people will focus on the things they are being measured on rather than the things that probably matter much more, because the measure is usually linked to some sort of recognition, reward or enforcement of the rules that govern how a team works. Dan Pink covered this ground thoroughly in Drive, his book on what really motivates us.
I’ve seen bad measures introduced that force the focus away from the important to the unimportant countless times. I’ve even done it myself, repeatedly.
In one of the first roles I had out of college, the company I worked for introduced a tool called EG_Workmanager. This tool took every task our admin team did and allocated an average time for completing it.
This meant that a relatively simple task, such as a change to a customer’s personal details (maybe an address change), was expected to take one of our admin team 10 minutes, whereas an IR2 (Information Request) was expected to take 20 minutes.
So far, so good right?
All these timings had been created by measuring our staff on how long they took to complete the task and then coming up with an average. But here comes the killer punch.
All of us in this department were measured on our efficiency at closing tasks, i.e. how many of these tasks we could complete per day. The assumption was that we were employed for seven hours a day, so we had seven hours’ worth of time in which to complete work (see the problem yet?). And here comes the coup de grâce.
We had a progression system called ‘Pay for Performance’. The very name makes me shudder now, but what it meant was that you moved up in progression (and pay as a result) by being highly efficient in the amount of work you completed.
So, to sum up, the situation was this:
- New system introduced that measures how much work each person does based on tasks having an average time allocated to them
- Staff are expected to hit a daily and weekly target for efficiency which gets reported to their line manager
- The line manager uses these reports to decide if someone is hitting their ‘pay for performance’ targets, and thus whether they warrant a pay increase
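To make the mechanics concrete, here is a minimal sketch of how such an efficiency measure might be computed. The task codes and average times are the ones from the examples above; the function and its exact formula are my illustration, not anything from the real EG_Workmanager tool.

```python
# A sketch of the efficiency measure described above. Task codes and
# timings come from the article; the calculation itself is illustrative,
# not taken from the real EG_Workmanager tool.

# Average minutes credited per task type
TASK_MINUTES = {
    "ADDRESS_CHANGE": 10,  # simple change of personal details
    "IR2": 20,             # Information Request
    "AQ2": 20,             # Accounts Query
}

def efficiency(completed_tasks, hours_available=7):
    """Credited minutes as a percentage of the available working day."""
    credited = sum(TASK_MINUTES[code] for code in completed_tasks)
    return 100 * credited / (hours_available * 60)

# A day of 21 address changes and 10 information requests:
day = ["ADDRESS_CHANGE"] * 21 + ["IR2"] * 10
print(f"{efficiency(day):.0f}%")  # 21*10 + 10*20 = 410 of 420 minutes, ~98%
```

Note that nothing in this calculation cares how long the work actually took; only the credited averages count, which is precisely the gap the gaming exploited.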
So what happened?
Entirely predictably, we gamed the system; at least, those of us who wanted to progress did. Let me give you an example.
I used to manage large pension schemes, which meant speaking daily to financial advisers who would call asking for information about the schemes we administered for them. Each call was logged in the system as an IR2 (Information Request), which gave us 20 minutes to process the request. BUT we dealt with large pension schemes (at one point I looked after a well-known supermarket’s pension scheme), and when an adviser called asking for information on more than one member of a scheme, a subset of it, or (and this was the jackpot) the entire scheme membership, you could claim an IR2 for each member of that pension scheme.
In the case of the well-known supermarket chain, this meant that when I was asked to provide the current pension fund value for every member (something that, to be fair, would take me a few days), I could fairly claim 20,000 IR2 requests at 20 minutes a pop.
This equated to 6,666 hours’ worth of work logged in the system, which sent my efficiency into the stratosphere and made my pay for performance discussions beautiful.
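A quick sanity check of that arithmetic (the figures are the ones quoted above; the variable names are mine):

```python
# The jackpot arithmetic: one bulk request for a 20,000-member scheme,
# credited as one IR2 (20 minutes) per member.
members = 20_000        # members of the supermarket scheme
ir2_minutes = 20        # minutes credited per IR2

credited_minutes = members * ir2_minutes   # 400,000 minutes
credited_hours = credited_minutes // 60    # 6,666 hours logged
working_days = credited_hours / 7          # against a 7-hour working day

print(f"{credited_hours:,} hours credited, about {working_days:,.0f} seven-hour days")
```

In other words, a few days of actual work was credited as roughly 950 working days: nearly four years of output on paper.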
Now of course, this was common knowledge in the teams, and it worked great when the jackpot landed on you.
On the flipside, we had a task called an AQ2 (Accounts Query), which also allowed you 20 minutes. AQ2s were the horrible tasks no one wanted to take, because they could look like this…
‘The large well-known supermarket chain’s pension scheme has sent us £500,000 this month to apply across their 20,000 scheme members, but we appear to have £45,000 left over after applying all of the contributions. Can you find out why?’
At which point you’d be dragging out the large paper files to trace the last six months of paperwork, find out who had left the scheme, work out how much of their contributions needed to be reversed off the system, and try to make it all add up.
More than once I saw a member of our team spend over a month doing this for a larger scheme. They could still only claim 20 minutes for the task, so their efficiency looked as though they’d been doing nothing for a month, and as a result their pay for performance stayed where it was.
The outcome of all this focus on measuring tasks was that those of us who could game the system (like me) had great stats, while others were stuck in their progression.
So let me ask this question.
Was that a good target to measure?
You may feel it wasn’t; looking back now, I know it wasn’t. But for the company in question, the ability to see the efficiency of all of their staff in a report, and then reward them (fairly, they thought) for the amount of work they did, must have sounded like a godsend.
Instead it led to a horrible working environment, where people picked the work that gave them the best stats (i.e. gamed the system) and everyone else was left with the horrible work, or the horrible work wasn’t done at all, leading to more complaints.
Is this the outcome that was expected when this measure was introduced?
I’ve got stacks of examples like this of what can happen when you don’t consider the unintended consequences of the measures and targets you ask your team to focus on.
Remember what Uncle Ben told Peter Parker (aka Spider-Man): with great power comes great responsibility.
Think VERY carefully about the things you are measuring your team on, the targets you are setting and the potential consequences of doing so.