How to reward individual performance in Agile teams

Bhuwan Jain
Unboxing Product Management
8 min read · Jun 13, 2019

When nations go to war, it’s their soldiers who fight on the battleground and put their lives at stake for the nation’s safety. Although every soldier undergoes the same training, not everyone displays the same zeal at the forefront of battle. Some are fiercer, more strategic and more deadly when it comes to killing enemy soldiers, while others are merely ‘doing their job’. For this reason, whenever a soldier displays extraordinary valor in the field, the officials announce special recognition for them, presenting them with a medal and honoring their commitment towards the nation.

Does that mean that the battalion’s effort goes unnoticed? NO.

Does that mean that individual contribution matters more than the team effort? Of course not!

In one of its posts, Teamwork Works Best When Top Performers Are Rewarded, HBR talks about recognition spillover effects. It says-

“It is possible to recognize top performers and boost team performance at the same time. In fact, these recognition programs pack a kind of “one-two punch” because they increase the performance of individual team members (not only the one who has been recognized) as well as overall team performance. We call these beneficial results recognition spillover effects because recognizing a single team member seems to have a positive and contagious effect on all the other members in the team.”

This thought challenges the notion that rewarding an individual could lead to a hero mindset in the team. A hero mindset, if left unchecked, plagues the team spirit. How? When people know that they are all running for the same award, they may hoard work, withhold information from each other, put in an insane number of work hours, refuse to help other team members and prefer watching them fail.

But that’s not always the case. Just like soldiers, all team members aren’t created equal. Some work harder than others. They work far more fiercely and more effectively. What’s more, they don’t do it for appreciation or a reward. They do it because they feel a sense of purpose and responsibility towards their work.

That’s why I strongly feel that the person who contributes more deserves appreciation as well as a reward. The reward is not for doing more work, but for consistently driving results and pushing the team towards success.

However, there’s a catch.

Most of the time, recognition programs are devised by management. Although they have a fair understanding of a team’s output, they only see a high-level view of a team’s success or failure. So every time they need to decide on a worthy candidate, they first need to work out who the top performer of the month or quarter actually was. This brings us to one of the major drawbacks of giving rewards-

Management-driven awards dilute over time

When the management handles rewards and recognition, there can be cases, over time, where they end up alienating star performers.

A reason for this could be that the management feels responsible for keeping every team member engaged and motivated. So, they rotate the award to keep everyone happy.

For instance, if one team member received an award for impeccable performance, the management would think twice before giving it to the same person in the next cycle, irrespective of his/her great performance. This not only results in rewarding the undeserving but also demotivates the hard-working team member.

Illustration showing management driven awards alienating star performers and rewarding everyone in the team

To overcome this, every team needs to form some best practices for implementing recognition fairly and effectively.

Thus, we follow a bottom-up approach at Quovantis. Not because our management can’t appreciate people in the right way, but because we feel that the special individual contribution award (which we call Sprint Champion) should be given by the team to the deserving team member. So every quarter we look for the team member who has shown great mastery over his/her code, worked over and above expectations and shown incredible dedication in driving the outcomes of the sprints. At the end of the quarter, this true warrior is honored with a “Sprint Champion” certificate and a trophy as a reminder of his/her ownership towards work.

In this process of selecting the Sprint Champion, we involve the team members (and not the management). The process is transparent, democratic and fair. Every team member has the right to voice his/her opinion and can even self-nominate if (s)he feels the appreciation is deserved.

If you’re wondering how you can measure individual contribution, here are the three factors that we take into consideration- Quantity, Quality, and Timeliness.

These three factors give us a broad framework for gauging each team member’s contribution and engagement. Our scrum teams look for answers to these questions-

  • Along with frequent releases, is the team member focused on quality?
  • Is the team member performing at his/her best, or is there room to improve the quantity of work?
  • Does the team member keep his/her commitment to timelines?

However, we realized that answering in Yes/No to the above questions wouldn’t do justice to this method.

So, we devised a method to find out an individual team member’s performance. We named this approach the Focus Meter (for the love of God, please don’t ask how we ever landed on this name). Just like team velocity and burndown charts in JIRA reflect a team’s performance, the Focus Meter reflects a team member’s performance over a sprint cycle.

The Focus Meter approach

As we all know, the average of story points burned over the last 2–3 sprints gives the velocity of a scrum team. The two major things a velocity chart highlights about the scrum team are productivity and commitment.

It highlights the overall story points the team has burned but does not show an individual’s story points. It fails to highlight the individual developer’s contribution to the team’s success.

The Focus Meter is a method that can help gauge an individual team member’s commitment towards productivity, quality and timeliness, and help them get better.

Essentially, we need to know the answers to the following questions-

  • How many story points did the team member burn, individually?
  • Did they deliver the user stories on time?
  • What was the quality of the features released?
  • What are the areas to improve?

Step-by-step approach to calculating individual performance

For this approach, we’ll befriend JIRA to extract the information we need.

  1. After the sprint is over, we need two Excel sheets exported from JIRA. The data captured in these two sheets will be sufficient for us to calculate each individual’s performance in the sprint.
  2. The first sheet we need to download is the Sprint report, which is essentially a list of completed issues, both from this sprint and outside it. From this sheet, we’ll extract information like- list of issues, summary, assignee, story points, due dates and actual release date (the date when the user story was marked “In QA”; one can set a post function in the JIRA workflow to record this date automatically in the field, which reduces manual effort).
  3. The other sheet we need to download is the Bugs sheet. It’ll have all the bugs reported against the user stories tested during the sprint. One can write a query to filter out the bugs created during the sprint. From here, we’ll extract information like- list of bugs, summary, assignee, reporter, priority and the main user story linked with each bug.

Figure 2- Calculating the total percentage growth in performance.

4. For example, in Figure 2, the total percentage increase in performance is calculated on the basis of four categories.

  • Story points burned (reflecting Quantity, Q1),
  • Story points that matched due dates (reflecting Timeliness, T1),
  • Story points that matched the code freeze date (also reflecting Timeliness, T2),
  • And total bugs reported (reflecting Quality, Q2).

5. This sheet also records some additional information, like the total number of developers, the total velocity of all the developers, the story points expected from each developer (set on the basis of an average of past sprints) and the total number of bugs reported in the team. A rough sketch of how these numbers can be pulled out of the two sheets follows below.
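
If you prefer scripting this bookkeeping instead of maintaining the sheets by hand, here is a minimal sketch in Python using pandas. The file names and column headers (Assignee, Story Points, Due Date, Released On, Priority) are assumptions for illustration; the columns in your own JIRA export will almost certainly be named differently.

```python
import pandas as pd

# Hypothetical file names; use whatever you export from JIRA.
sprint = pd.read_excel("sprint_report.xlsx")  # completed issues for the sprint
bugs = pd.read_excel("sprint_bugs.xlsx")      # bugs reported during the sprint

# Make sure the date columns are real dates before comparing them.
for col in ("Due Date", "Released On"):
    sprint[col] = pd.to_datetime(sprint[col])

# Story points burned per developer.
burned = sprint.groupby("Assignee")["Story Points"].sum()

# Story points that met their due date (story moved to "In QA" on or before the due date).
on_time = sprint[sprint["Released On"] <= sprint["Due Date"]]
on_time_points = on_time.groupby("Assignee")["Story Points"].sum()

# Bug counts per developer, split by priority.
# Here we attribute each bug to its assignee; you may prefer the owner of the linked story.
minor_bugs = bugs[bugs["Priority"] == "Minor"].groupby("Assignee").size()
major_bugs = bugs[bugs["Priority"] != "Minor"].groupby("Assignee").size()

# One row per developer; a missing value simply means zero for that column.
summary = pd.DataFrame({
    "burned": burned,
    "on_time_points": on_time_points,
    "minor_bugs": minor_bugs,
    "major_bugs": major_bugs,
}).fillna(0)

print(summary)
```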

Final calculation

I use this simple formula to calculate individual performance.

Performance = [(35% of Q1 + 30% of T1 + 35% of T2) - Q2]

  • Story points burned (Weightage 35%) — We’ll calculate the percentage of story points burned by the developer from the expected story points. This data is present in the Sprint report.

For example, if the story points burned were 10 and the expected story points were 13, the total percentage of work done is 76.92%. Then, 35% of 76.92% is 26.92%.

  • Total story points that matched due dates (Weightage 30%) — Story points matching due dates can be calculated from the Sprint report. For example, if the total story points that matched due dates were 5 and the total story points burned were 10, the percentage of matched story points will be 50%. And, 30% of 50% is 15%.
  • Total story points matching the code freeze date (Weightage 35%) – For simplification, and as per our project’s demand, we have fixed our last development day. The build is frozen on that day so that we can test it on different QA environments within the same sprint. So, similar to due dates in the previous point, we can calculate the percentage of story points that matched the code freeze date.
  • Total bugs — We can get the count of bugs reported during the sprint from the Bugs sheet. In our calculation, we’ve kept it like this- for every minor bug, 5% is deducted, and for every major (and above) bug, 10% is deducted from the total percentage (percentage of story points burned + story points that matched due dates + story points that matched the code freeze date).

So the concept we’ve applied here is- as the bug count increases, the total percentage decreases, which keeps the quality of deliveries in check.

Adding up all these percentages reflects the total percentage improvement in the team member’s performance as compared to the previous sprint. We have also added one more field, “Extra Work done”, to appreciate the person who has worked more than the expected story points.
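
To make the final arithmetic concrete, here is a small sketch of the calculation in Python. The weights and bug penalties mirror the formula above; the code-freeze points and bug counts in the example are made up for illustration, while the 10-of-13 and 5-of-10 figures come from the examples in the previous section.

```python
def focus_meter(burned, expected, on_due_date, on_code_freeze, minor_bugs, major_bugs):
    """Performance = (35% of Q1 + 30% of T1 + 35% of T2) - Q2."""
    q1 = burned / expected * 100                           # % of expected story points burned
    t1 = on_due_date / burned * 100 if burned else 0.0     # % of burned points that met due dates
    t2 = on_code_freeze / burned * 100 if burned else 0.0  # % of burned points that met the code freeze
    q2 = minor_bugs * 5 + major_bugs * 10                  # 5% per minor bug, 10% per major-and-above bug
    return 0.35 * q1 + 0.30 * t1 + 0.35 * t2 - q2

score = focus_meter(burned=10, expected=13, on_due_date=5,
                    on_code_freeze=8, minor_bugs=1, major_bugs=0)
print(f"Performance: {score:.2f}%")  # 26.92 + 15.00 + 28.00 - 5 = 64.92
```

Applied to each developer’s row from the extraction sketch above, this yields one comparable number per developer per sprint.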

The Focus Meter method is working like a charm for us. We’ve arrived at this sweet spot after running several iterations of the method; you can devise your own version if you wish to make any changes. Our team is happy, the management is happy and, most importantly, we are almost always unanimous when choosing a sprint champion. This shows that everyone is aware of the effort each team member is putting in and doesn’t shy away from showing his/her appreciation for it.

I would love to know what other ways you use to motivate people on your team. Are they effective? As of now, our method of rewarding individual performance is developer-centric. I would also love to hear your ideas on how we can gauge a QA’s individual performance in a similar way. Drop your comments and tell us about it.

I wrote this blog for our Medium Publication- Unboxing Product Management. The publication is a weekly column by people of Quovantis to share their learning.
