The Performance Review That Didn’t Suck
No seriously. It happened. Want to hear about it?
I was wandering the streets of Prague with my good friend Joanne Perold, and we were swapping stories of cultures and practices gone by. We moved into the realm of dreaded performance reviews, and I revealed that I once was part of a performance review that not only did not suck, the two of us involved actually found the experience enjoyable. After telling the whole story, Jo just stopped, pointed at me and commanded: “Blog that”.
So this is me… doing that.
A few years ago, I was employed at a mid-sized software company and was fortunate enough to manage a team of 18 very talented software testers. As a team we functioned well, and I felt that the team members were generally reasonably happy when it made sense to be happy. (I learned early on never to aim for always happy. Instead I strived to ensure that everyone understood I was always willing and available to help individuals achieve personal happiness.)
There was, however, one event that created serious havoc in our team harmony: the structured and highly standardized performance review. Year after year, our company’s HR department did its very best to morph this inhuman experience into something that would be universally beneficial, fair, enjoyable, and most importantly absolutely quantifiable. Each iteration felt more awkward and artificial than the last. It was in complete contrast to the easy style and communication paths that we had all worked so hard to build in our team, and tying objectives and targets to financial motivators only made things worse.
Just One Straw…
The final straw came the year we set targets for the distribution of scores expected across the company and within each department. We now found ourselves being asked to modify history in order to justify fitting a curve.
I spoke with pretty much everyone on our team, and it became clear to me: nobody enjoyed this. Nobody benefitted. It was a necessary evil, a procedural step for allocating the funding provided for salary changes and bonuses.
At some point along the way, I found myself wondering what would happen if I did not actually perform the PRs. Would they actually withhold the salary increase? Would they fire me? I just did not believe that we were that sort of company. We were a people-first company (with the exception of these PRs). So one year… we just did not execute the performance reviews.
I had chatted with everyone on the team to discuss what we felt was fair with respect to salary increases. I revealed to each of them the total salary pool, and what that translated to if we were to divide it equally. We each discussed what we thought was fair, and in the end we had a salary increase distribution that was deemed fair by everyone, and not once was it tied to any targets or objectives. Then I just informed HR that salaries had been discussed, and that the PRs were not yet complete.
About a month later I was asked about the progress. “Oh, we have been busy. Nothing yet.” Same the next month. You probably get where this goes. When PR time came around the following year, I had become a bit of a concern (a target) in the preparations for the upcoming discussions. Please do not get me wrong: the support from my immediate leadership was legendary. My boss trusted me, and understood that our team was constantly communicating and expressing how we felt about personal growth and team priorities. He trusted us, and he had my back. But that poor guy was left having to explain why I was effectively ignoring a procedural nightmare that every other manager in the company was required to complete. This weighed on me. It weighed on our team. We decided to see if there was a way to make this work.
O Brave New World…
I live a charmed life full of imaginative people who are willing to help me explore different ways of navigating how we work and collaborate. One such fellow is my friend Edward (who is still an unknown entity on Twitter but is secretly one of the best software testers I have ever worked with). Together we decided to dedicate some months to exploring different ways we could run through our existing performance review, iterating often, making alterations based on a hypothesis, conducting retrospectives… learning.
The point of the first review was simple: go through it, submit it, identify what truly sucked and where we wanted to experiment, and, by demonstrating that we had in fact “started” executing PRs, hopefully take some heat off my boss, who was still standing strong, believing in us and supporting us. The laundry list of things we did not like was long. Even with the knowledge that we were exploring, learning and possibly improving our own situation, the nature of the review remained completely out of step with what our team held dear and what we felt mattered.
Missions over Objectives…
The first area of disconnect was the structure of the review, which was clearly intended to be entirely quantifiable. Even interpreted behaviours and interpersonal relationships were meant to be tied to milestones and assigned targets. This simply did not work for us. Our interactions were far too fluid, and as a team we were contributing to an evolving culture and product offering that took years to gel. As individuals, we were asked to fundamentally understand our company’s business goals, and to make responsible daily decisions about where to spend our time and who to work with, so that the R&D organization, and the larger corporate entity as we understood it, could succeed as a whole. Setting targets and goals prior to the upcoming period simply did not align with this approach, and it was not something that supported the ongoing culture shift towards motivated individuals working together, learning and improving. So instead we identified some key missions that we felt spanned projects and features. Here is what we came up with:
“Our team is actively investing time and effort into our portfolio of products and services. We work with every department within R&D and Product Management to:
Assist in Project Planning: Work with Product Management and Software Development to find congruence across our stakeholders and project teams. Goal: Uncover and eliminate shallow agreements as early as possible.
Test Ongoing Software Development: Question product behaviour, and apply critical thinking to every new testing challenge. Work with our teams to provide leads/developers continuous information through testing efforts throughout the project. Goal: Well informed team decision making… help our developers develop.
Contribute to Product Value Protection: Work with Engineering Support to design, build and support sustainable software checking frameworks for our supported products and services. Goal: A sustainable and trusted early warning system about product changes that may threaten the value of our product portfolio.
Continuously Research New Technologies: Technology is fun. As new problems are uncovered, brilliant innovation (across the industry) presents itself to address this need. The world is then changed, and new problems present themselves. Goal: Build everything with the notion that you will find a better way (soon) and prepare for that next opportunity. Learn, Contribute, Innovate.”
Armed with this, we started modifying our performance review to better resemble these missions and the attributes we felt informed each decision we made as team members and individuals. With each iteration, we reflected on the changes, on how they added to or took away from the experience, and tried to formulate a real purpose for the discussion as a whole.
Magic… the Gathering…
Somewhere along the path, we started to find our stride. Ideas for improvements came more naturally, and became more personalized to how Ed and I communicated. After about six months, we had an enjoyable 90-minute session and wrapped up to get back to work for the rest of the day. About 15 minutes later, Ed popped his head into my office and said, “THAT was enjoyable, and I look forward to the next one.”
“Yeah, me too” I said with an incredibly big smile and equally big sense of hope.
Here is a brief summary of that performance review:
On a whiteboard, we drew two axes.
Ed and I sat down, grabbed some sticky notes, and independently started writing down everything we could think of that Edward had worked on and everything that I had worked on since the last time we spoke.
We then went to the whiteboard and started placing the stickies one by one on the board, describing the event to one another, and discussed where it belonged with respect to:
- Was it time spent on something existing and understood, or was it something new and unexplored?
- Was it time spent learning/exploring, or was it time spent performing known skilled activities?
Once the last sticky note was placed on the board, we created a colour legend for our whiteboard markers. Each colour represented one of our missions, and together we started circling and grouping the sticky notes by the mission we felt each best fit. Many of the notes had multiple missions associated with them.
We now had a document ready to be filled in, with two sections:
Section 1 was essentially “Here is what Ed and Martin discussed in this Performance review”
- It was divided into four subsections… one for each mission
- Together we wrote up each of the groupings under the mission we had agreed fit best, and described aspects of our conversation and our general thoughts about how it went
Section 2 looked very similar, again with four sections matching our missions, but it was forward looking and titled “upcoming opportunities”
- Here, Edward and I had a great chat about what we thought the two of us might be able to work on over the next period, and described some of the key information about relevant timelines, events and resources that we might need in order to experience some success/joy.
- We also took some time to identify risks and concerns.
We reviewed what we wrote then and there, and signed it.
We smiled, and shook hands.
A day or two later, Edward and I spoke a bit about this experience, and tried to unravel what made this approach so enjoyable, so relevant and so appropriate for the conversation we wanted to have. Here are some of our findings (as best we remember them):
- The review was not a measure of Edward’s performance. Rather, what we discussed was our interactions and working relationship during the period in question. We are members of a team, and we depend on and trust one another. We share a common purpose. We strive for (at least) strongly related goals. We share in our success and we support one another when we struggle. The review of the period has to consider that teamwork, not an individual’s actions.
- We used a model that mapped very nicely to the model we use in everyday conversation about our work (it was not a new lens). Where we discovered gaps in alignment with corporate goals and strategy, we immediately attempted to better understand the nature of those gaps, both in our work and in the review itself.
- There was a familiar approach for looking back, and an equally familiar one for looking forward
- We gave ourselves as much time as was needed to complete all the conversations we wanted to have, and agreed on when we would have our next meeting.
- We filled in the document right there and then, and discussed the content throughout the time we spent together (no homework)
- We were both open to adapting the way in which the conversation took place, in order to accommodate each other. The point was having a comfortable conversation. Not filling in the document.
So, what next? Did we then institute a new approach to performance reviews based on this experience? No. This was not something that made sense for everyone. It worked because it was tailored to suit the conversations that Edward and I wanted to have. It (currently) made sense to us, based on our relationship and our working environment. I most certainly would have been willing to initiate the same sort of approach to identify a good mechanism for other team members, but the most obvious observation at this point was that there were simply too many people on my team to make this possible. I had too many direct reports. I had not seen this before… specifically because I had tools available to “complete my evaluation tasks” that did not require me to build appropriate relationships with my team members. The performance review model we were asked to use was exactly that: a tool that enabled managers to systematically defend evaluations of individual effort based on very simple quantifiable parameters in a tight and orderly way. But real relationships are not like that, and real teams cannot work this way.
Looking at the Man in the Mirror…
So when all was said and done, Edward and I had achieved what I thought might not be possible. We found a model that worked for us, one that we could both enjoy and that complemented our working relationship. My main takeaways are these: if I am ever asked to conduct anything like performance reviews again, I will look for an opportunity to involve my teammates in the design and evolution of the process. I will push for each member to have the autonomy to evolve the approach as our relationships evolve. And I will treat the ability to maintain such discussions as a good reminder not to stretch the size of the team so far that maintaining healthy individual relationships becomes impossible.
Until then, I am okay with just never performing anything called “Performance Review” ever again.
Originally published at developersbestfriend.com on June 3, 2015.