All we need is a system, you see. Carefully calibrated collections of incentives that nudge people in the right direction. The workplace will be far better for it. Everyone will start showing up on time because the system will reward that. They’ll do more work because the system will engender it. This additional work will start the virtuous cycle that makes the company more money which will make our workers more money. A system, man.
Even the worst of our employees will get behind this. That’s the power of the system. It’s like that famous economist said:
The way to solve things is by making it politically profitable for the wrong people to do the right thing. — Milton Friedman
So here’s how we do it: define “great performance” for all employees. Get a formal list of behavioral expectations. And results expectations. Then break these ideas of “great performance” into categories. No one can be perfect at everything but they’ll be good at something. So categories will call out the various strengths and weaknesses. Something like this:
- Conceptual and Critical Thinking
- Achieving Business Results
- Organizational Know-How
- Technical Knowledge
- Personal Effectiveness
The rest is simple. Easy-peasy-macaroni-cheesy! We’ll measure every employee accordingly. Most will meet expectations, some won’t, some will exceed expectations. Rewards will be commensurate. Punishments, too. It’s a lot of work so we’ll only do it once a year.
The results will be phenomenal. It’s clear. It’s fair. It’s apolitical. People are going to love this.
No One Likes These
In all of management practice, I don’t know of anything more maligned and unsuccessful than the performance review. Who among us has enjoyed a really great performance review that was also purely derived from a systematic approach?
Manager: I give you a 3.4 out of 5 on Personal Competency.
Employee: Thank you so much. That is exactly what I wanted to hear.
This hypothetical verbal exchange has never occurred at any time in the history of human work.
All the same, I’ve served largish organizations for a while now and have done my very best to deliver performance reviews in a good way. This is true for most managers I know. We’ve all tried to do this right.
Yet we always end up dissatisfied. We lose sight of why we do it in the first place. Assuming we ever had a clear answer to begin with.
So why do we do these reviews? Why do we adopt these performance management systems?
Is it a surrogate for management? A system that keeps people in line so that we managers don’t have to? Not necessarily, but one thing is clear: these systems can be effective.
The Rankings Will Continue Until Morale Improves
Much has been written about the painful, horrid stress that came with the stack ranking system at Microsoft. The very best article on this comes from Kurt Eichenwald. It’s an instant classic. Here’s the link. The article is old now, written in 2012, but as rich as ever. The author gives us a view into the steady decline of a maturing company. It’s a frame-by-frame narrative of a slow train wreck that provides insight into the tech industry, its competitive dynamics, and antitrust battles. All the same, many people refer to it as “that article about stack ranking”.
For anyone who may be unfamiliar with what stack ranking involves, this quote from the article will help:
“If you were on a team of 10 people, you walked in the first day knowing that, no matter how good everyone was, two people were going to get a great review, seven were going to get mediocre reviews, and one was going to get a terrible review,” said a former software developer. “It leads to employees focusing on competing with each other rather than competing with other companies.”
Horrible, right? And cruelly effective. Which is why Microsoft practiced it. And GE. And Amazon.
And like with so many things, a practice of this sort eventually gets a derisive nickname that really captures how people feel about it. For stack ranking, that nickname is “rank and yank”.
Microsoft reportedly began the practice in 1996. It ended in 2013. But again, Microsoft isn’t the first or only company to practice this. Countless organizations use it today.
And who knows? Maybe it works better in other permutations. There’s no need to categorically reject the idea. We can’t. There is an inescapable logic to it that reminds me of a Charlie Munger quote:
The ironclad rule of the world is that there is always a bottom 50%.
So let’s apply some stack ranking to performance management systems. A taste of its own medicine! Microsoft’s system appears to have been in that bottom 50 percent. Whose system is in the top 50 percent?
Who Does This Right?
I’ve read a lot of books on how companies do things and Google always impresses me. They’re not perfect but there are few companies that have such a great history of vocal leaders who actually share what they do, why, and how others can do it, too.
After all, without Eichenwald’s investigative journalism, we might never have known about the abuses of Microsoft’s system. But Google? They’ll write a book about it themselves. And tell you everything they do (within reason). This is what Laszlo Bock did with his 2015 book Work Rules!
The seventh chapter in the book is all about how they strive to make performance reviews the best they can be. The chapter has a great title worth citing here: “Why Everyone Hates Performance Management, and What We Decided To Do About It.”
What, indeed, did they decide to do?
Quite a bit, actually. I won’t spoil the chapter for those intrepid souls who can, and should, read it (i.e., everyone). I’ll try my best to just give a brief preview and offer some basic insights.
To start, I think what matters most is that Google has perpetually questioned their efforts. Why conduct performance reviews in the first place? The answer is very good and very simple. As Bock writes:
We need people to know how they’re doing.
Makes sense! And this honest, righteous idea is tempered with a bit of ancient wisdom by way of the Latin phrase Primum non nocere.
First do no harm.
This is so important I’ll write it again. First do no harm.
If I had to characterize Google’s entire view on management in five words, I’d quote that line and save us 20% on the word count.
So how do they do that? Is it the real-time, regularly occurring feedback that’s emphasized in places like Adobe, with the “check-in” process? I think so. I mean, I think every good manager does that in every company, whether it’s formally structured or not.
In fact, if you click the link above for the Adobe “check-in” process, you’ll see their effort is practically just a rebranding of the one-on-one structure that I first detailed with Andy Grove’s book High Output Management. And Horowitz’s The Hard Thing About Hard Things. And in the article Meetings Are A Manager’s Medium.
Maybe I’m missing something but, again, that idea of the “check-in” is fine. It is also what we should consider to be the base level of standard management practice.
An Evolving System
Bock argues persuasively for a richer system with a bit more retrospection. I think he’s right. To borrow from his own words, he mentions the trap that we fall into when we rely solely on this in-the-moment “check-in” feedback:
Most real-time feedback systems quickly turn into “attaboy” systems, as people only like telling each other nice things.
And among the more formal systems, many of which I’ve experienced or studied, I continue to be impressed with the OKRs approach. It starts with goals. Everyone establishes goals. From there, a broad retrospective assessment is developed. Did you meet your goals/objectives? Did you deliver the results?
There is some lovely nuance in that question: Google considers a 100% achievement rate to be almost as bad as a 0% achievement rate. If you achieve 100% of your objectives, it shows that the goal didn’t stretch you enough. The better idea is to aim high with a Collins-esque BHAG and hope to get, say, 70% of what you reached for.
Whatever is accomplished, those results are weighed with the rest of a person’s performance and their general behavior as a member of the organization. These factors lead to a final assessment that is then carefully tempered with a peer review process.
Bock refers to this peer review process as calibration. It is a very good idea. As Bock explains:
Calibration adds a step [to the process]. But it is critical to ensure fairness. A manager’s assessments are compared to those of managers leading similar teams, and they review their employees collectively.
How does it ensure fairness? By combating the pervasive biases that prevent us from being as objective and balanced as we should be. Every manager, myself included, suffers from recency bias (only remembering last month’s work, not the entire year’s worth), extreme judgments (everyone is either awesome or awful), and that dreadful slide into central tendency where everyone gets a “3.1 out of 5”.
Bock persuades me that the only way we can avoid these traps is by collaborating with fellow managers. So as reviews are developed, a formal step whereby you talk with other managers who are familiar with the employee is a great way to improve your thinking.
Along with calibrating the assessors, there are calibrations made to the assessment itself. Google has evolved their system many times over. The company reviews their review process. They give it a data-driven performance assessment to see how the data-driven performance system is performing. How recursive!
How do they do it? They examine actual scores, patterns over time, and they poll their staff to see how they feel about the experience. They make modifications, experiment with new styles, simplify and clarify as they go.
It makes so much sense when you read the chapter. And I hate the way it makes me sound like some cheerleader for the company. All the same, what Bock describes is the precise ideal for how these systems should be built.
People evolve, organizations change, and the systems must change with them. To toss them out entirely (e.g., Adobe) makes little sense to me. To hold onto them when people protest (e.g., Microsoft) doesn’t work, either.
There is a third way. Bock provides it in this book. And for once, I like what someone has to say about how we should do this specific element of our work. Grove and Horowitz gave us insight on how to conduct a performance review. Bock shows us how to improve it.
To conclude, there are many flaws in every performance management system. The worst thing we can do is think we can’t or shouldn’t improve them. Abandoning the effort to improve these systems is akin to abandoning the effort to improve our entire practice. If we managers don’t get serious about making it better, should anyone treat us seriously?