“Not everything that can be counted counts, and not everything that counts can be counted” — William Bruce Cameron, sociologist
When we present our Role Maker app and the capability matrix approach that underpins it, one question we are frequently asked is “what do you measure?” People professionals and managers want to know how our system, which focuses on professional development and career mapping, fits within their existing approach to performance management.
The short answer is “we don’t measure anything”. Unsurprisingly that isn’t a very satisfactory response.
So here’s a slightly longer answer…
Don’t Measure for the Sake of Measuring Something
There’s an aphorism in management circles: “if you can’t measure it, you can’t manage it”. This saying is attributed to Peter Drucker, the “founder of modern management”. It’s a nugget of wisdom that seems like such obvious common sense that it drives most of what leaders do. In particular, it underpins (implicitly or explicitly) a lot of mainstream people management, ever since Jack Welch introduced his “vitality curve” to GE in the 1980s.
The first problem with this piece of “common sense” is that, as Paul Zak writing for the Drucker Institute points out, Drucker never said it. Instead, Drucker himself was adamant that measurement, while vital to good management, was not the whole story. Instead, he was careful to emphasise that technical management has to be combined with leadership, including “the relationship with people, the development of mutual confidence, the identification of people, the creation of a community.”
In other words, don’t expect that everything worth doing, even things vital to your success, can be measured, or assume that if something can’t be measured it isn’t worth doing. Of course it is fair to be wary of black boxes and secret sauces, because their value often can’t be quantified. In the people space there is also a legitimate fear that people systems lack transparency and fairness. Perhaps paradoxically, however, the systems managers use now are worse at providing certainty or consistency, and can easily backfire.
What Harm Can It Do?
Quite a lot actually. We’ve argued previously that rating people triggers a fight-or-flight response that lowers morale and commitment, even among those who are rated well. So at a time when people and culture leaders are pushing for greater focus on organisational commitment and engagement, measuring performance becomes counter-productive.
There’s a great example in the near death and resurrection of Microsoft, a company that almost perished under the weight of old-fashioned stacked ranking, but which rediscovered its creative mojo when it ditched evaluations. The Microsoft problem, actually common to many companies, is that individual performance evaluation destroys collaboration, and creates perverse incentives for people to be risk averse or undermine colleagues.
Rating systems have also led to lawsuits alleging that they contribute to unfair and biased outcomes. Issues arise because managers lack confidence in using an evaluation system that provides little or no concrete direction. Often managers will give everyone vague, middling evaluations to avoid having to defend high or low ratings, making it difficult to later fire or promote people. Or else, because the basis for ratings is itself vague, unconscious bias creeps into what purports to be an objective system.
If you are measuring things simply because you think you have to be measuring something … stop now! Step back, take a fresh look at what you are trying to achieve, and re-think how you can achieve it. The good news is that, by doing this, you will transform yourself into an innovator and a disruptor.
More Good News
You don’t have to stop measuring things. You just have to think about the connection between what you measure and what you actually need to know, how the measuring itself affects behaviour, and the alternatives to traditional types of metric.
Often scientists cannot directly observe or measure the things they wish to study. Planets orbiting distant suns are impossible to see even with the largest telescopes. The famous Higgs boson is a sub-atomic particle we can’t see even with the most powerful electron microscope. So instead scientists develop theories about what should happen if a planet is present or the boson really exists, then set out to detect those effects instead. A well-designed experiment based on a robust theory ensures that detecting the effect means the object exists, and so we can know it’s really there.
Trying to manage performance is much the same, except we also have opportunities to test whether the things we think may affect outcomes really do. While it may seem unethical to “experiment” on workers, where we have good reason to believe a change will be positive we can try it, and put in place measures (such as rapid employee surveying and statistical analysis of retention trends) to gauge its impact. We can also measure the indirect effects of changes to the way workers are managed or incentivised without having to measure individual performance. Since the real goals of talent management are lower costs, better ROI and higher productivity, we should measure these. They are all good proxies for performance, since we have good reason to think they are linked, and by measuring them we avoid the downsides of directly measuring individual behaviours. These metrics can be supported by employee surveys that qualitatively gauge attitudes such as engagement, perceptions of the organisation, and intentions to stay or leave.
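As a minimal sketch of what “statistical analysis of retention trends” might look like in practice, the following Python compares twelve-month retention rates before and after a management change using a simple two-proportion z-test. All of the headcount figures, and the choice of test, are illustrative assumptions for this example, not part of the Role Maker product.

```python
import math

def retention_rate(stayed, headcount):
    """Fraction of employees retained over the period."""
    return stayed / headcount

def two_proportion_z(p1, n1, p2, n2):
    """z-statistic for the difference between two retention rates,
    using the pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical figures: retention in the year before and the year
# after a change to how performance is managed (illustrative only).
n_before, stayed_before = 200, 160   # 80% retained
n_after, stayed_after = 210, 183     # ~87% retained

p_before = retention_rate(stayed_before, n_before)
p_after = retention_rate(stayed_after, n_after)
z = two_proportion_z(p_before, n_before, p_after, n_after)

# One-sided p-value: chance of seeing this improvement if nothing changed.
p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
print(f"retention: {p_before:.0%} -> {p_after:.0%}, z = {z:.2f}, p = {p_value:.3f}")
```

Even a rough test like this keeps the focus on the outcome the organisation actually cares about, rather than on scoring individuals.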
What About Accountability?
One reason that people want to measure individual performance is to ensure individuals are accountable for doing their work. This is often also linked to the desire to encourage greater commitment and harder work by paying bonuses or performance pay.
At CapabilityBuilder we believe in individual accountability. We just think people should be accountable for doing the things that matter, and for things over which they actually have some control. Our approach is all about setting goals that stretch people’s skills, knowledge or responsibilities, in order to emphasise collaboration, continuous professional development, and quality rather than quantity.
Measuring individual output is liable to undermine rather than enhance productivity. W. Edwards Deming, the American statistician and godfather of the quality movement, argued that 85–97% of the barriers to employee productivity arise from the system in which people work, not from some fault of the worker. One of the big problems with performance evaluation is that it explicitly shifts responsibility for outputs to individuals who largely cannot resolve their problems alone, and who lack the authority to force others to act. It is no wonder people regard evaluation as threatening.
Offering rewards such as bonuses and performance pay is no better in this regard than threatening sanctions for “underperformers”. In fact, if individual workers have little control over outcomes, there is something arbitrary about whether rewards will be received. This can backfire in a lot of different ways. Individuals may regard the distribution of bonuses as unfair, undermining their commitment. Managers may become lax in enforcing what they also know to be arbitrary rules so that people become used to receiving bonuses. This then becomes part of their expected remuneration. At best, bonuses then lose their ability to motivate (if they ever really had any). At worst, they can destroy employee commitment overnight if for some reason bonuses are not paid. Even if the employer has what seem to them good reasons for withholding bonuses, if employees have come to regard these as a part of their basic remuneration package, and feel they could not control the conditions that led to pay being withheld, they will inevitably regard the drop in pay as unfair.
Bonuses are also an unnecessary cost in many cases. There may be exceptions, such as executive pay or commissions, where pay should be linked to the financial performance of the company, but in general employers will have to offer the market rate for the job to attract people. The extra pay can be problematic (as we’ve argued above), or a good example of diminishing marginal utility. This is just a fancy way of saying that once people earn enough to live well relative to their peers, and as long as they feel they are getting what they are worth relative to the market, any extra money they get is less and less important to them.
Employers will generally be better off paying the going rate for the job and letting people get on with trying to do a good job.
Learning and Improving
So instead of measuring the usual things, like outputs or raw numbers of goods sold, clients recruited, and so on, focus on measuring things that contribute to success. These will vary depending on your business. It is possible to set useful metrics for individuals, if these really measure quality and drive the delivery of cost-effective, positive customer experiences. A great example is a call centre that switches from measuring how many calls individuals take to measuring the time they are available to take calls. Allowing workers to focus on individual callers, and thus to give each one a useful and even pleasant experience, improves both morale and customer experience.
As a talent management business we focus on several things that Deming argued long ago will drive business success. These include continuous learning and pride in doing a worthwhile job well. We achieve this by creating clear and constantly-evolving role profiles that allow individuals to set development and career goals, while linking these to the needs of their employer. By measuring success in learning outcomes, holding people to account for meeting development objectives, and rewarding them for going beyond the confines of a traditional position description, employers can measure what matters and drive success by investing in people.
So, yes, we do measure things, we just don’t measure the things everyone else does. We measure what matters.
Originally published at capabilitybuilder.com on September 1, 2017.