Collecting Performance Ratings? Make Sure You Have a Multi-Rater System

Alexandra KM
Horizon Performance
2 min read · Oct 2, 2019

Performance ratings are a staple of performance assessment and an easy way to collect data. However, should we take these ratings at face value? Can we be certain they are accurate? The answer is “no.”

When interpreting the data we collect, we should be aware that unintended factors can influence our ratings. Researchers Wherry and Bartlett* proposed that a performance rating reflects three components: the ratee’s actual performance, any biases held by the rater, and random measurement error. Scullen, Mount, and Goff* later tested this framework and found supporting evidence: performance ratings were driven more by rater biases than by actual performance. Crazy, right?

So how can we combat this? Rater training and surveys with behavioral anchors that define proficiency levels will help, but some bias will likely remain. Where organizations can benefit significantly is a multi-rater system, in which several raters evaluate the same individual, each from their own vantage point. Aggregating ratings from multiple raters helps cancel out individual rater biases and random error, yielding a better estimate of the individual’s true performance.
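To see why averaging helps, here is a minimal simulation sketch of the three-component model described above (rating = true performance + rater bias + random error). All numbers here are made up for illustration: a hypothetical “true” score of 4.0 on a 1–5 scale, one strongly lenient rater, and a five-rater panel whose biases partially cancel.

```python
import random

random.seed(42)

def simulate_rating(true_score, rater_bias, noise_sd=0.5):
    # Observed rating = true performance + rater bias + random error
    return true_score + rater_bias + random.gauss(0, noise_sd)

true_score = 4.0  # hypothetical "true" performance on a 1-5 scale

# A single rater with a strong leniency bias of +0.8
single = simulate_rating(true_score, rater_bias=0.8)

# Five raters whose individual biases largely cancel out in aggregate
biases = [0.8, -0.3, 0.1, -0.5, 0.2]
multi = sum(simulate_rating(true_score, b) for b in biases) / len(biases)

print(f"single rater: {single:.2f}")
print(f"multi-rater:  {multi:.2f}  (true score: {true_score})")
```

On average, the aggregated rating sits much closer to the true score: the panel’s mean bias is only +0.06 versus +0.8 for the single rater, and averaging also shrinks the random-error component.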

We realize that having additional raters to evaluate individuals may not always be possible and that it takes additional resources to make this happen. It’s important to realize, however, that without a multi-rater system, your data could be flawed.

References:

Scullen, S. E., Mount, M. K., & Goff, M. (2000). Understanding the latent structure of job performance ratings. Journal of Applied Psychology, 85(6), 956–970.

Wherry, R. J., & Bartlett, C. J. (1982). The control of bias in ratings: A theory of rating. Personnel Psychology, 35(3), 521–551.
