Utilizing Interquartile Range (IQR) for Identifying Outliers in 360 Competency Assessment

Karimatukhoirin
6 min read · Aug 6, 2023


Why do we need to identify outliers in 360 feedback results?

Most of the time, when we try to figure out how well someone is doing using 360 feedback, we rely on averages. But there is a catch: a single reviewer's very high or very low score can distort the overall picture for that employee. This concern arises because averages are vulnerable to outliers, so the resulting competency score may lack robustness.

Detecting data bias ensures fairness and equity for all individuals involved. Unaddressed bias could lead to unfair treatment or favoritism, compromising the integrity of the PMS (Performance Management System). By removing biased elements, the evaluations become more reliable as a basis for making important personnel decisions, such as promotions and Individual Development Plans (IDPs).

So, I recommend giving the Interquartile Range (IQR) a shot when handling outliers, to get a more accurate grasp of an employee's competency from their 360 feedback.

What is the Interquartile Range (IQR)?

Image Source: Wikipedia

The IQR of a set of values is calculated as the difference between the upper and lower quartiles, Q3 and Q1. It describes the spread of the central part of the data distribution, that is, the variability of the middle 50% of the dataset.

Q1 is the value below which 25% of the data falls. It is the median of the lower half of the dataset.

Q3 is the value below which 75% of the data falls. It is the median of the upper half of the dataset.

So, the IQR is the difference between the third quartile (Q3) and the first quartile (Q1): IQR = Q3 − Q1

Outliers are defined as observations that fall below Q1 − 1.5 × IQR or above Q3 + 1.5 × IQR.

We use the IQR to identify outliers, which are data points that significantly differ from the majority of the data. But carefully take note that outliers themselves do not necessarily indicate bias in the data.
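For a concrete feel, the fence rule can be sketched in a few lines of Python using the standard library. The data here is made up for illustration, and note that quantile conventions vary, so your spreadsheet or library may produce slightly different quartiles:

```python
import statistics

data = [1, 2, 2, 3, 3, 3, 3, 3, 4, 4, 9]  # hypothetical sample; 9 looks suspect

# 'exclusive' is a common textbook convention for quartiles
q1, q2, q3 = statistics.quantiles(data, n=4, method="exclusive")
iqr = q3 - q1

lower_fence = q1 - 1.5 * iqr
upper_fence = q3 + 1.5 * iqr

# flag anything outside the 1.5 * IQR fences
outliers = [x for x in data if x < lower_fence or x > upper_fence]
print(q1, q3, iqr, outliers)
```

For this sample, Q1 = 2, Q3 = 4, IQR = 2, and the lone score of 9 lands above the upper fence of 7, so it is flagged.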

How do we determine the IQR and find outliers in 360 feedback results?

Let’s use the example of Employee A, who received 13 scores for the [teamwork] competency assessment (on a 1–4 scale) from their leader, team members, and peers/users, as shown in the accompanying table. From this data, we can derive the following statistical measures:

Q1 = 3: This value indicates that 25% of the scores received by Employee A fall at or below 3, and 75% fall at or above 3.

Median or Q2 = 3: The median is the middle value in the dataset when it is arranged in ascending order. In this case, half of the scores received by Employee A fall at or below 3, and half at or above 3.

Q3 = 3.5: This value shows that 75% of the scores received by Employee A fall at or below 3.5, and 25% fall at or above 3.5.

IQR = Q3 − Q1 = 3.5 − 3 = 0.5: The IQR represents the spread of the middle 50% of the scores, and in this case it is equal to 0.5.

Lower 1.5 × IQR whisker = Q1 − 1.5 × IQR = 3 − 1.5 × 0.5 = 2.25: This value represents the lower boundary beyond which scores are considered outliers. Any score below 2.25 is considered an outlier in this context.

Upper 1.5 × IQR whisker = Q3 + 1.5 × IQR = 3.5 + 1.5 × 0.5 = 4.25: This value represents the upper boundary beyond which scores are considered outliers. Any score above 4.25 would be an outlier, but because the rating scale tops out at 4, no score can exceed 4.25; the upper whisker is therefore drawn at 4 and there are no upper outliers here.

So, a score of 2 is considered an outlier because it falls below the lower 1.5 × IQR whisker value of 2.25.
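The worked example can be reproduced in code. The 13 scores below are hypothetical, since the article's actual scores appear in a table image; they are constructed to match the reported quartiles (Q1 = 3, median = 3, Q3 = 3.5), and the quartiles are computed as medians of the lower and upper halves, matching the definition given earlier:

```python
import statistics

# Hypothetical 13 scores for Employee A's [teamwork] competency,
# constructed to reproduce the reported quartiles.
scores = [2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4]

def quartiles_by_halves(data):
    """Q1/Q3 as medians of the lower/upper halves of the sorted data."""
    s = sorted(data)
    half = len(s) // 2           # for odd n, the middle value joins neither half
    q1 = statistics.median(s[:half])
    q3 = statistics.median(s[-half:])
    return q1, statistics.median(s), q3

q1, q2, q3 = quartiles_by_halves(scores)
iqr = q3 - q1                    # 0.5
lower = q1 - 1.5 * iqr           # 2.25
upper = q3 + 1.5 * iqr           # 4.25

outliers = [x for x in scores if x < lower or x > upper]
print(outliers)                  # → [2]
```

As in the worked example, the single score of 2 falls below the lower whisker of 2.25 and is flagged.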

Data processing to identify outliers: the detailed process and adjustments I made

To identify outlier reviewers (those exhibiting extreme scoring/assessing behavior), I count each reviewer's total number of outlier scores. A reviewer is classified as an outlier if this total exceeds 5 or reaches 2/3 of the total number of competencies analyzed. This threshold may differ for you, depending on your organization's standards.

Based on the counts of ‘upper outliers’ and ‘lower outliers,’ I classify the reviewer’s behavior as ‘overestimate’ when ‘upper outliers’ exceed ‘lower outliers.’ ‘Overestimate’ indicates that the reviewer tends to assess the reviewee more favorably than the norm. Conversely, if ‘upper outliers’ are fewer than ‘lower outliers,’ we label it ‘underestimate,’ signifying that the reviewer tends to rate the reviewee more critically than the norm during assessments.
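The classification rule above can be expressed as a small function. This is a sketch using the thresholds from this article (total > 5, or at least 2/3 of the competencies analyzed); the function name and signature are hypothetical, and the thresholds should be adjusted to your organization's standard:

```python
def classify_reviewer(upper_outliers, lower_outliers, n_competencies):
    """Classify a reviewer by their counts of upper/lower outlier scores.

    A reviewer is flagged when the total outlier count exceeds 5 or
    reaches 2/3 of the competencies analyzed (thresholds per this article).
    """
    total = upper_outliers + lower_outliers
    if total <= 5 and total < (2 / 3) * n_competencies:
        return "normal"
    if upper_outliers > lower_outliers:
        return "overestimate"   # rates more favorably than the norm
    if lower_outliers > upper_outliers:
        return "underestimate"  # rates more critically than the norm
    return "mixed"              # equal counts in both directions

print(classify_reviewer(6, 0, 10))   # flagged: overestimate
print(classify_reviewer(1, 1, 10))   # below both thresholds: normal
```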

Data interpretation of outliers

As noted, outliers are values that lie unusually far above or below the central tendency of the data, and these extreme values can distort the overall pattern and distribution. In the context of 360 feedback, where an individual receives feedback from multiple sources (supervisors, peers, subordinates, and self-assessment), an outlier can indicate a few potential cases:

1. Personal bias. Sometimes a reviewer gives an outlier score due to personal bias or favoritism, which can lead to an unfair assessment of the individual’s performance. When a specific reviewer consistently exhibits the same pattern of outliers, it may indeed suggest issues with their understanding of the scoring system, or that their scoring behavior is influenced by personal biases or reference points.

2. Genuine exceptional performance, or specific dissatisfaction from working with that reviewer. This could point us to useful information, for example that a specific project involving the reviewer and reviewee carries a potential conflict.

Treatment Recommendations for Outliers

When your subordinates have been assessed by a reviewer who exhibits outlier behavior (providing scores significantly deviating from the norm), it is essential to handle the situation with sensitivity and objectivity. Here are some suggestions for you to address this issue:

1. Review Feedback Data and Identify Potential Bias

Carefully review the feedback data given by the outlier reviewer. Understand the specific comments given by the reviewer to gain insights into their scoring patterns and potential biases. Look for any signs of bias in the feedback provided by the outlier reviewer. Bias could be based on personal relationships, perceptions, or preconceived notions about the employee being reviewed.
But in cases where a particular reviewer consistently exhibits a recurring pattern of outliers, it is essential to address this proactively. We should prioritize educating the reviewer on the scoring system to ensure a more objective competency assessment in future PMS periods.

2. Engage in Discussion

If appropriate and feasible, have a candid and constructive conversation with the outlier reviewer. Seek to understand their rationale behind the outlier scores and address any concerns they may have about the employee’s performance.

3. Use Data Wisely

While outliers should not be ignored, make decisions based on a balanced assessment, considering input from multiple reviewers and other performance evaluation data. As the leader, you can decide whether to proceed with the existing results or to exclude the outlier reviewer’s data.
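To make that decision concrete, it helps to quantify how much the flagged score actually moves the result. A hypothetical sketch, reusing the example scores and fences from the worked example above (which are themselves assumed data):

```python
from statistics import mean

scores = [2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4]  # assumed example data
lower_fence, upper_fence = 2.25, 4.25              # from the worked example

# keep only scores inside the 1.5 * IQR fences
kept = [x for x in scores if lower_fence <= x <= upper_fence]

with_outlier = round(mean(scores), 2)    # competency score, outlier included
without_outlier = round(mean(kept), 2)   # competency score, outlier removed
print(with_outlier, without_outlier)     # → 3.15 3.25
```

Here the single outlier drags the average down by a tenth of a point, which may or may not matter for a promotion decision; seeing both numbers side by side makes the trade-off explicit.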
