On-the-Job Metrics for Predictive People Analytics

Ansaro · Published in Ansaro Blog · Dec 1, 2017
Source: https://www.flickr.com/photos/a2050/16100409243/

Visit us at https://www.ansaro.ai to learn more about how we use data science to improve workforce decisions

Within the nascent field of people analytics, machine learning has garnered lots of attention, much of it focused on predicting employee behaviors and events. The focus on using neural networks and other new, exciting machine learning approaches can obscure a simple question: just what, exactly, are we trying to predict when it comes to employees? All algorithms, whether linear regression, random forest, or neural network, are ways of explaining variance in historical observations ("training data"). So it's critical that our training data reflect employee behaviors and events that are meaningful and accurate representations of reality.

Put simply, this post is about the “garbage in, garbage out” problem. To use data science terminology, it’s about choosing appropriate target labels (dependent variables) for supervised learning models. It’s not about selecting or engineering the right features (independent variables) used in trying to understand the drivers of these outcomes (we’ll cover that in a future blog post).

In a perfect world, the data collected on employee performance is objectively measured, normally distributed, and stored in a consistent, unified system. But employee data is messy: biases and opinions seep into decisions and recorded data, managers use forced distributions for performance ratings, and the data required to get a holistic view of an employee are stored in separate systems. How can we understand the people that keep a company afloat?

Metrics Overview

There's no single set of on-the-job metrics that describes employees at every company equally well. Below, we describe the pros and cons of some metrics that should be common to many companies. Briefly, these are:

  • Retention
  • Promotions
  • Performance ratings
  • Direct productivity measurements (e.g., quota achievement)

When the goal is to use these metrics to understand the health of a company from a people analytics perspective, we need to think in terms of data requirements for analysis. We recommend three dimensions to consider when choosing metrics for any kind of people analytics:

  1. Data availability
  2. Business impact of the metric
  3. Measurement issues

Retention

Most companies have at least moderately reliable data on employee start and termination dates in their HRIS, so retention is often the most available metric. In terms of business impact, retaining employees should be one of the top concerns for a company: the fully-loaded cost to replace a hire, inclusive of lost revenue and diminished employee engagement, is normally 100–200% of that employee’s annual salary. [1]

It should be straightforward to calculate retention rate, and to apply filters to understand retention for particular cohorts of hires (slicing by month/quarter or by business unit/functional role). Note that there's some nuance to retention: terminations should be classified as regrettable (a good employee left) or not. This distinction is often not recorded, and when it is, it often rests on subjective judgments (many managers don't want to admit just how regrettable some departures under their supervision were). Even when the data exists, be wary of taking it at face value. We'll talk more about metric objectivity later.
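As a sketch of the calculation just described, here's a minimal retention-rate function in plain Python. The employee records, dates, and business-unit names are hypothetical; a real version would pull start and termination dates from the HRIS.

```python
from datetime import date

# Hypothetical records: (hire_date, termination_date or None, business_unit)
employees = [
    (date(2016, 1, 15), None, "sales"),
    (date(2016, 2, 1), date(2016, 11, 30), "sales"),
    (date(2016, 3, 10), None, "support"),
    (date(2016, 1, 20), date(2017, 2, 1), "support"),
]

def retention_rate(records, as_of, horizon_days=365, unit=None):
    """Share of hires still employed `horizon_days` after their start date."""
    cohort = [
        (hired, left) for hired, left, bu in records
        if (unit is None or bu == unit)
        and (as_of - hired).days >= horizon_days  # only fully-observed hires
    ]
    if not cohort:
        return None
    retained = sum(
        1 for hired, left in cohort
        if left is None or (left - hired).days >= horizon_days
    )
    return retained / len(cohort)

print(retention_rate(employees, as_of=date(2017, 6, 1)))            # → 0.75
print(retention_rate(employees, as_of=date(2017, 6, 1), unit="support"))
```

Note the guard against right-censoring: hires too recent to have completed the full horizon are excluded from the cohort rather than counted as retained.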

We can correlate a historical view of retention with business metrics like revenue or P&L, but this is looking backwards. What if we want to understand what will happen in the future? This is where predictive algorithms, trained on past data, but generalizable to future hires, come into play. Given other data about employees (linked via names or other identifiers), we can use machine learning classifiers and statistical approaches like survival analysis to predict the probability of someone leaving in a certain amount of time, as well as predict retention for a not-yet-hired job applicant. Of course, this ability depends on the quality and characteristics of the data available for training predictive models (the “features”) — all the more reason to increase data quality and collection efforts in the HRIS.
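One of the statistical approaches mentioned above, survival analysis, can be sketched with a hand-rolled Kaplan-Meier estimator. The tenures below are made up, and in practice a dedicated library (e.g., lifelines) would handle tied event times and confidence intervals properly; this is only meant to show the mechanics.

```python
# Hypothetical tenures in days; True = the employee left (event observed),
# False = still employed as of the data pull (right-censored).
tenures = [(120, True), (300, True), (365, False), (400, True), (500, False)]

def kaplan_meier(data):
    """Return (tenure, estimated survival probability) at each departure."""
    data = sorted(data)
    at_risk = len(data)
    surv = 1.0
    curve = []
    for t, left in data:
        if left:  # a departure event at tenure t
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1  # censored employees also drop out of the risk set
    return curve

for t, s in kaplan_meier(tenures):
    print(t, round(s, 3))
```

The key idea is that censored employees still contribute information (they survived up to their censoring time) without being counted as departures, which a naive retention rate cannot do.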

Promotions

Promotions occur when an employee moves up the career ladder. Maybe they excel at their job, are taking on more responsibility, are being groomed as a successor, or all of the above. Promotions normally signal a productive employee, although we should be careful to flag promotions that may not — for example, a promotion necessitated by the departure of an employee in a critical role, or a reclassification of position seniority that merely reflects a new nomenclature.

Whether it's easy to track promotions via the HRIS is an open question. There may be a job title hierarchy to reference, the promotion may come with a change in physical location or in organizational position within the business unit, and there may be appraisal details required for the promotion. If we want to use promotions to understand quality of hire and company success, we need to carefully construct an operational definition of a promotion using criteria like those just mentioned.
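As an illustration of such an operational definition, the sketch below flags a promotion only when a title hierarchy shows a strict increase in rank. The titles and ranks here are assumptions for illustration, not a real taxonomy.

```python
# Hypothetical title hierarchy; the titles and ranks are illustrative only.
TITLE_RANK = {"analyst": 1, "senior analyst": 2, "manager": 3, "director": 4}

def is_promotion(old_title, new_title):
    """Flag a title change as a promotion only when rank strictly increases.

    A retitling at the same rank (mere new nomenclature) is not counted,
    mirroring the caveat in the text above.
    """
    old_rank = TITLE_RANK.get(old_title.lower())
    new_rank = TITLE_RANK.get(new_title.lower())
    if old_rank is None or new_rank is None:
        return None  # unknown title: route to manual review rather than guess
    return new_rank > old_rank

print(is_promotion("Analyst", "Senior Analyst"))  # True
print(is_promotion("Manager", "Manager"))         # False
```

Returning `None` for unrecognized titles, rather than guessing, keeps the operational definition honest when the HRIS contains titles outside the documented hierarchy.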

When it comes to analyzing the underlying causes for a promotion, or even predicting whether someone will be promoted, there may be a class imbalance issue. Promotions don’t usually occur at high rates, so there may be much less data for the promoted group of employees compared to those who don’t receive a promotion, especially when considering only one business unit or small period of time. This may make such an analysis difficult. Consider whether there is some other metric to supplement a sparse dataset, like being given “high potential” status in an internal review.
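To make the class-imbalance point concrete, the snippet below measures the imbalance in a hypothetical promotion dataset and computes inverse-frequency class weights, one common remedy when training a classifier on rare outcomes.

```python
from collections import Counter

# Hypothetical labels: 1 = promoted within two years, 0 = not promoted
labels = [0] * 95 + [1] * 5

counts = Counter(labels)
imbalance = counts[0] / counts[1]
print(f"negative:positive ratio = {imbalance:.0f}:1")  # 19:1

# Inverse-frequency class weights: rare classes get larger weight in the loss
n = len(labels)
weights = {cls: n / (len(counts) * c) for cls, c in counts.items()}
print(weights)  # the rare promoted class gets weight 10.0
```

This is the same weighting scheme that, for example, scikit-learn applies under `class_weight="balanced"`; supplementing the rare class with a related label (like "high potential" status) is the data-collection alternative suggested above.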

While promotions can be a good way of tracking existing employee productivity, a warning on using promotions as a criterion for a new hire assessment: US regulation only allows promotions as a hiring criterion if (i) the original job is similar to the job after promotion, (ii) the promotion point is within 5 years of hire, and (iii) a majority of individuals in the job at that time are promoted. Points (i) and (ii) tend not to be too much of a barrier, but (iii) can present major issues if promotions are rare. [2]

Performance Ratings

Source: https://www.flickr.com/photos/vathis/15223336584/

Companies may collect employee performance ratings one or two times per year, a task usually performed by a manager or supervisor. While these kinds of ratings can give a sense of how well or poorly people are doing their jobs, performance ratings have some of the biggest quality issues.

First, the data could be sparse in terms of frequency, as in the case of yearly performance ratings. You don’t want to wait a year to understand how a new hire is performing. Is there feedback solicited from supervisors at, say, 90 days on the job to understand a new hire’s performance earlier, or some other proxy for performance?

Second, there may not be a lot of variance in performance rating data, especially when the data is qualitative. If 90% of employees fall into the “meets expectation” bucket, this won’t be informative for anyone and will make predictive modeling difficult, if not futile. [3] Managers may also have to assign ratings according to a forced distribution, which means the ratings won’t reflect reality.
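One quick way to check whether ratings carry any signal is to measure their spread. The sketch below uses normalized entropy on a hypothetical distribution where 90% of employees "meet expectations"; values near 0 mean the ratings are nearly uninformative.

```python
import math
from collections import Counter

# Hypothetical ratings: 90% of employees land in one bucket
ratings = ["meets"] * 90 + ["exceeds"] * 6 + ["below"] * 4

def normalized_entropy(values):
    """0.0 when every rating is identical, 1.0 when ratings are uniform."""
    counts = Counter(values)
    n = len(values)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / math.log2(len(counts)) if len(counts) > 1 else 0.0

print(round(normalized_entropy(ratings), 2))
```

A rating column with normalized entropy near zero gives a supervised model almost nothing to learn from, regardless of how good the features are.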

Finally, there’s the reliability of the data again. Rating scales may have changed over the years, and documentation of this may not exist for analysts to reference. Or maybe the raters are highly variable: one manager may be inconsistent in their ratings, or there may be inconsistency issues across the company (managers rate differently from each other).

Any of these gaps will make predictive analytics difficult. While we’ve seen the issues mentioned above in our work, thankfully we’ve also encountered companies that make a concerted effort to do well in terms of performance ratings, or they are at least aware of the issues.

Direct Productivity Measurements

Source: https://www.flickr.com/photos/ajshepherd/6856450657/

When a metric is tied to a company’s bottom line, the more objective it is, the better. Direct productivity measurements are specific to a job’s function: a sales associate either achieved their sales quota or did not; a customer service representative kept this week’s call handling times within a reasonable distribution; customer satisfaction ratings were above median for a help desk employee.
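The "above median" example can be made concrete in a few lines; the employees and satisfaction scores below are invented for illustration.

```python
from statistics import median

# Hypothetical weekly customer-satisfaction scores per help-desk employee
csat = {"alice": 4.6, "bob": 4.1, "carol": 4.4, "dan": 3.9}

cutoff = median(csat.values())
above_median = {name: score > cutoff for name, score in csat.items()}
print(above_median)
```

Note that a median split guarantees roughly half the group lands on each side, so unlike a fixed quota it stays informative even as overall performance drifts up or down.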

Accessing these metrics often means going outside of typical HR data in the HRIS and tapping systems owned by the business units, like a CRM or ERP. This can require a longer timeline, IT support to join different data sources, and more organizational buy-in overall — but getting that extra support from the business that comes from using “their data” is normally a good thing for an HR analysis.

Direct productivity metrics can be great if you have them, but not every role is quantified with sufficient frequency (e.g., daily or weekly), and these metrics can suffer a different kind of sparse data issue: they're simply not recorded. Get creative, and track something! Maybe a programmer's output is measured in lines of code written or Git commits. This data can then be stored somewhere for future analysis. You can't improve what you don't measure!

Other Metrics

Source: https://www.flickr.com/photos/enerva/14296912543/

What about the other outcome metrics your company cares about? Maybe it's whether deadlines were met, a net promoter score, employee engagement, whether a pay raise occurred, or feedback that arrives in a long-form report from a yearly review. These metrics may be valid, or they may have their own issues. It's a good exercise to think through how they fare on the three dimensions discussed above.

Long-form reports surely contain a plethora of data points, but they’re meant to be manually digested by a person. If you want to use past behavior to predict a future performance review, it’s important to consider the predictive analytics angle: how easily can the report be turned into quantitative data? It may be necessary to invest in methods like natural language processing.
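As a toy illustration of turning free text into a number before investing in real NLP, the sketch below counts hypothetical positive and negative keywords. The word lists and the review sentence are invented; actual NLP tooling would of course go far beyond this kind of crude first pass.

```python
# Hypothetical keyword lists for a crude first quantitative pass over
# free-text review paragraphs, before investing in real NLP tooling.
POSITIVE = {"exceeded", "strong", "initiative"}
NEGATIVE = {"missed", "late", "concern"}

def crude_score(text):
    """Positive-keyword hits minus negative-keyword hits."""
    words = {w.strip(".,").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

print(crude_score("Strong quarter: exceeded targets despite one missed deadline."))
```

Even a rough score like this can reveal whether long-form reviews correlate with the other metrics in this post, which helps justify (or rule out) a larger NLP investment.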

A pay raise is not always an indicator of good performance. Hopefully, a merit-based raise is denoted as such in the HRIS. But consider that everyone may receive a yearly bump, hourly employees may get higher percentage raises than salaried employees, or a raise may be an incentive to retain an employee at risk of leaving (who may leave anyway), in which case it doesn't reflect past performance. We need to be careful about how the metric is defined when tying it back to on-the-job performance: predictive models require a real relationship between the two.

Employee engagement surveys can capture how people are feeling, but are questions about engagement asked in an unbiased way, and are employees being honest in their responses? Subjectivity can show up in a lot of places.

Meeting deadlines and net promoter scores, on the other hand, likely satisfy our need for objective measures and business impact. Now we just need to make sure the data is recorded somewhere!

Conclusion

The metrics discussed above are just some of the outcomes that can be used to assess both individual employees and companies/business units in aggregate. They all come with issues, and you should be thoughtful when applying them.

Remember that these metrics should be considered in terms of business value — as in an expected value calculation. Does tracking and analyzing something help understand where business functions can be improved? What should you be measuring and recording that you’re not?

The predictive aspect of people analytics is a capability that many HR departments are looking for. Take a moment to think about the metrics that are important to you. If you could predict something, would you make different decisions with that information? If not, it’s probably not meaningful to predict.

–Matt & Sam, Cofounders, https://www.ansaro.ai
matt.mollison@ansaro.ai, sam.stone@ansaro.ai

We’re hiring! If you’d like to help companies gain workforce insight, check out our job postings: https://angel.co/ansaro/jobs

References

[1] “How Much Does Employee Turnover Really Cost?”, Huffington Post, Jan 18, 2017. https://www.huffingtonpost.com/entry/how-much-does-employee-turnover-really-cost_us_587fbaf9e4b0474ad4874fb7

[2] “Uniform Guidelines on Employee Selection Procedures”. https://www.eeoc.gov/laws/regulations/

[3] “In The Federal Government, All The Workers Are Way Above Average”, Investor’s Business Daily, Jun 20, 2016. https://www.investors.com/politics/commentary/in-the-federal-government-all-the-workers-are-way-above-average/
