Predictive Performance Analytics at Psychd

by Mandeep Sidana (Founder)

People Analytics by Psychd
3 min read · Apr 22, 2020

--

Background

For some background, Psychd Analytics was a startup I founded in 2015, and one of our flagship recruiting solutions was aimed at delivering predictive performance scores for incoming candidates, to allow for better, less biased recruiting. The solution had two parts: determining what is needed for performance, and testing candidates for those attributes. These attributes can't be the same across different organizations, cultures, and geographies. So the first part of the solution was an attempt to automate competency mapping (explained here), determining the competencies required to perform well in a given job, and the second part was testing candidates to score them on those competencies. Our platform would then test and match candidates walking into a company for selection.

If you think that personality traits are not a good indicator of future performance, we might have contrary research for you. While certain personality traits correlate only modestly with job performance (r < 0.5), they are still better indicators of performance than, say, years of experience or education, the metrics most recruiters were using and still use.
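As a rough illustration of what such a correlation check looks like, here is a minimal Python sketch. The trait scores and performance ratings below are hypothetical placeholders, not data from our deployments.

```python
# Hypothetical illustration: correlating a single personality trait
# score with on-the-job performance ratings. All values are made up.
from scipy.stats import pearsonr

trait_scores = [62, 71, 55, 80, 68, 74, 59, 77]          # e.g. a 0-100 trait score
performance = [3.1, 3.8, 2.9, 4.2, 3.5, 3.9, 3.0, 4.0]   # manager ratings, 1-5

r, p = pearsonr(trait_scores, performance)
print(f"r = {r:.2f}, p = {p:.3f}")  # even r < 0.5 can beat experience or education as a signal
```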

The beachhead

Now, determining and testing for certain personality traits is technically feasible, but as a product manager I knew that the real value could only be determined by how well our analytics performed in a real company. So we needed to do a POC. To run POCs with real clients and real people, we needed to validate our performance predictions against the candidate's actual on-the-job performance. Since this is usually hard to track, and our product used only personality data for performance prediction, we decided to use the product only for specific profiles such as sales and customer service, where performance is relatively easy to measure and personality traits have a high correlation with performance.

The Method

To calculate the accuracy of our prediction model, we categorized the predictions into four natural categories, defined as follows (a small sketch of this bucketing follows the definitions):

True Positive — the candidate scored high on our predicted performance and performed well on the actual job.

False Positive — the candidate scored high on our predicted performance but was not able to perform on the actual job.

True Negative — the candidate was rightfully rejected by our system and would not have performed well.

False Negative — the candidate was wrongfully rejected by our system but was actually a good fit.
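To make these concrete, here is a minimal sketch of how predictions and outcomes could be bucketed into the four categories, assuming a hypothetical recommendation threshold on our predicted score and a binary "performed well" flag; neither the threshold nor the records are real.

```python
# Minimal sketch: bucketing (predicted score, outcome) pairs into the
# four categories. The threshold and the records are hypothetical.
PASS_THRESHOLD = 70  # predicted-performance score needed to be recommended

def categorize(predicted_score, performed_well):
    recommended = predicted_score >= PASS_THRESHOLD
    if recommended and performed_well:
        return "TP"  # recommended, and performed well on the job
    if recommended:
        return "FP"  # recommended, but underperformed
    if performed_well:
        return "FN"  # screened out, but would have been a good fit
    return "TN"      # rightfully screened out

# (predicted score, actually performed well?)
records = [(85, True), (78, False), (55, False), (40, True)]
counts = {"TP": 0, "FP": 0, "TN": 0, "FN": 0}
for score, outcome in records:
    counts[categorize(score, outcome)] += 1
print(counts)  # {'TP': 1, 'FP': 1, 'TN': 1, 'FN': 1}
```

Of course, as the next paragraph explains, the TN and FN buckets are mostly unobservable in practice, since rejected candidates are never hired.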

Now, we couldn't really find out whether a candidate was a true negative or a false negative, since once a candidate was rejected by our system, they weren't given the chance to interview. I know we could have asked a few rejected candidates to interview, but the sample would have been too small to be statistically significant, and it would possibly have hurt the client's recruiting KPIs (time to hire, cost to hire, etc.) as well.

However, for the roles where this tool was deployed, we measured true positives by comparing the prediction algorithm's output with actual performance on the job. Since actual performance data would not be available until six months at the earliest, the POCs would take us anywhere between six months and a year.
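Put together, the metric we could actually compute in a POC was precision over hired candidates: of everyone the system recommended (and the client hired), how many performed well after six or more months. A minimal sketch, with hypothetical follow-up data:

```python
# Sketch: precision over recommended-and-hired candidates, the only
# confusion-matrix cells observable once rejected candidates aren't hired.
# Follow-up outcomes (collected ~6 months after hiring) are hypothetical.
hired_performed_well = [True, True, False, True, False, True, True]

tp = sum(hired_performed_well)        # recommended and performed well
fp = len(hired_performed_well) - tp   # recommended but underperformed
precision = tp / (tp + fp)
print(f"precision = {precision:.2f}") # 0.71 on this made-up sample
```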

To be continued.
