Recruiting: Lessons from sports drafts [Masey]
I’ve been paying closer attention in recent months to how sports teams and acting talent agencies handle talent, for a couple of key reasons:
- These industries regularly make high-risk, multi-million-dollar bets on talent. Their incentive to apply cutting-edge hiring practices (and to continuously push the envelope in that domain) is therefore extremely high.
- At the same time, the relative simplicity of the “definition of success” and the ability to draw stronger causal links between talent decisions and outcomes make these industries attractive to study from a research perspective.
I rarely make predictions, but I suspect that in the coming years we’ll see more and more hiring practices that are currently common among elite sports teams and movie production studios propagate to other industries in which top-tier talent is a critical component of the business’s success.
None of these industries offers a perfect model for the more common talent market. As mentioned above, they are simpler representations. In sports, the number of “firms” competing for talent is known and rather limited (dozens), measuring overall success is more binary (games won), and individual performance indicators are more visible, established, and straightforward. Movie contracts are relatively short (several months), an attribute that makes the industry significantly different from the broader job market, which usually optimizes for longer-term employment.
Masey’s post offers 5 lessons that are fairly applicable to any hiring effort, regardless of industry:
- Understand your goal — “People often don’t understand their decision objectives, but the most successful sports teams are clear about their goal and don’t stray from the principles and attributes they’ve established.” — build a “performance profile”/scorecard before you even start looking for the first candidate.
- Keep your judges apart — “Don’t let people talk to each other or see others’ opinions before providing their own, expose the candidate to judges in different ways and at different points in time, and bring people with different perspectives into the process. More independence is often the biggest improvement an organization can easily make in their hiring process.” — easily translatable to the way scorecards, debriefs, and hiring recommendations should be made.
- Break the candidate into parts… — “It’s much easier to give one, global evaluation — like or dislike, hire or reject. These overarching evaluations are natural and efficient, but unfortunately, they are often biased. For a more reliable evaluation, you need to break the objective into component parts and evaluate them separately.” — this speaks to the benefit of interviews focused on evaluating just a subset of the overall criteria, and clearly setting expectations with the interview team that they should evaluate the candidate’s performance in their area of focus rather than make an overall hire/don’t hire recommendation.
- … and bring them back together mechanically — “At the team level it can mean summarizing the group’s collective opinion by simply averaging scouts’ opinions. At the very least this approach provides a more systematic starting point for a group discussion.” — personally, I’d lean towards the latter — using the aggregation as a systematic starting point rather than as an automatic determination of the outcome. The fully algorithmic approach requires full calibration across the interview team, which is often not the case.
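The mechanical-aggregation step is simple enough to sketch. Here is a minimal illustration in Python — the interviewer names, attribute names, and weights are all hypothetical, not something from the original post:

```python
from statistics import mean

# Hypothetical scorecard: each interviewer independently rates the same
# component attributes on a 1-5 scale (names and weights are illustrative).
scores = {
    "Alice": {"technical": 4, "communication": 3, "ownership": 5},
    "Bob":   {"technical": 5, "communication": 4, "ownership": 3},
    "Carol": {"technical": 3, "communication": 4, "ownership": 4},
}
weights = {"technical": 0.5, "communication": 0.2, "ownership": 0.3}

# Mechanical aggregation: average each attribute across judges first...
attribute_avgs = {
    attr: mean(s[attr] for s in scores.values()) for attr in weights
}
# ...then combine into one weighted composite, used as a systematic
# starting point for the debrief rather than an automatic decision.
composite = sum(weights[a] * avg for a, avg in attribute_avgs.items())

print(attribute_avgs)
print(round(composite, 2))
```

Averaging per attribute (rather than averaging overall gut-feel ratings) preserves the “break into parts” structure right up to the debrief.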
- Keep score — “We’ve all been animated by the sense we’ve just seen the next star in our field. The trick is to capture those judgments and track them over time to learn how predictive they are. This applies to all judgments. Hiring is best thought of as a forecasting process, and the only way to improve forecasts is to map them against results and refine the process over time.”