You, Dunning-Kruger, and Interviewing

Member of the Dunning-Kruger Club?

One of the conversations I’m almost guaranteed to have when getting in sync with a Software Engineering team is about the Dunning-Kruger Effect. Usually it comes up in the form of a joke, and we all laugh, but it’s also a good first step toward a rigorous recruiting process that helps us sidestep our cognitive biases.

Newsflash: if you’re human, you have cognitive biases. Daniel Kahneman’s book, “Thinking, Fast and Slow”, describes many of these biases and how they shape our decision-making. It should not be surprising that cognitive biases can have a negative impact on your hiring decisions.

The basic gist of the Dunning-Kruger Effect is that people early in their learning curve don’t know what they don’t know, can’t accurately assess their skills against others in the field, and generally overstate their competence. In time, and with continued learning, most people come to recognize their former lack of skill. Follow-on studies have shown that narcissists falsely claim a high ability to read others, and that extroverts generally overestimate their abilities. The D-K Effect shows up both in interviewers who think they know what they are doing and in candidates who are not quite up to the role they’ve applied for.

How do you overcome cognitive biases in the interview process? Use a rigorous evaluation process, including candidate scorecards, and involve several people from your team to provide additional points of view. The needed rigor means comparing performance indicated in the interview against longer-term success measures, such as employee performance at 90 days, 180 days, and one year out, as well as annualized turnover rates for individual teams and the company overall. You should also track hire/no-hire decisions against these performance assessments to gauge how well your team members interview.
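To make that tracking concrete, here is a minimal sketch of how a team might log hire/no-hire decisions next to the later performance checkpoints. The schema, field names, and the “strong interview, weak 90-day review” threshold are illustrative assumptions on my part, not a prescribed format.

```python
# Hypothetical tracking sketch: interview scores vs. later performance.
# Field names, ratings scale, and thresholds are illustrative assumptions.
from dataclasses import dataclass
from statistics import mean

@dataclass
class HireRecord:
    candidate_id: str
    interview_score: float            # mean scorecard score, 1-5
    hired: bool
    perf_90_day: float | None = None  # manager rating at 90 days, 1-5
    perf_1_year: float | None = None  # manager rating at one year, 1-5

def calibration_report(records: list[HireRecord]) -> dict:
    """Compare what the interview loop predicted with how hires actually performed."""
    hired = [r for r in records if r.hired and r.perf_90_day is not None]
    if not hired:
        return {"hires": 0}
    return {
        "hires": len(hired),
        "avg_interview_score": round(mean(r.interview_score for r in hired), 2),
        "avg_90_day_perf": round(mean(r.perf_90_day for r in hired), 2),
        # Strong interview but weak 90-day review: a sign the loop needs calibration.
        "misses": sum(1 for r in hired if r.interview_score >= 4 and r.perf_90_day < 3),
    }
```

Run quarterly against whatever HR data you actually have; the point is the comparison, not this particular shape of record.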

Starting from scratch, the interview process needs to be a structured flow, from the job description through to the summary assessment. If everyone knows what you are hiring for, and the process evaluates for these attributes, cognitive bias in the decision process is much reduced.

Here are the basic elements of a structured interview process:

  1. Stated org values and mission
  2. Job Description
  3. Candidate scorecards
  4. Set of codified technical questions germane to your team and product
  5. Defined interview loop
  6. Codified candidate decision process

As this post is about assessing candidates, we’ll focus our attention on the two assessment items: candidate scorecards and the final candidate decision process.

Candidate scorecards evolved out of work initially done by Paul Meehl and further refined by Daniel Kahneman for the Israeli Army. They generally cover six assessment areas, each scored 1–5 with 5 being high, plus one generalized question: “Close your eyes, and think about how they’ll do as a team member over the next 18 months.” For Software Engineers, an example scorecard might evaluate the following six key items:

  • Software Design
  • Programming (including software quality)
  • Organization; clarity of thought & process
  • Communication Skills
  • Teamwork; earns trust
  • Values
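As a concrete illustration, a scorecard built on those six areas might look like the sketch below. The field names, the 1–5 validation, and the simple mean across areas are my assumptions about one reasonable encoding, not a mandated format.

```python
# Minimal scorecard sketch. The six areas mirror the list above; everything
# else (field names, the unweighted mean) is an illustrative assumption.
from dataclasses import dataclass, field

AREAS = [
    "Software Design",
    "Programming",
    "Organization",
    "Communication Skills",
    "Teamwork",
    "Values",
]

@dataclass
class Scorecard:
    candidate: str
    interviewer: str
    scores: dict[str, int] = field(default_factory=dict)  # each area scored 1-5
    eighteen_month_outlook: str = ""  # the "close your eyes" gut-check, in words
    notes: str = ""

    def record(self, area: str, score: int) -> None:
        assert area in AREAS and 1 <= score <= 5
        self.scores[area] = score

    def overall(self) -> float:
        """Unweighted mean across scored areas; a team may choose to weight them."""
        return sum(self.scores.values()) / len(self.scores)
```

The mechanical averaging is in the spirit of Meehl’s statistical-over-clinical prediction work; the gut-check question stays as words, not a number.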

In support of the scorecard, the team should use a small set of questions, no more than three to five, for each item being assessed. This helps interviewers avoid repeating the same question with a candidate, keeps the scorecard evaluating consistently across candidates for the same role, and gives the decision maker a broader view of the candidate’s capabilities on each item that defines success in the role.
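One hypothetical way to codify that question set is a small bank keyed by scorecard area, with questions handed out per interviewer so the loop doesn’t repeat itself for a candidate. The example questions and function names below are placeholders, not a recommended list.

```python
# Hypothetical question-bank sketch: 3-5 codified questions per scorecard area.
# The questions shown are placeholders; swap in ones germane to your product.
QUESTION_BANK: dict[str, list[str]] = {
    "Software Design": [
        "Design a rate limiter for a public API.",
        "Sketch the storage layer for an audit log.",
        "How would you split this monolith into services?",
    ],
    "Programming": [
        "Merge k sorted streams.",
        "Find duplicate records in a large log file.",
        "Implement an LRU cache.",
    ],
    # ...the remaining areas follow the same pattern, 3-5 questions each.
}

def assign_questions(areas_by_interviewer: dict[str, str]) -> dict[str, str]:
    """Give each interviewer one question for their area, never reusing a
    question within the same candidate's loop."""
    used: set[str] = set()
    assignments: dict[str, str] = {}
    for interviewer, area in areas_by_interviewer.items():
        question = next(q for q in QUESTION_BANK[area] if q not in used)
        used.add(question)
        assignments[interviewer] = question
    return assignments
```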

Let’s use the software design and programming areas to illustrate how this works. The score is based on how completely the candidate’s solution answers the question, time to answer, idiomatic coding, and overall performance in terms of computational speed and memory usage relative to the defined role. The interviewer should score for the specific role: a candidate for an SDE II role is scored against criteria for that role, not another, say Principal SDE. One might expect an SDE II to complete a given programming question in 30 minutes, whereas a Principal SDE might complete the same question in 20 minutes, inclusive of a discussion of trade-offs among several known optimal approaches.
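The role calibration can be written down so interviewers aren’t scoring from memory. In the sketch below, only the 30- and 20-minute targets echo the example above; the structure and the trade-off flag are illustrative assumptions.

```python
# Role-calibrated expectations for one programming question. The 30- and
# 20-minute targets come from the example in the text; the rest is illustrative.
from dataclasses import dataclass

@dataclass
class RoleRubric:
    role: str
    target_minutes: int          # expected time to a working solution
    expects_tradeoff_talk: bool  # should discuss several known optimal approaches

RUBRICS = {
    "SDE II": RoleRubric("SDE II", target_minutes=30, expects_tradeoff_talk=False),
    "Principal SDE": RoleRubric("Principal SDE", target_minutes=20, expects_tradeoff_talk=True),
}

def within_expectation(role: str, minutes_taken: int, discussed_tradeoffs: bool) -> bool:
    """Score against the candidate's target role, never against the interviewer."""
    rubric = RUBRICS[role]
    on_time = minutes_taken <= rubric.target_minutes
    return on_time and (discussed_tradeoffs or not rubric.expects_tradeoff_talk)
```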

Note that this approach, where the interviewer assesses the candidate against known standards, is critical to reducing cognitive biases in the hiring process. The dragons of bias start to appear when the interviewer compares how a candidate stacks up against the interviewer’s own technical self, or against the capabilities of others on the team.

The three non-technical areas in this scorecard are there to provide some sense of how the candidate will integrate with the team. What you’re ideally looking for is whether this person already practices the communication skills, teamwork, and especially the values that the team is establishing or has already established. Importantly, these are also the areas where a candidate who doesn’t look like your current team members will most likely fail due to interviewer biases. Put another way, interviewer training and attention to detail matter here if you want a high-performing, varied team.

Communication skill is an area that is frequently flubbed because no one defines what’s actually being evaluated. “Yeah, they’re a good communicator” — WTF does that actually mean? Interviewers need to be shown what is desired and what is not, based on the values of your organization. A good communicator gets their point across to their audience, and that can be done in a number of ways. It’s important that the interviewer focus on the substance of what was communicated and whether they understood it, not on whether the delivery was ‘different’ or whether ‘they liked it’.

Initial establishment of trust can be evaluated by carefully observing how the candidate reacts and responds to the interviewers. Of course, the development of trust starts with the interviewer, so they need to be on time for their interview with the candidate. They should also formally greet the candidate and, barring sickness, shake hands; physical touch and looking each other square in the eye is a great way to establish trust because of the oxytocin release that typically accompanies this ritual. Beyond that, does the candidate meet and follow through on asked-for deliverables? Have they researched the company and the people they are meeting with? Do they speak up when they need guidance to answer a question? Do they react positively to bids from the interviewers? Do they give wishy-washy, non-committal answers or, worse, fail to answer? Are they responsive to hints from the interviewers when they get stuck on a technical point? Trust is probably the most important factor in a team member’s bonding with the team, and the primary interview loop is one of the first big steps in establishing a strong trust bond with a potential new employee.

Values are a bit more challenging because time constraints mean you can’t always assess a value from what a candidate delivers in a performance-based interview. Behaviorally based questions can help surface the values someone brings to their work: “Tell me how you successfully delivered your code when the team was under unusually tight time constraints.”, “Have you ever told your boss that the team wouldn’t be able to deliver the release because it wasn’t meeting quality standards? How did that play out?”, or “Tell me how you are continuing to learn more about software engineering.” If you’re looking for something to dig into, take the strengths you see developing through the interview process and ask about the flip side of those strengths to see how flexible the candidate is. For example, a candidate may be highly focused on architecting processes before moving forward with product development, so you might ask, “Tell me about a time when you suddenly had to shift from establishing a base approach for a big release to jumping in to meet a short-term customer delivery. What was the quality of your work like? How did your manager and the team view your work from that period?” If you need ideas for developing behavioral questions, see this article at Business Insider, or check the books “Hire With Your Head”, by Lou Adler, or “Who”, by Geoff Smart and Randy Street.

After the interview is concluded, the interviewer’s next step should be to write up their notes and grade the candidate. In my work with various engineering teams, this generally takes an interviewer between 30 and 60 minutes. This time should always be scheduled for the interviewers so they understand the commitment, and that it needs to happen immediately after the interview. Why? Because the Forgetting Curve says your interviewers will lose around 70% of what they remember of the interview within 24 hours.
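That 70% figure is in the spirit of the Ebbinghaus forgetting curve, which models retention as roughly R(t) = e^(-t/S). The stability constant S below is an assumption I picked so the numbers land near that figure, not a value from the research.

```python
# Ebbinghaus-style retention sketch: R(t) = exp(-t / S).
# S (memory "stability", in hours) is an illustrative assumption chosen so that
# roughly 70% of interview detail is gone after 24 hours.
from math import exp

S_HOURS = 20.0  # assumed stability constant

def retention(hours_since_interview: float) -> float:
    return exp(-hours_since_interview / S_HOURS)

for t in (0, 1, 8, 24):
    print(f"{t:>2} h after the interview: {retention(t):.0%} retained")
# 24 h -> ~30% retained, i.e. ~70% lost -- hence writing notes immediately.
```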

After all the interview notes are completed that day, the team should assemble and review their experiences with the hiring manager and the recruiter. This should be a short discussion, no more than 30 minutes and ideally 10 to 15. Mostly it’s a thumbs-up-or-down process with verbal highlights. This helps the Hiring Manager and Recruiter make the ultimate decision on whether to extend an offer to the candidate.
