Structured vs Unstructured Interviews

Joe Thornton
8 min read · Jun 9, 2016


Employment interviews as a selection method are so ubiquitous that it is virtually inconceivable that a company would conduct a hiring process without using them. However, the research clearly shows that interviews are a poor predictor of job performance:

[Figure: predictive validity of common selection methods (Schmidt and Hunter, 1998)]

There is, however, a clear lack of motivation on the part of our industry to change current practices, as well as a lack of clear practical alternatives to interviews. In any case, candidates typically rate interviews more favourably than other selection methods (such as objective psychological testing, which the job-seeking community views with particular contempt). For these reasons, and from the perspective of legal defensibility, it’s probably a safe bet that interviews are here to stay.

And so most of the research of the past century has not focused on developing alternative selection methods (although that research does exist), but rather on improving interviews and making them more reliable and valid predictors of job performance.

One of the big breakthroughs of the last century has been the emergence of structured interviews, shown to be superior to unstructured interviews by almost every measurable standard*.

(*candidates sometimes prefer unstructured interviews, which can lead to an increase in job offer acceptance rate… but I suggest that this is only really an issue if you’re identifying the right candidates, which you’re probably not if you’re using unstructured interviews.)

The Pitfalls of Unstructured Interviews

Unstructured interviews typically have no set format or predetermined questions and lack a process for scoring candidate responses. Since these interviews are conducted in an idiosyncratic way, the interviewer has discretion over what questions to ask and how to evaluate the answers. This in turn shifts the focus toward the interviewer’s implicit theories about what should be assessed, instead of focusing on those signals that we know are the best predictors of success.

Unstructured interviews provide low reliability and low validity in predicting job performance, thereby greatly diminishing the likelihood of the right candidates being selected.

Additionally, humans make terrible interviewers, plagued as we are by a myriad of biases and judgment fallibilities. And humans make even worse candidates: they are more likely to lie in an interview than to tell only the truth.

Interviewer Biases

Unstructured interviews are highly susceptible to a variety of biases, including gender, race and disability. These biases typically arise when interviewers gather and evaluate non-job-related information about candidates. In simple terms, unstructured interviews will likely result in you hiring middle-class white males instead of the best candidates.

Structured interviews, on the other hand, are far more resilient to biasing factors. By linking assessment to job-related factors, structured interviews place constraints on the influence of extraneous information. Though biases will always exist in interviews, the research does show that structured interviews can, in certain situations, entirely eliminate their impact.

Impression Management

Impression management (IM) is a process by which a candidate will attempt to influence the image that the interviewer forms of them.

Assertive IM occurs where a candidate proactively constructs a positive image of themselves through self-promotion (claiming responsibility for achievements), ingratiation (using interpersonal attraction) or opinion conformity (claiming values held by the interviewer).

Deceptive IM, which involves describing non-existent accomplishments (i.e. outright lying), was found by some studies to be used by a majority of candidates.

Nonverbal IM was also found to be a common occurrence and includes concepts such as emotional contagion, where the emotional state of the candidate might affect the emotions of the interviewer, causing them to be more lenient in their evaluations.

The research shows that the use of one or several forms of IM usually results in better candidate performance and that unstructured interviews provide candidates with the best opportunity to engage in IM.

Structured interviews, on the other hand, can reduce IM effectiveness by focusing the interviewer’s assessment on job-related information, such that the influence of extraneous information is minimised.

Impression management is extremely common and will occur even in the most structured of interview settings. For this reason reference checking is a valuable tool because it allows for the verification of claims made by the candidate during the interview process. Conducting multiple interviews with different interviewers can also help to identify inconsistencies in a candidate’s claims. In addition, the research shows that IM effectiveness is inversely related to the length of the interview (i.e. longer interviews are better at eliminating IM).

Effects of Rapport Building

Most interviews start with some form of rapport building used to “break the ice” and put the candidate at ease. But humans tend to form impressions very quickly and the research shows that judgements made by interviewers in the first ten seconds of an interview can predict with remarkable accuracy the outcome of the interview.

The plethora of biases at play in the unstructured rapport building stage can negatively affect information gathering and assessment in the main part of the interview. To counteract this problem, rapport building could be eliminated or limited. However, eliminating or severely limiting rapport building is likely to lead to negative reactions from the candidate, who might interpret such limitations as a sign of poor personality or a lack of interest on the part of the interviewer.

Therefore rapport building should mostly involve the interviewer providing information to the candidate, thereby limiting the opportunity for the candidate to provide biasing information. The candidate should only be given the opportunity to ask questions at the end of the interview.

What Constitutes a Structured Interview?

So how exactly do we define structure, and what factors constitute a structured interview? Structure is the degree to which an interviewer’s discretion in conducting the interview is constrained. In other words, structure involves a process of standardisation: establishing predetermined rules and applying them consistently. Typically, such rules apply in two dimensions:

  1. Content (questions asked)
  2. Evaluation (response scoring)

The best study I could find on the subject gives a more detailed typology of interview structure (while still using these two dimensions):

Content:

  1. Basing questions on job analysis
  2. Asking the same questions to each candidate
  3. Limiting prompting, follow-up or elaboration on questions
  4. Using better types of questions
  5. Conducting longer interviews or using larger numbers of questions
  6. Controlling ancillary information
  7. Not allowing the candidate to ask questions until after the interview

Evaluation:

  1. Rating each answer using multiple scales
  2. Using anchored rating scales
  3. Taking notes
  4. Using multiple interviewers
  5. Using the same interviewers across all candidates
  6. Not discussing candidate performances between interviewers
  7. Providing interviewer training
  8. Using statistical prediction

By implementing these recommendations you will create structured interviews that are far more likely to identify the right talent. Companies that fail to do this are unnecessarily creating a competitive disadvantage for themselves.

I’d love to cover all of the above factors with practical recommendations for implementing each, but I have a pretty time-consuming day job :) Instead I’ll dive into a few of them and (probably) come back someday to complete the rest.

Base questions on job analysis

Before commencing an interview process the role should be clearly defined by performing a job analysis (welcome to Recruiting 101). The job analysis in turn provides the material for creating interview questions.

Asking questions related to the role will increase validity by increasing job relatedness and by ensuring that the interviewer neither includes irrelevant information nor excludes relevant information. What’s more, role-related knowledge correlates positively with job performance (who would have thought it?).

In a section below I talk about the types of questions you should use, one of which is the behavioural question. Behavioural questions ask the candidate to give specific examples from their previous roles and provide a great opportunity to base questions on role-related tasks.

Limit interviewer probing

An interviewer will ask follow-up or probing questions when the candidate’s answer to an initial question is deemed to be inadequate or incomplete. One study found that probing increases a candidate’s likelihood of faking, which usually occurs as a means of pleasing the interviewer even when the candidate simply doesn’t have an answer.

Many researchers believe that eliminating or controlling probing is a defining element of structured interviews. On the other hand, of course, probing is seen as a valuable tool that allows the interviewer to delve deeper into a candidate’s responses in order to get more information to analyse.

The key is to strike a balance. Practical recommendations include:

  • Probing equally across all candidates
  • Using predetermined probes (i.e. follow-up questions related to the original question)
  • Predicting areas where candidates are likely to give deficient answers and preparing predetermined probes for them
  • Allowing each candidate the same amount of time to answer questions
  • Only probing if the candidate misinterprets the question or if the answer is clearly deficient
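To make predetermined probes concrete, here is a minimal sketch in Python (the question wording, probe wording and data layout are illustrative assumptions on my part, not a prescribed format): every candidate is asked the same questions, and interviewers only follow up with the probes listed against each question.

```python
# Illustrative sketch: each predetermined question carries its own
# pre-approved probes, so probing stays consistent across candidates.
from dataclasses import dataclass, field


@dataclass
class Question:
    text: str
    probes: list[str] = field(default_factory=list)  # the only follow-ups allowed


INTERVIEW_GUIDE = [
    Question(
        text="Give me an example of a time when you changed a colleague's mind on a decision.",
        probes=["How did you convince them?", "Did you use data?"],
    ),
    Question(
        text="Tell me about a time when you identified a bottleneck and fixed it.",
        probes=["How did you measure the impact of the fix?"],
    ),
]

# Interviewers work from the same guide for every candidate.
for q in INTERVIEW_GUIDE:
    print(q.text)
    for probe in q.probes:
        print("  probe:", probe)
```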

Use better types of questions

Questions that help evaluate a candidate’s behaviour in previous situations or their intentions in future situations are the most effective at predicting performance.

Behavioural questions focus on previous work experience and typically take the following form: “Give me an example of a time when you _______?” Here are some examples:

  • What is your most significant achievement at your current job?
  • Give me an example of a time when you identified a bottleneck in your previous job and fixed it.
  • Tell me about a time when you took the lead on a difficult project without being specifically asked.
  • Give me an example of a time when you changed a colleague’s mind on a decision. How did you convince them? Did you use data? (Predetermined probing questions FTW.)

Situational questions ask candidates how they would act in future situations and typically take the following form: “What would you do if _______?” Here are some examples:

  • Imagine it’s your first day on the job and I told you “we need to hire five more engineers immediately and I need you to take charge”. What would you do? Specifically, what steps would you take?
  • It’s your first week as our newest software engineer and you conclude that the tech infrastructure needs to be completely overhauled, at significant expense. What steps do you take to communicate this to the cofounders and convince us it needs to be done?
  • During your first month we have a large potential client scheduled to visit the office for a product demo. Just before they arrive, we inform you that no one else can make it back to the office to meet them as planned. The client is extremely eager to demo the product. You have an ok understanding of how it works. What do you do?

Use Anchored Rating Scales

Anchored rating scales are used to rate answers given by candidates by comparing them to specific examples that are anchored to numerical ratings. Here’s an example of an ARS used by Google for interviewing candidates for a fictional underwater basket weaver role:

[Image: Google’s sample anchored rating scale]

To evaluate responses, the interviewer matches the applicant’s responses with the written anchors for each question. The research shows that ARS increase reliability and accuracy and reduce biases. By providing interviewers with objective behaviour standards, rating scales enhance the interviewer’s focus on specific relevant job-related behaviours and diminish the focus on non-job-related factors, thereby reducing the effect of biases.

Rating scales also provide a framework that ensures responses are rated consistently across different interviewers and candidates.

Practical recommendations include using a rating scale with at least 5 points, where each point is defined by a specific anchor. The more detailed each anchor, the more reliable and valid the system will be as a whole, so I advise you to make each anchor as detailed as possible.
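As a rough sketch of how this can work in practice (the anchors, scores and combination rule below are invented for illustration and are not Google’s rubric or anything taken from the research), the scale can be kept as plain data and candidate scores combined mechanically across questions and interviewers, which also covers the “statistical prediction” item from the evaluation list above:

```python
# Illustrative sketch of an anchored rating scale with mechanical score
# combination. All anchors and numbers are invented for the example.
from statistics import mean

# 5-point scale: each point is defined by a behavioural anchor that the
# interviewer matches the candidate's response against.
ANCHORS = {
    1: "Could not give a relevant example.",
    2: "Gave a vague example with no clear personal contribution.",
    3: "Gave a relevant example with a clear personal contribution.",
    4: "Gave a relevant example with a measurable outcome.",
    5: "Gave a relevant example with a measurable outcome and lessons applied since.",
}

# Per-interviewer scores for one candidate, one score per question (1-5).
ratings = {
    "interviewer_a": [4, 3, 5],
    "interviewer_b": [3, 3, 4],
}

# Statistical prediction: combine the scores with a fixed rule (here a simple
# mean) rather than an open-ended discussion between interviewers.
overall = mean(mean(scores) for scores in ratings.values())
print(f"Overall score: {overall:.2f} / 5")
print("Closest anchor:", ANCHORS[round(overall)])
```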

[more to come… probably]

I’m the Head of Talent at Playfair Capital. I’m particularly interested in hiring, interviewing and building high performance teams. I’m an advocate of evidence-based HR.
