What Makes a Good Job Interview?

brian manley
Effective Engineering Interviews
10 min read · Feb 14, 2022

Job interviews are tough. This is probably a near-universally agreed-upon statement. Typically, you have about an hour to convince someone that you can, and should, work with or for them.

But the experience can be just as awkward and difficult for the person across the table. How do you figure out if this stranger is a good fit for your position? How can you tell if you would want to work with them for potentially years?

In this article, I’ll walk through my views on what makes a good interview. I’ve been interviewed for software engineering positions (and, that one time I thought about switching, for product management), and I’ve interviewed engineers, product managers, and UX designers, so essentially all of my interview(er) experience has been in tech. To be perfectly honest, I don’t know a ton about interviews outside of the tech world, but I would love to learn more.

Is it relevant?

An interview is meant to determine if a person can do a job. Naturally, this means that it should focus on skills that the candidate would actually use. Some clear examples of irrelevant interviews are having a backend engineer build an iOS app or making a UX designer talk through a database schema.

Most companies aren’t going to be asking questions that are obviously irrelevant, but they can fall into the trap of designing interviews that are accidentally irrelevant. For example, let’s talk through hiring for a senior-level full-stack software engineering position at a mid-size company. A role like this will usually involve building new services, fitting them into the company’s existing tech stack to improve existing products and launch new ones, and mentoring junior engineers.

Following a commonly used process, a candidate will typically go through interview rounds where they need to solve an algorithmic problem with code, talk through high-level system design, and discuss their past team interactions. Questions are usually designed so that they’ll scale across roles, so a senior candidate will be asked similar, if not identical, questions to those a junior candidate will be asked.

Generic interviews are a common source of irrelevant questions. Asking a senior engineer about high-level system design might be incredibly relevant, but the same can’t always be said for an entry-level candidate. Making sure any engineering candidate can code makes sense, but is anyone at any level really going to need to regularly invert a binary tree?

How do we make it relevant?

The “simple” answer is to ask people about what they have done in their day-to-day to assess whether it will transfer well to their new day-to-day. This simple solution becomes very hard to implement if we try to cover everything, so we have to figure out what we care about most.

Let’s apply this to our senior engineering position mentioned above. A high-level system design interview where the candidate needs to talk through a system they would implement to fulfill some business requirements makes sense. A behavioral interview focusing on mentorship and team interactions also makes sense. A standard algorithmic interview might not make sense though.

This person is going to be writing code, but they’re not going to be regularly implementing custom sorting algorithms. What if we just took what the person would need to do on the job and have them do it in the interview? We could make our coding/technical round more relevant to the job by having the candidate modify existing code to adapt it to their solution from the system design round.

Are skills being assessed by relevant interviewers?

Once we’ve ensured we’re asking relevant questions, we need to make sure that they’re being asked by the right people. Ideally, each interview round should be conducted by someone who would be interacting with the skill being assessed.

Often, candidates for a given role will be interviewed by others in a similar role, and maybe one or two positions above. For our senior engineering role, it makes sense to have technical skills evaluated by peers. If the candidate is hired, other engineers will interact with their code and designs. Other engineers will also interact with the candidate in non-technical ways (like talking!). So peer engineers are relevant, but only involving technical folks will likely lead to an interview panel that lacks diversity (a topic very worthy of its own article by someone more qualified than me). A way to avoid this trap is to include peers outside of the candidate’s discipline, ones the candidate would regularly interact with, when assessing interpersonal skills.

Is it effective?

Once we’re asking questions that cover relevant skills, we need to make sure we’re accurately assessing them. Designing a question that perfectly exercises the system design skills for our senior engineering position doesn’t help much if we don’t have some way of measuring them.

Ultimately, we’re “just” trying to figure out whether or not the candidate is a fit for the role. To that end, we probably don’t need an overly in-depth scoring system. A straightforward yes/no for each relevant skill might be too simple, but in my experience it winds up being much clearer than a complex sliding scale. A common middle ground is the “strong no, no, yes, strong yes” scale. This keeps the measurement straightforward, but adds an option for extreme over- (and under-) performers.

Is it consistently effective?

So we have a scoring system that will effectively filter candidates, but how do we use it? Ideally, given the same candidate, our interview should always yield the same results. In practice this is impossible: candidates are going to have bad days, and interviewers will never be 100% objective (more on that later). While we can’t get perfect consistency, we should strive to get as close as possible.

Building a grading rubric for our interview can help greatly with consistency. If we have a definition of what a candidate needs to do to get a “Yes” or a “Strong Yes” and make sure our interviewers are familiar with it, we’re much more likely to get consistent outcomes.

Returning again to our senior engineering candidate, here’s an example of what a system design rubric could look like:

[Image: A non-production ready rubric]

This rubric is far from perfect, but should provide a decent starting point. Specifically, we can improve it by clarifying what things like “clear detail” and “well defined” mean. We also do not cover interpersonal skills, which may be evaluated more specifically in other interview rounds, but may be worth assessing here too.
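Since the rubric itself appears as an image above, here is a hypothetical version sketched as code. The criteria, observation names, and grading logic are illustrative assumptions of mine, not the article's actual rubric:

```python
# A hypothetical system-design rubric on the "strong no, no, yes, strong yes"
# scale. Criteria wording and observation names are invented for illustration.
RUBRIC = {
    "strong_yes": "Complete design with clear detail on data flow, storage, "
                  "and scaling; discusses trade-offs unprompted.",
    "yes": "Workable design covering the core requirements; discusses "
           "trade-offs when prompted.",
    "no": "Design misses core requirements or key components are unexplained.",
    "strong_no": "Cannot produce a coherent design for the requirements.",
}

def grade(observed_behaviors: set[str]) -> str:
    """Map an interviewer's observations to a rubric score."""
    if {"complete_design", "unprompted_tradeoffs"} <= observed_behaviors:
        return "strong_yes"
    if "complete_design" in observed_behaviors:
        return "yes"
    if "partial_design" in observed_behaviors:
        return "no"
    return "strong_no"
```

Writing the rubric down as data like this also makes the ambiguity the article mentions visible: terms like “clear detail” still need human judgment, which is exactly why interviewers must be familiar with the rubric before using it.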

Adaptive Consistency

At the end of the day, rubric criteria are going to vary by company, by role, and over time. This may seem against the idea of consistency, but it’s a reality in tech (and likely other industries) that roles evolve over time. For example, expectations of a backend engineer today are very different than they were five years ago, and will likely be very different five years from now.

Rubric criteria will, and should, change as a result of outcomes, a.k.a. hires!

Is it reasonable?

The final factor we’ll cover has to do with the candidate’s experience. If we are able to create a near-perfect interview that gives us amazingly consistent results, our effort will be in vain if we’ve built a reputation for having an unreasonable interview!

Interview Reputation

Candidates, especially more experienced ones, are growing more and more savvy about interviews. Posting interview reviews on sites like Glassdoor and Blind is commonplace. As a result, an unreasonable interview can very quickly lead to a decrease in applicants and a blow to company reputation.

[Image: ooh big tech interviews got big reputation]

Overall company reputation is made up of many factors and the interview process’s impact on it will vary. Google has a notoriously difficult interview process, but still receives millions of applications per year. For the non-Googles of the world, having a famously difficult or time-consuming interview process could be a nail in the coffin.

Defining Reasonability

“Reasonability,” like many of the other factors we’ve discussed, is difficult to objectively define. I think two major points make up how reasonable an interview (or process) is — difficulty and duration.

Unfortunately, tech companies regularly ask overly hard interview questions. These often fail the relevance test, but they also degrade the candidate experience. If a candidate leaves an interview feeling like they just went through the wringer, they’re not very likely to recommend the company to others, regardless of whether they passed.

Issues with interview duration can manifest in a few different ways. Individual rounds can take too long, the overall process could involve too many steps, or a candidate might be subjected to far too many back-to-back interviews in a single day.

Give or take a bit

Focusing specifically on individual round duration, a practice that commonly slips into unreasonable territory is the take-home exercise, where the candidate is expected to modify existing code or build something from scratch. These tend to be somewhat controversial, likely because they’re hard to get right, but also for… other reasons. A take-home exercise can lead to a great understanding of the candidate’s skill, but how many candidates are willing to spend hours of their own time building something that’ll be thrown away once the process is over? The time these exercises demand can also lead to accidental discrimination against candidates who may not have much free time outside of work, like parents*.

I’ve personally gone through an unreasonable take-home round somewhat recently. A company whose offer I ultimately declined asked me to build a full-stack application that would allow users to retrieve and filter some sample data. The company was a fairly early stage startup, so the focus on being able to quickly build something functional from scratch was relevant, but the amount of effort required of the candidate was too much. I wound up spending the better part of a weekend building the application. This happened to fall on the weekend we adopted a puppy. While I don’t know what it’s like to try to do a take-home with children running around, I can certainly say a new puppy made it a far from stress-free situation!

If you’re interested, here’s a link to the code with identifying details removed — https://github.com/bmanley91/take-home-assessment-example.

The Gut Check

No matter how scientific we try to make the interview process, we will not be able to achieve 100% objectivity, and that’s ok. We are imperfect humans interviewing other humans. This is going to lead to imperfect decisions being made.

What is important is that we are confident at the end of the interview process that we’re making the right decision about the candidate. Often, this will involve a gut check question like “Would you want this person on your team?” Unfortunately, this can lead to the exercise of implicit biases. We can hopefully reduce the impact of these by requiring interviewers to elaborate on the why behind their gut check.

I believe the gut check is important because it can capture qualities the candidate demonstrates that are difficult to measure. Things like perseverance and eagerness to learn can’t really be objectively measured, but they can work in a candidate’s favor.

Let’s take one last look at our senior engineering candidate. Say they perform moderately throughout the interview. The rubrics are filled out and we have a near balance of Yeses and Nos across the board, leaning slightly toward yes (interviewers, stop me if you’ve heard this one before). The candidate has strong backend skills, but only surface-level front-end knowledge. Objectively, maybe we say no because front-end work is part of the job, or maybe we say yes because there are more Yeses than Nos. If the candidate expresses genuine interest in filling their gap by learning front-end development, then I think the best decision is to gut check them to a yes.
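One way to picture this borderline scenario is as a simple tally of per-round rubric scores, with the gut check breaking ties. The score weights, threshold, and function names below are illustrative assumptions, not a prescription:

```python
# Illustrative decision logic: sum weighted rubric scores across rounds,
# and let an articulated gut check decide borderline cases. The weights
# and the "> 1" threshold are arbitrary choices for this sketch.
SCORE_VALUES = {"strong_no": -2, "no": -1, "yes": 1, "strong_yes": 2}

def decide(round_scores: list[str], gut_check_positive: bool) -> str:
    total = sum(SCORE_VALUES[s] for s in round_scores)
    if total > 1:
        return "hire"      # clear signal either way skips the gut check
    if total < 0:
        return "no hire"
    # Near-balanced scores, leaning slightly toward yes: the gut check
    # (backed by the interviewer's stated reasons) breaks the tie.
    return "hire" if gut_check_positive else "no hire"
```

For a candidate with scores like `["yes", "yes", "no", "no", "yes"]`, the total lands in the borderline band, and the gut check determines the outcome, which mirrors the scenario above.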

Your Mileage May Vary

Every company, role, and candidate is different. I can’t say that the ideas outlined in this article will work universally, but I think they’re a good starting point.

This article is based on my experience in tech. To be a better interviewer and candidate, I want to continue to learn about processes both in and outside of tech. A lot of good ideas come out of the tech industry, but it borrows a lot as well (cough Toyota manufacturing practices cough). So regardless of where you work, if you want to share your interview experience, I’d be happy to hear it! Feel free to leave a comment or reach out to me on LinkedIn or Twitter.


I am a Software Engineering Manager attempting to wrangle my thoughts into articles. Opinions and views expressed are my own.