How I Interview Software Engineers

Benjamin Tanone
11 min read · Jul 30, 2022


Thanks Katerina L. @ undraw.co!

Recently, I was entrusted with the role of interviewing candidates to join our engineering team. I was assigned as a Round 1 and Round 2 interviewer, which meant that I had to assess whether or not a candidate had the ability to deliver “good enough work”. Another interviewer asked me how I’d interview these candidates, so naturally I had to write a Medium article about it.

Why am I qualified to give my perspective?

In my past life (i.e. my non-tech gigs) I hired quite a few people and managed them directly, as I was running my own businesses. Admittedly, one of those “gigs” is a manufacturing company that is currently being run by my wife (I’m currently working as a SWE), but I did have to deal with the company quite a bit while my wife was away on maternity leave.

I’ve seen a pretty wide range of — ahem — poor culture fits during those gigs: from someone too junior joining our company and making big promises, to bootlickers who would do everything they could to please their superiors despite the very obvious disadvantage they were putting our company through. Obviously, we have since let them go, but I carry those experiences with me closely, as they are a goldmine of lessons.

Get your mindset right

Let me cut to the chase: you want to provide an environment in which the candidate can shine at their best.

Don’t get me wrong: you don’t want to hire people who suck. In fact, by creating an environment in which the candidate can shine at their best, the gap between someone who’s a good fit and someone who’s a bad fit becomes much clearer than if you had designed an environment that stress-tests your candidate for no particular reason. A candidate who can withstand a stress-test isn’t necessarily a candidate who can do their job well.

What are you looking for?

First things first, you need to figure out what you’re looking for. This sounds easy, but it is actually one of the hardest challenges a human being faces: figuring out what the hell they want. Once you’ve figured this out, you can move on to the “interview design” phase.

By “things you want” I don’t mean things like “Do they have experience, or do they know framework xyz and SAFe,” because those are, frankly, overrated. Knowledge and experience are often misused as the primary indicators of value, and are often held in such high regard (and are often used to disqualify an otherwise sound candidate) primarily because these — ahem — metrics are quantifiable and justifiable to managers. Except the skills we hold as SWEs are very rarely quantifiable. I mean, how do you precisely quantify the business impact a candidate made by applying their skills? Does creating NodeJS automatically qualify Ryan Dahl to be the CTO of Microsoft? Impact is rarely made by a single person, despite what the life coaches and performance reviews might suggest.

But that doesn’t mean these things don’t matter: they do, but they’re not the primary thing you should look out for in a candidate. They’re merely “signals”.

Signals.

Signals?

Taking the “experience” and “knowledge” example above: writing a JS runtime that is being used all over the world signifies that the developer is able to design software which appeals to other devs. However, that does not mean that Ryan Dahl should automatically be your next Lead SWE.

What signals do you want to look for? Start from what you want to do, for example by looking at your product roadmap:

  • Do you want a whiz who can design a slick UX for your customers? Signals:
      • Pays attention to UX
      • Can collaborate to get a design language going, and knows what to do
      • Brings up ARIA labels, which signals that the person knows their way around accessibility (see the snippet below)
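
To make that last signal a little more concrete, here’s a tiny, purely illustrative snippet (my own made-up example, not something from a candidate): an icon-only button gets announced by a screen reader as just “button” unless someone thought to give it an accessible name.

```tsx
import * as React from "react";

// An icon-only button has no visible text, so a screen reader announces it
// as just "button" unless it is given an accessible name via aria-label.
function CloseButton({ onClose }: { onClose: () => void }) {
  return (
    <button type="button" aria-label="Close dialog" onClick={onClose}>
      ✕
    </button>
  );
}
```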

Notice how these signals aren’t quantitative at all and are open-ended in a way. Close-ended candidate metrics, such as years of experience or — God forbid — monetary business impact, tend to shoehorn the interviewee into boxes and prevent the “exploration” of the candidate’s fit for the role.

Using quantitative metrics in place of “signals”

Let’s use a real-life example: a person we hired for our manufacturing business had 10 years of experience in big national companies and ticked all the boxes we were looking for (e.g. he knew how to write SOPs and how to handle rosters). Yet, he was one of the poorest fits we have ever hired.

It was not a lack of experience, skill or knowledge that made him a poor fit; it was his — ahem — tendency to “suck up to his supervisor” and ignore everything else. To be fair: sucking up to people is fine, but they have to (1) deliver good value; and (2) have the business as their first priority, not their boss’s affection for them. Interestingly enough, we managed to detect this trait of his during one of our early rounds. He was sourced through a recruitment org, and he was reported to have low self-initiative and a tendency to go along with the status quo.

So why didn’t it click in our heads that we probably shouldn’t have had him join our team?

It’s simple: we didn’t care, because we were using “knowledge” and “knowing what they do” as the only two metrics we cared about. We were desperate for people who knew “what to do”, because we were desperately trying to start a manufacturing company without prior industry experience.

To sum it up, we detected signals that the aforementioned candidate was not a good fit for our company, but we ignored them anyway in pursuit of vanity metrics, to put our minds at ease that we had a senior professional on our team.

Designing “tests” to dig up signals

Okay, so you’ve got your signals down. How exactly do you dig them up?

One thing that I’ve noticed is that signals come and go during interviews. Just like real-life signals — such as your partner’s perceived level of boredom when they see that you are reading this article instead of being in bed with them — signals are affected by the environment in which the candidate is placed.

Signals don’t always pop up, plainly visible to the interviewer; you have to ensure that the environment is ideal for them to appear.

Take a tech interview, for example: if you test the interviewee by giving them trivia questions such as “How do you stack a stack of cards together?” and judging their answers based on correctness, you’d primarily be getting signals that, well, the candidate is either good or bad at solving trivia questions; technically, a chess player who has never touched programming could score well on that test.

When you design tests, you should be mindful that these tests give the candidate the opportunity to present their lovely signals (or lack thereof) to you. Do you really want to miss out on a star candidate because they failed to tell you how React’s virtual DOM works? Do you actually care about a candidate’s mathematical ability?

Another thing to keep in mind is that your tests should not take too much time, which usually means they should be easy for the candidate to solve if they are smart enough for the role. While this isn’t a direct correlation, problems that are too hard tend to leave the candidate silent for a long time, which prevents them from displaying their signals (e.g. writing code, communicating their thoughts) and makes your job harder.

To give you an example, LeetCode Hard problems (such as using a heap to merge k sorted lists) are horrible at giving me accurate signals. For one, they take forever to do because they tend to be “complicated”, which means you have less time to actually dig into the candidate’s fit for the role. Furthermore, because the problems are so niche, candidates end up falling into two categories: (1) candidates who have heard of the problem before and know the general solution; or (2) candidates who get stuck because they’ve never handled that type of problem before.

I find that problems you have solved in the past at your job are usually pretty good tests for digging out technical signals. For one, because you’ve already solved the problem, you know how one might arrive at the solution and how to give incremental hints if the candidate gets stuck. Because you know how to arrive at the solution, you also know which of your own “signals” you showed to get it solved; perhaps the problem required a bit of requirement clarification, or maybe you pulled out your algo-wand and did some algo-magic to solve it in O(1) time.

Again, the point of these tests is to get a conversation going and to allow the interviewee to display their signals, not for you to quantify their coding skill per se. No two candidates are exactly alike, so do keep an open mind, especially if your candidate can’t solve the problem as fast as you did.

A sample technical test

My go-to “technical test” to detect whether or not a candidate would be a good fit is to just give them problems that I had to solve in my day job. This allows me to calibrate their overall approach and progress against my own experience.

However, one very important thing that I always do is make it very clear to my candidates that I do not give a flying fish about the correctness of their solution (except, of course, if they are completely wrong about what they’re solving — but this is rare).

Here are the signals that I usually look for when hiring software engineers, ordered by priority:

  1. Is not a dick — you’d be surprised how many people fail this signal test. Seriously, I don’t care if you’re a rockstar programmer; being a dick slows everyone down. Don’t say “I know best” — that’s just too arrogant for anyone to say, including me.
  2. Can come up with the rough outline of a solution — my interview problems are easy, everyday problems that we encounter on the job, such as finding the unique elements in an array (see the sketch after this list). They should be easy, because anything harder would take too much time and leave the candidate silent for too long. If they fail to meet this bare minimum, then the team would probably have to spend too much time mentoring them.
  3. Can communicate well — this is a spectrum, and I’d like to see where the candidate falls on it. I don’t expect engineers to be able to drive conversation in C-level meetings, but they should at least be able to communicate and discuss their ideas with me. Most importantly, I realise that the power imbalance is in my favour, so I try really hard to put the candidate at ease and to make the first move to encourage discussion.
  4. Experience with best practices — generally I look for whether or not the candidate is well-versed in best practices (e.g. declaring your vars in JS before you use them — shudders at the memory of code that doesn’t). This can be a double-edged sword, but generally it is something I can “smell” by watching them code. I don’t see a problem with “mindful disrespect of the rules”, but not being able to explain why you branched away from the well-trodden path signals to me that you probably didn’t know there was a well-trodden path in the first place.
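
To make point 2 concrete, here’s roughly the level of answer I’m happy with for the “unique elements” warm-up, sketched in TypeScript (any language the candidate is comfortable with is fine). Note that “unique” is ambiguous, and asking which meaning I want is itself a lovely signal.

```ts
// If "unique" means "distinct values", a Set does the job in O(n) time
// while preserving first-seen order.
function distinctElements<T>(items: T[]): T[] {
  return [...new Set(items)];
}

// If "unique" means "appears exactly once", a frequency map works.
function elementsAppearingOnce<T>(items: T[]): T[] {
  const counts = new Map<T, number>();
  for (const item of items) {
    counts.set(item, (counts.get(item) ?? 0) + 1);
  }
  return items.filter((item) => counts.get(item) === 1);
}
```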

So how would I design a test around these signals? If you remember from the previous section: I like tests that are reflective of real-life problems. So, I just pull out my commit history from the past month.

fix: prevent multiple duplicate fetches from occurring and race conditions from happening in a useConfig hook

When I look into the code diff, I see something that I implemented in around 30 minutes (excluding the WTFs and the waiting for my code to compile): using a mutex to prevent multiple dispatches.

Again, I’m not trying to make you solve super-difficult problems, and I absolutely do not care about the itty-bitty implementation details (including which type of mutex you use): the problems that I pick are things that people would often do in the role that I’m hiring for. More importantly, there are multiple solutions to the problem: you can use a mutex, you can use state, you can use react-query or SWR, etc.
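
For the curious, here is a minimal sketch of the shape of that fix (an illustration only, not my actual diff, and the /api/config endpoint is hypothetical): a shared in-flight promise stands in for the mutex, and a cancellation flag guards against the unmount race.

```tsx
import { useEffect, useState } from "react";

type Config = Record<string, unknown>;

// A module-level in-flight promise acts as a simple "lock": the first caller
// starts the fetch, and every concurrent caller awaits that same promise
// instead of firing a duplicate request.
let inFlight: Promise<Config> | null = null;

function fetchConfigOnce(): Promise<Config> {
  if (!inFlight) {
    inFlight = fetch("/api/config") // hypothetical endpoint, for illustration
      .then((res) => res.json() as Promise<Config>)
      .finally(() => {
        inFlight = null; // release the "lock" once the request settles
      });
  }
  return inFlight;
}

function useConfig(): Config | null {
  const [config, setConfig] = useState<Config | null>(null);

  useEffect(() => {
    let cancelled = false;
    fetchConfigOnce().then((data) => {
      // Guard against the race where the component unmounts before the
      // promise resolves; updating state at that point is a bug.
      if (!cancelled) setConfig(data);
    });
    return () => {
      cancelled = true;
    };
  }, []);

  return config;
}
```

A candidate who reaches for react-query or SWR instead (both deduplicate requests for you) is giving me an equally good signal; the interesting part is the conversation about why, not the specific tool.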

Combating bias

The problem with qualitative metrics such as “signals” is that they are subjective, which means our bias tends to seep into our evaluations.

Bias is good, because biases are shortcuts that help us make decisions really quickly. Bias can also be bad, because those shortcuts can lead us the wrong way. I find that digging into your interview results is very helpful for combating bias. For example, why did you feel like your candidate did very well in their interview with you? Did you feel that they presented their ideas clearly? Why did you think the candidate presented their ideas clearly? Was it their accent, or the way they broke down the problem?

To give you an example: my bias is that I tend to favour candidates who can communicate clearly. This means that I tend to forgive any “rough edges” in a candidate’s technical skills if they excel at communicating their ideas. After an interview with a candidate who fits this description, I tend to feel “happy” and “positive”, which can impact my evaluation of that candidate.

In order to combat my bias, I’ve found that leaving my thoughts to settle helps stave off any overly emotional feelings I might have (sleeping is wonderful at helping me do this). After I’ve had some time to “cool off”, I examine why I had a “good feeling” about that candidate and try poking holes in my own rationale. For example, is a candidate good at communicating because they could communicate their ideas well, or is it because they complimented my choice of headphones at the start of the interview?

The best way to combat unproductive biases is to work with those biases, not against them. You have to know what they are and what triggers them. We’re human, after all; it’s better to take your own biases into account than to blatantly lie to yourself that you are free of them.

Final words: do not hide behind quantifiable metrics

Look, I get it: putting everything into quantifiable metrics is nice and allows you to say, “But according to the numbers, this candidate is a rockstar!” It alleviates the pressure on your “judgement” and makes things look like they are empirically supported.

Here’s the thing though: software engineers aren’t blue-collar workers. There’s so much that goes on in our day-to-day that isn’t accurately quantifiable by metrics. An engineer may suck at churning tasks out the door, but be excellent at aligning stakeholders. Or they may be 20% slower than your average engineer, but produce 20% faster code (incidentally, that’s why story points shouldn’t be used to measure performance, but rather to help plan and communicate effort).

Plus, in my experience, scores and metrics often can’t tell me anything meaningful about a candidate. For example, how am I supposed to know if a candidate is a good fit based on their 7/10 score in balancing a heap? People think scores are objective, but they’re in fact subjective, because 99% of the time the interviewer is trying to shoehorn a qualitative evaluation into a quantifiable number.

At the end of the day, I feel like the only metric that matters is, “Can I picture this candidate as a valuable member of the team?” Sure, it’s subjective and may lead to bias, but it’s all that really matters. Everything you do serves that purpose; the signals you look for should build up to an answer to that question.

After all, do you want a candidate that can balance a heap, or a candidate that can help your team succeed?
