Interviewing Devs is Broken: Here’s How to Fix it

Daniel Glauser
Published in CodeX
4 min read · Sep 10, 2021


A bead of sweat falls slowly down Thomas’ cheek. He wipes his palms and puts his hands back on the keyboard.

“It shouldn’t be this hard to sort a binary tree,” he thinks. “I solved much harder problems when I was in school.”

Although nothing hangs on the wall, he can almost feel the proverbial ticking of an old clock, as if he were back in a classroom. Taking a test. Failing a test.

“I know I’m almost out of time. I don’t think I’m going to get this. I’m not sure what happened, I haven’t done any algorithm work like this in a while.”

The developer administering the live coding test tells Thomas they’ll take his partial solution into account. It’s to no avail; he doesn’t get the job.

The rejection hurts. Luckily, Thomas hasn’t quit his day job. “I guess I’ll go back to writing code that controls a 400-node distributed system with thousands of active users and over six petabytes under management.”

Tech interviews are broken.

Asking people to live code solutions to algorithm problems gets you people who are good at live coding algorithm challenges. If you remove the live component, how do you know the developer actually authored the work? And even if they did, all you’ve learned is that they’re good at coding algorithms. Does that mean they’ll be a good developer on your team?
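To make the kind of prompt concrete, here is a minimal sketch of a typical live-coding exercise: producing a binary search tree’s values in sorted order via in-order traversal. The Node class and sorted_values function are purely illustrative; they aren’t from Thomas’ interview or anything specific this article prescribes.

```python
# Illustrative only: the sort of exercise a live-coding interview might ask for.

class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def sorted_values(root):
    """In-order traversal of a binary search tree yields its values in sorted order."""
    if root is None:
        return []
    return sorted_values(root.left) + [root.value] + sorted_values(root.right)

# Example: the tree (2 (1) (3)) yields [1, 2, 3]
tree = Node(2, Node(1), Node(3))
assert sorted_values(tree) == [1, 2, 3]
```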

Take-home tests are better: you get a more realistic opportunity to show that you can write the kind of code most jobs actually need. But they can be involved, quite involved, and eat up a lot of time. If you’re pursuing ten job opportunities that all require different take-home tests, that’s a lot of work you’re doing for free just to help them evaluate you.

As an employer, you know a bad hire can tank a team. People don’t always have a clear understanding of how good they are, and I’ve seen folks with overactive egos fake features and do everything they could to poison a team.

Tech interviews are broken. Here’s a potential fix.

Ask candidates to bring code they’ve worked on, ideally in a language, and using libraries, your team is familiar with. It doesn’t have to line up completely, but it certainly helps. The project should be more than a novelty, but it doesn’t need to be large. Pull a handful of engineers and/or tech managers together and review the code with the candidate. Go over architectural tradeoffs and error handling, and dive into the decision-making process behind the code. Pick one to three areas and go as deep as you can, maybe even down into the implementation of the underlying libraries, to see whether the author understands the tradeoffs they’re making.

Eventually you’ll reach the limit of either their knowledge or yours; either one works. If you hit the limit of your knowledge, ask them to explain it to you, and pay close attention to how they walk through the code and its tradeoffs. If you hit the limit of their knowledge, explain it to them as clearly as possible and see how they react.

This technique tells you a lot about what it will be like to work with this person. It helps detect narcissism, which can severely damage an otherwise healthy team. You get a window into the candidate’s decision-making process and assurance that they not only know how to write code but also know how to communicate what the program is doing.

As a candidate, you get a chance to present code you’ve worked on without the pressure of live coding a solution in a high-stress environment. You get to talk through some of the challenges you faced, how you solved them, and what you did and didn’t like about your solution. It lets you evaluate the interviewers based on their questions and ask them questions in turn. It should be a conversation, and if it isn’t, that tells you something. If you have sample code you’re proud of, you’ve avoided working for free. If you don’t, you get to decide how much effort to put into the exercise.

This isn’t a perfect system; it’s a relatively short evaluation. But since we typically can’t afford to take a week to work together, I think it strikes a good balance between brevity and thoroughness. I’ve used it successfully to build small teams and look forward to employing it again in the future.
