Technical interviewing for people who suck at algorithms

Patrick Collins
Published in HackerNoon.com
Nov 21, 2016 · 7 min read

I’ve fairly recently begun doing interviews at Google, and so far I’ve been very struck by the degree to which being able to produce tricky code isn’t a part of what makes me think a candidate is worth hiring.

And I think that’s a good thing. I know there are algorithmic wizards out there, but I personally am not one of them. I think I’m a pretty decent programmer, but for a variety of reasons that include:

  • lack of focused practice on algorithms, and
  • the sort of coursework I did in school, and
  • anxiety from tackling an (intentionally) under-specified problem in < 30 minutes, on a whiteboard, when I very much want to do well,

I tended to underperform in interviews relative to my actual technical ability. I’m sure there are a lot of other people in the same situation. This article is directed to you — the good-programmer bad-interviewee — to outline some things that never occurred to me when I was sitting on the other side of the table.

I’ll note, first, that yes, the traditional algorithmic interview isn’t a great gauge of actual programming ability. But I do think it’s a reasonable compromise between competing demands of the tech hiring process:

  • avoiding implicit bias vs trusting the interviewer
  • keeping the bar high vs holding on to good candidates
  • testing general programming ability vs exercising practical skills

I’ll avoid getting too sidetracked on that, for now. But the takeaway message is that algorithms aren’t the point of the interview. It’s dummy work to feel out your general SWE ability: tackling unfamiliar, domain-neutral problems in a collaborative environment. With that in mind, your ability to rattle off a solution that “passes the unit tests” is not the be-all and end-all of the interview process, just like your ability to churn out many SLOC/day is not the be-all and end-all of your professional software development life.

(Disclaimer: This is purely my own opinion. It doesn’t reflect any internal hiring guidelines and I’m not qualified to give you any guarantee that it will help you get hired.)

The golden rule of technical interviews

When you write code, walk through it line-by-line on the example input you’ve been given.

That’s it.

This is more or less the traditional advice to “think out loud while you solve the problem”: if you’re able to produce any code at all, you’re probably working through the input in your head as you write your solution. But I found it hard to remember to think out loud under pressure during the interview, and it’s hard to distill all of the junk flying through your head while you’re struggling with a problem. Walking through the example input line-by-line, on the other hand, is something you can do for any problem without having any special insight.

Beyond that, I think that interviewees tend to overestimate how clear their code is. Interviewers don’t have a battery of test cases in their head that they can run on the fly to verify your solution — unfortunately, I am not the HackerRank online judge.

Interviewees tend to churn out a solution and then turn to you and say, “there! done!” and it’s really genuinely hard to tell if the answer is right without running it through on the input. Whiteboard code is often unnecessarily complex. People produce wildly different code for the same problem because they conceptualize it differently. As an interviewer, it’s hard to tell the difference between code that’s wrong and code that’s just idiosyncratic.

If you step through your code line-by-line on the test case that you were given, you convince me that your choice to re-implement addition with bitwise operators produces the right output, no matter how skeptical I was to start with. Beyond that, I think this sort of back-and-forth, convince-the-reviewer-that-your-odd-code-is-best really is representative of the sort of stuff you do every day as a professional. The interview process is working as intended if it lets you showcase your ability in that department.
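For the curious, re-implementing addition with bitwise operators usually looks something like the sketch below. It assumes non-negative integers (Python’s unbounded negative ints would keep the carry loop from terminating):

```python
def add(a, b):
    # XOR sums each bit position without carrying; AND plus a
    # left shift computes the carries. Repeat until no carries remain.
    while b:
        a, b = a ^ b, (a & b) << 1
    return a

print(add(13, 29))  # 42
```

Stepping through add(13, 29) carry by carry is exactly the kind of walkthrough that would convince a skeptical interviewer.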

The last benefit you get from walking through your own code is that it lets you catch your errors before the interviewer does. It’s not awful if you have an off-by-one error in your solution, but if you’re able to catch it yourself, it shows that you’ll be productive on day one without supervision.

Walking through an example

In the spirit of this post, let’s talk through the strategy I’m advocating. Here’s an interview question taken from a popular interview prep site:

Given a BST, find the 2nd largest element.

Let’s suppose our candidate is writing Python and we agree that the input is represented with the following class:

class Node:
    def __init__(self, value, left, right):
        self.value = value
        self.left = left
        self.right = right

We also agree that the input has at least three nodes, for the sake of keeping things simple, and provide the following example input:

[Figure: an example BST, with 8 at the root, 10 as its right child, 14 as 10’s right child, and 13 as 14’s left child]

We expect the output 13.
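It may help to write the example input out with the Node class we agreed on; the shape below (8 at the root, 10 and 14 down the right edge, 13 as 14’s left child) is reconstructed from the walkthrough later in the post:

```python
class Node:
    def __init__(self, value, left, right):
        self.value = value
        self.left = left
        self.right = right

# The example BST: the largest value is 14, so the second
# largest, our expected answer, is 13.
tree = Node(8, None,
            Node(10, None,
                 Node(14,
                      Node(13, None, None),
                      None)))
```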

Sitting at home in front of your computer, you can see the shape of the solution: the biggest node of a BST is the rightmost node. If that node is a leaf, the definition of a BST guarantees that the second largest value is its parent; if it has a left subtree, the second largest is the biggest value in that subtree. So you walk down the rightmost edge of the tree and handle whichever case you land in.

Code written under calm, reasonable conditions might look something like:

def second_largest_value(root):
    if root.right is None:
        # root is the biggest node; the answer is the largest
        # value in its left subtree.
        node = root.left
        while node.right:
            node = node.right
        return node.value
    if root.right.left or root.right.right:
        return second_largest_value(root.right)
    return root.value  # the right child is a leaf, so root is second largest

If you write an answer like that, you should still test it on your input for posterity’s sake. But that’s about what I’m expecting based on the problem: it looks like the solution that I have in my own head, and it doesn’t do anything weird that makes me suspicious.

The issue is that whiteboard code doesn’t usually look like that. I know that mine doesn’t. A candidate might describe the completely correct algorithm to you, then produce code that looks like this:

def second_largest_value_helper(root, biggest_vals=[]):
    if not root:
        return biggest_vals

    if root.value > max(biggest_vals):
        biggest_vals.append(root.value)
        biggest_vals.remove(min(biggest_vals))
    elif root.value > min(biggest_vals):
        biggest_vals[0] = root.value

    if not (root.right is None):
        return second_largest_value_helper(root.right, biggest_vals)
    else:
        return second_largest_value_helper(root.left, biggest_vals)

def second_largest_value(root):
    return second_largest_value_helper(root)[0]

I’m being slightly facetious here because I think this problem is a bit simpler than a realistic interview question, but I would not be surprised to see this code on a whiteboard.

Now it’s your turn — quick, you have 30 seconds before you move on to the next question — is this code correct? It’s wildly different from the natural solution, and it doesn’t take advantage of the BST property we identified as the key (the 2nd largest value is the parent of the rightmost leaf). It commits the mutable default argument sin. It mixes up the way it thinks about biggest_vals — sometimes we need to use min to figure out what the second biggest value is, and other times we rely on it being sorted.
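The mutable default argument sin is worth a quick aside, because it bites outside interviews too: a default like biggest_vals=[] is evaluated once, when the function is defined, so every call that falls back on the default shares a single list. A minimal illustration, using a made-up append_item helper:

```python
def append_item(item, acc=[]):
    # `acc` defaults to one list created at definition time,
    # not a fresh list per call.
    acc.append(item)
    return acc

first = append_item(1)
second = append_item(2)
# first and second are the same object, now [1, 2]
```

The conventional fix is a None default: acc=None in the signature, then acc = [] if acc is None else acc inside the body.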

The candidate has been moderately chatty describing thinking that sounds correct while writing on the board, but now he or she has exclaimed “done!” and turned to you waiting for a response. What do you say?

Talking through our problems

If you’ve written this solution, you can salvage it. A good talk through might sound like:

Candidate: We start off with the top, so “root.value” is 8. We hit “if not root” and that’s false, so we keep going. Now if root.value > max(biggest_vals)… oh, oops, that’s an error. Can I assume that the input is positive for now?

Interviewer: Sure.

Candidate: Okay. So I’ll change my first call to this:

def second_largest_value(root):
    return second_largest_value_helper(root, [0, 0])[0]

Candidate: Okay. So now root.value is greater than zero, so at the end of that block I’ll have biggest_vals = [0, 8]. Now root.right isn’t None so we go into that first recursive call.

Interviewer: Alright.

Candidate: At the top, root still isn’t None so we don’t return. Now root.value is 10. That’s bigger than 8 so the first condition is true. We update biggest_vals again so now it’s [8, 10]. Then root.right still isn’t None, so we go into the first recursive call again.

Interviewer: Okay.

Candidate: We go through the same stuff again and now biggest_vals is [10, 14]. Now root.right is None so we don’t go into the first call. We do go into the second call with root.left, and at the top our root isn’t None and root.value is 13. That’s smaller than 14 so we don’t go into the first conditional, but it’s bigger than 10 so we go into the second one. So after that, biggest_vals is [13, 14].

Interviewer: Makes sense.

Candidate: So now we go into the call again with root.left. That’s None so we return biggest_vals. That goes all the way back up to the top where we return the first element, which is 13. That’s what we wanted, so it’s correct on that input.

Through this conversation we’ve taken a pretty ugly solution that had a bunch of red flags, caught an error without prompting from the interviewer, and ended up with both of us convinced that the answer is correct. We’ve burned some time, but spent it doing what the interview is about — showing off your understanding of the problem and demonstrating that you can get from point A to point B without hand-holding. And we didn’t need to spend 150 hours practicing slight variations of the same dynamic programming problem to get there.
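For completeness, here’s the whiteboard solution after the candidate’s fix, tidied up just enough to sanity-check at a keyboard. This is a sketch, not a definitive cleanup: it keeps the [0, 0] seed (and therefore the positive-values assumption from the dialogue), but drops the mutable default argument and flattens the double-negated condition:

```python
class Node:
    def __init__(self, value, left, right):
        self.value = value
        self.left = left
        self.right = right

def second_largest_value_helper(root, biggest_vals):
    if not root:
        return biggest_vals
    if root.value > max(biggest_vals):
        # A new maximum: push it and evict the smallest tracked value.
        biggest_vals.append(root.value)
        biggest_vals.remove(min(biggest_vals))
    elif root.value > min(biggest_vals):
        # A new runner-up: overwrite the smaller slot.
        biggest_vals[0] = root.value
    if root.right is not None:
        return second_largest_value_helper(root.right, biggest_vals)
    return second_largest_value_helper(root.left, biggest_vals)

def second_largest_value(root):
    # The [0, 0] seed relies on the positive-values assumption.
    return second_largest_value_helper(root, [0, 0])[0]

# The example tree: 8 -> 10 -> 14, with 13 as 14's left child.
tree = Node(8, None, Node(10, None, Node(14, Node(13, None, None), None)))
print(second_largest_value(tree))  # 13
```

As the dialogue notes, this only demonstrates correctness on the one input; a few more trees (a pure right chain, a root with no right child) would make a more convincing test.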



Patrick Collins

University of Chicago 2015, hired at Google through the Engineering Residency program, now working on Search Infrastructure.