Tech Interviewing Is Still Broken
For the last few years I’ve seen countless versions of the tech interview, and all of them are broken. There is no intentional thinking behind them and no real process, despite that label often being attached to interview formats. I’ve yet to see one that is truly effective at removing bias and actually building teams.
There’s a tendency to forget that engineers are not hiring experts. So much so that most organisations don’t even invest in basic interview training for the engineers who regularly interview candidates for one of the most important jobs in their companies — building the product. Consequently, these interviews become social signalling exercises rather than effective evaluations of everyday job performance.
Further, without interviewing expertise or an organisational process, the questions asked vary with the experience, abilities, and biases of each interviewer. The result? Ad hoc, unpredictable interviewing experiences.
San Francisco-based non-profit organisation Code 2040 says:
Process is not just broken, it’s non-existent: There’s little industry or even company consensus about what a useful process for technical interviewing looks like. There’s no universal understanding of what must be tested, why it’s part of the conversation, or how to fairly evaluate the results. This inconsistency makes murmurs about “lowering the bar” even more frustrating to hear.
They also point out the need for “scalable content for measuring technical competency” rather than the more commonplace mix of trivia tests, performances in front of groups, and surprise coding challenges given at the interviewer’s inclination.
According to Code 2040’s think tank, the most unproductive methods are:
- A focus on brand names is a distraction. Whether it’s an elite school, a well-known company, or an obsession with university rank, a relationship to brands holds enormous sway in interviewing conversations. This emphasis is unproductive — and not predictive of a candidate’s potential. Assuming these pedigrees are essential to success disadvantages those marginalised by race, class or gender.
- Algorithm and data structure quizzes are a red herring. Many interviews centre conversations on algorithms and data structures that are unrelated to the day-to-day role candidates are auditioning for. This blanket obsession not only devalues other important skills, it fails to embrace the diverse talents, interests and experience of even senior technologists.
- Artificial, high-pressure tests introduce noise. While solving problems with diagrams and whiteboards is a common collaboration tool, many organisations take evaluating this practice too far. Very few working engineers prefer a dry-erase marker over their favourite text editor. Despite this, candidates may be expected to code on a whiteboard, under scrutiny and time pressure. Not only is this performance unrelated to the demands of most jobs, it’s also an enormous trigger for stereotype threat.
What can we do?
There’s no quick fix here; however, indifference and resistance to progress on this will have a negative influence on a company’s talent potential, not to mention its cultural health. I would encourage flexibility, honest introspection and embracing diversity and inclusion.
Here are three practical considerations to get you started:
1. Hire for the fact that you’re building teams
A biased, one-size-fits-all hiring approach will disadvantage anyone trying to build robust teams with balanced skill sets. Some developers are entirely productive despite flunking algorithm trivia. If your interview process evaluates only that, and not, say, design thinking or project planning skills, you’ll find yourself creating lopsided engineering teams that struggle to collaborate with other parts of the organisation.
2. Evaluate a previous project the candidate built
Have the candidate tell you about a project they built, on their own terms, and ask questions about what they learned from the choices and trade-offs they made. This will show you how candidates think, solve problems and navigate the complexity of a software project, and it will tell interviewers far more about a candidate’s abilities than any recitation of computer science fundamentals.
3. Assess a candidate’s ability to learn
Technology evolves all the time. The most successful contributors will be those who are most capable of teaching themselves new languages, APIs and processes for solving problems.
Emma Jones is Founder of Future Of Work, a Sydney-based talent management consultancy building people strategies that help organisations be ready for what is around the corner.