Real Talk: The Technical Interview is Broken

Code2040 · Cracking the Code · Jun 8, 2016

The following was written by Karla Monterroso, our VP of Programs.

As anyone at a fast-growing startup knows, scaling can show you what is broken (yet hidden) in a system. This summer, Code2040 more than doubled the size of our Fellows Program, to 87 students. As part of this growth, the technical interview landed at the top of our list of broken systems in the tech industry.

We know a lot of you have noticed the same problem.

Companies we love have tried to be thoughtful about this, but even self-aware, data-driven, inclusion-oriented teams are struggling to develop scalable, equitable evaluation processes. The defects have grown so obvious that at least five partner companies have told us they’re starting from scratch. They know what we have suspected: current interviewing practices produce mediocre, biased results.

Meanwhile, a cottage industry has emerged that reminds us uncomfortably of SAT prep. An individual can spend thousands of dollars learning the cultural norms necessary to land a desk at a technology firm. Research tells us the SAT doesn’t effectively predict college or career outcomes, and we believe current interview practices are just as poor at predicting future success within a company.

Candidates shouldn’t be patching this deficiency on their own — much less spending their own money to do it. That’s unfair and inefficient. We need a systemic fix.

We need to refactor the technical interview.

Staging an intervention

This year, after countless conversations with engineers, hiring managers, recruiters and others in our network, we’re getting to the bottom of this issue. We convened an all-day think tank with GitHub, Slack, Medium, Lyft, Pandora, Intuit, Pinterest, LinkedIn, and former Stripe VP of Engineering and Code2040 board member Marc Hedlund.

Most of those in the room made their careers in building software, either as developers or managers of the development process. We also had support from recruiting and sourcing professionals.

Here’s what we figured out together.

Process is not just broken, it’s non-existent: There’s little consensus, across the industry or even within a single company, about what a useful technical interviewing process looks like. There’s no shared understanding of what must be tested, why it’s part of the conversation, or how to fairly evaluate the results. This inconsistency makes murmurs about “lowering the bar” even more frustrating to hear.

One of our partner companies, Medium, came out of our think tank armed with information to build a process document and create consensus inside of their own company. We highly recommend this kind of intentional thinking around an organizational approach.

Even when companies create a rubric, the questions asked vary wildly based on the experience, abilities, and biases of the interviewer. Alongside this lack of organizational process, many interviewers receive limited training on the basics of running a good interview. This creates ad hoc, unpredictable interviewing experiences across a majority of companies.

We need to create scalable content for measuring technical competency: While there isn’t universal consensus on sound process, there’s surprising consensus around what doesn’t work. We’ve heard from so many of you about how frustrating even your own technical interviews can be.

Current technical interview practice bakes in social signaling. In the least charitable interpretation, technical interviews can look like a kind of hazing to establish that a candidate belongs. Rather than discovering a candidate’s skills, evaluating their depth, and mapping them to the job at hand, interviews can devolve into a game show where the prize is an offer letter.

These interviews often feature tests of trivia, performances before groups, even surprise coding challenges sprung at the interviewer’s whim. We believe these tests of mettle are unrelated to everyday job performance.

Here’s a summary of the unproductive practices our group surfaced:

  1. A focus on brand names is a distraction. Whether it’s an elite school, a well-known company, or class rank at university, relationships to brands hold enormous sway in interviewing conversations. We believe, and our think tank agreed, that this emphasis is unproductive and not predictive of a candidate’s potential. Assuming these pedigrees are essential to success disadvantages those marginalized by race, class, or gender.
  2. Algorithm and data structure quizzes are a red herring. Many interviews center on algorithms and data structures unrelated to the day-to-day work of the role candidates are auditioning for. This blanket obsession not only devalues other important skills, it fails to reflect the diverse talents, interests, and experience of even the senior technologists we’ve met.
  3. Artificial, high-pressure tests introduce noise. While solving problems with diagrams and whiteboards is a common tool of collaboration, many organizations take evaluating this practice too far. Very few working engineers prefer a dry-erase marker over their favorite text editor. Despite this, candidates may be expected to code on a whiteboard, under scrutiny and time pressure. Not only is this performance unrelated to the demands of most jobs, it’s also an enormous trigger for stereotype threat.

So what can we do?

Code2040 remains committed to untangling this issue and supporting our partner organizations as they do the same. Ineffective interviewing is a problem that impacts too many of our students for us to sit idle. As our friends at the Management Center once told us, when people practices are not explicit, they disproportionately impact marginalized communities.

Community, let’s get explicit.

Our next task is convening a group of technologists to document scalable strategies for improving hiring. While we assemble this guidance, we can offer some takeaways as you begin your own journey to improve hiring at your organization. For partner companies, we’ll have some of this guidance available at our Code2040 Summit.

In the meantime, here are some things to get you started on the path:

First, consider that you’re building teams and hire according to that reality. A biased, one-size-fits-all hiring approach will disadvantage anyone trying to build robust teams with balanced skill sets. Some developers are entirely productive despite flunking algorithm trivia. If your interview process evaluates only that and not, say, design thinking or project planning skills, you’ll find yourself creating lopsided engineering teams that struggle to collaborate with other parts of the organization.

Next, consider evaluating a project a candidate built on their own terms. Our partners at Slack will take the technical challenge candidates complete and ask them questions about what they learned from the choices and tradeoffs they made. This helps surface how candidates think about problem solving and navigating the complexity of a software project. We believe these observations will tell interviewers much more about candidate abilities than any rote recitation of computer science fundamentals.

Fundamental to any screening is whether your assessment measures a candidate’s ability to learn. Technology evolves all the time. The most successful contributors will be those who are most capable of teaching themselves new languages, APIs and processes for solving problems.

At Code2040, we’ve built an intern-level coding challenge to help qualify incoming fellows. Students write code in the stack of their choice to connect to a simple web API, which gives them real-time feedback on their performance. This challenge not only provides a chance to write code they can talk about during interviews, it also assumes that technical skill isn’t fixed — that even if a student doesn’t know immediately how to solve a problem, a bit of googling, persistence and experimentation can open the door. As the challenge escalates in complexity, starting with string manipulation and building to date/time transformations, both our students and our partner companies can see a growth mindset at work.
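To make this concrete, here’s a minimal sketch of the kind of client a student might write against such a challenge. Everything specific below — wait, to be clear: the endpoint URL, the token parameter, the task types, and the response fields are hypothetical illustrations, not Code2040’s actual API. The point is the shape of the loop: fetch a task, solve it in your own stack, and submit for real-time feedback.

```python
# A minimal sketch of a challenge-API client. The endpoint, token, task
# types, and response fields are hypothetical, not Code2040's actual API.
from datetime import datetime, timezone

import requests  # third-party HTTP client: pip install requests

BASE_URL = "https://challenge.example.com/api"  # hypothetical endpoint
TOKEN = "your-candidate-token"                  # hypothetical credential


def get_task():
    """Fetch the current task from the challenge server."""
    resp = requests.get(f"{BASE_URL}/task", params={"token": TOKEN})
    resp.raise_for_status()
    return resp.json()  # e.g. {"type": "reverse", "input": "hello"}


def solve(task):
    """Handle the escalating task types the challenge walks through."""
    if task["type"] == "reverse":  # early task: string manipulation
        return task["input"][::-1]
    if task["type"] == "unix_to_iso":  # later task: date/time transformation
        return datetime.fromtimestamp(task["input"], tz=timezone.utc).isoformat()
    raise ValueError(f"unrecognized task type: {task['type']}")


def submit(answer):
    """Post an answer back; the server replies with real-time feedback."""
    resp = requests.post(f"{BASE_URL}/answer", json={"token": TOKEN, "answer": answer})
    resp.raise_for_status()
    return resp.json()  # e.g. {"correct": true}


if __name__ == "__main__":
    print(submit(solve(get_task())))
```

Because the feedback loop is automated, a student who gets a task wrong can google, tweak, and resubmit — exactly the growth mindset the challenge is designed to surface.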

Be vigilant about the ways your current technical screening is a behavioral screening in disguise. While a candidate may have learned certain habits from previous employers or projects, the alignment of those habits with your organizational preferences is not the same as technical aptitude. Be intentional. Rather than measuring how closely candidates think like your current team, consider how your evaluations can instead measure how well candidates match your organization’s appetite for solving problems collaboratively.

Next steps

We’ll be working on specific, public guidance for improving technical interviewing. Most of our think tank guests, though, went immediately to work on troubleshooting and refining their hiring. This work reveals many immediate bugs, and their initial fixes may reveal plenty more we don’t see right now. Fixing the technical interview will take time — both for individual organizations and for our industry as a whole.

Still, indifference or resistance to progress here is something we now see as a red flag. It may signal an inflexible company culture that avoids introspection and embraces exclusivity. Those qualities have a negative impact on a company’s ability to retain talent from marginalized communities — not to mention its overall cultural health.

A huge thank you to Danilo Campos, who has been ridiculously helpful as we worked to understand and facilitate this conversation. This blog post and our work in this arena would be deeply different without you. Your ardent curiosity is vital to this work.

Also, thank you to the participants of the Technical Interview Committee, or, as we are calling you internally, the Tech Justice League.

Jesse Toth — GitHub

Nolan Caudill — Slack

Jamie Talbot — Medium

Anthony Velazquez — Lyft

Maira Benjamin — Pandora

Tracy Chou — Pinterest, Project Include

Marc Hedlund — Code2040 Board Member, former Stripe VP of Engineering

Lucy Mendel — Former engineering manager (managed Code2040 fellow)

Jenny Choo — Intuit

Alexandria Spiva — LinkedIn
