Unbreaking technical interviews

Bobby @ fiskal.app
7 min read · Feb 11, 2016


Hiring in tech has had its flaws, but the industry has always been open to critique and improvement. Does anyone remember the brain teasers of the 2000s? There are still flaws in the way we use correlation and confirmation bias to unintentionally exclude good candidates. Take a few phrases I’ve personally overheard working in Silicon Valley just this last year: “I can hire any recent college grad to help out on customer support emails,” or my personal favorite, “I personally know the CS program at [University X] is bad. We’re going to pass on this person’s resume.” If those aren’t biased enough, CEOs and venture capitalists have come out saying, “We want diversity but we won’t lower our standards.” This is double talk for: it would be nice if we had women and minority candidates, but we’re not going to consider that there might be flaws in our hiring process.

Blind Performance Auditions

The goal of hiring is not to reinforce confirmation bias but to hire good people who do good work. Blind performance auditions are a process that has been gaining traction recently in many industries. It attempts to remove large amounts of correlation and confirmation bias from the initial recruitment process. I’ve worked at companies that have been doing minor variations of this process for a few years now, and it has helped us get great people who perform very well. Here are a few guidelines that have helped us hire people based on merit instead of pedigree.

Step 1: Focusing on the Core Task

To start the blind performance audition process, we need to understand the core task we’re asking someone to do. A good litmus test is the 80% rule: what will the applicant be doing 80% of the time? For instance, a front-end web developer’s core task is to build views using JavaScript, React and CSS Flexbox. Almost all companies I’ve talked to already have architectures and frameworks in place that they are fairly committed to. Testing a candidate’s architectural choices doesn’t tell us whether they can build UI in React in our code base, so we shouldn’t test on it. The other 20% of the equation is acknowledging that people have different backgrounds and skill sets. We benefit from the diverse backgrounds of co-workers and expect people to learn and improve on the job.

The test we create simulates normal day-to-day work. The task is to build a basic list view against a REST API and add a very simple one-line filter function (no algorithms). We set up the project with boilerplate code and a small architecture that mirrors our code base. In the folder is a readme with the instructions and some mocks from design.
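To make that concrete, here is a minimal sketch of roughly what a finished submission might look like. The component name, endpoint and data shape are illustrative assumptions, not our actual boilerplate:

```javascript
import React from 'react';

// Illustrative sketch only: the real exercise ships with our own boilerplate,
// readme and design mocks. The endpoint and data shape here are assumptions.
class UserList extends React.Component {
  constructor(props) {
    super(props);
    this.state = { users: [], query: '' };
  }

  componentDidMount() {
    // Load the list from a hypothetical REST endpoint.
    fetch('/api/users')
      .then(response => response.json())
      .then(users => this.setState({ users }));
  }

  render() {
    const { users, query } = this.state;
    // The "one-line filter" the exercise asks for (no algorithms involved).
    const visible = users.filter(user =>
      user.name.toLowerCase().includes(query.toLowerCase())
    );

    return (
      <div>
        <input
          placeholder="Filter by name"
          value={query}
          onChange={event => this.setState({ query: event.target.value })}
        />
        <ul>
          {visible.map(user => (
            <li key={user.id}>{user.name}</li>
          ))}
        </ul>
      </div>
    );
  }
}

export default UserList;
```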

The main purpose of the test is to see if a candidate can do the job we need. Secondly, we want to know how long they take to do it. A good senior developer will finish the test in about 30 minutes; junior developers take about 3 or 4 hours. In our analysis, there is no judgment on whether they followed our company’s internal “best practices”. We look at the choices the candidates make and what impact those choices might have on a larger code base. By looking at both the quality of the output and the time it took, we let senior developers shine while leveling the playing field for junior developers. Here’s an example of the critique we might write for an applicant.

Sample of the assessment we do internally.

Step 2: Level the playing field

The next step was something we only caught onto after multiple implementations. The initial response rates we received weren’t as high as we had hoped. After talking with potential candidates we found out that many were intimidated by the assessment.

The issue was that we use a relatively new framework called React, and there are still a lot of developers who haven’t used it before. We updated our job posting to offer some learning materials before the test. This gave potential candidates a more relaxed environment to learn in, and they could take the assessment when they felt ready. Being able to learn and adapt quickly is a great skill for a candidate to have.

Step 3: Reaching out

College graduate, US citizen, race, gender, affluent middle class, or any other category we might bucket people into is not a proper factor in assessing the quality of a person’s work. Maybe you grew up in a house where you went to bed hungry, or walked to school being bombarded by bullies, drug dealers and gangs. Maybe everyone you knew dropped out of high school, or your parents’ poor finances stopped them from co-signing your student loans. There are so many challenges for people in the world. We need to enable and empower people from all backgrounds to achieve more.

Instead of only recruiting Stanford graduates, companies need to reach out to a wider group of people. Bootcamps are great educational environments. They help people who don’t have four years and $100,000 gain the ability to write clean, quality code. Yes, there are some skills that a four-year university will always teach better. But the system we have today penalizes people who have the passion and ability to learn on their own. By recruiting only at universities and scrutinizing candidates based on degrees, we penalize people who figure this out later in life or come from disadvantaged backgrounds.

In our process we reached out to bootcamps and other groups. Computer science teaches the underlying fundamentals but doesn’t always translate into proper application when coding. Look at the trend from a few years ago: it was all about piling up abstractions like adapters, factories, visitors, delegates and flyweights. Talk to engineers on some large corporate applications and they would have a new hire learn the entire code base before being able to complete even simple tasks. Understanding CS fundamentals helps in writing clean code but doesn’t guarantee better code in the end. We should reach out to more diverse groups and focus on the practice of writing code over knowledge of the underlying science.

Closing the loop

Up to now we’ve only discussed the initial recruitment process, so let’s briefly touch on the rest. We want good people who can write good code and learn quickly, not people who happen to correlate with X.

  • Write down all positive/negative/neutral feedback from every blind performance audition. This helps level-set larger groups of developers on what core skills are required and which biases the group is excluding candidates on.
  • Behavioral interview questions do not tell you how good someone is. They tell you whether the person has worked in a corporate (or possibly university) environment before. A bad score does not mean they’re a bad communicator. It means the candidate might not have had the opportunity to learn or apply those skills before.
  • Phone interviews and on-site technicals are not an opportunity to rehash coding challenges. They are for understanding the breadth, depth and technical opinions of a candidate. In our process, we ask the candidate to bring in a piece of public code they have written. We want to understand the choices they made and why. We are NOT looking for conformity to our process. We are looking to see whether they make logical choices and where their strengths lie, whether in underlying fundamentals or in the application of patterns. When building UIs, understanding how a user might react to an interface is just as important as whether the code is stable and flexible.
  • Our process was implemented in just a few hours with very little structure. In the past, we have used an auto email responder and a single web page to explain the process. Less sunk cost in the setup allows for quicker iteration. Be willing to constantly reassess and rapidly change things that don’t meet the mark.
  • If 80% of a developer’s time isn’t spent in whiteboarding sessions, on algorithms or on architecture, then don’t test on that. If you’re hiring a data scientist, you’ll have a higher propensity to test on algorithms than for a front-end developer, where it’s less impactful. Specialization is a good thing.
  • The last thing I’ve done on multiple projects is to focus on a clearer separation of concerns in the architecture of my apps. This enables better specialization of developers. Our goal is to find a better balance between having stable releases and allowing our design team to iterate faster, fail sooner, learn quicker and experiment more. This structure has the added benefit of separating data concerns from user-facing concerns. A side effect has been that it lets us use junior developers more effectively by reducing complexity (a rough sketch of the idea follows this list). If you’d like to learn more about the latest architecture we’ve been building, there are details at bit.ly/fluxless and bit.ly/triforce-of-power.
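As a rough illustration of the kind of separation I mean (the names and endpoint below are invented for the example; this is the general shape of the idea, not the fluxless architecture itself):

```javascript
import React from 'react';

// Data layer: knows about the REST API, nothing about rendering.
export function fetchInvoices() {
  return fetch('/api/invoices').then(response => response.json());
}

// Presentation layer: receives plain data as props and knows nothing about
// where it came from. Designers and junior developers can iterate here
// without touching data logic, and vice versa.
export function InvoiceList({ invoices }) {
  return (
    <ul>
      {invoices.map(invoice => (
        <li key={invoice.id}>
          {invoice.customer}: {invoice.total}
        </li>
      ))}
    </ul>
  );
}
```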

Changing attitudes inside one company is hard enough. Changing an entire industry seems insurmountable. We should strive to be critical of what our processes do well and poorly. We should be aware of confirmation bias. A correlation that previously was proven true can and will change. If we always exclude people based on perceived correlations then we will continue to get more confirmation bias.
