Lifting the fog of bias

Judgement is made through the application of one or more biases.

Problematic judgements, including those that indicate racism or sexism, come when we apply irrelevant biases to the question at hand. Take a moment to skim Wikipedia’s list of cognitive biases, and you’ll wonder if it’s possible to make a well-informed judgement at all. Biases may be correlational rather than causal, may be nakedly false stereotypes, may play on our fear of risk or our preference for familiarity; these are biases that don’t drive good judgements.

No one lives in a void of bias. Worse, we can’t even enumerate which biases we hold. Asking people to disregard spurious biases when we don’t even know what biases are in play is folly.

Smart people have thought about this already, and invented “blind auditions.” The more we increase the signal and reduce the noise provided to a system, the easier it is for that system to amplify the signal rather than the noise. By obscuring a musician’s appearance during an audition, judges can focus on the signal, the performance, and not be distracted by noise, like the appearance of the musician. The impact of blind auditions on orchestra diversity is impressive.

When I interview candidates, I’m trying to answer just two questions:

  1. does the candidate have the aptitude necessary to do the job at hand, and
  2. can they enthusiastically apply that aptitude?

Aptitude can be gleaned in part from:

  • what they’ve built in the past, both professionally, and on their own time
  • what majors they’ve completed
  • what classes they’ve attended

Aptitude can also be hinted at by working through coding problems, but really only hinted at. If you can balance a binary tree or find the longest common substring, that does not tell me whether you can craft systems that are robust and plastic to change. Discussing the pros and cons of prior work, and what they would do differently next time, can also hint at aptitude.
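For concreteness, here is a sketch of one of the coding problems mentioned above, longest common substring, done with the standard dynamic-programming approach. It is exactly the kind of exercise that works fine over a text-only chat and a web-based editor:

```python
def longest_common_substring(a: str, b: str) -> str:
    # dp[i][j] holds the length of the common substring that ends
    # at a[i-1] and b[j-1]; track the best length and where it ends in a.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best_len, best_end = 0, 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
                if dp[i][j] > best_len:
                    best_len, best_end = dp[i][j], i
    return a[best_end - best_len:best_end]

print(longest_common_substring("interview", "viewer"))  # view
```

Solving it cleanly is a data point, but, as above, only a hint: it says little about how the candidate designs systems that must survive changing requirements.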

Given this, how can we remove noise from tech interviews, so irrelevant biases aren’t applied? What if we could amplify the signal to determine aptitude and enthusiasm?

What signal would we lose if we elided a candidate’s name, which may imply sex and race? Or the names of the companies they worked for? Certainly there are stereotype biases around companies, but are those reliable biases? Would you increase the signal-to-noise ratio by eliding even school names (or do you think your biases there are defensible)?

But how do we only “listen to the music” that the candidate can make?

With text-only chat and web-based code editors. Pure online interviews.

With no extraneous information, we simply can’t apply irrelevant biases.

But software companies won’t do away with the “in-person interview” in the foreseeable future.

So, meet me halfway. What if half of your technical interviews were text-only online-chat-and-editor interviews? At the very least, you could start collecting data on what biases your company has.

I’ve interviewed several hundred engineers and worked with several thousand more over the years.

Like hires like. Familiarity wins.

If you value diversity, lift the fog of your biases. Blind auditions can help you hire for job aptitude — not just familiarity.