We can be better
I’m male, white, and speak with a British accent. The odds are stacked in my favor. Yet even I have felt that running the gauntlet of stressful interviews was random and did not probe for my true ability. I can only imagine what it must be like for under-represented groups, dealing not only with these stress-inducing and arbitrary assessments, but systemic biases as well.
At Medium, we’ve always tried to distance ourselves from “traditional” tech interviews, favoring problem-solving ability and the ability to learn over programming language trivia and so-called CS fundamentals. This helps us avoid penalizing people with different backgrounds, and makes for a flexible team that is more likely to grow and develop on the job.
Earlier this year, as hiring ramped up, we realized there was increasing confusion about how to talk about the traits and abilities we look for in a colleague. This ambiguity was starting to lead to inconsistencies and some difficult hiring calls.
So we embarked on an effort to define what we screen for, what we don’t screen for, and to create an objective grading rubric for interviews.
Today we’re open sourcing our work. To find out more, I encourage you to read Jamie’s intro post to the project.
This is only one piece of a much larger puzzle to improve diversity and inclusion at Medium, and, more simply, to make interviewing make sense.
I’m incredibly thankful to Jamie Talbot and the team who worked on this, as well as our friends at CODE2040 who have been wise and helpful guides. We look forward to hearing feedback from the community, and iterating further on our interviewing practices.