Take Action: Cross-Functional Partnership in Revamping Engineering Interview Processes

By Meredith Marks and Gabby Salmeron · Teachable · Mar 22, 2023

Hello! Our names are Gabby Salmeron (she/her), Senior Tech Recruiter, and Meredith Marks (she/her), Senior Engineering Manager, and we’re here to share how we collaborated to revamp the engineering interview process at Teachable. In this post, we’ll walk you through the context we started with, how we went about tackling this lofty project, the changes we made and why, as well as some of our learnings so far. We are very proud of the work our team has done, and we’re excited to share it with you!

Problem Context (Gabby)

Our interview process for individual contributor engineers certainly did the job: it brought great people onto our team. However, it didn’t always provide the best candidate or interviewer experience.

Our prior interview process was:

  1. Recruiter Call
  2. Paired Programming Session
  3. Onsite:
    a. Another paired programming session (front or backend)
    b. A systems design and architecture challenge (which had only recently replaced an ancient BINARY TREE problem that was problematic and exclusionary for reasons I won’t go into)
    c. Conversation with Product and Design
    d. Conversation with the Hiring Manager
A diagram of Teachable’s previous engineering interview process

Internally, we knew anecdotally that there was a mix of problems. I’d receive Slacks from engineers about the system crashing during a paired programming session, file a ticket, and hope the candidate wasn’t turned off. Interviewers were unclear on how to assess candidates in a given round, and I noticed they tended to over- or under-index during candidate debrief discussions. We had prompts and questions, but everyone unfortunately had their own flavor of delivering and assessing them; nothing was standardized, which made the process unfair and often frustrating.

My gut, in tandem with conversations with interviewers (and even candidates), signaled that we needed a revamp. However, given our ambition to always be data-driven at Teachable, I started by sharing an anonymous survey with our engineering, product, and design teams to gather feedback from the interviewer perspective and, for recent hires, the candidate perspective. Thankfully, I was also able to tap into exit interview feedback from engineers who had voluntarily left Teachable to see if there was anything there we could use to improve the process.

After pooling qualitative and quantitative feedback, two main problem areas became clear: process and operations.

Our end-to-end interview process had a number of issues. Not only was it redundant, but the content of the questions seemed irrelevant to what we were actually looking for in a candidate (from a values perspective, the work we’re doing here, and so on). The rubrics for evaluating the rounds were non-existent, outdated, or existed in three competing versions, and our paired programming tool could crash at any point, forcing us to reschedule or to lean on our engineers to give their best guess at the candidate’s competencies based on how far they got. None of this set the candidate up for success or gave a good impression of Teachable. Change was overdue.

From an operational standpoint, we had a limited pool of interviewers and no engineering-specific interview training in our onboarding process, which often put new hires in a tough position: they’d be thrown in to shadow a session without any preparation, or never interview at all. This caused interviewer burnout, to say the least, and contributed to the inconsistencies in candidate evaluation. One of our dedicated interviewers, a senior engineer, even set up an informal training program as a stopgap to ramp more people up, but it wasn’t scalable.

But the data also showed that it wasn’t all doom and gloom; we were doing some things right! What information we did have (job description, interview plan, Confluence documentation, leveling information, etc.) was provided to interviewers beforehand. Candidates also felt comfortable because our interviewers created a safe and supportive atmosphere (shout out to all of our interviewers across engineering, product, and design!), something that can be very difficult to achieve in interviews but carries a ton of value in the candidate experience. Lastly, the recent hires who took the survey said the process still left them with a positive impression of Teachable, which was really important to preserve.

I’d argue we had the hardest stuff figured out: the right mindset and a kind, collaborative team. The culture and DNA were great. The challenge ahead lay in the process itself, and in providing the training and tools to weave in Teachable’s values and empower our interviewers to assess candidates well and run any part of the interview process. As at many technology companies, you sometimes have to “build the plane while flying it.” And so, recruiting and engineering eagerly collaborated from the start.

Getting Started

As a mere Tech Recruiter, I could not do this myself. I announced at our Technology All Hands meeting that I was building an Engineering Hiring Guild and eagerly asked for participants. I wanted to create buy-in with my partners in engineering so we could build a process they were excited about and proud to drive. Enter: Meredith. Meredith had joined Teachable just the month before, so her fresh memory of the process as a recent candidate, combined with her past experience revamping interview processes, made her my perfect co-lead on the engineering side. Not to mention that in her first month she had already heard first-hand from her team about how burned out they were from interviewing! She also happened to be one of my hires, which, as a recruiter, is pretty darn exciting. I might be biased, but she was (and is) awesome!

I also leaned on Tech Leadership to tap into their teams and eventually recruited eight individuals from engineering, ranging from mid-level engineers to senior managers, plus two from recruiting. We scheduled a weekly meeting, built out a Google Drive, created a Slack channel, and we were off to the races!

During our first meeting, my colleague Savanna, Associate Tech Recruiter, shared a presentation going into more detail about the anonymous feedback we’d received on each interview stage. As a group, we analyzed the feedback and took a step back: did we want to start entirely fresh, rather than just refresh the biggest problem areas? We opted for yes, let’s start fresh.

And now I’ll hand it over to Meredith to walk you through the primary changes we made in detail.

Phase 1 — Hiring Manager Phone Screen (Meredith)

The first thing I noticed about our previous interview process was how quickly the paired programming session came: right after the recruiter call, and before the candidate had any interaction with someone closer to the role. When I interviewed as an IC candidate throughout my career, I was always turned off by this pattern, as I felt it showed a lack of engagement with me and prioritized the technical screen in order to “fail fast.” It was also clear from the data Gabby gathered that the paired programming sessions caused the most problems and interviewer fatigue, not to mention that exit interview data more often showed misalignment in cultural areas (growth, development, expectations, etc.) than in technical aptitude.

How might we show engagement with the candidate and get early signal on core competencies for the role and alignment with Teachable’s values, making the best use of both our and our candidates’ time? My idea was to insert a Hiring Manager phone screen between the recruiter call and the technical screen, which seemed to fit the bill perfectly. If there isn’t a mutual fit for the role, it saves both parties a lot of time by avoiding a deeper technical screen. It also enhances the candidate experience: speaking to their potential future boss early in the process shows a commitment to the candidate and their growth, and gives them more detailed information about the role and the team. If it’s a good fit, this should in turn get them excited and motivated about Teachable and the rest of the interview process, a win-win for both parties.

I got buy-in for this idea from the rest of engineering leadership, who were excited about the potential of the new step; even though it meant additional time spent for them, the benefits were clear. A few of them volunteered to help brainstorm the content of the interview, and a few meetings later we had a set of questions ready to go. We made sure they wove in Teachable’s values while also giving candidates space to talk through what they were looking for in their next role.

Gabby reviewed the questions and got support from the recruiting side to launch this step into our hiring process right away, even while we were still figuring out what other changes we wanted to make. This agile approach let us move quickly where it made sense, having an impact sooner, while taking our time on the meatier parts of the process, like the technical assessments.

Phase 2 — Take-home Technical Assessment & Review

The next major change the Guild took on was replacing the two live coding sessions in the old process, one before the onsite and one during it. Why? Suffice it to say that the live coding interview is notorious in the engineering world, and there is a lot of research demonstrating that it is not effective at evaluating competencies relevant to the job. For example, from a study by North Carolina State University and Microsoft:

The technical interviews currently used in hiring for many software engineering positions test whether a job candidate has performance anxiety rather than whether the candidate is competent at coding. The interviews may also be used to exclude groups or favor specific job candidates. … For example, in our study, all of the women who took the public interview failed, while all of the women who took the private interview passed. Our study was limited, and a larger sample size would be needed to draw firm conclusions, but the idea that the very design of the interview process may effectively exclude an entire class of job candidates is troubling.

In order to meet our goals of a fair process that optimizes for the candidate and interviewer experience, and helps us build a diverse team in line with Teachable’s values, it became clear it was worth exploring other options. Another common technical interview is the take-home assessment, which will similarly stir up opinions in any engineer you ask. To evaluate these two options, the group made a pros-and-cons list. I won’t go into all the details of the thorough and nuanced discussion (it would be worth its own article!), but we came to the conclusion that the primary concerns with a take-home assessment are solvable in how it is implemented. There’s less research on the efficacy of take-home assessments, but a Twitter search unearthed informal polls in which at least three times as many respondents preferred take-home challenges over live coding interviews.

So we got to work. A subgroup of engineers within the Guild, spanning multiple levels and skill sets, volunteered to band together and formulate Teachable’s new take-home assessment. First, we agreed on table stakes: a reasonable time expectation (2–3 hours), clear and consistent evaluation criteria, and a commitment to giving the candidate feedback, including a dedicated take-home review session with a Teachable engineer during the onsite round. We also debated the bigger questions, like how strict to be on timing and whether the assessment should require specific languages. Our brainstorming led us to the idea of a language-agnostic assessment that prompts candidates to solve a real-world problem using Teachable’s own product: our Public API.

I began by exploring the capabilities of our API and how I could craft an exercise that didn’t require specific technologies, but rather specified a set of required outputs that the candidate could produce in whatever way they chose. This proved quite challenging, but I knew the flexibility of letting a candidate use their own familiar language and tools was worth it. I went back to problems I’d solved over and over again and found a good jumping-off point: a simple pattern of data fetching and output manipulation is universal. I won’t go into any more details of the assessment itself for obvious reasons, but once I had a draft that felt like a good start, I went back to the group for their opinions. After a few rounds of iteration on the requirements and the prose of the prompt, we felt we had hit a sweet spot of complexity, while also being a little fun and teaching the candidate a bit about Teachable along the way.

Next, because the Guild had been involved in this process from the start, I wanted to find other Teachable engineers to test-drive the assessment without any context. I recruited 10 wonderful volunteers to spend two hours taking the assessment and providing anonymous feedback. Their feedback, from the more realistic position a candidate would be in, proved invaluable, and a few more drafts and iterations later, we had a final version of the assessment itself.
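To make that fetch-and-transform pattern concrete, here’s a minimal sketch of the general shape such an exercise can take. To be clear, this is not our actual prompt: the endpoint, fields, and required output below are hypothetical placeholders, and a candidate could produce the same result in any language.

```python
# A generic fetch-and-transform sketch, NOT Teachable's actual assessment.
# The endpoint, response shape, and required output are hypothetical.
import json
from urllib.request import Request, urlopen

API_URL = "https://example.com/api/v1/courses"  # hypothetical public endpoint


def fetch_courses(url: str) -> list[dict]:
    """Fetch a JSON list of course records from the API."""
    request = Request(url, headers={"Accept": "application/json"})
    with urlopen(request) as response:
        return json.load(response)["courses"]


def summarize(courses: list[dict]) -> list[dict]:
    """Transform raw records into the required output:
    published courses sorted by enrollment count, highest first."""
    published = [c for c in courses if c.get("published")]
    published.sort(key=lambda c: c.get("enrollments", 0), reverse=True)
    return [{"name": c["name"], "enrollments": c["enrollments"]} for c in published]


if __name__ == "__main__":
    print(json.dumps(summarize(fetch_courses(API_URL)), indent=2))
```

The point of this shape is that nothing in the required output depends on a particular stack; the same exercise works equally well in Ruby, Go, or a shell script.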

Next came perhaps the most difficult part: creating the evaluation criteria. The primary reason to have a language-agnostic assessment is to allow candidates to code in whatever technologies they are most comfortable with, affording them the opportunity to focus on showcasing their strengths. The prompt even reiterates this and assures the candidate that the goal is not to “check all the boxes.” But the question becomes: how do you uphold that promise while also formulating evaluation criteria that ultimately produce a yes or no answer? In the end, we came up with a framework that emphasizes the fundamentals of code quality while leaving more open-ended space to assess the caliber of the implementation, both functionally and technically. We think it’s a good start, but we know this will be a key area for us to further refine as it gets put into use by engineers.
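As a loose illustration of what “fundamentals plus open-ended caliber, collapsed into a yes or no” can look like, here is a hypothetical sketch; the dimensions, scales, and threshold are invented for this post and are not our actual criteria.

```python
# Hypothetical rubric sketch; the dimensions, scales, and threshold are
# illustrative only, not Teachable's actual evaluation criteria.
from dataclasses import dataclass


@dataclass
class TakeHomeEvaluation:
    # Fundamentals of code quality: language-agnostic, pass/fail checks.
    produces_required_output: bool
    readable_and_well_organized: bool
    handles_errors_sensibly: bool
    # Open-ended space: the interviewer's judgment of the implementation's
    # caliber, functionally and technically, on a 1-5 scale.
    functional_caliber: int
    technical_caliber: int

    def recommendation(self) -> bool:
        """Collapse the rubric into the yes/no answer a debrief needs."""
        fundamentals_pass = all([
            self.produces_required_output,
            self.readable_and_well_organized,
            self.handles_errors_sensibly,
        ])
        return fundamentals_pass and (
            self.functional_caliber + self.technical_caliber >= 6
        )
```

Splitting the rubric this way is what lets the fundamentals stay strict and consistent across languages while the caliber dimensions leave room for a candidate’s individual strengths.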

One of my favorite benefits of using a take-home assessment in an engineering interview process is the opportunity to hold a take-home review session with the candidate during the onsite round. I’ve done a lot of interviews over my career, and take-home review sessions have always been my absolute favorite. They provide a unique chance to have a technical discussion about code that both the candidate and the interviewer have had time to digest. The format also reduces stress by closely mimicking real-world technical discussions: engineers discuss solutions together and review each other’s code daily, and this should feel exactly like that. It also provides signal on how a candidate responds to feedback. The interviewer uses the time to ask questions about specific areas of the candidate’s submission, for example, “How could you rename these variables and methods to make their purpose clearer?” or “How did you consider performance when deciding to use this data structure?” A successful candidate will engage productively with these questions, while it’s an important negative signal if a candidate becomes defensive or combative when given this type of feedback. The last part of the take-home review prompts the candidate to elaborate on what they would do with more time and how their solution might need to change under additional constraints or a new feature requirement. (I’m being intentionally hand-wavy here to avoid spoilers.) Overall, the take-home review interview is extremely signal-heavy for both sides: the candidate and the interviewer each get a real-world experience of technical collaboration with a potential future teammate.
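To illustrate the kind of naming question mentioned above, here’s a hypothetical before-and-after (not taken from any real submission) of the sort of refactor that discussion might lead to:

```python
# Hypothetical example of the naming feedback discussed above; not from
# any real candidate submission.

# Before: terse names hide the function's intent.
def proc(d, n):
    r = []
    for x in d:
        if x["total"] > n:
            r.append(x)
    return r


# After: descriptive names make the purpose self-evident at a glance.
def filter_orders_above_minimum(orders, minimum_total):
    return [order for order in orders if order["total"] > minimum_total]
```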

Accurately and fairly assessing a candidate’s technical skills is very difficult, but through the group’s careful consideration, we are happy with our switch from two live-coding exercises to a take-home assessment and review. We are confident that we will see better outcomes with this approach and look forward to learning and iterating on it.

Phase 3 — Technical Project Walkthrough

The other technical interview in the previous process was called Systems Design and Architecture. Another common practice in the industry, this type of interview is often given to more senior candidates, and sometimes only to candidates with backend experience. Teachable, however, was conducting this interview for all candidates, regardless of seniority level or stack lean. While I advocated for that one-size-fits-all strategy for the take-home, in the Systems Design and Architecture round it led to unclear expectations for both the candidate and the interviewer, and ultimately didn’t allow candidates to showcase their skills. These sentiments were highlighted in the qualitative feedback we gathered from the team. Additionally, the take-home already covers a number of the SD&A round’s criteria, like overall architectural design and the ability to manage ambiguity. So instead, I suggested an approach that emphasizes different skills, ones that are often unaccounted for in a technical interview process but that I believe are among the most critical to an engineer’s success: technical communication, critical thinking and problem solving, making appropriate tradeoffs, and the ability to deliver. This takes the form of an interview I call the Technical Project Walkthrough.

This interview prompts a candidate to walk two Teachable engineers through the technical details of a project they worked on recently, with an emphasis on the four skills outlined above. It may seem simple, but with targeted follow-up questions to the candidate’s responses, the interviewers can foster an organic technical discussion much like the take-home review. In my experience, just asking engineers to elaborate on something they built and are proud of surfaces those more nuanced skills, like critical thinking, not just whether they can design a database schema. (These days, AI can do that for you anyway.)

Ultimately, the simplicity and targeted nature of this interview is also reflected in its evaluation criteria. Yes, I can spoil this one for you: it measures those same four skills — technical communication, critical thinking and problem solving, as well as the candidate’s ability to make appropriate trade-offs and ultimately complete a successful project.

Bringing It All Together

Besides the three significant changes outlined above (the hiring manager phone screen, the take-home assessment, and the technical project walkthrough), we also refreshed the questions for the cross-team interview with product and design and for the final hiring manager conversation, to make them more relevant and intentional. With that, no interview was left untouched: we had crafted a cohesive process from start to finish, and we were really proud of it. Illustrated, it looks like this:

A diagram of Teachable’s new engineering interview process

As Gabby highlighted previously, one pain point of the previous process was a lack of centralized documentation. So for our shiny new process, I created a brand-new overview document that, for each interview, lists who the interviewers should be, the goals of the interview, and the next steps with set timelines. There are also subpages for each interview outlining the agenda, questions for the interviewer, and details on evaluation, all in a consistent and digestible form. These documents are the artifacts of our efforts and will serve as a valuable source of truth for all things engineering interviewing.

Of course, the value of all this work is only realized when the process is put into practice. In the spirit of iteration, instead of rolling it out to the entire organization, we opted to start with an open role that existed at the time, within the Learning team. Teachable’s Learning team is composed of three pods, so the engineers on those pods formed our Beta group. Interview training was critical to making this process address the pain points and hit our goals. Gabby and I put together two and a half hours of training sessions for the Beta group, covering the take-home technical assessment and the technical project walkthrough. These training sessions were very tactical. The take-home training went into the nitty-gritty details of the prompt and its requirements and walked through some example submissions. We covered the expectations of the take-home review session, focusing on delivering feedback in a way that properly gets at the signals we’re looking for. The technical project walkthrough training introduced the new interview format to the Beta group and showcased the types of discussions they could expect to have with candidates. And of course, each training went into detail about the evaluation criteria, which was very important for ensuring consistency and fairness across interviewers.

Launching the new process and training materials with the Beta group afforded us the opportunity to refine our communications and artifacts for the next open role, and set us up to deliver this training consistently as new engineers join the team (that’s what this is all about, right?). At that point, we’d done all we could do. It was time for all of our efforts to come to life for our interviewers and our candidates. We were thrilled to see it in action and eager to collect feedback, both internally and externally. I’ll hand it back to Gabby to share some of our early learnings.

Learnings So Far (Gabby)

The team has been very receptive to this new process, and feedback has been helpful for refinement and continuous improvement!

On the candidate experience side, I’ve received a lot of positive comments upon submission of the take-home technical assessment:

“I love this take home assignment so much I’ve been treating it as a pet project…”

“I genuinely enjoyed the exercise, the documentation around the Teachable public API reminded me of some of my favorite libraries to work with.”

“I had a fun time working through the problem…”

Additionally, the parents and self-disclosed neurodiverse candidates I’ve spoken to appreciated the flexibility of our take-home assessment. Many take-home exercises are timed at an unreasonable 5–8 hours, and the usual alternative is a pair programming session, which can be very uncomfortable and takes place during working hours.

We’re continuing to iterate on the take-home technical assessment’s prompt and evaluation criteria based on our team’s feedback, so that candidates are better empowered to execute and our interviewers to assess.

Concerning our Technical Project Walkthrough, the team of interviewers is enjoying this new session! See our team’s feedback below:

“As the first time giving this question, it was a breath of fresh air to have a conversation with a candidate from a different perspective.”

“This interview gives us great insight into an engineer’s ability to think and execute on projects as a big picture.”

From the data we have, we can also tell that candidates are moving to an onsite at roughly the same speed as during this period a year ago (~10 days). But arguably we are moving faster, since the pre-onsite stage now includes BOTH the Hiring Manager Phone Screen and the Take-Home Technical Assessment. We’re currently tracking time to hire and time to fill, though with some limitations that should be remedied by late April, when we also hope to have more open roles using this process.

While we’re really proud of what we’ve done so far, we are also excited to continue hearing feedback and working iteratively to improve the process even further from here. Special thanks to the whole team here at Teachable that was involved in this initiative — it certainly was a collaborative and cross-functional feat!
