Engineering Engineering Recruiting

Dan Simon
Published in The Qventus Nudge
Mar 4, 2022

Engineering recruiting is hard and only getting harder. In the world before COVID, most companies were geographically limited to hiring in their local markets, which meant that companies in places like the Bay Area competed for a very limited supply of engineers. As a Bay Area company, Qventus felt this pinch, so we made the decision to start hiring remotely across the U.S. Opening up our engineering positions nationwide had a noticeably positive impact on our recruiting efforts, but then the pandemic arrived.

As the tech world has become largely remote, our remote recruiting, once a strategic advantage, has over time become table stakes as competition for remote talent has steadily increased. We needed new, innovative ways to think about how we recruit, and as engineers we started thinking about how to engineer better engineering recruiting (in case you're wondering, the title of this post is not a typo).

Requirements Gathering

The first step in engineering our engineering recruiting was to do some requirements gathering, which meant we needed to understand the problems we were trying to solve. We needed to increase the number of candidates coming into the top of the funnel, then make sure we were managing them efficiently through our recruiting pipelines, and finally add sufficient governance to track the effectiveness of our efforts and thus improve iteratively. To do this, we identified three broad areas to address: top of funnel candidate volume, pipeline efficiency, and operational governance.

Feeding the Funnel

We wanted to find ways to supplement our existing sourcing efforts, and increasing internal referrals was an obvious direction. We already had a good internal referral program, but it was too passive. We needed to find ways to more effectively activate and organize our internal team, which has traditionally been a strong source of solid talent.

Pipeline Performance

As we looked at our pipelines and discussed the status of each of our open roles, we often found ourselves using the phrase "it feels like". We needed a way to make a more objective, data-driven determination of the health of each pipeline, and, more importantly, to understand where and why a pipeline was unhealthy.

Gaining Governance

Even if we had been able to objectively understand pipeline health and performance, we needed a mechanism to improve it. As we dug into pipeline stage metrics we found far too many gaps in how we track and manage our pipelines. For example, we found too many instances where candidates sat idle in specific stages. We needed a way to track pipeline progression and, more importantly, take action when a candidate started to idle.

The TDD + Implementation

Once we gathered requirements it was time to start planning and then building. As the title of this post suggested, we were aiming to engineer a solution to our engineering recruiting challenges, which meant some coding in addition to some creative change management.

Release the Referrals!

Qventus has always had great results with internal referrals, with quite a few Qventoids on the team today who joined as the result of one (our last 4 engineering hires were all internal referrals!). The challenge with referrals, though, is that folks typically tap their networks sporadically, which makes referrals spotty and pretty passive at best. We wanted to provide a more structured referral program and make it fun and engaging at the same time. We needed a recruit-a-thon!

The objective of the recruit-a-thon was to set aside some dedicated time for the team to source, but also to provide everything they needed to do it effectively. To that end we provided an easy way for them to upload their referrals, access to LinkedIn Recruiter to scour their adjacent networks, and some pre-canned blurbs they could use to introduce prospects to the various roles we were hiring for. This very limited 2-hour activity produced 50+ solid referrals, but we needed a sustained effort, not just a one-time thing. We needed the FOREVER recruit-a-thon!

Attempting to create a recruit-a-thon that would last forever was clearly impossible (and I'm sure would have violated some labor laws?), but we wanted a way to build on the momentum we had established with the first recruit-a-thon. To do this we introduced weekly competitive referring. The idea was pretty simple:

  1. Ask people to refer as many folks as possible each week.
  2. Maintain a leaderboard of who refers the most people.
  3. Announce the winner of the leaderboard each week and send them a mystery prize!

We ran this for a month and got about 10 to 20 referrals a week from the team. Our recruit-a-thons and competitive referrals worked well and have helped feed our top of funnel.

Heightened Health (Scores)

To understand pipeline health we needed to establish a few inputs:

  • Req Open Date: When the req was first opened.
  • Target Hire Date: When we need the candidate hired.
  • Pipeline Stage Conversion Rates: The historical conversion rate between pipeline stages.
  • Close Rate: The percentage of candidates that will get to offer acceptance from the top of the funnel, derived by looking at close rates across each stage of the pipeline.
  • Time to Hire: The aggregate time required to hire someone assuming efficient movement through the pipeline.
  • Pipeline Stage SLAs: The average amount of time we expect candidates to sit in each stage. These are ideal, but conservative values informed by past performance for similar roles, understanding that we do not have full control of each stage (e.g. how long it takes a candidate to respond to us when we reach out).
  • Target Pipeline Stage Quota per Day: The number of candidates that should exist in each stage of the pipeline on a given day, assuming linear candidate flow and based on the Target Hire Date, Pipeline Stage SLAs, and Pipeline Stage Conversion Rates (a rough sketch of this derivation follows below).
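To make that last input concrete, here is a minimal sketch in Python of one way to derive per-stage candidate targets by working backwards through conversion rates. The stage names and rates below are made-up placeholders, not our actual numbers, and the real calculation also spreads these totals over the days available before the Target Hire Date.

```python
# Illustrative only: derive how many candidates each stage must see to yield
# one hire by walking the pipeline backwards through historical conversion
# rates. Stage names and rates are placeholders, not real Qventus data.

STAGES = ["Recruiter Screen", "Hiring Manager Screen", "Tech Interview", "Onsite", "Offer"]

# Historical probability that a candidate in a given stage advances to the next.
CONVERSION = {
    "Recruiter Screen": 0.50,
    "Hiring Manager Screen": 0.60,
    "Tech Interview": 0.40,
    "Onsite": 0.50,
    "Offer": 0.80,  # offer-acceptance rate
}

def candidates_needed(hires: int = 1) -> dict:
    """Candidates each stage must see to produce `hires` accepted offers."""
    needed = {}
    required = float(hires)
    for stage in reversed(STAGES):
        required /= CONVERSION[stage]
        needed[stage] = required
    return needed

if __name__ == "__main__":
    needed = candidates_needed()
    for stage in STAGES:
        print(f"{stage}: ~{needed[stage]:.1f} candidates per hire")
    # The Close Rate input falls out of the same numbers: ~4.8% in this example.
    print(f"Implied close rate from top of funnel: {1.0 / needed[STAGES[0]]:.1%}")
```

Dividing each stage's total by the number of days that stage has available before the Target Hire Date (per the stage SLAs) gives a rough per-day quota like the one described above.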

Given these inputs we calculated pipeline health by looking at each stage of the pipeline for a given role to determine whether we had a “healthy” number of candidates in that stage at a given point in time, and then looked at the pipeline as a whole to understand macro health, weighting stages accordingly.

To this end, we compared the number of candidates in each stage against the Target Pipeline Stage Quota per Day. If we were at or above the required quota for a stage, that stage was healthy. If we were below it, the stage was unhealthy.
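Here is a simplified sketch of that comparison, including the weighted roll-up into a macro health score. The stage names, counts, quotas, and weights are illustrative; in practice the actual counts came from Lever and the quotas from the derivation above.

```python
# Sketch: per-stage health is actual count vs. per-day quota; macro health is
# a weighted average of the per-stage results. All numbers are illustrative.

def stage_health(actual: int, quota: float) -> float:
    """>= 1.0 means the stage is healthy; below 1.0 means it is starved."""
    return 1.0 if quota <= 0 else actual / quota

def pipeline_health(stages: list[dict]) -> float:
    """Weighted average of per-stage health, with each stage capped at 1.0."""
    total_weight = sum(s["weight"] for s in stages)
    score = sum(min(stage_health(s["actual"], s["quota"]), 1.0) * s["weight"]
                for s in stages)
    return score / total_weight

example = [
    {"name": "Recruiter Screen", "actual": 12, "quota": 10, "weight": 1},
    {"name": "Tech Interview",   "actual": 2,  "quota": 4,  "weight": 2},
    {"name": "Onsite",           "actual": 1,  "quota": 2,  "weight": 3},
]

for s in example:
    status = "healthy" if stage_health(s["actual"], s["quota"]) >= 1.0 else "unhealthy"
    print(f"{s['name']}: {status} ({s['actual']} of {s['quota']})")
print(f"Overall pipeline health: {pipeline_health(example):.0%}")
```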

Furthermore, the number of candidates in a given stage could tell us whether we had enough to feed the next stage. For example, if stage A had 10 candidates, stage B needed 5 candidates, stage A only had a conversion rate of 20%, and candidates typically stayed in stage A for 7 days, then it would take 21 days to get 6 candidates from stage A to stage B (10 * 0.2 * 3 = 6), which in theory would increase hiring time by 14 days.
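The same back-of-the-envelope math, written out (using the numbers from the example and its simplifying assumption that stage A converts roughly two candidates per 7-day cycle):

```python
# Back-of-the-envelope version of the stage A -> stage B example above.
import math

candidates_in_a = 10
conversion_rate = 0.2   # 20% of stage A candidates advance
days_in_stage_a = 7     # typical time spent in stage A
needed_in_b = 5

converted_per_cycle = candidates_in_a * conversion_rate   # 2 per 7-day cycle
cycles = math.ceil(needed_in_b / converted_per_cycle)     # 3 cycles
days_to_feed_b = cycles * days_in_stage_a                 # 21 days
print(f"{days_to_feed_b} days to move {int(cycles * converted_per_cycle)} "
      f"candidates into stage B")
```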

The time required to start filling a stage also needed to be considered. For example, at day 0 (when recruiting starts) it makes no sense to expect stage 4 of the pipeline to have any candidates in it; instead, we should expect it to take the sum of all upstream stage SLAs for candidates to reach stage 4. If it took 4 days to get through stage 1, 3 days through stage 2, and 5 days through stage 3, then we shouldn't expect stage 4 to have any candidates until day 12 (4 + 3 + 5).
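In code, that expectation is just a running sum of the upstream SLAs (the SLA for stage 4 itself is a made-up value):

```python
# Sketch: the earliest day a stage can be expected to contain candidates is
# the cumulative sum of all upstream stage SLAs.
from itertools import accumulate

stage_slas = [4, 3, 5, 6]  # days to clear stages 1-4; stage 4's SLA is made up
earliest_day = [0] + list(accumulate(stage_slas))[:-1]

for stage, day in enumerate(earliest_day, start=1):
    print(f"Stage {stage}: expect candidates starting on day {day}")
# Stage 4 -> day 12 (4 + 3 + 5)
```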

We used the Lever API (our applicant tracking system) to pull all the associated inputs, derive the other necessary inputs, programmatically compute the health score using the method above, and write it out to a Google Sheet. We also captured the unhealthy stages and reported on them, specifically calling out which stages of each pipeline needed the most attention. Not only did this help us quickly and consistently determine pipeline health, it also focused our recruiting efforts on the right parts of the pipeline when there was a problem.
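A stripped-down sketch of that automation is below. It assumes the Lever Data API's opportunities endpoint and the gspread library for writing to the Google Sheet; the exact query parameters and field handling are simplified, pagination and error handling are omitted, and health_score is a stand-in for the weighted calculation described above.

```python
# Sketch: pull active candidates per role from Lever, score each pipeline, and
# write the results to a Google Sheet. Endpoint parameters and helpers are
# simplified assumptions, not a drop-in script.
import requests
import gspread

LEVER_API_KEY = "..."  # Lever Data API key, used as the basic-auth username
LEVER_URL = "https://api.lever.co/v1/opportunities"

def fetch_active_candidates(posting_id: str) -> list[dict]:
    """Fetch non-archived opportunities for a posting (pagination omitted)."""
    resp = requests.get(
        LEVER_URL,
        auth=(LEVER_API_KEY, ""),
        params={"posting_id": posting_id, "archived": "false"},
    )
    resp.raise_for_status()
    return resp.json()["data"]

def health_score(candidates: list[dict]) -> tuple[float, list[str]]:
    """Stand-in for the weighted per-stage comparison sketched earlier."""
    return 1.0, []  # (overall score, names of unhealthy stages)

def report(postings: dict[str, str]) -> None:
    """Write one row per role: name, health score, and stages needing attention."""
    sheet = gspread.service_account().open("Pipeline Health").worksheet("Scores")
    for role, posting_id in postings.items():
        score, unhealthy = health_score(fetch_active_candidates(posting_id))
        sheet.append_row([role, f"{score:.0%}", ", ".join(unhealthy)])
```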

Gaming Governance

Understanding pipeline health was important, but we needed to ensure that any action taken to correct and manage a candidate pipeline was being done efficiently. We needed governance.

We had a pretty strong perspective on how long it should ideally take candidates to progress between stages in the pipeline. Using the Lever API we pulled all active candidates for all open roles and determined how long each had been inactive in their current stage. We then compared that against a pre-established set of pipeline stage SLAs and surfaced every candidate who had been idle too long. This was really helpful in building a list of candidates that needed attention, but it was not sufficient to create governance that would drive more efficient pipeline maintenance.
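Conceptually, the check looks something like this (stage names, SLAs, and the shape of the candidate records are simplified placeholders rather than Lever's actual schema):

```python
# Sketch: flag candidates who have sat in their current stage longer than that
# stage's SLA. Stage names, SLA values, and record fields are placeholders.
from datetime import datetime, timezone

STAGE_SLA_DAYS = {
    "Recruiter Screen": 3,
    "Tech Interview": 5,
    "Onsite": 7,
}

def idle_candidates(candidates: list[dict]) -> list[dict]:
    """Return candidates whose time in their current stage exceeds its SLA."""
    now = datetime.now(timezone.utc)
    flagged = []
    for c in candidates:
        sla = STAGE_SLA_DAYS.get(c["stage"])
        if sla is None:
            continue  # no SLA defined for this stage
        days_in_stage = (now - c["entered_stage_at"]).days
        if days_in_stage > sla:
            flagged.append({**c, "days_idle": days_in_stage - sla})
    return flagged
```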

To establish governance, we then set up a daily email to each candidate "owner" (as determined by Lever) with a list of their candidates that needed attention. Establishing this system helped us clean up our pipelines and keep them clean, ensuring that we keep candidates moving efficiently through the pipeline.
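A rough sketch of the digest itself, grouping the flagged candidates from the previous step by their Lever owner and sending one email per owner (the SMTP details, sender address, and record fields are placeholders):

```python
# Sketch: send each candidate owner one daily digest of their idle candidates.
# The sender address, SMTP host, and record fields are placeholders.
from collections import defaultdict
from email.message import EmailMessage
import smtplib

def send_daily_digests(flagged: list[dict], smtp_host: str = "localhost") -> None:
    by_owner = defaultdict(list)
    for c in flagged:
        by_owner[c["owner_email"]].append(c)

    with smtplib.SMTP(smtp_host) as smtp:
        for owner, candidates in by_owner.items():
            lines = [f"- {c['name']} ({c['stage']}): {c['days_idle']} day(s) past SLA"
                     for c in candidates]
            msg = EmailMessage()
            msg["Subject"] = f"{len(candidates)} candidate(s) need attention"
            msg["From"] = "recruiting-bot@example.com"
            msg["To"] = owner
            msg.set_content("\n".join(lines))
            smtp.send_message(msg)
```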

Recruiting Retro

So with all these changes, what worked well and what didn't? What would we have done differently? Here are a few of the key learnings from this endeavor:

  • Be deliberate: We have had a referral program for a while, but carving out explicit time for folks to scour their networks and then following up frequently really increased the volume of referrals.
  • Normalize analysis: Deterministically establishing health scores was key to shifting our discussions from debating the state of our recruiting pipelines to focusing on the solutions needed to address pipeline issues.
  • Automation set us free: Automating how we manage our pipeline really helped reduce the amount of time required to tend to our pipelines, freeing up time to focus on sourcing and working with candidates directly, which is where we get the most value.
  • People make the difference: We established a lot of new processes, but ultimately the willingness of the team to try something new and embrace change made the difference.

Like software engineering, recruiting is an iterative process. We established a pretty good baseline set of tools to help us scale to an increasingly larger set of candidates and roles, which is especially important given our growth targets this year on the heels of our new round of funding. There is certainly a lot more to be done, but we're off to a good start.
