Introducing NextGen Hiring: the way to hire 10x developers who are ready for the Gen AI world

Komal Agarwal
12 min read · May 25, 2024


~ Designing a real-world hiring experience to hire 10x developers in the emerging era of Gen AI.

Summary

Hola! Here’s a quick overview of what you can expect from this blog:

  1. The “5 whys” of the problem statement, following HackerRank’s V2SMOM framework: Vision, Values, Strategy, Methods, Obstacles, and Measures.
    — By using this framework, product teams ensure a clear understanding of the problem and a structured approach to solving it, increasing the likelihood of developing a successful product aligned with user needs and company objectives.
  2. The MLP (Minimum Lovable Product) design rollout journey
  3. Key takeaways from the concept-to-creation process (most of these will spill over into the next blog!)

Context

Advancements in AI are significantly impacting software development and developer productivity. Tools are emerging that handle various aspects of the development cycle, from transforming design files into front-end code to code generation and review. For example, Magic is aiming to create the world’s first AI software developer, and GitHub Copilot workspaces help developers with project planning, code generation, review, and deployment.

These developments suggest that the role of developers will evolve with AI models like LLMs (Large Language Models) capable of handling extensive context. Google’s latest model can process 2 million tokens, allowing for more comprehensive and meaningful outputs. Eventually, these AI models will act as agents, assisting developers throughout the entire software development lifecycle.

Currently, developers need to understand requirements, design systems, write and review code, ensure code quality, debug, deploy, and iterate based on feedback. In the future, AI agents will handle these tasks, either individually or collectively. Developers will transition from coding to orchestrating these AI agents, ensuring efficient and effective outcomes for businesses.

These developments reshape the expectations of new-age developers:

  1. Developers will become code orchestrators. They may no longer author code from scratch; instead, it will be essential for them to validate and test code pre-authored by AI.
  2. Strong coding fundamentals will still matter for building and maintaining code. An entry-level developer will need to prioritise the fundamentals of software and coding over the choice of which specific programming language to learn first.
  3. Developers must become experts in areas like code review, prompt engineering, testing, training large language models, and dealing with non-deterministic outcomes.
  4. The distinction between Front-End and Back-End developers will diminish, with Full-Stack developers leveraging AI tools to build end-to-end applications.
  5. As LLMs evolve, their coding capabilities will improve over time. Leveraging AI to deliver high-value output will become more critical than writing any single piece of code. Reference: Link 1; Google Goose update; Cognition AI builds the first AI software developer

The key thesis areas that we believe are critical at this juncture are:

1. Code review would become a critical skill for developers working in the AI-first world.

2. Developers evolve into code orchestrators.

3. Hiring managers seek candidates empowered with all available tools.

My Role:

Within this project, I held the position of senior designer, collaborating closely with a principal product manager on research and strategy, while spearheading the design and execution efforts.

The core stakeholders involved included:

  • A Principal PM — who also served as the project captain,
  • A Senior Engineering Manager,
  • and myself as the Senior Product Designer

Other key stakeholders comprised:

  • Our CEO,
  • A couple of Directors of Product Management and Engineering (given the project’s significant horizontal impact),
  • AI/ML Engineering Manager,
  • A Director of Content and
  • The Head of Design

Now let’s talk about…

What’s the problem at hand?

Our challenge was to develop a product that assists companies in hiring developers in the new AI-first world, aligning seamlessly with the real-world Software Developer life cycle.

Our Solution for NextGen Hiring

The NextGen Hiring product is built on these theses: we test a candidate’s coding fundamentals through tasks like feature building and bug fixing. We assess their ability to collaborate with AI in developing features, a process we call AI orchestration. Additionally, we evaluate their skills in reviewing code within a large codebase and their ability to take relevant actions.

Finally, candidates are given a system design question. Importantly, these tasks are not isolated puzzles but are integrated to mirror real-world scenarios. All tasks are based on a real-world code repository, ensuring a practical and relevant assessment experience.

Values

When considering the values we aimed to uphold, it’s essential to strike a balance between serving our users’ needs and delivering value to the company. For me, the two carry almost equal weight.

Here are some key values from a customer perspective:

  1. Establish trust as a reliable partner during their organizational paradigm shift.
  2. Proactively anticipate real-world requirements and stay ahead of the curve in understanding potential needs.

From an operational perspective, we prioritized the following values:

  1. Continuous learning: Acknowledging that our thesis must evolve with technological changes, we remain agile and adaptable.
  2. Agile testing and iteration: Given the high level of uncertainty, we adopted a scrappy approach, focusing on MVPs to validate assumptions before iterating and enhancing.
  3. Real-world simulation: Interviews are a proxy for judging a candidate’s potential performance in the job. On a spectrum from judging someone based on their resume to spending a few days with the candidate coding in your actual codebase, the current interview process is somewhere in the middle, mostly due to time & bandwidth limitations. This involved pushing boundaries to mimic real-world scenarios through tools, workflows, and skill assessments, acknowledging that this evolution is inevitable and positioning ourselves as leaders in driving it forward.

Strategy

Before delving into solutioning, it’s essential to define the strategic pillars of the project:

S1: Establish the foundational code repository infrastructure to empower the hiring process.

  • Code repositories serve as the bedrock upon which real-world questions are built.
  • These repositories facilitate the creation of authentic problem scenarios, allowing candidates to showcase their ability to comprehend large code blocks and solve the posed questions effectively. (A rough sketch of how such a question might be modelled follows below.)
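
To make S1 concrete, here is a minimal sketch of how a repository-backed question might be modelled. Every name here (RepoQuestion, the task types, the sample repo URL) is an illustrative assumption, not HackerRank’s actual schema.

```typescript
// Hypothetical sketch: anchoring an interview question to a real code repository.
// All names are illustrative assumptions, not HackerRank's actual schema.

type TaskType = "code-review" | "bug-fix" | "feature-build" | "system-design";

interface RepoQuestion {
  id: string;
  taskType: TaskType;
  // The repository snapshot the task is built on, pinned to a commit
  // so every candidate sees the same codebase.
  repoUrl: string;
  commitSha: string;
  // Files the candidate should start from, relative to the repo root.
  entryPoints: string[];
  // The problem statement shown to the candidate.
  prompt: string;
}

const reviewTask: RepoQuestion = {
  id: "q-101",
  taskType: "code-review",
  repoUrl: "https://github.com/example-org/storefront",
  commitSha: "3f9a2c1",
  entryPoints: ["src/cart/checkout.ts"],
  prompt: "Review this change set and flag correctness and readability issues.",
};
```

Pinning the commit is the key design choice in this sketch: it keeps the assessment fair by guaranteeing each candidate works against an identical snapshot of the codebase.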

S2: Develop the interview experience for hiring developers in an AI-first world.

  • The live interview experience will feature questions assessing candidates’ capabilities in:
    — Code review
    — System design (potentially in the same round or a following one)
    — Feature enhancement/new feature building leveraging AI (to demonstrate AI orchestration)
    — Debugging
    — Test case review/writing

S3: Design an AI Copilot experience with a capability controller

  • Building upon the core thesis, interviewers aim to witness candidates at their best, empowering them with available tools to explore their creativity and problem-solving approach.
  • The AI Copilot can assume various roles based on customer requirements, such as an onboarding assistant, a full-fledged copilot, or an AI guidance tool that does not generate code.
  • Therefore, the ability to control the AI’s capabilities becomes crucial, enabling the development of an AI interviewer for asynchronous processes.
  • Confidence in our capability to control the output directly impacts customer adoption. (A rough sketch of such a controller follows this list.)
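
As a thought experiment, here is a minimal sketch of what that capability controller could look like as configuration. The interface, role presets, and capability names are my assumptions for illustration, not the shipped design.

```typescript
// Hypothetical sketch of an AI capability controller.
// Interviewers toggle what the copilot may do, per interview.

interface CopilotCapabilities {
  generateCode: boolean;    // may the AI write code for the candidate?
  explainCode: boolean;     // may it explain existing code?
  suggestTests: boolean;    // may it propose test cases?
  answerQuestions: boolean; // may it answer free-form questions?
}

// Presets mirroring the roles described above.
const presets: Record<string, CopilotCapabilities> = {
  onboardingAssistant: { generateCode: false, explainCode: true, suggestTests: false, answerQuestions: true },
  fullCopilot:         { generateCode: true,  explainCode: true, suggestTests: true,  answerQuestions: true },
  guidanceOnly:        { generateCode: false, explainCode: true, suggestTests: true,  answerQuestions: true },
};

// Before a candidate request reaches the model, the controller rejects
// anything outside the enabled capabilities.
function isAllowed(
  request: keyof CopilotCapabilities,
  caps: CopilotCapabilities
): boolean {
  return caps[request];
}

console.log(isAllowed("generateCode", presets.guidanceOnly)); // false
```

The same switchboard is what would let the copilot later be repurposed as an AI interviewer for asynchronous tests: the roles differ only in which capabilities are enabled.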

S4: Develop an AI Interviewer to guide candidates through asynchronous tests.

  • Addressing the significant dissonance among developers regarding take-home LeetCode-style questions, we aim to provide relevant, real-world case-study questions leveraging our code repository infrastructure.
  • This approach enables candidates to solve meaningful problems and gain insight into a developer’s life within the organization.
  • To streamline the process, our hiring experience includes an AI interviewer to assist candidates throughout, breaking the stereotype of take-home assessments and transitioning to AI interviews.

S5: Create an AI evaluation system to enhance evaluation quality in the process.

  • Addressing the subjectivity of the current evaluation system in synchronous processes, we aim to improve evaluation quality.
  • Our asynchronous tests are currently evaluated in a binary fashion, solely based on task correctness, prompting increased feedback from customers on the need for qualitative evaluation.
  • The proposed evaluation framework will guide graders on qualitative aspects like code quality, communication skills, and problem-solving approach.
  • Additionally, it will facilitate structured feedback submission by providing a summary of the feedback and the interview, while suggesting grading rubrics and evaluation parameters for softer/qualitative feedback, ultimately producing a holistic performance summary. (A rough sketch of the rubric’s shape follows this list.)
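
To illustrate the shape of such a framework, here is a minimal sketch of a rubric and a structured feedback record. The dimensions mirror the qualitative aspects named above; the 1–5 scale and all field names are assumptions.

```typescript
// Hypothetical sketch of a structured evaluation rubric.
// Dimensions mirror the qualitative aspects above; the 1-5 scale is an assumption.

type Dimension = "codeQuality" | "communication" | "problemSolving";

interface RubricScore {
  dimension: Dimension;
  score: 1 | 2 | 3 | 4 | 5;
  evidence: string; // grader's note tying the score to something observed
}

interface EvaluationSummary {
  candidateId: string;
  scores: RubricScore[];
  overallSummary: string; // AI-suggested, grader-editable
}

// A simple roll-up; a real system would likely weight dimensions per role.
function averageScore(summary: EvaluationSummary): number {
  const total = summary.scores.reduce((sum, s) => sum + s.score, 0);
  return total / summary.scores.length;
}

const sample: EvaluationSummary = {
  candidateId: "cand-42",
  scores: [
    { dimension: "codeQuality", score: 4, evidence: "Caught the off-by-one in the review task." },
    { dimension: "communication", score: 3, evidence: "Explained trade-offs, but only when prompted." },
    { dimension: "problemSolving", score: 5, evidence: "Reframed the bug as a state-management issue." },
  ],
  overallSummary: "Strong reviewer; nudge towards more proactive communication.",
};

console.log(averageScore(sample)); // 4
```

Requiring an evidence string per score is the point of the sketch: it nudges graders from binary correctness towards qualitative, defensible feedback.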

Methods

After establishing our strategic pillars, we defined the scope and rollout plan. We decided to focus on S1, S2, and S3 for our Minimum Lovable Product (MLP).

Scope: S1 + S2 + S3 -> Build an interview experience for hiring developers in the AI-first world.

  • Sync Interviews: We will test skills such as code review, AI orchestration for feature building or enhancement, system design, and debugging.
  • These tasks will be built on a code repository to simulate a real-world experience.
  • Candidates will have access to a Copilot to assist with solving tasks.
  • Interviewers will have the flexibility to control the AI’s capabilities during the interview.

By focusing on S1, S2, and S3, the MLP will be tailored for a front-end engineer in a live interview scenario. The second milestone will be creating the async test for the same role.

Scope

Our approach involves understanding the product’s key value propositions and aligning them with the strategic objectives, similar to the Lean Startup methodology which emphasizes creating a product based on validated learning.

Goal 🧐: To ideate on how these functional requirements will work together and interconnect, we began with comprehensive user flows, much like creating customer journey maps in UX design. These flows will guide users in achieving their objectives.

Instead of tackling everything at once and achieving only 80% effectiveness, we aim to address each user persona’s specific problems one at a time, a practice reminiscent of Design Thinking.

This focused approach ensures we deliver high-quality solutions that meet the unique needs of each user persona.

We picked up the face-to-face (F2F) interview workflow first.

Forces of Progress

After defining the workflow priorities, we delved into the forces of progress for each persona involved.

We analyzed the demand generation and reduction forces, identifying the pushes and pulls that shape the Jobs to Be Done (JTBD) framework. These forces are crucial in understanding customer motivation and demand.

Recent case studies have frequently referenced the four key forces: pushes, pulls, habits, and anxieties. These elements work together to generate and shape customer demand. The JTBD framework’s focus on customer motivation and the interplay of these forces distinguishes it from other theories of innovation and design processes.


Solutioning

Goal: Our goal is to make the interview process realistic and relatable for both interviewers and candidates. For interviewers, this means using a code repository that mirrors the company’s structure, ensuring they hire the right individuals. For candidates, it means providing a true representation of what they can expect in their role.

Philosophy for Zero-to-One Products:

In developing zero-to-one products, my approach is not to reinvent the wheel. Early-stage start-ups often go through phases of rapid testing and iteration to refine their feature sets. Therefore, it’s best to lean heavily into established best practices for UI/UX frameworks. While I value adding moments of surprise and delight, I strategically prioritise feature functionality and user experience in the early stages.

Given the project’s significant horizontal impact across HackerRank, we maintained the base flows for the MLP, focusing on validating the flow and ensuring it resonated with users.

Interviewer User Flow:

1. Interview Creation Flow:

We mapped out all user flows in HackerRank’s interview creation process, categorizing them into modules. Users primarily followed three routes: ATS, HackerRank for Work (HRW), and API Integrations. To base our approach on data rather than hypothesis, we explored the following questions:

How are our users (mostly pilot customers) creating interviews today? Is it through ATS, HRW, or Integrations?

What are the current user behaviors and preferences?

Our findings revealed that:

100% of interviews are being created through HRW by our pilot customers.

This led us to prioritize the HRW flow, ensuring we provided an option rather than forcing users to reinvent their processes.

Consequently, the onboarding flow remained largely unchanged in terms of design and user experience.

2. In Interview Experience:

This was our main focus area for the quarter, aimed at creating a smooth and progressive interview experience.

• Code Review Experience:

As per our thesis:

Code review would become a critical skill for developers working in the AI-first world.

As stated, a core design principle for this project was to avoid reinventing the wheel. I examined how GitHub, GitLab, and other competitors approach code reviews and integrated those insights with the anxieties of our personas, both interviewers and candidates.

Code Review Experience Design

• Observation mode:

One crucial insight from my 4+ years of experience on the Interviews team (aka CodePair) is the importance of collaboration.

In code review scenarios, interviewers often experience significant anxieties, such as:

“I don’t know where the candidate is in the file.”

“What is the candidate thinking while commenting on a line?”

To address these concerns, we refined an existing feature — Observation Mode. This involved tackling several design and experience complexities, including:

  • Functional Differences: Ensuring consistency between the interviewer and candidate screens, particularly in terms of grading rubrics and functionality.
  • Interviewer Notes: Integrating interviewer notes on the right side of the screen to streamline the evaluation process.

By addressing these complexities, we aimed to create a more collaborative and transparent code review experience, reducing interviewer anxieties and improving overall interview effectiveness.
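
For intuition, here is a minimal sketch of the kind of presence signal Observation Mode depends on: the candidate’s editor broadcasts cursor position so the interviewer’s view can follow along. The event shape and transport are assumptions, not our actual protocol.

```typescript
// Hypothetical sketch: broadcasting candidate cursor position for Observation Mode.
// The event shape and transport are assumptions, not the actual protocol.

interface PresenceEvent {
  userId: string;
  file: string;   // file the candidate currently has open
  line: number;   // cursor line, so the interviewer's view can follow
  column: number;
  at: number;     // epoch ms timestamp
}

// Throttle updates so the channel is not flooded on every keystroke.
function makePresenceEmitter(
  send: (e: PresenceEvent) => void,
  minIntervalMs = 200
) {
  let lastSent = 0;
  return (event: PresenceEvent) => {
    const now = Date.now();
    if (now - lastSent >= minIntervalMs) {
      lastSent = now;
      send(event);
    }
  };
}

// Usage: wire the emitter to any transport, e.g. a WebSocket.
const emit = makePresenceEmitter((e) => {
  // ws.send(JSON.stringify(e)); // assuming a connected WebSocket `ws`
  console.log("presence:", e.file, e.line);
});

emit({ userId: "cand-42", file: "src/cart/checkout.ts", line: 128, column: 7, at: Date.now() });
```

Surfacing “where the candidate is” this way directly answers the first interviewer anxiety quoted above.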

• AI Orchestration:

Nailing the IDE experience for the AI Orchestration task was essential. Our goal was to make it as close to the local VS Code environment as possible. This issue has long been on our backlog, especially in Interviews (CodePair), where multiple panels often lead to excessive cognitive load.

To address this, we applied the principle of reducing cognitive load by integrating question panels directly into the VS Code IDE rather than having separate ones. This approach streamlined the UI, making it cleaner and more intuitive.

We chose to use a third-party IDE, VS Code, because it closely mirrors the local development environment users are most familiar with. This decision aimed to provide a seamless and immersive experience without altering much of the user’s accustomed workflow.

Additionally, we placed the AI Assistant on the left, in line with popular AI-assisted editors like Copilot, to ensure familiarity and ease of use.

Before and after: embedding the question description inside VS Code
AI Assistant
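
To ground the panels-inside-the-IDE idea above, here is a minimal sketch of how a question description could be contributed as a native VS Code side panel through the extension API, rather than as a pane bolted on outside the editor. This is my illustration of the pattern, not HackerRank’s actual implementation; the view id and HTML are placeholders.

```typescript
// Hypothetical sketch: surfacing the question description as a native VS Code
// side panel via a webview view, instead of a separate external pane.
// Illustrates the pattern only; not HackerRank's actual implementation.
import * as vscode from "vscode";

class QuestionPanelProvider implements vscode.WebviewViewProvider {
  constructor(private readonly questionHtml: string) {}

  resolveWebviewView(view: vscode.WebviewView): void {
    view.webview.options = { enableScripts: false };
    view.webview.html = this.questionHtml; // rendered problem statement
  }
}

export function activate(context: vscode.ExtensionContext): void {
  const html =
    "<html><body><h2>Task</h2><p>Add pagination to the orders list.</p></body></html>";
  context.subscriptions.push(
    // "nextgen.questionPanel" must match a view id declared under
    // contributes.views in package.json (assumed here).
    vscode.window.registerWebviewViewProvider(
      "nextgen.questionPanel",
      new QuestionPanelProvider(html)
    )
  );
}
```

Keeping the statement inside the editor’s own panel system is what removes the extra pane, and with it, the cognitive load the backlog item complained about.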

• Interviewer Notes:

Consider this, if you regularly conduct interviews or have done so even once:

how often have you filled out the actual scorecard during the interview itself?

The percentage is likely very low. This was the hypothesis we wanted to validate with NextGen. In envisioning the ideal interview experience, the scorecard should not be part of the live interview process but rather completed afterward.

Our assumption is based on the principle that it is challenging to perform two tasks requiring equal attention simultaneously. By shifting the scorecard completion to post-interview and having a markdown text area, we aim to enhance focus and accuracy during the evaluation process.

Obstacles

  1. Laws or internal policies related to AI can slow down the pace of pilot progress.
  2. Identifying the early adopters of change could be challenging; most enterprises would be followers or laggards.

Metrics

  1. Success: 50+ interviews successfully run in this new format across customers — Q2
  2. Customer feedback: no detractors in the pilot — Q2
  3. Metrics on AI response, behavior, etc. will also come in — Q2
  4. DevEx score: 4+/5 — Q3

Learnings from Q2:

  1. Interest and Variability: There is interest in the next-gen interview process, but the implementation varies across companies. Common processes include:
  • Screening + 2–3 technical rounds
  • Async tests + short interviews
  • Only interviews for senior hires

2. Preference for Real-World Testing: Companies are moving away from LeetCode-style tests, favoring real-world, project-based assessments.

3. Flexibility Required: The hiring process needs to be flexible to accommodate different scenarios. Changes should cater to both front-end and back-end roles comprehensively.

4. Low AI Adoption: Adoption of AI in the SDLC is low among current customers, posing a barrier to AI-based tasks. However, signals of AI adoption include:

  • Companies like Goldman Sachs enabling GitHub Copilot from day one.
  • Services companies like Accenture and TCS investing significantly in AI readiness.

5. Intuitive AI Assistant: Candidates prefer an in-line AI assistant over a chat interface for a more intuitive experience.

6. Enterprise Concerns:

  • Brand Safety: Ensuring AI does not introduce bias or unintended favoritism.
  • Internal Policy: Lack of clear AI usage policies due to safety and data protection concerns.
  • Fear of Lack of Control: Customers expect demonstrated control over AI outputs to mitigate fears of recklessness.

P.S. This project is set to launch by the end of Q2. We are eagerly awaiting the results. Fingers crossed!

Thank you for reading!

To see my work, check out my website🎨
Or if you just want to have a chat, hit me up on LinkedIn 💬
