Redefining Candidate Reports in HackerRank
A user-centered approach to enhance recruiter decision-making
--
Overview
HackerRank is a comprehensive platform that streamlines developer hiring by offering a suite of products designed to assess candidates effectively. I lead the design for our flagship product, Screen, which enables organizations to evaluate candidates quickly through skill-based assessments.
Sound familiar? On LinkedIn, most job postings attract hundreds of applications within days. Screen's core value is helping customers evaluate candidates efficiently, especially at high volumes such as university hiring.
At HackerRank, we firmly believe in prioritizing skills over pedigree in the hiring process. We advocate that a candidate’s abilities should take precedence over their educational background or previous job titles.
Once candidates complete their assessment, we generate reports that serve as the definitive source of truth regarding their performance. We provide two types of reports: summary and detailed, with this case study focusing specifically on the summary report.
The primary goal of the summary report is straightforward: it should empower recruiters and hiring managers to make quick, informed decisions about whether a candidate should advance to the next stage of the interview process. This streamlined approach enhances efficiency and ensures that the best candidates are identified quickly.
My Role
I drove the entire design process, from initial concept to final release in July 2023, collaborating with a cross-functional team of one product manager, one product analyst, and a team of engineers to bring the project to life. Post-launch, I spearheaded the effort to analyze the impact of the new features and facilitated regular sync-ups to ensure continuous improvement.
Why revamp such a major flow — especially when users hate change?
Identifying Key Problems
Let’s start from the beginning. Up until early 2023, this is what our summary report looked like:
The summary report was flagged quite a few times by our go-to-market (GTM) teams, so we got on calls with customers to find out exactly what was not working for them.
These are some of the common things we heard across these conversations:
Taking these customer quotes into account, we categorized the problems broadly into three buckets:
- Cognitive Overload: The old summary report wasn't truly a summary. It required excessive scrolling, especially on lower-resolution screens, and more than 50% of our users were on displays below 1920x1080.
- Poor Feature Discoverability: Key features, such as the candidate timeline and plagiarism indicators, were hard to find or easy to miss, even though these signals factor directly into an evaluator's decision.
- Outdated UX and Slow Performance:
(1) Evaluators took days to reach a decision on a candidate's performance, which slowed down the whole interview pipeline.
(2) The report took about 8 seconds to load in a new tab. While this could be written off as a purely engineering task, I saw room for improvement from a UX standpoint as well (a minimal sketch of one such idea follows this list).
(3) In the process, we also wanted to upgrade the report to the latest design system.
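One way to make the load feel faster even before the backend is optimized is a perceived-performance pattern: start fetching a candidate's summary data on hover, so the request is already in flight by the time the recruiter clicks. Below is a minimal TypeScript sketch of that idea; `fetchSummaryReport`, the endpoint path, and the `SummaryReport` shape are all hypothetical illustrations, not HackerRank's actual API.

```typescript
// Hypothetical sketch: prefetch a candidate's summary on hover so the
// report feels instant on click. Names and endpoint are assumptions.

interface SummaryReport {
  score: number;
  plagiarismFlagged: boolean;
  questions: { id: string; score: number }[];
}

// Cache in-flight requests so hover and click share a single network call.
const reportCache = new Map<string, Promise<SummaryReport>>();

async function fetchSummaryReport(candidateId: string): Promise<SummaryReport> {
  const res = await fetch(`/api/candidates/${candidateId}/summary`); // hypothetical endpoint
  if (!res.ok) throw new Error(`Failed to load report: ${res.status}`);
  return res.json();
}

// Kick off the request as soon as the recruiter hovers over a candidate row.
function prefetchReport(candidateId: string): void {
  if (!reportCache.has(candidateId)) {
    reportCache.set(candidateId, fetchSummaryReport(candidateId));
  }
}

// On click, the data is usually already in flight or resolved.
function openReport(candidateId: string): Promise<SummaryReport> {
  prefetchReport(candidateId); // no-op if hover already started it
  return reportCache.get(candidateId)!;
}
```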
Business Goals
- Screen contributes to ~75% of HackerRank’s revenue.
- Improving the summary report was critical, as it’s a core component of the candidate evaluation process — an essential part of the user journey that significantly impacts customer retention.
- Enhancing this experience would also align with the company’s broader goal of raising the product’s quality and design standards.
Early Exploration
I began by creating low-fidelity designs to align on information architecture, prioritizing essential elements and handling plagiarism signals effectively. Early iterations focused on translating ideas quickly to gather feedback and iterate, setting the stage for deeper exploration.
Each iteration addressed most of the issues above, though some challenges remained unresolved. While a PRD can be critiqued thoroughly on its own, pairing it with visuals made it far easier to guide both product and engineering toward a shared direction.
Several Failed Ideas Later…
Why color within the lines when you can paint the whole canvas?
The Breakthrough Moment💡
After several iterations that didn't quite hit the mark, I reflected on the feedback from my stakeholders and realized we needed to break with convention and rethink the report's format altogether. Instead of a full-page view, I proposed a drawer-style summary that would live within the same page. The approach was risky, with real concerns about losing information and context, but I saw it as a chance to condense critical data into a concise, easy-to-access format.
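To make the drawer idea concrete, here is a minimal React sketch, assuming a typical single-page front end; every component and prop name is illustrative, not the shipped implementation. The key property is that opening and closing the summary never navigates away, so the recruiter keeps the candidate list in view.

```tsx
// A minimal sketch of the in-page drawer concept, assuming a React front end.
// All names here are illustrative, not HackerRank's actual components.
import { useState } from "react";

type Candidate = { id: string; name: string };

export function CandidateList({ candidates }: { candidates: Candidate[] }) {
  const [openId, setOpenId] = useState<string | null>(null);

  return (
    <div>
      {candidates.map((c) => (
        <button key={c.id} onClick={() => setOpenId(c.id)}>
          {c.name}
        </button>
      ))}

      {/* The summary opens as a drawer over the same page, so the recruiter
          never loses the context of the list they are triaging. */}
      {openId && (
        <aside role="dialog" aria-label="Candidate summary">
          <button onClick={() => setOpenId(null)}>Close</button>
          {/* Sections: Performance · Attempt Activity · Candidate Info */}
        </aside>
      )}
    </div>
  );
}
```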
As I began wireframing, my intention was to provoke early feedback and, if necessary, stir up some healthy controversy. By sharing rough drafts early, I aimed to shift feedback left, surfacing differing opinions and potential challenges sooner rather than later, so that critical issues were resolved before we moved into higher fidelity.
High Fidelity✨
With alignment from stakeholders, we moved to high-fidelity designs, refining the look and feel to match our new visual language.
While these options were a decent start, we fine-tuned them to balance strong visual design with WCAG AA accessibility (a11y) compliance, all while staying anchored to the core problems. The final design divided the content into three clear sections:
- (1) Performance: Displays score, plagiarism activity, and question-level details.
- (2) Attempt Activity: Shows the candidate's test-taking process, including timestamps for proctoring information.
- (3) Candidate Info: Provides basic candidate information like email and other metadata.
Impact
- The time taken to make a decision on a candidate (marking the candidate's status or creating an interview) dropped by 36%.
- The timeline feature is used more than 50K times per month.
We also recorded feedback through an NPS survey. Some quotes from customers:
“This looks awesome! Clicking Create Interview and having it open a new tab was a nice change too!”
“Good update. Would prefer enlarging proctor images though.”
“Way cleaner and snappier!!”
We gathered further feedback directly from customers on ways to improve their workflow and implemented it in the following quarter. 🎉
Takeaways
The redesigned summary report pushed me to rethink conventional approaches. It underscored the importance of being willing to take calculated risks to achieve better outcomes, and it highlighted the value of close collaboration with cross-functional teams, where diverse perspectives contributed to a well-rounded solution!