On Assessments

Luce Liu
Published in nwPlus
Dec 27, 2017 · 5 min read

In light of the inquiries our organizing team has received in the last week regarding our application assessment process, we felt that it was important to be publicly transparent and give insight into how applications this year were handled internally.

Before we dive into that, I’d like to note that applications for nwHacks 2018 were, without a doubt, the most competitive yet. That isn’t only due to our natural growth as an annual tech event, but also because this was the first year we had the budget to offer flight reimbursements. Between 15% and 20% of our 2000+ applicants were from outside of BC, and they collectively raised the bar for everyone. (And despite the fact that we were only able to offer reimbursement to a very limited number of them, over 80, excluding the UW bus folks, have RSVP’d “Going.” Props to you guys.)

A key objective for our 2018 event was to formalize our application assessment process and eliminate bias. In November, we completely reworked our system to be significantly more methodical and objective. Unlike in previous years, when each organizer was assigned a subset of applications to assess in full, this winter we implemented an assembly-line system that allowed us to be (1) quantitative, (2) consistent, and still (3) forgiving.

(1) Quantitative

Applications were assessed on two overarching sections: Portfolio (GitHub/personal site, and resume/LinkedIn) and Written Response (2 questions). Each of the 4 components was scored out of 5, giving each section a score out of 10. The two section scores were then combined into an overall score out of 10 (to up to 2 decimal places), with the balance between the sections determined by the strength of the Portfolio (more on this under (3) below). We automatically accepted all applicants who scored above a certain threshold.
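For illustration, that structure looks roughly like the Python sketch below. The threshold and the portfolio weight shown here are placeholder numbers, not the actual values we used:

    # Rough sketch of the scoring structure described above.
    # Component scores are each out of 5; the threshold and the
    # portfolio weight are placeholders, not our real values.

    def section_score(component_a, component_b):
        """Two components out of 5 add up to a section score out of 10."""
        return component_a + component_b

    def overall_score(portfolio, written, portfolio_weight):
        """Blend the two section scores (each out of 10) into an overall
        score out of 10, kept to 2 decimal places. How portfolio_weight
        was chosen is covered under (3) below."""
        return round(portfolio_weight * portfolio
                     + (1 - portfolio_weight) * written, 2)

    ACCEPT_THRESHOLD = 7.5  # placeholder value

    portfolio = section_score(4, 3)  # GitHub/personal site, resume/LinkedIn
    written = section_score(3, 4)    # the two written-response questions
    score = overall_score(portfolio, written, portfolio_weight=0.6)
    auto_accepted = score > ACCEPT_THRESHOLD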

(2) Consistent

Each of the 14 organizers involved with application assessment was responsible for assessing a single component, e.g. resume. Because it wasn’t practical (or nice) to assign one person to mark 2000 resumes, we split into teams and developed a highly comprehensive assessment package/rubric for reference, which included specific criteria to look for as well as example responses (for the long answer questions) taken directly from our application data.

(3) Forgiving (for all applicants)

We acknowledge that first-timers generally have less technical experience, while hackathon veterans tend to submit shorter long-answer responses and let their portfolios do the talking. So we weighted the 2 long-answer questions differently for each applicant, depending on how strong their portfolio was (i.e. resume/LinkedIn and GitHub/personal site). Higher portfolio scores resulted in less weight placed on the written responses, while weaker portfolios increased the response weighting.

Moreover, in the Portfolio section, our system took the better of the 2 scores for each component. For instance, if an applicant’s LinkedIn was a 3 but their resume was a 4, we only used their resume score. Our aim was to be forgiving of applicants who presented themselves more effectively on some platforms than on others, and to account for those who had a GitHub but no personal site, or vice versa.
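To show how those two adjustments might fit together, here is another rough sketch. The weighting curve is invented for illustration (we’re not publishing our exact mapping), but it captures the idea that a stronger portfolio shifts weight away from the written responses:

    # Sketch of the "forgiving" adjustments: take the better score within
    # each Portfolio pair, and let a stronger Portfolio reduce the weight
    # placed on the written responses. The weighting curve is illustrative only.

    def portfolio_score(github, personal_site, resume, linkedin):
        """Each Portfolio component (out of 5) counts the better of its two
        sources, e.g. LinkedIn = 3 and resume = 4 counts as a 4."""
        return max(github, personal_site) + max(resume, linkedin)

    def portfolio_weight(portfolio):
        """Hypothetical mapping from Portfolio strength (out of 10) to its
        weight in the overall score, clamped to the range 0.5-0.8."""
        return min(0.8, max(0.5, 0.5 + portfolio / 30))

    def overall(portfolio, written):
        w = portfolio_weight(portfolio)
        return round(w * portfolio + (1 - w) * written, 2)

    # A veteran with a strong portfolio and terse written answers...
    print(overall(portfolio_score(5, 0, 4, 3), written=5))  # about 8.2
    # ...and a first-timer with a thinner portfolio but thoughtful answers.
    print(overall(portfolio_score(2, 0, 3, 2), written=8))  # about 6.0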

Upon post-assessment review, many trends surfaced from the pool of applicants whom we couldn’t admit. Some common application pitfalls we noticed were:

  • Experienced hackers not including a link to GitHub or personal site
  • GitHub accounts with (close to) zero public repo contributions, or contributions being largely class notes/assignments
  • Noticeably low-effort responses to long-answer questions (for some less experienced applicants, this made the difference between being above or below the acceptance threshold)

At this point, it’s probably more than apparent that we greatly value not only concrete evidence of technical skills/experience, but also contributions to open source projects of all scales. This is not unique to our team or event: the collaborative, ‘DIY’ spirit is an inherent and defining quality of the hackathon community as a whole. In our assessment process, we organizers aimed to identify the individuals who would contribute most positively to our 24-hour microcosm. In hindsight, we recognize that these values may not be shared or understood by all applicants, and we will strive to clarify our criteria in future years.

Another important point regarding acceptances concerns our two application rounds. Early on, we decided that the second round would be reserved for filling whatever spots were left over from the first, and we made sure to market it as such. With an overwhelming 89% RSVP rate from our first round (for comparison, last year’s rate was 77%), we quickly ran out of available spots. There were numerous more-than-qualified applicants whom we could not invite simply because they applied after we filled up.

To applicants who were not admitted and who fall under that category: we urge you to apply as early as you can next year, and to seize opportunities to improve your technical portfolio in the meantime, e.g.:

  • LumoHacks (Canada’s largest health hackathon, September)
  • EduHacks (Education-oriented hackathon, September/October)
  • UBC Hacks (Local Hack Day, Dec 2)
  • UBC Launchpad (Build software in teams for a variety of platforms/topics, from mobile apps to cryptocurrency)
  • UBC Code the Change (Collab with other students to build software for non-profits)
  • Emerging Media Lab (Gain hands-on experience with cutting-edge tech such as VR/AR)

As well, we encourage everyone who has read this far (thanks!) to let us know if you have any further questions regarding our 2018 application assessments.

With all this being said — our team is humbled by and grateful for the substantial attention and interest our event has received this year, and we only wish we had the means to accommodate more attendees. At the end of the day, we’re just a group of UBC students who are trying to better the Lower Mainland’s student tech community in whatever capacity we can. Though each iteration of our event is better than the last, we are not exempt from human error and as a team continue to learn from our mistakes (next machine learning project idea: hackathon-planning bot). We hope this has been helpful, and that you’ll stick around and keep learning with us.

TL;DR

  1. We made our 2018 assessment process much more objective and, while reviewing applications, prioritized concrete evidence that the individual embodied the spirit of hackathons.
  2. Our event reached full capacity in early December with an unprecedented RSVP rate from our first round of acceptances; we were unable to invite qualified applicants who submitted later.
