A Hackathon’s “Hybrid” Lie
I write this in the hope that hackathons can become clearer, more transparent, and more considerate of the expectations they set when they call themselves hybrid.
Regardless of who I am, my role, and my inherent bias as a competitor, I hope to speak on behalf of all hackathon participants. That said, some context about me: I am a full-stack developer with four years of focused practice, and within the past four months I have attended four hackathons with the goal of winning.
The specific hackathon and the names of the projects will be kept anonymous, and all statements are made in relation to my previous hackathon experiences.
First of all, I would like to give MLH its deserved appreciation for making hackathons more accessible and for facilitating involvement in the hacker community. That said, the case I raise could be loosely tied to MLH's management of the hackathons it supports.
The case I bring up involves a hackathon I recently attended that marketed itself as a "hybrid hackathon." A hybrid hackathon is one that holds an in-person event but is also open to remote competitors, who are to be considered equals in the competition, and remote attendees. This hackathon showed observable flaws and mismanagement from the very beginning. Its site crashed so frequently that one remote applicant's participation was left uncertain until the final day, because the necessary information could not be retrieved from the crashed site. Its information delivery was sub-par overall; the website, for example, provided the bare minimum of information (personally, I attended only because of its exceptional sponsors). Lastly, the organizers provided a means of communication for the hackathon (through Slack) a mere day before the event began. These three examples, though not exhaustive, already demonstrate a clear lack of care and consideration for virtual hackers, who depend disproportionately on these tools.
What truly demonstrated the hackathon's partiality toward one kind of participant, and its disregard for virtual hackers, was the homogeneity of its awards. I, as a competitor, understand that I carry an outsized bias; nonetheless, the following facts are too obvious to ignore. First, and most importantly, 11 of the 12 participants on the 3 teams awarded the "overall" prizes were in-person participants, many of them students at the host institution. This would be perfectly fine, even for a hybrid hackathon, if the projects had earned their awards on merit alone. That was not the case.
One of the three projects, which I will refer to as "A", fully deserved a spot on the podium, but the other two, "B" and "C", were much less impressive. For contrast, I will also reference a fourth project, "D". The judging criteria, according to the hackathon, consisted of Creativity, Technical Complexity, and Marketability/Flexibility. Creativity is subjective by nature, so I won't go into it. Project A had serious marketability: it appeared to solve a real problem, and to solve it effectively, which made it both marketable and useful. B and C had useful goals but were built too simply to provide any real value. D had useful goals like B and C, but paired them with advanced methods and technologies, providing substantially more marketability and flexibility.
The final criterion, technical complexity, is what left me aghast. Project A's complexity was justifiably high enough for an award, if only because the team used a technology that is not readily available. B and C were rudimentary at best. C in particular is a mystery: it received an award with only a barebones technology stack, the equivalent of which is present in most projects by default. As a point of comparison, D used that same stack merely as a base; on top of it, the team developed an AI model and linked multiple datasets into the project, providing abundantly greater value.
In summary, only one of the three "overall winners" deserved to be on the podium. When 11 of the 12 recipients were in-person attendees, it is telling that something beyond merit was at play. Although I did not see much of the rest of the competition, I have no doubt that there were many projects like D that were far more deserving than the lesser two recipients.
It is not my intent to ridicule the aforementioned projects; I mention them to make a point, as they are the outputs of a system that frankly offended and disrespected its virtual attendees. In retrospect, it appears the hackathon broadened its participation to virtual attendees only to inflate its attendance numbers, while giving nothing back. I hope hackathons learn from this and are more considerate when promoting themselves as "hybrid," so as not to dismay their virtual attendees and lose their trust, as this hackathon did with me.