How we hosted a custom hackathon registration portal for 8 cents
Hello, world! Adi is forcing me to introduce myself so: I’m Daniel and I work on tech at Freetail. OK now that that is out of the way…
For the past couple of years, we have used HackMIT’s Quill to manage applications for our events and other logistics. Quill has served us and dozens of other events well over the years, but the two main issues we kept running into were difficulty with customization and a high hosting cost. Whenever we needed to add a custom registration question, we had to touch multiple files, and somebody would inevitably forget one of them, leading to situations where questions appeared on the application but not on the admissions page, making them effectively useless. In addition, we found that the cheapest EC2 instance that could comfortably run Quill cost $1.50 a day. And since AWS would report our Quill instances as having degraded health, we kept a minimum of two instances running at any given time, which added up to $3 a day. That doesn’t even include the $15 a month for Elastic Load Balancing, or database costs. There are definitely cheaper options at the possible cost of some fault tolerance, but the core issue was that Quill was built in 2015, and the technology landscape has changed vastly in the eight years since.
Over the years we have seen the meteoric rise of something called serverless computing. The term “serverless” has been abused by marketing to mean any number of things now, but the gist is that your backend runs in lightweight containers that are provisioned on demand (like when your website gets a request) and decommissioned when they are no longer needed. This is in contrast to the traditional model where you buy a certain amount of computing resources to run your application code 24/7. The serverless model gives us two advantages:
- Infinite upwards scaling: the cloud vendor automatically starts more containers whenever your traffic spikes, so it takes zero effort on your end to keep your backend from crashing. With a traditional server, you would have to anticipate high traffic ahead of big launches and purchase extra capacity in advance, and predicting demand is hard: you’ll invariably either overload your servers and suffer degraded performance or end up paying for unused capacity.
- Infinite downwards scaling: just like how an increase in traffic triggers additional containers to be spawned, a decrease in traffic will prompt those containers to be discarded. In fact, when your site isn’t receiving any visits, a serverless deployment can scale all the way down to zero. That means instead of paying for an entire machine when your website isn’t getting any hits, you don’t pay at all.
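To make those two advantages concrete, here’s roughly what a serverless endpoint looks like in SvelteKit, the framework Rodeo ended up on (more on that later). This is a minimal sketch, not actual Rodeo code, and the route path is made up:

```typescript
// src/routes/api/health/+server.ts: a hypothetical SvelteKit endpoint.
// On a serverless platform, this handler is bundled into a function that the
// vendor provisions on demand and decommissions once traffic dies down.
import { json } from '@sveltejs/kit';
import type { RequestHandler } from './$types';

export const GET: RequestHandler = async () => {
	// No long-lived state here: each invocation may land in a fresh container.
	return json({ status: 'ok', servedAt: new Date().toISOString() });
};
```

Note that nothing in the handler assumes a long-running process; that statelessness is exactly what lets the platform scale it in both directions.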
I consider the second point to be by far the most compelling aspect of serverless computing for the typical hobbyist or small organization, as a low-traffic site pays nothing while it sits unused. And although we like to imagine our hackathons are oh so grandiose in popularity and scale, even for a hackathon the size of HackTX, which consistently gets a four-digit application count, our backend on average receives nowhere near enough load to saturate a Raspberry Pi. Let’s do some math: if we get 1000 applications over 30 days, and each applicant makes 100 requests on average (not an unrealistic number with autosave), that works out to 1000 * 100 / 30 / 24 / 60 ≈ 2.3 requests a minute, each of which probably takes less than a hundred milliseconds to serve. That means your CPU is sitting idle over 99% of the time. I’ve noticed that many people severely underestimate how powerful a single well-configured server running code that’s not accidentally quadratic can be. Did you know that Hacker News runs off not just a single machine, but a single *core* of a ten-year-old CPU?
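If you want to sanity-check that arithmetic, it fits in a few lines (the 100 ms per request is my rough guess from above):

```typescript
// Back-of-envelope load estimate for a month-long application cycle.
const applicants = 1000;
const requestsPerApplicant = 100; // plausible with autosave
const days = 30;

const requestsPerMinute = (applicants * requestsPerApplicant) / (days * 24 * 60);
console.log(requestsPerMinute.toFixed(1)); // ≈ 2.3

// At ~100 ms of CPU time per request, the fraction of each minute spent busy:
const cpuBusyFraction = (requestsPerMinute * 0.1) / 60;
console.log(`${(cpuBusyFraction * 100).toFixed(2)}% busy`); // ≈ 0.39%, i.e. idle over 99% of the time
```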
Of course, this load is not evenly distributed; a more likely scenario is a spike of traffic when you first open applications, then a steady trickle throughout the application cycle and perhaps another spike near the deadline. Then a flatline until the day of the event, when traffic rises again and stays at an elevated level until closing ceremony as people check the schedule and do other shenanigans on the live site. But that’s the beauty of serverless: since it automatically scales your infrastructure for you in both directions, it is priced as if your traffic *were* predictable; that is, you’ll never have to pay for overprovisioned capacity. Serverless also allows cloud vendors to offer more “generous” free tiers. I put generous in quotes: compare Vercel’s offering of 100 GB-hours of serverless function execution per month with AWS’s offering of 750 hours of a t3.micro EC2 instance with 2 vCPUs and 1 GB RAM per month. Technically, AWS’s free tier offers 7.5 times more value (750 hours × 1 GB of RAM works out to 750 GB-hours, versus Vercel’s 100), but Vercel is far more flexible if your app isn’t being used 24/7, since the free tier quota can be spread across multiple projects.
How does that translate into numbers? Well, it turns out that over the complete lifecycle of our 100-person spring hackathon, our new registration portal built on top of serverless technologies, codenamed Rodeo, used a grand total of 1.8% of Vercel’s free tier. Yup, that’s right: we used 1.8 GB-hours out of 100 over the course of a month, with about 40,000 serverless function invocations. (As for bandwidth, we used 0.2%: 256 MB out of 100 GB, but that has nothing to do with the fact that it’s serverless.) Combined with the fact that Supabase generously offers a free 500 MB hosted Postgres instance, that means our costliest technology was AWS Simple Email Service: we spent a total of 8 cents from registration opening to closing ceremony to send roughly 800 emails. That’s roughly a thousandfold decrease from the $105 a month we were paying before!
So, how do you switch to serverless? I can’t provide a universal tutorial, since the process differs depending on your framework, but most popular frameworks have adapters that facilitate deployment onto AWS Lambda or another platform. If you’re starting a new project from scratch, I would recommend going with an opinionated full-stack framework like Next, Nuxt, or (as we chose for Rodeo) SvelteKit, as these provide sensible conventions for writing serverless functions. Platforms such as Vercel, Netlify, and Cloudflare Pages come with default configurations for common frameworks that make it easier to deploy your app.
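For instance, here’s what pointing a SvelteKit project at Vercel’s serverless runtime looks like; this mirrors the documented usage of @sveltejs/adapter-vercel, though your options may differ:

```javascript
// svelte.config.js: each server route gets bundled into a Vercel serverless function.
import adapter from '@sveltejs/adapter-vercel';

/** @type {import('@sveltejs/kit').Config} */
const config = {
	kit: {
		// Swap in adapter-netlify, adapter-cloudflare, etc. to target other platforms.
		adapter: adapter()
	}
};

export default config;
```

In practice, Vercel detects SvelteKit projects and applies this adapter automatically, so deploying can be as simple as connecting your Git repository.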
You’re probably thinking that there has to be a catch; otherwise everything would be on serverless. And you would be correct: although serverless is a great fit for a stateless REST API, it is poorly suited to applications with long-lived state, like a game server. In addition, you are giving up a certain degree of control, since you’re letting the vendor manage the OS and runtime for you. Finally, there is a brief delay whenever you invoke a serverless function after a period of inactivity, while the container boots up. This is known as a “cold start” and typically results in a 3–5 second delay on the first request to Rodeo after an extended period of inactivity. You will have to decide if these tradeoffs are acceptable.

For us at Freetail, serverless has vastly decreased our infrastructure costs, both in money and in the labor required to keep everything scalable and fault-tolerant. We have thus solved all of our cost problems in one fell swoop. As for customizability, we’re still working on building those features into Rodeo, but we have big plans for it, including fully customizable application questions and statistics, an integrated live site and Hacker ID, and automatic handling of sponsors and volunteers. If you’re interested, let us know and we might provide sneak peeks of development! (Or you can see for yourself at our GitLab repo.)