A few days ago, we launched our new Progressive Web App (krayx.com). Krayx is a personal finance application that gives you valuable insights into your finances. We did this without setting up any servers. We didn’t have to worry about regions or uptime. Instead, we could focus on delivering the best experience to our users. In this article, we’ll explain how it works and why we love it.
Disclaimer: we wrote this article based on our own experience with Cloudflare Workers. We did not get paid by Cloudflare, nor are we experts in this field.
Cloudflare Workers and the serverless paradigm
If you have ever set up a server, you know how much time you lose setting up the machine, securing it, and keeping it updated.
But of course, you don’t want your service to be offline when your server fails, so you need more servers. Now you need to set those up too, and make sure they can take over when the first one fails.
For a startup like ours, spending so much time on managing servers is expensive. You want to be able to iterate fast: build a proof of concept to validate your idea, and scale up when you attract more users.
All this is possible with the serverless paradigm, also called Function-as-a-Service (FaaS), which allows you to focus on writing code instead of managing infrastructure.
Let’s start deploying our static website
Our Progressive Web App runs entirely in the browser on the client side. We don’t use techniques like server-side rendering or templating engines. All users receive the same static code.
Cloudflare Workers allows you to use a storage bucket (e.g. Amazon S3) to fetch static files and serve them to your users. You can use features like the Cloudflare edge cache to make future requests go even faster.
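The fetch-and-cache pattern can be sketched as follows. The bucket URL, the `Map`-based cache, and the injected `fetchOrigin` function are simplified stand-ins for illustration; a real worker would use `fetch` and the Workers Cache API instead.

```typescript
// Sketch of serving static files from a storage bucket with an edge cache.
// BUCKET_ORIGIN and the Map-based cache are stand-ins for illustration.
const BUCKET_ORIGIN = "https://example-bucket.s3.amazonaws.com";

const edgeCache = new Map<string, string>();

function serveFromBucket(
  path: string,
  fetchOrigin: (url: string) => string, // injected so the sketch stays self-contained
): string {
  const cached = edgeCache.get(path);
  if (cached !== undefined) {
    return cached; // cache hit: served from the edge, no trip to the bucket
  }
  const body = fetchOrigin(BUCKET_ORIGIN + path); // cache miss: fetch from S3
  edgeCache.set(path, body); // future requests are served from the cache
  return body;
}
```

The first request for a file pays the round trip to the bucket; every request after that is answered from the cache at the edge.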
While this doesn’t require us to manage any servers, we still need two services (Cloudflare Workers and Amazon S3) to deploy our website. And each time we update the website, we need to purge the Cloudflare cache so our workers fetch the files from S3 again.
Could we drop Amazon S3 and serve our static website entirely from Cloudflare? Turns out this is pretty easy for text files. You can inline them as one big string inside your worker script. The only thing the worker needs to do is generate a response with that string and the correct headers. It’s blazingly fast since it only needs to return a simple string from a Cloudflare data center close to the user.
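A minimal sketch of the idea, assuming a single inlined page. The `INDEX_HTML` constant and the route handling are illustrative; in a real worker this logic would be registered via `addEventListener("fetch", ...)` and return a `Response` object.

```typescript
// Minimal sketch of a worker serving an inlined static page.
// INDEX_HTML and the routes are assumptions for illustration.
const INDEX_HTML = `<!doctype html>
<html><head><title>Krayx</title></head><body><p>Hello!</p></body></html>`;

interface StaticResponse {
  status: number;
  headers: Record<string, string>;
  body: string;
}

function handleRequest(path: string): StaticResponse {
  if (path === "/" || path === "/index.html") {
    return {
      status: 200,
      headers: { "Content-Type": "text/html; charset=utf-8" },
      body: INDEX_HTML, // the whole page is just a string inside the script
    };
  }
  return { status: 404, headers: { "Content-Type": "text/plain" }, body: "Not found" };
}
```

No origin fetch, no cache to purge: deploying a new version of the site is just deploying a new worker script.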
So far so good. A simple build script takes our PWA files, minifies them, and inlines the code inside our worker script.
Let’s introduce WebAssembly
Cloudflare Workers can also run WebAssembly alongside JavaScript. We could have used Go or Rust, but since the rest of our PWA is written in TypeScript, we chose AssemblyScript: a strict subset of TypeScript that compiles to WebAssembly.
Performance all the way
We didn’t stop there. If you have ever built a website, you know you should compress resources to save your users’ bandwidth and make your website load fast. We compressed all our text files with Brotli (a more efficient compression format than gzip) and received… binary files. We cannot inline those in our worker script as a string. But as you might have guessed, we can embed them in our WebAssembly file.
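The build-time side of this can be sketched with Node’s built-in `zlib` Brotli support. The HTML string and the header set are illustrative assumptions; the point is that we compress once at the highest quality level during the build, and the worker later serves the raw compressed bytes with a `Content-Encoding: br` header so the browser decompresses them.

```typescript
import { brotliCompressSync, brotliDecompressSync, constants } from "zlib";

// Build-time sketch (assumption: this runs in Node during our build,
// not inside the worker). Compress a text asset at the highest Brotli
// quality level.
const html = "<!doctype html><html><body><p>Hello from Krayx!</p></body></html>";

const compressed = brotliCompressSync(Buffer.from(html), {
  params: { [constants.BROTLI_PARAM_QUALITY]: constants.BROTLI_MAX_QUALITY },
});

// Headers the worker would attach when serving the precompressed bytes:
const headers = {
  "Content-Type": "text/html; charset=utf-8",
  "Content-Encoding": "br", // tells the browser the body is Brotli-compressed
};

// Sanity check: decompressing yields the original text again.
const roundTrip = brotliDecompressSync(compressed).toString("utf8");
```

Since compression happens once at build time rather than per request, we can afford the slowest, highest-quality Brotli setting.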
This is the final result: one WebAssembly file that contains all our static resources, compressed at the highest Brotli level. Our worker script loads these resources from WebAssembly and serves them directly to our users, which means very fast response times. Cloudflare routes each request to a worker close to the user, and that worker only needs to send back the requested resource, which is already in memory.
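The serving side reduces to a lookup into the module’s linear memory. The `AssetEntry` table (an offset and length per file) is an assumption for illustration; in the real setup, `memory` would be the `Uint8Array` view over the `WebAssembly.Memory` of the instantiated module.

```typescript
// Sketch: reading one embedded asset out of WebAssembly linear memory.
// The AssetEntry table is an assumption; a build step would emit it
// alongside the compiled module.
interface AssetEntry {
  offset: number; // where the asset's bytes start in linear memory
  length: number; // how many bytes it occupies
  contentType: string;
}

function readAsset(memory: Uint8Array, entry: AssetEntry): Uint8Array {
  // subarray returns a view over the same buffer, so no bytes are copied
  return memory.subarray(entry.offset, entry.offset + entry.length);
}
```

Because `subarray` is just a view, serving a file never copies the asset’s bytes: the worker hands the browser a slice of memory that was loaded when the module was instantiated.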
Ready to start managing your money?
Start using our app now to manage your money and get insights into your finances.