Serverless NodeJS for your pet project

Pavlik Kiselev
Published in JS Planet
8 min read · Mar 12, 2019

Once upon a time, when the only platform capable of executing JavaScript was a browser, the number of kilobytes that had to travel over the network to download a web page was reasonably small, and the sky was blue, there was a Startup. My Startup.

Lookslike.me

Lookslike.me is a service to find visually similar people in the Russian social network VK.com. The process is very straightforward: since it's already a social network where people store their photos, the only thing they need to do to start using the service is to choose a photo where their face is clearly visible. Then the service, using our algorithm, finds the most similar-looking person among the other users of the service. We did not dare to search the whole social network as others did.

Back in 2011, we were picked by the Skolkovo startup accelerator and TV Rain's show Starting Capital to compete among other startups (only in Russian) for a prize of 3 000 000 rubles (€75 000 back then). Unfortunately, we did not win, but hopefully this fact from our history shows how seriously we took the project.

At that moment I already had some development experience: Python and Django, Perl (not a joke) for half a year, PHP for a couple of years, JavaScript, and Linux as both my work and home operating system. I could set up a basic Ubuntu system with Apache2 as the server, MySQL for the database and PHP to run the code. But if I had chosen technologies I already knew well, it would have been boring, right?

I was the only developer there. I would even say now that this freedom spoiled me a bit, but at that moment it kept me motivated for a long time. You would agree that it's every developer's dream to pick the tools they want and use them straight away in a greenfield project.

My preferences were: Ruby on Rails, CoffeeScript, MySQL, Nginx, Puma, Elasticsearch, Redis. I liked Ruby as a language and Rails as a framework, I had some experience with Nginx, I had heard buzzwords about Elasticsearch and Redis, and I wanted it all.

The development process was quite smooth. Apart from my daily job, every evening I had a chance to learn something new and to play with technologies. After some time (8–10 months) the MVP was finally finished. I purchased one of the cheapest servers and started setting it up and getting it running.

Problem #1 — Provisioning

You know, it turned out that setting up a server is not the most straightforward process. You need to install different libraries and dependencies, match versions of different packages, read a vast amount of documentation and spend hours finding the correct configuration. To be able to run background tasks you need something like Sidekiq. Sidekiq required Redis 2.8 or higher. To be able to run a real-time chat you need WebSockets. For WebSockets + Nginx back in 2011 you needed a particular plugin. Moreover, that plugin could easily require Redis no newer than version 1 or something like that. You get the idea.

Problem #2 — Maintenance

All right, assume you made it to that point. Then one day your hosting provider notifies you that in two weeks, due to network maintenance, your server will be unavailable for two hours. Well, from time to time everyone needs to update something, not a big deal I suppose, right? Not really… After these two hours, you try to open your service, and you can't. Nginx is not running. You try to start Nginx, and it hangs immediately. Why? Because Redis is not running and Nginx's plugin for WebSockets cannot create a pool of incoming connections. So then you again spend hours finding out the root cause and how to fix everything.

A week later your server runs out of disk space because users have uploaded too many photos. And to increase the space, you need to restart the server, which is painful because of the previous paragraph.

Two weeks later your server runs out of memory, because there are many people online, which triggers the spawning of way too many RoR processes.

Three weeks later your SSL certificate expires.

Four weeks later… and the list goes on and on. Not because it's technically impossible to install and configure everything correctly so that it works like clockwork, but simply because I had no idea how to do it. In the end, I am a developer, not a system engineer. ¯\_(ツ)_/¯

Six years later

Six years later, I'm still a developer. Now I know better how to write code and know more tools and technologies, but I still don't know how to maintain a server.

New project — NodeJS, JavaScript, React, Nginx, Koa, Graph.cool (MySQL), Docker.

So my new project growity.me has precisely the same old problems, even with the help of Docker. Does it mean that the situation is hopeless and that the only way to build a project by yourself is to learn all this system administration stuff?

I believe the answer is “No.”

Serverless

What can we do to improve the situation? How can we care only about the code we write, rather than the configuration of the server? One of the things which can definitely help us with this is Serverless.

Even though the name suggests the absence of a server, that's not entirely true. A server is still there, but a developer does not interact with it, which means that all the problems described above are gone as well. To prove it, I will explain the architecture of the current project together with the time I believe we saved by going serverless. We selected Google as our vendor, but there are plenty of options like Amazon Web Services or Azure.

Google App Engine (GAE)

Services

First of all, we need a runtime to execute the code. We chose Google App Engine (GAE) with NodeJS support. You can think of it as a regular 24/7 server that executes your code upon HTTPS requests. It does not expose much: only the name of the service and the version, which reflects the number of times we have deployed our application. What is very important is that you can go directly to the logs in case of any error or even set up a debugger to check the state of the application at any given time. A month or so ago I deployed a broken version, and it took me less than ten minutes to spot the error and deploy the fix.

Google App Engine Services
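
To make this concrete, here is a minimal sketch of what such a service can look like, assuming the Koa framework from our stack; the file name and the greeting are invented for the illustration. App Engine starts the process and sends requests to the port it provides in the PORT environment variable.

// server.js: a minimal Koa app that App Engine can run as a service.
// App Engine provides the port via the PORT environment variable;
// 8080 is a common fallback for running locally.
const Koa = require('koa');

const app = new Koa();

app.use(async (ctx) => {
  // One catch-all route is enough for the sketch.
  ctx.body = 'Hello from Google App Engine!';
});

const port = process.env.PORT || 8080;
app.listen(port, () => {
  console.log(`Listening on port ${port}`);
});

Deploying it comes down to adding a small app.yaml that names the NodeJS runtime and running gcloud app deploy; there is no Nginx, certificate or process manager to configure.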

We can compare it with a usual server running Docker with Nginx (optional, but better for performance and for serving static files) and Node containers. It takes one to two days to set up both containers, a Let's Encrypt certificate and some kind of CI process (or at least the capability to deploy). However, that is only half of the service. The second half is debugging and logging. Let's assume we do not need to rotate the logs because they are small and it's enough to have them all in one file. To debug the app, we can log in to the Node container and use good old console.log, which is maybe enough for a pet project but needs to be improved to be production-ready.

Versions

Versions contain the deployed applications. Each time you deploy the app, a new version is created. The benefits versions provide are:

  • Traffic allocation: you can roll back in a matter of seconds by allocating the traffic to the previous version, and you can roll out gradually by allocating only a small percentage of traffic to the new version.
  • Traffic splitting can be done by cookie or IP, which is very helpful for creating a staging environment (a small sketch of checking which version served a request follows below).
  • Again, logs are available for all previous versions, together with the other information about each version. I know, seeing the deployment date is not really impressive, but with a pet project you don't usually have even that.
Google App Engine Versions
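
One detail that pairs nicely with traffic allocation: in the NodeJS runtime, App Engine exposes the current service and version to the process through the GAE_SERVICE and GAE_VERSION environment variables, so the application can tell which version handled a request. Below is a small sketch of a Koa middleware that does exactly that; the X-Served-By header name is my own choice, not something GAE requires.

// version-middleware.js: report which deployed version served the request.
// GAE_SERVICE and GAE_VERSION are set by the App Engine NodeJS runtime;
// the X-Served-By response header below is just an illustrative name.
module.exports = function versionMiddleware() {
  const service = process.env.GAE_SERVICE || 'local';
  const version = process.env.GAE_VERSION || 'dev';

  return async (ctx, next) => {
    await next();
    // Handy for verifying a gradual rollout: every response reveals its version.
    ctx.set('X-Served-By', `${service}/${version}`);
  };
};

With traffic split between two versions, a few requests against the site quickly show how they are being distributed.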

For comparison, we can take the simplest option: no gradual rollout, no A/B testing, just a deployed version, fast rollback and a staging environment. Unfortunately, here I can't provide the required time, since I've never configured a staging environment for Docker containers, but my gut feeling tells me it's something close to two to three days.

Instances (autoscaling)

Basically, instances are the number of processes running the application (service). The more requests there are to process, the bigger the number of instances, if the mode is set to dynamic. It's also possible to set a fixed number of instances with the resident mode. I personally never tried the resident mode because I'm fully satisfied with the dynamic one. Load balancing with this mode comes absolutely for free. When our load doubled with a new client, we did not notice it (not even on the bill!).
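
The only thing dynamic instances ask of you is to treat each process as disposable: anything kept in module-level variables lives and dies with a single instance. The sketch below illustrates the caveat with a naive in-process counter; the GAE_INSTANCE variable it returns is set by the runtime, while the counter itself is just an example of state that belongs in Redis, Memcache or a database instead.

// counter.js: why in-process state and autoscaling don't mix.
// Each instance is a separate Node process, so this counter is per-instance:
// two requests may land on two instances and see different values.
let hits = 0;

module.exports = async (ctx) => {
  hits += 1;
  ctx.body = {
    hits, // counts only the traffic of the instance that served this request
    instance: process.env.GAE_INSTANCE || 'local', // id of the serving instance
  };
};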

This I’m not able to compare in terms of time because I’ve never set up the load balancer and autoscaling. The only thing I’ve heard is that its kind of possible with Kubernetes and Docker

Conclusion

There are many more features in Google App Engine that I haven't tried yet: task queues to send transactional emails or perform computation-heavy cache invalidation and reporting, cron jobs for recurrent tasks, Memcache for in-memory caching, and so on.
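
As a taste of how lightweight these features are, a cron job in App Engine is just a schedule (described in a small cron.yaml) that calls one of your HTTP endpoints, and requests coming from the scheduler carry the X-Appengine-Cron header. A minimal sketch of such a handler, with the recurrent work itself left out:

// cron-handler.js: an endpoint meant to be hit by an App Engine cron job.
// Requests issued by the cron service include the "X-Appengine-Cron: true"
// header, which lets us reject calls coming from outside.
module.exports = async (ctx) => {
  if (ctx.get('X-Appengine-Cron') !== 'true') {
    ctx.status = 403; // not the scheduler, refuse to run the task
    ctx.body = 'Forbidden';
    return;
  }

  // The actual recurrent work would go here (cleanup, reports, emails, ...).
  ctx.body = 'Cleanup done';
};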

These features cover by far all the basic needs of a pretty wide variety of pet projects. With the ease of deployment and maintenance provided by GAE, you can focus on the application code instead of the problems that come with your own server.

Since the price is pretty low and there is a big free tier, you don't need to worry about billing for the first year or so.

If you want to learn by example how to set up a project with Google App Engine, you might be interested in the Serverless workshop run by JavaScript Planet, a group of software engineers, instructors, and web enthusiasts. Our goal is to help developers extend their knowledge and improve their software programming skills.
