How I Started Working on a Personal Project — Part 2

Keloysius Mak
5 min read · Mar 23, 2019


This is part 2 of my thought processes and learning points from the development of my personal project. If you’d like to read part 1, you can read it here: https://medium.com/@keloysiusmak/how-i-started-working-on-my-personal-project-b4a8b7ce13a7

In this post, I wanted to detail a bit more about why I made certain technological decisions.

I wanted to build, from scratch, a working emulation of the backend which I spent my summer working on at my internship. I was very intrigued to learn the inner workings, and why they chose to go with a microservices architecture with gRPC.

Why Microservices?

I asked the members of the engineering team why they decided to go with a microservices architecture so early on, and how it impacted the productivity and efficiency of a relatively small startup. We spoke at length about the benefits of microservices over monolithic applications: the architecture was extremely scalable, organised and compartmentalised, which was something the team really wanted moving forward. Kubernetes was employed to spin up more containers as and when required, allowing the backend to scale in response to surges in demand, usually during morning and evening peak hours. Keeping everything in small, modularised chunks made the codebase easy to maintain, which meant we could deploy code to production faster.
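The demand-driven scaling described above is typically configured in Kubernetes with a HorizontalPodAutoscaler. As a rough sketch (the deployment name and thresholds here are invented for illustration, not taken from the actual setup):

```yaml
# Hypothetical autoscaler for one backend service.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: schedules-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: schedules-service
  minReplicas: 2       # baseline capacity outside peak hours
  maxReplicas: 10      # ceiling for morning/evening surges
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods past 70% average CPU
```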

I had worked extensively with SQL from my days working with PHP, and was very familiar with thinking in structured tables, as opposed to working with documents as you would with Mongo. It represented a huge change in the way I had to think, but I knew right from the get-go that Mongo was the way I wanted to go. I appreciated the way things worked with Mongo and wanted to give it a shot. Why not? There was much to learn, and implementing Mongo opened me up to the possibility of using a full JavaScript stack with MERN/MEAN/MEVN, which was again very enticing.
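To illustrate the shift in thinking, here is a rough sketch (the schema is invented for illustration): where SQL would split a budget and its records across two tables joined on a foreign key, Mongo lets you embed the records directly inside the budget document.

```javascript
// SQL mindset: two tables, joined on budget_id.
//   budgets: (id, name)
//   records: (id, budget_id, label, amount)

// Document mindset: one self-contained budget document.
const budget = {
  name: "Japan Trip",
  records: [
    { label: "Flights", amount: 450 },
    { label: "Hostel", amount: 120 },
  ],
};

// Reading the data requires no join; just walk the document.
const total = budget.records.reduce((sum, r) => sum + r.amount, 0);
console.log(total); // 570
```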

With the decision made to go all in on JavaScript with a microservices architecture and MongoDB, I started to design the various components that made up the backend of the application: schedules, activities, budgets, records and accounts. It was a breeze to set up, and with some tinkering, a full-fledged backend was up and running. I used docker-compose to orchestrate my components during the development phase, with plans to move to Kubernetes later in development.
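The local orchestration looked roughly like this (a hedged sketch: the service names follow the components listed above, but the images, paths and ports are invented):

```yaml
# Hypothetical docker-compose.yml for local development.
version: "3"
services:
  mongo:
    image: mongo:4
    ports:
      - "27017:27017"
  schedules:
    build: ./services/schedules
    environment:
      - MONGO_URL=mongodb://mongo:27017/app
    depends_on:
      - mongo
  accounts:
    build: ./services/accounts
    environment:
      - MONGO_URL=mongodb://mongo:27017/app
    depends_on:
      - mongo
```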

Progress was quick and efficient for a good part of the development. CRUD was easy to set up, and functionality was easy to implement. I achieved everything I wanted to build, along with the authentication flow for the application. Everything looked good to go, and I started to look at deploying the backend on AWS. I chose AWS because of my prior experience with it, and because I wanted to spend more time building the product rather than tinkering with and worrying about hardware.
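A CRUD layer for a component like records boils down to four small operations. As a minimal, dependency-free sketch (in the real services these would talk to MongoDB; the function names here are invented):

```javascript
// In-memory stand-in for a "records" collection.
const records = new Map();
let nextId = 1;

function createRecord(data) {
  const id = String(nextId++);
  records.set(id, { id, ...data });
  return records.get(id);
}

function getRecord(id) {
  return records.get(id) || null;
}

function updateRecord(id, changes) {
  const existing = records.get(id);
  if (!existing) return null;
  const updated = { ...existing, ...changes };
  records.set(id, updated);
  return updated;
}

function deleteRecord(id) {
  return records.delete(id);
}

// Usage
const r = createRecord({ label: "Lunch", amount: 12 });
updateRecord(r.id, { amount: 15 });
console.log(getRecord(r.id).amount); // 15
deleteRecord(r.id);
```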

I was this close to deploying it on AWS, but getting it to work proved to be a challenge. Hours were spent trying to get containers to talk to each other, or even just to run smoothly. I attribute the trouble to my own lack of knowledge, but it nonetheless proved to be a major obstacle, and I was spending way too much time trying to get it to work. Way too much time.

Then, I came across AWS Lambda.

I had worked with serverless during my internship, albeit with Microsoft Azure Functions rather than AWS Lambda, and was fascinated by how serverless worked and how simple it looked: you write functions, you deploy them, problem solved. Apart from being ridiculously simple, it was extremely scalable as well, adapting to ever-changing demand without having to over-allocate resources, which was exactly what I wanted.

To illustrate how easy it is to get things up and going with serverless (and Vue), and how easy it is to scale, check out this post by @jbesw:

“The purpose of FaaS is to help developers take over the world rapidly build, deploy, and extend apps by letting other people (e.g. the Ops folk in DevOps) create and manage the complete server environment on which code can be run.” — PubNub

Rapidly take over the world — that sounds enticing. I dug deeper into Lambda to learn more about how it worked. At that point, the problems with deploying my containers were giving me sleepless nights, and I wanted a solution that would abstract these complications away from my development process.

To explore how simple Lambda was, I wrote a simple function in Node, deployed it and got it up and running. All in the space of 3 minutes.

“That was remarkably easy,” I mumbled as I went back to fiddling with my deployment situation. As things continued to fail on me, I started to entertain the idea: what if I moved everything to serverless? It sounded extremely far-fetched, but theoretically possible, so I went to dig up some thoughts on the plausibility of this solution.

Perhaps the most pertinent issue to discuss here is “cold starts”: serverless functions take a while to “warm up”, especially after not being invoked for some time. While it did cross my mind, I didn’t make too much of it, since there were ways to keep functions “warm”, and Lambda has been getting better at mitigating cold starts.

I decided to pursue this solution despite the issue of “cold starts”. Certainly, it was a major drawback in the case for serverless, but the benefits it gave me — the speed of development, the elimination of hardware troubles — far outweighed the performance hit. It made the best use of my time, allowing me to focus on building more useful features rather than chasing a minuscule performance gain. The application was not time-sensitive like a healthcare or logistics application, so I decided the switch was a worthwhile investment.

Making the switch was easy, and custom authorisers reinforced the idea of separation of concerns: each function tackled one concern, keeping things neat and tidy.

Deployment was a breeze: `sls deploy`.

This gave rise to an extremely efficient pipeline for getting code to production. Organising functions in a simple serverless.yml was easy, and deploying them to Lambda was just as easy. Managing secrets and environment variables was a piece of cake.
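For a sense of what that organisation looks like, here is a hedged serverless.yml sketch (the service name, paths and runtime are invented; the function names follow the records component from earlier):

```yaml
# Hypothetical serverless.yml fragment.
service: my-backend

provider:
  name: aws
  runtime: nodejs8.10          # a current Node runtime at the time
  environment:
    MONGO_URL: ${env:MONGO_URL}  # resolved at deploy time

functions:
  createRecord:
    handler: records/create.handler
    events:
      - http:
          path: records
          method: post
  getRecord:
    handler: records/get.handler
    events:
      - http:
          path: records/{id}
          method: get
```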

What followed were weeks of rapid development, building the foundations of new and improved functionality for the envisioned application, all without having to worry about infrastructure. It allowed me to work fast and smart.

Moving to serverless was one of the best decisions I made when designing and building my backend. It freed up a lot of precious time I would otherwise have spent maintaining infrastructure, allowing me to focus on building a useful product. It was also cheap to run: AWS’s free tier gave me ample room to work with. All in all, it was an extremely pleasant experience.

In the next post, I’ll go into the challenges faced when building the web application of this project.

Part 3: https://medium.com/@keloysiusmak/how-i-started-working-on-a-personal-project-part-3-b753d1372ff2
