Handling Multiple Projects: Node Package Dependencies At Getir
At Getir we favor a microservice architecture and use Node.js in several of our core projects. Because of that, we rely on many public and private node packages across many projects. We constantly evolve our architecture to become more service-oriented, and this mindset results in a fast-growing environment.
Here are some of the challenges we faced back in the day as we experienced a dramatic increase in the number of npm modules we used.
- One of our node modules, used across multiple projects, stopped working after a minor update (for our node versions). We then had to find every repository that used this module, and the version each one used, to decide whether we had to take any action.
- One of our private node modules had a major update that broke backward compatibility. This was a critical package, so we had to manually track down which repositories used it, and at which versions, in order to update the relevant ones.
We admit we used spreadsheets and other manual documents to work around these challenges back in the day. This was cumbersome and time-consuming. The worst part was that whenever a new challenge came up, we had to recreate an updated spreadsheet. Boring.
An example spreadsheet we created back in the olden days.
You may think that this is unsustainable. It was. Node module versions change from time to time, and new projects are added frequently. As new features brought new packages into package.json, we could easily forget to update the spreadsheet. Also, spreadsheets just aren’t very tech.
What did we build?
We built a simple tool that automates this manual process: a web page in our internal dashboard that retrieves the current projects and their current package versions. We can see our projects (rows), our node modules (columns) and their corresponding versions. It also has a search feature that lets us search by repo or package name.
A simple web interface in our internal dashboard (green ones indicate the latest version of the package)
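To give an idea of the data behind that table, the payload might look something like the shape below; the repo names, packages and versions here are made up for illustration.

```javascript
// Hypothetical dashboard payload: one entry per project (the rows) and the
// latest known version of each package (used to color the matching cells green).
const example = {
  latestVersions: { express: '4.18.2', lodash: '4.17.21' },
  projects: [
    { repo: 'service-a', dependencies: { express: '4.18.2', lodash: '4.17.20' } },
    { repo: 'service-b', dependencies: { express: '4.17.1' } },
  ],
};
```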
How did we do it?
First of all, we use Bitbucket as our main git tool and rely on its API to achieve this result. The main process goes like this:
- Our administration panel sends a request to API Gateway.
- API Gateway checks the authorization header and calls an AWS Lambda function.
- The AWS Lambda function first checks the Redis cache; if a result is found in the cache, it returns it directly.
- If nothing is cached, the Lambda function gets all the package.json files from the Bitbucket API and parses the results.
- The Lambda function prepares a list of unique package names from those results and gets the latest version of each package from the npm registry API.
- The Lambda function formats the results, stores them in the Redis cache with an expiry duration, and returns them (the sketch after this list shows the Bitbucket and npm calls).
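To make those Bitbucket and npm steps concrete, here is a minimal sketch of the lookups. It assumes the Bitbucket Cloud 2.0 REST API with app-password authentication, axios as the HTTP client, and placeholder environment variable names; it is not our exact production code.

```javascript
const axios = require('axios');

const BB = 'https://api.bitbucket.org/2.0';
const auth = {
  username: process.env.BITBUCKET_USER,         // placeholder credentials
  password: process.env.BITBUCKET_APP_PASSWORD,
};

// Walk the paginated repository list of a workspace.
async function listRepos(workspace) {
  const repos = [];
  let url = `${BB}/repositories/${workspace}?pagelen=100`;
  while (url) {
    const { data } = await axios.get(url, { auth });
    repos.push(...data.values);
    url = data.next; // Bitbucket returns a `next` link while pages remain
  }
  return repos;
}

// Fetch and parse a repository's package.json from its main branch.
async function getPackageJson(workspace, repo) {
  const branch = repo.mainbranch ? repo.mainbranch.name : 'master';
  const url = `${BB}/repositories/${workspace}/${repo.slug}/src/${branch}/package.json`;
  const { data } = await axios.get(url, { auth });
  return data; // axios parses the JSON body by default
}

// Ask the public npm registry for a package's latest version.
async function getLatestVersion(name) {
  const { data } = await axios.get(
    `https://registry.npmjs.org/${encodeURIComponent(name)}/latest`
  );
  return data.version;
}
```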
Why use Lambda and API Gateway?
We chose a serverless architecture because we prefer not to manage and maintain any servers. This project helps us organize our packages without the extra effort of having to manage a server. Furthermore, we didn’t need a service that runs all the time, so we decided to use Lambda and API Gateway.
Why use Redis?
As you may know, API Gateway has a 30-second response limit. We have lots of repositories, and the Bitbucket API has a strict rate limit. If we sent too many requests from our dashboard, showing the right package versions could become a problem. Thus, Redis: we cache the computed result and serve it from the cache until it expires.
Code example (Lambda function)
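The sketch below is a simplified version of such a handler rather than our exact production code; it assumes ioredis for the Redis cache, the helper functions from the earlier sketch, and an illustrative one-hour TTL.

```javascript
const Redis = require('ioredis');

const redis = new Redis(process.env.REDIS_URL); // placeholder connection string
const CACHE_KEY = 'package-versions';
const CACHE_TTL_SECONDS = 60 * 60; // illustrative one-hour expiry

exports.handler = async () => {
  // 1. Serve straight from Redis when a cached result exists.
  const cached = await redis.get(CACHE_KEY);
  if (cached) {
    return { statusCode: 200, body: cached };
  }

  // 2. Otherwise collect every package.json through the Bitbucket API.
  const workspace = process.env.BITBUCKET_WORKSPACE;
  const repos = await listRepos(workspace);
  const projects = [];
  for (const repo of repos) {
    try {
      const pkg = await getPackageJson(workspace, repo);
      projects.push({ repo: repo.slug, dependencies: pkg.dependencies || {} });
    } catch (err) {
      // Repositories without a package.json are simply skipped.
    }
  }

  // 3. Resolve the latest version of every unique package name.
  const names = new Set();
  projects.forEach((p) => Object.keys(p.dependencies).forEach((n) => names.add(n)));
  const latestVersions = {};
  for (const name of names) {
    latestVersions[name] = await getLatestVersion(name);
  }

  // 4. Cache the formatted result with an expiry, then return it.
  const body = JSON.stringify({ projects, latestVersions });
  await redis.set(CACHE_KEY, body, 'EX', CACHE_TTL_SECONDS);
  return { statusCode: 200, body };
};
```

Because the whole result is cached under a single key, the full Bitbucket and npm round trips happen at most once per TTL window, which keeps responses comfortably inside API Gateway’s 30-second limit.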
Sometimes basic solutions can save a lot of time and effort. We like to come up with basic solutions to optimize our processes and reduce our mistakes.