Managing a large-scale Node.js project for rapid customer-driven development
At StashAway, we aim to provide an unparalleled user experience for our customers — from on-boarding to the investment process. We strive to avoid typical pitfalls one encounters when interacting with complex financial applications such as cumbersome user interfaces or slow responses. Moreover, our Node.js applications interact with many other microservices in our ecosystem and perform non-trivial tasks. It’s easy to create a monolithic system that becomes unmaintainable after a while.
To solve this, we employed different design patterns that help us create a flexible and maintainable system when architecting our Node.js application. We often conduct user interviews to elicit feedback from our customers. This feedback is then translated into features that help us improve our product. Our application is tested extensively to give us confidence that any change introduced does not break the system.
Before we begin, we need to define what a large-scale application is and contrast that with common practices. To us, a large-scale application goes beyond lines of code; it is an application that interacts with multiple sub-systems, performs non-trivial tasks (e.g. compute-intensive calculations), delivers highly performant, low-latency responses, and scales gracefully.
In code examples or starter kits found online for Node.js projects, one would typically find projects structured in the traditional MVC architectural pattern, with models, views, and controllers kept in separate directories.
We found this approach to be limiting, as most of the business logic would reside in controllers, which may involve multiple imports from different models (e.g. Customer, Goal, Deposit Plans). Furthermore, we had other systems to interact with, which added to the complexity of the business logic. As we were only interested in building an API, we could disregard the view component in our code (we built an awesome frontend application in React).
We began development from a clean slate with an ambitious target to launch within six months. This involved a trading backend (written in Scala) and a Node.js/Express API that connected the backend with our front end, which consisted of a web application and (later) mobile applications. The API had to be flexible enough to allow changes to be introduced as business requirements shifted.
We decided early on to adopt an architecture that resembles a service-oriented architecture, while maintaining certain elements of the MVC approach. Business logic is separated into services such as authentication, customer requests, or goal management. One principle that guided us was that each functional component should be able to be spun off into its own service if required, for scalability and maintainability; this prevented situations where refactoring would become difficult due to ever-growing dependencies. Each service is self-contained and abstracted from the others.
From the diagram above, one can see how the Node.js application became increasingly connected with other applications:
- Trading backend
- Authentication service
- Voucher service
- Admin system
- Web application
- Mobile applications
- Mongo database
The Node.js application handles all customer related functions:
- Sign up
- On-boarding process
- Goals/Investment management
As we can see, complexity grows quickly, and managing a codebase with so many moving parts can get unwieldy. Requirements were also added over time on how each component should adhere to regulatory or business needs, and we could not afford to take too long to release new features. In the next section, we will explain some design patterns that we employed to better manage our processes.
We employed patterns such as the chain of responsibility when managing multi-step processes. For example, when creating an account, multiple services are involved, each performing its own step. First, the User service needs to create the User object, followed by the creation of the user on the Authentication service. After this, the Email Service can send a welcome email containing an email verification token.
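The account-creation steps above can be sketched as a chain of handlers. This is a minimal, illustrative sketch (the class and context names are hypothetical, not our actual code): each handler performs its step and passes the accumulated context along to the next.

```javascript
// Minimal chain-of-responsibility sketch. Each handler does its own step,
// then delegates to the next handler in the chain.
class Handler {
  setNext(handler) {
    this.next = handler;
    return handler; // enables chaining: a.setNext(b).setNext(c)
  }
  async handle(context) {
    if (this.next) return this.next.handle(context);
    return context;
  }
}

class CreateUserHandler extends Handler {
  async handle(context) {
    context.user = { id: 'u1', email: context.email }; // e.g. save the User object to Mongo
    return super.handle(context);
  }
}

class CreateAuthUserHandler extends Handler {
  async handle(context) {
    context.authUserCreated = true; // e.g. call the Authentication service
    return super.handle(context);
  }
}

class SendWelcomeEmailHandler extends Handler {
  async handle(context) {
    context.emailSent = true; // e.g. send a welcome email with a verification token
    return super.handle(context);
  }
}

// Wire the handlers in the order the sign-up steps must run.
const signupChain = new CreateUserHandler();
signupChain.setNext(new CreateAuthUserHandler()).setNext(new SendWelcomeEmailHandler());
```

Calling `signupChain.handle({ email })` then runs each step in order and resolves with the fully populated context; adding a new step means adding one handler, not touching the others.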
One design pattern that we leveraged successfully is the Observer pattern. Node.js has the EventEmitter class, which allows non-blocking code to be executed, letting tasks run in the background while the user continues interacting with the frontend.
We also relied on Mongoose features such as virtual and static methods, which allowed us to encapsulate business logic within models. For example, in the User model, we have a static method that generates a unique token required for email verification; this method can be reused later in our Authentication service whenever there is a need to generate new tokens.
As there are many asynchronous calls within our code, we needed a way to better manage how our callbacks were handled. At the beginning, we relied on callbacks, and quickly ended up in callback hell. Then we thought we could enter the promised land with promises, but it was basically the same problem masquerading in shinier clothes. We finally landed on async/await, which allowed better control flow, as we could wait for multiple promises to resolve before proceeding. We also relied on bluebird to further supercharge promise handling, e.g. to map-reduce an array of promises.
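As a sketch of what this buys us (the service calls below are hypothetical stand-ins), async/await plus `Promise.all` lets independent calls run concurrently while the code still reads top to bottom:

```javascript
// Stand-ins for real service calls (e.g. to the trading backend and Mongo).
const fetchGoal = async (goalId) => ({ id: goalId, name: 'Retirement' });
const fetchDeposits = async (goalId) => [100, 250];

async function getGoalSummary(goalId) {
  // Both calls are issued concurrently; we proceed only once both resolve.
  const [goal, deposits] = await Promise.all([
    fetchGoal(goalId),
    fetchDeposits(goalId),
  ]);
  const total = deposits.reduce((sum, amount) => sum + amount, 0);
  return { ...goal, total };
}
```

For larger collections, bluebird's `Promise.map` adds a concurrency limit on top of this, which native promises do not offer out of the box.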
Putting It All Together
In this section, we explain how we utilised the techniques mentioned earlier through some code examples. We will go through the typical user sign up process, which may seem to be a trivial task at first.
We have an Auth Service that handles all user inputs related to authentication — register, login, change password etc. Here’s a stripped down version of it:
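The original snippet is not reproduced here, so the following is a plausible stripped-down shape rather than our exact code; the injected `userModel` and `authClient` dependencies are illustrative.

```javascript
// Hedged sketch of a stripped-down Auth Service: dependencies are injected so
// the service stays self-contained and easy to test.
class AuthService {
  constructor({ userModel, authClient }) {
    this.userModel = userModel;   // e.g. the Mongoose User model
    this.authClient = authClient; // client for the Authentication service
  }

  async register({ email, password, agreedToTerms }) {
    if (!agreedToTerms) throw new Error('Terms and conditions must be accepted');
    const user = await this.userModel.create({ email });
    await this.authClient.createUser({ userId: user.id, password });
    return user;
  }

  async login({ email, password }) { /* ... */ }

  async changePassword({ userId, oldPassword, newPassword }) { /* ... */ }
}
```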
At the beginning, we thought user sign up was simple; we just needed to collect the user's email address and password, along with a boolean value for agreeing to our terms and conditions. On the frontend, after a successful call to the register endpoint, the user would be redirected immediately to a form requesting customer profile data.
However, these are some events that should happen concurrently after a user submits:
- Creation of Auth user (managed by the Authentication service)
- Creation of Customer object in Mongo and also a corresponding record on Trading service
- Send welcome email
- Sign up to MailChimp
- Application of voucher (if any)
Any one of these processes could fail, and our user would not be able to move on to the next step. For example, trying to register an email like email@example.com (a common error) on MailChimp would throw an error. If we had put all the calls above in the register method, any point of failure would stop the user's progress. After receiving an error message, the user would most likely try to register again, leading to a "duplicate user found" error and further frustration. We solve this by firing an event with the EventEmitter and letting observers perform their necessary tasks.
As you can see from above, observers help to break complex calls into multiple non-blocking calls and prevent unnecessary dependencies. We are free to add as many events as we desire, keeping dependent code in the same block.
Lastly, we will talk about our testing approach. Our requirements shift fluidly due to changing needs, and this can cause the number of bugs to rise as new code is constantly being added to the system. We need to ensure that our code is robust to handle any change introduced and not break what was previously working. How many times have you encountered situations when after adding a new feature, something down the chain stopped working and was only discovered in production?
At the onset, we strove to maintain high code coverage (>95%). We used mocha for our unit tests and Istanbul for coverage. By writing tests from the start, it was easy to add new tests whenever we wrote code. Even though we were in a rush to get things shipped, we never allowed that to compromise our quality. Testing itself became a serious endeavour, and we learnt many lessons along the way. We will probably cover how our tests are written in more detail in a later post. We relied on testing libraries such as supertest and sinon, plus custom mock providers, to make tests a breeze to write. While unit tests are good, we also have a separate integration test project (written in Python) that ensures all of our systems work as intended.
By putting our customers at the forefront of our development process, we are able to iterate rapidly despite resource constraints. By relying on well-known design patterns, we avoided reinventing the wheel and could deliver faster. High test coverage gave us the confidence to introduce new (breaking) changes. Feedback from our customers can then be quickly incorporated, allowing us to provide better service and letting them know that we have their interests at the core of our product.
Want to learn about how we tackle testing or other topics? Feel free to drop us a comment or email!
We are constantly on the lookout for great tech talent to join our engineering team — visit our website to learn more and feel free to reach out to us!