The Ninja Tech Journey

Ivan Kenneth Wang
Published in Ninja Van Tech
Aug 20, 2021

Innovation never sleeps.

As I reflect on my 5 wonderful years in Ninja Van’s Engineering team, I thought of writing an article to share about Ninja Van’s Tech Journey.

Back in 2014, Ninja Van powered up its first set of on-premise production servers and started serving our initial group of shippers in Singapore. The setup was a mere 8-core server with a shared broadband internet connection. It ran the initial version of our application, written in PHP (Laravel), a less resource-intensive language than most alternatives. We also ran our deployments manually, copying packaged files over with SCP and starting them via SSH. Imagine how tedious that was. Or was it, really? We started off with a single monolith MVP, like most start-ups do.

Figure 1: Ninja Van’s on-premise server.

The following year, in 2015, we decided to shift to Java (Play), a type-safe language better suited to enterprise-grade applications. The move to a compiled language brought extra build and packaging steps, and with them more manual, repetitive deployment work. We felt it was the right time to introduce some automation, so we wrote Ansible playbooks to handle these tasks, as shown in Figure 2.

Figure 2: Automation powered by Ansible Playbook
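
To give a sense of what this looks like, here is a minimal, hypothetical playbook for deploying a packaged application. The host group, file paths, and service name are placeholders for illustration, not our actual configuration.

    # Hypothetical playbook: push a packaged build to the app servers and restart the service.
    # Host group, paths, and service name are illustrative placeholders.
    - hosts: app_servers
      become: true
      tasks:
        - name: Copy the packaged application to the server
          copy:
            src: dist/app.zip
            dest: /opt/app/app.zip

        - name: Unpack the new release
          unarchive:
            src: /opt/app/app.zip
            dest: /opt/app/releases/
            remote_src: true

        - name: Restart the application service
          service:
            name: app
            state: restarted

Compared to copying files by SCP and restarting processes over SSH by hand, the same playbook can be run against any number of hosts with a single command.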

Fun fact: Ansible still powers our CI/CD stack under the hood to this day.

As our business grew, we needed to build more product features and make our tech more robust and resilient. We grew our team to address this, but faced new technical challenges as more and more engineers came onboard.

Fun fact: Today, we have over 150 members in Ninja Van Tech.

The first challenge stemmed from the MVP being a single code base. We found ourselves running into code conflicts more frequently and felt the detrimental effect on our velocity. We then decided to re-architect our monolithic app into microservices, as illustrated in Figure 3.

Figure 3: Re-architecting our monolith into microservices

Spoiler alert: this was not a silver bullet. Each microservice requires some base CPU and memory, and Java, being a memory-intensive language, meant we needed even more hardware resources. Fortunately, it was around this time that we migrated to the Cloud, so we could scale up our resources easily. With just a few clicks, we could spin up new virtual machines (VMs). We deployed the new applications in smaller VMs, while keeping the monolith in a bigger one, as shown in Figure 4.

Figure 4: Microservices deployed in individual VMs

We found it troublesome to manage a huge fleet of VMs as we rolled out more microservices. In 2016, we adopted containerization, specifically Docker, to reduce the number of VMs we had to run. We ran multiple containers on CoreOS VMs with the help of the now-deprecated Fleetd. Ninja Van is proud to be one of the first few companies in Southeast Asia to run 100% Dockerized in production.

We gladly accept that there is no perfect system and that servers are bound to crash one day. As such, we design our systems for high availability: every microservice runs at least 2 container replicas on different VMs, as shown in Figure 5.

Figure 5: Running services with high availability
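
Expressed in the Kubernetes terms we use today (more on that below), this replica rule looks roughly like the sketch below. The service name, labels, and image are hypothetical; the manifest asks for two replicas and uses pod anti-affinity to keep them on different nodes (VMs).

    # Sketch: two replicas of a hypothetical order-service, forced onto different nodes.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: order-service
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: order-service
      template:
        metadata:
          labels:
            app: order-service
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchLabels:
                      app: order-service
                  topologyKey: kubernetes.io/hostname   # spread replicas across VMs
          containers:
            - name: order-service
              image: registry.example.com/order-service:1.0.0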

We do our best to keep our tech infrastructure up to date. Today, we run our fleet of services on Kubernetes. It addresses many well-known challenges in managing containers, such as (but not limited to):

  1. Scheduling containers onto suitable VMs
  2. Service discovery
  3. Graceful deployments that avoid dropping requests
  4. Autoscaling containers and VMs based on usage and demand

We take advantage of all of these.
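
As a rough sketch of points 3 and 4, a rolling-update strategy plus a HorizontalPodAutoscaler might look like the manifests below (again with hypothetical names, paths, and numbers). Node-level (VM) autoscaling is typically handled by a separate cluster autoscaler that adds or removes VMs as pods demand.

    # Graceful deployments: replace pods one at a time, only once the new pod is ready.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: order-service
    spec:
      replicas: 2
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0   # never take a pod down before its replacement is serving
          maxSurge: 1
      selector:
        matchLabels:
          app: order-service
      template:
        metadata:
          labels:
            app: order-service
        spec:
          containers:
            - name: order-service
              image: registry.example.com/order-service:1.0.0
              readinessProbe:          # traffic is routed only after this check passes
                httpGet:
                  path: /health
                  port: 8080
    ---
    # Autoscaling: scale the pod count between 2 and 10 based on CPU utilization.
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: order-service
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: order-service
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70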

Fun fact: Ninja Van CI is built on top of Kubernetes too! Read more here.

P.S. Ninja Van is hiring! Interested in shaping the future of Ninja Van’s tech? Apply here.

P.P.S. Thanks to my beloved wife, Kat Guevara, for helping me edit this piece.
